5/17/2013

Spring LDAP Template VS UnboundID (ApacheDS 1.5.7 VS ApacheDS 2.0.0)

For a basic implementation and comparison of these two, refer to:

CRUD for LDAP (Spring LDAP Template, ApacheDS)

CRUD for LDAP (UnboundID JDK, ApacheDS)


To compare efficiency, I wrote several test cases and collected the following data.

Environment: Win7, 8 GB memory, 3.3 GHz CPU

Efficiency comparison of ApacheDS 1.5.7 & 2.0.0 (using UnboundID)

Operation                           | 1.5.7                            | 2.0.0
create 1 record                     | 20421 ms                         | 130 ms
create first 50 records             | 1245862 ms (growth not tested)   | 3510 ms (+~700 ms per additional 50)
search 1 record from 200 records    | 50 ms                            | 45 ms
search 1 record from 5000 records   | 720 ms                           | 700 ms
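For reference, a minimal sketch of the kind of timing test behind these numbers, written against the UnboundID LDAP SDK. This is my reconstruction, not the original test code; the host, port, credentials, and DNs are placeholders to adjust for your own ApacheDS instance.

import com.unboundid.ldap.sdk.*;

public class UnboundIdTimingTest {
    public static void main(String[] args) throws LDAPException {
        // Placeholder connection details -- adjust for your ApacheDS instance.
        LDAPConnection conn = new LDAPConnection("localhost", 10389,
                "uid=admin,ou=system", "secret");

        // Time the creation of the first 50 entries.
        long start = System.currentTimeMillis();
        for (int i = 0; i < 50; i++) {
            conn.add("uid=user" + i + ",ou=users,dc=example,dc=com",
                    new Attribute("objectClass",
                            "top", "person", "organizationalPerson", "inetOrgPerson"),
                    new Attribute("uid", "user" + i),
                    new Attribute("cn", "User " + i),
                    new Attribute("sn", "Test"));
        }
        System.out.println("create 50: "
                + (System.currentTimeMillis() - start) + " ms");

        // Time a single search by uid.
        start = System.currentTimeMillis();
        SearchResult result = conn.search("ou=users,dc=example,dc=com",
                SearchScope.SUB, "(uid=user25)");
        System.out.println("search 1: " + (System.currentTimeMillis() - start)
                + " ms, entries=" + result.getEntryCount());

        conn.close();
    }
}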
Efficiency comparison of Spring LDAP Template & UnboundID (using ApacheDS 2.0.0 as the LDAP server)

Operation                                      | Spring                        | UnboundID
create the 1st record                          | 93 ms                         | 118 ms
create the 5001st record                       | 333 ms                        | 221 ms
create first 50 records                        | 7129 ms                       | 6443 ms
search 1 record from 200 records               | 31 ms                         | 45 ms
search 1 record from 5000 records              | 730-770 ms                    | 650-700 ms
1000 threads each search 1 record from 5000    | total ~360000 ms, avg ~360 ms | total ~360000 ms, avg ~360 ms
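The Spring side of the test looks roughly like the sketch below, using LdapTemplate over an LdapContextSource. Again this is a reconstruction under assumptions: the URL, base DN, credentials, and entry attributes are placeholders.

import java.util.List;
import javax.naming.NamingException;
import javax.naming.directory.Attributes;
import javax.naming.directory.BasicAttribute;
import javax.naming.directory.BasicAttributes;
import org.springframework.ldap.core.AttributesMapper;
import org.springframework.ldap.core.LdapTemplate;
import org.springframework.ldap.core.support.LdapContextSource;

public class SpringLdapTimingTest {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- adjust for your ApacheDS instance.
        LdapContextSource ctx = new LdapContextSource();
        ctx.setUrl("ldap://localhost:10389");
        ctx.setBase("dc=example,dc=com");
        ctx.setUserDn("uid=admin,ou=system");
        ctx.setPassword("secret");
        ctx.afterPropertiesSet();
        LdapTemplate template = new LdapTemplate(ctx);

        // Build the attributes for one inetOrgPerson entry.
        BasicAttributes attrs = new BasicAttributes();
        BasicAttribute oc = new BasicAttribute("objectClass");
        oc.add("top");
        oc.add("person");
        oc.add("organizationalPerson");
        oc.add("inetOrgPerson");
        attrs.put(oc);
        attrs.put("uid", "user0");
        attrs.put("cn", "User 0");
        attrs.put("sn", "Test");

        // Time the creation of one entry (DN is relative to the base).
        long start = System.currentTimeMillis();
        template.bind("uid=user0,ou=users", null, attrs);
        System.out.println("create 1: "
                + (System.currentTimeMillis() - start) + " ms");

        // Time a single search by uid.
        start = System.currentTimeMillis();
        List<String> names = template.search("ou=users", "(uid=user0)",
                new AttributesMapper<String>() {
                    @Override
                    public String mapFromAttributes(Attributes a)
                            throws NamingException {
                        return (String) a.get("cn").get();
                    }
                });
        System.out.println("search 1: " + (System.currentTimeMillis() - start)
                + " ms, hits=" + names.size());
    }
}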

For ApacheDS 2.0.0, with:
set HEAP=-Xms2048m -Xmx4096m
and a connection pool of 200 initial / 800 maximum connections, the server supports at most about 1200 threads creating and searching at the same time. As the number of records in the LDAP server grows, the supported thread count decreases. (A sketch of the concurrent search test follows.)
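A rough sketch of how the 1000-thread search test might be driven, using an UnboundID connection pool whose sizes mirror the 200/800 settings above. Connection details and the search filter are placeholders, and the 1000-thread count comes from the table above.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPConnectionPool;
import com.unboundid.ldap.sdk.SearchScope;

public class ConcurrentSearchTest {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- adjust for your ApacheDS instance.
        LDAPConnection seed = new LDAPConnection("localhost", 10389,
                "uid=admin,ou=system", "secret");
        // 200 initial / 800 max connections, mirroring the settings above.
        LDAPConnectionPool pool = new LDAPConnectionPool(seed, 200, 800);

        ExecutorService executor = Executors.newFixedThreadPool(1000);
        AtomicLong totalLatency = new AtomicLong();
        long wallStart = System.currentTimeMillis();
        for (int i = 0; i < 1000; i++) {
            executor.submit(() -> {
                try {
                    // Each task times one search against the 5000-record set.
                    long start = System.currentTimeMillis();
                    pool.search("ou=users,dc=example,dc=com",
                            SearchScope.SUB, "(uid=user2500)");
                    totalLatency.addAndGet(System.currentTimeMillis() - start);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        executor.shutdown();
        executor.awaitTermination(10, TimeUnit.MINUTES);

        System.out.println("wall time: "
                + (System.currentTimeMillis() - wallStart) + " ms");
        System.out.println("total search cost: " + totalLatency.get() + " ms");
        System.out.println("average per search: "
                + (totalLatency.get() / 1000) + " ms");
        pool.close();
    }
}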

From the comparison we can see that the create efficiency of ApacheDS 1.5.7 and 2.0.0 is not on the same level: v2.0.0 is much faster. Retrieval efficiency does not differ much; my guess is that both use the classic search logic. ^^

Comparing Spring LDAP Template with UnboundID, I found that when the data set is small, Spring may perform a little better, but as the data size grows, Spring's efficiency drops and UnboundID comes out ahead.

The testing also demonstrates that LDAP is definitely not designed for frequent creates and updates:
--Create efficiency is poor, especially in ApacheDS 1.5.7.
--If you create and search at the same time, the server can easily crash once the thread count rises and the data set gets large (heap space exception: not enough memory).
--After such a crash, you cannot access the data anymore and cannot recover it even by restarting the server; an invalid EOF exception may also be thrown.



