Hi, I'm running load tests against my production environment and I've found 
what looks like a concurrency issue. I'm running a query that takes (on 
average) 900 ms to return its results. What I've seen is: if I fire 
concurrent requests from JMeter faster than one every 900 ms (20 users with 
a 20 s ramp-up), response times keep growing in a kind of snowball effect.

Final times are (columns, per JMeter's results table: sample #, start time, 
thread name, label, elapsed ms, status, bytes, latency ms, connect time ms):

 #   Start time     Thread name         Label   Elapsed  Status   Bytes   Latency  Connect
 1   15:39:14.641   Thread Group 1-1    Query       981  Success  31020       948       31
 2   15:39:15.041   Thread Group 1-2    Query      1451  Success  31020      1412       30
 3   15:39:15.446   Thread Group 1-3    Query      1943  Success  31020      1868       30
 4   15:39:15.843   Thread Group 1-4    Query      2367  Success  31020      2333       29
 5   15:39:16.244   Thread Group 1-5    Query      2842  Success  31020      2805       56
 6   15:39:16.648   Thread Group 1-6    Query      3385  Success  31020      3334       30
 7   15:39:17.044   Thread Group 1-7    Query      3890  Success  31020      3857       30
 8   15:39:17.444   Thread Group 1-8    Query      4423  Success  31020      4375       29
 9   15:39:17.851   Thread Group 1-9    Query      4952  Success  31020      4908       31
10   15:39:18.246   Thread Group 1-10   Query      5447  Success  31020      5409       31
11   15:39:18.650   Thread Group 1-11   Query      5952  Success  31020      5918       30
12   15:39:19.047   Thread Group 1-12   Query      6459  Success  31020      6423       32
13   15:39:19.452   Thread Group 1-13   Query      6994  Success  31020      6952       30
14   15:39:19.854   Thread Group 1-14   Query      7494  Success  31020      7459       30
15   15:39:20.253   Thread Group 1-15   Query      7976  Success  31020      7933       31
16   15:39:20.649   Thread Group 1-16   Query      8476  Success  31020      8424       30
17   15:39:21.055   Thread Group 1-17   Query      9001  Success  31020      8959       30
18   15:39:21.451   Thread Group 1-18   Query      9549  Success  31020      9489       30
19   15:39:21.850   Thread Group 1-19   Query     10057  Success  31020     10020       75
20   15:39:22.255   Thread Group 1-20   Query     10616  Success  31020     10562       28
21   15:39:22.657   Thread Group 1-21   Query     11071  Success  31020     11038       29
22   15:39:23.053   Thread Group 1-22   Query     11620  Success  31020     11583       45
23   15:39:23.453   Thread Group 1-23   Query     12125  Success  31020     12080       30
24   15:39:23.854   Thread Group 1-24   Query     12622  Success  31020     12577       32
25   15:39:24.259   Thread Group 1-25   Query     13141  Success  31020     13099       31
26   15:39:24.661   Thread Group 1-26   Query     13623  Success  31020     13586       29
27   15:39:25.056   Thread Group 1-27   Query     14182  Success  31020     14113       30
28   15:39:25.457   Thread Group 1-28   Query     14769  Success  31020     14616       31
29   15:39:25.855   Thread Group 1-29   Query     15158  Success  31020     15100       34
30   15:39:26.258   Thread Group 1-30   Query     15617  Success  31020     15580       48
31   15:39:26.662   Thread Group 1-31   Query     16095  Success  31020     16063       31
32   15:39:27.058   Thread Group 1-32   Query     16639  Success  31020     16556       32
33   15:39:27.457   Thread Group 1-33   Query     17105  Success  31020     17059       34
34   15:39:27.860   Thread Group 1-34   Query     17614  Success  31020     17573       30
35   15:39:28.262   Thread Group 1-35   Query     18113  Success  31020     18076       30
36   15:39:28.658   Thread Group 1-36   Query     18631  Success  31020     18597       38
37   15:39:29.058   Thread Group 1-37   Query     19088  Success  31020     19045       39
38   15:39:29.458   Thread Group 1-38   Query     19577  Success  31020     19541       31
39   15:39:29.861   Thread Group 1-39   Query     20081  Success  31020     20042       30
40   15:39:30.264   Thread Group 1-40   Query     20611  Success  31020     20556       30
41   15:39:30.660   Thread Group 1-41   Query     21082  Success  31020     21046       30
42   15:39:31.065   Thread Group 1-42   Query     21595  Success  31020     21546       32
43   15:39:31.460   Thread Group 1-43   Query     22080  Success  31020     22033       31
44   15:39:31.862   Thread Group 1-44   Query     22597  Success  31020     22553       33
45   15:39:32.262   Thread Group 1-45   Query     23075  Success  31020     23041      129
46   15:39:32.662   Thread Group 1-46   Query     23586  Success  31020     23549       55
47   15:39:33.062   Thread Group 1-47   Query     24100  Success  31020     24061       31
48   15:39:33.463   Thread Group 1-48   Query     24546  Success  31020     24503       30
49   15:39:33.865   Thread Group 1-49   Query     25053  Success  31020     24991       29
50   15:39:34.265   Thread Group 1-50   Query     25512  Success  31020     25470       30

My guess is that this is a configuration issue.

I've gone through http://neo4j.com/developer/in-production/ to set up my 
environment properly. It's an Amazon AWS m3.large instance:

   - 7.5 GB RAM
   - 2-core processor
   - 4 GB heap size
   - 320 threads max
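Concretely, these are the settings I touched (file and property names as in 
Neo4j 2.x; the values are just my current setup, not a recommendation):

```properties
# conf/neo4j-wrapper.conf -- fixed 4 GB heap
wrapper.java.initmemory=4096
wrapper.java.maxmemory=4096

# conf/neo4j-server.properties -- HTTP worker thread cap
org.neo4j.server.webserver.maxthreads=320
```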

Query is:

START station=node:LocationIndex('withinDistance:[51.51766116938769,-0.09774009180783261,8.0]'),
      user=node(9999)
MATCH (tt:TimeTable)<--(line:Line)<-[h:HasTrainLine]-(station)
WHERE tt.from <= 2100 AND tt.to >= 2100
  AND (tt)-[:validOn]->(:Weekday {dow: 5})
WITH station, tt, line, user
OPTIONAL MATCH (user)-[r:isFav]->(station)
OPTIONAL MATCH (user)-[r2:Uses]->(line)
WITH station, r, sum(r2.timesUsed) AS times, tt
RETURN station, times, NOT (r IS NULL) AS fav
ORDER BY times DESC, station.name ASC
SKIP 0 LIMIT 16
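For reference, a parameterized version of the same query (Neo4j 2.x {param} 
syntax; just a sketch -- I haven't confirmed that a parameter is allowed 
inside the pattern predicate):

```cypher
// Same query with {param} placeholders so the execution plan can be
// reused across requests -- an experiment, not a confirmed fix.
START station=node:LocationIndex({spatialQuery}),  // e.g. the withinDistance string
      user=node({userId})
MATCH (tt:TimeTable)<--(line:Line)<-[h:HasTrainLine]-(station)
WHERE tt.from <= {time} AND tt.to >= {time}
  AND (tt)-[:validOn]->(:Weekday {dow: {dow}})
WITH station, tt, line, user
OPTIONAL MATCH (user)-[r:isFav]->(station)
OPTIONAL MATCH (user)-[r2:Uses]->(line)
WITH station, r, sum(r2.timesUsed) AS times, tt
RETURN station, times, NOT (r IS NULL) AS fav
ORDER BY times DESC, station.name ASC
SKIP {skip} LIMIT {limit}
```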

I took a thread dump from the JVM while running the load tests ->
https://gist.github.com/aabutaleb/1a2cb6d1067bdff96be8

I found something interesting at line 961 
<https://gist.github.com/aabutaleb/1a2cb6d1067bdff96be8#file-threaddumps-log-L961>, 
which shows Spatial acquiring a write lock. This doesn't make sense to me, 
since there are no writes in this query...

SO link: http://stackoverflow.com/q/29196410/1687972

Thanks in advance!

-- 
You received this message because you are subscribed to the Google Groups 
"Neo4j" group.