Sounds interesting..... but how do you handle write contention on the 
memcache data structure from multiple F1s serving client-side score 
submissions?

Also, I thought memcache had a size limit?  I store a lot more than just 
username + score (including a full stream of all actions the user takes in 
the UI, to prevent cheating).
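
To make the contention question concrete: I'm assuming a shared memcache 
entry would need compare-and-set plus a retry loop, something like the 
sketch below (low-level Java MemcacheService API; the class and key names 
are made up for illustration, not code from either app):

    import com.google.appengine.api.memcache.IdentifiableValue;
    import com.google.appengine.api.memcache.MemcacheService;
    import com.google.appengine.api.memcache.MemcacheServiceFactory;
    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;

    public class ScoreSubmitter {
        // Hypothetical payload; mine is much bigger than username + score.
        public static class Score implements Serializable {
            public String user;
            public int points;
        }

        // Append a score to a shared memcache entry with compare-and-set,
        // retrying when another F1 instance wrote in between.
        @SuppressWarnings("unchecked")
        public static void submitScore(Score newScore) {
            MemcacheService cache = MemcacheServiceFactory.getMemcacheService();
            for (int attempt = 0; attempt < 10; attempt++) {
                IdentifiableValue current = cache.getIdentifiable("scores");
                List<Score> scores = (current == null)
                        ? new ArrayList<Score>()
                        : new ArrayList<Score>((List<Score>) current.getValue());
                scores.add(newScore);
                boolean stored = (current == null)
                        ? cache.put("scores", scores, null,
                              MemcacheService.SetPolicy.ADD_ONLY_IF_NOT_PRESENT)
                        : cache.putIfUntouched("scores", current, scores);
                if (stored) {
                    return;   // write committed without being clobbered
                }
                // Lost the race to another instance -- re-read and retry.
            }
        }
    }

Is that roughly what you're doing, or does each user/score live under its 
own key?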

-R

On Friday, August 3, 2012 2:05:34 PM UTC-4, hyperflame wrote:
>
> Richard, 
>
> I did some testing overnight, and I have some good news, and some bad 
> news. 
>
> Good news: I can give you a system that stores 1,000 users and scores 
> in roughly 1 second. In less than a second, I can pull out all 1,000 
> scores, sort the scores numerically, and print out the score list. 
> Bad news: It depends on memcache. 
>
> Details: Last night, I wrote an application to generate 1000 users and 
> 1000 scores randomly, and store them in memcache. On average, this 
> operation takes roughly 1 - 1.3 seconds, although I suspect the 
> slowness is due to the random number generator, not the memcache. I'll 
> test this more. 
>
> Then a task is enqueued to call another F1 instance in three seconds. 
>
> The next instance pulls out all 1,000 scores, sorts them using a 
> TreeMap, and prints out the sorted data into GAE logging in less than 
> a second. Then, the memcache is cleared for the next iteration of the 
> test, so we don't get old data. 
>
> A cron job repeats this test every 2 minutes. 
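> 
> In case it helps, the read-and-sort step is essentially the following 
> (simplified sketch -- the key scheme, value types, and class names are 
> illustrative, and ties on score would need a composite key in a real 
> leaderboard): 
> 
>     import com.google.appengine.api.memcache.MemcacheService;
>     import com.google.appengine.api.memcache.MemcacheServiceFactory;
>     import java.util.ArrayList;
>     import java.util.Collection;
>     import java.util.Map;
>     import java.util.TreeMap;
>     import java.util.logging.Logger;
> 
>     public class ScoreSorter {
>         private static final Logger log =
>                 Logger.getLogger(ScoreSorter.class.getName());
> 
>         // Pull the test entries out of memcache, sort by score, log them.
>         public static void dumpSortedScores() {
>             MemcacheService cache = MemcacheServiceFactory.getMemcacheService();
> 
>             // Illustrative key scheme for the 1,000 test users.
>             Collection<Object> keys = new ArrayList<Object>();
>             for (int i = 0; i < 1000; i++) {
>                 keys.add("user-" + i);
>             }
>             Map<Object, Object> entries = cache.getAll(keys);
> 
>             // TreeMap keeps the entries ordered by score.
>             TreeMap<Integer, Object> byScore = new TreeMap<Integer, Object>();
>             for (Map.Entry<Object, Object> e : entries.entrySet()) {
>                 byScore.put((Integer) e.getValue(), e.getKey());
>             }
>             for (Map.Entry<Integer, Object> e : byScore.descendingMap().entrySet()) {
>                 log.info(e.getValue() + " -> " + e.getKey());  // user -> score
>             }
> 
>             cache.deleteAll(keys);   // clear for the next iteration of the test
>         }
>     }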
>
> Here is my memcache viewer screen: http://i.imgur.com/oypAm.png . As 
> you can see, this service ran overnight, and didn't drop a single 
> user/score. Over 330,000 scores were posted and accessed in total. Is 
> this good enough performance for your game? 
>
>
>
> On Aug 3, 11:19 am, Richard <steven...@gmail.com> wrote: 
> > Thanks Alex, VERY much appreciated, since I can't test this myself 
> > without buying a shell account somewhere. 
> > 
> > Luckily, the backend crashed due to being unable to reuse the connection 
> > for the delete.  So I added some exception handling :) 
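> > 
> > Nothing fancy -- roughly this pattern (sketch only; the table, class, and 
> > connection-URL names below are placeholders, not the real code):
> > 
> >     import java.sql.Connection;
> >     import java.sql.DriverManager;
> >     import java.sql.SQLException;
> >     import java.sql.Statement;
> > 
> >     public class ResultCleaner {
> >         // Placeholder URL; driver registration (Class.forName) omitted.
> >         private static final String DB_URL = "jdbc:google:rdbms://instance/db";
> >         private static Connection conn;
> > 
> >         // Run the periodic DELETE, reopening the connection if the cached
> >         // one has gone stale since the last round.
> >         public static synchronized void clearOldResults() throws SQLException {
> >             try {
> >                 runDelete();
> >             } catch (SQLException stale) {
> >                 conn = null;      // drop the dead connection
> >                 runDelete();      // one retry on a fresh connection
> >             }
> >         }
> > 
> >         private static void runDelete() throws SQLException {
> >             if (conn == null || conn.isClosed()) {
> >                 conn = DriverManager.getConnection(DB_URL);
> >             }
> >             Statement st = conn.createStatement();
> >             try {
> >                 st.executeUpdate("DELETE FROM loadtest_results");
> >             } finally {
> >                 st.close();
> >             }
> >         }
> >     }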
> > 
> > Can I ask some more people to try this link: 
> > http://sven-anagramhero.appspot.com/client/loadtest 
> > 
> > Please ping it once from a web browser just before you hit it.  This 
> > will ensure the DB is up :) 
> > 
> > I would like to see results for loads of n > 1000 with c >= 500. 
> > 
> > The server clears out results every 3 minutes (synchronized to NTP 
> > time) on the minute boundary, so please try to avoid doing it exactly 
> > on that boundary (in which case the results will be spread and it makes 
> > it more difficult to ensure we did not 'lose' any). 
> > 
> > NOTE:  It seems we can store at least 1k users within 10 seconds ..... I 
> > really don't like the 6.8 second response (I would prefer 300 msec)..... 
> > viable?  y/n? 
> > 
> > Thanks ! 
> > 
> > -R 
> > 
> > On Friday, August 3, 2012 10:02:09 AM UTC-4, alex wrote: 
> > 
> > > From Rackspace (London): 
> > 
> > > ab -n 1000 -c 200  http://sven-anagramhero.appspot.com/client/loadtest 
> > > This is ApacheBench, Version 2.3 <$Revision: 655654 $> 
> > 
> > > Server Software:        Google 
> > > Server Hostname:        sven-anagramhero.appspot.com 
> > > Server Port:            80 
> > 
> > > Document Path:          /client/loadtest 
> > > Document Length:        2 bytes 
> > 
> > > Concurrency Level:      200 
> > > Time taken for tests:   15.694 seconds 
> > > Complete requests:      1000 
> > > Failed requests:        0 
> > > Write errors:           0 
> > > Total transferred:      171000 bytes 
> > > HTML transferred:       2000 bytes 
> > > Requests per second:    63.72 [#/sec] (mean) 
> > > Time per request:       3138.712 [ms] (mean) 
> > > Time per request:       15.694 [ms] (mean, across all concurrent requests) 
> > > Transfer rate:          10.64 [Kbytes/sec] received 
> > 
> > > Connection Times (ms) 
> > >               min  mean[+/-sd] median   max 
> > > Connect:        8    8   1.5      8      22 
> > > Processing:   139 2827 1197.6   2910    8487 
> > > Waiting:      139 2827 1197.6   2910    8487 
> > > Total:        147 2835 1197.6   2918    8494 
> > 
> > > Percentage of the requests served within a certain time (ms) 
> > >   50%   2918 
> > >   66%   3341 
> > >   75%   3620 
> > >   80%   3874 
> > >   90%   4257 
> > >   95%   4700 
> > >   98%   5900 
> > >   99%   6131 
> > >  100%   8494 (longest request) 
> > 
> > > ab -n 1000 -c 500  http://sven-anagramhero.appspot.com/client/loadtest 
> > > This is ApacheBench, Version 2.3 <$Revision: 655654 $> 
> > 
> > > Server Software:        Google 
> > > Server Hostname:        sven-anagramhero.appspot.com 
> > > Server Port:            80 
> > 
> > > Document Path:          /client/loadtest 
> > > Document Length:        2 bytes 
> > 
> > > Concurrency Level:      500 
> > > Time taken for tests:   6.879 seconds 
> > > Complete requests:      1000 
> > > Failed requests:        0 
> > > Write errors:           0 
> > > Total transferred:      171000 bytes 
> > > HTML transferred:       2000 bytes 
> > > Requests per second:    145.37 [#/sec] (mean) 
> > > Time per request:       3439.463 [ms] (mean) 
> > > Time per request:       6.879 [ms] (mean, across all concurrent requests) 
> > > Transfer rate:          24.28 [Kbytes/sec] received 
> > 
> > > Connection Times (ms) 
> > >               min  mean[+/-sd] median   max 
> > > Connect:        8   17   9.0     11      28 
> > > Processing:   144 2210 1535.1   1885    6831 
> > > Waiting:      144 2210 1535.2   1885    6831 
> > > Total:        152 2227 1539.7   1894    6853 
> > 
> > > Percentage of the requests served within a certain time (ms) 
> > >   50%   1894 
> > >   66%   2410 
> > >   75%   3100 
> > >   80%   3225 
> > >   90%   4492 
> > >   95%   5628 
> > >   98%   6418 
> > >   99%   6484 
> > >  100%   6853 (longest request) 
> > 
> > > On Friday, August 3, 2012 3:04:55 PM UTC+2, Richard wrote: 
> > 
> > >> Connection pooling might be a good idea.  Since there are people in 
> > >> every game round and each round is 3 minutes, the SQL db will always 
> > >> be up.  I did try it, but I think my connection from home was limited. 
> > 
> > >> RE: SQL solution:   Can some of you with LOTS of bandwidth (from a 
> > >> *nix machine), please AB the following URL: 
> > 
> > >>      http://sven-anagramhero.appspot.com/client/loadtest 
> > 
> > >> Try at least 1000 connections with 250-500 concurrent and report back 
> > >> here please. 
> > 
> > >> WRT costs:  DB reads/writes are around $3/day, whereas 10 B1 backends 
> > >> would be almost $20/day. 
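> > >> (For reference, assuming the standard B1 rate of $0.08/hour: 10 
> > >> instances x $0.08/hour x 24 hours = $19.20/day, which is where the 
> > >> ~$20 figure comes from.) 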
> > 
> > >> In addition, the B1 solution does not scale.  Let's say the app 
> > >> suddenly gets a lot of new users.  Now I need to update the backend. 
> > >> Additionally, peak load is approximately 4x the lowest value.  So, for 
> > >> part of each day, I need to keep enough instances to handle the peak 
> > >> load even though they are doing minimal work.  This is not scaling 
> > >> automatically.  I always need to pay the maximum of whatever is needed 
> > >> to handle peak load.... or else update the backends every few hours to 
> > >> add/remove B1s.  Not exactly fulfilling the automatic scaling promise! 
> > 
> > >> On Friday, August 3, 2012 3:49:24 AM UTC-4, Mauricio Aristizabal wrote: 
> > 
> > >>> Takashi, is there some more detailed information on why Google 
> > >>> doesn't encourage using a connection pool?  Is it simply to encourage 
> > >>> allowing the db instance to wind down instead of being kept alive 
> > >>> only by pool connection health checks?  If so I'm sure it could be 
> > >>> configured to avoid this. 
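> > 
> > >>> For example, the kind of configuration I have in mind looks roughly 
> > >>> like this (sketch with commons-dbcp 1.x; instance and database names 
> > >>> are made up).  Connections are only validated when a request borrows 
> > >>> one, so nothing pings the db in the background: 
> > 
> > >>>     import java.sql.Connection;
> > >>>     import java.sql.SQLException;
> > >>>     import org.apache.commons.dbcp.BasicDataSource;
> > 
> > >>>     public class Pool {
> > >>>         private static final BasicDataSource DS = new BasicDataSource();
> > >>>         static {
> > >>>             DS.setDriverClassName(
> > >>>                     "com.google.appengine.api.rdbms.AppEngineDriver");
> > >>>             DS.setUrl("jdbc:google:rdbms://my-instance/mydb");
> > >>>             DS.setMaxActive(4);          // a handful of pooled connections
> > >>>             DS.setTestOnBorrow(true);    // validate only when borrowed
> > >>>             DS.setValidationQuery("SELECT 1");
> > >>>             // No eviction/keep-alive thread is configured, so idle
> > >>>             // connections are not touched in the background.
> > >>>         }
> > 
> > >>>         public static Connection get() throws SQLException {
> > >>>             return DS.getConnection();
> > >>>         }
> > >>>     }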
> > 
> > >>> It does seem to me that it could reduce Richard's costs drastically, 
> > >>> by 2/3 just on the writes to the db by his own numbers. 
> > 
> > >>> I've been using pooling without issue for several months now, though 
> > >>> admittedly with very little traffic so far, so if you think this is 
> > >>> going to get me in trouble later, I'm very eager to hear why. 
> > 
> > >>> On Fri, Aug 3, 2012 at 12:28 AM, Richard Watson <richard.wat...@gmail.com> wrote: 
> > 
> > >>>> What are the performance characteristics of connecting to Google 
> > >>>> Compute Engine?  Maybe slap the in-memory app onto that. 
> > 