Hey guys, below is the transcript from the January 19th, 2011 office hours. I wanted to thank Robert Kluin and Mike Wesner for helping out!
--
Ikai Lan
Developer Programs Engineer, Google App Engine
Blogger: http://googleappengine.blogspot.com
Reddit: http://www.reddit.com/r/appengine
Twitter: http://twitter.com/app_engine
--------------------
Status #appengineX [Google App Engine http://code.google.com/appengine/ | App Engine news and articles http://reddit.com/r/appengine | Developer chat 1st Weds 7PM PST, 3rd Weds 9AM PST]
[09:11] <mbw> oh, ok... robertk you are off the hook.. he just came back
[09:11] == Nickname is already in use: ikai_google
[09:11] <ikai_google_> ugh
[09:12] <mbw> ikai_google_: I nominated robertk to run the office hours when you left
[09:12] <robertk> starship, when you see the timeouts and instances with 0qps does your average latency go way up too?
[09:12] <ikai_google_> who says OS X never crashes
[09:12] <King946> is there any sort of etiquette when it comes to asking questions in here?
[09:12] <starship> Apple
[09:12] <ikai_google_> anyway I'm logged into webchat with a personal computer, couldn't identify as myself
[09:12] <mbw> King946: what is your question?
[09:12] == slynch_google [[email protected]] has joined #appengine
[09:12] <robertk> ikai_google_: ha ha, steve j said flash causes 9x% of osx crashes, right?
[09:12] <mbw> hey now... lets not get into flash bashing
[09:12] <robertk> oh yeah, sorry mbw :P
[09:12] <mbw> poorly written flash, yes
[09:13] <ikai_google_> yeah I've used OS X enough to know macs crash, and they crash hard. I can't even boot up my work laptop right now
[09:13] <starship> Another question for you: I know different requests get handled at different datacenters (not sure if that is the right term). How does this affect different instances?
[09:13] == wesley_google [d8ef2d04@gateway/web/freenode/ip.216.239.45.4] has joined #appengine
[09:14] <robertk> damn ikai_google_, i thought i did a lot of hairy stuff to my macs (especially when i was doing a lot of C work), but i've never truly _killed_ one ;)
[09:14] <mbw> hi wesley_google
[09:15] <matija_j> ikai_google: What is the meaning of throttle_code=1, throttle_code=2 and throttle_code=4 in the request log?
[09:15] <robertk> ^^ would also like that answered :)
[09:17] <robertk> googlers, starship asked why we see instances with 0qps, i sometimes see over 50% with 0qps
[09:17] <King946> mbw: I'm looking for an easy way to delete my datastore... I was trying to overwrite an old GAE app with a new one, and it seems to be creating errors as i explained in the forum... http://code.google.com/appengine/forum/python-forum.html?place=topic%2Fgoogle-appengine-python%2FZxSwQfuXAJw%2Fdiscussion.... someone suggested that I delete my old datastores, and I tried doing that in the dashboard, but it doesn't seem to delete e
[09:19] <mbw> King946: have you tried using the new builtins datastore_admin: on option?
[09:19] <mbw> King946: it would allow you to wipe your datastore
[09:19] <robertk> how much data do you have King946?
[09:19] == matija_j [[email protected]] has quit [Ping timeout: 240 seconds]
[09:20] == matija_j [[email protected]] has joined #appengine
[09:20] == dac_ [[email protected]] has joined #appengine
[09:20] <mbw> King946: your groups post leads me to believe you are having code issues though, not datastore... are you just trying to deploy a new version or trying to wipe data?
[09:20] <ikai_google_> starship: 0 qps instances happen when those instances don't get requests. they should eventually be terminated
[09:20] <ristoh> hey, I wanted to ask about the changes in the time limit for urlfetch. can my tasks now request external resources within a 10 minute time limit? or did I misunderstand the release?
[09:21] <ikai_google_> starship: do you see patterns of relatively spiky requests or is it steady?
[09:21] == xenru [[email protected]] has joined #appengine
[09:21] <ikai_google_> ristoh: yes, you should have a 10 minute time limit
[09:21] <ikai_google_> ristoh: but you have to do it in a task queue or cron job because those requests have 10 minute deadlines
[09:21] <robertk> do you have to explicitly specify the higher deadline?
[09:21] <ristoh> ikai_google: but it can be for an external resource? (in my case the Twitter API)
[09:21] <King946> mbw: i was trying to overwrite an old app with a completely different new app
[09:22] <mbw> King946: all you have to do is deploy with the same version, and that should take care of it
[09:22] <mbw> King946: or you could change the version, deploy a new one, and delete the old version to get a fresh set of logs going
[09:22] <ikai_google_> ristoh: it should be yes
[09:22] <ikai_google_> ristoh: but I don't know if it'll work for Twitter's streaming API
[09:22] <ristoh> that's a great feature
[09:22] <ikai_google_> ristoh: because the urlfetch is buffered outside your app and the response may end up exceeding the buffer of 32mb
[09:23] <enigmus> Is there only one memcache instance across all datacenters (geographical locations)? In other words, is the memcache coherent across all instances?
[09:23] <ikai_google_> enigmus: no
[09:23] <ikai_google_> enigmus: in the event of a data center failover, memcache will be flushed
[09:23] <King946> mbw: i've been deploying with the same version, and that hasn't worked... but I will try changing the version right now
[09:23] <mbw> King946: the datastore is shared among all versions of your app though, so deploying or changing versions will not do anything to the datastore
[09:23] == matija_je [[email protected]] has joined #appengine
[09:23] <mbw> King946: if you need to start fresh with the datastore, then that is where wiping the data comes into play
[09:24] <ristoh> ikai_google: I see, is there an exception I could catch when the buffer is exceeded on urlfetch?
[09:24] <enigmus> ikai_google_: Right, but if something is *in* memcache, are we guaranteed that it is the only value for that key across the whole system?
[09:24] == ksuFreeflier [[email protected]] has quit [Quit: ksuFreeflier]
[09:24] == matija_j [[email protected]] has quit [Ping timeout: 240 seconds]
[09:24] <starship> no it is pretty steady
[09:24] <matija_je> sorry..., has anybody answered my question about throttle_code?
[09:24] <ikai_google_> throttle_code 2 refers to a request waiting in the pending queue for 10 seconds
[09:24] <ikai_google_> 10 seconds is the pending timeout
[09:24] <ikai_google_> I don't know off the top of my head what the other throttle codes refer to
[09:25] <robertk> is there any chance of getting those documented somewhere?
[09:25] == ksuFreeflier [[email protected]] has joined #appengine
[09:25] <King946> mbw: oh ok... when i try to delete my entities in the Datastore Admin (that is the only thing that the Datastore Admin seems to let me do) it says "Delete job with id somenumber kicked off"... i'm assuming that's bad?
[09:25] <matija_je> ikai_google, if every request of mine has 200 ms latency but consumes 1700 cpu_api_ms, will my app ever be penalized in any way?
[09:25] <ikai_google_> throttle_code 1 looks like API timeouts
[09:26] <ikai_google_> matija_je: there are some subtleties to the relationship between response latency and CPU consumption, but in that case no
[09:26] <mbw> King946: the delete job is a map reduce, it's a job you kick off and it runs in the background
[09:27] <ikai_google_> matija_je: I believe the instance scheduler penalizes applications that take long to respond (lots of request ms) but use very little CPU ms
[09:27] <ikai_google_> robertk: yes, possibly, it's on my list
[09:27] <robertk> ok, thanks
[09:28] <King946> mbw: how can i tell when the job is done?
[09:28] <enigmus> Any updates on App Engine for Business? In particular regarding custom domain SSL?
[09:28] <matija_je> ikai_google, my only problem with cpu ms is index creation... so only cpu api ms... tnx for info...
[09:28] <starship> ikai_google_: I looked again and it looks like they were puts, not gets, which makes more sense
[09:28] == sv0 [[email protected]] has quit [Quit: exit]
[09:28] <mbw> King946: I have not used that particular feature, so I am not sure, but I would assume that admin page would tell you
[09:28] <mbw> King946: how much data did you have in your datastore?
[09:28] <ikai_google_> enigmus: not sure what you are asking
[09:29] <robertk> King946: the job id should be a link you can click to get some details
[09:29] <SegFault|Laptop> Is there an officially supported MapReduce framework for GAE?
[09:29] == dac_ [[email protected]] has quit [Ping timeout: 276 seconds]
[09:29] <Wooble> ikai_google_: "Tell me when I can do SSL on my own domain."
[09:29] <ikai_google_> enigmus: if something is in memcache in data center A and we fail over to data center B, that value will be unpopulated in B
[09:29] <ikai_google_> Wooble: soon?
[09:29] <robertk> King946: how much data did you have? hundreds or thousands of entities?
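[Editor's note: the Datastore Admin delete that mbw describes runs as a background mapreduce job over batches of entities, which is why you get a job id back instead of an immediate result. As a rough illustration only, here is a toy in-memory sketch of a batched background delete; the dict "datastore", the tuple keys, and `batch_size` are all stand-ins, not App Engine API calls.]

```python
# Toy sketch of a batched background delete job, in the spirit of the
# Datastore Admin mapreduce. A plain dict stands in for the datastore;
# each loop iteration plays the role of one mapper task over a batch.

def run_delete_job(datastore, kind, batch_size=2):
    """Delete all entities of `kind` in batches, yielding progress."""
    deleted = 0
    while True:
        # Grab at most one batch of matching keys per pass.
        batch = [k for k in datastore if k[0] == kind][:batch_size]
        if not batch:
            break
        for key in batch:
            del datastore[key]
            deleted += 1
        yield deleted  # progress count, like the job status page

store = {("Greeting", i): {"content": "hi %d" % i} for i in range(5)}
store[("Author", 1)] = {"name": "a"}
progress = list(run_delete_job(store, "Greeting"))  # [2, 4, 5]
```

The point of the batching is that no single request has to touch every entity, so the job survives request deadlines; you check the job's status page (or, here, the yielded progress) to know when it is done.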
[09:29] <King946> mbw: very little, it was just a couple of posts in a guestbook tutorial
[09:29] <ikai_google_> SegFault: http://code.google.com/p/appengine-mapreduce/
[09:29] <enigmus> ikai_google_: ok. but is it possible to have different values in datacenter A and datacenter B?
[09:29] <ikai_google_> SegFault: doesn't do reduce yet
[09:30] <ikai_google_> enigmus: yes, yes it is possible. I think we flush whenever we fail over but I don't know if we make any guarantee of that
[09:30] <mbw> King946: then it should take almost no time to delete, you could probably just delete them with the datastore viewer
[09:30] <robertk> King946: then the delete is most likely done. go to the datastore viewer and see if anything shows up
[09:30] <ikai_google_> enigmus: in any maintenance scenario we always flush
[09:31] == andystevko [[email protected]] has joined #appengine
[09:31] <robertk> ikai_google_: i think enigmus might think you simultaneously serve an app from multiple datacenters
[09:32] <enigmus> robertk: oh, you don't? I thought that.
[09:32] <ikai_google_> robertk: we don't. if your app runs on the HR datastore and a catastrophic event happens to the primary serving data center, you fail over immediately
[09:32] <ikai_google_> enigmus: if you run on the HR datastore, your datastore is written to a majority of data centers
[09:32] <robertk> ikai_google_: that's what i thought... just wanted to clarify that
[09:32] <ikai_google_> every write
[09:32] <ikai_google_> yeah, multiple data center serving is hard. Facebook wrote a zillion hacks so they could do it
[09:33] == randym [[email protected]] has quit [Remote host closed the connection]
[09:33] <mbw> facebook doesn't really give a rats ass about consistency either though
[09:33] <ikai_google_> they make an attempt http://www.facebook.com/note.php?note_id=23844338919
[09:34] <ikai_google_> anyway if anyone isn't aware already... you should be on the High Replication datastore if possible. With new apps I HIGHLY recommend you do it
[09:34] <enigmus> ikai_google_: OK, then my question does not matter anymore. With our usage, that's enough to make it work for what we're doing. Thanks.
[09:34] <mbw> ikai_google_: neat link.. ill have to read this
[09:34] <ikai_google_> enigmus: yes, it matters in the edge case of catastrophic failure of the second data center
[09:35] <ikai_google_> enigmus: but unless you are on high replication, you will have other outages
[09:35] == slynch_google [[email protected]] has quit [Quit: slynch_google]
[09:35] <enigmus> ikai_google_: ok
[09:36] <ristoh> ikai_google: I see there's an upcoming downtime on February 7th, is that going to be the only one for February?
[09:36] <robertk> i think he is/was concerned with his app serving from two different centers, each with their own memcache, which would lead to request a seeing value 1, req b seeing val 2, req 3 value 1, etc...
[09:36] <ikai_google_> ristoh: yes that is the only one for february
[09:36] <ristoh> I'm going to be running some campaigns around valentine's day and just hoping things are not hit by a downtime
[09:36] <starship> ikai_google_: I have in the past had issues being affected based on where the client was. My understanding is that they got sent to a different location. Is that incorrect?
[09:37] <ikai_google_> starship: what kind of issues?
[09:38] <robertk> ikai_google_: any idea what the (approx) DS CPU / second quota limit is, or rather can you share that info? :)
[09:38] <robertk> (non HR)
[09:38] <starship> http://code.google.com/p/googleappengine/issues/detail?id=4162
[09:38] <ikai_google_> robertk: there's a datastore/second quota? I wasn't aware there was
[09:39] <starship> that is fixed but that is where I get my understanding from
[09:39] <robertk> yeah, i've hit it... several times :)
[09:39] <ikai_google_> starship: oh I see
[09:39] == loke [[email protected]] has joined #appengine
[09:40] <ikai_google_> starship: So the way App Engine works is that when you make a request to GAE
[09:40] <ikai_google_> it actually goes to a Google front end data center first
[09:40] <ikai_google_> one that is relatively local to you
[09:40] <ikai_google_> that front end data center then routes the request to the primary GAE serving data center
[09:40] == ryan___ [[email protected]] has joined #appengine
[09:40] <ikai_google_> but it does so on Google's network. with all the peering in place, it's faster than if you were to make the request directly to the data center yourself
[09:41] <ikai_google_> so the fix Sean is talking about had to go out in the front end server code
[09:41] * starship nods
[09:41] <ikai_google_> but it doesn't roll out all at once, and didn't roll out instantaneously to all front end servers
[09:42] <ikai_google_> robertk: I don't know the quota. I'll have to ask. it seems strange since we have people doing thousands on thousands of writes per second
[09:42] <ikai_google_> robertk: you probably just need a quota bump
[09:42] <mbw> speaking of front end servers... we still want application/x-amf to support gzip
[09:43] <enigmus> Is it possible to get access to custom domain SSL support yet?
[09:43] <ikai_google_> no, it's not ready yet
[09:43] <starship> ok I assume that our apps do not run on the front end servers
[09:43] <robertk> ikai_google_: i was hitting a limit at around 46,200,000 api cpu ms / minute (est), or about 330K writes / minute
[09:43] <ikai_google_> I know that at some point relatively soon we will be accepting trusted testers
[09:44] <ikai_google_> and I'll do my usually thing where I come on IRC and ask
[09:44] <ikai_google_> *usual
[09:44] <ikai_google_> robertk: oh I think you might just need a quota bump
[09:45] == slynch_google [~slynch@nat/google/x-cjwoukkxlmzlrzru] has joined #appengine
[09:45] <mbw> Hi Steven
[09:45] == cying [[email protected]] has quit [Quit: cying]
[09:46] == dac_ [[email protected]] has joined #appengine
[09:47] <enigmus> ikai_google_: I'd like to be on that list :) We're releasing soon, everything we do is SSL, so it would be nice to have our customers only ever see custom domain https URLs.
[09:47] <matija_je> ikai_google, how about posting the irc chat log on the groups after every chat session, as was common the year before?
[09:48] <ikai_google_> enigmus: it may not release in a reasonable timeline for you, though, so please don't depend on it
[09:48] <enigmus> ikai_google_: ok
[09:48] <ikai_google_> matija_je: yeah, I can do that. I wasn't aware people read them
[09:49] <ikai_google_> matija_je: it's kind of hard to gauge what is useful and not useful for people
[09:49] <onest0ne> ikai_google_: how will custom domain SSL be implemented? will you be using SNI, or handing out separate IP addresses per app?
[09:49] <ikai_google_> onest0ne: I can't answer any questions about custom SSL right now
[09:49] <ikai_google_> I know we considered multiple options
[09:50] <onest0ne> i'm guessing SNI :)
[09:50] <enigmus> The issue with switching to HR, when using SSL, is that it means changing your public URLs, as you need a new appid.
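[Editor's note: since memcache is flushed on any failover or maintenance event, as discussed above, the practical advice is to treat it strictly as a cache in front of the datastore. Here is a toy cache-aside sketch; the plain dicts standing in for memcache and the datastore, and the `get_greeting` helper, are illustrative only, not GAE API code.]

```python
# Toy cache-aside pattern: the cache may vanish at any time (e.g. a
# datacenter failover flushes memcache), so every read must be able
# to fall back to the authoritative store and repopulate the cache.

datastore = {"greeting:1": "hello"}  # authoritative, durable store
memcache = {}                        # fast, but may be flushed anytime

def get_greeting(key):
    """Read through the cache, repopulating it on a miss."""
    value = memcache.get(key)
    if value is None:
        value = datastore[key]  # fall back to the source of truth
        memcache[key] = value   # repopulate for later reads
    return value

first = get_greeting("greeting:1")   # cache miss: served from datastore
memcache.clear()                     # simulate a failover flush
second = get_greeting("greeting:1")  # miss again, but still correct
```

An app written this way keeps working (just slower) through a flush, which is exactly the property the failover discussion above is getting at.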
[09:51] <ristoh> ikai_google: I'd second the usefulness of posting the chat logs
[09:51] <taaz> the new google code feature to edit files live looks pretty cool. if we used that for the appengine svn source i'm guessing it would just file patches along with all other issues? would that be a useful or annoying way to submit small patches?
[09:51] <taaz> also, why is it using svn vs hg? would be cool to be able to clone the gae code and fiddle with it. (seems the ndb stuff could have been done that way)
[09:52] <ikai_google_> taaz: because it used SVN before and it's not a high priority for us to switch
[09:52] <ikai_google_> taaz: internally we don't use SVN
[09:52] <onest0ne> @taaz, ndb is actually in hg
[09:52] <ikai_google_> I use git myself, looked very briefly at hg
[09:52] <taaz> onest0ne: yeah, but it's not a clone of the main repo, which seems more natural. no big deal though.
[09:53] <King946> mbw: i'm still getting the same error... and the only thing in my datastore (at least according to the datastore viewer) is _AE_DatastoreAdmin_Operation entities
[09:53] <ikai_google_> enigmus: so did you switch to HR?
[09:53] <ikai_google_> enigmus: I can create an alias from your old app ID to your new app ID
[09:53] <ikai_google_> enigmus: this works pretty well unless you are doing some stuff with XMPP
[09:53] <ikai_google_> enigmus: there are some bugs around that
[09:54] <King946> mbw: it's bizarre because when I first logged in earlier today, the error wasn't there anymore.... it had just sort of disappeared overnight... but when i tried to recreate the error it is still there
[09:54] <ryan___> on classloading on app engine - we've seen what appears to be a class only being loaded once despite multiple application loads
[09:54] <robertk> King946: you need to post code (to the groups or using pastebin) for us to help you
[09:55] <ryan___> is app engine caching loaded classes between instance startups?
[09:55] <enigmus> ikai_google_: I haven't yet, but maybe I will then, if the appid can be aliased (we don't use XMPP). Will look into switching to HR then, thanks.
[09:56] <matija_je> ikai_google, is there a way to query the HR datastore besides ancestor queries so that I can be sure to get consistent data from every datacenter?
[09:56] <ikai_google_> matija_je: no, not if the data is different across entity groups
[09:56] <ikai_google_> matija_je: a batch get by keys is strongly consistent
[09:56] <ikai_google_> matija_je: but a query across entity groups can return data that is stale
[09:57] <robertk> ok, so get == strongly consistent; non-ancestor _query_ != strongly consistent
[09:57] <ikai_google_> ryan___: no
[09:58] <ikai_google_> robertk: right. queries within a single entity group = strongly consistent (these are ancestor queries)
[09:58] <ikai_google_> queries spanning multiple entity groups = eventually consistent
[09:59] == sv0 [[email protected]] has joined #appengine
[09:59] <robertk> ryan___: if you're using python, see http://code.google.com/appengine/docs/python/runtime.html#App_Caching
[09:59] <matija_je> ikai_google, do you plan to somehow enable consistent queries on HR, or does plain Paxos not allow this?
[09:59] <ikai_google_> matija_je: well, the reason they're consistent is because they're in a transaction
[09:59] <King946> robertk: the code is all here... http://code.google.com/appengine/articles/django-nonrel.html... I copy and pasted webapp/app.yaml webapp/index.html webapp/main.py, all exactly as they appeared under the "App Engine webapp app" section... then i overwrote that app with the code the tutorial links to here... http://bitbucket.org/twanschik/nonrel-guestbook/downloads/nonrel-guestbook.zip... I didn't change a thing except for the app
[09:59] <ikai_google_> matija_je: so there's no way for us to do transactions across entity groups
[10:01] <matija_je> ikai_google, don't forget to post the irc log ;)
[10:01] == dcure [[email protected]] has quit [Ping timeout: 240 seconds]
[10:02] <ikai_google_> matija_je: you know, I have the past few IRC transcripts saved on my desktop
[10:02] <ikai_google_> with the very good intention of posting them to the groups
[10:02] <ikai_google_> but after a few weeks... I just said forget it
[10:02] == Vinay_ [458f6ac9@gateway/web/freenode/ip.69.143.106.201] has joined #appengine
[10:02] <matija_je> post them also... tnx
[10:02] <ikai_google_> alright everyone, this is the end of IRC office hours. thanks to everyone who came out
[10:03] <starship> you're welcome
[10:03] <ikai_google_> apologies for the low number of Googlers here
[10:03] <ikai_google_> lots of thanks to robertk and mbw, who may know App Engine better than we do in some areas
[10:03] <robertk> thanks for your time ikai_google_. it's always helpful
[10:03] == sv1 [[email protected]] has joined #appengine
[10:04] <enigmus> thanks
[10:04] <mbw> now everyone thank *_google for all their help!
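[Editor's note: to summarize the consistency thread above: on the HR datastore, gets by key and ancestor queries are strongly consistent, while queries spanning entity groups are only eventually consistent. The following toy simulation (plain Python, not the GAE API, and a deliberate simplification of the real Paxos-based replication) shows why a cross-group query can miss a fresh write that a get by key sees.]

```python
# Toy model of HR-datastore consistency: every write lands in a
# "primary" replica synchronously, while a lagging "secondary"
# replica serves broad (cross-entity-group) queries. Gets by key
# read the primary, so they always see the latest write.

primary = {}    # every write applied here synchronously
secondary = {}  # replicated later; global queries read this

def put(key, value):
    primary[key] = value  # the write itself is durable right away

def replicate():
    secondary.update(primary)  # replication catches up eventually

def get(key):
    # Strongly consistent: reads where the write was applied.
    return primary.get(key)

def query_all():
    # Eventually consistent: may miss not-yet-replicated writes.
    return sorted(secondary)

put("Greeting/1", "hi")
stale = query_all()             # replication hasn't run: misses it
consistent = get("Greeting/1")  # but a get by key sees it
replicate()
fresh = query_all()             # after catch-up the query sees it too
```

This is the behavior behind robertk's summary in the transcript: get == strongly consistent, non-ancestor query != strongly consistent.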
