This past Wednesday, the App Engine team hosted the latest session of
its twice-monthly IRC office hours. A transcript of the session and a
summary of the topics covered are provided below. The next session will
take place on Wednesday, October 7th from 7:00-8:00 p.m. PDT in the
#appengine channel on irc.freenode.net.


--SUMMARY-----------------------------------------------------------
- Tip for handling schema upgrades: keep a version associated with
each entity and update this version number whenever a property is
added or removed. Then you can easily query for the entities that
still need to be updated via this version number. [8:59-9:00]
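The versioning pattern described in the chat (see 9:00 below) can be sketched in plain Python. The in-memory `entities` list, the `schema_version` property name, and the `tags` property stand in for real datastore entities and a query such as `WHERE schema_version < :1`:

```python
SCHEMA_VERSION = 2  # bump whenever a property is added or removed

# Stand-ins for stored entities; each carries its schema version.
entities = [
    {"schema_version": 1, "name": "old"},
    {"schema_version": 2, "name": "new", "tags": []},
]

def needs_upgrade(items, current=SCHEMA_VERSION):
    # In the real datastore this would be a query on the version
    # property, e.g. "WHERE schema_version < :1".
    return [e for e in items if e["schema_version"] < current]

def upgrade(entity, current=SCHEMA_VERSION):
    # Fill in any properties added since this entity was written,
    # then stamp it with the current schema version.
    entity.setdefault("tags", [])
    entity["schema_version"] = current
    return entity

for stale in needs_upgrade(entities):
    upgrade(stale)  # in production, batch this via tasks or cron
```

As cgeorg notes in the transcript, the migration itself was originally driven by a self-refreshing URL handler; today a background task is the more natural driver.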

- Built-in cursors for paging and datastore statistics are on track
for the next release (1.2.6). [9:03-9:04, 9:08]

- To avoid timeouts when importing large modules (or a large number of
smaller modules), consider importing them only when needed (e.g.
inside a function or method) or using zipimport. [9:10]

- Discussion on using datastore dump and restore to download all data
for a given application and re-import to the same app, a different
app, or even the local datastore. [9:08-9:13]
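The --dump/--restore flow discussed at 9:08-9:13 looks roughly like the following; the app ID, URLs, kind, and filename are placeholders, the exact flags may vary by SDK version, and remote_api must be mapped in the app's app.yaml:

```shell
# Dump one kind from the live app to a local file.
bulkloader.py --dump --app_id=myapp \
    --url=http://myapp.appspot.com/remote_api \
    --kind=Greeting --filename=greeting.dump

# Restore it elsewhere, e.g. into the local dev server's datastore.
bulkloader.py --restore --app_id=myapp \
    --url=http://localhost:8080/remote_api \
    --kind=Greeting --filename=greeting.dump
```

As noted in the transcript, this is done kind-by-kind, and restoring into a *different* datastore can conflict with automatically numbered entities.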

- Discussion on ways of profiling applications to measure and optimize
performance. [9:12, 9:14, 9:16]
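The knowledge-base article referenced in the transcript wraps the app's entry point in cProfile and logs the results; the same idea as a self-contained sketch (`handle_request` is a made-up stand-in for a real handler):

```python
import cProfile
import io
import pstats

def handle_request():
    # Stand-in for a real request handler doing some work.
    return sum(i * i for i in range(10000))

profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

# Report the most expensive calls by cumulative time. On App Engine
# this report would typically be sent to logging.info() so it shows
# up in the admin console logs for the request.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(10)
report = stream.getvalue()
```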

- Discussion on task queue throttling. Q: What does <rate>150/h</rate>
actually mean? A: It churns through the first 150 tasks in that queue
as fast as possible, then waits until the next hour to churn through
the next 150. It does not evenly distribute the 150 tasks over the
hour; use something like <rate>3/m</rate> if you want that behavior.
[9:24-9:27, 9:30-9:32]
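Per the 9:26-9:27 discussion, a queue.yaml along these lines (the queue name is illustrative) paces delivery to roughly one task every 20 seconds instead of an hourly burst of 150:

```yaml
queue:
- name: mail-queue   # illustrative name
  rate: 3/m          # ~150/h, but spread throughout the hour
  bucket_size: 1     # no stored-up tokens, so no bursts
```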

- Request for extensions to billing system to allow users to help
cover the costs of running an application. [9:35-9:40]

- Request for enhancing the Users API to support authentication via
the other systems GFC supports (OpenID, et al.) in order to make
Friend Connect integration easier. [9:37-9:42]

- Q: Is it possible to get billing details on a per-request basis? A:
Yes, this is currently possible for CPU time, but not for the other
billable quotas. [9:55-9:56]


--FULL TRANSCRIPT---------------------------------------------------
[8:59am] <schtief> how can i query with GQL for a <Missing> property.
I enhanced my Data Objects with properties later and now i can not
query for the old ones (Java)
[8:59am] <Wooble> You can't; they're not indexed.
[9:00am] <cgeorg> schtief: I add a version property to all of my model
objects
[9:00am] <schtief> cgeorg: thats clever thanks
[9:00am] <cgeorg> Any time I add a property, the version gets updated,
and I run an update on all objects whose version is less than the
current to update them
[9:00am] <manav> Hi
[9:00am] <ryan_google> cgeorg: +1
[9:00am] <scudder_google> Hi all, I'd like to kick off another
installment of our official twice-a-month hour long chat sessions. Let
the questions and comments flow.
[9:00am] <maxoizo> Hi google team! Thanks for the good post in the
blog. As i understand, the megastore is "a transactional indexed
record manager built on top of BigTable" - from SIGMOD 2008. Also i
see that App Engine uses a onestore now (in java OneStore.class etc).
What is the onestore? A simpler manager or something else? Will
onestore be used in parallel in the future, or will it be completely
replaced by megastore?
[9:01am] <ryan_google> it'd be great to see a general-purpose open
source library like that for handling schema upgrades
[9:01am] <schtief> cgeorg: thats how i did it to solve my problems :-)
this sounds so ruby like
[9:01am] <nasim> hi, i'm getting random runtime.deadlineexceedederror
from my app
[9:01am] <nasim> i'm using appengine-patch to use django
[9:02am] <schtief> why isnt there any OR shortcut in GQL? all i do is
make two queries for one OR and combine the results. this shortcut
could also be done by the engine
[9:02am] <ryan_google> maxoizo: onestore is just our internal name for
the metaschema, ie the way we represent our schemaless entities and
properties in megastore
[9:02am] <cringer> You know, the only difference between MSIE and a
dinosaur is that no one pretends the dinosaur is still alive.
[9:02am] <ryan_google> it's not code
[9:02am] <manav> I have a question on TaskQueue .. my worker url
points to subdomain in my app which is not permitted as only relative
urls are allowed
[9:02am] <morais78> ryan: can you take a look at
http://code.google.com/p/googleappengine/issues/detail?id=1695
(deadline issues on import)
[9:02am] <cgeorg> Since I started this before background tasks and
unlimited request time, I had a url I could hit that would do a
reasonable batch (20-40) entities, then output an html page that had a
link back to itself and JS to click it after a second or so
[9:02am] <cgeorg> Might be better to do with a background task now
[9:02am] <ryan_google> schtief: the IN operator is a kind of OR
[9:02am] <Fizz> Java question: is there a chance to get JAXB support
in the near future?
[9:02am] <moraes> hmmm. someone please turn off c-r-i-n-g-e-r.
[9:03am] <maxoizo> What are the other differences between the current
datastore and megastore in the App Engine implementation, apart from
the mechanism of replication?
[9:03am] <ryan_google> but unfortunately we can't support general-
purpose OR as efficiently as we'd need to. more in
http://sites.google.com/site/io/under-the-covers-of-the-google-app-engine-datastore
[9:03am] <scudder_google> manav: sounds like a good feature request,
could you file it here? http://code.google.com/p/googleappengine/issues/list
[9:03am] <eighty> java question for app engine team: any updates on
auto-paging features in the sdk?
[9:03am] <tobyr> fizz: jaxb is a high priority for us, but we can't
make any promises about specific dates
[9:04am] <jaxn> Is there a way to see what tasks are in the queue?
[9:04am] <manav> thanks scudder will do it
[9:04am] <ryan_google> max: app engine's schemalessness is another big
example, since megastore uses declarative schemas
[9:04am] <schtief> ryan_google: oh cool thanks for the hint, one more
GQL question: how do i query for null? i could not find an IS (NOT) NULL
[9:04am] <ryan_google> eighty: they're coming soon! hopefully 1.2.6
[9:04am] <eighty> ryan_google: yay! :)
[9:04am] <Fizz> tobyr: great, thanks.
[9:04am] <ryan_google> schtief: = NULL should work
[9:04am] <nickjohnson> schtief: You have to substitute the parameter
in your query - "WHERE blah = :1", None
[9:04am] <nickjohnson> ryan_google: I don't believe NULL is a keyword
in GQL
[9:04am] <nickjohnson> At least, not in Python
[9:05am] <nickjohnson> Which should be filed as a bug, if it's not
already
[9:05am] <eighty> ryan_google: how about a write back cache
feature? :)
[9:05am] <maxoizo> ryan: Why did Ryan Barrett write that "We don't
need all of its features - declarative schemas, for example"? why
not...? What about secondary indices, fulltext search (which must be
in the megastore - http://www.yesco.org/resume.html)?
[9:05am] <Wooble> nickjohnson: if the model didn't have the property
in question when the entity was created, isn't it impossible to search
for it anyway, since it couldn't have been indexed?
[9:06am] <nasim> nobody here to ans my ques? :(
[9:06am] <ryan_google> nickjohnson: good point
[9:06am] <nickjohnson> Wooble: Yes, correct - but if it had the
property and it's set to None, you can search for that.
[9:06am] <ryan_google> max: we actually do have secondary indices, we
use them for queries. :P megastore doesn't have full text search built
in, exactly, but it has integration points, and we're actively working
on exposing them
[9:06am] <ryan_google> nasim: we need more info
[9:07am] <scudder_google> nasim: app ID would be a good start
[9:07am] <nasim> i'm using appengine-patch to build a django 1.1 app
here: http://dhadharu.appspot.com
[9:08am] <jaxn> ryan_google: I currently am using 22GB of storage and
don't really know why. Is there any way to get an idea of where my
storage is so that I can get that number down a bit?
[9:08am] <nasim> it runs very well, except sometimes it raises
DeadlineExceededError
[9:08am] <maxoizo> ryan: thx. Next question: "Datastore dump and
restore facility": is this a feature of the bulkloader (via the --dump
& --restore flags) or some other mechanism which enables us to upload
whole dumps to the server (via a gDrive?), with the data then inserted
into the datastore in the background?
[9:08am] <nasim> I'm also using pytz and i18n in the application
[9:08am] <ryan_google> jaxn: definitely! we have a feature coming up
soon that will give you stats about how much space each of your kinds,
properties, etc. take up. kind of like du for app engine
[9:08am] <ryan_google> no promises, but we're hoping it will be in
1.2.6
[9:08am] <nickjohnson> maxoizo: It's the --dump and --restore
functionality of the bulkloader
[9:09am] <nasim> in the truncated exception message in the appengine
admin
[9:09am] <nasim> i find that it dies during imports of pytz or some
other module
[9:09am] <nasim> that's all i know about this exception raise
[9:09am] <cgeorg> nickjohnson: so with --dump and --restore we can
make full backups and restores of our production datastore?
[9:09am] <jaxn> ryan_google: that would be big. We are growing fast.
Will it include information about indexes?
[9:09am] <cgeorg> Can the data be imported into a local datastore for
testing?
[9:10am] <nickjohnson> cgeorg: Yes. You have to do it kind-by-kind,
but it dumps and restores the data unmodified, without requiring a
config.
[9:10am] <nwinter> I have the same problem as Nasim since the
datacenter move; while importing Django (using use_library('1.0')),
random and unpleasantly frequent DeadlineExceededErrors occur while
importing whatever piece of Django
[9:10am] <nickjohnson> cgeorg: Yes, just dump from one datastore and
restore to the other, though currently this may cause problems with
automatically numbered entities.
[9:10am] <morais78> Might be related to 
http://code.google.com/p/googleappengine/issues/detail?id=1695
? I've got some data on that ticket; the deadlines happened in my app
for about 10 minutes today, then went away
[9:10am] <cgeorg> Ok, so it keeps references intact as well?
[9:10am] <nickjohnson> Also, bear in mind that the amount of data you
can load into the local datastore is limited
[9:10am] <scudder_google> nasim: there are some cases in which a large
number/size of imports can hit the 30 sec execution deadline; common
workarounds include importing within functions/methods so the import
waits until needed
[9:10am] <nickjohnson> yes
[9:10am] <ryan_google> jaxn: the information about indices can be
derived from the properties and custom indices you have defined
[9:10am] <scudder_google> you might also try zipimport
[9:10am] <cgeorg> how big can the local datastore get?
[9:10am] <maxoizo> nick: And are you planning a mechanism like a dump
in mysql? Via remote admin, for example?
[9:10am] <cgeorg> I think my production store is around 200mb right
now
[9:11am] <nickjohnson> If you restore to a different datastore you
dumped from, though, beware that automatic numbering will not
currently work as expected, and may overwrite existing entities
[9:11am] <nickjohnson> cgeorg: No hard limit, but everything is
stored in memory and searched linearly, so it will slow down and grow
in size if you dump too much data in
[9:11am] <jaxn> ryan_google: that works. On a related note, what about
tools to help with profiling high CPU URIs
[9:11am] <cgeorg> I would only be restoring to local, where an
overwrite would be fine, or back to production, in the case of a
disaster
[9:11am] <cgeorg> Ok, good to know, I'm going to have to play around
with that a bit. Thanks
[9:12am] <nickjohnson> cgeorg: Restoring to the same place you dumped
from is always fine.
[9:12am] <nickjohnson> Np
[9:12am] <maxoizo> What does "Alerting system for exceptions in your
application" mean? Could you say more about this?
[9:12am] <manav> is there an equivalent of optimistic-locking feature
of hibernate in AppEngine DataStore
[9:12am] <nickjohnson> maxoizo: It's a user-land library in ext that
makes error reporting more convenient for Python apps. It's not
currently documented; keep an eye out for docs in the near future.
[9:12am] <ryan_google> jaxn: that's a little out of my depth. i assume
you've seen http://code.google.com/appengine/kb/commontasks.html#profiling
[9:13am] <scudder_google> ryan_google: I just looked up that link
too :)
[9:13am] <nwinter> moiras, I had starred that issue and think that is
the same thing as Nasim and I are seeing, yeah
[9:13am] <maxoizo> nick: thanks!
[9:13am] * Wooble wonders how annoying it would be to make his app
send him an XMPP message every time there's an exception raised :)
[9:13am] <nickjohnson> Wooble: Depends how reliable your app is ;)
[9:13am] <lent> can the bulkloader that is available in python version
be used if we are using java?
[9:13am] <ryan_google> manav: hmm. the datastore itself uses
optimistic concurrency. details in
http://sites.google.com/site/io/under-the-covers-of-the-google-app-engine-datastore
. does that answer your question?
[9:13am] <nickjohnson> lent: Yes.
[9:14am] <nickjohnson> You need to create a Python version of your app
that just has an app.yaml with the remote_api mapping, and point the
bulkloader to that.
[9:14am] <jaxn> ryan_google: I have. I don't understand the output
much though. As our traffic increases it seems like our CPU usage
escalates for processes because it is waiting on the datastore. (and
that optimistic concurrency link may help me too)
[9:14am] <cgeorg> Wooble: that's actually a terrific idea. please
implement and share :) An email option would be nice too
[9:15am] <nickjohnson> cgeorg: It'd be really simple, actually. You
just have to implement a log handler that does that, and install it
when your request handler is imported.
[9:15am] <jaxn> Wooble: There is something in the AppEngine cookbook
for sending an email for exceptions (and only once an hour for each
exception using Memcache). Would be trivial to switch it to XMPP
[9:15am] <FP[1]> Are there any things on the pipeline for workflow?
[9:15am] <nickjohnson> cgeorg: The undocumented exception reporting
library I mentioned is google.appengine.ext.ereporter; it sends a
daily email report instead of on-the-spot ones
[9:15am] <manav> not sure.. let me give a use case. I retrieve a record
(row) and modify it; in the interim the same record was updated.
Hibernate gives an exception (Stale data). It does this by maintaining
a version column in every table
[9:16am] <bthomson> i almost posted a comment on that import issue, i
get bursty periods where handlers go from 200ms to 15-30s
[9:16am] <scudder_google> jaxn: also I recorded a video a while back
on profiling an app, if Python is your thing 
http://www.youtube.com/watch?v=Zip1G6-NiMM
[9:16am] <ryan_google> manav: yes, the datastore works this way too
[9:16am] <jaxn> scudder_google: thanks! I will give that a look
[9:16am] <manav> cool thanks
[9:16am] <nwinter> bthomson, are you using Django 1.0 or app-engine-
patch by any chance?
[9:17am] <lent> the appengine java sdk versions 1.2.0, 1.2.1 and 1.2.2
are currently available in the maven repository at www.mvnsearch.org/maven2.
when will 1.2.5 be put there?
[9:17am] <cgeorg> is there any usage example sitting around somewhere?
[9:17am] <bthomson> no nwinter, i use webapp although i import a lot
of modules
[9:17am] <nickjohnson> cgeorg: The source code has extensive
docstrings describing how to use it
[9:17am] <maxoizo> To java team: if i do (QueueFactory.getDefaultQueue
()).add() or (QueueFactory.getQueue("myqueue")).add() i get the URL in
local & prod: "/_ah/queue" instead of "/_ah/queue/default" or "/_ah/
queue/myqueue". Is it a bug or am i doing something wrong? In the
Python SDK all is ok
[9:18am] <nickjohnson> But separate docs aren't available yet.
[9:18am] <manav> In Datastore API. When I have an app generated key.
AppEngine needs a string. How can I use a Long
[9:18am] <jaxn> Wooble: email exceptions:
http://appengine-cookbook.appspot.com/recipe/email-upon-exception-with-throttling/
[9:18am] <nickjohnson> manav: As of 1.2.5 you can pass a numeric
string if you wish. You can't yet specify your own IDs, however.
[9:18am] <cgeorg> nickjohnson: ok, that's 2 new items on the already-
too-long todo list :)
[9:18am] <manav> e.g Every tweet comes with a Long status id, and I
want to use  the same as my primary key
[9:19am] <nickjohnson> manav: I would suggest using it as a string
key, then
[9:19am] <nickjohnson> er, key name
[9:19am] <manav> k thanks nick
[9:20am] <manav> even with numeric string, if I recall it right I had
issues , I had to prefix with a char
[9:20am] <nickjohnson> Yes, as of 1.2.5 that is no longer necessary.
[9:20am] <manav> cool. that helps. thanks
[9:20am] * nickjohnson takes a deep breath
[9:21am] <joakime> I have a Table whose PrimaryKey is an Android
Device ID, so I prefix them with "k:" resulting in things like ... "k:
8657309"
[9:21am] <cheeze> Hi, developing with Python, what's best way to debug
in App Engine? I tried the methods suggested in http://paste.shehas.net/show/2/
and http://morethanseven.net/2009/02/07/pdb-and-appengine/ to no
avail.
[9:21am] <joakime> trivial to setup and query against.
[9:21am] <nickjohnson> joakime: In 1.2.5 on you no longer need the
prefix.
[9:21am] <maxoizo> To java team: if i do (QueueFactory.getDefaultQueue
()).add() or (QueueFactory.getQueue("myqueue")).add() i get URL: "/_ah/
queue" instead of "/_ah/queue/default" or "/_ah/queue/myqueue". Is it
a bug or am i doing something wrong?
[9:22am] <joakime> nickjohnson: i noticed that in the release notes.
too late for me tho. I have over 200k entries in that DB now.
[9:22am] <jaxn> joakime: unsolicited tip: using the device id could
create issues as users change phones
[9:22am] <manav> For TaskQueue to be Successful.. what all need to
return Status code "200"
[9:22am] <ryan_google> maxoizo: unfortunately we might not have anyone
here who's familiar enough with task queue on java to help
[9:22am] <ryan_google> consider posting to the java group?
[9:22am] <joakime> jaxn: i have a separate table mapping users to
devices.
[9:22am] <nickjohnson> joakime: There's nothing wrong with the prefix
approach - this is just more convenient for new code, really.
[9:22am] <nickjohnson> manav: Pardon?
[9:23am] <scudder_google> maxoizo: yes, we'll have to get back to you
on that, the Java discussion group would be best
[9:23am] <manav> I have a REST Queue Resource which adds worker urls
to queue
[9:23am] <sohil> how to use appengine as backend of my site?
[9:23am] <nickjohnson> sohil: Can you be more specific?
[9:23am] <manav> The REST Queue Resource itself was returning "204"
and TaskQueue status was failure
[9:23am] <maxoizo> 2ryan: ok, thanks
[9:23am] <nickjohnson> manav: You need to return 200, then
[9:24am] <joakime> nickjohnson: any chance TaskQueues (which rock
btw!) will have throttling that actually works?
[9:24am] <nickjohnson> joakime: What do you mean by "actually works"?
What problem are you seeing?
[9:24am] <sohil> i have data on my site and i want to use the
computing power of the appengine
[9:24am] <markapp> Any idea what kind of interpolation you're using
when resizing images? They look quite bad. Any plans to change the
interpolation algorithm?
[9:24am] <nickjohnson> manav: You're right that other 2xx status codes
should probably succeed, though - can you file a bug?
[9:25am] <manav> thanks nick.. will do
[9:25am] <manav> on same note task queue rock!!!
[9:26am] <ryan_google> markapp: interesting point! we actually use the
same image handling code that picasa uses, as a shared service, which
is great for reuse but means it's not as easy for us to make changes
without also affecting them and the other internal users
[9:26am] <joakime> nickjohnson: i have situations where a bulk of
tasks get created (say 200 entries), and put in a named queue with a
rate specified of "150/h". what I see happening is 150 entries
immediately get processed, within mere seconds. and then an hour long
pause until it processes the last 50 entries in a quick burst.
[9:26am] <Wooble> sohil: "use the computing power" in what way?
[9:26am] <ryan_google> markapp: still, definitely feel free to file an
issue in the issue tracker
[9:26am] <nickjohnson> joakime: Have you tried decreasing the bucket
size?
[9:26am] <jaxn> I second that Task Queue rocks! Just wish there were
tools to see the queue :)
[9:26am] <joakime> nickjohnson: default bucket size of 5 at the
moment.
[9:27am] <moraes> relevant question #1: will 1.2.6 be the best release
ever?
[9:27am] <sohil> like map reduce functionality
[9:27am] <moraes> relevant question #2: why don't you follow pep 8?
[9:27am] <ryan_google> joakime: i suspect what he means is that you
could try setting the queue to 3/m instead of 150/h
[9:27am] <jaxn> joakime: shouldn't it be set to 150/60 per minute? So
that it spreads the 150 out over an hour instead of doing hourly
bursts? (Is that what is going on?)
[9:27am] <Wooble> sohil: app engine doesn't provide map-reduce
functionality.
[9:27am] <joakime> ryan_google: yeah, i was starting to go down that
path. :-)
[9:27am] <medecau> can anyone up my nº of apps?
[9:28am] <scudder_google> moraes: for #2 Google has its own internal
style guide which differs slightly from PEP 8
[9:28am] <moraes> okay :)
[9:28am] <nickjohnson> ryan_google: Actually, that wasn't what I
meant, but that does sound like a good idea. :)
[9:28am] <ryan_google> moraes: re 1.2.6, yes, of course. at least
until 1.2.7. :P
[9:28am] <moraes> hehe
[9:28am] <scudder_google> moraes: for example, 2 space indent rather
than 4, and CamelCase method names (though for App Engine we switched
to lower_with_underscores)
[9:28am] <nickjohnson> medecau: Have you run out of apps yet?
[9:29am] <medecau> i hit the limit today
[9:29am] <stumpy> what kind of things are in the pipeline to improve
the spin-up/warm up time of java instances?
[9:29am] <nickjohnson> medecau: PM me your email address
[9:29am] <medecau> <--- it's a secret
[9:30am] <drewlesue> I have a simple question. on the datastore if I
don't specify a key_name then it assigns a numeric id.
[9:30am] <joakime> jaxn: i probably just misinterpreted what <rate>150/
h</rate> means. I had hoped it would spread the handling out as evenly
as possible to 150 handled per hour. but instead it just churned thru
all 150 for that hour as fast as possible, and then stood back,
waiting for the next hour to tick off.
[9:30am] <drewlesue> is there a way to retrieve that id
[9:30am] <nickjohnson> medecau: I can't help you if you won't tell me
your email address. :)
[9:30am] <nickjohnson> But you can message it to me privately
[9:30am] <jaxn> jaokime: yeah, I think it is a bucket of 150, once an
hour
[9:30am] <ryan_google> wooble, sohil: re mapreduce, we're actively
looking into it. task queue is a great building block. for map() the
remaining work is mainly just splitting up the input data. for reduce,
we'd also need an efficient way to do the shuffle. (the datastore is
probably too heavyweight)
[9:30am] <nickjohnson> or tell me one of your App IDs.
[9:31am] <ryan_google> drewlesue: yes, entity.key().id()
[9:31am] <Wooble> ryan_google: good to hear, it must hurt to have
amazon be first to market with cloud mapreduce :)
[9:31am] <drewlesue> awesome thanks!
[9:31am] <joakime> nickjohnson: i think I'll just have to experiment
with alternative configurations on the queue. it would help if the
devserver had an option to auto-run tasks tho. ;-)
[9:31am] <nwinter> Is it possible we'll see task queue limits that can
be scaled up with the number of users?
[9:32am] <nickjohnson> joakime: That's something we're working on
[9:32am] <manav> what are MAC tools available for deployment.
[9:32am] <tobyr> stumpy: one example is that we're making class
loading faster in general
[9:32am] <nickjohnson> manav: Eclipse for Java, and the App Engine
Launcher for Python
[9:32am] <ryan_google> nwinter: eventually yes.
[9:33am] <SpanishInquisiti> Nobody's prepared for the spanish
inquisition!
[9:33am] <nwinter> Okay, thanks!
[9:33am] <manav> thanks
[9:33am] <DannyZ> when will we be able to query on ancestor variables
in java?
[9:33am] <sohil> ryan_google: ok after implementing mapreduce
functionality is appengine give result as fast as google ?
[9:33am] * ryan_google was prepared for SpanishInquisiti though.
[9:33am] <jkrijthe_> Is it possible to move an app to a different
account?
[9:33am] <rdayal> haha
[9:34am] <ryan_google> sohil: um, i don't understand the question
[9:34am] <Wooble> jkrijthe_: yes; invite other account as developer,
delete the first one.
[9:34am] <Wooble> note: doesn't affect app creation quota
[9:34am] <medecau> i have an app that hits the quota half way through
the day every day, because of nº of requests but both cpu and
bandwidth stay at about half, can't google up the quota on the nº of
requests?
[9:34am] <SpanishInquisiti> oh yeah? Why does memcache have to go down
during the maintenance periods?
[9:34am] <uriel> are there any plans to integrate Google Friend
Connect with the users API (I know there is code to access gfc, but
would be nice if it was integrated, rather than have to pick between
both systems, or worse support both at the same time)
[9:34am] <jaxn> I just deployed XMPP exception notifications. (thanks
for the idea). Waiting for an exception to see if it works.
[9:34am] <ryan_google> medecau: sign up for billing?
[9:35am] <ryan_google> SpanishInquisiti: because memcache isn't
transactional across datacenters
[9:35am] <manav> how can I gracefully handle HardDeadlineError in my
app
[9:35am] <nickjohnson> manav: You can't - that's why it's hard
[9:35am] <ryan_google> and we don't want to give you inconsistent
results
[9:35am] <scudder_google> manav: you can't, that is the final full
stop when you go over the deadline
[9:35am] <medecau> ryan_google: anyway, is it on the roadmap to have
devs without credit cards have users help pay the bill?
[9:35am] <SpanishInquisiti> So every maintenance period involves
shifting datacenters, or will this eventually cease?
[9:35am] <slynch> uriel: What integrations are you looking for?
[9:35am] <nickjohnson> You need to handle the initial exception
[9:35am] <ryan_google> SpanishInquisiti: they often do, yes
[9:36am] <scudder_google> manav: instead, handle the Deadline Exceeded
exception and finish up
[9:36am] <SpanishInquisiti> Very good, ryan, but we have YET TO SEE IF
YOU ARE TRULY PREPARED!
[9:36am] <ryan_google> medecau: if you mean having your users pay us
directly, no. :P
[9:36am] <manav> is there a preemptive strategy to avoid
DeadlineExceeded/HardDeadlineError
[9:36am] <sohil> ryan_google: i dont know about task queue
functionality. is if appengine provide the map reduce then how it will
work mean?
[9:36am] <manav> like having read timeouts
[9:36am] <ryan_google> SpanishInquisiti: i'm an eagle scout. i've been
prepared since i was in kindergarten.
[9:37am] <uriel> slynch: I'm looking for a single api and single set
of user types, so I don't need to store two different kinds of ids in
the db
[9:37am] <nickjohnson> SpanishInquisiti: Realistically, how many QPS
can the Spanish Inquisition muster anyway? ;)
[9:37am] <medecau> or pay on checkout and the money going to google
without being wired to me and back to google again
[9:37am] <rdayal> nickjohnson: I think they managed to reach a pretty
high QPS
[9:37am] <medecau> something like paypal
[9:37am] <ryan_google> sohil: it's too early to say. we're definitely
excited about it though. we're glad you're interested!
[9:37am] <uriel> slynch: it is more of a convenience thing, but i was
looking into adding gfc support, and it turned out to be more tedious
than I would have expected
[9:37am] <maxoizo> Will the next release come with support for Google
Apps domains in XMPP?
[9:37am] <ryan_google> medecau: right, understood. sorry, no, that's
not a high priority for us.
[9:37am] <scudder_google> manav: I'd need to know more specifics about
your app, but in general the approach is to perform small and related
work in a request and spread out across multiple requests
[9:37am] <nickjohnson> medecau: I think it makes more sense to send
the money to you first, or you could end up with masses of donations
you can only use on App Engine.
[9:37am] <zx5> I have created my app engine account with my gmail id.
but when my app is ready to launch, I would like to create a new app
engine account using Google Apps. My question is : is it ok to have 2
accounts, or I will have to delete my app engine account associated
with my gmail id and then create a new one using Google Apps?
[9:38am] <slynch> uriel: Do you mean that you're looking for the Users
API to support authentication via the other systems GFC supports?
(openid, et. al.)
[9:38am] <DannyZ> when will we be able to use inheritance in gae java??
[9:38am] <uriel> slynch: exactly
[9:38am] <scudder_google> manav: or you could handle the urgent items
in the request and defer the rest using the task queue
[9:38am] <SpanishInquisiti> ryan: :=) We shall see. Can we use new
allocate_ids() on 'fake' (non-stored) entities to provide counter-like
functionality?
[9:38am] <medecau> nickjohnson tbh that's not a problem
[9:38am] <nickjohnson> zx5: Is there a reason you want a different
account? As a sender address for emails?
[9:38am] <manav> In my case I am trying to access Twitter API and
(also this is not part of queue)
[9:38am] <medecau> :)
[9:38am] <mpd> maxoizo: not in the next release. hopefully soon,
though.
[9:38am] <nickjohnson> medecau: What I'm saying is that paying
directly into App Engine would not be very scalable. :)
[9:38am] <zx5> nickjohnson: yes
[9:38am] <ryan_google> SpanishInquisiti: i'm not sure i understand
what you mean by "counter-like functionality"
[9:39am] <slynch> uriel: interesting suggestion. Is there a specific
authentication you're looking for? Or you just want to be able to
support as many as possible?
[9:39am] <manav> sometimes I run into hardDeadlineError, so I have a
readTimeOut of 25sec; not sure if this will help avoid the error
[9:39am] <zx5> nickjohnson: sending emails for a website from a gmail
account looks odd
[9:39am] <scudder_google> manav: I assume you are using urlfetch, in
which case you might need to set a tighter deadline on the request
[9:39am] <Wooble> nickjohnson: I think it's more a question of Google
Checkout letting you carry a balance and paying from that.
[9:39am] <maxoizo> mpd: thanks
[9:39am] <nickjohnson> zx5: In that case, fill out the 'sms issues'
form when the time comes and we can activate a second account for you
[9:39am] <SpanishInquisiti> ryan: Can we do allocate_ids
(my_random_key_path, 1) to increment a "counter"?
[9:39am] <ryan_google> if you mean, can you use allocate_ids() as a
sequence, or like e.g. mysql's auto-increment, then yes, definitely
[9:39am] <medecau> nickjohnson: do you have any suggestion for devs
without access to C.C.s?
[9:39am] <nickjohnson> Wooble: Checkout isn't for stored balances,
though.
[9:39am] <manav> I am using the java.net API ~ what are the thresholds
[9:39am] <uriel> slynch: I would like to support as many as possible,
obviously :)
[9:40am] <nickjohnson> medecau: I can only suggest getting a debit
card with a prepaid number, or one of the Visa gift cards.
[9:40am] <zx5> nickjohnson: I would not have to cancel my current
account? just want to be sure I am not violating any TOS
[9:40am] <nickjohnson> er, debit card with a valid CC number
[9:40am] <scudder_google> manav: the default timeout for an HTTP from
App Engine to another site on the web is 5 seconds
[9:40am] <tobyr> DannyZ: We're actively working on inheritance in JDO/
JPA, but it does require a lot of effort. Your best bet would be to
track the ORM code project separately from SDK releases
[9:40am] <nickjohnson> zx5: No, that's fine.
[9:40am] <zx5> nickjohnson: thx :-)
[9:40am] <scudder_google> *HTTP request
[9:40am] <SpanishInquisiti> ryan: We can assume it monotonically
increases but will it also return consecutive integers?
[9:40am] <slynch> uriel: Alright, good feedback. It's not on our
roadmap now but I'll make a note of it :)
[9:40am] <manav> hmm... can we get some kind of retry handler
mechanism like commons - http client
[9:40am] <uriel> slynch: I guess openid would be high on the list, but
I just found it nagging to have to pick either the 'standard' google
user accounts that seems better integrated with gae, or switch over to
gfc, the dichotomy is what is more annoying than anything else ;)
[9:40am] <Wooble> fwiw, paypal will accept payments and give you a
debit card you could use to pay google.
[9:41am] <ryan_google> SpanishInquisiti: ah, yes, you can, but it
doesn't provide the guarantees you want. for example, if you ask for 1
id but we allocate ids in batches of 10, we might give you id 3 the
first time, then id 13 the next time
[9:41am] <tobyr> DannyZ: If it's something you deeply care about, you
can even contribute patches :)
[9:41am] <ryan_google> even though you've only called allocate_ids
(..., 1) twice
[9:41am] <uriel> slynch: each option seems to have its advantages,
and seems like a silly tradeoff when both could be merged as far as I
can tell
[9:41am] <nickjohnson> SpanishInquisiti: IDs are unique, but not
sequential.
[9:41am] <DannyZ> tobyr: yeah i already found where the problem is ..
doesn't look so hard to do it inefficiently :)
[9:42am] <slynch> uriel: I don't know all of the different ways the
user API is used internally. We may be making some assumptions of the
API that work for Google accounts, but may not work for other types.
[9:42am] <tobyr> DannyZ: Yes, covering all the edge cases and making
it fast is the tough part
[9:42am] <scudder_google> manav: sounds like a feature request to
me :) you can do some retrying on your own, but at some point you might
need to give up and send an error back to the client
[9:42am] <SpanishInquisiti> ryan: performance-wise, though,
allocate_ids() would be far faster than doing a transactional
increment or the better counter cookbooks (mostly memcache with
occasional tx)?
[9:43am] <ryan_google> SpanishInquisiti: eh, maybe up to 2x as fast,
but it's still not the right tool.
[9:43am] <nickjohnson> SpanishInquisiti: Are you talking about using
allocate_ids to emulate a sharded counter? That won't work.
[9:44am] <nickjohnson> Unless you don't mind hundreds or thousands
added to your counter seemingly at random. :)
[9:44am] <manav> yup.. I can live with that for now. Also i keep
getting this warning from Guice 2.0
org.jboss.resteasy.plugins.guice.ModuleProcessor -registering factory
for class
[9:44am] <DannyZ> tobyr: yeah currently simply getting the class vars
including ancestors.. currently it's caching only defined ones so
there will be duplicates if done sloppy :) but yeah i need to find the
time and do it because it's really bad copypasting code
[9:45am] <SpanishInquisiti> nick: I was thinking of counter-stuff but
that's obviously not possible. Still useful though to provide unique
ids at faster speed.
[9:45am] <uriel> slynch: yea, I understand, just wanted to know if
there were any specific plans, thanks :)
[9:45am] <lent> does appscale have google's blessing? as it matures,
we're considering using it to meet future requirement to deploy our
app on-premise for some of our customers but i read in a blog that
they may be in violation of appengine terms agreement in regards to
not reverse-engineer gae and such.
[9:45am] <nickjohnson> SpanishInquisiti: Certainly, yes, if all you
need is unique IDs.
[9:46am] <maxoizo> mpd: Are you planning this:
http://code.google.com/p/googleappengine/issues/detail?id=2071?
[9:46am] <nickjohnson> lent: We're very enthusiastic about other
implementations of the App Engine platform
[9:46am] <ryan_google> SpanishInquisiti: the one big thing you can do
to help keep the ids dense and contiguous is to have the entity or key
you pass to allocate_ids as the model be a child entity
[9:47am] <nwinter> Since Sept. 2, my site is frequently much, much
slower than it was prior to the maintenance (usually comes in bursts)
-- any explanation for why that'd be so or guesses as to whether
performance will improve and it will stop doing that so often?
[9:47am] <ryan_google> since each entity group maintains its own id
sequence for its non-root entities
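The non-contiguous behavior ryan describes can be modeled in plain Python. This is a toy sketch, not the App Engine API; the batch size of 10 and the round-up-to-a-whole-batch policy are illustrative assumptions based on his example:

```python
def make_allocator(batch_size=10):
    """Toy model of batch ID allocation: every call reserves at least a
    whole batch, so consecutive 1-ID requests leave gaps -- the IDs are
    unique and increasing, but not consecutive."""
    state = {'next': 1}

    def allocate_ids(count=1):
        # Like the real allocate_ids(), return the (first, last) IDs reserved.
        start = state['next']
        state['next'] += max(count, batch_size)
        return start, start + count - 1

    return allocate_ids

alloc = make_allocator()
print(alloc(1))  # (1, 1)
print(alloc(1))  # (11, 11) -- a gap, even though we only asked for 1 ID twice
```

This is why allocate_ids works as a source of unique IDs but not as a counter.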
[9:47am] <mpd> maxoizo: absolutely
[9:47am] <lent> nickjohnson: that's good to hear as it may save us a
lot of re-work to go on-premise on a more conventional platform
[9:47am] <SpanishInquisiti> CapabilitySet(...).will_remain_enabled_for
(x) --- Is this fully enabled and what's the granularity of updates
around maintenance times?
[9:47am] <mpd> maxoizo: that will probably come before custom domains.
[9:47am] <joakime> nickjohnson: (pardon if this has been answered) any
chance on seeing multi-property inequality queries in the future?
[9:48am] <nickjohnson> lent: Absolutely. This is somewhat related to
our Data Liberation work, though not quite under the same umbrella:
http://www.dataliberation.org/
[9:48am] <nickjohnson> joakime: I think ryan_google can answer that
better than I can.
[9:48am] <ryan_google> nwinter: ugh, that's no fun. sorry to hear it.
i'd need more info to give you a full diagnosis, especially since
1.2.5 hit at about the same time...but it's definitely not intentional.
[9:48am] <ryan_google> joakime: we wish! not in the near future, but
maybe eventually.
[9:48am] <manav> how can we deal with simultaneous request limit
[9:48am] <maxoizo> mdp: cool :) And would be nice to correct this:
http://code.google.com/p/googleappengine/issues/detail?id=2072
[9:48am] <nickjohnson> joakime: In general, satisfying a query with
multiple inequalities efficiently is difficult in any RDBMS.
[9:49am] <scudder_google> nwinter: also if you could send your app ID
I'd be happy to take a deeper look
[9:49am] <DannyZ> if i have an entity group is it effected somehow by
the number of same entities not in the same group? locally it is
effected is it the same on the cloud?
[9:49am] <ryan_google> nickjohnson is right. in our case, they'd need
to use something other than megastore's secondary indices, and
integrating something new like that would be a lot of work.
[9:49am] <nwinter> app id: skrit -- yeah, I haven't had time to really
drill down yet besides thinking the import issue discussed above
started happening at the same time
[9:49am] <nickjohnson> manav: Are you hitting it? For a reasonably
fast app, you won't reach that limit until you're doing several
hundred QPS, at which point you can ask for it to be increased for
your app.
[9:49am] <ryan_google> DannyZ: what do you mean by effected?
[9:50am] <joakime> nickjohnson: true. but so is the burden on the
developer to do things like "get me all purchase records for customer
x which occurred on week y". to do that now means adding extra
properties to the table just to track potential query ranges via
equality only.
[9:50am] <jaxn> QPS? How can we get this number?
[9:50am] <DannyZ> meaning if i query that specific entity group, will
the performance be effected by other entities of the same type?
[9:50am] <nwinter> when I do get time to make a bunch of tests and the
like, is posting on the App Engine group the most appropriate place
for more info?
[9:50am] <nickjohnson> joakime: Yes; denormalization like that will
improve query performance on pretty much any DB
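joakime's example can be handled with exactly the denormalization nickjohnson describes: precompute a "week" property at write time so the query becomes two equality filters. A minimal sketch; the Purchase model and property names are hypothetical:

```python
import datetime

def week_key(d):
    """Denormalized week bucket: ISO year and week number, computed once
    and stored on each purchase when it is written."""
    year, week, _ = d.isocalendar()
    return '%04d-W%02d' % (year, week)

# Write time (hypothetical model):
#   purchase.week = week_key(purchase.date)
# Query time -- equality filters only, no inequalities needed:
#   Purchase.all().filter('customer =', x).filter('week =', week_key(day))
print(week_key(datetime.date(2009, 9, 16)))  # '2009-W38'
```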
[9:51am] <jlivni> jaxn: the dashboard lets you see your usage in QPS
[9:51am] <scudder_google> nwinter: yes that sounds like a great plan
[9:51am] <manav> yes I hit "simultaneous active request limit"
occasionally but nowhere close to hundreds of requests, maybe 6
[9:51am] <nickjohnson> jaxn: Queries Per Second. It's the figure
graphed on your dashboard by default.
[9:51am] <uriel> other silly question: can we expect python 2.6 or 3.x
support any time in the not too far future?
[9:51am] <manav> how can I increase the limit
[9:51am] <nickjohnson> manav: Are you doing 'long polling' or anything
else that causes your requests to take 10+ seconds to return?
[9:51am] <nwinter> if it's just happening to me, then it's probably
something I can fix, which is good news, thanks!
[9:51am] <uriel> (I'm slowly moving all my code to 2.6 and writing
most non-gae new projects in 3.x so I'm curious)
[9:51am] <jlivni> i have a question about the upcoming maintenance:
how solid are you on the sept 22 date (specifically, likelihood of it
being sept 21 for example?)
[9:51am] <jaxn> nickjohnson: my dashboard defaults to Request per
Second and QPS isn't an option
[9:52am] <manav>  no long polling but accessing 3rd party APIs mostly
which can take longer
[9:52am] <nickjohnson> jaxn: A Request is a Query.
[9:52am] <jlivni> i have an app that is going to have a huge # of hits
on sept 21, and it would be most embarrassing if appengine went away
that day
[9:52am] <jaxn> nickjohnson: ah. Thanks
[9:52am] <ryan_google> DannyZ: in short, no, the number of entities
you have, regardless of kind or ancestor or anything else, doesn't
affect performance of either write or read operations
[9:52am] <nickjohnson> manav: If you're not hitting it consistently, I
don't think you need to worry about it.
[9:52am] <lent> earlier i asked a question about when appengine java
sdk 1.2.5 will be put onto maven repository at www.mvnsearch.org/maven2
(as it currently has 1.2.0, 1.2.1 and 1.2.2). does anyone know who i
should contact to have 1.2.5 put there?
[9:52am] <nickjohnson> lent: I believe this was raised on the forum,
and it's being worked on currently.
[9:52am] <jlivni> also, do you want to know in advance if an app will
be expecting potentially > 50 or 100 qps? or does it not
matter ....
[9:53am] <joakime> lent: there's a repository discussion group at
apache.org for that kind of talk.
[9:53am] <lent> nickjohnson: thanks
[9:53am] <ryan_google> jlivni: it's pretty solid. plus, if we do have
to change it, we'd almost certainly postpone it, not move it up a day
[9:53am] <nickjohnson> jlivni: For that sort of traffic level, simply
enabling billing should be sufficient.
[9:53am] <Wooble> DannyZ: I don't believe the kinds and relationships
are all that relevant locally either, the datastore just sucks when
you have a lot of data.
[9:53am] <jlivni> nickjohnson: ok thanks (obviously have billing
enabled, just didnt know if you guys cared or if thats just such a
small # of your total it doesnt matter to you)
[9:53am] <DannyZ> wooble: yeah thats what i thought also..
[9:54am] <slynch> lent: It's on our radar, we're trying to nail down
some of our dependencies at the moment, but once we have that fixed,
it will be up.
[9:54am] <nwinter> I wonder how many QPS App Engine serves in total
[9:54am] <slynch> lent: I unfortunately don't have an eta for you
though, just that it's coming
[9:54am] <lent> slynch: good to know thanks for the info
[9:54am] <ryan_google> Wooble: +1. neither local sdk is intended to
handle much data volume.
[9:54am] <nickjohnson> jlivni: If you expect to exceed more than about
500 qps, that's the point at which you need to let us know.
[9:54am] <jlivni> another question: do you have any discounts,
potentially, for nonprofit organizations or special cases that google
might like to subsidize?
[9:55am] <zx5> is it possible to get billing details on per request
basis? in the case of having multiple subaccounts on same website, so
each account can be billed separately on resources it consumes
[9:55am] <SpanishInquisiti> CapabilitySet support? Can we assume
will_remain_enabled_for() method will be working for upcoming
maintenance and in general?
[9:55am] <ryan_google> jlivni: good question! not right now, but it's
an interesting idea. definitely consider posting on the group!
[9:55am] <nickjohnson> zx5: Yes, there's an API for getting CPU quota,
but not the others currently
[9:56am] <ryan_google> SpanishInquisiti: yes.
[9:56am] <nickjohnson> zx5: see
http://code.google.com/p/googleappengine/source/browse/trunk/python/google/appengine/api/quota.py
[9:56am] <zx5> nickjohnson: thx :-)
[9:56am] <manav> can we expect Flush of Task Queue in near future. I
have to rename queues and redeploy every time I need to flush
[9:57am] <nickjohnson> manav: I'd suggest adding a version number as
an argument to your tasks, and discarding them if they're from a
previous version
[9:57am] <tobyr> ryan_google: Although we don't generally expect any
performance problems with the Java SDK's local datastore. If you ever
see it performing unusably slow, please file an issue
[9:57am] <nickjohnson> You can even do this automatically by using and
checking the os.environ['CURRENT_VERSION'] variable
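nickjohnson's flush trick can be sketched as a small guard in the task handler. Note that the environment variable App Engine sets is CURRENT_VERSION_ID; the dict-based signature here is for illustration and testability, not an App Engine API:

```python
import os

def should_run(task_params, environ=None):
    """Discard tasks enqueued by a previous deployment: each task carries
    the app version that enqueued it, and the handler only runs tasks
    whose version matches the currently deployed one."""
    environ = os.environ if environ is None else environ
    return task_params.get('version') == environ.get('CURRENT_VERSION_ID', '')

# The enqueue side would add the current version to every task's payload,
# e.g. params={'version': os.environ['CURRENT_VERSION_ID'], ...}. After a
# redeploy, stale tasks fail the check and can be acknowledged without
# doing any work -- effectively flushing the queue.
```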
[9:57am] <uriel> q: what is the recommended way to do queries on a key
prefix (e.g., all keys that start with 'foo/')? (and is it possible or
will it be possible to do them without an index?)
[9:58am] <tobyr> also DannyZ/Wooble
[9:58am] <manav> sounds like a better idea
[9:58am] <manav> also can we expect REST Jax-rs compliance from
AppEngine anytime soon
[9:58am] <nickjohnson> uriel: You can use a pair of inequalities
- .filter("foo >", a).filter("foo <", a+u'\ufffd')
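The pair-of-inequalities trick can be checked with plain string comparisons, which is essentially what the datastore index does: every string starting with the prefix sorts at or after the prefix and before prefix + u'\ufffd'. A pure-Python demonstration (the sample keys are made up, and >= is used so the bare prefix itself matches too):

```python
prefix = u'foo/'
upper = prefix + u'\ufffd'   # sorts just past any string starting with the prefix

keys = [u'foo/', u'foo/bar', u'foo/\u00e9clair', u'foo0', u'fon/x']
# Exactly the keys that start with 'foo/' survive; 'foo0' and 'fon/x' do not.
matches = sorted(k for k in keys if prefix <= k < upper)
```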
[9:59am] <manav> Is there a way to cache the HttpResponse for a url on
server side with AppEngine (REST-cache feature in Jboss)
[9:59am] <uriel> nickjohnson: yes, that is what I had in mind, but
seemed a bit awkward...
[9:59am] <ryan_google> tobyr: good point, java's local datastore isn't
nearly as bad as python's. is that true of data volume as well as
individual request performance, though?
[10:00am] <uriel> ryan_google: indeed, improvements to python's dev
datastore would be nice
[10:00am] <uriel> it feels rather clunky...
[10:00am] <tobyr> ryan_google: We've tested with some pretty large
datasets. I believe you're more likely to run out of memory than see
unusable performance.
[10:00am] <ryan_google> uriel: agreed!
[10:01am] <ryan_google> tobyr: ah, nice, that's very encouraging
[10:01am] <nwinter> performance issues with the Python datastore may
actually be caused by the history file instead, in my experience
[10:01am] <SpanishInquisiti> nwinter: have you tried bdbdatastore?
[10:01am] <uriel> ryan_google: heh, maybe I will have to write some
patches ;) (too many other things on my plate for now though...)
[10:01am] <ryan_google> nwinter: the query history file? agreed, that
thing is way more trouble than it's worth. we're actually considering
getting rid of it.
[10:01am] <nwinter> I just delete the history file each time I start
the dev_appserver, I don't have much data in there
[10:02am] <nwinter> it was extremely unpleasant figuring out that
that's what was making it slow all those months ago though
[10:02am] * nickjohnson races off to a meeting - later all!
[10:02am] <SpanishInquisiti> *
[10:02am] <drewlesueur> thanks nick
[10:02am] <bthomson> history file eh? does that fix the ram
consumption?
[10:02am] <joakime> ryan_google: so when can we expect to see this
available (from google) heh -> private SearchService search =
SearchServiceFactory.getDatastoreSearchServiceFactory();
[10:02am] <rdayal> See you guys later. If you have any Google Plugin
for Eclipse questions, please ask on the groups (GWT or App Engine).
[10:02am] <SpanishInquisiti> Where is the query history file?
[10:03am] <nwinter> the issue I saw was that every request (datastore
or not) would get slower and slower, until eventually it was well over
a second
[10:03am] <nwinter> because it was reading through the whole history
file
[10:03am] <nwinter> don't know if it still does that
[10:03am] <bthomson> here it gets slower and eats ram into the multi
GB range
[10:04am] <ryan_google> joakime: i wish i knew! :|
[10:04am] <nwinter> another issue which causes the same thing (on
Windows) is Firefox having ipv6 on (in about:config set
network.dns.disableIPv6 to true)
[10:05am] <joakime> the gae version of compass (open source) seems to
do the trick for me, but the means of search index storage (as blobs
in datastore) is just scary kludgy.
[10:05am] <ryan_google> SpanishInquisiti: see the dev_appserver's --
history_path flag
[10:05am] <scudder_google> well it's a few minutes past the hour, so
I'd like to bring our official chat session to a close, a few of us
will still be around though
[10:06am] <scudder_google> thanks for participating!
[10:06am] <nwinter> thanks for the help, guys!
[10:06am] <drewlesueur> yes, thanks!
[10:06am] <ryan_google> thanks for coming!