Hey Arun,
  You can increase your front-end instances to B2s and see if that
helps.  At least it will let you pay your way out of the issue while
you're searching for a better solution.

  Apptrace is a good tool for local testing.  Unfortunately it won't
run on appspot, but it might help you better understand what is
causing the memory blow-ups.

  You might try setting up some test cases and seeing how many of your
entities you can load before it blows up.  You might also post your
model def and sample data, perhaps someone can help you spot the cause
of the memory blowups.
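
Something along these lines might get you started -- note the entity
"loader" below is just a stand-in for your real query, and the budget
is a made-up number, not the actual soft limit:

```python
import sys

MEMORY_BUDGET = 2 * 1024 * 1024  # made-up budget, far below the real soft limit

def fake_entity(i):
    # Stand-in for a datastore fetch; swap in your real model query here.
    return {'key': i, 'name': 'entity-%d' % i, 'tags': ['a', 'b', 'c']}

def rough_size(obj, seen=None):
    """Very rough recursive sys.getsizeof -- a proxy, not a real profiler."""
    if seen is None:
        seen = set()
    if id(obj) in seen:
        return 0
    seen.add(id(obj))
    size = sys.getsizeof(obj)
    if isinstance(obj, dict):
        size += sum(rough_size(k, seen) + rough_size(v, seen)
                    for k, v in obj.items())
    elif isinstance(obj, (list, tuple, set)):
        size += sum(rough_size(x, seen) for x in obj)
    return size

def max_loadable(budget=MEMORY_BUDGET):
    """Double the batch size until the estimated footprint exceeds the
    budget, then report the last size that still fit."""
    n = 1
    while True:
        batch = [fake_entity(i) for i in range(n)]
        if rough_size(batch) > budget:
            return n // 2
        n *= 2
```

If the number it reports is wildly lower than you'd expect from the
4KB average entity size, that points at per-entity overhead rather
than your own code.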


Robert




On Fri, Feb 3, 2012 at 03:41, Arun Shanker Prasad
<[email protected]> wrote:
> Hi all,
>
> Thank you for the response.
>
> @Incure,
>
> No, I was not able to find out why/where the memory leak was happening.
> I was unable to configure Apptrace for my application; it's missing
> some dependencies, and I did not get a chance to figure out which.
>
> I am guessing the issues might have been caused by the 'string' list
> properties in my model (although the number of items in each string
> list was low). I have had to refactor the page to get the data it
> needed differently, but this solution is not scalable, so I am still
> searching for a better one.
>
> @Robert,
>
> I am using Python. It truly is hard to find memory leaks in Python, I
> tried using Apptrace but was not able to get it to work.
>
> Functionally there is not much that my code does: it fetches all
> items of a particular Model (around 500 records), serializes them
> into JSON, and sends them back; the rest is done on the client.
>
> http://code.google.com/p/apptrace/wiki/UsingApptrace
>
> Thanks,
> Arun.
>
> On Feb 3, 11:18 am, Robert Kluin <[email protected]> wrote:
>> Hi Arun,
>>   You don't say if you're using Python or Java.  For Python, debugging
>> memory usage is hard at best; I'm not sure about Java.
>>
>>   In Python at least, every entity you load actually stores multiple
>> copies of the data, and it is decoded / encoded from protobufs many
>> times.  So the end result is that your memory usage will be higher
>> than you expect, possibly by quite a bit depending on your data.
>>
>>   It is also possible that you've got other issues in your code
>> leading to memory leaks.  That's harder to make general comments about
>> though.
>>
>> Robert
>>
>> On Tue, Dec 27, 2011 at 11:58, Arun Shanker Prasad
>> <[email protected]> wrote:
>> > Hi All,
>>
>> > Hope you had a wonderful holiday :)
>>
>> > I have been experiencing an issue for the past couple of weeks: I
>> > am getting soft memory exceeded errors and the instance is being
>> > terminated.  This only started occurring recently.
>>
>> > The pages experiencing this error are mostly reporting-type pages,
>> > i.e. pages that deal with a number of records that are fetched based
>> > on filters selected by the user and use django.core.paginator.Paginator
>> > to paginate over the result.  I have a maximum of 700 records that are
>> > fetched for these reports, according to datastore statistics in the
>> > admin console the average size of the entities being fetched are 4KB.
>>
>> > Average Size of Entity:
>> > 4 KBytes
>>
>> > Some of the properties are reference properties and I do use prefetch
>> > logic to load these references as well, but most of them are also
>> > small entities.
>>
>> > By my calculations 158MB should be enough to handle a list of ~40448
>> > records! I am not able to figure out why the request is using this
>> > much memory.
>>
>> > I have tried eliminating most of the usual suspects (large string
>> > concatenation and so on), even tried deleting the large lists after
>> > they are processed and calling garbage collection; nothing seems to
>> > work.  Next step is to try to set up Apptrace for App Engine.
>>
>> > Can anyone out there help me with this? Anyone else have any similar
>> > issues? How did you manage to sort it out?
>>
>> > C 2011-12-27 10:03:54.089
>> > Exceeded soft private memory limit with 157.648 MB after servicing 4
>> > requests total
>> > W 2011-12-27 10:03:54.090
>> > While handling this request, the process that handled this request was
>> > found to be using too much memory and was terminated. This is likely
>> > to cause a new process to be used for the next request to your
>> > application. If you see this message frequently, you may have a memory
>> > leak in your application.
>>
>> > --
>> > You received this message because you are subscribed to the Google Groups 
>> > "Google App Engine" group.
>> > To post to this group, send email to [email protected].
>> > To unsubscribe from this group, send email to 
>> > [email protected].
>> > For more options, visit this group at
>> > http://groups.google.com/group/google-appengine?hl=en.
>
