Javier Ruere wrote:
>
> On 16/02/2009, at 19:09, [email protected] wrote:
>> On Monday 16 February 2009 22:41:02 Russ Ryba wrote:
>>> On Feb 15, 11:05 am, [email protected] wrote:
>>>> just a quick note: "yield" generators will hold the whole stack
>>>> alive (for that long time), so be careful what you keep up there -
>>>> with many parallel requests the memory may get eaten.
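(A minimal sketch of what "holding up" means here - a suspended generator's frame pins all of its locals until the generator is exhausted or closed. The 10 MB payload is hypothetical, just to make the cost visible:)

```python
def stream():
    big = "x" * (10 * 1024 * 1024)  # hypothetical 10 MB local
    for i in range(4):
        yield big[i]  # while suspended here, 'big' stays referenced

g = stream()
next(g)                               # advance once; 'big' is now allocated
assert "big" in g.gi_frame.f_locals   # the live frame pins the local
g.close()                             # closing releases the frame
assert g.gi_frame is None             # ...and with it the 10 MB string
```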
>>>>
>>>> On Sunday 15 February 2009 17:33:20 Russ Ryba wrote:
>>>>> I just added a tutorial to the cookbook showing how to use
>>>>> yield to serve out large content or perhaps do long polling.
>>>>> The simple example uses time.sleep to simulate some long
>>>>> process. You'll find that it does sleep correctly and delay
>>>>> spitting out content if you use telnet. It is flushing the
>>>>> content.
>>>>>
>>>>> If you have problems getting it to show up you need to play
>>>>> with the web headers. Transfer-Encoding "chunked". If your
>>>>> browser doesn't support it then it may buffer the content and
>>>>> instead of incremental download you'll see a large delay then
>>>>> the whole page.
>>>>>
>>>>> I've only tested this with the CherryPy server. I don't know
>>>>> if it works with others.
>>>>>
>>>>> http://webpy.org/cookbook/streaming_large_files
>>>>>
>>>>> Comments and feedback appreciated.
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Russ Ryba
>>> I'll have to read up on generators. I thought they reduced
>>> resource usage by saving state without resorting to threads. I
>>> admit I don't really know what they are doing behind the scenes
>>> yet, and I'm only just figuring out how to use them.
>>>
>>> Is what you said about holding up the stack limited to CPython, or
>>> would there be similar problems in stackless python as well?
>> no idea. i'm not even sure it's the whole stack, but it surely holds the
>> namespace (locals + globals), whatever that implies for lifespan, things
>> not dying, etc.
>> i've used async stuff with medusa back then, with generators; it was
>> pretty good.
>
> (Sorry about the empty message!!)
>
> The generator of course holds all its variables, but keep in mind that
> it will not live much longer than the regular function that would
> generate a normal reply. It also has the added benefits of not
> creating a large buffer for large replies and of lower latency for all
> kinds of replies.
>
> What I dislike is the manual setting of the Transfer-Encoding header.
> Couldn't the framework set this header if the controller returns a
> generator?
> The framework could even fix some inconvenient behaviour from the
> controller, like returning very small chunks, by buffering the return
> values until they reach a minimum size.
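(That buffering idea can be sketched as a wrapper generator - a hypothetical helper, not anything web.py ships; `min_size` is an assumed tuning knob:)

```python
def buffer_chunks(gen, min_size=4096):
    # coalesce small yields until at least min_size bytes accumulate
    buf, buffered = [], 0
    for piece in gen:
        buf.append(piece)
        buffered += len(piece)
        if buffered >= min_size:
            yield "".join(buf)
            buf, buffered = [], 0
    if buf:
        yield "".join(buf)  # flush whatever remains at the end

# five 2-byte yields become two chunks of roughly min_size bytes
chunks = list(buffer_chunks(("ab" for _ in range(5)), min_size=5))
# chunks == ["ababab", "abab"]
```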
>
> Is there any problem with automatically setting the header for
> generators?
>
> Javier
>
Why do you need that header...?
This seems to work just fine:
#!/usr/bin/env python
import time
import web

urls = (
    '/(.*)', 'hello'
)
app = web.application(urls, globals())

class hello:
    def GET(self, name):
        for i in xrange(4):
            yield "%d\r\n" % i
            time.sleep(0.5)

if __name__ == "__main__":
    app.run()
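For what it's worth, the header matters when the server can't know the body length up front: with an HTTP/1.1 keep-alive connection it must either send Content-Length or frame the body as chunks. A minimal sketch of the chunked framing itself - illustrative only, since the server (CherryPy here) normally does this for you:

```python
def chunked(pieces):
    # frame each piece per HTTP/1.1 chunked transfer coding:
    # hex length, CRLF, payload, CRLF; a zero-length chunk terminates
    for piece in pieces:
        yield "%x\r\n%s\r\n" % (len(piece), piece)
    yield "0\r\n\r\n"

wire = "".join(chunked(["0\r\n", "1\r\n"]))
# each 3-byte piece is framed as "3\r\n<piece>\r\n"
```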