Facundo Batista wrote:
> 2008/3/20, Andrew McNabb <[EMAIL PROTECTED]>:
>
>> Since we officially encourage people to spawn processes instead of
>> threads, I think that this would be a great idea. The processing module
>> has a similar API to threading. It's easy to use, works well, and most
>>
Even I, as a strong advocate for its inclusion, think I should finish
the PEP and outline all of the questions/issues that may come out of
it.
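The processing package (later added to the stdlib as multiprocessing in Python 2.6) does mirror threading's API closely; a minimal sketch using the modern multiprocessing name:

```python
# processing (now stdlib multiprocessing) deliberately mirrors the
# threading API: Process is used just like threading.Thread, with a
# Queue carrying results back across the process boundary.
import multiprocessing

def square(n, out):
    """Worker body; runs in a child process."""
    out.put(n * n)

if __name__ == "__main__":
    out = multiprocessing.Queue()
    p = multiprocessing.Process(target=square, args=(7, out))
    p.start()
    result = out.get()  # read before join() to avoid queue-flush deadlocks
    p.join()
    print(result)  # 49
```

The point of the shared API is that code written against threading.Thread can often switch to processes by changing little more than the import.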
FYI: I shot an email to stdlib-sig about the fact that I am proposing
the inclusion of the pyProcessing module into the stdlib. Comments and
thoughts regarding that would be welcome. I've got a rough outline of
the PEP, but I need to spend more time with the code examples.
-jesse
Hmmm, sorry if I'm missing something obvious, but, if the occasional
background computations are sufficiently heavy -- why not fork, do
said computations in the child process, and return the results via any
of the various available IPC approaches? I've recently (at PyCon,
mostly) been playing devil's advocate
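The fork-and-IPC pattern described here can be sketched as follows (POSIX only; the helper name and pickle-over-a-pipe transport are illustrative choices, not anything from the thread):

```python
# Sketch: fork, run the heavy computation in the child process, and
# return the result to the parent over a pipe (one of several possible
# IPC mechanisms). Assumes a POSIX system with os.fork().
import os
import pickle

def compute_in_child(func, *args):
    """Run func(*args) in a forked child; return its result to the parent."""
    read_fd, write_fd = os.pipe()
    pid = os.fork()
    if pid == 0:
        # Child: compute, pickle the result into the pipe, exit immediately
        # (os._exit avoids re-flushing the parent's inherited buffers).
        os.close(read_fd)
        with os.fdopen(write_fd, "wb") as w:
            pickle.dump(func(*args), w)
        os._exit(0)
    # Parent: read the child's result, then reap the child.
    os.close(write_fd)
    with os.fdopen(read_fd, "rb") as r:
        result = pickle.load(r)
    os.waitpid(pid, 0)
    return result

if __name__ == "__main__":
    print(compute_in_child(sum, range(10)))  # 45
```

A real version would also need to handle exceptions raised in the child, which here would surface in the parent as an EOFError from the empty pipe.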
Adam Olsen <[EMAIL PROTECTED]> writes:
> So you want responsiveness when idle but throughput when busy?
Exactly ;)
> Are those calculations primarily python code, or does a C library do
> the grunt work? If it's a C library you shouldn't be affected by
> safethread's increased overhead.
>
It's Python
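The C-library-vs-Python-code distinction matters because CPython releases the GIL around many blocking C-level calls but never inside pure-Python bytecode. A small sketch of the effect, using time.sleep as a stand-in for C-level grunt work:

```python
# Threads blocked in a C call (here time.sleep) release the GIL and
# overlap; pure-Python computation would hold the GIL and serialize.
import threading
import time

def c_level_wait():
    time.sleep(0.2)  # GIL is released while sleeping in C

start = time.monotonic()
threads = [threading.Thread(target=c_level_wait) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start
print(f"4 overlapping 0.2 s waits took {elapsed:.2f} s")  # ~0.2, not 0.8
```

Since Stefan's workload is Python, not C, the four waits analogy does not apply to his case; that is exactly why the GIL overhead is the bottleneck for him.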
Adam Olsen <[EMAIL PROTECTED]> writes:
> Can you try with a call to sched_yield(), rather than nanosleep()? It
> should have the same benefit but without as much performance hit.
>
> If it works, but is still too much hit, try tuning the checkinterval
> to see if you can find an acceptable throughput/responsiveness balance
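The "checkinterval" knob Adam mentions was sys.setcheckinterval(n) in the Python 2 line of that era (bytecodes executed between GIL release checks); Python 3.2+ replaced it with sys.setswitchinterval(seconds). A sketch of the tuning idea in the modern API:

```python
# Trade latency against throughput by tuning how often the interpreter
# offers to switch threads (sys.setswitchinterval, Python 3.2+; the
# 2.x-era equivalent was sys.setcheckinterval).
import sys

default = sys.getswitchinterval()  # 0.005 s by default
sys.setswitchinterval(0.0005)      # switch more often: better responsiveness
# ... latency-sensitive threaded section would run here ...
sys.setswitchinterval(0.05)        # switch less often: better throughput
# ... compute-heavy threaded section would run here ...
sys.setswitchinterval(default)     # restore the default
print(sys.getswitchinterval())
```

A shorter interval makes an idle-but-waiting thread wake sooner; a longer one reduces the GIL hand-off overhead Stefan is measuring, which is exactly the responsiveness/throughput trade-off under discussion.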
The company I work for has over the last couple of years created an
application server for use in most of our customer projects. It embeds Python,
and most project code is written in Python by now. It is quite resource-hungry
(several GB of RAM, MySQL databases of 50-100 GB). And of course it is
multithreaded