Wouldn't we still have the problem of making Wx integrate with ZeroMQ?

Even if the message arrives in the parent class, how does the Wx event
loop get triggered?
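The usual answer is a wakeup channel: a plain background thread blocks on the message socket and, when something arrives, pokes the GUI thread through its thread-safe entry point (Wx::PostEvent in wxPerl). Purely as an illustration of that pattern, here is a minimal stdlib-only Python sketch, with socketpair() standing in for both the 0MQ socket and the wakeup pipe (no real Wx or ZeroMQ involved):

```python
# Illustrative only: one socketpair() stands in for the 0MQ socket, a
# second is the "wakeup" channel that a select()-based event loop would
# watch.  In real wxPerl code the wakeup step would be Wx::PostEvent,
# the thread-safe way to hand an event to the GUI thread.
import queue
import select
import socket
import threading

inbox = queue.Queue()                 # messages waiting for the GUI thread
wake_r, wake_w = socket.socketpair()  # self-pipe wakeup channel
msg_r, msg_w = socket.socketpair()    # stand-in for the 0MQ socket

def listener():
    # Blocks on the message channel; never touches the GUI directly.
    while True:
        data = msg_r.recv(4096)
        if not data:
            break
        inbox.put(data.decode())
        wake_w.send(b"\x01")          # poke the event loop

threading.Thread(target=listener, daemon=True).start()

msg_w.send(b"task finished")          # simulate an upstream message
msg_w.close()

# Minimal stand-in for the GUI event loop: wait until woken, then drain.
select.select([wake_r], [], [], 5)
wake_r.recv(1)
result = inbox.get()
print(result)                         # -> task finished
```

The same shape should hold whatever the transport is: only the listener thread touches the socket, and the GUI thread only ever sees events posted to it.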

Adam K

On 16 June 2010 17:09, Steffen Mueller <smuel...@cpan.org> wrote:
> Hi Adam,
>
> not much time to reply. Not much time to get involved soon. Quick reply.
>
> Adam Kennedy wrote:
>> If anyone wants to investigate some kind of custom Wx loader, it might
>> let us save as much as 5-10 meg per thread, which would mean a 15-30 meg
>> memory reduction in Padre overall, and double or triple the thread
>> spawn speed (which might get the threads fast enough to remove the need
>> to pre-spawn a bunch of threads, removing another third of a second
>> from the startup time).
>>
>> It's a job there if anyone would like it.
>
> Once somebody figures out a general idea of how to do that, I'm confident
> Mattia would be willing to accept patches or ideas. He's been remarkably
> open to my suggestion of splitting out Wx::Event.
>
>> Alternatively, the upstream and downstream communication channels are
>> potentially pluggable under the new model, and the tasks are designed
>> to support moving up or down via any channel that can handle a string.
>
> I sent a notice to Adam about my new wrapper of the 0MQ message passing
> library. It works much like Unix sockets, and supports several
> transport layers, including an in-process (inproc) one that is visible
> only from within the application itself.
>
> The memory hit of loading the module is currently under 1 MB. Maybe it could be
> reduced. If I had the time to flesh out the docs, write an
> Alien::ZeroMQ, and do more testing, this might be a good candidate. The
> code is on github. Search for ZeroMQ.
>
>> So one alternative would be for someone to rewrite the upstream
>> channel to use unix sockets or shared memory and signals, or some
>> other method that doesn't require Wx at all.
>
> See above.
>
>> This would allow us to evade Wx entirely when we spawn the master
>> thread, which would save almost everything other than the Perl core,
>> and the core platform of Padre::Constant, File::HomeDir, File::Spec
>> and a few other minor friends.
>>
>> I think either of these options represents the next logical step in
>> making Padre threads suck faster.
>
> The ZeroMQ communication is quite fast, too. The slowest bit would be
> serialization with Storable. My tests of the Perl wrapper showed a
> latency of ~40 µs and a core-to-core throughput of 9 Gbit/s on a laptop.
>
> This 0MQ wrapper thing is a Saturday afternoon experiment of mine, so YMMV.
>
>> If you are interested in some performance hacking, either should be
>> interesting.
>>
>> As an aside, if you can solve the non-TCP socket listener problem
>> cross-platform, we can also upgrade the single-instance server to use
>> the same code, which would let the Debian people enable it by default
>> and allow single instances to automatically handle multiple Padres
>> running from multiple places.
>
> This should be doable with 0MQ.
>
> Cheers,
> Steffen
> _______________________________________________
> Padre-dev mailing list
> Padre-dev@perlide.org
> http://mail.perlide.org/mailman/listinfo/padre-dev
>