On 08/30/2010 01:23 PM, Jean-Baptiste BRIAUD -- Novlog wrote:
> 
> I replaced CMD_PYTHON in all places so I have :

What are "all places"? The only place I can think of is in the
generate.py script, right?! Are there any more?

> Seeing a variable CMD_PYTHON gave me the feeling Python could be called
> from Python creating subprocesses.

Yes, for creating exactly one process to invoke generator.py.
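As a hypothetical sketch (the real generate.py wrapper may well differ; the function name, the inline script, and the arguments here are all illustrative stand-ins), the delegation amounts to spawning exactly one child interpreter:

```python
import subprocess
import sys

def run_generator(args):
    """Spawn exactly one child interpreter for the "real" generator.

    sys.executable stands in for CMD_PYTHON; the inline print stands in
    for generator.py. One level of subprocess, no recursion, so the JIT
    in the child process should warm up normally.
    """
    result = subprocess.run(
        [sys.executable, "-c", "print('generator ran')"] + list(args),
        capture_output=True,
        text=True,
    )
    return result.returncode, result.stdout.strip()

code, out = run_generator([])
```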

> I'm not sure why using a project-local generate.py would have caused a
> JIT issue, because it then calls the "real" generate.py.

Me neither.

> So, one parent pypy process would have called only one pypy process
> (the "real" generate.py), and so the JIT should have worked for the
> child process.

Yes, that would be my expectation, too.

> A single level of subprocess calls should not have a huge impact on JIT
> performance; only several calls, or recursive ones, might.
> I don't have any clue why it is slower using pypy.

Me neither, but I will look into that.

>> What is your time line? You mention a run time of 1:58min for standard
>> Python (I presume this is with a filled cache). What would you consider
>> acceptable performance: 1:00, 0:30, 0:10, 0:01?!
>>
> 
> * In production: I want all possible optimizations done by the Python
> tool chain, in the least possible time, even though I'm ready to wait
> because of the benefit of the optimizations.

JBB, you're so funny. Of course you want everything in the least
possible amount of time (who doesn't ;-). But that doesn't answer my
question.

> This is not where the pain is, because we are not delivering to
> production very often, so when it happens, we can wait.

Wait - I thought you were concerned about your online builds?! Don't you
always say that our build-time is your production time?!

> Also, generate.py time can change depending on the application to build.
> So I don't have an acceptable performance time to give, and for some of
> our apps it takes longer than 1:58.
> => The idea is to improve what can be improved: multi-CPU, pypy.
> Reducing the time by an order of magnitude would be great; gaining a few
> seconds is not really interesting.

Ok, this is more tangible. But an order of magnitude would be 12 seconds
instead of 2 minutes, and I'm not sure this is achievable with the
current tool chain.

> 
> * On dev machines: performance is critical in the dev environment.
> That's where the pain is currently.
> We launch very often and want to see the result very quickly.
> For various reasons, and maybe because of the way we use qooxdoo
> automatically, we are creating a new application each time (even for
> production).

That's what I meant.

> That's one reason why generate source doesn't save time for us.
> We're using generate source when we need to debug the JS source code
> produced by generate (the generate build output is not readable).
> We deploy locally on a developer machine, and we don't need all the
> optimizations provided by the Python tool chain.
> I need a deployed version here, not a file:// one, because we are using
> Java as a backend.
> => So the idea is to use a precompiled qooxdoo version to avoid
> unnecessary calculation. Of course, here too the multi-CPU and pypy
> improvements could apply.

Yes, I thought that would be your main use case for the need for speed.
Again, I don't think a precompiled qooxdoo would give you much over a
well saturated cache.

>> *Within* a single generator run, there is an experimental feature
>> already implemented (but undocumented) that lets you distribute the
>> compilation of classes to multiple processes. You just have to add a
>> specific config key. But it does not work properly under Windows. And of
>> course you have to have multiple cores or processors, to see any benefit
>> from using it :-). If you are building on non-Windows platforms you are
>> invited to try that out. Let me know and I'll provide details.
>>
> 
> Very useful for us, because we are using Mac and Linux (Ubuntu server
> and desktop), all 64-bit in case it matters.
> Sometimes we're using Windows, but it's still very interesting to reduce
> the time on the mainly used platforms.

I haven't looked into it for quite some time, but you can try the
following: In your config.json's build job, add the key

  run-time/num-processes : <number_of_cores>

But mind that you have to use trunk/r23198, and you must *not* use the
private optimization (which doesn't hurt your app performance anyway).
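For illustration, a build job carrying this key might look like the following (the job name and the exact nesting are assumptions on my part; adjust to your own config.json):

```json
{
  "jobs" : {
    "build" : {
      "run-time" : {
        "num-processes" : 4
      }
    }
  }
}
```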

>> I presume now that in production you are using a single cache for all
>> the build runs, which also stays intact (is not deleted) across those
>> runs.
> 
> The development environment is the main problem. That's why I thought of
> a precompiled JS with all qooxdoo classes.

I thought the development environment was *not* the problem?! But again,
a well-filled cache should serve your developers all the same.

> I'm not sure how we benefit from the cache, since we are starting from
> application creation each time.

If I got you right, you are creating new qooxdoo applications online,
that's what you mean right?! They can nevertheless use the same cache,
and without deleting or clearing the cache when a new application is
added. This is crucial!


> I guess it depends on what key is used to find information in the cache.
> If it is based on the application name, it is OK, but if there is some
> unique ID created per application, it won't work for us.

Most information in the cache is unique to a class file. So if 100
applications use the same cache, and are built against the same qooxdoo
SDK, the qooxdoo classes will only go into the cache once, and only
class files specific to each application will be added during build
runs; the framework classes can be re-used over and over again. So you
can safely build multiple applications against the same cache, if they
use the same qooxdoo SDK. - If you use a dedicated cache for each
application, each application has to populate the cache with the qooxdoo
classes, which means the same class information has to be calculated and
stored 100 times, once for each cache.
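As a sketch, sharing one cache across applications is usually just a matter of pointing every application's config at the same cache directory (the path below and the exact key layout are assumptions; check the config reference of your SDK version):

```json
{
  "cache" : {
    "compile" : "/var/tmp/qx-shared-cache"
  }
}
```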

> We never clean the cache.

But do all applications use a common cache?

> 
> I was talking about using the generator in parallel; this is because in
> our test env (using Selenium) we are testing our software, which itself
> builds qooxdoo applications.
> To speed up the tests, some of them run in parallel, and so does the qx
> generator.

That's no problem. But again I'm puzzled, because I thought your main
concern is the speed to create your custom apps online, not some test
scenario?!

> Apparently, it is safe to use the same cache for several generator.py
> instances that run in parallel. We didn't experience issues with that,
> and it works fine to compile several apps in parallel.

Fine.

T.

_______________________________________________
qooxdoo-devel mailing list
qooxdoo-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/qooxdoo-devel