jvdias <jason.vas.d...@gmail.com> added the comment:

To Build:

$ gcc -fPIC -shared -o psempy.so psempy.c -I/usr/include/python2.6
-L/usr/lib/python2.6 -lpython2.6 && mv psempy.so psem.so
$ dd if=/dev/urandom of=app1_20090407.01.log bs=1000000 count=1
$ python
>>> import sys, os, re, datetime, psem, psem_example
>>> psem_example.compress_log( "app1", "2009", "04", "07", "01", "bzip",
"app1_20090407.01.log");
0
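
For reference, an equivalent distutils build would look roughly like
this (a sketch, assuming the extension source is psempy.c and the
module should be importable as "psem"):

# setup.py - distutils alternative to the manual gcc line above
from distutils.core import setup, Extension

setup(
    name="psem",
    version="0.1",
    # building the C source directly as the "psem" extension module
    # avoids the separate "mv psempy.so psem.so" step
    ext_modules=[Extension("psem", sources=["psempy.c"])],
)

Built with: python setup.py build_ext --inplace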

An example program using psem.so that compresses logs named
*${YEAR}-${MONTH}-${DAY}* inside a psem.*-based parallel for loop.
On a 32-processor 2GHz SPARC, the time taken to compress 32 1MB files
using the psem parallel-for (sized for 32 CPUs) was of the order of
the time taken to compress a single 1MB file - i.e. roughly 1/32nd of
the time taken to compress the 32 files serially.
The number of processes was made secure and "run-away" safe ONLY
because direct access was available to the semop(2), semget(2), and
semctl(2) system calls - roughly as in the sketch below.
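
A minimal sketch of the pattern (the psem function names, argument
signatures, and IPC constants here are assumptions for illustration,
not necessarily the module's actual API):

import os
import psem   # hypothetical binding to semget(2)/semop(2)/semctl(2)

NCPUS = 32         # one worker slot per CPU
IPC_PRIVATE = 0    # assumed constant, as in <sys/ipc.h>
SETVAL = 16        # assumed constant, as in <sys/sem.h>

def parallel_for(items, work):
    # One counting semaphore, initialised to the number of CPUs.
    semid = psem.semget(IPC_PRIVATE, 1, 0o600)
    psem.semctl(semid, 0, SETVAL, NCPUS)
    children = []
    for item in items:
        # P(): block until a worker slot is free, so at most NCPUS
        # children run at once.  With SEM_UNDO the kernel gives the
        # slot back even if a worker dies - the "run-away" safety.
        psem.semop(semid, [(0, -1, 0)])
        pid = os.fork()
        if pid == 0:
            try:
                work(item)               # e.g. compress one log file
            finally:
                psem.semop(semid, [(0, +1, 0)])   # V(): release the slot
                os._exit(0)
        children.append(pid)
    for pid in children:
        os.waitpid(pid, 0)               # reap all workers
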
Please can Python put this API into sys, or I will create a Python
add-on module to do so - let me know whether this is a good idea or
not. Thank you, Jason.

----------
Added file: http://bugs.python.org/file13656/psem_example.py

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue5725>
_______________________________________