Re: repeat items in a list

2016-03-28 Thread larudwer

On 27.03.2016 at 13:13, Antonio Caminero Garcia wrote:

On Sunday, March 27, 2016 at 11:52:22 AM UTC+2, larudwer wrote:

how about

>>> sorted(["a", "b"]*3)
['a', 'a', 'a', 'b', 'b', 'b']


That's cooler, though less efficient, and it does not maintain the original order.
In case such order were important, you could proceed as follows.

If the elements are unique, this would work:

sorted(sequence*nrep, key=sequence.index)

Otherwise you'd need a more complex key function (maybe a method of a class with
a static variable that tracks the number of times the method is called, and with
a "dynamic index" functionality that acts accordingly, i.e. the i-th nrep-group
of value v), and IMO it is not worth it.

In case you want to maintain the original order:

>>> ["a","b"]*3
['a', 'b', 'a', 'b', 'a', 'b']

is completely sufficient.
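
And if you want the repeats grouped while still preserving the first-occurrence
order of the elements, a plain list comprehension avoids the key-function
machinery altogether and handles duplicate elements as well (a minimal sketch):

>>> seq, nrep = ["a", "b"], 3
>>> [x for x in seq for _ in range(nrep)]
['a', 'a', 'a', 'b', 'b', 'b']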
--
https://mail.python.org/mailman/listinfo/python-list


Re: repeat items in a list

2016-03-27 Thread larudwer

how about

>>> sorted(["a", "b"]*3)
['a', 'a', 'a', 'b', 'b', 'b']


--
https://mail.python.org/mailman/listinfo/python-list


Re: Multiprocessing problem

2010-03-03 Thread larudwer
Hello Matt

I think the problem is here:

for n in xrange(10):
    outqueue.put(str(n))          # <-- fill the queue with 10 elements
    try:
        r = inqueue.get_nowait()  # <-- queue is still empty, because the
                                  #     processes need some time to start
        results.append(r)
    except Empty:
        pass                      # <-- causing 10 passes



print "-"
for task in tasks:
    outqueue.put(None)            # <-- put even more data in the queue
...
# In the meantime the processes start to run and try to put data
# into the output queue. However, this queue might fill up and block
# all processes that try to write data into the already filled-up queue.

print "joining"
for task in tasks:
    task.join()                   # <-- can never succeed, because the processes
                                  #     are waiting for someone to read the result queue
print "joined"

This example works:

from Queue import Empty, Full
from multiprocessing import Queue, Process
from base64 import b64encode
import time, random

class Worker(Process):
    def __init__(self, inqueue, outqueue):
        Process.__init__(self)
        self.inqueue = inqueue
        self.outqueue = outqueue

    def run(self):
        inqueue = self.inqueue
        outqueue = self.outqueue
        c = 0
        while True:
            arg = inqueue.get()
            if arg is None: break
            c += 1
            b = b64encode(arg)
            outqueue.put(b)

        # Clean-up code goes here
        outqueue.put(c)

class Supervisor(object):
    def __init__(self):
        pass

    def go(self):
        outqueue = Queue()
        inqueue = Queue()
        tasks = [Worker(outqueue, inqueue) for _ in xrange(4)]
        for task in tasks:
            task.start()

        results = []
        print "*"
        for n in xrange(10):
            outqueue.put(str(n))

        print "-"
        for task in tasks:
            outqueue.put(None)
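        # Drain the result queue BEFORE joining the workers: if the output
        # queue filled up, the workers would block in put() and join()
        # below would deadlock (this was the original problem).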

        print "emptying queue"
        try:
            while True:
                r = inqueue.get_nowait()
                results.append(r)
        except Empty:
            pass
        print "done"
        print len(results)

        print "joining"
        for task in tasks:
            task.join()
        print "joined"

if __name__ == "__main__":
    s = Supervisor()
    s.go()



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing deadlock

2009-10-24 Thread larudwer

Brian Quinlan br...@sweetapp.com wrote in message 
news:mailman.1895.1256264717.2807.python-l...@python.org...

 Any ideas why this is happening?

 Cheers,
 Brian

IMHO your code is buggy; you are running into a typical race condition.

Consider the following part of your code:

def _make_some_processes(q):
    processes = []
    for _ in range(10):
        p = multiprocessing.Process(target=_process_worker, args=(q,))
        p.start()
        processes.append(p)
    return processes


p.start() may start a process right now, in 5 seconds, or a week later, 
depending on how the scheduler of your OS works.

Since all your processes are working on the same queue, it is -- very -- 
likely that the first process got started, processed all the input and 
finished while the others haven't even started yet. Thus your first 
process exits, and your main process also exits, because the queue is empty 
now ;).

while not q.empty():
    pass

If you were using p.join(), your main process would terminate when the last 
process terminates.
That's a different exit condition!
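
A minimal sketch of that exit condition, reusing the _make_some_processes() 
helper quoted above:

processes = _make_some_processes(q)
for p in processes:
    # join() blocks until this worker has really finished,
    # regardless of when the OS scheduler actually started it
    p.join()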

When the main process terminates, all the garbage-collection fun happens. I 
hope you don't wonder that your Queue and the underlying pipe got closed 
and collected!

Well, now that all the work has been done, your OS may remember that someone, 
some time in the past, told it to start a process.

def _process_worker(q):
    while True:
        try:
            something = q.get(block=True, timeout=0.1)
        except queue.Empty:
            return
        else:
            print('Grabbed item from queue:', something)

The line

    something = q.get(block=True, timeout=0.1)

should cause some kind of runtime error, because q has already been collected 
at that time.
Depending on your luck and the OS, this bug may be handled or not. Obviously 
you are not lucky on OSX ;)

That's what I think happens.


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: weak reference callback

2009-08-30 Thread larudwer

Paul Pogonyshev pogonys...@gmx.net wrote in message 
news:mailman.658.1251577954.2854.python-l...@python.org...
 Hi,

 Is weak reference callback called immediately after the referenced
 object is deleted or at arbitrary point in time after that?  I.e. is
 it possible to see a dead reference before the callback is called?

 More formally, will this ever raise?

callback_called = False

def note_deletion(ref):
    global callback_called   # needed so the assignment is visible outside
    callback_called = True

ref = weakref.ref(x, note_deletion)
if ref() is None and not callback_called:
    raise RuntimeError("reference is dead, yet callback hasn't been called yet")

The Manual says:

    If callback is provided and not None, and the returned weakref object is
    still alive, the callback will be called when the object is about to be
    finalized; the weak reference object will be passed as the only parameter
    to the callback; the referent will no longer be available.

This says that the object is deleted first and the callback function is called 
after that. Since 'after that' IS an arbitrary point in time, your example 
SHOULD raise.

I think it is safe to assume that this will never raise in a single-threaded 
CPython application, because the GIL, the reference-counting scheme etc. will 
prevent it.

However, this is an implementation-specific detail of the CPython runtime, 
and it is not safe to rely on this behavior.
It may be completely different in a multi-threaded environment or in any other 
implementation of Python.
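
A minimal sketch of the single-threaded CPython behaviour described above 
(again, the timing is an implementation detail, so do not rely on it):

import weakref

class X(object):
    pass

def note_deletion(ref):
    # the referent is already unavailable when the callback runs
    print("callback fired, ref() is %r" % (ref(),))

x = X()
r = weakref.ref(x, note_deletion)
del x            # in CPython the refcount drops to zero right here,
                 # so the callback fires before the next statement runs
assert r() is None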

-- 
http://mail.python.org/mailman/listinfo/python-list


psyco V2 beta2 benchmark

2009-07-04 Thread larudwer
Just out of curiosity I've downloaded the latest version, Psyco V2 Beta 2, 
and ran the benchmarks against the old version, Psyco 1.6.

Because it might be of common interest, I am posting the results here.

My machine is a Pentium D 3.2 GHz running Windows XP SP3 with Python 2.6.2.
Psyco V2 was built with gcc 4.3.3-tdm-1 (mingw32), with optimisation flags 
changed to -O3.


Benchmark                                  | avg. Base time | psyco 1.6 time | psyco 2.0 time | ratio  | possible error +-
time_anyall all_bool_genexp                | 2.270          | 2.250          | 2.420          | 0.930  |  8.3 %
time_anyall all_bool_listcomp              | 3.450          | 1.900          | 1.910          | 0.995  |  0.0 %
time_anyall all_genexp                     | 1.970          | 1.940          | 2.160          | 0.898  |  9.6 %
time_anyall all_listcomp                   | 3.485          | 1.660          | 1.660          | 1.000  |  1.4 %
time_anyall all_loop                       | 0.665          | 0.090          | 0.090          | 1.000  |  4.4 %
time_anyall any_bool_genexp                | 2.215          | 2.130          | 2.340          | 0.910  | 10.0 %
time_anyall any_bool_listcomp              | 3.620          | 1.930          | 1.940          | 0.995  |  9.0 %
time_anyall any_genexp                     | 1.985          | 1.920          | 2.180          | 0.881  | 10.1 %
time_anyall any_listcomp                   | 3.360          | 1.680          | 1.680          | 1.000  |  8.0 %
time_anyall any_loop                       | 0.660          | 0.090          | 0.090          | 1.000  |  3.0 %
time_builtins chr(i)                       | 2.420          | 0.010          | 0.010          | 1.000  |  0.0 %
time_builtins hash(i)                      | 1.280          | 0.370          | 0.080          | 4.625  |  8.1 %
time_builtins int(round(f))                | 2.635          | 1.510          | 1.120          | 1.348  |  0.4 %
time_builtins min                          | 0.535          | 0.520          | 0.120          | 4.333  |  1.9 %
time_builtins min_kw                       | 4.430          | 4.400          | 0.160          | 27.500 |  0.5 %
time_builtins ord(i)                       | 0.320          | 0.000          | 0.000          | 1.000  |  6.5 %
time_builtins pow                          | 0.345          | 0.230          | 0.150          | 1.533  |  2.9 %
time_builtins reduce                       | 0.735          | 0.710          | 0.020          | 35.498 |  1.4 %
time_builtins round(f)                     | 1.720          | 0.890          | 0.400          | 2.225  |  1.2 %
time_builtins sums                         | 0.180          | 0.180          | 0.100          | 1.800  |  0.0 %
time_fib matrix                            | 0.425          | 0.360          | 0.360          | 1.000  |  2.3 %
time_fib recursive                         | 0.000          | 0.000          | 0.000          | 1.000  |  0.0 %
time_fib takahashi                         | 0.410          | 0.320          | 0.330          | 0.970  |  0.0 %
time_generators call next just many times  | 0.900          | 0.630          | 0.970          | 0.649  |  4.3 %
time_generators iterate just many times    | 0.660          | 0.550          | 0.950          | 0.579  |  3.1 %
time_generators send and loop 1000         | 2.805          | 2.540          | 0.060          | 42.333 |  9.8 %
time_generators send call loop 1000        | 2.505          | 2.940          | 0.060          | 48.999 | 10.9 %
time_generators send just many times       | 1.280          | 0.590          | 0.980          | 0.602  |  3.1 %
time_iter iter                             | 1.490          | 0.590          | 0.440          | 1.341  |  5.5 %
time_math floats                           | 2.910          | 1.500          | 1.630          | 0.920  |  0.7 %
time_properties method_get                 | 0.935          | 0.120          | 0.130          | 0.923  |  1.1 %
time_properties method_set                 | 1.005          | 0.170          | 0.180          | 0.944  |  1.0 %
time_properties property_get               | 0.960          | 0.740          | 0.100          | 7.400  |  2.1 %
time_properties property_set               | 1.020          | 0.920          | 0.930          | 0.989  |  0.0 %
time_properties pyproperty_get             | 1.535          | 1.310          | 0.140          | 9.357  |  0.7 %
time_properties pyproperty_set             | 1.030          | 0.920          | 0.930          | 0.989  |  2.0 %
time_subdist subdist(i)                    | 3.665          | 1.640          | 6.140          | 0.267  |  0.8 %
time_sums rounding                         | 0.800          | 0.790          | 0.810          | 0.975  |  2.5 %


Running new timings with 
C:\Programme\GNU\python26\lib\site-packages\psyco\_psyco.pyd, Sat Jul 04 
12:09:07 2009:

time_anyall: all_bool_genexp      plain: 2.36   psyco: 2.42   ratio: 0.97
time_anyall: all_bool_listcomp    plain: 3.45   psyco: 1.91   ratio: 1.80
time_anyall: all_genexp           plain: 2.06   psyco: 2.16   ratio: 0.95
time_anyall: all_listcomp         plain: 3.51   psyco: 1.66   ratio: 2.11

Re: Problem with multithreading

2009-06-25 Thread larudwer

Jeffrey Barish jeff_bar...@earthlink.net wrote in message 
news:mailman.2091.1245902997.8015.python-l...@python.org...
 Jeffrey Barish wrote:

 I have a program that uses multithreading to monitor two loops.  When
 something happens in loop1, it sends a message to loop2 to have it 
 execute
 a command.  loop2 might have to return a result.  If it does, it puts the
 result in a queue.  loop1, meanwhile, would have blocked waiting for
 something to appear in the queue.  The program works for a while, but
 eventually freezes.  I know that freezing is a sign of deadlock. 
 However,
 I put in print statements to localize the problem and discovered 
 something
 weird.  The freeze always occurs at a point in the code with the 
 following
 statements:

 print "about to try"
 try:
     print "in try"
     # do something

 I get "about to try", but not "in try".  Is this observation consistent with
 the deadlock theory?  If not, what could be making the program freeze at
 the try statement?  I wrote a test program using the same techniques to
 illustrate the problem, but the test program works perfectly.  I could
 post it, though, if it would help to understand what I am doing -- and
 what might be wrong in the real program.

 As I ponder this problem, I am beginning to believe that the problem is 
 not
 related to multithreading.  If the problem were due to a collision between
 the two threads then timing would matter, yet I find that the program
 always freezes at exactly the same statement (which executes perfectly
 hundreds of times before the freeze).  Moreover, the test program that I
 wrote to test the multithreading implementation works perfectly. And
 finally, there is nothing going on related to multithreading at this point
 in the code.  Why else might the program freeze at a try statement?
 -- 
 Jeffrey Barish


If you have one thread sleeping, you need another running thread to wake the 
sleeping thread up.
If the running thread terminates unexpectedly, the other thread will sleep 
forever.

Though, since there is a try statement in your example code and the failure 
always happens there,
there is a chance that some unexpected exception was thrown and caught 
somewhere else in your program.
If the program was terminated, the last print might also have been lost in 
some I/O buffer.
There is no guarantee that the print statement really wasn't executed.
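
A minimal sketch of how to rule out the buffering explanation (assuming 
Python 2, as in the original code): flush stdout after each diagnostic print, 
or run the interpreter with python -u, so the messages cannot linger in a 
buffer if the process dies.

import sys

print "about to try"
sys.stdout.flush()       # force the message out before anything can go wrong
try:
    print "in try"
    sys.stdout.flush()
    # ... do something ...
except Exception, e:
    print "unexpected exception: %r" % (e,)
    sys.stdout.flush()
    raise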

Think about things like:

    exception Queue.Empty
        Exception raised when non-blocking get() (or get_nowait()) is called
        on a Queue object which is empty.

    exception Queue.Full
        Exception raised when non-blocking put() (or put_nowait()) is called
        on a Queue object which is full.
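
For instance, here is a minimal (purely illustrative) sketch of how such an 
exception can vanish into an unrelated, over-broad handler:

from Queue import Queue

q = Queue()
try:
    item = q.get_nowait()   # raises Queue.Empty immediately on an empty queue
except Exception:           # an over-broad handler swallows Queue.Empty too,
    item = None             # and the code silently continues with no data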




-- 
http://mail.python.org/mailman/listinfo/python-list