Re: print small DataFrame to STDOUT and read it back into dataframe

2020-04-06 Thread Luca

On 4/6/2020 8:51 PM, Reto wrote:

out = df.to_csv(None)
new = pd.read_csv(io.StringIO(out), index_col=0)


Thank you, brother. It works

--
https://mail.python.org/mailman/listinfo/python-list


Re: print small DataFrame to STDOUT and read it back into dataframe

2020-04-06 Thread Reto
On Mon, Apr 06, 2020 at 06:29:01PM -0400, Luca wrote:
> so, given a dataframe, how do I make it print itself out as CSV?

read the docs of to_csv...

> And given CSV data in my clipboard, how do I paste it into a Jupyter cell
> (possibly along with a line or two of code) that will create a dataframe out
> of it?

```python
import pandas as pd
import io
df = pd.DataFrame(data=range(10))
out = df.to_csv(None)
new = pd.read_csv(io.StringIO(out), index_col=0)
```

That'll do the trick... any other serialization format works similarly.
You can copy `out` to wherever you like; it's just CSV data.
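
For the clipboard half of the question, a minimal sketch, assuming pandas'
optional clipboard support is available (it needs a system backend such as
pbcopy/pbpaste on macOS or xclip on Linux):

```python
import pandas as pd

df = pd.DataFrame(data=range(10))

# Copy the frame to the clipboard as tab-separated text...
df.to_clipboard(excel=True)

# ...then, e.g. in a Jupyter cell, paste it back into a new DataFrame.
new = pd.read_clipboard(sep="\t", index_col=0)
```

If the clipboard instead holds comma-separated text copied from a document,
`pd.read_clipboard(sep=",", index_col=0)` hands it straight to read_csv.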

Cheers,
Reto
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Multiprocessing queue sharing and python3.8

2020-04-06 Thread Israel Brewster
> On Apr 6, 2020, at 12:19 PM, David Raymond  wrote:
> 
> Attempting reply as much for my own understanding.
> 
> Are you on Mac? I think this is the pertinent bit for you:
> Changed in version 3.8: On macOS, the spawn start method is now the default. 
> The fork start method should be considered unsafe as it can lead to crashes 
> of the subprocess. See bpo-33725.

Ahhh, yep, that would do it! Using spawn rather than fork completely explains 
all the issues I was suddenly seeing. Didn’t even occur to me that the OS I was 
running might make a difference. And yes, forcing it back to using fork does 
indeed “fix” the issue. Of course, as is noted there, the fork start method 
should be considered unsafe, so I guess I get to re-architect everything I do 
using multiprocessing that relies on data-sharing between processes. The Queue 
example was just a minimum working example that illustrated the behavioral 
differences I was seeing :-) Thanks for the pointer!
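
For reference, a minimal sketch of what forcing fork back on can look like,
keeping in mind the docs' warning that fork is considered unsafe on macOS, so
it is a stopgap rather than a fix:

```python
import multiprocessing as mp

def work(x):
    return x * 2

if __name__ == "__main__":
    # Opt back into the pre-3.8 default start method on macOS.
    mp.set_start_method("fork")
    with mp.Pool() as pool:
        print(list(pool.imap(work, range(5))))
```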

---
Israel Brewster
Software Engineer
Alaska Volcano Observatory 
Geophysical Institute - UAF 
2156 Koyukuk Drive 
Fairbanks AK 99775-7320
Work: 907-474-5172
cell:  907-328-9145

> 
> When you start a new process (with the spawn method) it runs the module just 
> like it's being imported. So your global " mp_comm_queue2=mp.Queue()" creates 
> a new Queue in each process. Your initialization of mp_comm_queue is also 
> done inside the main() function, which doesn't get run in each process. So 
> each process in the Pool is going to have mp_comm_queue as None, and have its 
> own version of mp_comm_queue2. The ID being the same or different is the 
> result of one or more processes in the Pool being used repeatedly for the 
> multiple steps in imap, probably because the function that the Pool is 
> executing finishes so quickly.
> 
> Add a little extra info to the print calls (and/or set up logging to stdout 
> with the process name/id included) and you can see some of this. Here's the 
> hacked together changes I did for that.
> 
> import multiprocessing as mp
> import os
> 
> mp_comm_queue = None #Will be initialized in the main function
> mp_comm_queue2 = mp.Queue() #Test pre-initialized as well
> 
> def some_complex_function(x):
>     print("proc id", os.getpid())
>     print("mp_comm_queue", mp_comm_queue)
>     print("queue2 id", id(mp_comm_queue2))
>     mp_comm_queue2.put(x)
>     print("queue size", mp_comm_queue2.qsize())
>     print("x", x)
>     return x * 2
> 
> def main():
>     global mp_comm_queue
>     #initialize the Queue
>     mp_comm_queue = mp.Queue()
> 
>     #Set up a pool to process a bunch of stuff in parallel
>     pool = mp.Pool()
>     values = range(20)
>     data = pool.imap(some_complex_function, values)
> 
>     for val in data:
>         print(f"**{val}**")
>     print("final queue2 size", mp_comm_queue2.qsize())
> 
> if __name__ == "__main__":
>     main()
> 
> 
> 
> When making your own Process object and starting it, then yes, the Queue should 
> be passed into the function as an argument. The error text seems to be part 
> of the Pool implementation, which I'm not familiar enough with to know the 
> best way to handle it. (Probably something using the "initializer" and 
> "initargs" arguments for Pool)(maybe)
> 
> 
> 
> -Original Message-
> From: Python-list  On Behalf Of Israel Brewster
> Sent: Monday, April 6, 2020 1:24 PM
> To: Python 
> Subject: Multiprocessing queue sharing and python3.8
> 
> Under python 3.7 (and all previous versions I have used), the following code 
> works properly, and produces the expected output:
> 
> import multiprocessing as mp
> 
> mp_comm_queue = None #Will be initialized in the main function
> mp_comm_queue2=mp.Queue() #Test pre-initialized as well
> 
> def some_complex_function(x):
>     print(id(mp_comm_queue2))
>     assert(mp_comm_queue is not None)
>     print(x)
>     return x*2
> 
> def main():
>     global mp_comm_queue
>     #initialize the Queue
>     mp_comm_queue=mp.Queue()
> 
>     #Set up a pool to process a bunch of stuff in parallel
>     pool=mp.Pool()
>     values=range(20)
>     data=pool.imap(some_complex_function,values)
> 
>     for val in data:
>         print(f"**{val}**")
> 
> if __name__=="__main__":
>     main()
> 
> - mp_comm_queue2 has the same ID for all iterations of some_complex_function, 
> and the assert passes (mp_comm_queue is not None). However, under python 3.8, 
> it fails - mp_comm_queue2 is a *different* object for each iteration, and the 
> assert fails. 
> 
> So what am I doing wrong with the above example block? Assuming that it broke 
> in 3.8 because I wasn’t sharing the Queue properly, what is the proper way to 
> share a Queue object among multiple processes for the purposes of 
> inter-process communication?
> 
> The documentation 
> (https://docs.python.org/3.8/library/multiprocessing.html#exchanging-objects-between-processes
>  
> 

Re: print small DataFrame to STDOUT and read it back into dataframe

2020-04-06 Thread Luca

On 4/6/2020 3:03 PM, Christian Gollwitzer wrote:





CSV is the most sensible option here. It is widely supported by 
spreadsheets etc. and easily copy/pasteable.


Thank you Christian.

so, given a dataframe, how do I make it print itself out as CSV?

And given CSV data in my clipboard, how do I paste it into a Jupyter 
cell (possibly along with a line or two of code) that will create a 
dataframe out of it?



--
https://mail.python.org/mailman/listinfo/python-list


Re: Multiprocessing queue sharing and python3.8

2020-04-06 Thread Israel Brewster
> On Apr 6, 2020, at 12:27 PM, David Raymond  wrote:
> 
> Looks like this will get what you need.
> 
> 
> def some_complex_function(x):
>     global q
>     #stuff using q
> 
> def pool_init(q2):
>     global q
>     q = q2
> 
> def main():
>     #initialize the Queue
>     mp_comm_queue = mp.Queue()
> 
>     #Set up a pool to process a bunch of stuff in parallel
>     pool = mp.Pool(initializer = pool_init, initargs = (mp_comm_queue,))
>     ...
> 
> 

Gotcha, thanks. I’ll look more into that initializer argument and see how I can 
leverage it to do multiprocessing using spawn rather than fork in the future. 
Looks straightforward enough. Thanks again!

---
Israel Brewster
Software Engineer
Alaska Volcano Observatory 
Geophysical Institute - UAF 
2156 Koyukuk Drive 
Fairbanks AK 99775-7320
Work: 907-474-5172
cell:  907-328-9145

> 
> -Original Message-
> From: David Raymond 
> Sent: Monday, April 6, 2020 4:19 PM
> To: python-list@python.org
> Subject: RE: Multiprocessing queue sharing and python3.8
> 
> Attempting reply as much for my own understanding.
> 
> Are you on Mac? I think this is the pertinent bit for you:
> Changed in version 3.8: On macOS, the spawn start method is now the default. 
> The fork start method should be considered unsafe as it can lead to crashes 
> of the subprocess. See bpo-33725.
> 
> When you start a new process (with the spawn method) it runs the module just 
> like it's being imported. So your global " mp_comm_queue2=mp.Queue()" creates 
> a new Queue in each process. Your initialization of mp_comm_queue is also 
> done inside the main() function, which doesn't get run in each process. So 
> each process in the Pool is going to have mp_comm_queue as None, and have its 
> own version of mp_comm_queue2. The ID being the same or different is the 
> result of one or more processes in the Pool being used repeatedly for the 
> multiple steps in imap, probably because the function that the Pool is 
> executing finishes so quickly.
> 
> Add a little extra info to the print calls (and/or set up logging to stdout 
> with the process name/id included) and you can see some of this. Here's the 
> hacked together changes I did for that.
> 
> import multiprocessing as mp
> import os
> 
> mp_comm_queue = None #Will be initialized in the main function
> mp_comm_queue2 = mp.Queue() #Test pre-initialized as well
> 
> def some_complex_function(x):
>     print("proc id", os.getpid())
>     print("mp_comm_queue", mp_comm_queue)
>     print("queue2 id", id(mp_comm_queue2))
>     mp_comm_queue2.put(x)
>     print("queue size", mp_comm_queue2.qsize())
>     print("x", x)
>     return x * 2
> 
> def main():
>     global mp_comm_queue
>     #initialize the Queue
>     mp_comm_queue = mp.Queue()
> 
>     #Set up a pool to process a bunch of stuff in parallel
>     pool = mp.Pool()
>     values = range(20)
>     data = pool.imap(some_complex_function, values)
> 
>     for val in data:
>         print(f"**{val}**")
>     print("final queue2 size", mp_comm_queue2.qsize())
> 
> if __name__ == "__main__":
>     main()
> 
> 
> 
> When making your own Process object and starting it, then yes, the Queue should 
> be passed into the function as an argument. The error text seems to be part 
> of the Pool implementation, which I'm not familiar enough with to know the 
> best way to handle it. (Probably something using the "initializer" and 
> "initargs" arguments for Pool)(maybe)
> 
> 
> 
> -Original Message-
> From: Python-list  
> On Behalf Of Israel Brewster
> Sent: Monday, April 6, 2020 1:24 PM
> To: Python 
> Subject: Multiprocessing queue sharing and python3.8
> 
> Under python 3.7 (and all previous versions I have used), the following code 
> works properly, and produces the expected output:
> 
> import multiprocessing as mp
> 
> mp_comm_queue = None #Will be initialized in the main function
> mp_comm_queue2=mp.Queue() #Test pre-initialized as well
> 
> def some_complex_function(x):
>     print(id(mp_comm_queue2))
>     assert(mp_comm_queue is not None)
>     print(x)
>     return x*2
> 
> def main():
>     global mp_comm_queue
>     #initialize the Queue
>     mp_comm_queue=mp.Queue()
> 
>     #Set up a pool to process a bunch of stuff in parallel
>     pool=mp.Pool()
>     values=range(20)
>     data=pool.imap(some_complex_function,values)
> 
>     for val in data:
>         print(f"**{val}**")
> 
> if __name__=="__main__":
>     main()
> 
> - mp_comm_queue2 has the same ID for all iterations of some_complex_function, 
> and the assert passes (mp_comm_queue is not None). However, under python 3.8, 
> it fails - mp_comm_queue2 is a *different* object for each iteration, and the 
> assert fails. 
> 
> So what am I doing wrong with the above example block? Assuming that it broke 
> in 3.8 because I wasn’t sharing the Queue properly, what is the proper way to 
> share a Queue object among multiple processes for the purposes of 
> inter-process communication?
> 
> The documentation 
> (https://docs.python.org/3.8/library/multiprocessi

RE: Multiprocessing queue sharing and python3.8

2020-04-06 Thread David Raymond
Looks like this will get what you need.


def some_complex_function(x):
    global q
    #stuff using q

def pool_init(q2):
    global q
    q = q2

def main():
    #initialize the Queue
    mp_comm_queue = mp.Queue()

    #Set up a pool to process a bunch of stuff in parallel
    pool = mp.Pool(initializer = pool_init, initargs = (mp_comm_queue,))
    ...
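
Put together as a complete script, that initializer pattern might look like the
following sketch; the worker body is a placeholder, and the queue is drained
with get() rather than qsize(), since qsize() is not implemented on macOS:

```python
import multiprocessing as mp

q = None  # set in each worker process by pool_init

def pool_init(q2):
    # Runs once per worker; stash the queue in a module-level global so the
    # worker function can reach it even under the spawn start method.
    global q
    q = q2

def some_complex_function(x):
    q.put(x)  # uses the queue handed over at pool start-up
    return x * 2

def main():
    mp_comm_queue = mp.Queue()
    with mp.Pool(initializer=pool_init, initargs=(mp_comm_queue,)) as pool:
        for val in pool.imap(some_complex_function, range(20)):
            print(f"**{val}**")
        items = [mp_comm_queue.get() for _ in range(20)]
    print("items put on the queue:", len(items))

if __name__ == "__main__":
    main()
```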



-Original Message-
From: David Raymond 
Sent: Monday, April 6, 2020 4:19 PM
To: python-list@python.org
Subject: RE: Multiprocessing queue sharing and python3.8

Attempting reply as much for my own understanding.

Are you on Mac? I think this is the pertinent bit for you:
Changed in version 3.8: On macOS, the spawn start method is now the default. 
The fork start method should be considered unsafe as it can lead to crashes of 
the subprocess. See bpo-33725.

When you start a new process (with the spawn method) it runs the module just 
like it's being imported. So your global " mp_comm_queue2=mp.Queue()" creates a 
new Queue in each process. Your initialization of mp_comm_queue is also done 
inside the main() function, which doesn't get run in each process. So each 
process in the Pool is going to have mp_comm_queue as None, and have its own 
version of mp_comm_queue2. The ID being the same or different is the result of 
one or more processes in the Pool being used repeatedly for the multiple steps 
in imap, probably because the function that the Pool is executing finishes so 
quickly.

Add a little extra info to the print calls (and/or set up logging to stdout 
with the process name/id included) and you can see some of this. Here's the 
hacked together changes I did for that.

import multiprocessing as mp
import os

mp_comm_queue = None #Will be initialized in the main function
mp_comm_queue2 = mp.Queue() #Test pre-initialized as well

def some_complex_function(x):
    print("proc id", os.getpid())
    print("mp_comm_queue", mp_comm_queue)
    print("queue2 id", id(mp_comm_queue2))
    mp_comm_queue2.put(x)
    print("queue size", mp_comm_queue2.qsize())
    print("x", x)
    return x * 2

def main():
    global mp_comm_queue
    #initialize the Queue
    mp_comm_queue = mp.Queue()

    #Set up a pool to process a bunch of stuff in parallel
    pool = mp.Pool()
    values = range(20)
    data = pool.imap(some_complex_function, values)

    for val in data:
        print(f"**{val}**")
    print("final queue2 size", mp_comm_queue2.qsize())

if __name__ == "__main__":
    main()



When making your own Process object and starting it, then yes, the Queue should 
be passed into the function as an argument. The error text seems to be part 
of the Pool implementation, which I'm not familiar enough with to know the 
best way to handle it. (Probably something using the "initializer" and 
"initargs" arguments for Pool)(maybe)



-Original Message-
From: Python-list  On 
Behalf Of Israel Brewster
Sent: Monday, April 6, 2020 1:24 PM
To: Python 
Subject: Multiprocessing queue sharing and python3.8

Under python 3.7 (and all previous versions I have used), the following code 
works properly, and produces the expected output:

import multiprocessing as mp

mp_comm_queue = None #Will be initialized in the main function
mp_comm_queue2=mp.Queue() #Test pre-initialized as well

def some_complex_function(x):
    print(id(mp_comm_queue2))
    assert(mp_comm_queue is not None)
    print(x)
    return x*2

def main():
    global mp_comm_queue
    #initialize the Queue
    mp_comm_queue=mp.Queue()

    #Set up a pool to process a bunch of stuff in parallel
    pool=mp.Pool()
    values=range(20)
    data=pool.imap(some_complex_function,values)

    for val in data:
        print(f"**{val}**")

if __name__=="__main__":
    main()

- mp_comm_queue2 has the same ID for all iterations of some_complex_function, 
and the assert passes (mp_comm_queue is not None). However, under python 3.8, 
it fails - mp_comm_queue2 is a *different* object for each iteration, and the 
assert fails. 

So what am I doing wrong with the above example block? Assuming that it broke 
in 3.8 because I wasn’t sharing the Queue properly, what is the proper way to 
share a Queue object among multiple processes for the purposes of inter-process 
communication?

The documentation 
(https://docs.python.org/3.8/library/multiprocessing.html#exchanging-objects-between-processes)
appears to indicate that I should pass the queue as an argument to the 
function to be executed in parallel; however, that fails as well (on ALL 
versions of Python I have tried) with the error:

Traceback (most recent call last):
  File "test_multi.py", line 32, in <module>
    main()
  File "test_multi.py", line 28, in main
    for val in data:
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 748, in next
    raise value
  File "/Library/Fr

RE: Multiprocessing queue sharing and python3.8

2020-04-06 Thread David Raymond
Attempting reply as much for my own understanding.

Are you on Mac? I think this is the pertinent bit for you:
Changed in version 3.8: On macOS, the spawn start method is now the default. 
The fork start method should be considered unsafe as it can lead to crashes of 
the subprocess. See bpo-33725.

When you start a new process (with the spawn method) it runs the module just 
like it's being imported. So your global " mp_comm_queue2=mp.Queue()" creates a 
new Queue in each process. Your initialization of mp_comm_queue is also done 
inside the main() function, which doesn't get run in each process. So each 
process in the Pool is going to have mp_comm_queue as None, and have its own 
version of mp_comm_queue2. The ID being the same or different is the result of 
one or more processes in the Pool being used repeatedly for the multiple steps 
in imap, probably because the function that the Pool is executing finishes so 
quickly.

Add a little extra info to the print calls (and/or set up logging to stdout 
with the process name/id included) and you can see some of this. Here's the 
hacked together changes I did for that.

import multiprocessing as mp
import os

mp_comm_queue = None #Will be initialized in the main function
mp_comm_queue2 = mp.Queue() #Test pre-initialized as well

def some_complex_function(x):
    print("proc id", os.getpid())
    print("mp_comm_queue", mp_comm_queue)
    print("queue2 id", id(mp_comm_queue2))
    mp_comm_queue2.put(x)
    print("queue size", mp_comm_queue2.qsize())
    print("x", x)
    return x * 2

def main():
    global mp_comm_queue
    #initialize the Queue
    mp_comm_queue = mp.Queue()

    #Set up a pool to process a bunch of stuff in parallel
    pool = mp.Pool()
    values = range(20)
    data = pool.imap(some_complex_function, values)

    for val in data:
        print(f"**{val}**")
    print("final queue2 size", mp_comm_queue2.qsize())

if __name__ == "__main__":
    main()



When making your own Process object and starting it, then yes, the Queue should 
be passed into the function as an argument. The error text seems to be part 
of the Pool implementation, which I'm not familiar enough with to know the 
best way to handle it. (Probably something using the "initializer" and 
"initargs" arguments for Pool)(maybe)



-Original Message-
From: Python-list  On 
Behalf Of Israel Brewster
Sent: Monday, April 6, 2020 1:24 PM
To: Python 
Subject: Multiprocessing queue sharing and python3.8

Under python 3.7 (and all previous versions I have used), the following code 
works properly, and produces the expected output:

import multiprocessing as mp

mp_comm_queue = None #Will be initialized in the main function
mp_comm_queue2=mp.Queue() #Test pre-initialized as well

def some_complex_function(x):
    print(id(mp_comm_queue2))
    assert(mp_comm_queue is not None)
    print(x)
    return x*2

def main():
    global mp_comm_queue
    #initialize the Queue
    mp_comm_queue=mp.Queue()

    #Set up a pool to process a bunch of stuff in parallel
    pool=mp.Pool()
    values=range(20)
    data=pool.imap(some_complex_function,values)

    for val in data:
        print(f"**{val}**")

if __name__=="__main__":
    main()

- mp_comm_queue2 has the same ID for all iterations of some_complex_function, 
and the assert passes (mp_comm_queue is not None). However, under python 3.8, 
it fails - mp_comm_queue2 is a *different* object for each iteration, and the 
assert fails. 

So what am I doing wrong with the above example block? Assuming that it broke 
in 3.8 because I wasn’t sharing the Queue properly, what is the proper way to 
share a Queue object among multiple processes for the purposes of inter-process 
communication?

The documentation 
(https://docs.python.org/3.8/library/multiprocessing.html#exchanging-objects-between-processes)
appears to indicate that I should pass the queue as an argument to the 
function to be executed in parallel; however, that fails as well (on ALL 
versions of Python I have tried) with the error:

Traceback (most recent call last):
  File "test_multi.py", line 32, in <module>
    main()
  File "test_multi.py", line 28, in main
    for val in data:
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 748, in next
    raise value
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 431, in _handle_tasks
    put(task)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/reduction.py", line 51, in dumps
    cls(buf, protocol).dump(obj)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiproces

Re: print small DataFrame to STDOUT and read it back into dataframe

2020-04-06 Thread Christian Gollwitzer

Am 06.04.20 um 17:17 schrieb Luca:

On 4/6/2020 4:08 AM, Reto wrote:

Does this help?


Thank you, but not really. What I am trying to achieve is to have a way 
to copy and paste small yet complete dataframes (which may be the result 
of previous calculations) between a document (TXT, Word, GoogleDoc) and 
Jupyter/IPython.


Did I make sense?



CSV is the most sensible option here. It is widely supported by 
spreadsheets etc. and easily copy/pasteable.


Christian
--
https://mail.python.org/mailman/listinfo/python-list


Introduction to PyLiveUpdate: A runtime python code manipulation framework

2020-04-06 Thread 0xcc
Hi everyone,

I would like to introduce PyLiveUpdate 
(https://github.com/devopspp/pyliveupdate), a tool that helps you modify your 
running Python code without stopping and restarting it. This is helpful when 
you want to add some code (like a print for debugging) or modify a function 
definition (like fixing a bug) in a long-running program (like a web server). 
I'm now seeking feedback on this. Any comments are welcome and appreciated!

Best,
CC
-- 
https://mail.python.org/mailman/listinfo/python-list


Multiprocessing queue sharing and python3.8

2020-04-06 Thread Israel Brewster
Under python 3.7 (and all previous versions I have used), the following code 
works properly, and produces the expected output:

import multiprocessing as mp

mp_comm_queue = None #Will be initialized in the main function
mp_comm_queue2=mp.Queue() #Test pre-initialized as well

def some_complex_function(x):
    print(id(mp_comm_queue2))
    assert(mp_comm_queue is not None)
    print(x)
    return x*2

def main():
    global mp_comm_queue
    #initialize the Queue
    mp_comm_queue=mp.Queue()

    #Set up a pool to process a bunch of stuff in parallel
    pool=mp.Pool()
    values=range(20)
    data=pool.imap(some_complex_function,values)

    for val in data:
        print(f"**{val}**")

if __name__=="__main__":
    main()

- mp_comm_queue2 has the same ID for all iterations of some_complex_function, 
and the assert passes (mp_comm_queue is not None). However, under python 3.8, 
it fails - mp_comm_queue2 is a *different* object for each iteration, and the 
assert fails. 

So what am I doing wrong with the above example block? Assuming that it broke 
in 3.8 because I wasn’t sharing the Queue properly, what is the proper way to 
share a Queue object among multiple processes for the purposes of inter-process 
communication?

The documentation 
(https://docs.python.org/3.8/library/multiprocessing.html#exchanging-objects-between-processes)
appears to indicate that I should pass the queue as an argument to the 
function to be executed in parallel; however, that fails as well (on ALL 
versions of Python I have tried) with the error:

Traceback (most recent call last):
  File "test_multi.py", line 32, in <module>
    main()
  File "test_multi.py", line 28, in main
    for val in data:
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 748, in next
    raise value
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 431, in _handle_tasks
    put(task)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/reduction.py", line 51, in dumps
    cls(buf, protocol).dump(obj)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/queues.py", line 58, in __getstate__
    context.assert_spawning(self)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/context.py", line 356, in assert_spawning
    ' through inheritance' % type(obj).__name__
RuntimeError: Queue objects should only be shared between processes through 
inheritance

after I add the following to the code to try passing the queue rather than 
having it global:

#Try by passing queue
values=[(x,mp_comm_queue) for x in range(20)]
data=pool.imap(some_complex_function,values)
for val in data:
    print(f"**{val}**")

So if I can’t pass it as an argument, and having it global is incorrect (at 
least starting with 3.8), what is the proper method of getting multiprocessing 
queues to child processes?
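
A sketch of one workaround that does let a queue travel as a plain argument,
assuming a manager-backed queue is acceptable for the workload (its proxy
objects are picklable, at the cost of some speed):

```python
import multiprocessing as mp

def some_complex_function(args):
    x, q = args  # unpack the (value, queue-proxy) tuple
    q.put(x)
    return x * 2

def main():
    with mp.Manager() as manager:
        q = manager.Queue()  # proxy object, safe to pass to pool workers
        with mp.Pool() as pool:
            values = [(x, q) for x in range(20)]
            for val in pool.imap(some_complex_function, values):
                print(f"**{val}**")
            print("items queued:", q.qsize())

if __name__ == "__main__":
    main()
```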

---
Israel Brewster
Software Engineer
Alaska Volcano Observatory 
Geophysical Institute - UAF 
2156 Koyukuk Drive 
Fairbanks AK 99775-7320
Work: 907-474-5172
cell:  907-328-9145

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: print small DataFrame to STDOUT and read it back into dataframe

2020-04-06 Thread Luca

On 4/6/2020 4:08 AM, Reto wrote:

Does this help?


Thank you, but not really. What I am trying to achieve is to have a way 
to copy and paste small yet complete dataframes (which may be the result 
of previous calculations) between a document (TXT, Word, GoogleDoc) and 
Jupyter/IPython.


Did I make sense?

Thanks
--
https://mail.python.org/mailman/listinfo/python-list


[RELEASE] Python 2.7.18 release candidate 1

2020-04-06 Thread Benjamin Peterson
Greetings,
2.7.18 release candidate 1, a testing release for the last release of the 
Python 2.7 series, is now available for download. The CPython core developers 
stopped applying routine bugfixes to the 2.7 branch on January 1. 2.7.18 will 
include fixes that were made between the release of 2.7.17 and the end of 
2019. A final—very final!—release is expected in 2 weeks.

Downloads are at:

   https://www.python.org/downloads/release/python-2718rc1/

The full changelog is at

   
https://raw.githubusercontent.com/python/cpython/v2.7.18rc1/Misc/NEWS.d/2.7.18rc1.rst

Test it out, and let us know if there are any critical problems at

https://bugs.python.org/

(This is the last chance!)

All the best,
Benjamin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Exceptions versus Windows ERRORLEVEL

2020-04-06 Thread Stephen Tucker
Thanks, Eryk - this is very helpful.

Stephen.

On Mon, Apr 6, 2020 at 6:43 AM Eryk Sun  wrote:

> On 4/3/20, Stephen Tucker  wrote:
> >
> > Does an exception raised by a Python 3.x program on a Windows machine set
> > ERRORLEVEL?
>
> ERRORLEVEL is an internal state of the CMD shell. It has nothing to do
> with Python. If Python exits due to an unhandled exception, the
> process exit code will be 1. If CMD waits on the process, it will set
> the ERRORLEVEL based on the exit code. But CMD doesn't always wait. By
> default its START command doesn't wait. Also, at the interactive
> command prompt it doesn't wait for non-console applications such as
> "pythonw.exe"; it only waits for console applications such as
> "python.exe".
>
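
A quick way to see the exit-code side of this from Python itself, as a minimal
sketch (CMD's ERRORLEVEL simply mirrors the same return code when it waits on
the process):

```python
import subprocess
import sys

# A child that dies with an unhandled exception exits with code 1; when CMD
# waits on such a process, that value becomes ERRORLEVEL.
proc = subprocess.run([sys.executable, "-c", "raise ValueError('boom')"])
print(proc.returncode)  # 1

# An explicit exit code propagates the same way.
proc = subprocess.run([sys.executable, "-c", "import sys; sys.exit(3)"])
print(proc.returncode)  # 3
```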
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Decorator with parameters

2020-04-06 Thread Chris Angelico
On Mon, Apr 6, 2020 at 6:36 PM ast  wrote:
>
> Hello
>
> I wrote a decorator to add a cache to functions.
> I realized that the cache dictionary could be defined
> as an object attribute or as a local variable in
> the method __call__.
> Both seem to work properly.
> Can you see any differences between the two variants?
>

There is a small difference, but it probably won't bother you. If you
instantiate your Memoize object once and then call it twice, one form
will share, the other form will have distinct caches.

memo = Memoize1(16) # or Memoize2(16)

@memo
def fib1(n): ...

@memo
def fib2(n): ...

The difference is WHEN the cache dictionary is created. That's all. :)
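
A tiny sketch of that difference, assuming the Memoize1/Memoize2 classes from
the original post are already defined:

```python
memo = Memoize1(16)   # one instance used to decorate two functions

@memo
def double(n):
    return n * 2

@memo
def triple(n):
    return n * 3

print(double(5))  # 10, computed and stored under the key (5,)
print(triple(5))  # also 10! the shared cache already holds (5,), so the
                  # stale value comes back instead of 15

# With a shared Memoize2 instance the cache is created inside __call__, so
# each decorated function gets its own dict and triple(5) returns 15.
```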

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Decorator with parameters

2020-04-06 Thread ast

Hello

I wrote a decorator to add a cache to functions.
I realized that the cache dictionary could be defined
as an object attribute or as a local variable in
the method __call__.
Both seem to work properly.
Can you see any differences between the two variants?

from collections import OrderedDict

class Memoize1:
    def __init__(self, size=None):
        self.size = size
        self.cache = OrderedDict() ### cache defined as an attribute
    def __call__(self, f):
        def f2(*arg):
            if arg not in self.cache:
                self.cache[arg] = f(*arg)
                if self.size is not None and len(self.cache) > self.size:
                    self.cache.popitem(last=False)
            return self.cache[arg]
        return f2

# variant

class Memoize2:
    def __init__(self, size=None):
        self.size = size
    def __call__(self, f):
        cache = OrderedDict()  ### cache defined as a local variable
        def f2(*arg):
            if arg not in cache:
                cache[arg] = f(*arg)
                if self.size is not None and len(cache) > self.size:
                    cache.popitem(last=False)
            return cache[arg]
        return f2

@Memoize1(16)
def fibo1(n):
    if n < 2: return n
    return fibo1(n-2)+fibo1(n-1)


@Memoize2(16)
def fibo2(n):
    if n < 2: return n
    return fibo2(n-2)+fibo2(n-1)
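
For comparison, a sketch of the same idea using the standard library's
functools.lru_cache, which gives a bounded per-function cache without a
hand-rolled class:

```python
from functools import lru_cache

@lru_cache(maxsize=16)
def fibo3(n):
    if n < 2:
        return n
    return fibo3(n - 2) + fibo3(n - 1)

print(fibo3(30))  # 832040
```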
--
https://mail.python.org/mailman/listinfo/python-list


Re: print small DataFrame to STDOUT and read it back into dataframe

2020-04-06 Thread Reto
On Sat, Apr 04, 2020 at 07:00:23PM -0400, Luca wrote:
> dframe.to_string
> 
> gives:
> 
>  0  a0  b0  c0  d0
> 1  a1  b1  c1  d1
> 2  a2  b2  c2  d2
> 3  a3  b3  c3  d3>

That's not the output of to_string.
to_string is a method, not an attribute, which is apparent from the
`<bound method ...>` wrapper in your output.
You need to call it with parentheses, like `dframe.to_string()`.

> Can I evaluate this string to obtain a new dataframe like the one that
> generated it?

As for re-importing, serialize the frame to something sensible first.
There are several options available, csv, json, html... Take your pick.

You can find all those in the dframe.to_$something namespace
(again, those are methods, make sure to call them).

Import it again with pandas.read_$something, choosing the same serialization 
format
you picked for the output in the first place.
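
For instance, a rough sketch of the JSON round trip (any of the other
to_/read_ pairs works the same way):

```python
import io
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
text = df.to_json()                    # serialize to a JSON string
new = pd.read_json(io.StringIO(text))  # ...and read it back into a DataFrame
```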

Does this help?

Cheers,
Reto
-- 
https://mail.python.org/mailman/listinfo/python-list