[issue38119] resource tracker destroys shared memory segments when other processes should still have valid access

2020-11-05 Thread Vinay Sharma


Change by Vinay Sharma :


--
pull_requests: +22087
pull_request: https://github.com/python/cpython/pull/23174




[issue39584] multiprocessing.shared_memory: MacOS crashes by running attached Python code

2020-08-14 Thread Vinay Sharma


Change by Vinay Sharma :


--
keywords: +patch
pull_requests: +21002
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/21877




[issue39584] multiprocessing.shared_memory: MacOS crashes by running attached Python code

2020-08-12 Thread Vinay Sharma


Vinay Sharma  added the comment:

I have 8 GB of RAM and 128 GB of disk.

Creating a shared memory segment of size 10^12 bytes (1 terabyte) somehow
succeeds.

When creating a segment of 10^15 bytes (1 petabyte), mmap (not ftruncate)
throws an error stating that it cannot allocate memory.

Creating a segment of 10^18 bytes (1 exabyte) causes the system to crash.

Now, I understand that this should be documented for a genuine user.

But if it is documented, it can also be used by malicious software to crash
systems abruptly from Python.

Also, I understand that this is really a macOS issue, but shouldn't Python
handle it so that at least Python's APIs are safe?

Creating a shared memory segment of 1 exabyte is not reasonable, but if
someone makes that mistake we should raise an error instead of crashing.

Also, can we set a maximum limit of 1 TB on shared memory segments? No one
would genuinely need to create a segment of that size on a single machine.
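
Until something like that exists in the library, a caller can enforce such a
cap themselves. A minimal sketch, assuming a hypothetical 1 TiB limit and
wrapper name (neither is part of the current API):

```
from multiprocessing import shared_memory

MAX_SHM_SIZE = 2 ** 40  # hypothetical 1 TiB cap, as suggested above

def create_capped_segment(name, size):
    # Reject absurd sizes before shm_open/ftruncate ever run, so a typo in
    # `size` raises a Python exception instead of destabilising the OS.
    if not 0 < size <= MAX_SHM_SIZE:
        raise ValueError(f"size must be between 1 and {MAX_SHM_SIZE} bytes")
    return shared_memory.SharedMemory(name=name, create=True, size=size)
```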

--




[issue38119] resource tracker destroys shared memory segments when other processes should still have valid access

2020-08-06 Thread Vinay Sharma


Vinay Sharma  added the comment:

As suggested by Guido, I have floated this solution on the python-dev mailing list.
Link to archive: 
https://mail.python.org/archives/list/python-...@python.org/thread/O67CR7QWEOJ7WDAJEBKSY74NQ2C4W3AI/

--




[issue38119] resource tracker destroys shared memory segments when other processes should still have valid access

2020-08-06 Thread Vinay Sharma


Vinay Sharma  added the comment:

Well, the chances of the resource tracker dying abruptly are very low, because
it is thoroughly tested, and if you look at the code there are mechanisms to
re-spawn the resource_tracker process (there is a function called
`ensure_running`).

The resource tracker stays alive even if the process for which it was created
dies. It also handles cleaning up shared semaphores. So I guess it is something
we can rely on for cleanup, because at the end of the day that is what it was
made for.

--




[issue38119] resource tracker destroys shared memory segments when other processes should still have valid access

2020-08-06 Thread Vinay Sharma


Vinay Sharma  added the comment:

That's a valid point, Guido. But I guess this can easily be handled by the
resource tracker. At the moment the resource tracker unlinks shared memory if
the process which created it dies without unlinking it.

In other words, the resource tracker cleans up resources which the respective
processes couldn't or didn't clean up themselves.

So, instead of unlinking the shared memory segment outright, the resource
tracker could decrement the reference count (if the process failed to do so),
and unlink the segment only once the reference count reaches 0.

This approach ensures that there are no leaks even if the respective processes
died unexpectedly.

--




[issue41344] SharedMemory crash when size is 0

2020-07-24 Thread Vinay Sharma


Vinay Sharma  added the comment:

Also, Linux creates the shared memory segment with the passed name even if
size = 0, but then throws an error when it is mapped into the process's
virtual memory. Therefore, on Linux this scenario should be rejected before
the shared memory segment is actually created.

--




[issue41344] SharedMemory crash when size is 0

2020-07-24 Thread Vinay Sharma


Vinay Sharma  added the comment:

Hi, the patch aims to raise a ValueError because, even before the patch,
ValueError was raised when size was negative. That's why I thought raising
ValueError for size 0 would be the correct behaviour.

Do you mean that OSError should be raised when size=0?
Also, I think Windows was handled fine even before the patch; it was macOS
and Linux that were causing problems.

--




[issue39584] multiprocessing.shared_memory: MacOS crashes by running attached Python code

2020-07-20 Thread Vinay Sharma


Change by Vinay Sharma :


--
nosy: +pitrou




[issue38018] Increase Code Coverage for multiprocessing.shared_memory

2020-07-20 Thread Vinay Sharma


Vinay Sharma  added the comment:

Closing this, as all the necessary PRs have been merged.

--
stage: patch review -> resolved
status: open -> closed




[issue41344] SharedMemory crash when size is 0

2020-07-20 Thread Vinay Sharma


Change by Vinay Sharma :


--
keywords: +patch
pull_requests: +20702
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/21556




[issue41344] SharedMemory crash when size is 0

2020-07-20 Thread Vinay Sharma


New submission from Vinay Sharma :

On running this: shm = SharedMemory(create=True, size=0)
I get the following error:



Traceback (most recent call last):
  File "", line 1, in 
  File "/usr/lib/python3.8/multiprocessing/shared_memory.py", line 111, in __init__
    self._mmap = mmap.mmap(self._fd, size)
ValueError: cannot mmap an empty file


This can be resolved simply by adding a check for size 0, so that a shared
memory segment is never created in that case.
Currently, the code reads:

if not size >= 0:
    raise ValueError("'size' must be a positive integer")

I believe this should be changed to:

if not size > 0:
    raise ValueError("'size' must be a positive integer")

as zero is not a positive integer and is causing problems.
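
For illustration, this is the behaviour the stricter check would give (a
sketch assuming the proposed `size > 0` test above is in place):

```
from multiprocessing.shared_memory import SharedMemory

try:
    shm = SharedMemory(create=True, size=0)
except ValueError as exc:
    # With the proposed check no segment is created, and the caller gets a
    # clear message instead of "cannot mmap an empty file" from mmap.
    print(exc)  # 'size' must be a positive integer
```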

--
components: Library (Lib)
messages: 373990
nosy: vinay0410
priority: normal
severity: normal
status: open
title: SharedMemory crash when size is 0
type: crash
versions: Python 3.8




[issue38169] Increase Code Coverage for SharedMemory and ShareableListe

2020-07-19 Thread Vinay Sharma


Vinay Sharma  added the comment:

Removed the check that prevented creating shared_memory with size 0. I will
create a separate issue for that.

--




[issue40828] shared memory problems with multiprocessing.Pool

2020-07-19 Thread Vinay Sharma


Vinay Sharma  added the comment:

Hi, can you please confirm the Python version in which you reproduced this
issue? I tried to reproduce the segmentation fault in Python 3.8.0b3 but
didn't succeed, and shared_memory isn't present in 3.7. Also, shared_memory is
known to emit some warnings, and there is already an issue and PR open for
that: https://bugs.python.org/issue38119.

--
nosy: +vinay0410




[issue39584] multiprocessing.shared_memory: MacOS crashes by running attached Python code

2020-07-17 Thread Vinay Sharma


Vinay Sharma  added the comment:

Hi, I tried replicating this by truncating normal files, but that doesn't
crash. The above-mentioned ftruncate call only crashes when the file
descriptor passed to it points to a shared memory segment.
And only multiprocessing.shared_memory currently creates shared memory using
_posixshmem.shm_open.

So it could be fixed in the ftruncate implementation, or it could be handled
by multiprocessing.shared_memory.

Please let me know your thoughts on this.

--
nosy: +christian.heimes




[issue38119] resource tracker destroys shared memory segments when other processes should still have valid access

2020-07-17 Thread Vinay Sharma


Change by Vinay Sharma :


--
pull_requests: +20652
pull_request: https://github.com/python/cpython/pull/21516




[issue39959] Bug on multiprocessing.shared_memory

2020-07-17 Thread Vinay Sharma


Vinay Sharma  added the comment:

@rauanargyn, a persist flag won't be a good idea because it cannot easily be
supported on Windows: Windows uses a reference counting mechanism to keep
track of shared memory and frees it as soon as all the processes using it are
done.

--




[issue39959] Bug on multiprocessing.shared_memory

2020-07-17 Thread Vinay Sharma


Vinay Sharma  added the comment:

Hi, shared_memory has a lot of issues which are mainly caused by resource
tracking. Initially resource tracking was implemented to keep track of
semaphores only, but for some reason the resource tracker also started
tracking shared_memory.
This makes shared memory practically useless when used by unrelated processes,
because the segment is unlinked as soon as one process dies, before processes
which are yet to be spawned can attach to it.

There is already a PR open to fix this,
https://github.com/python/cpython/pull/15989/files , by applio (a core
developer), but for some reason it hasn't been merged yet. I will try to fix
the conflicts and request that it be merged.

Now, this will fix most of the issues with shared memory, but the current Unix
implementation will still not be consistent with Windows (it isn't at the
moment either). You can read more about that here:
https://bugs.python.org/issue38119#msg352050

--
nosy: +vinay0410




[issue38035] shared_semaphores cannot be shared across unrelated processes

2020-06-19 Thread Vinay Sharma


Vinay Sharma  added the comment:

Hi @taleinat, I sent a mail to the Python mailing list as suggested, and so
far it has received rather positive comments, in the sense that the people who
responded see this as a useful feature to integrate into Python.
--




[issue38035] shared_semaphores cannot be shared across unrelated processes

2020-06-05 Thread Vinay Sharma


Vinay Sharma  added the comment:

As suggested, I have sent a mail to Python Ideas regarding this issue.
Link to the Python Ideas archive:
https://mail.python.org/archives/list/python-id...@python.org/thread/X4AKFFMYEKW6GFOUMXMOJ2OBINNY2Q6L/

--




[issue38035] shared_semaphores cannot be shared across unrelated processes

2020-02-23 Thread Vinay Sharma


Vinay Sharma  added the comment:

Hi,
I think a use case for this is 
https://docs.python.org/3/library/multiprocessing.shared_memory.html

If not, can you please suggest a way to synchronise such access across
unrelated processes?
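
In the meantime, one possible workaround is to serialise access with an
advisory lock on an agreed-upon regular file. A rough sketch (the lock-file
path and helper name are illustrative assumptions, not an existing API):

```
import fcntl
from contextlib import contextmanager

LOCKFILE = "/tmp/shm_example.lock"  # hypothetical rendezvous path

@contextmanager
def shared_memory_lock(path=LOCKFILE):
    # Any number of unrelated processes can open the same path; flock gives
    # them mutual exclusion around their reads/writes of the shared block.
    with open(path, "w") as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        try:
            yield
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
```

Each unrelated process would then wrap its reads and writes of the shared
memory buffer in `with shared_memory_lock(): ...`.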

--




[issue39584] MacOS crashes by running attached Python code

2020-02-08 Thread Vinay Sharma


New submission from Vinay Sharma :

Consider the following Python code:

```
from multiprocessing.shared_memory import SharedMemory
shm = SharedMemory(name='test-crash', create=True, size=100)
```

This causes macOS Catalina and Mojave to freeze and then crash, although it
works fine on Ubuntu.

After debugging, I realised that this is due to the ftruncate call. I could
replicate the crash by calling os.ftruncate, and also by using ftruncate in C
code.

The following C++ code also crashes, which confirms that ftruncate on macOS is
broken:

```
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int main() {

    int shm_fd = shm_open("/test-shm2", O_CREAT | O_RDWR, 0666);

    if (shm_fd == -1) {
        throw "Shared Memory Object couldn't be created or opened";
    }

    int rv = ftruncate(shm_fd, (long long)100);

}
```

Should Python handle this in some way, so as to prevent such crashes from
Python code?

--
components: C API
messages: 361629
nosy: vinay0410
priority: normal
severity: normal
status: open
title: MacOS crashes by running attached Python code
versions: Python 3.8, Python 3.9




[issue39584] MacOS crashes by running attached Python code

2020-02-08 Thread Vinay Sharma


Change by Vinay Sharma :


--
type:  -> crash




[issue38169] Increase Code Coverage for SharedMemory and ShareableListe

2019-09-14 Thread Vinay Sharma


Change by Vinay Sharma :


--
keywords: +patch
pull_requests: +15749
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/16139




[issue38169] Increase Code Coverage for SharedMemory and ShareableListe

2019-09-14 Thread Vinay Sharma


New submission from Vinay Sharma :

Add tests for SharedMemory and ShareableList. I have also added a check to
prevent users from creating shared memory of size 0, because mmap will throw
an error when memory-mapping a segment of size 0 after it is created.

--
components: Tests
messages: 352420
nosy: vinay0410
priority: normal
severity: normal
status: open
title: Increase Code Coverage for SharedMemory and ShareableListe
versions: Python 3.9




[issue38035] shared_semaphores cannot be shared across unrelated processes

2019-09-13 Thread Vinay Sharma


Vinay Sharma  added the comment:

A common use case for this is shared memory. Currently shared memory can be
used by unrelated processes, but there is no mechanism at the moment to
synchronise them.
--




[issue38119] resource tracker destroys shared memory segments when other processes should still have valid access

2019-09-11 Thread Vinay Sharma


Vinay Sharma  added the comment:

Hi Davin,
This PR would fix the issues mentioned by you, by not prematurely unlinking the 
shared memory segment. And, therefore it would make shared memory useful in a 
lot of use cases.

But, this would still not make Unix's implementation consistent with Windows.
Windows uses a reference counting mechanism to count the number of processes 
using a shared memory segment. When all of them are done using it, Windows 
simply unlinks and frees the memory allocated to the shared memory segment.

I know that you already know this. I am commenting to find out, that what would 
be the next steps to fix the above inconsistency. You could see my last 
comment(msg351445) in issue37754, where I have listed some ways to implement 
the above reference counting mechanism. 

If you could have a look and see which one would be the best way, I would be 
happy to make a PR for it.

--




[issue38018] Increase Code Coverage for multiprocessing.shared_memory

2019-09-09 Thread Vinay Sharma


Vinay Sharma  added the comment:

Also, I haven't made a NEWS entry, since it's just a short bug fix.

--




[issue38018] Increase Code Coverage for multiprocessing.shared_memory

2019-09-09 Thread Vinay Sharma


Vinay Sharma  added the comment:

Hi, I have opened another PR: https://github.com/python/cpython/pull/15821
This should fix the test failures on FreeBSD.

FreeBSD requires a leading slash in shared memory names, which is why it was
throwing the error below:

==
ERROR: test_shared_memory_basics (test.test_multiprocessing_spawn.WithProcessesTestSharedMemory)
--
Traceback (most recent call last):
  File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/test/_test_multiprocessing.py", line 3737, in test_shared_memory_basics
    shm1 = shared_memory.SharedMemory(create=True, size=1)
  File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/multiprocessing/shared_memory.py", line 89, in __init__
    self._fd = _posixshmem.shm_open(
OSError: [Errno 22] Invalid argument: 'test01_fn'
--

'test01_fn' doesn't contain a leading slash, which is why it is an invalid argument.
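
A tiny illustration of the naming constraint (the helper below is
hypothetical, not the fix used in the PR):

```
def freebsd_safe_shm_name(name: str) -> str:
    # FreeBSD requires POSIX shared memory names to begin with '/';
    # Linux accepts names without the leading slash, which is why these
    # test names worked there.
    return name if name.startswith("/") else "/" + name

print(freebsd_safe_shm_name("test01_fn"))  # /test01_fn
```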

--




[issue38018] Increase Code Coverage for multiprocessing.shared_memory

2019-09-09 Thread Vinay Sharma


Change by Vinay Sharma :


--
pull_requests: +15469
pull_request: https://github.com/python/cpython/pull/15821




[issue37754] Consistency of Unix's shared_memory implementation with windows

2019-09-09 Thread Vinay Sharma


Vinay Sharma  added the comment:

Hi @davin,
I researched a lot of approaches to solve this problem, and I have listed some
of the best ones below.

1. Eryk Sun initially suggested using advisory locking to implement a
reference-count type of mechanism. I implemented this in the currently open
pull request, and it works great on most platforms but fails on macOS. This is
because macOS, unlike Linux, makes no file-system entry in /dev/shm; in fact
it makes no filesystem entry for the created shared memory segment at all.
Therefore this won't work.



2. I thought of manually creating a file entry for the created shared memory
segment in /tmp on macOS. This will work just fine unless the user manually
changes the permissions of the /tmp directory, and with this approach we have
to rely on /tmp being writable.



3. Shared semaphores: This is a very interesting approach to implementing a
reference count, where a semaphore is created for every shared memory segment
and keeps the reference count of that segment.
The resource tracker would clear the shared memory segment and the shared
semaphore as soon as the value of the shared semaphore becomes 0.

The only problem with this approach is deciding the name of the shared
semaphore. We would have to define a standardised way to derive the shared
semaphore's name from the shared segment's name.

For instance, shared_semaphore_name = 'psem' + shared_segment_name.
This can cause problems if a semaphore with shared_semaphore_name is already
present.

That could be solved by taking any available name, storing it inside the
shared memory segment upon creation, and then extracting this name to open the
shared semaphore.



4. Another way could be to initialise a semaphore with sem_init() inside the
shared memory segment itself; for example, the first 4 bytes could be reserved
for the semaphore. But since macOS has deprecated sem_init(), this wouldn't be
good practice.



5. Atomic calls: I think this is the simplest and best way to solve the
problem. We can reserve the first 4 bytes for an integer which is simply the
reference count of the shared memory, and to prevent data races we can use
atomic calls to update it.
gcc has built-in support for some atomic operations. I have tested them across
processes by updating the counter inside shared memory, and they work great.

Some of these atomic calls are:

type __sync_add_and_fetch (type *ptr, type value, ...)
type __sync_sub_and_fetch (type *ptr, type value, ...)
type __sync_or_and_fetch (type *ptr, type value, ...)

Their documentation can be found at the links below:
https://gcc.gnu.org/onlinedocs/gcc/_005f_005fsync-Builtins.html#g_t_005f_005fsync-Builtins
https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html#g_t_005f_005fatomic-Builtins
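
To make the layout in option 5 concrete, here is a rough Python sketch of a
segment whose first 4 bytes hold the reference count (helper names are
illustrative; plain struct reads/writes from Python are not atomic, so a real
implementation would update the counter with the C-level builtins listed
above):

```
import struct
from multiprocessing import shared_memory

REFCOUNT_FMT = "i"                      # 4-byte signed int at offset 0
HEADER_SIZE = struct.calcsize(REFCOUNT_FMT)

def create_counted(name, user_size):
    # Reserve 4 extra bytes at the front of the segment for the counter.
    shm = shared_memory.SharedMemory(name=name, create=True,
                                     size=HEADER_SIZE + user_size)
    struct.pack_into(REFCOUNT_FMT, shm.buf, 0, 1)
    return shm

def attach_counted(name):
    shm = shared_memory.SharedMemory(name=name)
    count = struct.unpack_from(REFCOUNT_FMT, shm.buf, 0)[0]
    struct.pack_into(REFCOUNT_FMT, shm.buf, 0, count + 1)    # not atomic!
    return shm

def release_counted(shm):
    count = struct.unpack_from(REFCOUNT_FMT, shm.buf, 0)[0] - 1
    struct.pack_into(REFCOUNT_FMT, shm.buf, 0, count)        # not atomic!
    if count == 0:
        shm.unlink()
    shm.close()
```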

--




[issue38035] shared_semaphores cannot be shared across unrelated processes

2019-09-05 Thread Vinay Sharma


Vinay Sharma  added the comment:

Let's say I have two processes accessing a shared resource (say, shared
memory). These two processes need some kind of synchronisation mechanism to
access the resource properly and avoid race conditions.

Currently, shared semaphores can be shared with child processes, but there is
no way for two unrelated processes to access the same named semaphore.

Although the semaphore created is a named semaphore by default, and has the
potential to be accessed by multiple processes, the flags used mean it can't
be opened by any other process.

This is a feature request rather than a bug, which is why I have set the type
to enhancement.

--




[issue38035] shared_semaphores cannot be shared across unrelated processes

2019-09-04 Thread Vinay Sharma


New submission from Vinay Sharma :

Currently, shared semaphores can only be created; existing semaphores can't be
opened. Shared semaphores are created with the following call:
```
sem_open(name, O_CREAT | O_EXCL, 0600, val)
```
This raises an error if a semaphore with that name already exists.
This behaviour works well when the file descriptors of these semaphores can be
shared with child processes.

But it doesn't work when an unrelated process which needs access to the shared
semaphore tries to open it.

--
components: Library (Lib)
messages: 351176
nosy: vinay0410
priority: normal
severity: normal
status: open
title: shared_semaphores cannot be shared across unrelated processes
versions: Python 3.9




[issue38035] shared_semaphores cannot be shared across unrelated processes

2019-09-04 Thread Vinay Sharma


Change by Vinay Sharma :


--
type:  -> enhancement




[issue38018] Increase Code Coverage for multiprocessing.shared_memory

2019-09-03 Thread Vinay Sharma


Vinay Sharma  added the comment:

Can anyone please review my pull request? No reviewer has been assigned to it
yet, and I don't have permission to request a review.

--




[issue38018] Increase Code Coverage for multiprocessing.shared_memory

2019-09-03 Thread Vinay Sharma


Change by Vinay Sharma :


--
keywords: +patch
pull_requests: +15327
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/15662




[issue38018] Increase Code Coverage for multiprocessing.shared_memory

2019-09-03 Thread Vinay Sharma


New submission from Vinay Sharma :

Increase Code coverage for multiprocessing.shared_memory.SharedMemory Class

--
components: Tests
messages: 351085
nosy: vinay0410
priority: normal
severity: normal
status: open
title: Increase Code Coverage for multiprocessing.shared_memory
versions: Python 3.9




[issue37964] F_GETPATH is not available in fcntl.fcntl

2019-08-27 Thread Vinay Sharma


Vinay Sharma  added the comment:

I have opened a PR, but no reviewers have been assigned.
Could you please look into that?

--




[issue37964] F_GETPATH is not available in fcntl.fcntl

2019-08-27 Thread Vinay Sharma


Change by Vinay Sharma :


--
keywords: +patch
pull_requests: +15226
stage: needs patch -> patch review
pull_request: https://github.com/python/cpython/pull/15550




[issue37964] F_GETPATH is not available in fcntl.fcntl

2019-08-27 Thread Vinay Sharma


New submission from Vinay Sharma :

The F_GETPATH command is not present in the fcntl module.
It is specific to macOS and only available there.

https://developer.apple.com/library/archive/documentation/System/Conceptual/ManPages_iPhoneOS/man2/fcntl.2.html

This can also be verified using `man fcntl`.
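
For context, a rough sketch of how this would be used once the constant is
exposed (macOS only; the 1024-byte buffer size stands in for MAXPATHLEN and is
an assumption):

```
import fcntl
import os

fd = os.open("/tmp", os.O_RDONLY)
# F_GETPATH fills a MAXPATHLEN-sized buffer with the path backing the
# descriptor; fcntl returns the modified buffer as bytes.
raw = fcntl.fcntl(fd, fcntl.F_GETPATH, bytes(1024))
print(raw.rstrip(b"\x00").decode())   # e.g. /tmp or /private/tmp
os.close(fd)
```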

--
components: Library (Lib)
messages: 350635
nosy: twouters, vinay0410
priority: normal
severity: normal
status: open
title: F_GETPATH is not available in fcntl.fcntl
type: enhancement
versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9




[issue37754] Consistency of Unix's shared_memory implementation with windows

2019-08-26 Thread Vinay Sharma


Vinay Sharma  added the comment:

Since advisory locking doesn't work on the file descriptors returned by
shm_open on macOS, I was thinking of an alternative way of fixing this.

The idea is to use a shared semaphore which stores the reference count of the
processes using the shared memory segment.
The resource_tracker will unlink the shared memory and the shared semaphore
when the count stored by the shared semaphore becomes 0.
This ensures that neither the shared memory segment nor the shared semaphore
leaks.

Does this sound good?
Any suggestions would be very helpful.

--




[issue37754] Consistency of Unix's shared_memory implementation with windows

2019-08-24 Thread Vinay Sharma


Vinay Sharma  added the comment:

Also, shm_open returns an integer file descriptor.
When this file descriptor is passed to fcntl.flock on macOS, it throws the
following error:
OSError: [Errno 45] Operation not supported

The same code works fine on Linux.

Therefore, I doubt whether macOS's flock implementation supports locking
shared memory file descriptors.
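
A small sketch of the experiment described above (it pokes at the private
`_fd` attribute of SharedMemory purely for illustration):

```
import fcntl
from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(create=True, size=16)
try:
    # Works on Linux; on macOS this raises
    # OSError: [Errno 45] Operation not supported,
    # because the descriptor comes from shm_open.
    fcntl.flock(shm._fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    fcntl.flock(shm._fd, fcntl.LOCK_UN)
except OSError as exc:
    print(exc)
finally:
    shm.close()
    shm.unlink()
```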

--




[issue37754] Consistency of Unix's shared_memory implementation with windows

2019-08-24 Thread Vinay Sharma


Vinay Sharma  added the comment:

Hi,
I just opened a PR implementing a fix very similar to your suggestions. I am
using advisory locking via fcntl.flock, and I am locking on file descriptors.
If you look at my PR, the resource tracker opens a file
"/dev/shm/" and tries to acquire an exclusive lock on it.
It works great on Linux.
Since the resource_tracker is spawned as a different process, I can't directly
reuse the creating process's file descriptors.

But macOS doesn't have any memory-mapped files created by shm_open in
/dev/shm; in fact, it doesn't store any reference to them in the filesystem at
all.

Therefore it gets difficult to obtain the file descriptor in the resource
tracker. Also, is there a good way to pass file descriptors between processes?

Any ideas on the above will be much appreciated.

--
title: Persistence of Shared Memory Segment after process exits -> Consistency 
of Unix's shared_memory implementation with windows




[issue37754] Persistence of Shared Memory Segment after process exits

2019-08-24 Thread Vinay Sharma


Change by Vinay Sharma :


--
keywords: +patch
pull_requests: +15154
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/15460




[issue37754] Persistence of Shared Memory Segment after process exits

2019-08-24 Thread Vinay Sharma


Change by Vinay Sharma :


--
versions: +Python 3.9




[issue37754] Persistence of Shared Memory Segment after process exits

2019-08-21 Thread Vinay Sharma


Vinay Sharma  added the comment:

> In terms of providing "consistent behavior across platforms that can be
> reasonably supported", the behavior suggested above could not
> reasonably be supported in Windows.

I understand that persistence of a shared memory segment after all the
processes using it have exited can be very difficult to support on Windows.

But after testing shared_memory on Windows, the behaviour on Windows and Unix
is not consistent at the moment.

For instance, say three processes P1, P2 and P3 are trying to communicate
using shared memory:
 --> P1 creates the shared memory block and waits for P2 and P3 to access it.
 --> P2 starts, attaches this shared memory segment, writes some data to it,
and exits.
 --> On Unix, shm_unlink is called as soon as P2 exits.
 --> Now P3 starts and tries to attach the shared memory segment.
 --> On Unix, P3 will not be able to attach the segment, because shm_unlink
has already been called on it.
 --> On Windows, P3 will be able to attach the segment.

One possible solution is to register the shared memory with the resource
tracker only when it is created, and not when it is attached.

I think that would make Unix's implementation more consistent with Windows.

Any thoughts on this will be very helpful.

--




[issue37754] Persistence of Shared Memory Segment after process exits

2019-08-20 Thread Vinay Sharma


Change by Vinay Sharma :


--
type: enhancement -> behavior




[issue37754] Persistence of Shared Memory Segment after process exits

2019-08-19 Thread Vinay Sharma


Vinay Sharma  added the comment:

Hi Davin,
Thanks for replying!

As you suggested, I went through the issue and now understand why segments
should not be automatically created if they don't exist.

But after reading that thread I learned that shared memory is supposed to
persist even after the creating process exits, and can only be freed by
unlink, which I also believe is important functionality required by many use
cases, as you mentioned.

However, that is not the current behaviour.
As soon as the process exits, all the shared memory it created is unlinked.

Also, the documentation currently says "shared memory blocks may outlive the
original process that created them", which is not the case at all.

Currently the resource_tracker unlinks the shared memory by calling unlink, as
registered here:
```
if os.name == 'posix':
    import _multiprocessing
    import _posixshmem

    _CLEANUP_FUNCS.update({
        'semaphore': _multiprocessing.sem_unlink,
        'shared_memory': _posixshmem.shm_unlink,
    })
```

So, is this the expected behaviour? If yes, the documentation should be
updated; if not, the code base should be.

I will be happy to submit a patch in either case.

PS: I personally believe, from my experience, that shared memory segments
should outlive the process unless specified otherwise. An argument such as
persist=True could be added to ensure that the shared_memory segment outlives
the process and can be used by processes which are spawned later.

--




[issue37754] Persistence of Shared Memory Segment after process exits

2019-08-19 Thread Vinay Sharma


Change by Vinay Sharma :


--
title: alter size of segment using multiprocessing.shared_memory -> Persistence 
of Shared Memory Segment after process exits




[issue37703] Inconsistent gather with child exception

2019-08-16 Thread Vinay Sharma


Vinay Sharma  added the comment:

Hi Andrew,
Thanks for replying!

I understand that the behaviour change is not backward compatible. But the
current behaviour is a bit ambiguous, as mentioned by Dimitri, so I have
updated the documentation and opened a pull request.

I would be very glad if you could review it.
Thanks!

Also, would adding an optional argument like force_cancel=False to
``gather.cancel``, which would cancel the children even if the gather is
already done, be backward compatible? Should I open a PR with that change as
well, so that you can review it and see if it looks good?

--




[issue37703] Inconsistent gather with child exception

2019-08-16 Thread Vinay Sharma


Change by Vinay Sharma :


--
pull_requests: +15031
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/15312




[issue37703] Inconsistent gather with child exception

2019-08-16 Thread Vinay Sharma


Change by Vinay Sharma :


Added file: https://bugs.python.org/file48548/gather_cancel_code.patch




[issue37703] Inconsistent gather with child exception

2019-08-16 Thread Vinay Sharma


Vinay Sharma  added the comment:

Hi Dimitri,
You are right, gather.cancel() doesn't work once gather has propagated an
exception. This happens because, after propagating the exception to the
caller, the gather future is marked as done, and cancel() has no effect on a
Future that has already been marked done.
You can verify this by printing the return value of gather.cancel(): it will
be False.

I also believe that the documentation of gather should mention this
explicitly. But depending on whether this is the expected behaviour, the code
base might need changes as well.

Therefore I have created two patches: one updating the documentation to match
the current functionality, and one changing the codebase to support cancelling
even after an exception has been raised.

I will try to contact one of the core developers to decide which one is the
way to go.
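
A small, self-contained sketch of that behaviour (the coroutine names are made
up for illustration):

```
import asyncio

async def boom():
    raise RuntimeError("child failed")

async def slow():
    await asyncio.sleep(0.1)

async def main():
    fut = asyncio.gather(boom(), slow())
    try:
        await fut
    except RuntimeError:
        print(fut.done())     # True: propagating the exception marked it done
        print(fut.cancel())   # False: a done future can no longer be cancelled
    await asyncio.sleep(0.2)  # let the remaining child finish before exit

asyncio.run(main())
```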

--
keywords: +patch
nosy: +vinay0410
Added file: https://bugs.python.org/file48547/gather_cancel_doc.patch




[issue37754] alter size of segment using multiprocessing.shared_memory

2019-08-04 Thread Vinay Sharma


New submission from Vinay Sharma :

Hi,
I am opening this to discuss some possible enhancements to the
multiprocessing.shared_memory module.

I have recently started using multiprocessing.shared_memory and realised that
the module currently provides no way to alter the size of a shared memory
segment; in addition, the process has to know beforehand whether to create a
segment or open an existing one, unlike shm_open in C, where the segment can
be created automatically if it doesn't exist.
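
For illustration, the create-or-attach dance a caller has to do today looks
roughly like this (the segment name and size are made up):

```
from multiprocessing import shared_memory

# Today the caller must know in advance whether the segment already exists;
# the usual workaround is to try to create it and fall back to attaching.
try:
    shm = shared_memory.SharedMemory(name="example-segment", create=True,
                                     size=1024)
except FileExistsError:
    shm = shared_memory.SharedMemory(name="example-segment")
```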


From an end user's perspective I believe these facilities would be really
helpful, and I would be happy to contribute if you believe they are necessary.

I would also like to mention that I accept this may be by design, or due to
some challenges, in which case it would be very helpful to know what they are.

--
components: Library (Lib)
messages: 348980
nosy: davin, pitrou, vinay0410
priority: normal
severity: normal
status: open
title: alter size of segment using multiprocessing.shared_memory
type: enhancement
versions: Python 3.8




[issue37185] use os.memfd_create in multiprocessing.shared_memory?

2019-08-04 Thread Vinay Sharma


Vinay Sharma  added the comment:

Hi @pierreglaser,
I recently started using the shared_memory module in multiprocessing, and you
are right that using memfd_create wouldn't require resource tracking. But I
was wondering: if these memory segments can't be looked up by a unique name,
how will unrelated processes, to which the file descriptor cannot be passed,
use the shared memory segment?

Also, would releasing the segment when all references to it are dropped be the
expected behaviour?

Suppose a process creates a shared memory segment and exits. Five seconds
later another process starts and tries to access the same segment. It won't be
able to, since all references would already have been dropped by the first
process, releasing the memory segment.

Feel free to comment if I have misinterpreted anything.

--
nosy: +vinay0410
