[issue35828] test_multiprocessing_fork - crashes in PyDict_GetItem - segmentation error

2019-03-04 Thread Michael Felt


Michael Felt  added the comment:

I see I already asked how to better make use of this info:

ConnectionRefusedError: [Errno 79] Connection refused
Warning -- files was modified by test_multiprocessing_fork
  Before: []
  After:  ['core']

So, to be more specific: which module, or file, is doing this check? I would
like to do a postmortem analysis of the core dump.
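
(Aside: I believe the check lives in regrtest's environment-saving machinery,
Lib/test/libregrtest/save_env.py (class saved_test_environment), which snapshots
the current working directory before and after each test. A minimal sketch of the
idea - my own code, not the actual regrtest implementation:

    import os

    def snapshot_cwd():
        # what regrtest does, in spirit: list the working directory
        return sorted(os.listdir(os.getcwd()))

    def run_the_test():
        # placeholder for the real test run; pretend it crashed and dumped core
        open("core", "w").close()

    before = snapshot_cwd()
    run_the_test()
    after = snapshot_cwd()
    if before != after:
        print("Warning -- files was modified by the test")
        print("  Before:", before)
        print("  After: ", after)
)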

Thx.


[issue35828] test_multiprocessing_fork - crashes in PyDict_GetItem - segmentation error

2019-02-20 Thread Michael Felt


Michael Felt  added the comment:

Also - this looks like a core dump was 'seen', but later removed.

Warning -- files was modified by test_multiprocessing_forkserver
  Before: []
  After:  ['core']

What can I change so that it does not clean up the core file?


[issue35828] test_multiprocessing_fork - crashes in PyDict_GetItem - segmentation error

2019-02-20 Thread Michael Felt


Michael Felt  added the comment:

Another message that surprises me is:
Warning -- multiprocessing.process._dangling was modified by 
test_multiprocessing_spawn
  Before: <_weakrefset.WeakSet object at 0x3076e810>
  After:  <_weakrefset.WeakSet object at 0x3076e390>

Normally speaking, addresses of 0x30000000 and higher should be "out of bounds",
aka a SEGV - unless special actions are taken to open a memory region above
0x2fffffff - for the default AIX (32-bit) memory model.

Just calling malloc(), afaik, does not automatically create a new memory
segment (0x00000000-0x0fffffff is reserved for kernel CODE,
0x10000000-0x1fffffff is reserved for application CODE, and
0x20000000-0x2fffffff holds .data, .bss, "empty aka not-active", and .stack).
The area between the end of .bss and the stack is the area that sbrk() provides
memory from (for calls to malloc(), e.g.).

In short, is there something I do not understand about Python object
pointers, such that 0x30000000+ values are actually living in the
0x20000000-0x2fffffff space?
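
For reference, id() in CPython returns the object's address, so it is easy to see
where such objects actually land. My working assumption - not established here -
is that these 0x3xxxxxxx addresses come from pymalloc arenas obtained via mmap(),
which under the default 32-bit AIX model are placed above 0x30000000, rather than
from sbrk():

    # quick check: where does a small object such as a WeakSet live?
    # id() is the object's address in CPython.
    import _weakrefset
    import multiprocessing.process

    ws = _weakrefset.WeakSet()
    print(hex(id(ws)))                                  # a fresh WeakSet
    print(hex(id(multiprocessing.process._dangling)))   # the one from the warning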



[issue35828] test_multiprocessing_fork - crashes in PyDict_GetItem - segmentation error

2019-02-20 Thread Michael Felt


Michael Felt  added the comment:

I am still trying to get further with this, but I won't get far enough without 
some help on how to best dig deeper.

For one, it should be leaving a core dump, but the core file never seems to be 
left in the working directory. I know it is dumping core because the AIX "errpt" 
facility tells me it is.
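
One thing I can check (a guess on my part, not something anyone suggested yet) is
whether the core-file size limit in the test process even allows a dump to be
written; rlimits are inherited by the forked children:

    # sketch: report and, if possible, raise RLIMIT_CORE for this process
    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
    print("RLIMIT_CORE soft=%r hard=%r" % (soft, hard))
    if soft != resource.RLIM_INFINITY and soft != hard:
        resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))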

Further, can anyone help me better understand messages such as:
Warning -- Dangling processes: {, }

All I can figure out is that the "parent" process has children that "die". Is 
there anything useful in the SpawnProcess name info?
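
As far as I can tell (my reading, unconfirmed), the warning comes from the test's
teardown noticing Process objects in multiprocessing.process._dangling that are
still alive; the repr between the braces would normally show each child's name and
state, which is what got stripped here. Something like this shows the same kind of
information:

    # illustrative sketch: what a dangling child looks like
    import multiprocessing as mp
    import time

    def child():
        time.sleep(10)

    if __name__ == "__main__":
        p = mp.get_context("spawn").Process(target=child)
        p.start()
        print(mp.active_children())   # e.g. [<SpawnProcess name='SpawnProcess-1' ...>]
        print(repr(p))
        p.terminate()
        p.join()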



[issue35828] test_multiprocessing_fork - crashes in PyDict_GetItem - segmentation error

2019-01-30 Thread Raymond Hettinger


Change by Raymond Hettinger :


--
nosy: +davin



[issue35828] test_multiprocessing_fork - crashes in PyDict_GetItem - segmentation error

2019-01-30 Thread Michael Felt


Michael Felt  added the comment:

After enabling PYTHONTHREADDEBUG=1 I got the dprintf output.

I added line info (as fixed text) as in:
Python/thread_pthread.h:
  +511  PyLockStatus
  +512  PyThread_acquire_lock_timed(PyThread_type_lock lock, PY_TIMEOUT_T microseconds,
  +513                              int intr_flag)
  +514  {
  +515      PyLockStatus success = PY_LOCK_FAILURE;
  +516      pthread_lock *thelock = (pthread_lock *)lock;
  +517      int status, error = 0;
  +518
  +519      dprintf(("519: PyThread_acquire_lock_timed(%p, %lld, %d) called\n",
  +520               lock, microseconds, intr_flag));
  +521
  +522      if (microseconds == 0) {
  +523          status = pthread_mutex_trylock( &thelock->mut );
  +524          if (status != EBUSY)
  +525              CHECK_STATUS_PTHREAD("pthread_mutex_trylock[1]");
  +526      }
  +527      else {
  +528          status = pthread_mutex_lock( &thelock->mut );
  +529          CHECK_STATUS_PTHREAD("pthread_mutex_lock[1]");
  +530      }

and can establish that USE_SEMAPHORES is not being used.
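
(For my own testing, not from the buildbot: the same code path can be driven from
Python, so with the instrumented build and PYTHONTHREADDEBUG=1 the dprintf lines
can be matched to specific acquires; a 3 ms timeout should show up as a
microseconds value of roughly 3000, much like the 2996 seen in the log below.)

    # sketch: exercise PyThread_acquire_lock_timed() from Python
    import threading

    lock = threading.Lock()
    lock.acquire()                     # uncontended acquire
    ok = lock.acquire(timeout=0.003)   # contended, timed acquire -> expected False
    print("timed acquire succeeded:", ok)
    lock.release()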

There are many possible reasons why, I expect. Something re: Python 3.5 
(issue23428) talks about this routine, and there was also something about 
CLOCK_MONOTONIC versus CLOCK_REALTIME.

Further, back in the Python 2.3 days, issue525532 added POSIX semaphore support.

I would love to proceed - but issue23428 in particular makes me hesitant to 
conclude that the logic controlling USE_SEMAPHORES is incorrect.

Help appreciated!

p.s. - deeper details

With PYTHONTHREADDEBUG=1 I no longer get a segmentation fault. Instead I get:

Total duration: 1 min 53 sec
Tests result: NO TEST RUN
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/threading.py", line 917, in _bootstrap_inner
    self.run()
  File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/test/libregrtest/runtest_mp.py", line 145, in run
    stop = self._runtest()
  File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/test/libregrtest/runtest_mp.py", line 135, in _runtest
    result = json.loads(result)
  File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/json/__init__.py", line 348, in loads
    return _default_decoder.decode(s)
  File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Exception in thread Thread-2:
Traceback (most recent call last):
  File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/threading.py", line 917, in _bootstrap_inner
    self.run()
  File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/test/libregrtest/runtest_mp.py", line 145, in run
    stop = self._runtest()
  File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/test/libregrtest/runtest_mp.py", line 135, in _runtest
    result = json.loads(result)
  File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/json/__init__.py", line 348, in loads
    return _default_decoder.decode(s)
  File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

This is prefixed by:
PyThread_allocate_lock called
PyThread_allocate_lock() -> 200a9b40
519: PyThread_acquire_lock_timed(200a9b40, 0, 0) called
577: PyThread_acquire_lock_timed(200a9b40, 0, 0) -> 1
PyThread_release_lock(200a5fe0) called
519: PyThread_acquire_lock_timed(200a9b40, 0, 0) called
577: PyThread_acquire_lock_timed(200a9b40, 0, 0) -> 0
519: PyThread_acquire_lock_timed(200a9b40, 2996, 1) called
519: PyThread_acquire_lock_timed(200fe220, 0, 0) called
577: PyThread_acquire_lock_timed(200fe220, 0, 0) -> 1
PyThread_release_lock(200fe220) called
519: PyThread_acquire_lock_timed(200fe220, 0, 0) called
577: PyThread_acquire_lock_timed(200fe220, 0, 0) -> 1
PyThread_release_lock(200fe220) called
519: PyThread_acquire_lock_timed(20156c00, 0, 0) called
577: PyThread_acquire_lock_timed(20156c00, 0, 0) -> 1
PyThread_release_lock(20156c00) called
519: PyThread_acquire_lock_timed(20156c00, 0, 0) called
577: PyThread_acquire_lock_timed(20156c00, 0, 0) -> 1
PyThread_release_lock(20156c00) called
519: PyThread_acquire_lock_timed(200fe1a0, 0, 0) called
577: PyThread_acquire_lock_timed(200fe1a0, 0, 0) -> 1
PyThread_release_lock(200fe1a0) called
PyThread_free_lock(200fe1a0) called
PyThread_free_lock(200fe220) called
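
One more observation of mine: the JSONDecodeError above is exactly what
json.loads() raises on empty input, which suggests the worker process died before
writing its result line at all, rather than the JSON layer being at fault:

    # reproduces the same message: "Expecting value: line 1 column 1 (char 0)"
    import json

    try:
        json.loads("")
    except json.JSONDecodeError as exc:
        print(exc)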

[issue35828] test_multiprocessing_fork - crashes in PyDict_GetItem - segmentation error

2019-01-30 Thread Michael Felt


Michael Felt  added the comment:

OK, being more specific about the test situation.

When I run ./python -m test test_multiprocessing_fork, all is fine. However, 
when I run it as: ./python -m test -j2 test_multiprocessing_main_handling 
test_multiprocessing_fork

test_multiprocessing_main_handling passes and test_multiprocessing_fork crashes 
(producing a segmentation-fault core dump in the process).
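
Something I still want to try (my own idea, hedged): enabling faulthandler in a
standalone reproduction, so that a SIGSEGV at least prints the Python-level
traceback of every thread before the process dies (regrtest enables it already,
I believe, but it may help outside the test harness):

    # hypothetical standalone reproduction helper, not from the report
    import faulthandler
    faulthandler.enable(all_threads=True)

    import multiprocessing as mp

    def worker(q):
        q.put("hello from child")

    if __name__ == "__main__":
        mp.set_start_method("fork")    # the start method under test
        q = mp.Queue()
        p = mp.Process(target=worker, args=(q,))
        p.start()
        print(q.get())
        p.join()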

--
title: test_multiprocessing_* - crash in PyDict_GetItem - segmentation error -> 
test_multiprocessing_fork - crashes in PyDict_GetItem - segmentation error
