I am hoping someone can assist me. I normally don't care if the _ctypes
module builds or not, but now I need it to build.
I am running Solaris 10 with Sun's C compiler under SunStudio 11.
After running 'configure' and 'make', the _ctypes module fails with the
following error:
cc -xcode=pic32
Martin v. Löwis wrote:
> It seems r67740 shouldn't have been committed. Since this
> is a severe regression, I think I'll have to revert it, and
> release 2.5.4 with just that change.
My understanding of the problem is that clearerr() needs to be called
before any FILE read operations on *some* pl
Martin> Instead, you should commit it into trunk, and then run svnmerge.py
three
Martin> times, namely:
...
Thanks for that cheat sheet. I never would have figured that out on my
own. Well, at least not in a timely fashion.
Skip
On Mon, Dec 22, 2008 at 7:34 PM, Antoine Pitrou wrote:
>
>> Now, we should find a way to benchmark this without having to steal Mike's
>> machine and wait 30 minutes every time.
>
> So, I seem to reproduce it. The following script takes about 15 seconds to
> run and allocates a 2 GB dict which it
Mike Coleman wrote:
> If you plot this, it is clearly quadratic (or worse).
Here's another comparison script that tries to probe the vagaries of the
obmalloc implementation. It looks at the proportional increases in
deallocation times for lists and dicts as the number of contained items
increases
I unfortunately don't have time to work out how obmalloc works myself,
but I wonder if any of the constants in that file might need to scale
somehow with memory size. That is, is it possible that some of them
that work okay with 1G RAM won't work well with (say) 128G or 1024G
(coming soon enough)?
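A minimal sketch of that kind of probe (a hypothetical script, not the one
posted; it just times the teardown of lists and dicts at increasing sizes
with the cyclic GC out of the way):

```python
import gc
import time

def dealloc_time(factory, n):
    """Build a container with n items, then time only its deallocation."""
    gc.disable()
    try:
        obj = factory(n)
        start = time.time()
        del obj          # the only reference dies here
        return time.time() - start
    finally:
        gc.enable()

def make_list(n):
    return [str(i) for i in range(n)]

def make_dict(n):
    return {str(i): None for i in range(n)}

if __name__ == "__main__":
    for n in (10**4, 10**5, 10**6):
        print(n, dealloc_time(make_list, n), dealloc_time(make_dict, n))
```

Plotting the ratio of successive timings should make a quadratic trend
stand out even on a machine with far less RAM than Mike's.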
2008/12/22 Ivan Krstić:
> On Dec 22, 2008, at 6:28 PM, Mike Coleman wrote:
>>
>> For (2), yes, 100% CPU usage.
>
> 100% _user_ CPU usage? (I'm trying to make sure we're not chasing some
> particular degeneration of kmalloc/vmalloc and friends.)
Yes, user. No noticeable sys or wait CPU going on.
Steven D'Aprano wrote:
> This behaviour appears to be specific to deleting dicts, not deleting
> random objects. I haven't yet confirmed that the problem still exists
> in trunk (I hope to have time tonight or tomorrow), but in my previous
> tests deleting millions of items stored in a list of t
Benjamin> If you check it into the trunk, it will find its way into
Benjamin> 2.6, 3.1, and 3.0.
Outstanding!
Thx,
Skip
> Now, we should find a way to benchmark this without having to steal Mike's
> machine and wait 30 minutes every time.
So, I seem to reproduce it. The following script takes about 15 seconds to
run and allocates a 2 GB dict which it deletes at the end (gc disabled of
course).
With 2.4, deleting t
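The script itself is cut off in the archive; a scaled-down sketch of that
style of benchmark (sizes and key format are assumptions here, far below
the ~2 GB original) might look like:

```python
import gc
import time

gc.disable()  # rule out the cyclic collector; obmalloc is the suspect

N = 10**6  # scaled down: the original filled roughly 2 GB
d = {}
for i in range(N):
    # fixed-width string keys, loosely modelled on the 7-character
    # keys mentioned later in the thread
    d["%07d" % i] = i

start = time.time()
del d  # teardown is where the quadratic behaviour showed up
print("deleting the dict took %.2f s" % (time.time() - start))
```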
On Dec 22, 2008, at 6:28 PM, Mike Coleman wrote:
For (2), yes, 100% CPU usage.
100% _user_ CPU usage? (I'm trying to make sure we're not chasing some
particular degeneration of kmalloc/vmalloc and friends.)
--
Ivan Krstić | http://radian.org
On Mon, Dec 22, 2008 at 2:22 PM, Adam Olsen wrote:
> To make sure that's the correct line please recompile python without
> optimizations. GCC happily reorders and merges different parts of a
> function.
>
> Adding a counter in C and recompiling would be a lot faster than using
> a gdb hook.
Oka
On Mon, Dec 22, 2008 at 2:54 PM, Ivan Krstić wrote:
> It's still not clear to me, from reading the whole thread, precisely what
> you're seeing. A self-contained test case, preferably with generated random
> data, would be great, and save everyone a lot of investigation time.
I'm still working on
>> Allocation of a new pool would have to do a linear search in these
>> pointers (finding the arena with the least number of pools);
>
> You mean the least number of free pools, right?
Correct.
> IIUC, the heuristic is to favour
> a small number of busy arenas rather than a lot of sparse ones.
On Mon, Dec 22, 2008 at 4:27 PM, "Martin v. Löwis" wrote:
> You shouldn't check it in four times. But (IMO) you also shouldn't wait
> for somebody else to merge it (I know some people disagree with that
> recommendation).
I don't completely disagree. Certainly, if you want to make sure your
chang
Martin v. Löwis writes:
>
> It then occurred that there are only 64 different values for nfreepools,
> as ARENA_SIZE is 256kiB, and POOL_SIZE is 4kiB. So rather than keeping
> the list sorted, I now propose to maintain 64 lists, accessible in
> an array of doubly-linked lists indexed by
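With ARENA_SIZE = 256 KiB and POOL_SIZE = 4 KiB, nfreepools can only take
values 0..64, so an arena can be moved between buckets in O(1) instead of
being kept in a sorted list. A toy Python model of the idea (the real code
would use intrusive doubly-linked C lists; the `list.remove` here is only
for illustration and is not O(1)):

```python
ARENA_SIZE = 256 * 1024
POOL_SIZE = 4 * 1024
NFREEPOOLS_MAX = ARENA_SIZE // POOL_SIZE  # 64

class Arena:
    def __init__(self, name, nfreepools):
        self.name = name
        self.nfreepools = nfreepools

# buckets[k] holds every arena with exactly k free pools
buckets = [[] for _ in range(NFREEPOOLS_MAX + 1)]

def add_arena(arena):
    buckets[arena.nfreepools].append(arena)

def allocate_pool(arena):
    """Taking a pool from an arena moves it down exactly one bucket."""
    buckets[arena.nfreepools].remove(arena)
    arena.nfreepools -= 1
    buckets[arena.nfreepools].append(arena)

def pick_arena():
    """Favour the arena with the fewest free pools, keeping busy arenas
    busy so that sparse ones can eventually be returned to the OS."""
    for k in range(1, NFREEPOOLS_MAX + 1):
        if buckets[k]:
            return buckets[k][0]
    return None
```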
> I would like to add it to the 2.6 and 3.0 maintenance branch and the 2.x
> trunk and the py3k branch. What is the preferred way to do that? Do I
> really have to do the same task four times or can I check it in once (or
> twice) secure in the belief that someone will come along and do a monster
> I'm currently studying all I can find on Stackless Python, PyPy, and
> the concepts they've brought to Python, and so far I wonder: since
> Stackless Python claims to be 100% compatible with CPython's extensions,
> faster, and brings lots of fun stuff (tasklets, coroutines and no C
> stack), how
On Mon, Dec 22, 2008 at 4:02 PM, wrote:
>
> I have this trivial little test case for test_file.py:
>
>+    def testReadWhenWriting(self):
>+        self.assertRaises(IOError, self.f.read)
>
> I would like to add it to the 2.6 and 3.0 maintenance branch and the 2.x
> trunk and the py3k bra
I have this trivial little test case for test_file.py:
+    def testReadWhenWriting(self):
+        self.assertRaises(IOError, self.f.read)
I would like to add it to the 2.6 and 3.0 maintenance branch and the 2.x
trunk and the py3k branch. What is the preferred way to do that? Do I
rea
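Standalone, the same check might be written as follows (a hypothetical
self-contained version; on Python 3 the raised io.UnsupportedOperation is
a subclass of OSError, which IOError aliases, so the assertion still holds):

```python
import os
import tempfile
import unittest

class ReadWhenWritingTest(unittest.TestCase):
    def setUp(self):
        fd, self.path = tempfile.mkstemp()
        os.close(fd)
        self.f = open(self.path, "w")  # write-only handle

    def tearDown(self):
        self.f.close()
        os.unlink(self.path)

    def testReadWhenWriting(self):
        # reading a file opened for writing must raise, not return junk
        self.assertRaises(IOError, self.f.read)

if __name__ == "__main__":
    unittest.main()
```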
> Investigating further, from one stop, I used gdb to follow the chain
> of pointers in the nextarena and prevarena directions. There were
> 5449 and 112765 links, respectively. maxarenas is 131072.
To reduce the time for keeping sorted lists of arenas, I was first
thinking of a binheap. I had f
On Mon, 22 Dec 2008 11:20:59 pm M.-A. Lemburg wrote:
> On 2008-12-20 23:16, Martin v. Löwis wrote:
> >>> I will try next week to see if I can come up with a smaller,
> >>> submittable example. Thanks.
> >>
> >> These long exit times are usually caused by the garbage collection
> >> of objects. Thi
Hello snakemen and snakewomen,
I'm Pascal Chambon, a French engineer just out of my Telecom school,
blatantly fond of Python, of its miscellaneous offspring, and of
everything around dynamic languages and high-level programming concepts.
I'm currently studying all I can find on stackless py
Not this list, sorry
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe:
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
On Dec 22, 2008, at 4:07 PM, M.-A. Lemburg wrote:
What kinds of objects are you storing in your dictionary? Python
instances, strings, integers?
Answered in a previous message:
On Dec 20, 2008, at 8:09 PM, Mike Coleman wrote:
The dict keys were all uppercase alpha strings of length 7. I do
2008/12/22 Guilherme Polo:
> On Mon, Dec 22, 2008 at 10:06 AM, wrote:
>
> #include "Python.h"
>
> static PyObject *MyErr;
>
> static PyMethodDef module_methods[] = {
>{"raise_test1", (PyCFunction)raise_test1, METH_NOARGS, NULL},
>{"raise_test2", (PyCFunction)raise_test2, METH_NOA
>> If that code is the real problem (in a reproducible test case),
>> then this approach is the only acceptable solution. Disabling
>> long-running code is not acceptable.
>
> By "disabling", I meant disabling the optimization that's trying to
> rearrange the arenas so that more memory can be retu
On 2008-12-22 19:13, Mike Coleman wrote:
> On Mon, Dec 22, 2008 at 6:20 AM, M.-A. Lemburg wrote:
>> BTW: Rather than using a huge in-memory dict, I'd suggest to either
>> use an on-disk dictionary such as the ones found in mxBeeBase or
>> a database.
>
> I really want this to work in-memory. I h
On Dec 22, 2008, at 1:13 PM, Mike Coleman wrote:
On Mon, Dec 22, 2008 at 6:20 AM, M.-A. Lemburg wrote:
BTW: Rather than using a huge in-memory dict, I'd suggest to either
use an on-disk dictionary such as the ones found in mxBeeBase or
a database.
I really want this to work in-memory. I have
On Mon, Dec 22, 2008 at 2:38 PM, "Martin v. Löwis" wrote:
>> Or perhaps there's a smarter way to manage the list of
>> arena/free pool info.
>
> If that code is the real problem (in a reproducible test case),
> then this approach is the only acceptable solution. Disabling
> long-running code is no
> Or perhaps there's a smarter way to manage the list of
> arena/free pool info.
If that code is the real problem (in a reproducible test case),
then this approach is the only acceptable solution. Disabling
long-running code is not acceptable.
Regards,
Martin
On Mon, Dec 22, 2008 at 11:01 AM, Mike Coleman wrote:
> Thanks for all of the useful suggestions. Here are some preliminary results.
>
> With still gc.disable(), at the end of the program I first did a
> gc.collect(), which took about five minutes. (So, reason enough not
> to gc.enable(), at lea
On Mon, Dec 22, 2008 at 6:20 AM, M.-A. Lemburg wrote:
> BTW: Rather than using a huge in-memory dict, I'd suggest to either
> use an on-disk dictionary such as the ones found in mxBeeBase or
> a database.
I really want this to work in-memory. I have 64G RAM, and I'm only
trying to use 45G of it
Thanks for all of the useful suggestions. Here are some preliminary results.
With still gc.disable(), at the end of the program I first did a
gc.collect(), which took about five minutes. (So, reason enough not
to gc.enable(), at least without Antoine's patch.)
After that, I did a .clear() on th
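A small sketch of the kind of measurement being described (sizes shrunk
drastically; the point is the shape of the experiment, not the numbers):

```python
import gc
import time

gc.disable()  # no collections while the big dict is being built

# stand-in for the huge dict from the report
data = {("%07d" % i): i for i in range(200000)}

t0 = time.time()
freed = gc.collect()  # one explicit full collection, timed
t_collect = time.time() - t0

t0 = time.time()
data.clear()          # drop every item, keep the (empty) dict object
t_clear = time.time() - t0

print("gc.collect(): %d objects, %.3f s" % (freed, t_collect))
print("dict.clear(): %.3f s" % t_clear)
```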
On Dec 22, 2008, at 11:38 AM, Antoine Pitrou wrote:
Barry Warsaw writes:
Please make sure these issues are release blockers. Fixes before
January 5th would be able to make it into 3.0.1.
Should http://bugs.python.org/issue4486 be a
Barry Warsaw writes:
>
> Please make sure these issues are release blockers. Fixes before
> January 5th would be able to make it into 3.0.1.
Should http://bugs.python.org/issue4486 be a release blocker as well?
(I don't think so, but...)
On Dec 21, 2008, at 6:56 AM, Dmitry Vasiliev wrote:
Barry Warsaw wrote:
Thanks. I've bumped that to release blocker for now. If there are any
other 'high' bugs that you want considered for 3.0.1, please make them
release blockers too, for now.
On Dec 19, 2008, at 9:44 PM, Martin v. Löwis wrote:
Do you think we can get 3.0.1 out on December 24th?
I won't have physical access to my build machine from December 24th to
January 3rd.
Okay. Let's just push it until after the new year then.
On Dec 22, 2008, at 9:35 AM, s...@pobox.com wrote:
I don't think there is a test case which fails with it applied and passes
with it removed. If not, I think it might be worthwhile to write such a
test even if it's used temporarily just to test the change. I wrote a
trivial test case:
If
> Should we add this to the active branches (2.6, trunk, py3k, 3.0)?
Sure! Go ahead.
For 2.5.3, I'd rather not add an additional test case, but merely
revert the patch.
Regards,
Martin
Martin> It seems r67740 shouldn't have been committed. Since this is a
Martin> severe regression, I think I'll have to revert it, and release
Martin> 2.5.4 with just that change.
Martin> Unless I hear otherwise, I would release Python 2.5.4 (without a
Martin> release candidate
On Mon, Dec 22, 2008 at 10:45 AM, Guilherme Polo wrote:
> On Mon, Dec 22, 2008 at 10:06 AM, wrote:
>> On Mon, Dec 22, 2008 at 03:29, Guilherme Polo wrote:
>>> On Sun, Dec 21, 2008 at 11:02 PM, wrote:
Hello,
I'm trying to implement a custom exception that has to carry some
u
On Mon, Dec 22, 2008 at 10:06 AM, wrote:
> On Mon, Dec 22, 2008 at 03:29, Guilherme Polo wrote:
>> On Sun, Dec 21, 2008 at 11:02 PM, wrote:
>>> Hello,
>>>
>>> I'm trying to implement a custom exception that has to carry some
>>> useful info by means of instance members, to be used like:
>>>
>>>
On 2008-12-20 23:16, Martin v. Löwis wrote:
>>> I will try next week to see if I can come up with a smaller,
>>> submittable example. Thanks.
>> These long exit times are usually caused by the garbage collection
>> of objects. This can be a very time consuming task.
>
> I doubt that. The long exi
It seems r67740 shouldn't have been committed. Since this
is a severe regression, I think I'll have to revert it, and
release 2.5.4 with just that change.
Unless I hear otherwise, I would release Python 2.5.4
(without a release candidate) tomorrow.
Regards,
Martin