Re: [Python-Dev] Problems with definition of _POSIX_C_SOURCE

2005-03-17 Thread Martin v. Löwis
Jack Jansen wrote:
The comment in pyconfig.h suggests that defining _POSIX_C_SOURCE may 
enable certain features, but the actual system headers appear to work 
the other way around: it seems that defining this will disable features 
that are not strict Posix.

Does anyone know what the real meaning of this define is? Because if it 
is the former then Python is right, but if it is the latter Python 
really has no business defining it.
As you can see from the formal definition that Tim gave you, both are
right: the macro causes the system headers to provide the functions that
POSIX says they should provide, and to remove functions that POSIX does
not mention, except when those are enabled through other feature selection macros.
In general Python isn't 100% POSIX-compliant, because it'll use all
sorts of platform-dependent (and thus potentially non-POSIX-compliant)
code...
Python is 100% POSIX compliant. It also uses extensions to POSIX on
platforms that provide them, but if these extensions are not available,
it falls back to just not using them.
So Python really uses "POSIX with extensions". A careful operating system
developer will understand that this is a useful programming model, and
provide feature selection macros to enable features that go beyond
POSIX. That's why you can see various feature selection macros at
the beginning of configure.in.
In case you wonder why Python defines this in the first place: some
platforms really need the definition, or else they don't provide
the proper header contents (i.e. they fall back to ISO C for some
headers), most notably Tru64 and HP-UX. Other systems implement
different versions of the same API, e.g. Solaris, and defining
_POSIX_C_SOURCE makes these systems provide the POSIX version of the
API.
This problem is currently preventing Python 2.4.1 from compiling on this
platform, so if anyone can provide any insight that would be very
helpful...
Just define _THIS_PLATFORM_SOURCE. If there is no such define, complain
to the vendor of this platform, and ask Apple to provide such a macro.
If this falls on deaf ears, go to the block "Some systems cannot stand
_XOPEN_SOURCE being defined at all;" in configure.in and make another
entry. Make sure that entry:
- lists the precise reason for the entry (e.g. what
  structure/type/function gets hidden that shouldn't be hidden)
- lists your name as the contact to ask for details
- is specific to the particular release of this platform, so
  if future versions of this platform fix the bug, the work-around
  of disabling _XOPEN_SOURCE and _POSIX_C_SOURCE can be removed
Regards,
Martin
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] properties with decorators (was: code blocks using 'for' loops and generators)

2005-03-17 Thread Nick Coghlan
Josiah Carlson wrote:
Samuele Pedroni <[EMAIL PROTECTED]> wrote:
[snip]
well, I think some people desire a more streamlined way of writing code
like:
def f(...)
...
def g(...)
...
x = h(...,f,g)
[property, setting up callbacks etc are cases of this]

I think properties are the most used case where this kind of thing would
be nice.  Though the only thing that I've ever had a gripe with
properties is that I didn't like the trailing property() call - which is
why I wrote a property helper decorator (a use can be seen in [1]).  But
my needs are small, so maybe this kind of thing isn't sufficient for
those who write hundreds of properties.
[snip]
I'm still trying to decide if the following is an elegant solution to defining 
properties, or a horrible abuse of function decorators:

def as_property(func):
    return property(*func())

class Example(object):
    @as_property
    def x():
        def get(self):
            print "Getting!"
        def set(self, val):
            print "Setting!"
        def delete(self):
            print "Deleting!"
        return get, set, delete
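For what it's worth, the same trick carries straight over to Python 3; in this sketch the accessor bodies are invented placeholders (returning and storing values rather than printing) so the behaviour is visible:

```python
# Sketch of the decorator-builds-a-property idea in Python 3 syntax.
def as_property(func):
    # func() returns (getter, setter, deleter); unpack into property()
    return property(*func())

class Example:
    @as_property
    def x():
        def get(self):
            return "Getting!"
        def set(self, val):
            self._val = val      # invented placeholder behaviour
        def delete(self):
            del self._val
        return get, set, delete

e = Example()
print(e.x)   # the get() accessor runs -> "Getting!"
e.x = 42     # the set() accessor stores the value on the instance
```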
Cheers,
Nick.
--
Nick Coghlan   |   [EMAIL PROTECTED]   |   Brisbane, Australia
---
http://boredomandlaziness.skystorm.net


Re: [Python-Dev] Rationale for sum()'s design?

2005-03-17 Thread Nick Coghlan
Guido van Rossum wrote:
I guess that leaves Alex's question of whether or not supplying a string of some
description as the initial value can be legitimately translated to:
    if isinstance(initial, basestring):
        return initial + type(initial)().join(seq)

If you're trying to get people in the habit of writing sum(x, "")
instead of "".join(x), I fear that they'll try sum(x, " ") instead of
" ".join(x), and be sadly disappointed.
That works for me as a reason not to provide this feature.
It's somewhat heartening when discussions like this turn out to show that the 
original design was right after all :)
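The pitfall Guido describes can be made concrete. Under the hypothetical translation quoted above (real `sum()` rejects string initial values outright), a separator-style initial value is not interspersed between items:

```python
# Hypothetical translation discussed above -- NOT real sum() behaviour.
def proposed_sum(seq, initial):
    if isinstance(initial, str):
        # initial is prepended once; the empty string does the joining
        return initial + type(initial)().join(seq)

words = ["spam", "eggs"]
print(proposed_sum(words, " "))  # " spameggs" -- one leading space only
print(" ".join(words))           # "spam eggs" -- what the user wanted
```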

Cheers,
Nick.
--
Nick Coghlan   |   [EMAIL PROTECTED]   |   Brisbane, Australia
---
http://boredomandlaziness.skystorm.net


Re: [Python-Dev] itertools.walk()

2005-03-17 Thread Nick Coghlan
Bob Ippolito wrote:
I'm not sure why it's useful to explode the stack with all that 
recursion?  Mine didn't do that.  The control flow is nearly identical, 
but it looks more fragile (and you would get some really evil stack 
trace if iter_factory(foo) happened to raise something other than 
TypeError).
It was a holdover from my first version which *was* recursive. When I switched 
to using your chaining style, I didn't think to get rid of the now unneeded 
recursion.

So just drop the recursive calls to 'walk', and it should be back to your 
structure.
For the 'infinite recursion on basestring' example, PJE at one point suggested an
"if item is iterable" guard that immediately yielded the item. Allowing
breadth-first operation requires something a little different, though:

from itertools import chain

def walk(iterable, depth_first=True, atomic_types=(basestring,),
         iter_factory=iter):
    itr = iter(iterable)
    while True:
        for item in itr:
            if isinstance(item, atomic_types):
                yield item
                continue
            try:
                subitr = iter_factory(item)
            except TypeError:
                yield item
                continue
            # Block simple cycles (like characters)
            try:
                subitem = subitr.next()
            except StopIteration:
                continue
            if subitem is item:
                yield subitem
                continue
            if depth_first:
                itr = chain([subitem], subitr, itr)
            else:
                itr = chain(itr, [subitem], subitr)
            break
        else:
            break
Py> seq
[['123', '456'], 'abc', 'abc', 'abc', 'abc', ['xyz']]
Py> list(walk(seq))
['123', '456', 'abc', 'abc', 'abc', 'abc', 'xyz']
Py> list(walk(seq, depth_first=False))
['abc', 'abc', 'abc', 'abc', '123', '456', 'xyz']
Py> list(walk(seq, atomic_types=()))
['1', '2', '3', '4', '5', '6', 'a', 'b', 'c', 'a', 'b', 'c', 'a', 'b', 'c', 'a',
 'b', 'c', 'x', 'y', 'z']
Py> list(walk(seq, depth_first=False, atomic_types=()))
['a', 'b', 'c', 'a', 'b', 'c', 'a', 'b', 'c', 'a', 'b', 'c', '1', '2', '3', '4',
 '5', '6', 'x', 'y', 'z']
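For readers on Python 3 (where basestring and the iterator .next() method are gone), a direct port of the generator above behaves the same way; the small sample sequence here is invented for demonstration:

```python
from itertools import chain

def walk(iterable, depth_first=True, atomic_types=(str,), iter_factory=iter):
    """Flatten nested iterables; atomic_types are yielded unexpanded."""
    itr = iter(iterable)
    while True:
        for item in itr:
            if isinstance(item, atomic_types):
                yield item
                continue
            try:
                subitr = iter_factory(item)
            except TypeError:
                yield item          # not iterable at all
                continue
            # Block simple cycles (a 1-char string iterates to itself)
            try:
                subitem = next(subitr)
            except StopIteration:
                continue            # empty iterable: nothing to yield
            if subitem is item:
                yield subitem
                continue
            if depth_first:
                itr = chain([subitem], subitr, itr)
            else:
                itr = chain(itr, [subitem], subitr)
            break
        else:
            break

seq = [['123', '456'], 'abc', ['xyz']]
print(list(walk(seq)))                     # ['123', '456', 'abc', 'xyz']
print(list(walk(seq, depth_first=False)))  # ['abc', '123', '456', 'xyz']
```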
Raymond may simply decide to keep things simple instead, of course.
Cheers,
Nick.
--
Nick Coghlan   |   [EMAIL PROTECTED]   |   Brisbane, Australia
---
http://boredomandlaziness.skystorm.net


[Python-Dev] Python 2.4 won the "Jolt productivity award" last night

2005-03-17 Thread Guido van Rossum
Python 2.4 won the "Jolt productivity award" last night. That's the
runner-up award; in our category, languages and development tools, the
Jolt (the category winner) went to Eclipse 3.0; the other runners-up
were IntelliJ and RealBasic (no comment :-).

As usual, open source projects got several awards; both Subversion
and Hibernate (an open source Java persistence library) got Jolts.

Maybe from now on we should add "award-winning" whenever we refer to
Python 2.4. Perhaps someone can find the website where the results are
summarized (I'm sure there is one, but I'm too lazy to Google).

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Python 2.4 won the "Jolt productivity award" last night

2005-03-17 Thread Gareth McCaughan
On Thursday 2005-03-17 15:42, Guido van Rossum wrote:

> Python 2.4 won the "Jolt productivity award" last night. That's the
> runner-up award; in our category, languages and development tools, the
> Jolt (the category winner) went to Eclipse 3.0; the other runners-up
> were IntelliJ and RealBasic (no comment :-).
> 
> Like usually, open source projects got several awards; both Subversion
> and Hibernate (an open source Java persistency library) got Jolts.
> 
> Maybe from now on we should add "award-winning" whenever we refer to
> Python 2.4. If someone can find out the website where the results are
> summarized (I'm sure there is one but I'm too lazy to Google).

http://www.sdmagazine.com/jolts/ is the usual page, but it hasn't
been updated yet and therefore still shows last year's winners.
I haven't found anything with more up-to-date results.

This year's finalists are listed at
http://www.sdmagazine.com/jolts/15th_jolt_finalists.html .

-- 
g



Re: [Python-Dev] properties with decorators (was: code blocks using 'for' loops and generators)

2005-03-17 Thread Josiah Carlson

Nick Coghlan <[EMAIL PROTECTED]> wrote:
> 
> Josiah Carlson wrote:
> > Samuele Pedroni <[EMAIL PROTECTED]> wrote:
> [snip]
> >>well, I think some people desire a more streamlined way of writing code
> >>like:
> >>
> >>def f(...)
> >>...
> >>def g(...)
> >>...
> >>x = h(...,f,g)
> >>
> >>[property, setting up callbacks etc are cases of this]
> > 
> > 
> > 
> > I think properties are the most used case where this kind of thing would
> > be nice.  Though the only thing that I've ever had a gripe with
> > properties is that I didn't like the trailing property() call - which is
> > why I wrote a property helper decorator (a use can be seen in [1]).  But
> > my needs are small, so maybe this kind of thing isn't sufficient for
> > those who write hundreds of properties.
> [snip]
> 
> I'm still trying to decide if the following is an elegant solution to 
> defining 
> properties, or a horrible abuse of function decorators:

[snip example]

The only issue is that you are left with a closure afterwards, no big
deal, unless you've got hundreds of thousands of examples of this.  I
like your method anyways.

 - Josiah



Re: [Python-Dev] properties with decorators (was: code blocks using 'for' loops and generators)

2005-03-17 Thread Jp Calderone
On Thu, 17 Mar 2005 09:01:27 -0800, Josiah Carlson <[EMAIL PROTECTED]> wrote:
>
> Nick Coghlan <[EMAIL PROTECTED]> wrote:
> > 
> > Josiah Carlson wrote:
> > > 
> > > [snip]
> > > 
> > > I think properties are the most used case where this kind of thing would
> > > be nice.  Though the only thing that I've ever had a gripe with
> > > properties is that I didn't like the trailing property() call - which is
> > > why I wrote a property helper decorator (a use can be seen in [1]).  But
> > > my needs are small, so maybe this kind of thing isn't sufficient for
> > > those who write hundreds of properties.
> > [snip]
> > 
> > I'm still trying to decide if the following is an elegant solution to 
> > defining 
> > properties, or a horrible abuse of function decorators:
> 
> [snip example]
> 
> The only issue is that you are left with a closure afterwards, no big
> deal, unless you've got hundreds of thousands of examples of this.  I
> like your method anyways.

  No closed-over variables, actually.  So no closure.

> 
>  - Josiah
> 

  Jp


Re: [Python-Dev] properties with decorators (was: code blocks using

2005-03-17 Thread Josiah Carlson

Jp Calderone <[EMAIL PROTECTED]> wrote:
> 
> On Thu, 17 Mar 2005 09:01:27 -0800, Josiah Carlson <[EMAIL PROTECTED]> wrote:
> >
> > Nick Coghlan <[EMAIL PROTECTED]> wrote:
> > > 
> > > Josiah Carlson wrote:
> > > > 
> > > > [snip]
> > > > 
> > > > I think properties are the most used case where this kind of thing would
> > > > be nice.  Though the only thing that I've ever had a gripe with
> > > > properties is that I didn't like the trailing property() call - which is
> > > > why I wrote a property helper decorator (a use can be seen in [1]).  But
> > > > my needs are small, so maybe this kind of thing isn't sufficient for
> > > > those who write hundreds of properties.
> > > [snip]
> > > 
> > > I'm still trying to decide if the following is an elegant solution to 
> > > defining 
> > > properties, or a horrible abuse of function decorators:
> > 
> > [snip example]
> > 
> > The only issue is that you are left with a closure afterwards, no big
> > deal, unless you've got hundreds of thousands of examples of this.  I
> > like your method anyways.
> 
>   No closed over variables, actually.  So no closure.

My mistake (caused by a misunderstanding of when closures are not
created, obviously).
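For the record, CPython only creates a closure when an inner function actually references a name from the enclosing scope; a minimal sketch (function names invented for illustration):

```python
def no_closure():
    def get(self):
        return "got"      # references nothing from no_closure's scope
    return get

def with_closure():
    y = 1
    def inner():
        return y          # references y from the enclosing scope
    return inner

print(no_closure().__closure__)    # None -- no cells captured
print(with_closure().__closure__)  # a 1-tuple holding the cell for y
```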

 - Josiah



[Python-Dev] thread semantics for file objects

2005-03-17 Thread Jeremy Hylton
Are the thread semantics for file objects documented anywhere?  I
don't see anything in the library manual, which is where I expected to
find it.  It looks like read and write are atomic by virtue of fread
and fwrite being atomic.

I'm less sure what guarantees, if any, the other methods attempt to
provide.  For example, it looks like concurrent calls to writelines()
will interleave entire lines, but not parts of lines.  Concurrent
calls to readlines() provide insane results, but I don't know if
that's a bug or a feature.  Specifically, if your file has a line that
is longer than the internal buffer size SMALLCHUNK you're likely to
get parts of that line chopped up into different lines in the
resulting return values.

If we can come up with intended semantics, I'd be willing to prepare a
patch for the documentation.

Jeremy


Re: [Python-Dev] thread semantics for file objects

2005-03-17 Thread Aahz
On Thu, Mar 17, 2005, Jeremy Hylton wrote:
>
> Are the thread semantics for file objects documented anywhere?  I
> don't see anything in the library manual, which is where I expected to
> find it.  It looks like read and write are atomic by virtue of fread
> and fwrite being atomic.

Uncle Timmy will no doubt agree with me: the semantics don't matter.
NEVER, NEVER access the same file object from multiple threads, unless
you're using a lock.  And even using a lock is stupid.
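For completeness, a minimal sketch of the lock discipline mentioned above (the helper name and the StringIO stand-in are invented for illustration; any shared writable object works the same way):

```python
import io
import threading

def locked_writer(f, lock, lines):
    # Hold the lock for each write so whole lines are never interleaved.
    for line in lines:
        with lock:
            f.write(line)

buf = io.StringIO()        # stands in for a shared file object
lock = threading.Lock()
threads = [
    threading.Thread(
        target=locked_writer,
        args=(buf, lock, ["%d-%d\n" % (i, j) for j in range(100)]),
    )
    for i in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# 4 writers x 100 lines each: all 400 lines arrive intact, in some order.
print(len(buf.getvalue().splitlines()))  # 400
```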
-- 
Aahz ([EMAIL PROTECTED])   <*> http://www.pythoncraft.com/

"The joy of coding Python should be in seeing short, concise, readable
classes that express a lot of action in a small amount of clear code -- 
not in reams of trivial code that bores the reader to death."  --GvR


Re: [Python-Dev] thread semantics for file objects

2005-03-17 Thread Jeremy Hylton
On Thu, 17 Mar 2005 16:25:44 -0500, Aahz <[EMAIL PROTECTED]> wrote:
> On Thu, Mar 17, 2005, Jeremy Hylton wrote:
> >
> > Are the thread semantics for file objects documented anywhere?  I
> > don't see anything in the library manual, which is where I expected to
> > find it.  It looks like read and write are atomic by virtue of fread
> > and fwrite being atomic.
> 
> Uncle Timmy will no doubt agree with me: the semantics don't matter.
> NEVER, NEVER access the same file object from multiple threads, unless
> you're using a lock.  And even using a lock is stupid.

I'm not looking for your permission or approval.  I just want to know
what semantics are intended.  If the documentation wants to say that
the semantics are undefined, that's okay, although I think we need to
say more, because some behavior has been provided by the implementation
for a long time.

Jeremy


Re: [Python-Dev] thread semantics for file objects

2005-03-17 Thread Samuele Pedroni
Jeremy Hylton wrote:
On Thu, 17 Mar 2005 16:25:44 -0500, Aahz <[EMAIL PROTECTED]> wrote:
On Thu, Mar 17, 2005, Jeremy Hylton wrote:
Are the thread semantics for file objects documented anywhere?  I
don't see anything in the library manual, which is where I expected to
find it.  It looks like read and write are atomic by virtue of fread
and fwrite being atomic.
Uncle Timmy will no doubt agree with me: the semantics don't matter.
NEVER, NEVER access the same file object from multiple threads, unless
you're using a lock.  And even using a lock is stupid.

I'm not looking for your permission or approval.  I just want to know
what semantics are intended.  If the documentation wants to say that
the semantics are undefined, that's okay, although I think we need to
say more, because some behavior has been provided by the implementation
for a long time.
I think this is left unspecified by Java too, for example. I would be
surprised if Jython offered the same characteristics as CPython in this
respect.


Re: [Python-Dev] thread semantics for file objects

2005-03-17 Thread Martin v. Löwis
Jeremy Hylton wrote:
Are the thread semantics for file objects documented anywhere?  I
don't see anything in the library manual, which is where I expected to
find it.  It looks like read and write are atomic by virtue of fread
and fwrite being atomic.
Uncle Timmy will no doubt agree with me: the semantics don't matter.
NEVER, NEVER access the same file object from multiple threads, unless
you're using a lock.  And even using a lock is stupid.

I'm not looking for your permission or approval.
Literally, the answer to your question is "no". In fact, Python does not
specify *any* interleaving semantics for threads whatsoever. The only
statement to this respect is
"""
Not all built-in functions that may block waiting for I/O allow other
threads to run.  (The most popular ones (\function{time.sleep()},
\method{\var{file}.read()}, \function{select.select()}) work as
expected.)
"""
Of course, this says it works as expected, without saying what actually
is expected.
I just want to know what semantics are intended.
But this is not what you've asked :-)
Anyway, expected by whom? Aahz clearly expects that the semantics are
unspecified, as he expects that nobody ever even attempts to read the
same file from multiple threads.
If the documentation wants to say that
the semantics are undefined, that's okay,
Formally, there is no need to say that something is undefined. Not
defining anything is sufficient. So the semantics *are* undefined,
whether the documentation "wants" to say that or not.
> although I think we need to say
> more because some behavior has been provided by the implementation for
> a long time.
That immediately rings the Jython bell, and perhaps also the PyPy
bell.
So if you want to say something, just go ahead. Before I make the
documentation want to say that, I would like to make it say more
basic things first (e.g. that stores to variables are atomic).
Regards,
Martin


Re: [Python-Dev] thread semantics for file objects

2005-03-17 Thread Tim Peters
[Jeremy Hylton]
> Are the thread semantics for file objects documented anywhere?

No.  At base level, they're inherited from the C stdio implementation.
 Since the C standard doesn't even mention threads, that's all
platform-dependent.  POSIX defines thread semantics for file I/O, but
fat lot of good that does you on Windows, etc.

> I don't see anything in the library manual, which is where I expected to
> find it.  It looks like read and write are atomic by virtue of fread
> and fwrite being atomic.

I wouldn't consider this anything more than a CPython implementation
accident in the cases where it appears to apply.  For example, in
universal-newlines mode, are you sure f.read(n) always maps to exactly
one fread() call?

> I'm less sure what guarantees, if any, the other methods attempt to
> provide.

I don't believe they're _trying_ to provide anything specific.

> For example, it looks like concurrent calls to writelines() will interleave 
> entire
> lines, but not parts of lines.  Concurrent calls to readlines() provide insane
> results, but I don't know if that's a bug or a feature.  Specifically, if 
> your file has a
> line that is longer than the internal buffer size SMALLCHUNK you're likely to
> get parts of that line chopped up into different lines in the resulting 
> return values.

And you're _still_ not thinking "implementation accidents" ?

> If we can come up with intended semantics, I'd be willing to prepare a
> patch for the documentation.

I think Aahz was on target here:

NEVER, NEVER access the same file object from multiple threads, unless
you're using a lock.

And here he went overboard:

And even using a lock is stupid.

ZODB's FileStorage is bristling with locks protecting multi-threaded
access to file objects, therefore that can't be stupid.  QED


Re: [Python-Dev] thread semantics for file objects

2005-03-17 Thread Jeremy Hylton
On Thu, 17 Mar 2005 23:04:16 +0100, "Martin v. Löwis"
<[EMAIL PROTECTED]> wrote:
> Jeremy Hylton wrote:
> >>>Are the thread semantics for file objects documented anywhere?  I
> >>>don't see anything in the library manual, which is where I expected to
> >>>find it.  It looks like read and write are atomic by virtue of fread
> >>>and fwrite being atomic.
> >>
> >>Uncle Timmy will no doubt agree with me: the semantics don't matter.
> >>NEVER, NEVER access the same file object from multiple threads, unless
> >>you're using a lock.  And even using a lock is stupid.
> >
> >
> > I'm not looking for your permission or approval.
> 
> Literally, the answer to your question is "no". In fact, Python does not
> specify *any* interleaving semantics for threads whatsoever. The only
> statement to this respect is

I'm surprised that it does not, for example, guarantee that reads and
writes are atomic, since CPython relies on fread and fwrite, which are
atomic.

Also, there are other operations that go to the trouble of calling
flockfile().  What's the point if we don't provide any guarantees?
<0.6 wink>.  If it is not part of the specified behavior, then I
suppose it's a quality-of-implementation issue.  Either way, it would
be helpful if the Python documentation said something, e.g. either that
you can rely on readline() being thread-safe, or that you can't but the
current CPython implementation happens to be.

readline() seemed like an interesting case because readlines() doesn't
have the same implementation and the behavior is different.  So, as
another example, you could ask whether readlines() has a bug or not.

Jeremy


Re: [Python-Dev] thread semantics for file objects

2005-03-17 Thread Jeremy Hylton
On Thu, 17 Mar 2005 17:13:05 -0500, Tim Peters <[EMAIL PROTECTED]> wrote:
> [Jeremy Hylton]
> > Are the thread semantics for file objects documented anywhere?
> 
> No.  At base level, they're inherited from the C stdio implementation.
>  Since the C standard doesn't even mention threads, that's all
> platform-dependent.  POSIX defines thread semantics for file I/O, but
> fat lot of good that does you on Windows, etc.

Fair enough.  I didn't consider Windows or other non-POSIX platforms at all.

> 
> > I don't see anything in the library manual, which is where I expected to
> > find it.  It looks like read and write are atomic by virtue of fread
> > and fwrite being atomic.
> 
> I wouldn't consider this as more than CPython implementation accidents
> in the cases it appears to apply.  For example, in universal-newlines
> mode, are you sure f.read(n) always maps to exactly one fread() call?

Universal newline reads and get_line() both lock the stream if the
platform supports it.  So I expect that they are atomic on those
platforms.

But it certainly seems safe to conclude this is a quality of
implementation issue.  Otherwise, why bother with the flockfile() at
all, right?  Or is there some correctness issue I'm not seeing that
requires the locking for some basic safety in the implementation?

> And even using a lock is stupid.
> 
> ZODB's FileStorage is bristling with locks protecting multi-threaded
> access to file objects, therefore that can't be stupid.  QED

Using a lock seemed like a good idea there and still seems like a good
idea now :-).

jeremy


Re: [Python-Dev] thread semantics for file objects

2005-03-17 Thread Aahz
On Thu, Mar 17, 2005, Tim Peters wrote:
>
> I think Aahz was on target here:
> 
> NEVER, NEVER access the same file object from multiple threads, unless
> you're using a lock.
> 
> And here he went overboard:
> 
> And even using a lock is stupid.
> 
> ZODB's FileStorage is bristling with locks protecting multi-threaded
> access to file objects, therefore that can't be stupid.  QED

Heh.  And how much time have you spent debugging race conditions and
such?  That's the thrust of my point, same as we tell people to avoid
locks and use Queue instead.  I know that my statement isn't absolutely
true in the sense that it's possible to make code work that accesses
external objects across threads.  (Which is why I didn't garnish that
part with emphasis.)  But it's still stupid, 95-99% of the time.

Actually, I did skip over one other counter-example: stdout is usually
safe across threads provided one builds up a single string.  Still not
something to rely on.
-- 
Aahz ([EMAIL PROTECTED])   <*> http://www.pythoncraft.com/

"The joy of coding Python should be in seeing short, concise, readable
classes that express a lot of action in a small amount of clear code -- 
not in reams of trivial code that bores the reader to death."  --GvR


Re: [Python-Dev] thread semantics for file objects

2005-03-17 Thread Martin v. Löwis
Jeremy Hylton wrote:
Are the thread semantics for file objects documented anywhere?
Literally, the answer to your question is "no".
I'm surprised that it does not, for example, guarantee that reads and
writes are atomic, since CPython relies on fread and fwrite which are
atomic.
Where is the connection? Why would anything that CPython requires from
the C library have any effect on Python's documentation?
The documentation only changes if somebody writes it.
Nobody cares, so nobody writes documentation.
Remember, you were asking what behaviour is *documented*, not what
behaviour is guaranteed by the implementation (in a specific version
of the implementation).
Also, there are other operations that go to the trouble of calling
flockfile().  What's the point if we don't provide any guarantees?
Because nobody cares about guarantees in the documentation. Instead,
people care about observable behaviour. So if you get a crash due to a
race condition, you care, you report a bug, the Python developer agrees
it's a bug, and fixes it by adding synchronization.
Nobody has reported a bug against the Python documentation.
<0.6 wink>.  If it is not part of the specified behavior, then I
suppose it's a quality-of-implementation issue.  Either way, it would
be helpful if the Python documentation said something, e.g. either that
you can rely on readline() being thread-safe, or that you can't but the
current CPython implementation happens to be.
It would be helpful to whom? To you? I doubt this, as you will be
the one who writes the documentation :-)
readline() seemed like an interesting case because readlines() doesn't
have the same implementation and the behavior is different.  So, as
another example, you could ask whether readlines() has a bug or not.
Nobody knows. It depends on the Python developer who reviews the bug
report. Most likely, he considers it tricky and leaves it open for
somebody else. If his name is Martin, he will find that this is not
a bug (because it does not cause a crash, and does not contradict
the documentation), and he will reclassify it as a wishlist item. If
his name is Tim, and if he has a good day, he will fix it, and add
a comment on floating point numbers.
Regards,
Martin


Re: [Python-Dev] thread semantics for file objects

2005-03-17 Thread Tim Peters
[Jeremy Hylton]
...
> Universal newline reads and get_line() both lock the stream if the
> platform supports it.  So I expect that they are atomic on those
> platforms.

Well, certainly not get_line().  That locks and unlocks the stream
_inside_ an enclosing for-loop.  Looks quite possible for different
threads to read different parts of "the same line" if multiple threads
are trying to do get_line() simultaneously.  It releases the GIL
inside the for-loop too, so other threads _can_ sneak in.

We put a lot of work into speeding those getc()-in-a-loop functions. 
There was undocumented agreement at the time that they "should be"
thread-safe in this sense:  provided the platform C stdio wasn't
thread-braindead, then if you had N threads all simultaneously reading
a file object containing B bytes, while nobody wrote to that file
object, then the total number of bytes seen by all N threads would sum
to B at the time they all saw EOF.  This was a much stronger guarantee
than Perl provided at the time (and, for all I know, still provides),
and we (at least I) wrote little test programs at the time
demonstrating that the total number of bytes Perl saw in this case was
unpredictable, while Python's did sum to B.

Of course Perl didn't document any of this either, and in Pythonland it
was clearly specific to the horrid tricks in CPython's fileobject.c.

> But it certainly seems safe to conclude this is a quality of
> implementation issue.

Or a sheer pigheadedness-of-implementor issue.

>  Otherwise, why bother with the flockfile() at all, right?  Or is there some
> correctness issue I'm not seeing that requires the locking for some basic
> safety in the implementation.

There are correctness issues, but we still ignore them; locking
relieves, but doesn't solve, them.  For example, C doesn't (and POSIX
doesn't either!) define what happens if you mix reads with writes on a
file opened for update unless a file-positioning operation (like seek)
intervenes, and that's pretty easy for threads to run afoul of. 
Python does nothing to stop you from trying, and behavior if you do is
truly all over the map across boxes.  IIRC, one of the multi-threaded
test programs I mentioned above provoked ugly death in the bowels of
MS's I/O libraries when I threw an undisciplined writer thread into
the mix too.  This was reported to MS, and their response was "so
don't do that -- it's undefined".  Locking the stream at least cuts down
the chance of that happening, although that's not the primary reason
for it.

Heck, we still have a years-open critical bug against segfaults when
one thread tries to close a file object while another thread is
reading from it, right?

>>> And even using a lock is stupid.

>> ZODB's FileStorage is bristling with locks protecting multi-threaded
>> access to file objects, therefore that can't be stupid.  QED

> Using a lock seemed like a good idea there and still seems like a good
> idea now :-).

Damn straight, and we're certain it has nothing to do with those large
runs of NUL bytes that sometimes overwrite people's critical data for
no reason at all.


[Python-Dev] python-dev Summary for 2005-03-01 through 2005-03-15 [draft]

2005-03-17 Thread Brett C.
Amazingly on time thanks to the quarter being over.  You can't see me jumping 
up and down in joy over that fact, but I am while trying not to hit the ceiling 
as I do it (for those of you who have never met me, I'm 6'6" tall, so jumping 
in a room is not always the smartest thing for me, especially when ceiling fans 
are involved).

Since I will be on a plane most of tomorrow heading to DC for PyCon I won't get 
to this any sooner than Saturday while I am at the sprints.  Might send it out 
Saturday or Sunday during a lull in the sprint, so please get corrections and 
additions in by then.

--
=====================
Summary Announcements
=====================
-----------------------------
Second to last summary for me
-----------------------------
Just a reminder, after this Summary there is only one more left for me to 
write.  After that Tim Lesher, Tony Meyer, and Steven Bethard will be taking over.

-----------------
See you at PyCon!
-----------------
PyCon_ is practically upon us!  If you are going to be there, great!  Please 
feel free to say hello if you run into me (will be at the sprints and the 
conference Wednesday and Thursday; skipping Friday to see a friend).  Always 
happy to stop-and-chat.

.. _PyCon: http://www.pycon.org/

------------------------
2.4.1 should be out soon
------------------------
Python 2.4.1c1 is out.  Very shortly c2 will be released.  Assuming no major 
issues come up, 2.4.1 final will be out.

But in order to make sure no issues come up, we need the code to be tested! 
Please get the code and run the regression tests.  If you are on a UNIX system 
it is as easy as running ``make test`` (``make testall`` is even better).  The 
tests can also be run on non-UNIX systems; see 
http://docs.python.org/lib/regrtest.html on how.

=========
Summaries
=========
------------------------
2.4.1 should be out soon
------------------------
Python 2.4.1c1 was released, but enough bugs were found and subsequently fixed 
that a c2 release will occur before 2.4.1 final comes out.

Contributing threads:
  - `2.4.1c1 March 10th, 2.4.1 March 17th <>`__
  - `Failing tests: marshal, warnings <>`__
  - `BRANCH FREEZE for 2.4.1rc1,  UTC, 2005-03-10 <>`__
  - `branch release24-maint is unfrozen, 2.4.1rc2? <>`__
  - `os.access and Unicode <>`__
  - `RELEASED Python 2.4.1, release candidate 1 <>`__
  - `distutils fix for building Zope against Python 2.4.1c1 <>`__
  - `Python2.4.1c1 and win32com <>`__
  - `Open issues for 2.4.1 <>`__
-------------------------------------------
Getting state of all threads in interpreter
-------------------------------------------
Florent Guillaume wrote some code for Zope that returned the current state of 
all threads in the interpreter, regardless of whether they were hung or not. 
Tim Peters suggested someone write up some code so that this could be made 
available in Python itself.

Contributing threads:
  - `Useful thread project for 2.5? <>`__
---------------------------------
No new features in micro releases
---------------------------------
A bug in os.access() not allowing Unicode strings triggered the discussion of 
whether it was a bugfix to repair the issue or a new feature.  In the end it 
was decided it was a bugfix.  But the point was made that micro releases
should never have any new features, no matter how small.

Contributing threads:
  - `[Python-checkins] python/dist/src/Modules  ossaudiodev.c, 1.35, 1.36 <>`__
  - `No new features <>`__
  - `os.access and Unicode <>`__
  - `rationale for the no-new-features approach <>`__
-------------------------------------
Python wins Jolt "Productivity Award"
-------------------------------------
Python was runner-up in the `15th annual Jolt Awards`_ in the category of 
"Languages and Development Environments", being given the "Productivity Award". 
 Python is now award-winning.  =)

.. _15th annual Jolt Awards: 
http://www.sdmagazine.com/jolts/15th_jolt_finalists.html

Contributing threads:
  - `FWD: SD MAgazine.com - Jolt Awards Winners <>`__
  - `Python 2.4 won the "Jolt productivity award" last night <>`__
------------------------------
New built-ins: any() and all()
------------------------------
Python 2.5 gains two new built-ins: any(), which returns True if the iterable 
passed to it contains any true items, and all(), which returns True if all the 
items in the iterable passed to it are true.
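Their behaviour matches these pure-Python equivalents (a sketch; the real builtins are implemented in C and also short-circuit like this):

```python
def my_any(iterable):
    # True as soon as one true item is found; False for an empty iterable.
    for item in iterable:
        if item:
            return True
    return False

def my_all(iterable):
    # False as soon as one false item is found; True for an empty iterable.
    for item in iterable:
        if not item:
            return False
    return True

print(my_any([0, "", 3]))    # True: 3 is true
print(my_all([1, "a", []]))  # False: [] is false
print(my_all([]))            # True: vacuously true
```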

Contributing threads:
  - `Adding any() and all() <>`__

--------------------------------
Abbreviating list comprehensions
--------------------------------
The idea of allowing list comprehensions when the item being appended to the 
new list is passed directly in was proposed: ``[x in seq if f(x)]`` would be 
equivalent to ``[x for x in seq if f(x)]``.

The debate on this one is still going, but my gut says it won't be accepted; 
TOOWTDI and all.
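For reference, these are the spellings that already exist, which the proposed abbreviation would duplicate:

```python
seq = range(10)

def f(x):
    return x % 2 == 0

# The full comprehension the proposal would abbreviate:
print([x for x in seq if f(x)])   # [0, 2, 4, 6, 8]

# The functional spelling that also already exists:
print(list(filter(f, seq)))       # [0, 2, 4, 6, 8]
```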

Contributing threads:
  - `Adding any() and all() <>`__
  - `comprehension abbreviation <>`__
-
sum() sema

[Python-Dev] Draft PEP to make file objects support non-blocking mode.

2005-03-17 Thread Donovan Baarda
G'day,

the recent thread about thread semantics for file objects reminded me I
had a draft pep for extending file objects to support non-blocking
mode. 

This is handy for handling files in async applications (the non-threaded
way of doing things concurrently).

It's pretty rough, but if I fuss over it any more I'll never get it
out...

-- 
Donovan Baarda <[EMAIL PROTECTED]>
http://minkirri.apana.org.au/~abo/
PEP: XXX
Title: Make builtin file objects support non-blocking mode
Version: $Revision: 1.0 $
Last-Modified: $Date: 2005/03/18 11:34:00 $
Author: Donovan Baarda <[EMAIL PROTECTED]>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 06-Jan-2005
Python-Version: 3.5
Post-History: 06-Jan-2005


Abstract
========

This PEP suggests a way that the existing builtin file type could be 
extended to better support non-blocking read and write modes required for 
asynchronous applications using things like select and popen2.


Rationale
=========

Many Python library methods and classes like select.select(), os.popen2(),
and subprocess.Popen() return and/or operate on builtin file objects.
However, even simple applications of these methods and classes require the
files to be in non-blocking mode.

Currently the built-in file type does not support non-blocking mode very
well.  Setting a file into non-blocking mode and reading or writing to it
can only be done reliably by operating on the file.fileno() file descriptor.
This requires using the fcntl and os module file descriptor manipulation
methods.
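Concretely, the descriptor-level dance looks like this on a POSIX system (a sketch of what the PEP wants to hide behind a method; the helper name here is invented):

```python
import fcntl
import os

def set_file_nonblocking(f):
    # Fetch the descriptor's status flags, then turn on O_NONBLOCK.
    fd = f.fileno()
    flags = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

# Quick check on the read end of a pipe:
r, w = os.pipe()
rf = os.fdopen(r, "rb", buffering=0)
set_file_nonblocking(rf)
flags = fcntl.fcntl(rf.fileno(), fcntl.F_GETFL)
print(bool(flags & os.O_NONBLOCK))   # True
rf.close()
os.close(w)
```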


Details
=======

The documentation of file.read() warns: "Also note that when in non-blocking
mode, less data than what was requested may be returned, even if no size
parameter was given".  An empty string is returned to indicate an EOF
condition.  It is possible that file.read() in non-blocking mode will not
produce any data before EOF is reached.  Currently there is no documented
way to identify the difference between reaching EOF and an empty
non-blocking read.

The behaviour of file.write() in non-blocking mode is undocumented.
When writing to a file in non-blocking mode, it is possible that not all of
the data gets written.  Currently there is no documented way of handling or
indicating a partial write.

The file.read() and file.write() methods are implemented using the
underlying C read() and write() functions.  As a side effect of this, they
have the following undocumented behaviour when operating on non-blocking
files:

A file.write() that fails to write all the provided data immediately will
write part of the data, then raise IOError with an errno of EAGAIN.  There
is no indication how much of the data was successfully written.

A file.read() that fails to read all the requested data immediately will
return the partial data that was read.  A file.read() that fails to read any
data immediately will raise IOError with an errno of EAGAIN.


Proposed Changes
================

What is required is to add a setblocking() method that simplifies setting
non-blocking mode, and to extend and document read() and write() so they can
be reliably used in non-blocking mode.


file.setblocking(flag) Extension
--------------------------------

This method implements the socket.setblocking() method for file objects.  If
flag is 0, the file is set to non-blocking mode, else to blocking mode.


file.read([size]) Changes
-------------------------

The read method's current behaviour needs to be documented, so its actual
behaviour can be used to differentiate between an empty non-blocking read,
and EOF.  This means recording that IOError(EAGAIN) is raised for an empty
non-blocking read.
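The EAGAIN-versus-EOF distinction is easy to see on a pipe. This sketch uses the descriptor-level os functions (including the later os.set_blocking(), which did not exist when this PEP was drafted) to show the behaviour the PEP wants documented for file objects (POSIX only):

```python
import errno
import os

r, w = os.pipe()
os.set_blocking(r, False)     # the idea the PEP's setblocking(0) expresses

# Empty pipe, writer still open: the non-blocking read fails with EAGAIN.
try:
    os.read(r, 100)
    err = None
except OSError as e:
    err = e.errno

# Writer closed: an empty read now means genuine end-of-file.
os.close(w)
eof = os.read(r, 100)
os.close(r)

print(err == errno.EAGAIN, eof)   # True b''
```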


file.write(str) Changes
-----------------------

The write method needs to have a useful behaviour for partial non-blocking
writes defined, implemented, and documented.  This includes returning how
many bytes of "str" are successfully written, and raising IOError(EAGAIN)
for an unsuccessful write (one that failed to write anything).


Impact of Changes
=================

As these changes are primarily extensions, they should not have much impact
on any existing code.

The file.read() changes are only documenting current behaviour. This can
have no impact on any existing code.

The file.write() change makes this method return an int instead of returning
nothing (None). The only code this could affect would be something relying
on file.write() returning None. I suspect there is no code that would do
this.

The file.setblocking() change adds a new method. The only existing code this
could affect is code that checks for the presence/absence of a setblocking
method on a file. There may be code out there that does this to
differentiate between a file and a socket. As there are much better ways to
do this, I suspect that there would be no code that does this.


Examples
========

For example, the following simple code using popen2 will "hang" if the
huge_in string is larger than the os buffering can read/write in one hit.
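The example code itself was lost in the archive; the classic popen2 deadlock it refers to looks roughly like this (sketched with the later subprocess module, using a deliberately small payload so the sketch actually terminates, and assuming a Unix `cat` binary is available):

```python
import subprocess

payload = b"hello\n"   # make this several megabytes and the script deadlocks

p = subprocess.Popen(["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)

# With a huge payload this write blocks once the OS pipe buffer fills:
# cat stops reading because nobody is draining its stdout yet.
p.stdin.write(payload)
p.stdin.close()
out = p.stdout.read()
p.wait()
print(out)   # b'hello\n'
```

Non-blocking file objects (or select on them) are one way out of this trap; subprocess.Popen.communicate() is the other.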

  

Re: [Python-Dev] thread semantics for file objects

2005-03-17 Thread Jeremy Hylton
On Thu, 17 Mar 2005 23:57:52 +0100, "Martin v. Löwis"
<[EMAIL PROTECTED]> wrote:
> Remember, you were asking what behaviour is *documented*, not what
> behaviour is guaranteed by the implementation (in a specific version
> of the implementation).

Martin,

I think you're trying to find more finesse in my question than I ever
intended.  I intended to ask -- hey, what are the semantics we intend
in this case?  since the documentation doesn't say, we could improve
them by capturing the intended semantics.

> > Also, there are other operations that go to the trouble of calling
> > flockfile().  What's the point if we don't provide any guarantees?
> 
> Because nobody cares about guarantees in the documentation. Instead,
> people care about observable behaviour. So if you get a crash due to a
> race condition, you care, you report a bug, the Python developer agrees
> it's a bug, and fixes it by adding synchronization.

As Tim later reported, this wasn't to address a crash, but to appease a
pigheaded developer :-).  I'm surprised by your claim that whether
something is a bug depends on the person who reviews it.  In practice,
this may be the case, but I've always been under the impression that
there was rough consensus about what constituted a bug and what a
feature.  I'd certainly say it's a goal to strive for.

It sounds like the weakest intended behavior we have is the one Tim
reported:  "provided the platform C stdio wasn't thread-braindead,
then if you had N threads all simultaneously reading a file object
containing B bytes, while nobody wrote to that file object, then the
total number of bytes seen by all N threads would sum
to B at the time they all saw EOF."  It seems to me like a good idea
to document this intended behavior somewhere.

Jeremy


[Python-Dev] Faster Set.discard() method?

2005-03-17 Thread Andrew McNamara
To avoid the exception in the discard method, it could be implemented as:

def discard(self, element):
    """Remove an element from a set if it is a member.

    If the element is not a member, do nothing.
    """
    try:
        self._data.pop(element, None)
    except TypeError:
        transform = getattr(element, "__as_temporarily_immutable__", None)
        if transform is None:
            raise # re-raise the TypeError exception we caught
        del self._data[transform()]

Currently, it's implemented as the much clearer:

try:
    self.remove(element)
except KeyError:
    pass

But the dict.pop method is about 12 times faster. Is this worth doing?
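The difference is easy to reproduce with a hypothetical micro-benchmark (not the sets.Set code itself); the cost sits on the missing-element path, where the try/except spelling pays for raising and catching KeyError:

```python
import timeit

def discard_pop(data, element):
    # dict.pop with a default never raises for a missing key.
    data.pop(element, None)

def discard_del(data, element):
    # The sets.Set style: attempt the removal and swallow KeyError.
    try:
        del data[element]
    except KeyError:
        pass

d = dict.fromkeys(range(100))
discard_pop(d, 0)       # present: removed
discard_del(d, 1)       # present: removed
discard_pop(d, -1)      # missing: silently does nothing
discard_del(d, -2)      # missing: exception raised and swallowed

# The gap shows up on the missing-element path; the exact factor
# depends on the interpreter version.
t_pop = timeit.timeit(lambda: discard_pop(d, -1), number=10000)
t_del = timeit.timeit(lambda: discard_del(d, -1), number=10000)
print(0 not in d and 1 not in d and 2 in d)   # True
```

When the element is present, the two spellings are much closer; the dramatic factor comes entirely from the exception machinery.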

-- 
Andrew McNamara, Senior Developer, Object Craft
http://www.object-craft.com.au/


RE: [Python-Dev] Faster Set.discard() method?

2005-03-17 Thread Tony Meyer
> To avoid the exception in the discard method, it could be 
> implemented as:
> 
> def discard(self, element):
> """Remove an element from a set if it is a member.
> 
> If the element is not a member, do nothing.
> """
> try:
> self._data.pop(element, None)
> except TypeError:
> transform = getattr(element, 
> "__as_temporarily_immutable__", None)
> if transform is None:
> raise # re-raise the TypeError exception we caught
> del self._data[transform()]
[...]
> But the dict.pop method is about 12 times faster. Is this worth doing?

The 2.4 builtin set's discard function looks like it does roughly the same
as the 2.3 sets.Set.  Have you tried comparing a C version of your version
with the 2.4 set to see if there are speedups there, too?

IMO keeping the sets.Set version as clean and readable as possible is nice,
since the reason this exists is for other implementations (Jython, PyPy,
...) and documentation, right?  OTOH, speeding up the CPython implementation
is nice and it's read by many fewer people.

=Tony.Meyer



Re: [Python-Dev] Faster Set.discard() method?

2005-03-17 Thread Andrew McNamara
>> But the dict.pop method is about 12 times faster. Is this worth doing?
>
>The 2.4 builtin set's discard function looks like it does roughly the same
>as the 2.3 sets.Set.  Have you tried comparing a C version of your version
>with the 2.4 set to see if there are speedups there, too?

Ah. I had forgotten it was builtin - I'd found the python implementation
and concluded the C implementation didn't make it into 2.4 for some
reason... 8-)

Yes, the builtin set.discard() method is already faster than dict.pop().

>IMO keeping the sets.Set version as clean and readable as possible is nice,
>since the reason this exists is for other implementations (Jython, PyPy,
>...) and documentation, right?  OTOH, speeding up the CPython implementation
>is nice and it's read by many fewer people.

No, you're right - making sets.Set less readable than it already is would
be a step backwards. On the other hand, Jython and PyPy are already in
trouble - the builtin set() is not entirely compatible with sets.Set.

-- 
Andrew McNamara, Senior Developer, Object Craft
http://www.object-craft.com.au/


RE: [Python-Dev] Faster Set.discard() method?

2005-03-17 Thread Tony Meyer
>>> But the dict.pop method is about 12 times faster. Is this 
>>> worth doing?
>>
>> The 2.4 builtin set's discard function looks like it does 
>> roughly the same as the 2.3 sets.Set.  Have you tried comparing
>> a C version of your version with the 2.4 set to see if there are
>> speedups there, too?
> 
> Ah. I had forgotten it was builtin - I'd found the python 
> implementation and concluded the C implementation didn't make
> it into 2.4 for some reason... 8-)
> 
> Yes, the builtin set.discard() method is already faster than 
> dict.pop().

The C implementation has this code:

"""
if (PyDict_DelItem(so->data, item) == -1) {
if (!PyErr_ExceptionMatches(PyExc_KeyError))
return NULL;
PyErr_Clear();
}
"""

Which is more-or-less the same as the sets.Set version, right?  What I was
wondering was whether changing that C to a C version of your dict.pop()
version would also result in speedups.  Are Exceptions really that slow,
even at the C level?

=Tony.Meyer



Re: [Python-Dev] Faster Set.discard() method?

2005-03-17 Thread Andrew McNamara
>The C implementation has this code:
>
>"""
>   if (PyDict_DelItem(so->data, item) == -1) {
>   if (!PyErr_ExceptionMatches(PyExc_KeyError))
>   return NULL;
>   PyErr_Clear();
>   }
>"""
>
>Which is more-or-less the same as the sets.Set version, right?  What I was
>wondering was whether changing that C to a C version of your dict.pop()
>version would also result in speedups.  Are Exceptions really that slow,
>even at the C level?

No, exceptions are fast at the C level - all they do is set a flag. The
expense of exceptions is saving and restoring Python frames, I think,
which doesn't happen in this case. So the current implementation is
ideal for C code - clear and fast.

-- 
Andrew McNamara, Senior Developer, Object Craft
http://www.object-craft.com.au/


[Python-Dev] RELEASED Python 2.4.1, release candidate 2

2005-03-17 Thread Anthony Baxter

On behalf of the Python development team and the Python community, I'm
happy to announce the release of Python 2.4.1 (release candidate 2).

Python 2.4.1 is a bug-fix release. See the release notes at the website
(also available as Misc/NEWS in the source distribution) for details of
the bugs squished in this release.

Assuming no major problems crop up, a final release of Python 2.4.1 will
be out around the 29th of March - straight after PyCon.

For more information on Python 2.4.1, including download links for
various platforms, release notes, and known issues, please see:

http://www.python.org/2.4.1

Highlights of this new release include:

  - Bug fixes. According to the release notes, several dozen bugs
have been fixed, including a fix for the SimpleXMLRPCServer 
security issue (PSF-2005-001).

  - A handful of other bugs discovered in the first release candidate 
have been fixed in this version.

Highlights of the previous major Python release (2.4) are available 
from the Python 2.4 page, at

http://www.python.org/2.4/highlights.html

Enjoy the new release,
Anthony

Anthony Baxter
[EMAIL PROTECTED]
Python Release Manager
(on behalf of the entire python-dev team)




Re: [Python-Dev] thread semantics for file objects

2005-03-17 Thread Martin v. Löwis
Jeremy Hylton wrote:
It sounds like the weakest intended behavior we have is the one Tim
reported:  "provided the platform C stdio wasn't thread-braindead,
then if you had N threads all simultaneously reading a file object
containing B bytes, while nobody wrote to that file object, then the
total number of bytes seen by all N threads would sum
to B at the time they all saw EOF."  It seems to me like a good idea
to document this intended behavior somewhere.
The guarantee that "we" want to make is certainly stronger: if the
threads all read from the same file, each will get a series of "chunks".
The guarantee is that it is possible to combine the chunks in a way to
get the original contents of the file (i.e. not only the sum of the
bytes is correct, but also the contents).
However, I see little value in adding this specific guarantee to the
documentation when so many other aspects of thread interleaving
are unspecified.
For example, if a thread reads a dictionary simultaneously with a write
in another thread, and the read and the write deal with different
keys, there is a guarantee that they won't affect each other. If they
operate on the same key, the read either gets the old value, or the
new value, but not both. And so on.
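The dict property Martin describes can be sketched like this, relying on CPython's GIL making individual dict operations atomic (illustrative, not normative):

```python
import threading

d = {"a": 0, "b": 0}

def writer():
    for i in range(100000):
        d["a"] = i          # a single STORE_SUBSCR opcode: atomic under the GIL

def reader(out):
    for _ in range(100000):
        out.append(d["b"])  # a different key: never affected by the writer

seen = []
t1 = threading.Thread(target=writer)
t2 = threading.Thread(target=reader, args=(seen,))
t1.start(); t2.start()
t1.join(); t2.join()
print(all(v == 0 for v in seen))   # True: the two keys never interfered
```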
Writing down all these properties does little good, IMO. This includes
your proposed property of file reads: anybody reading your statement
will think "of course it works this way - why even mention it".
Regards,
Martin