Talin wrote:
I don't have any specific syntax proposals, but I notice that the suite
that follows the switch statement is not a normal suite, but a
restricted one, and I am wondering if we could come up with a syntax
that avoids having a special suite.
I don't have K&R handy, but I'm pretty
Ka-Ping Yee wrote:
And from a syntax perspective, it's a bad idea. x[] is much
more often a typo than an intentional attempt to index a
zero-dimensional array.
but how often is it a typo?
for example, judging from c.l.python traffic, forgetting to add a return
statement is quite common,
Ka-Ping Yee wrote:
Quite a few people have expressed interest in having UUID
functionality in the standard library, and previously on this
list some suggested possibly using the uuid.py module i wrote:
http://zesty.ca/python/uuid.py
+1!
This module provides a UUID class, functions
M.-A. Lemburg wrote:
You can download a current snapshot from:
http://www.egenix.com/files/python/pybench-2.0-2006-06-09.zip
believe it or not, but this hangs on my machine, under 2.5 trunk. and
it hangs hard; neither control-c, break, or the task manager manages to
kill it.
if it's any
[EMAIL PROTECTED] wrote:
Fredrik> 6. Combine 2 and 3: require the user to pass in a MAC argument
Fredrik> to uuid1, but provide a SlowGetMacAddress helper that wraps
Fredrik> the existing code.
Or make the MAC address an optional arg to uuid1. If given, use it. If
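for reference: the uuid module that eventually landed in the stdlib does exactly this -- uuid1() takes an optional node argument, and uuid4() sidesteps the MAC question entirely. a quick sketch (modern Python; the MAC value below is made up for illustration):

```python
import uuid

# uuid1() embeds a node (MAC) value; passing node= explicitly skips
# the slow platform-specific MAC lookup discussed in the thread.
# (0x001122334455 is a made-up address, for illustration only.)
explicit = uuid.uuid1(node=0x001122334455)
assert explicit.node == 0x001122334455
assert explicit.version == 1

# uuid4() needs no MAC at all: it is purely random.
random_id = uuid.uuid4()
assert random_id.version == 4
```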
M.-A. Lemburg wrote:
Still, here's the timeit.py measurement of the PythonFunctionCall
test (note that I've scaled down the test in terms of number
of rounds for timeit.py):
Python 2.4:
10 loops, best of 3: 21.9 msec per loop
10 loops, best of 3: 21.8 msec per loop
10 loops, best of 3:
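a cross-check of this kind can be scripted with timeit directly; a minimal sketch (the workload function here is a stand-in, not the actual pybench test):

```python
import timeit

def call_target():
    # stand-in for the function-call workload being measured
    pass

# "10 loops, best of 3" corresponds to number=10, repeat=3 at the
# command line; a larger number per loop steadies the reading here.
timer = timeit.Timer("call_target()", globals=globals())
best = min(timer.repeat(repeat=3, number=100000))
print("best of 3: %.3f msec per loop" % (best * 1000.0))
```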
M.-A. Lemburg wrote:
sigh I put the headings for the timeit.py output on the
wrong blocks. Thanks for pointing this out.
so how do you explain the Try/Except results, where timeit and pybench
seem to agree?
/F
___
Python-Dev mailing list
M.-A. Lemburg wrote:
The pybench results match those of timeit.py on my test machine
in both cases. I just mixed up the headers when I wrote the email.
on a line-by-line basis?
Test name | minimum run-time | average run-time
this
M.-A. Lemburg wrote:
The pybench results match those of timeit.py on my test machine
in both cases.
but they don't match the timeit results on similar machines, nor do they
reflect what was done at the sprint.
Tools/pybench ~/projects/Python/Installation/bin/python Calls.py
10 loops,
M.-A. Lemburg wrote:
The results were produced by pybench 2.0 and use time.time
on Linux, plus a different calibration strategy. As a result
these timings are a lot more repeatable than with pybench 1.3
and I've confirmed the timings using several runs to make sure.
can you check in 2.0?
M.-A. Lemburg wrote:
Huh ? They do show the speedups you achieved at the sprint.
the results you just posted appear to show a 20% slowdown for function
calls, and a 10% speedup for exceptions.
both things were optimized at the sprint, and the improvements were
confirmed on several machines.
M.-A. Lemburg wrote:
One interesting difference I found while testing on Windows
vs. Linux is that the StringMappings test have quite a different
run-time on both systems: around 2500ms on Windows vs. 590ms
on Linux (on Python 2.4). UnicodeMappings doesn't show such
a significant difference.
M.-A. Lemburg wrote:
Some more interesting results from comparing Python 2.4 (other) against
the current SVN snapshot (this):
been there, done that, found the results lacking.
we spent a large part of the first NFS day to investigate all
reported slowdowns, and found that only one slowdown
Raymond Hettinger wrote:
When the result of an expression is None, the interactive interpreter
correctly suppresses the display of the result. However, it also
suppresses the underscore assignment. I'm not sure if that is correct
or desirable because a subsequent statement has no way
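the suppression Raymond describes is implemented in sys.displayhook, which can be exercised outside the interactive prompt; a short sketch (modern Python):

```python
import sys
import builtins

# The default displayhook prints a non-None value and rebinds
# builtins._ to it; a None value is suppressed entirely, so _
# keeps its previous binding -- the behaviour described above.
sys.displayhook(42)      # prints 42 and rebinds _
assert builtins._ == 42
sys.displayhook(None)    # prints nothing; _ is left untouched
assert builtins._ == 42
```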
Thomas Heller wrote:
In Include/structmember.h, there is no T_... constant for Py_ssize_t
member fields. Should there be one?
do you need one? if so, I see no reason why you cannot add it...
/F
Jim Jewett wrote:
The existing logging that she is replacing is done through methods of
the dispatcher class. The dispatcher class is only a portion of the
whole module.
the dispatcher class is never used on its own; it's a base class for
user-defined communication classes.
asyncore users
Jim Jewett wrote:
This does argue in favor of allowing the more intrusive additions to
handlers and default configuration. It would be useful to have a
handler that emitted only Apache log format records, and saved them
(by default) to a rotating file rather than stderr. (And it *might*
make
M.-A. Lemburg wrote:
This example is a bit misleading, since chances are high that
the benchmark will get a good priority bump by the scheduler.
which makes it run infinitely fast? what planet are you buying your
hardware on? ;-)
/F
M.-A. Lemburg wrote:
* Linux:
time.clock()
- not usable; I get timings with error interval of about 30%
with differences in steps of 100ms
resource.getrusage()
- error interval of less than 10%; overall 0.5%
with differences in steps of 10ms
hmm. I could have
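on Unix, the getrusage-based process timer MAL mentions boils down to something like this sketch (Unix-only; the helper name is made up):

```python
import resource

def cpu_seconds():
    # user + system CPU time consumed by this process so far;
    # granularity depends on the kernel tick (hence the ~10ms steps)
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return usage.ru_utime + usage.ru_stime

before = cpu_seconds()
total = sum(range(10 ** 6))  # burn a little CPU between samples
after = cpu_seconds()
assert after >= before >= 0.0
```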
Phillip J. Eby wrote:
If this *has* to be added to the modules that don't currently do any
logging, can we please delay the import until it's actually needed?
now that we've had a needforspeed sprint, it's clear that it's time to
start working on slowing things down again ;-)
I think it
Martin v. Löwis wrote:
since process time is *sampled*, not measured, process time isn't exactly
invulnerable either.
I can't share that view. The scheduler knows *exactly* what thread is
running on the processor at any time, and that thread won't change
until the scheduler makes it
Martin v. Löwis wrote:
Sure: when a thread doesn't consume its entire quantum, accounting
becomes difficult. Still, if the scheduler reads the current time
when scheduling, it measures the time consumed.
yeah, but the point is that it *doesn't* read the current time: all the
system does is
Tim Peters wrote:
Maybe this varies by Linux flavor or version? While the article above
was published in 2001, Googling didn't turn up any hint that Linux
jiffies have actually gone away, or become better loved, since then.
well, on x86, they have changed from 10 ms in 2.4 to 1 ms in early
Neal Norwitz wrote:
Any ideas?
this is a recent change, so it looks like the box simply didn't get
around to rebuilding the unicodeobject module.
(I'm beginning to wonder if I didn't forget to add some header file
dependencies somewhere during the stringlib refactoring, but none of the
other
M.-A. Lemburg wrote:
Of course, but then changes to try-except logic can interfere
with the performance of setting up method calls. This is what
pybench then uncovers.
I think the only thing PyBench has uncovered is that you're convinced that it's
always right, and everybody else is always
M.-A. Lemburg wrote:
I believe that using wall-clock timers
for benchmarking is not a good approach due to the high
noise level. Process time timers typically have a lower
resolution, but give a better picture of the actual
run-time of your code and also don't exhibit as much noise
as the
Terry Reedy wrote:
But even better, the way to go to run comparison timings is to use a system
with as little other stuff going on as possible. For Windows, this means
rebooting in safe mode, waiting until the system is quiescent, and then
running the timing test with *nothing* else active
M.-A. Lemburg wrote:
I just had an idea: if we could get each test to run
inside a single time slice assigned by the OS scheduler,
then we could benefit from the better resolution of the
hardware timers while still keeping the noise to a
minimum.
I suppose this could be achieved by:
*
M.-A. Lemburg wrote:
Seriously, I've been using and running pybench for years
and even though tweaks to the interpreter do sometimes
result in speedups or slow-downs where you wouldn't expect
them (due to the interpreter using the Python objects),
they are reproducible and often enough have
M.-A. Lemburg wrote:
AFAIK, there were no real issues with pybench, only with the
fact that time.clock() (the timer used by pybench) is wall-time
on Windows and thus an MP3-player running in the background
will cause some serious noise in the measurements
oh, please; as I mentioned back
Tim Peters wrote:
"abc".count("", 100)
1
u"abc".count(u"", 100)
1
which is the same as
"abc"[100:].count("")
1
"abc".find("", 100)
100
u"abc".find(u"", 100)
100
today, although the idea that find() can return an index that doesn't
exist in the string is particularly jarring. Since we also have:
Neal Norwitz wrote:
I've been starting to get some of the buildbots working again. There
was some massive problem on May 25 where a ton of extra files were
left around. I can't remember if I saw something about that at the
NFS sprint or not.
bob's new _struct module was checked in on May
Georg Brandl wrote:
I'd be satisfied with a deprecation warning for PyArg_Parse, though we
(*) should really figure out how to make it work on Windows. I
haven't seen anyone object to the C compiler deprecation warning.
There is something at
Georg Brandl wrote:
and links to
http://msdn2.microsoft.com/en-us/library/c8xdzzhh.aspx
which provides a pragma that does the same thing, and is documented to work
for both C and C++, and also works for macros.
But we'd have to use an #ifdef for every deprecated function.
or a
Terry Reedy wrote:
def isgenerator(func):
return func.func_code.co_flags == 99
but it is rather ugly (horrible indeed).
To me, part of the beauty of Python is that is so easy to write such things
so compactly, so that we do not need megaAPIs with hundreds of functions
and methods.
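a note on the magic number: 99 is just co_flags with the CO_GENERATOR bit (0x20) set alongside other flags, so comparing the whole word is fragile. masking the single bit -- or using inspect.isgeneratorfunction, which was later added to the stdlib -- is more robust; a sketch for modern Python:

```python
import inspect

def isgenerator(func):
    # test the single CO_GENERATOR bit instead of comparing the
    # whole co_flags word against a magic constant like 99
    return bool(func.__code__.co_flags & inspect.CO_GENERATOR)

def gen():
    yield 1

def plain():
    return 1

assert isgenerator(gen)
assert not isgenerator(plain)
assert inspect.isgeneratorfunction(gen)   # the stdlib spelling
```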
Neal Norwitz wrote:
Also, it would be easy to detect METH_OLDARGS in PyCFunction_New and raise
an appropriate exception.
I agree with Martin this should raise a deprecation warning in 2.5.
why?
this is a clear case of unnecessary meddling. removing it won't remove
much code (a whopping
Armin Rigo wrote:
As it turns out, I measured only 0.5% performance loss in Pystone.
umm. does Pystone even call lookdict?
/F
Thomas Wouters wrote:
I know which 'it' I meant: the same 'it' as struct already accepts in
Python 2.4 and before. Yes, it's inconsistent between formatcodes and
valuetypes -- fixing that was the whole point of the change -- but that's
how you define 'compatibility'; struct, by default, should
Armin Rigo wrote:
At the moment, I'm trying to, but 2.5 HEAD keeps failing mysteriously on
the tests I try to time, and even going into an infinite loop consuming
all my memory - since the NFS sprint. Am I allowed to be grumpy here,
and repeat that speed should not be used to justify bugs?
Armin Rigo wrote:
Ah, it's a corner case of str.find() whose behavior just changed.
Previously, 'abc'.find('', 100) would return -1, and now it returns 100.
Just to confuse matters, the same test with unicode returns 100, and has
always done so in the past. (Oh well, one of these again...)
Guido van Rossum wrote:
well, the empty string is a valid substring of all possible strings
(there are no null strings in Python). you get the same behaviour
from slicing, the in operator, replace (this was discussed on the
list last week), count, etc.
Although Tim pointed out that
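the uncontroversial half of this -- the empty string matches at every position, including the end boundary -- is easy to check; only the out-of-range start index is in dispute (sketch, modern Python):

```python
s = "abc"

# the empty string is a valid substring at every position...
assert "" in s
assert s.find("") == 0
assert s.find("", 2) == 2
assert s.find("", len(s)) == len(s)   # ...including the end boundary
# one match per inter-character gap, plus both ends: 4 for "abc"
assert s.count("") == len(s) + 1
```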
Armin Rigo wrote:
I know this. These corner cases are debatable and different answers
could be seen as correct, as I think is the case for find(). My point
was different: I was worrying that the recent change in str.find() would
needlessly send existing and working programs into infinite
several string methods accept either strings or objects that support
the buffer api, and end up raising an "expected a character buffer
object" error if you pass in something else. this isn't exactly helpful
for non-experts -- the term "character buffer object" doesn't appear in
any python
Sean Reifschneider wrote:
- Treat the negative as a reverser, so we get back (3, 2, 1).
Then we could get:
print -"123"
321
Yay!
and while we're at it, let's fix this:
0.66 * (1, 2, 3)
(1, 2)
and maybe even this
0.5 * (1, 2, 3)
(1, 1)
but I guess the
Fred L. Drake, Jr. wrote:
Even better:
"123"*-1
We'd get to explain:
- what the *- operator is all about, and
- why we'd use it with a string and an int.
I see possibilities here. :-)
the infamous *- clear operator? who snuck that one into python?
/F
Guido van Rossum wrote:
I don't know what epoll is.
On a related note, perhaps the needforspeed folks should look into
supporting kqueue on systems where it's available? That's a really
fast BSD-originated API to replace select/poll. (It's fast due to the
way the API is designed.)
roughly
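(both APIs later got stdlib wrappers: select.epoll and select.kqueue, in 2.6.) the portable baseline they accelerate looks roughly like this sketch -- select() rescans the whole fd list on every call, which is exactly the cost epoll and kqueue avoid:

```python
import select
import socket

# a connected socket pair gives us a readable fd to wait on
a, b = socket.socketpair()
b.send(b"ping")

# select() rescans the whole fd list on each call; epoll/kqueue
# keep kernel-side state instead, which is why they scale better
readable, writable, errored = select.select([a], [], [], 1.0)
assert a in readable
assert a.recv(4) == b"ping"

a.close()
b.close()
```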
Raymond Hettinger wrote:
1) Is str.rpartition() still wanted?
Yes.
I might have missed my earlier 30-minute deadline by one minute (not
my fault! I was distracted! seriously!), but this time, I actually
managed to get the code in there *before* I saw the pronouncement ;-)
/F
-1 * (1, 2, 3)
()
-(1, 2, 3)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: bad operand type for unary -
We Really Need To Fix This!
[\F]
Guido van Rossum wrote:
We Really Need To Fix This!
[\F]
Doesn't the real effbot have /F as sig?
yeah, we've had some trouble with fake bots lately. I mean, there's a
timbot posting to this thread, but I know for sure that the real Tim got
tired of hacking on Python earlier tonight, and
Fredrik Lundh wrote:
no need to wait for any raymond-cycles here; just point me to the latest
version of the proposal, and it'll be in the trunk within 30 minutes.
are these still valid?
http://mail.python.org/pipermail/python-dev/2005-August/055764.html
http://mail.python.org
Michael Hudson wrote:
Could it just be that instantiating instances of new-style classes is
slower than instantiating instances of old-style classes? There's not
anything in what you've posted to suggest that exceptions are involved
directly.
python -mtimeit -s "class Exception(object): pass"
Fredrik Lundh wrote:
Could it just be that instantiating instances of new-style classes is
slower than instantiating instances of old-style classes? There's not
anything in what you've posted to suggest that exceptions are involved
directly.
for completeness, here's the corresponding
so, which one is correct?
Python 2.4.3
"".replace("", "a")
''
u"".replace(u"", u"a")
u'a'
/F
Noam Raphael wrote:
You can find the implementation at
http://wiki.python.org/moin/AlternativePathModule?action=raw
(By the way, is there some code wiki available? It can simply be a
public svn repository. I think it will be useful for those things.)
pastebin is quite popular:
Heiko Wundram wrote:
Personally, I'm +1, but wonder if it would be enough to support '--help'
and '--version'. We then could cut out the best guess code and the code
to enable --opt=value.
The code for the above functionality is about 10-12 lines of C of the whole
patch.
If there's
fwiw, I just tested
http://pyref.infogami.com/with
on a live audience, and most people seemed to grok the "context
guard" concept quite quickly.
not sure about the "context entry value" term, though. anyone
have a better idea?
/F
Tim Peters wrote:
SF #1479181: split open() and file() from being aliases for each other.
Umm ... why?
[/F]
so that introspection tools can support GvR's pronouncement that open
should be used to open files, and file should be used as a type
representing
standard (current stdio-based)
Guido van Rossum wrote:
Things should be as simple as possible but no simpler. It's pretty
clear to me that dropping __context__ approaches this ideal. I'm sorry
I didn't push back harder when __context__ was first proposed -- in
retrospect, the first 5 months of PEP 343's life, before
Terry Reedy wrote:
My "Why?" was and is exactly a request for that further discussion.
Again: if a function has a fixed number n of params, why say that the first
k can be passed by position, while the remaining n-k *must* be passed by
name?
have you designed APIs for anyone other than yourself,
Greg Ewing wrote:
I've been thinking about the terms "guarded context"
and "context guard". We could say that the with-statement
executes its body in a guarded context (an abstract
notion, not a concrete object). To do this, it creates
a context guard (a concrete object) with __enter__
and
Edward Loper wrote:
One reason I see is to have keyword-only functions, i.e. with no
positional arguments at all:
def make_person(*, name, age, phone, location):
pass
which also works for methods:
def make_person(self, *, name, age, phone, location):
pass
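this is the syntax that was eventually adopted (PEP 3102); a sketch of how it behaves in modern Python -- positional calls are rejected outright:

```python
def make_person(*, name, age, phone, location):
    # everything after the bare * may only be passed by keyword
    return {"name": name, "age": age, "phone": phone, "location": location}

person = make_person(name="Pat", age=30, phone="555", location="here")
assert person["name"] == "Pat"

rejected = False
try:
    make_person("Pat", 30, "555", "here")   # positional call
except TypeError:
    rejected = True
assert rejected
```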
Martin v. Löwis wrote:
I.e., why not just document that the arguments should
be used as keyword arguments, and leave it at that.
Because they wouldn't be keyword-only arguments, then.
which reminds me of the following little absurdity gem from the language
reference:
The following
Terry Reedy wrote:
And again, why would you *make* me, the user-programmer, type
make_person(name=namex, age=agex, phone=phonex, location = locationx)
#instead of
make_person(namex,agex,phonex,locationx)
?
because a good API designer needs to consider more than just the current
release.
I
A.M. Kuchling wrote:
I find this work very exciting. Time hasn't been kind to the
reference guide -- as language features were added to 2.x, not
everything has been applied to the RefGuide, and users will probably
have been forced to read a mixture of the RefGuide and various PEPs.
or as
Guido van Rossum wrote:
Agreed. Is it too late to also attempt to bring Doc/ref/*.tex
completely up to date and remove confusing language from it? Ideally
that's the authoritative Language Reference -- admittedly it's been
horribly out of date but needn't stay so forever.
it's perfectly
Tim Peters wrote:
(Or are the two goals -- completeness and readability --
incompossible, unable to be met at the same time by one document?)
No, but it's not easy, and it's not necessarily succinct. For an
existence proof, see Guy Steele's "Common Lisp: the Language". I
don't think it's
Terry Reedy wrote:
which reminds me of the following little absurdity gem from the language
reference:
I am not sure of what you see as absurdity,
Perhaps I do. Were you referring to what I wrote in the last paragraph of
my response to Guido?
I don't know; I've lost track of all the
for some reason, the language reference uses the term "string
conversion" for the backtick form of repr:
http://docs.python.org/ref/string-conversions.html
any suggestions for a better term? should backticks be deprecated,
and documented in terms of repr (rather than the other way around)?
one last one for tonight; the operator precedence summary says that
"in" and "not in" have lower precedence than "is" and "is not", which
have lower precedence than <, <=, >, >=, <>, !=, ==:
http://docs.python.org/ref/summary.html
but the comparisons chapter
http://docs.python.org/ref/comparisons.html
Martin v. Löwis wrote:
Ok. I think I would use base64, of possibly compressed content. It's
more compact than your representation, as it only uses 1.3 characters
per byte, instead of the up-to-four bytes that the img2py uses.
only if you're shipping your code as PY files. in PYC format
Guido van Rossum wrote:
They're all the same priority.
yet another description that is obvious only if you already know what
it says, in other words:
Operators in the same box have the same precedence. /.../
Operators in the same box group left to right (except for comparisons,
trying to come up with a more concise description of the rich
comparison machinery for pyref.infogami.com, I stumbled upon
an oddity that I cannot really explain:
in the attached example below, why is the rich comparison
machinery doing *four* attempts to use __eq__ in the
old-style class case?
/F
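old-style classes are long gone, but the multiple-attempt behaviour is still observable in modern Python: x == y tries x.__eq__(y), then the reflected y.__eq__(x), before falling back to identity. a small instrumented sketch:

```python
calls = []

class A:
    def __eq__(self, other):
        calls.append("A.__eq__")
        return NotImplemented   # defer to the other operand

class B:
    def __eq__(self, other):
        calls.append("B.__eq__")
        return NotImplemented

result = (A() == B())
assert calls == ["A.__eq__", "B.__eq__"]   # forward, then reflected
assert result is False                     # final fallback: identity
```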
Brett Cannon wrote:
ElementTree
---
- Web page
http://effbot.org/zone/element-index.htm
- Standard library name
xml.etree
- Contact person
Fredrik Lundh
- Synchronisation history
* 1.2.6 (2.5)
xml.etree contains components from both ElementTree
Guido van Rossum wrote:
I believe the context API design has gotten totally out of hand.
I have a counter-proposal: let's drop __context__.
Heh. I was about to pull out the "if the implementation is hard to
explain, it's a bad idea" (and bad ideas shouldn't go into 2.X) rule
last week in
the pytut wiki (http://pytut.infogami.com/) has now been up and
running for one month, and has seen well over 250 edits from over
a dozen contributors.
to celebrate this, and to exercise the toolchain that I've developed
for pytut and pyfaq (http://pyfaq.infogami.com/), I spent
a few hours
Josiah Carlson wrote:
At least for the examples of buffers that I've seen, using the buffer
interface for objects that support it is equivalent to automatically
applying str() to them. This is, strictly speaking, an optimization.
a = array.array("i", [1, 2, 3])
str(a)
array('i',
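the distinction matters: str() of an array gives the repr-style text, while the buffer interface exposes the raw machine bytes, so they are not interchangeable. a sketch in modern Python:

```python
import array

a = array.array("i", [1, 2, 3])

# str() gives the repr-style text...
assert str(a) == "array('i', [1, 2, 3])"

# ...while the buffer interface exposes the raw machine bytes
raw = bytes(a)
assert len(raw) == 3 * a.itemsize
assert raw != str(a).encode()
```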
Guido van Rossum wrote:
So I have a very simple proposal: keep the __init__.py requirement for
top-level packages, but drop it for subpackages. This should be a
small change. I'm hesitant to propose *anything* new for Python 2.5,
so I'm proposing it for 2.6; if Neal and Anthony think this
Phillip J. Eby wrote:
The problem isn't fundamentally a technical one, but a social one. You can
effect social change through technology, but not by being some random guy
with a nagging 'bot.
Seriously, though, posting Cheesecake scores (which include ratings for
findability of code, use
Terry Reedy wrote:
1. Based on comments on c.l.py, the biggest legitimate fact-based (versus
personal-taste-based) knock against Python versus, in particular, Perl is the
lack of a CPAN-like facility. As I remember, there have even been a few
people say something like "I like Python the
Guido van Rossum wrote:
Leaving aside the Perl vs. Py thing, opinions on CPAN seem to be
diverse -- yes, I've heard people say that this is something that
Python sorely lacks; but I've also heard from more than one person
that CPAN sucks from a quality perspective. So I think we shouldn't
Ian Bicking wrote:
For instance, if you really want to be confident about how your libraries
are laid out, this script is the most reliable way:
http://peak.telecommunity.com/dist/virtual-python.py
note the use of this script is the most reliable way, not something
like this script, or you
Guido van Rossum wrote:
Nick, please get unstuck on the who said what when and who wasn't
listening thing. I want this to be resolved to use the clearest
terminology possible.
which probably means that the words "context" and "manager" shouldn't
be used at all ;-)
"space" and "potato", perhaps?
Phillip J. Eby wrote:
(frankly, do you think there's any experienced developer out there
whose first thought when asked the question "how do I create a tightly
controlled Python environment" isn't either "can I solve this by tweaking
sys.path in my application?" or "disk space is cheap, bugs are
Andrew Clover wrote:
Morning!
I've done some tweaks to the previously-posted-about icon set, taking
note of some of the comments here and on -list.
you do wonderful stuff, and then you post the announcement as a
followup to a really old message, to make sure that people using a
threaded
Anthony Baxter wrote:
I also don't think it will be the death of distutils. I think that
over time the two pieces of code will become closer together -
hopefully for Python 3.0 we can formally merge the two.
I was hoping that for Python 3.0, we could get around to unkludge the
Greg Ewing wrote:
Fredrik Lundh wrote:
(distutils and setuptools are over 15000 lines of code, according to
sloccount.
Ye cats! That's a *seriously* big ball of mud. I just checked,
and the whole of Pyrex is only 17000 lines.
correction: it's actually only 14000 lines, but it's still
Brett Cannon wrote:
Not sure whether Fredrik Lundh has responded, but I believe he once
said that he would prefer if ElementTree isn't directly modified, but
that instead patches are filed on the SF tracker and assigned to him.
Nope, Fredrik never responded. I am cc:ing him directly
Thomas Heller wrote:
I'm about to import the 0.9.9.6 tag of ctypes into Python svn.
Should this be done in exact the same way as before, so first
commit it into external/ctypes-0.9.9.6, and then 'svn copy'
the two relevant directories into the trunk, and afterwards set
all the svn props
[EMAIL PROTECTED] wrote:
Maybe they know something we don't.
oh, please. it's not like people like myself and MAL don't know anything
about package distribution...
(why is it that people who *don't* distribute stuff are a lot more
impressed by a magic tool than people who've spent the last
Ian Bicking wrote:
And now for a little pushback the other way -- as of this January
TurboGears has served up 100,000 egg files (I'm not sure what the window
for all those downloads is, but it hasn't been very long). Has it
occurred to you that they know something you don't about
Neal Norwitz wrote:
I was also working under the assumption that people would complain if
they didn't like something. What do people think should happen for
the Possible features section? Should I ask if there are any
objections to each item?
some discussion on python-dev for each
Phillip J. Eby wrote:
I was surprised that MAL didn't comment *then*, actually, and mistakenly
thought it meant that our last discussion on the distutils-sig (and my
attempts to deal with the problems) had been successful. Between that and
MvL's mild response to the explicit discussion of
Phillip J. Eby wrote:
The long term plan is for a tool called nest to be offered, which will
offer a command-line interface similar to that of the yum package
manager, with commands to list, uninstall, upgrade, and perform other
management functions on installed packages.
yum already exists,
Anthony Baxter wrote:
I'm not sure how people would prefer this be handled. I don't think we
need to have a PEP for it - I don't see PEPs for ctypes, elementtree,
pysqlite or cProfile, either.
That's because they're all trivial building blocks, not all-consuming world
views. Any programmer
Martin v. Löwis wrote:
It is *precisely* my concern that this happens. For whatever reason,
writing packaging-and-deployment software is totally unsexy.
for some reason, tools of this kind tend to reach the "big ball of mud"
stage even before they reach the "dogfood" stage. and once you have a
Phillip J. Eby wrote:
a technical document that, in full detail, describes the mechanisms *used* by
setuptools, including what files it creates, what the files contain, how
they are used during import, how non-setuptools code can manipulate (or at
least inspect) the data, etc, setuptools
Anthony Baxter wrote:
http://www.joelonsoftware.com/articles/fog69.html
Yes. I remember that piece. In particular, he wrote the original rant
about this about Mozilla/Firefox. How did that work out again? Oh,
that's right - we now have a much, much more successful and usable
Anthony Baxter wrote:
- Multiple installs of different versions of the same package,
including per-user installs.
yeah, but where is the documentation on how this works ? phillip points
to a 30-page API description, which says absolutely nothing whatsoever
about the files I'm going to find on
Tim Peters wrote:
Just do it. Branches under SVN are cheap, go fast, and are pretty
easy to work with. Even better, because a branch is just another
named directory in SVN's virtual file system, you can svn remove it
when you're done with it (just like any other directory).
footnote: if
Anthony Baxter wrote:
This is from bug www.python.org/sf/1465408
Because the Python.asdl and the generated Python-ast.[ch] get checked
into svn in the same revision, the svn export I use to build the
tarballs sets them all to the same timestamp on disk (the timestamp
of the checkin). make