Lars added the comment:
Happy to see progress on this issue, and I can see that adding these attributes
to the ABCs in typing makes the most sense. However, for my direct use case
(simplified: using Any in a type-checking descriptor) it would be really
practical to have the __name__
Lars added the comment:
I have been doing some research, but note that I don't have much experience
with the typing module. That said, there seem to be 2 main cases:
- '_SpecialForm': with instances Any, Union, etc.
- '_BaseGenericAlias'/'_SpecialGenericAl
Lars van Gemerden added the comment:
I was not aware the __name__ attribute is an implementation detail. It is
described in the docs: https://docs.python.org/3/reference/datamodel.html.
I have been using it since Python 2.7, for example for logging.
The function “split_module_names” is just
New submission from Lars :
I noticed some (perhaps intentional) oddities with the __name__ attribute:
- typing classes like Any (subclass of _SpecialForm) do not have a __name__
attribute,
- abstract base classes in typing, like MutableSet do not have a __name__
attribute,
- 'Cha
Lars Hammarstrand added the comment:
Any update regarding this?
We switched to lxml to make life easier, but it would be useful if this
functionality were also implemented in the standard library.
Wishlist:
1. Flag to ignore all namespaces during find().
2. Ability to set default namespace
Lars Schellhas added the comment:
@Christian Heimes, you are absolutely right. I'm sorry if I came off as rude.
Actually, I just had a rough day at work that ended with the realisation that
this missing fix would have solved the issue that filled my whole day.
Furthermore, I'v
Lars Schellhas added the comment:
Excuse me, but why is this issue still open and unfixed? There are already
proposed fixes, and this issue has been around for nearly 7 years now.
FileZilla has forwarded the responsibility for this issue to us
(https://trac.filezilla-project.org/ticket/10700
Lars Schellhas added the comment:
I am pretty sure that it is connected to issue 19500. And somehow that issue
still isn't resolved, although solutions have already been provided.
--
nosy: +larsschellhas
Lars added the comment:
OK, that makes sense. Thanks for letting me know. I should have read the docs
more carefully.
--
resolution: -> not a bug
stage: -> resolved
status: open -> closed
New submission from Lars :
Hi everybody
I just noticed that uuid.uuid4().hex does not create fully random hex values.
The character at the 13th position is always 4, and the one at the 17th
position is evenly distributed over 8, 9, a and b.
One million UUIDs follow the distribution below.
{
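A minimal sketch (not part of the original report) that reproduces the
observation; the fixed nibbles are in fact mandated by RFC 4122 for
version-4 UUIDs:

    import uuid
    from collections import Counter

    version_nibbles = Counter()
    variant_nibbles = Counter()
    for _ in range(100000):
        h = uuid.uuid4().hex
        version_nibbles[h[12]] += 1  # 13th character (1-based)
        variant_nibbles[h[16]] += 1  # 17th character (1-based)

    print(version_nibbles)  # always {'4': 100000}
    print(variant_nibbles)  # roughly uniform over '8', '9', 'a', 'b'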
Lars Gustäbel added the comment:
tarfile does not use the `format` argument for reading; the format is detected
automatically. You can even mix different formats in one archive and tarfile
will be fine with it.
--
nosy: +lars.gustaebel
Change by Lars Beckers :
--
nosy: +extmind
Lars Friedrich added the comment:
Thank you for your reply.
I am not sure if I understood correctly:
Do you suggest modifying ctypes/__init__.py so that the __getattr__ method of
LibraryLoader catches the OSError and raises an AttributeError instead, as in
your example
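A hedged sketch of the change being discussed (the exact patch is not shown in
this thread; names follow ctypes/__init__.py): LibraryLoader.__getattr__ would
translate the OSError from a failed library load into the AttributeError that
hasattr() expects.

    import ctypes

    class LibraryLoader:
        def __init__(self, dlltype):
            self._dlltype = dlltype

        def __getattr__(self, name):
            if name.startswith('_'):
                raise AttributeError(name)
            try:
                dll = self._dlltype(name)
            except OSError:
                # hasattr() only swallows AttributeError, so re-raise as one.
                raise AttributeError(name)
            setattr(self, name, dll)
            return dll

    # windll = LibraryLoader(ctypes.WinDLL)  # Windows-only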
New submission from Lars Friedrich :
The following creates an OSError:
import ctypes
hasattr(ctypes.windll, 'test')
The expected behavior would be for it to return False.
--
components: ctypes
messages: 326528
nosy: lfriedri
priority: normal
severity: normal
status
Lars Pötter added the comment:
I wanted to log in to an existing account, so the password works OK in
Thunderbird.
Here in Germany it is recommended to use the German umlauts (ßÄÖÜäöü) in safe
passwords. So which encoding applies: code page 437, code page 850, or UTF-8?
If I could pass in the bytes, then I could figure out
New submission from Lars Pötter :
If the password contains non-ASCII characters, then the login fails:
>>> smtObj.login(MAIL_USER, MAIL_PASSWORD)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.5/smtplib.py", line 720, in
Lars Gustäbel added the comment:
Actually, it is not prohibited to add the same file to the same archive more
than once.
--
nosy: +lars.gustaebel
Lars Gustäbel added the comment:
After all these years, it is not that easy to say why the decision to swallow
this exception was made. One part surely was a lack of experience with the tar
format itself and all of its implementations. The other part I guess was that
it was supposed to avoid
Lars Gustäbel added the comment:
The question is what you're trying to accomplish. If you just want to prevent
tarfile from stopping at the first invalid header in order to extract
everything following it, you may use the ignore_zeros=True keyword arg
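A usage sketch ("damaged.tar" is a placeholder name): skip empty and invalid
header blocks instead of stopping at the first one.

    import tarfile

    with tarfile.open("damaged.tar", ignore_zeros=True) as tar:
        for member in tar:
            print(member.name)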
Lars Gustäbel added the comment:
I suck :-) It is hg revision bb94f6222fef.
Lars Gustäbel added the comment:
TarFile.makelink() has a fallback mode in case the platform does not support
links. Instead of a symlink or a hardlink it extracts the file it points to as
long as it exists in the current archive.
More precisely, makelink() calls os.symlink() and if one of
Lars Gustäbel added the comment:
Please give us some example test code that shows us what goes wrong exactly.
Lars Gustäbel added the comment:
Closed after years of inactivity.
--
resolution: -> works for me
stage: -> resolved
status: open -> closed
Lars Gustäbel added the comment:
Sorry for the glitch, I suppose everything works fine now.
--
status: open -> closed
Lars Gustäbel added the comment:
Closing after six years of inactivity.
--
resolution: -> wont fix
stage: -> resolved
status: open -> closed
Changes by Lars Gustäbel :
--
resolution: -> fixed
stage: test needed -> resolved
status: open -> closed
versions: -Python 3.2, Python 3.3, Python 3.4
Lars Gustäbel added the comment:
Thanks for the detailed report and the patch. I haven't checked yet, but I
suppose that the entire 3.x branch is affected. The first thing I have to do
now is to come up with a comprehensive test case.
--
assignee: -> lars.gustaebel
co
Changes by Lars Gustäbel :
--
resolution: -> fixed
stage: patch review -> resolved
status: open -> closed
Changes by Lars Gustäbel :
--
resolution: -> fixed
stage: patch review -> resolved
status: open -> closed
Lars Gustäbel added the comment:
Martin, I followed your suggestion to raise ReadError. This needed an
additional change in copyfileobj() because it is used both for adding file data
to an archive and extracting file data from an archive.
But I think the patch is in good shape now
Lars Gustäbel added the comment:
I think a simple addition to the existing unittest for nti() will be enough.
itn() seems well-tested, and nts() and stn() are not affected, because they
don't operate on numbers.
--
Added file: http://bugs.python.org/file39832/issue24514
Lars Gustäbel added the comment:
Yes, Python 2.7 still gets bugfixes.
However, there's still some work to do on the patch (maybe clean the code,
write a test, add a NEWS entry).
Lars Gustäbel added the comment:
You're welcome :-D
--
assignee: -> lars.gustaebel
priority: normal -> low
stage: -> patch review
type: -> behavior
versions: +Python 3.5, Python 3.6
Lars Gustäbel added the comment:
The problem is that the tar archive has empty uid and gid fields, i.e. 7 spaces
terminated with a null-byte.
I attached a patch that solves the problem.
--
keywords: +patch
Added file: http://bugs.python.org/file39815/issue24514.diff
Lars Gustäbel added the comment:
The patch would change behaviour for all tarfile users through the back door,
which is why I am a little reluctant. And if the same can be achieved by a
reasonably simple change to shutil, I think it's ju
Lars Gustäbel added the comment:
You don't need to patch the tarfile module. You could use os.walk() in
shutil._make_tarball() and add each file with TarFile.add(recursive=False).
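A hedged sketch of that suggestion (make_tarball and the walk order are
illustrative, not the actual shutil patch):

    import os
    import tarfile

    def make_tarball(archive_name, base_dir):
        with tarfile.open(archive_name, "w:gz") as tar:
            tar.add(base_dir, recursive=False)
            for dirpath, dirnames, filenames in os.walk(base_dir):
                for name in dirnames + filenames:
                    tar.add(os.path.join(dirpath, name), recursive=False)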
--
nosy: +lars.gustaebel
Changes by Lars Gustäbel :
Added file: http://bugs.python.org/file39580/issue24259-2.x-2.diff
Lars Gustäbel added the comment:
@Martin:
This is actually a nice idea that I hadn't thought of. I updated the Python 3
patch to use a seek() that moves to one byte before the next header block,
reads the remaining byte and raises an error if it hits eof. The code looks
rather clean com
Lars Gustäbel added the comment:
@Thomas:
I think your proposal adds a little too much complexity. Also, ExFileObject is
not used during iteration, and we would like to detect broken archives without
unpacking all the data segments first.
I have written patches for Python 2 and 3
Changes by Lars Gustäbel :
Added file: http://bugs.python.org/file39544/issue24259-2.x.diff
Lars Gustäbel added the comment:
@Martin:
Yes, that's right, but only for cases where the TarFile.fileobj attribute is an
actual file object. But most of the time it is something special, e.g. GzipFile
or sys.stdin, which makes random seeking either impossible or very slow.
Lars Gustäbel added the comment:
I have written a test for the issue, so that we have a basis for discussion.
There are four different scenarios where an unexpected eof can occur: inside a
metadata block, directly after a metadata block, inside a data segment or
directly after a data segment
Lars Gustäbel added the comment:
I agree with David that there is no need for tarfile to be thread-safe. There
is nothing to be gained from distributing one TarFile object among multiple
threads because it operates on a single resource which has to be accessed
sequentially anyway. So, it
Lars Gustäbel added the comment:
I would argue that a serious alternative to this patch is to simply override
the TarFile.chown() method in a subclass. However, I'm not sure if this expects
too much of the user.
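A hedged sketch of that alternative (the numeric_owner parameter matches later
Python versions; older signatures differ):

    import tarfile

    class NoChownTarFile(tarfile.TarFile):
        def chown(self, tarinfo, targetpath, numeric_owner=False):
            pass  # skip ownership changes entirely on extraction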
New submission from Lars Buitinck:
PySequence_List has accepted iterables since changeset 6c82277e77f3 of May 1,
2001 ("NEEDS DOC CHANGES" :), but its documentation still only speaks of
sequences. I suggest that it is changed to promise to handle arbitrary
iterables,
New submission from Lars Buitinck:
The declaration for PyErr_WarnEx in Doc/c-api/exceptions.rst is missing a const
compared to Include/warnings.h.
--
assignee: docs@python
components: Documentation
files: pyerr_warnex_const.patch
keywords: patch
messages: 228657
nosy: docs@python
Lars Gustäbel added the comment:
Please provide a patch which allows easy addition of file-like objects (not
only io.BytesIO) and directories, preferably hard and symbolic links, too. It
would be nice to still be able to change attributes of a TarInfo before
addition. Please also add tests
Lars Gustäbel added the comment:
I don't see how to make it easier while still meeting all or most requirements
and without cluttering up the API. The way it currently works allows the
programmer to control every tiny aspect of a tar member. Maybe it's best to
simply add a new en
Lars Gustäbel added the comment:
tarfile needs to know the size of a file object beforehand because the tar
header is written first, followed by the file object's data. If the file object
is not based on a real file descriptor, tarfile cannot simply use os.fstat()
but the user has to pas
Lars Gustäbel added the comment:
Why overcomplicate things?
import io, tarfile

with tarfile.open("foo.tar", mode="w") as tar:
    b = "hello world!".encode("utf-8")
    t = tarfile.TarInfo("helloworld.txt")
    t.size = len(b)  # this is crucial
    tar.addfile(t, io.BytesIO(b))
Lars Gustäbel added the comment:
Apparently, the problem is located in TarInfo._proc_gnulong(). I attached a
patch.
When tarfile reads an archive, it strips trailing slashes from all filenames,
except GNUTYPE_LONGNAME headers, which is a bug. tarfile creates GNU_FORMAT tar
files by default
Lars Gustäbel added the comment:
The size of the buffer returned by TarInfo.fromtarfile() is checked by
TarInfo.frombuf(), which raises either an EmptyHeaderError or a
TruncatedHeaderError, respectively.
--
assignee: -> lars.gustaebel
resolution: -> not a bug
stage: -> resolv
Lars Gustäbel added the comment:
IIRC, tarfile under 2.7 has never been explicitly unicode-safe; support for
unicode objects is heterogeneous at best. The obvious workaround is to work
exclusively with str objects.
What we can't do is decode the UTF-8 pathname from the archive
Lars H added the comment:
+1 vote for fixing this problem. Matt Hickford said it very well... the error
message is very cryptic, giving the user no clue as to where the problem lies.
--
nosy: +Lars.H
Lars Gustäbel added the comment:
That's right. But it is there.
Lars Gustäbel added the comment:
tarfile.open() actually supports a compresslevel argument for gzip and bzip2
and a preset argument for lzma compression.
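A usage sketch (the file names are placeholders):

    import tarfile

    with tarfile.open("foo.tar.gz", "w:gz", compresslevel=9) as tar:
        tar.add("somefile.txt")
    with tarfile.open("foo.tar.xz", "w:xz", preset=6) as tar:
        tar.add("somefile.txt")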
--
nosy: +lars.gustaebel
Lars Gustäbel added the comment:
Let me present for discussion a proposal (and a patch with documentation) with
an approach that is a little different, but in my opinion the most effective. I
hope that it will appeal to all involved.
My proposal consists of a new class SafeTarFile, which is a
Lars Gustäbel added the comment:
Yup. That's it.
--
priority: normal -> low
resolution: -> not a bug
stage: -> resolved
status: open -> closed
Lars Gustäbel added the comment:
You can pass keyword arguments to tarfile.open(), which will be passed on to
the TarFile constructor. You can also pass a fileobj argument to tarfile.open().
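For example (a usage sketch; "archive.tar" is a placeholder name):

    import tarfile

    with open("archive.tar", "rb") as f:
        with tarfile.open(fileobj=f) as tar:
            tar.list()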
Lars Gustäbel added the comment:
That was a design decision. What would be the advantage of having the TarFile
class offer the compression itself?
--
assignee: -> lars.gustaebel
New submission from Lars Wirzenius:
The maildir format specification
(see http://cr.yp.to/proto/maildir.html) is clear that files named with leading
dots should be ignored:
Unless you're writing messages to a maildir, the format of a unique
name is none of your business. A unique nam
Lars Gustäbel added the comment:
Okay, let me tell you why I reject your contribution at this point.
The patch you submitted may be well-suited for your purposes but it does not
meet the requirements of a standard library implementation because it is not
generic and comprehensive enough.
It
Lars Gustäbel added the comment:
> [...] but remember, we split a volume only in the middle of a big file, not
> in any other case (AFAIK). Hopefully you don't get huge pax headers or
> anything strange. [...]
Hopefully? Sorry, but have you tested this? I did. I let GNU ta
Lars Andersson added the comment:
Thanks Victor, that fixes my problem.
I've started using tulip/master as part of my project, as that also solves other
issues I have with the default asyncio of Python 3.4.0, but hopefully this fix
will make it into tulip/master as well as Python 3.4.1
Lars Gustäbel added the comment:
In the past, our answer to these kinds of bug reports has always been that you
must not extract an archive from an untrusted source without making sure that
it has no malicious contents. And that tarfile conforms to the POSIX
specifications with respect to
New submission from Lars Andersson:
The attached code generates an unclosed socket ResourceWarning when timing out
trying to connect to an unreachable address.
Probably not terribly serious, but in my case, it generates distracting
warnings during unit testing.
I have looked briefly at the
Lars Gustäbel added the comment:
> It's also consistent with how the tar command works afaik, just listing the
> contents of the current volume.
No, GNU tar operates on the entirety of the archive and asks for the filename
of the subsequent volume every time it hits EOF in the cur
Lars Gustäbel added the comment:
I had the following idea: what about a separate class, let's call it
TarVolumeSet for now, that maps a set of (virtual) volumes onto one big
file-like object? This TarVolumeSet will be passed to the TarFile constructor as
the fileobj argument. It is subclas
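A hedged skeleton of this idea (the class was never implemented as shown here;
names and details are illustrative):

    import io

    class TarVolumeSet(io.RawIOBase):
        def __init__(self, volume_paths):
            self._files = [open(p, "rb") for p in volume_paths]
            self._index = 0  # volume currently being read

        def readable(self):
            return True

        def readinto(self, b):
            while self._index < len(self._files):
                n = self._files[self._index].readinto(b)
                if n:
                    return n
                self._index += 1  # volume exhausted, continue with the next
            return 0  # EOF across all volumes

    # tar = tarfile.open(fileobj=TarVolumeSet(["vol1.tar", "vol2.tar"]), mode="r|")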
Lars Gustäbel added the comment:
First, I'd like to take back my comment on this patch being too complex for
too little benefit. That is no real argument.
Okay, I gave it a shot and I have a few more remarks:
The patch does not support iterating over a multi-volume tar archive, e.g
Lars Gustäbel added the comment:
I cannot yet go into the details, because I have not tested the patch.
The comments, docstrings and quoting are not very consistent with the rest of
the module. There are a few spelling mistakes. The open_volume() method is more
or less a copy of the open
Lars Buitinck added the comment:
I also tried
    from multiprocessing.pool import Pool
but that died with
    ImportError: cannot import name get_context
Lars Buitinck added the comment:
Strange, I can't actually get it to work:
>>> from multiprocessing import Pool, get_context
>>> forkserver = get_context('forkserver')
>>> Pool(context=forkserver)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
Lars Buitinck added the comment:
Thanks, much better than my solution!
--
status: pending -> open
Lars Buitinck added the comment:
Ok, great.
Lars Buitinck added the comment:
> BTW, the context objects are singletons.
I haven't read all of your patch yet, but does this mean a forkserver will be
started regardless of whether it is later used?
That would be a good thing, since starting the fork server after reading in
la
Lars Buitinck added the comment:
Ok. Do you (or jnoller?) have time to review my proposed patch, at least before
3.4 is released? I didn't see it in the release schedule, so it's probably not
planned soon, but I wouldn't want the API to change
Lars Buitinck added the comment:
I don't really see the benefit of a context manager over an argument. It's a
power user feature anyway, and context managers (at least to me) signal cleanup
actions, rather than construction options.
Changes by Lars Buitinck :
--
nosy: +jnoller
Lars Buitinck added the comment:
In my patched version, the private popen.get_start_method gets a kwarg
set_if_needed=True. popen.Popen calls that as before, so its behavior should
not change, while the public get_start_method sets the kwarg to False.
I realise now that this has the side
Lars Buitinck added the comment:
Cleaned up the patch.
--
Added file: http://bugs.python.org/file31722/mp_getset_start_method.patch
Changes by Lars Buitinck :
Removed file: http://bugs.python.org/file31721/mp_getset_start_method.patch
Changes by Lars Buitinck :
--
nosy: +sbt
Changes by Lars Buitinck :
--
title: Allow multiple calls to multiprocessing.set_start_method -> Robustness
issues in multiprocessing.{get,set}_start_method
New submission from Lars Buitinck:
The new multiprocessing based on forkserver (issue8713) looks great, but it has
two problems. The first:
"set_start_method() should not be used more than once in the program."
The documentation does not explain what the effect of calling it twice
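A minimal usage sketch of the documented constraint (the __main__ guard is an
assumption about typical usage, not taken from the report):

    import multiprocessing as mp

    if __name__ == "__main__":
        mp.set_start_method("forkserver")  # allowed at most once per program
        print(mp.get_start_method())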
Lars Ivarsson added the comment:
The problem isn't the originally requested URL, as it is legit. The problem
appears after the 302 redirect, when a new (malformed) URL is received from the
server. There needs to be some kind of check of the validity of that second URL.
And, preferabl
Lars Gustäbel added the comment:
I'd like to re-emphasize that it is best to keep the whole thing as simple and
straightforward as possible. Offer some basic operations and that's it.
Although I am pretty accustomed to the original tar command line, I think we
should copy zipfile
Lars Buitinck added the comment:
I'm sorry, I really don't understand this refcounts.dat file and I'm not going
to hack it further. Does the patch as it currently stands solve the issue with
CallMethod and CallFunction, or not? (It has the chan
Lars Buitinck added the comment:
Oops, forgot to save changes to Doc/c-api/object.rst.
PyObject_CallMethodObjArgs takes a PyObject*, mustn't that be non-const for
reference counting to work?
PyDict_GetItemString already has const, just not in refcounts.dat. Fixed.
--
Added file:
Lars Buitinck added the comment:
Redid the patch.
--
Added file: http://bugs.python.org/file28653/constness.patch
Lars Buitinck added the comment:
Any reason why this issue is still open? I just got a lot of compiler warnings
when building NumPy, so this isn't just relevant to C++ programmers.
(Btw., I did RTFM: the issue's Resolution is "accepted" but that option is not
documente
Lars Buitinck added the comment:
Sorry about the bundle, I'm an hg noob and only noticed that bundles are binary
after I submitted it. Will create a regular patch next time.
New submission from Lars Buitinck:
I spotted a minor typo in the "What's new" for Py 3.3, introduced yesterday.
See attached patch.
--
assignee: docs@python
components: Documentation
files: typo.hg
messages: 171333
nosy: docs@python, larsmans
priority: normal
severity
New submission from Lars Gustäbel:
Today I accidentally did this:
open(True).read()
Passing True as a file argument to open() does not fail, because a bool value
is treated like an integer file descriptor (stdout in this case). Even worse is
that the read() call hangs in an endless loop on
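A minimal sketch of why this happens (standard CPython behavior, not specific
to this report): bool is a subclass of int, so True is accepted as the integer
file descriptor 1.

    print(isinstance(True, int))  # True: bool subclasses int
    print(int(True))              # 1, the file descriptor of stdout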
Lars Gustäbel added the comment:
I prepared a patch that fixes this issue and adds a few tests. Please check
whether it works for you.
--
keywords: +patch
stage: -> patch review
Added file: http://bugs.python.org/file27152/issue15875.diff
Changes by Lars Gustäbel :
--
assignee: -> lars.gustaebel
nosy: +lars.gustaebel
versions: +Python 3.3
Lars Gustäbel added the comment:
Could you provide some sample data and code? I see the problem, but I cannot
quite reproduce the behaviour you describe. In all of my test cases tarfile
either raises an exception or successfully reads the archive, but never
silently stops.
--
assignee
Lars Nordin added the comment:
Running the script without any timestamp comparison (and parsing more log
lines) gives these performance numbers:
log lines: 7,173,101
time output:
real    1m9.892s
user    0m53.563s
sys     0m1.592s
New submission from Lars Nordin :
datetime.strptime works well enough for me; it is just slow.
I recently added a timestamp comparison to a log-parsing script to skip log
lines earlier than a set date. After doing so, my script ran much slower.
I am processing 4,784,212 log lines in 1,746 files
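A hedged sketch of a common workaround (the fixed timestamp format is an
assumption for illustration, not taken from the report): parse the fields
manually instead of going through strptime.

    from datetime import datetime

    def parse_ts(s):
        # assumes "YYYY-MM-DD HH:MM:SS"; adjust the slices for other formats
        return datetime(int(s[0:4]), int(s[5:7]), int(s[8:10]),
                        int(s[11:13]), int(s[14:16]), int(s[17:19]))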
New submission from Lars Buitinck :
The section "Inplace Operators" of the module docs for operator now shows up
in the TOC at http://docs.python.org/dev/library/. I don't think that's
intended, as it does not describe a separate module.
--
assignee: docs@python
compo
Changes by Lars Gustäbel :
--
assignee: -> lars.gustaebel