> In that file there is a variable named message_params that is initialized
> with about 15 different sets of test data, in class TestEmailMessageBase, but
> is never referenced again. I even grepped for the variable name in all the
> test .py files to confirm that it isn't somehow imported somewhere
On Thu, May 6, 2021 at 2:46 PM Skip Montanaro wrote:
>
> I looked at the fast-forward stuff in 'git push --help' but couldn't
> decipher what it told me, or more importantly, how it related to my
> problem. It's not clear to me how python/cpython:main can be behind
> smontanaro/cpython:main.
Just
On Wed, Mar 17, 2021 at 6:37 PM Steve Dower wrote:
>
> On 3/17/2021 8:00 AM, Michał Górny wrote:
> > How about writing paths as bytestrings in the long term? I think this
> > should eliminate the necessity of knowing the correct encoding for
> > the filesystem.
>
> That's what we're trying to do,
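(The reply above is cut off. For context, the mechanism CPython already uses to make `str` paths safe in the face of an unknown filesystem encoding is the `surrogateescape` error handler behind `os.fsencode`/`os.fsdecode`; a minimal sketch, with a made-up file name:)

```python
import os

# A Latin-1 style file name that is not valid UTF-8 (hypothetical example).
raw = b'caf\xe9.txt'

# os.fsdecode smuggles any undecodable byte through as a lone surrogate;
# the exact str form depends on the local filesystem encoding.
name = os.fsdecode(raw)
print(repr(name))

# os.fsencode recovers the original bytes exactly, so round-trips are safe.
assert os.fsencode(name) == raw
```

This round-trip property is why `str` paths can still represent arbitrary byte file names without the caller knowing the correct encoding.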
On Thu, Feb 18, 2021 at 10:10 AM Larry Hastings wrote:
> Call me crazy, but... shouldn't they be checked in? I thought we literally
> had every revision going back to day zero. It should be duck soup to
> recreate the original sources--all you need is the correct revision number.
It seems to
On Wed, Feb 17, 2021 at 7:33 AM Steven D'Aprano wrote:
>
> On Tue, Feb 16, 2021 at 05:49:49PM -0600, Skip Montanaro wrote:
>
> > If someone knows how to get the original Usenet messages from what Google
> > published, let me know.
>
> I don't have those, but I do have a copy of Python 0.9.1 with u
It was not that bad, though:
https://github.com/smontanaro/python-0.9.1/compare/main...Ringdingcoder:original
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3
> When I see diffs like this (your git vs. the unshar result) I tend to
> trust unshar more:
Sorry, it was not you. I meant the github repo from this e-mail thread.
On Wed, Oct 21, 2020 at 3:51 AM Gregory P. Smith wrote:
>
> meta: i've written too many words and edited so often i can't see my own
> typos and misedits anymore. i'll stop now. :)
Haha! Very interesting background, thank you for writing down all of this!
> Now run the same code inside the REPL:
>
> Python 3.8.3 (tags/v3.8.3:6f8c832, May 13 2020, 22:20:19) [MSC v.1925 32
> bit (Intel)] on win32
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import sys, time
> >>> for i in range(1,11):
> ... sys.stdout.write('\r%
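(The quoted REPL snippet is cut off. The usual shape of such a carriage-return progress line, as a reconstruction rather than the poster's exact code, is:)

```python
import sys
import time

# '\r' returns the cursor to the start of the line, so each write
# overwrites the previous percentage; flush so it appears without a newline.
for i in range(1, 11):
    sys.stdout.write('\r%3d%%' % (i * 10))
    sys.stdout.flush()
    time.sleep(0.05)
sys.stdout.write('\n')
```

Run as a script this animates one line in a terminal; inside the plain REPL, the `write` return values echoing after each statement are what typically spoils the effect.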
On Tue, Sep 18, 2018 at 8:38 AM INADA Naoki wrote:
> I think this topic should split to two topics: (1) Guard Python
> process from Spectre/Meltdown
> attack from other process, (2) Prohibit Python code attack other
> processes by using
> Spectre/Meltdown.
(3) Guard Python from performance degra
> As much as Steve is unlikely to do the work to initiate and
> maintain support of these other tools—whether due to his employer's
> interests or his own—I too was unlikely to do work like this thread is
> asking. In fact, the chances I would have done it were zero because I was
> sitting on my co
On Tue, Jun 13, 2017 at 2:26 AM, Nathaniel Smith wrote:
> On Mon, Jun 12, 2017 at 6:29 AM, Stefan Ring wrote:
>>
>> > Yury in the comment for PR 2108 [1] suggested more complicated code:
>> >
>> > do_something()
>> > try:
>> >
> Yury in the comment for PR 2108 [1] suggested more complicated code:
>
>     do_something()
>     try:
>         do_something_other()
>     except BaseException as ex:
>         try:
>             undo_something()
>         finally:
>             raise ex
And this is still bad, because it loses
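(The reply is cut off. One concrete way the `raise ex` pattern interacts with exception chaining, as an illustration and not necessarily the exact point the poster was making, is what happens when the undo step itself fails:)

```python
def do_something_other():
    raise ValueError("original failure")

def undo_something():
    raise RuntimeError("undo failed too")

try:
    try:
        do_something_other()
    except BaseException as ex:
        try:
            undo_something()
        finally:
            raise ex       # re-raised while the RuntimeError is in flight
except BaseException as final:
    # The original exception wins, and the undo failure survives as context.
    print(type(final).__name__)              # ValueError
    print(type(final.__context__).__name__)  # RuntimeError
```

The `finally: raise ex` makes the original `ValueError` propagate, demoting the failure from `undo_something()` to `__context__` on the re-raised exception.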
> That is usually what I can expect in case of tasks executed in parallel on
> different CPUs. But my example should not be the case, due to the GIL. What
> am I missing? Thank you very much, and sorry again for the OT :(
With such finely intermingled thread activity, there might be a fair
bit of
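(A quick way to see how finely the GIL interleaves two CPU-bound threads — switch granularity is governed by `sys.setswitchinterval`, roughly 5 ms by default — is to log which thread appended each entry; a small sketch:)

```python
import threading

log = []

def work(tag, n=200_000):
    # Pure-Python appends never release the GIL voluntarily, yet the
    # interpreter still switches threads every switch interval.
    for _ in range(n):
        log.append(tag)

threads = [threading.Thread(target=work, args=(t,)) for t in "AB"]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Count the boundaries where execution switched between the two threads.
switches = sum(1 for a, b in zip(log, log[1:]) if a != b)
print(len(log), switches)
```

The switch count varies run to run, which is exactly the finely intermingled activity described above: the GIL serializes the bytecode but does not keep one thread's work contiguous.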
> now that the SPEC file of fedora is open source, how about redhat's, how
> could I get it?
Fedora's spec files lives here:
http://pkgs.fedoraproject.org/cgit/rpms/python3.git
So to sum this up, you claim that PyLong_FromUnsignedLongLong can
somehow produce a number larger than the value range of a 64 bit
number (0x10180). I have a hard time believing this.
Most likely you are looking in the wrong place, mysql_affected_rows
returns 2^64-1, and some Python co
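(The suspicious value is simply the unsigned 64-bit representation of -1; the MySQL C API's `mysql_affected_rows()` returns `(my_ulonglong)-1` to signal an error, which arrives in Python as 2**64 - 1. A quick check:)

```python
import struct

# 2**64 - 1 is how a C "(unsigned 64-bit) -1" looks once it reaches Python.
n = (1 << 64) - 1
print(n)        # 18446744073709551615
print(hex(n))   # 0xffffffffffffffff

# Reinterpreting those same 8 bytes as signed recovers the -1.
signed, = struct.unpack('<q', struct.pack('<Q', n))
print(signed)   # -1
```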
On Wed, Oct 14, 2015 at 3:11 PM, Chris Withers wrote:
> I'm having trouble with some python processes that are using 3GB+ of memory
> but when I inspect them with either heapy or meliae, injected via pyrasite,
> those tools only report total memory usage to be 119Mb.
>
> This feels like the old "p
On Tue, Jan 20, 2015 at 3:35 PM, Neil Girdhar wrote:
> I get error:
>
> TypeError: init_builtin() takes exactly 1 argument (0 given)
>
> The only source file that can generate that error is
> Modules/_ctypes/_ctypes.c, but when I make changes to that file such as:
>
> PyErr_Format(PyExc_Ty
On Tue, Jan 6, 2015 at 8:52 AM, Dmitry Kazakov wrote:
> Greetings.
>
> I'm sorry if I'm too insistent, but it's not truly rewarding to
> constantly improve a patch that no one appears to need. Again, I
> understand people are busy working and/or reviewing critical patches,
> but 2 months of inacti
On Fri, Jan 10, 2014 at 4:35 PM, Nick Coghlan wrote:
> On 10 January 2014 13:32, Lennart Regebro wrote:
>> No, because your environment have a default language. And Python has a
>> default encoding. You only get problems when some file doesn't use the
>> default encoding.
>
> The reason Python 3
> just became harder to use for that purpose.
The entire discussion reminds me very much of the situation with file
names in OS X. Whenever I want to look at an old zip file or tarball
which happens to have been lying around on my hard drive for a decade
or more, I can't because OS X insist that f
> Yup, in fact, if I hadn't come up with the __read[gf]sword() trick,
> my only other option would have been TLS (or the GetCurrentThreadId
> /pthread_self() approach in the presentation). TLS is fantastic,
> and it's definitely an intrinsic part of the solution (the "Y" part
>
> I built something very similar for my company last year, and it’s been running
> flawlessly in production at a few customer sites since, with avg. CPU usage
> ~50%
> around the clock. I even posted about it on the Python mailing list [1] where
> there was almost no resonance at that time. I neve
Hello,
I built something very similar for my company last year, and it’s been running
flawlessly in production at a few customer sites since, with avg. CPU usage ~50%
around the clock. I even posted about it on the Python mailing list [1] where
there was almost no resonance at that time. I never p
Adam Olsen gmail.com> writes:
> So you want responsiveness when idle but throughput when busy?
Exactly ;)
> Are those calculations primarily python code, or does a C library do
> the grunt work? If it's a C library you shouldn't be affected by
> safethread's increased overhead.
>
It's Python
Adam Olsen gmail.com> writes:
>
> On Wed, Mar 19, 2008 at 10:09 AM, Stefan Ring visotech.at> wrote:
> > Adam Olsen gmail.com> writes:
> >
> > > Can you try with a call to sched_yield(), rather than nanosleep()? It
> > > should have the sam
Adam Olsen gmail.com> writes:
> Can you try with a call to sched_yield(), rather than nanosleep()? It
> should have the same benefit but without as much performance hit.
>
> If it works, but is still too much hit, try tuning the checkinterval
> to see if you can find an acceptable throughput/re
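(The "checkinterval" knob referred to here is the Python 2-era `sys.setcheckinterval`, a count of bytecode instructions between GIL checks; in Python 3.2+ the equivalent tuning point is `sys.setswitchinterval`, expressed in seconds. A minimal sketch:)

```python
import sys

default = sys.getswitchinterval()   # typically 0.005 s
print(default)

# A smaller interval switches threads more often: better latency
# for I/O-ish threads, at some cost in raw throughput.
sys.setswitchinterval(0.001)
print(sys.getswitchinterval())

sys.setswitchinterval(default)      # restore the default
```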
The company I work for has over the last couple of years created an
application server for use in most of our customer projects. It embeds Python
and most project code is written in Python by now. It is quite resource-hungry
(several GB of RAM, MySQL databases of 50-100GB). And of course it is
mul