for i in iterable while cond:
    blahblah
or perhaps:
while cond for i in iterable:
    blahblah
A while-for or for-while loop would be a novel invention, not seen in
any other language that I know of. I seriously doubt its usefulness
though...
Sturla Molden
(),
(i for i in range(100)) )
for i in gen: print i
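For what it's worth, the effect of the proposed for-while can already be had with itertools.takewhile; the predicate below is only an illustrative stand-in for "cond":

```python
from itertools import takewhile

# "for i in iterable while cond" spelled with takewhile: iteration
# stops as soon as the predicate returns False for an item.
gen = takewhile(lambda i: i < 5, (i for i in range(100)))
result = list(gen)  # → [0, 1, 2, 3, 4]
```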
Sturla Molden
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe:
http://mail.python.org/mailman/options/python-dev
Mingw32CCompiler in cygwincompiler.py emits the flag -mno-cygwin.
This is used to make Cygwin's gcc behave as mingw. As of gcc 4.6 it is
not recognized by the mingw gcc compiler itself, and causes a crash. It
should be removed because it is never needed for mingw (in any version),
only
not be that important. Just move the bottlenecks
out of Python and you are much better off.
Regards,
Sturla Molden
threading
model. I just think it is a mistake to let multiple OS threads touch the
same interpreter.
Sturla Molden
Terry Reedy:
MingW has become less attractive in recent years by the difficulty
in downloading and installing a current version and finding out how to
do so. Some projects have moved on to the TDM packaging of MingW.
http://tdm-gcc.tdragon.net/
MinGW has become a mess. Equation.com
Please understand that this very choice is there already.
That's great. Is that what the documentation refers to when it says
MSVCCompiler will normally choose the right compiler, linker etc. on its
own. To override this choice, the environment variables
DISTUTILS_USE_SDK and MSSdk must
At one point Mike Fletcher published a patch to make distutils use the
SDK compiler. It would make a lot of sense if it were built in to
distutils as a further compiler choice.
Please understand that this very choice is there already.
Yes you are right. I did not know about
David Cournapeau:
Autotools only help for posix-like platforms. They are certainly a big
hindrance on windows platform in general,
That is why mingw has MSYS.
mingw is not just a gcc port, but also a miniature gnu environment for
windows. MSYS' bash shell allows us to do things like:
$
The problem really is that when people ask for MingW support, they mean
all kinds of things,
Usually it means they want to build C or C++ extensions, don't have Visual
Studio, don't know about the SDK compiler, and have misunderstood the CRT
problem.
As long as Python builds with the free
Cesare Di Mauro:
I like to use Windows because it's a comfortable and productive
environment,
certainly not because someone forced me to use it.
Also, I have limited time, so I want to spend it as well as I can,
focusing
on solving real problems. Setup, Next, Next, Finish, and I want it
Atomic operations (InterlockedCompareExchange, et al.) are used on the
field 'owned' in NRMUTEX. These functions require the memory to be aligned
on 32-bit boundaries. They also require the volatile qualifier. Three
small changes are therefore needed (see below).
Regards,
Sturla Molden
On 10.03.2011 03:02, Mark Hammond wrote:
These issues are best put in the tracker so they don't get lost -
especially at the moment with lots of regulars at pycon.
Ok, sorry :-)
It would also be good to know if there is an actual behaviour bug
caused by this (ie, what problems can be
On 10.03.2011 11:06, Scott Dial wrote:
http://www.kernel.org/doc/Documentation/volatile-considered-harmful.txt
The important part here (forgive me for being a pedant) is that (1)
register allocation of the 'owned' field is actually unwanted, and (2)
Microsoft specify 'volatile' in calls
not need any global synchronization.
(I am setting follow-up to the Python Ideas list, it does not belong on
Python dev.)
Sturla Molden
On 24.05.2011 00:07, Artur Siekielski wrote:
Oh, and using explicit shared memory or mmap is much harder, because
you have to map the whole object graph into bytes.
It sounds like you need PYRO, POSH or multiprocessing's proxy objects.
Sturla
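As a sketch of what multiprocessing's proxy objects look like in practice (the dict contents here are purely illustrative): the dict lives in a manager process, and every operation on the proxy is serialized and sent over a pipe/socket, which is exactly the cost discussed in the follow-ups.

```python
from multiprocessing import Manager

# Manager() starts a server process; `shared` is only a proxy object.
manager = Manager()
shared = manager.dict()
shared["counter"] = 1
shared["counter"] += 1   # read-modify-write: two round trips, not atomic
value = shared["counter"]
manager.shutdown()
```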
On 24.05.2011 11:55, Artur Siekielski wrote:
PYRO/multiprocessing proxies aren't a comparable solution because of
ORDERS OF MAGNITUDE worse performance. You are comparing direct memory
access vs. serialization/message passing through sockets/pipes.
The bottleneck is likely the serialization,
On 24.05.2011 13:31, Maciej Fijalkowski wrote:
Not sure exactly what scenario you are discussing here, but storing
reference counts outside of objects has (at least on a single
processor) worse cache locality than inside objects.
Artur Siekielski is not talking about cache locality, but
On 24.05.2011 11:55, Artur Siekielski wrote:
POSH might be good, but the project has been dead for 8 years. And this
copy-on-write is nice because you don't need changes/restrictions to
your code, or a special garbage collector.
Then I have a solution for you, one that is cheaper than anything
On 24.05.2011 17:39, Artur Siekielski wrote:
Disk access is about 1000x slower than memory access in C, and Python
in a worst case is 50x slower than C, so there is still a huge win
(not to mention that in a common case Python is only a few times
slower).
You can put databases in shared
On 09.08.2011 11:33, Марк Коренберг wrote:
I am probably reinventing the wheel. I want developers to tell me
why we cannot remove the GIL in this way:
1. Remove the GIL completely with all its current logic.
2. Add its own RW-locking to all mutable objects (like list or dict)
3. Add RW-locks to every
On 12.08.2011 18:51, Xavier Morel wrote:
* Erlang uses erlang processes, which are very cheap preempted
*processes* (no shared memory). There have always been tens of
thousands to millions of erlang processes per interpreter; source
contention within the interpreter going back to pre-SMP by
On 12.08.2011 18:57, Rene Nejsum wrote:
My two Danish kroner on GIL issues…
I think I understand the background and need for GIL. Without it
Python programs would have been cluttered with lock/synchronized
statements and C-extensions would be harder to write. Thanks to Sturla
Molden
On 13.08.2011 17:43, Antoine Pitrou wrote:
These days we have PyGILState_Ensure():
http://docs.python.org/dev/c-api/init.html#PyGILState_Ensure
With the most recent Cython (0.15) we can just do:
with gil:
    suite
to ensure holding the GIL.
And similarly from a thread holding the GIL
On 10.08.2011 13:43, Guido van Rossum wrote:
They have a specific plan, based on Software Transactional Memory:
http://morepypy.blogspot.com/2011/06/global-interpreter-lock-or-how-to-kill.html
Microsoft's experiment to use STM in .NET failed though. And Linux got
rid of the BKL without STM.
Do the numbers add up?
.005 defects per 1,000 lines of code is one defect in every 200,000 lines of
code.
However they also claim that to date, the Coverity Scan service has analyzed
nearly 400,000 lines of Python code and identified 996 new defects – 860 of
which have been fixed by the
Brett Cannon br...@python.org wrote:
The Visual Studio team has publicly stated they will never support C99,
so dropping C89 blindly is going to alienate a big part of our user base
unless we switch to C++ instead. I'm fine with trying to pull in C99
features, though, that we can somehow
On 05.11.2012 15:14, Xavier Morel wrote:
Such as segfaulting the interpreter. I seem to reliably segfault
everything every time I try to use ctypes.
You can do that with C extensions too, by the way. Apart from that,
dependency on ABI is more annoying to maintain across platforms than
On 14 March 2013 at 23:23, Trent Nelson tr...@snakebite.org wrote:
For the record, here are all the Windows calls I'm using that have
no *direct* POSIX equivalent:
Interlocked singly-linked lists:
- InitializeSListHead()
- InterlockedFlushSList()
On 07.04.2013 21:50, Martin v. Löwis wrote:
So I believe that extension building is becoming more and more
painful on Windows for Python 2.7 as time passes (and it is already
way more painful than it is on Linux), and I see no way to do much
about that. The stable ABI would have been a
Antoine Pitrou wrote:
(*) http://svn.python.org/view/sandbox/trunk/ccbench/
I've run it twice on my dual core machine. It hangs every time, but not in the
same place:
D:\pydev\python\trunk\PCbuild> python.exe \tmp\ccbench.py
Ah, you should report a bug then. ccbench is pure Python
Sturla Molden wrote:
does not crash the interpreter, but it seems it can deadlock.
Here is what I get on a quadcore (Vista, Python 2.6.3).
This is what I get with affinity set to CPU 3.
There are deadlocks happening at random locations in ccbench.py. It gets
worse with affinity set to one
Antoine Pitrou wrote:
Kristján sent me a patch which I applied and is supposed to fix this.
Anyway, thanks for the numbers. The GIL does seem to fare a bit better (zero
latency with the Pi calculation in the background) than under Linux, although it
may be caused by the limited resolution of
Sturla Molden wrote:
However, David Beazley is not talking about Windows. Since the GIL is
apparently not a mutex on Windows, it could behave differently. So I
wrote a small script that constructs a GIL battle, and records how often
a check-interval results in a thread-switch
latency, which I could have done with a
direct hook into ceval.c. So statistics to the rescue. But on the bright
side, it reduces the overhead of the profiler.
Would that help?
Sturla Molden
___
Python-Dev mailing list
Python-Dev@python.org
http
Phillip Sitbon wrote:
Some of this is more low-level. I did see higher performance when
using non-Event objects, although I have not had time to follow up and
do a deeper analysis. The GIL flashing problem with critical
sections can very likely be rectified with a call to Sleep(0) or
Kristján Valur Jónsson wrote:
Thanks, I'll take a look in that direction.
I have a suggestion, forgive me if I am totally ignorant. :-)
Sturla Molden
#include <windows.h>

union __reftime {
    double us;
    __int64 bits;
};

static volatile union __reftime __ref_perftime
Sturla Molden wrote:
I have a suggestion, forgive me if I am totally ignorant. :-)
Ah, damn... Since there is a GIL, we don't need any of that crappy
synchronization. And my code does not correct for the 20 ms time jitter
in GetSystemTimeAsFileTime. Sorry!
S.M
Antoine Pitrou wrote:
- priority requests, which is an option for a thread requesting the GIL
to be scheduled as soon as possible, and forcibly (rather than any other
threads).
So Python threads become preemptive rather than cooperative? That would
be great. :-)
time.sleep should generate a
Antoine Pitrou wrote:
- priority requests, which is an option for a thread requesting the GIL
to be scheduled as soon as possible, and forcibly (rather than any other
threads). T
Should a priority request for the GIL take a priority number?
- If two threads make priority requests for the
Why does this happen?
>>> type(2**31-1)
<type 'long'>
It seems to have broken NumPy's RNG on Win32.
Curt Hagenlocher wrote:
Does that not happen on non-Windows platforms? 2**31 can't be
represented as a 32-bit signed integer, so it's automatically promoted
to a long.
Yes you are right.
I've now traced down the problem to an integer overflow in NumPy.
It seems to have this Pyrex code:
Martin v. Löwis wrote:
b) notice that, on Windows, minimum wait resolution may be as large as
15ms (e.g. on XP, depending on the hardware). Not sure what this
means for WaitForMultipleObjects; most likely, if you ask for a 5ms
wait, it waits until the next clock tick. It would be bad
Martin v. Löwis wrote:
Maybe you should study the code under discussion before making such
a proposal.
I did, and it does nothing of what I suggested. I am sure I can make the
Windows GIL
in ceval_gil.h and the mutex in thread_nt.h a lot more precise and
efficient.
This is the kind of code
Sturla Molden wrote:
I would turn on multimedia timer (it is not on by default), and
replace this
call with a loop, approximately like this:
for (;;) {
    r = WaitForMultipleObjects(2, objects, TRUE, 0);
    /* blah blah blah */
    QueryPerformanceCounter(&cnt);
    if (cnt > timeout) break;
}
Sturla Molden wrote:
And just so you don't ask: There should not just be a Sleep(0) in the
loop, but a sleep that gets shorter and shorter until a lower
threshold is reached, where it skips to Sleep(0). That way we avoid
hammering on WaitForMultipleObjects and QueryPerformanceCounter more
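The backoff described here can be sketched in Python (all names and constants are illustrative, not CPython internals; the real loop would poll WaitForMultipleObjects rather than a Python predicate):

```python
import time

def adaptive_wait(predicate, timeout, floor=1e-4):
    """Poll until predicate() is true or `timeout` seconds elapse.

    Sleep intervals shrink toward zero: exponential backoff down to
    `floor`, then zero-length sleeps (the Sleep(0) case in the post).
    """
    deadline = time.monotonic() + timeout
    interval = 0.01
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
        # Shorter and shorter sleeps, ending in Sleep(0)-style yields.
        interval = interval / 2 if interval > floor else 0.0
    return predicate()
```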
Antoine Pitrou wrote:
It certainly is.
But once again, I'm no Windows developer and I don't have a native Windows
host to test on; therefore someone else (you?) has to try.
I'd love to try, but I don't have VC++ to build Python, I use GCC on
Windows.
Anyway, the first thing to try then is
Martin v. Löwis wrote:
I did, and it does nothing of what I suggested. I am sure I can make the
Windows GIL in ceval_gil.h and the mutex in thread_nt.h a lot more precise
and efficient.
Hmm. I'm skeptical that your code makes it more accurate, and I
completely fail to see that it makes
of their dependent extensions to
Py3k. The community of scientists and engineers using Python is growing,
but shutting down 2.x support might bring an end to that.
Sturla Molden
Kevin Modzelewski k...@dropbox.com wrote:
Since it's the question that I think most people will inevitably (and
rightly) ask, why do we think there's a place for Pyston when there's PyPy
and (previously) Unladen Swallow?
Have you seen Numba, the Python JIT that integrates with NumPy?
Kevin Modzelewski k...@dropbox.com wrote:
Using optional type annotations is a really promising strategy and may
eventually be added to Pyston, but our primary target right now is
unmodified and untyped Python code
What I meant to say is that Numba already has done the boiler-plate coding.
Björn Lindqvist bjou...@gmail.com wrote:
import numpy as np
from numpy.linalg import inv, solve
# Using dot function:
S = np.dot((np.dot(H, beta) - r).T,
           np.dot(inv(np.dot(np.dot(H, V), H.T)), np.dot(H, beta) - r))
# Using dot method:
S = (H.dot(beta) -
Mike Miller python-...@mgmiller.net wrote:
The main rationale given (for not using the standard %ProgramFiles%) has been
that the full path to python is too long to type, and ease of use is more
important than the security benefits given by following Windows conventions.
C:\Program
Stefan Behnel stefan...@behnel.de wrote:
Thus my proposal to compile the modules in CPython with Cython, rather than
duplicating their code or making/keeping them CPython specific. I think
reducing the urge to reimplement something in C is a good thing.
For algorithmic and numerical code,
Stefan Behnel stefan...@behnel.de wrote:
So the
argument in favour is mostly a pragmatic one. If you can have 2-5x faster
code essentially for free, why not just go for it?
It would be easier if the GIL or Cython's use of it was redesigned. Cython
just grabs the GIL and holds on to it until it
On 05/06/14 22:51, Nathaniel Smith wrote:
This gets evaluated as:
tmp1 = a + b
tmp2 = tmp1 + c
result = tmp2 / c
All these temporaries are very expensive. Suppose that a, b, c are
arrays with N bytes each, and N is large. For simple arithmetic like
this, then costs are dominated
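The temporaries in the evaluation above can be avoided with NumPy's out= argument, which is the kind of rewrite this elision discussion aims to automate (array size and contents here are illustrative):

```python
import numpy as np

a = np.ones(1000)
b = np.ones(1000)
c = np.ones(1000)

# Naive: (a + b + c) / c allocates a fresh temporary at every step.
naive = (a + b + c) / c

# Reusing one buffer with out= avoids the extra temporaries.
buf = np.add(a, b)          # tmp1 = a + b (the only allocation)
np.add(buf, c, out=buf)     # tmp1 += c, in place
np.divide(buf, c, out=buf)  # result = tmp1 / c, in place
```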
Julian Taylor jtaylor.deb...@googlemail.com wrote:
The problem with this approach is that it is already difficult enough to
handle memory in numpy.
I would not do this in a way that complicates memory management in NumPy. I
would just replace malloc and free with temporarily cached versions.
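A minimal sketch of the "temporarily cached malloc/free" idea: freed buffers go on a per-size free list and are handed back on the next allocation of the same size. All names here are hypothetical illustrations, not NumPy internals:

```python
# Hypothetical size-keyed buffer cache; not actual NumPy code.
_free_lists = {}

def cached_alloc(nbytes):
    bufs = _free_lists.get(nbytes)
    if bufs:
        return bufs.pop()        # reuse a recently freed buffer
    return bytearray(nbytes)     # fall back to a real allocation

def cached_free(buf):
    # Keep the buffer around instead of releasing it immediately.
    _free_lists.setdefault(len(buf), []).append(buf)

buf1 = cached_alloc(4096)
cached_free(buf1)
buf2 = cached_alloc(4096)        # the cached buffer comes back
```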
Nathaniel Smith n...@pobox.com wrote:
The proposal in my initial email requires zero pthreads, and is
substantially more effective. (Your proposal reduces only the alloc
overhead for large arrays; mine reduces both alloc and memory access
overhead for both large and small arrays.)
My
Brett Cannon bcan...@gmail.com wrote:
Nope. A new minor release of Python is a massive undertaking which is why
we have saved ourselves the hassle of doing a Python 2.8 or not giving a
clear signal as to when Python 2.x will end as a language.
Why not just define Python 2.8 as Python 2.7
Brian Curtin br...@python.org wrote:
Adding features into 3.x is already not enough of a carrot on the
stick for many users. Intentionally leaving 2.7 on a dead compiler is
like beating them with the stick.
Those who want to build extensions on Windows will just use MinGW
(currently GCC
Brian Curtin br...@python.org wrote:
Well we're certainly not going to assume such a thing. I know people do
that, but many don't (I never have).
If Python 2.7 users are left with a dead compiler on Windows, they will
find a solution. For example, Enthought is already bundling their Python
Eli Bendersky eli...@gmail.com wrote:
While we're at it, Clang in nearing a stage where it can compile C and C++
on Windows *with ABI-compatibility to MSVC* (yes, even C++) -- see
http://clang.llvm.org/docs/MSVCCompatibility.html
for
Brian Curtin br...@python.org wrote:
If Python 2.7 users are left with a dead compiler on Windows, they will
find a solution. For example, Enthought is already bundling their Python
distribution with gcc 2.8.1 on Windows.
Again, not something I think we should depend on. A lot of people use
Greg Ewing greg.ew...@canterbury.ac.nz wrote:
Julian Taylor wrote:
tp_can_elide receives two objects and returns one of three values:
* can work inplace, operation is associative
* can work inplace but not associative
* cannot work inplace
Does it really need to be that complicated? Isn't
Nathaniel Smith n...@pobox.com wrote:
with numpy.accelerate:
    x = expression
    y = expression
    z = expression
# evaluation of x, y, z happens here
Using an alternative evaluation engine is indeed another way to
optimize execution, which is why projects like numexpr, numba, theano,
of OpenBLAS to use as BLAS and LAPACK
when building NumPy and the SciPy stack. Intel MKL or ATLAS might be
preferred though, due to concerns about the maturity of OpenBLAS.
Sturla Molden
.
Apple and Cray solved the problem on their platforms by building
high-performance BLAS and LAPACK libraries into their operating systems
(Apple Accelerate Framework and Cray libsci). But AFAIK, Windows does not
have a BLAS library from Microsoft.
Sturla Molden
Nathaniel Smith n...@pobox.com wrote:
You may want to get in touch with Carl Kleffner -- he's done a bunch
of work lately on getting a mingw-based toolchain to the point where
it can build numpy and scipy.
To build *Python extensions*, one can use Carl's toolchain or the VC9
compiler for
Paul Moore p.f.mo...@gmail.com wrote:
Having said that, I'm personally not interested in this, as I am happy
with MSVC Express. Python 3.5 will be using MSVC 14, where the express
edition supports both 32 and 64 bit.
If you build Python yourself, you can (more or less) use whichever version
Larry Hastings la...@hastings.org wrote:
Just to make something clear that may not be clear to non-Windows
developers: the C library is implicitly part of the ABI.
MacOS X also has this issue, but it is less known among Mac developers! There
tends to be multiple versions of the C library, one
Victor Stinner victor.stin...@gmail.com wrote:
Is MinGW fully compatible with MSVS ABI? I read that it reuses the
MSVCRT, but I don't know if it's enough.
Not out of the box. See:
https://github.com/numpy/numpy/wiki/Mingw-static-toolchain
Sturla
Larry Hastings la...@hastings.org wrote:
So as a practical matter I think I'd prefer if we continued to only
support MSVC. In fact I'd prefer it if we removed support for other
Windows compilers, instead asking those maintainers to publish their own
patches / repos, in the way that Stackless
Merlijn van Deen valhall...@arctus.nl wrote:
VC++ 2008/2010 EE do not *bundle* a 64-bit compiler,
Actually it does, but it is not available from the UI. You can use it from
the command line, though.
Sturla
Steve Dower steve.do...@microsoft.com wrote:
I don't have any official confirmation, but my guess would be that the
64-bit compilers were omitted from the VC 2008 Express to save space
(bearing in mind that WinXP was the main target at that time, which had
poor 64-bit support, and very few
Larry Hastings la...@hastings.org wrote:
CPython doesn't require OpenBLAS. Not that I am not receptive to the
needs of the numeric community... but, on the other hand, who in the
hell releases a library with Windows support that doesn't work with MSVC?!
It uses AT&T assembly syntax instead of
Antoine Pitrou solip...@pitrou.net wrote:
But you can compile OpenBLAS with one compiler and then link it to
Python using another compiler, right? There is a single C ABI.
BLAS and LAPACK are actually Fortran, which does not have a single C ABI.
The ABI depends on the Fortran compiler. g77 and
Sturla Molden sturla.mol...@gmail.com wrote:
BLAS and LAPACK are actually Fortran, which does not have a single C ABI.
The ABI depends on the Fortran compiler. g77 and gfortran will produce
different C ABIs. This is a consistent source of PITA in any scientific
programming that combines C
Antoine Pitrou solip...@pitrou.net wrote:
It sounds like whatever MSVC produces should be the de facto standard
under Windows.
Yes, and that is what Clang does on Windows. It is not as usable as MinGW
yet, but soon it will be. Clang also suffers from the lack of a Fortran
compiler, though.
Steve Dower steve.do...@microsoft.com wrote:
Is there some reason the Fortran part can't be separated out into a DLL?
DLL hell, I assume. Using the Python extension module loader makes it less
of a problem. If we stick with .pyd files where everything is statically
linked we can rely on the
On 28/05/15 21:37, Chris Barker wrote:
I think it's great for it to be used by end users as a system library /
utility, i.e. like you would the system libc -- so if you can write a
little python script that only uses the stdlib -- you can simply deliver
that script.
No it is not, because
Donald Stufft don...@stufft.io wrote:
Honestly, I’m on an OS that *does* ship Python (OS X) and part of me hopes
that they stop shipping it. It’s very rare that someone ships Python as
part of their OS without modifying it in some way, and those modifications
almost always cause pain to some
Brett Cannon wrote:
> Ned also neglected to mention his byterun project which is a pure Python
> implementation of the CPython eval loop: https://github.com/nedbat/byterun
I would also encourage you to take a look at Numba. It is an
Victor Stinner wrote:
> Is it worth to support a compiler that in 2016 doesn't support the C
> standard released in 1999, 17 years ago?
MSVC only supports C99 when it's needed for C++11 or some MS extension to C.
Is it worth supporting MSVC? If not, we have Intel C,
Nathaniel Smith wrote:
> No-one's proposing to use C99 indiscriminately;
> There's no chance that CPython is going to drop MSVC support in 3.6.
Stinner was proposing that by saying
"Is it worth to support a compiler that in 2016 doesn't support the C
standard released in 1999,
Guido van Rossum wrote:
> I'm not sure I meant that. But if I have a 3rd party extension that
> compiles with 3.5 headers using C89, then it should still compile with
> 3.6 headers using C99. Also if I compile it for 3.5 and it only uses
> the ABI it should still be linkable
wrote:
> I share Guido's priority there - source compatibility is more important than
> smoothing a few of C's rough edges. Maybe the next breaking change release
> this should be considered (python 4000... python 5000?)
I was simply pointing out that Guido's priority
... macros cannot be
replaced by inline functions, as header files must still be plain C89.
Sturla Molden
Matthias Klose wrote:
> GCC 5 and GCC 6 default to C11 (-std=gnu11), does the restriction to C99 mean
> that -std=gnu99 should be passed explicitly?
Also note that -std=c99 is not the same as -std=gnu99. The latter allows
GNU extensions like computed goto. Does the interpreter
"Stephen J. Turnbull" wrote:
> I may be talking through my hat here, but Apple has been using LLVM
> for several major releases now. They seem to be keeping the GCC
> frontend stuck at 4.2.1, though. So just because we've been using GCC
> 4.2.1 on Mac,
that problem - building on newer systems with deployment targets,
> installing third-party compilers, etc.
Clang is also available (and installed) on OSX 10.8 and earlier, although
gcc 4.2.1 is the default frontend to LLVM. The easiest solution to get C99
on those
Guido van Rossum wrote:
> This feels close to a code of conduct violation. This kind of language
> may be okay on the Linux kernel list, but I don't see the point of it
> here.
Sorry, I should have found a more diplomatic formulation. But the principle
remains, build problems