[pypy-dev] Re: Contribute a RISC-V 64 JIT backend

2024-01-21 Thread Maciej Fijalkowski
Hi Logan

In addition to what Matti says, there are random fuzzing tests like
test_ll_random.py in jit/backend/test. Run those for longer than the
default (e.g. a whole night) to see whether they find issues.
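That overnight run can be sketched as a small driver loop. This is only an illustration: the test path is the one named above, and the assumption is that py.test is on your PATH and you run from the rpython/ directory.

```python
# Sketch: loop a test command until a time budget runs out, counting failures.
# (Assumption: py.test is available; path is relative to the rpython/ dir.)
import subprocess
import time

def run_repeatedly(cmd, seconds):
    """Run `cmd` in a loop until `seconds` have elapsed; count failures."""
    deadline = time.time() + seconds
    runs = failures = 0
    while time.time() < deadline:
        runs += 1
        if subprocess.call(cmd) != 0:
            failures += 1
    return runs, failures

# A whole night of fuzzing (uncomment to use):
# print(run_repeatedly(["py.test", "jit/backend/test/test_ll_random.py"],
#                      8 * 60 * 60))
```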

Best,
Maciej Fijalkowski

On Tue, 16 Jan 2024 at 07:02, Logan Chien  wrote:
>
> Hi,
>
> I have good news: the RISC-V backend can pass as many unit tests as the 
> AArch64 backend.  I got vmprof and codemap working this weekend.  I also 
> completed a full translation and got a workable pypy executable.
>
> I have two questions now:
>
> 1. Are there other test suites that I can check for the correctness?
> 2. How do we measure the performance?  Do we have a command line that can run 
> all benchmarks?
>
> Thank you in advance.
>
> Regards,
> Logan
>
> p.s. All changes are at: https://github.com/loganchien/pypy/tree/rv64
>
> On Mon, Jan 15, 2024 at 8:54 PM Logan Chien  wrote:
>>
>> Hi Maciej,
>>
>> Thank you for your information.  Let me conduct more surveys.  Thanks.
>>
>> Regards,
>> Logan
>>
>> On Thu, Jan 11, 2024 at 2:44 AM Maciej Fijalkowski  wrote:
>>>
>>> Hi Logan
>>>
>>> As far as I remember (and neither Armin nor I did any major pypy
>>> development recently), the vectorization was never really something we
>>> got to work to the point where it was worth it. In theory, having
>>> vectorized operations like numpy arrays to compile to vectorized CPU
>>> instructions would be glorious, but in practice it never worked well
>>> enough for us to enable it by default.
>>>
>>> Best,
>>> Maciej
>>>
>>> On Wed, 10 Jan 2024 at 08:39, Logan Chien  wrote:
>>> >
>>> > Hi Armin,
>>> >
>>> > > About the V extension, I'm not sure it would be helpful; do you plan
>>> > > to use it in the same way as our x86-64 vector extension support?  As
>>> > > far as I know this has been experimental all along and isn't normally
>>> > > enabled in a standard PyPy.  (I may be wrong about that.)
>>> >
>>> > Well, if the vector extension is not enabled by default even for the
>>> > x86-64 backend, then I will have to do more surveying, planning, and
>>> > designing.  I haven't read the vectorization code yet.
>>> >
>>> > Anyway, I will finish the basic JIT first.
>>> >
>>> > Regards,
>>> > Logan
>>> >
>>> > On Tue, Jan 9, 2024 at 2:22 AM Armin Rigo  wrote:
>>> >>
>>> >> Hi Logan,
>>> >>
>>> >> On Tue, 9 Jan 2024 at 04:01, Logan Chien  
>>> >> wrote:
>>> >> > Currently, I only target RV64 IMAD:
>>> >> >
>>> >> > I - Base instruction set
>>> >> > M - Integer multiplication
>>> >> > A - Atomic (used by call_release_gil)
>>> >> > D - Double precision floating point arithmetic
>>> >> >
>>> >> > I don't use the C (compress) extension for now because it may 
>>> >> > complicate the branch offset calculation and register allocation.
>>> >> >
>>> >> > I plan to support the V (vector) extension after I finish the basic 
>>> >> > JIT support.  But there are some unknowns.  I am not sure whether (a) 
>>> >> > I want to detect the availability of the V extension dynamically (thus 
>>> >> > sharing the same pypy executable) or (b) build different executables 
>>> >> > for different combinations of extensions.  Also, I don't have a 
>>> >> > development board that supports the V extension.  I am searching for 
>>> >> > one.
>>> >> >
>>> >> > Another remote goal is to support RV32IMAF (singlefloats) or RV32IMAD. 
>>> >> >  In RISC-V, 32-bit and 64-bit ISAs are quite similar.  The only 
>>> >> > difference is on LW/SW (32-bit) vs. LD/SD (64-bit) and some special 
>>> >> > instructions for 64-bit (e.g. ADDW).  I isolated many of them into 
>>> >> > load_int/store_int helper functions so that it will be easy to swap 
>>> >> > implementations.  However, I am not sure if we have to change the 
>>> >> > object alignment in `malloc_nursery*` (to ensure we align to multiples 
>>> >> > of `double`).  Also, I am not sure whether it is common for RV32 cores 
>>> >> > to include the D extension.  But, anyway, RV32 will be a lower 
>>> >> > priority for me because I will have to figure out how to build a RV32
>>> >> > root filesystem first (p.s. Debian doesn't (officially) support RV32
>>> >> > as of writing).

[pypy-dev] Re: Contribute a RISC-V 64 JIT backend

2024-01-11 Thread Maciej Fijalkowski
Hi Logan

As far as I remember (and neither Armin nor I did any major pypy
development recently), the vectorization was never really something we
got to work to the point where it was worth it. In theory, having
vectorized operations like numpy arrays to compile to vectorized CPU
instructions would be glorious, but in practice it never worked well
enough for us to enable it by default.

Best,
Maciej

On Wed, 10 Jan 2024 at 08:39, Logan Chien  wrote:
>
> Hi Armin,
>
> > About the V extension, I'm not sure it would be helpful; do you plan
> > to use it in the same way as our x86-64 vector extension support?  As
> > far as I know this has been experimental all along and isn't normally
> > enabled in a standard PyPy.  (I may be wrong about that.)
>
> Well, if the vector extension is not enabled by default even for the x86-64
> backend, then I will have to do more surveying, planning, and designing.  I
> haven't read the vectorization code yet.
>
> Anyway, I will finish the basic JIT first.
>
> Regards,
> Logan
>
> On Tue, Jan 9, 2024 at 2:22 AM Armin Rigo  wrote:
>>
>> Hi Logan,
>>
>> On Tue, 9 Jan 2024 at 04:01, Logan Chien  wrote:
>> > Currently, I only target RV64 IMAD:
>> >
>> > I - Base instruction set
>> > M - Integer multiplication
>> > A - Atomic (used by call_release_gil)
>> > D - Double precision floating point arithmetic
>> >
>> > I don't use the C (compress) extension for now because it may complicate 
>> > the branch offset calculation and register allocation.
>> >
>> > I plan to support the V (vector) extension after I finish the basic JIT 
>> > support.  But there are some unknowns.  I am not sure whether (a) I want 
>> > to detect the availability of the V extension dynamically (thus sharing 
>> > the same pypy executable) or (b) build different executables for different 
>> > combinations of extensions.  Also, I don't have a development board that 
>> > supports the V extension.  I am searching for one.
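For what it's worth, option (a) can be pictured roughly like this on a Linux target, where the kernel exposes an "isa" line in /proc/cpuinfo on RISC-V (e.g. "isa : rv64imafdcv"). This is a hypothetical helper, not code from the rv64 branch:

```python
# Hypothetical sketch of option (a), dynamic detection, assuming Linux:
# parse the "isa" line from /proc/cpuinfo and look for the 'v' extension
# letter in the base ISA string.
def isa_has_ext(cpuinfo_text, ext):
    """Return True if single-letter extension `ext` appears in the base
    ISA string of the first "isa" line found in `cpuinfo_text`."""
    for line in cpuinfo_text.splitlines():
        if line.strip().lower().startswith("isa") and ":" in line:
            isa = line.split(":", 1)[1].strip().lower()
            base = isa.split("_", 1)[0]  # drop "_zicsr"-style suffixes
            return ext in base[4:]       # skip the "rv64"/"rv32" prefix
    return False

def cpu_supports_rvv():
    try:
        with open("/proc/cpuinfo") as f:
            return isa_has_ext(f.read(), "v")
    except IOError:
        return False
```

The trade-off against option (b) is the usual one: a single executable with a runtime branch at code-generation time, versus simpler per-target builds.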
>> >
>> > Another remote goal is to support RV32IMAF (singlefloats) or RV32IMAD.  In 
>> > RISC-V, 32-bit and 64-bit ISAs are quite similar.  The only difference is 
>> > on LW/SW (32-bit) vs. LD/SD (64-bit) and some special instructions for 
>> > 64-bit (e.g. ADDW).  I isolated many of them into load_int/store_int 
>> > helper functions so that it will be easy to swap implementations.  
>> > However, I am not sure if we have to change the object alignment in 
>> > `malloc_nursery*` (to ensure we align to multiples of `double`).  Also, I 
>> > am not sure whether it is common for RV32 cores to include the D 
>> > extension.  But, anyway, RV32 will be a lower priority for me because I 
>> > will have to figure out how to build a RV32 root filesystem first (p.s. 
>> > Debian doesn't (officially) support RV32 as of writing).
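The isolation Logan describes can be pictured with a toy helper. The names and the instruction-tuple representation here are hypothetical, not code from the rv64 branch:

```python
# Toy illustration of the load_int/store_int isolation described above:
# the backend asks for a word-sized load/store, and one helper picks the
# RV64 mnemonic (LD/SD) or the RV32 one (LW/SW), so the rest of the
# backend stays word-size agnostic.
XLEN = 64  # assumption: bits per machine word on the target

def load_int(insns, dst, base, offset):
    op = "LD" if XLEN == 64 else "LW"
    insns.append((op, dst, base, offset))

def store_int(insns, src, base, offset):
    op = "SD" if XLEN == 64 else "SW"
    insns.append((op, src, base, offset))
```

Swapping to RV32 then means changing XLEN (plus handling the handful of 64-bit-only instructions such as ADDW), rather than touching every load/store site.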
>>
>> Cool!  Here are a few thoughts I had when I looked at some RISC-V
>> early documents long ago (warning, it may be outdated):
>>
>> Yes, not using the "compress" extension is probably a good approach.
>> That looks like something a compiler might do, but it's quite a bit of
>> work implementation-wise, and it's unclear whether it would help here anyway.
>>
>> About the V extension, I'm not sure it would be helpful; do you plan
>> to use it in the same way as our x86-64 vector extension support?  As
>> far as I know this has been experimental all along and isn't normally
>> enabled in a standard PyPy.  (I may be wrong about that.)
>>
>> Singlefloats: we don't do any arithmetic on singlefloats with the JIT,
>> but it has got a few instructions to pack/unpack double floats into
>> single floats or to call a C-compiled function with singlefloat
>> arguments.  That's not optional, though I admit I don't know how a C
>> compiler compiles these operations if floats are not supported by the
>> hardware.  But as usual, you can just write a tiny C program and see.
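At the value level, the pack/unpack Armin mentions amounts to converting between a 64-bit double and the 32-bit IEEE single encoding. The effect is easy to see from Python; struct here merely stands in for the FCVT-style conversions a backend would emit:

```python
# Round-trip a double through the 32-bit IEEE single-precision encoding.
import struct

def double_to_singlefloat_bits(x):
    """Round a Python float (a C double) to IEEE single; return its 32 bits."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def singlefloat_bits_to_double(bits):
    """Widen 32-bit single-float bits back to a double (always exact)."""
    return struct.unpack("<f", struct.pack("<I", bits))[0]
```

Values exactly representable in single precision (like 1.5) survive the round trip unchanged; others (like 0.1) pick up rounding error on the narrowing step.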
>>
>> I agree that RV32 can be a more remote goal for now.  It should
>> simplify a lot of stuff if you can just assume a 64-bit environment.
>> Plus all the other points you mention: the hardware may not support
>> doubles, and may not be supported by Debian...
>>
>>
>> A bientôt,
>>
>> Armin Rigo
>
> ___
> pypy-dev mailing list -- pypy-dev@python.org
> To unsubscribe send an email to pypy-dev-le...@python.org
> https://mail.python.org/mailman3/lists/pypy-dev.python.org/
> Member address: fij...@gmail.com
___
pypy-dev mailing list -- pypy-dev@python.org
To unsubscribe send an email to pypy-dev-le...@python.org
https://mail.python.org/mailman3/lists/pypy-dev.python.org/
Member address: arch...@mail-archive.com


[pypy-dev] Re: Contribute a RISC-V 64 JIT backend

2024-01-08 Thread Maciej Fijalkowski
Hi Logan

Very cool that you are interested in that! It's often useful to hang out on
IRC, as you can ask questions directly. I have not taken a look at all yet,
but can you tell me what kind of setup one needs for testing it? Are you
using real hardware or emulation?

The approach of starting with tests and getting translation done later
is very much what we have done in the past.

Best,
Maciej

On Mon, 8 Jan 2024 at 09:42, Logan Chien  wrote:
>
> Hi,
>
> I forgot to include the link in my previous email.
>
> If you want to have a look at my prototype, you can find it here:
> https://github.com/loganchien/pypy/tree/rv64
>
> Thanks.
>
> Regards,
> Logan
>
>
>
> On Sun, Jan 7, 2024 at 5:18 PM Logan Chien  wrote:
>>
>> Hi all,
>>
>> I would like to contribute a RISC-V 64 JIT backend for RPython.  I have made 
>> some progress at the end of 2023.
>>
>> ## Status
>>
>> My prototype can pass the test cases below:
>>
>> * test_runner.py
>> * test_basic.py and almost all test_ajit.py related tests (except 
>> test_rvmprof.py)
>> * test_zrpy_gc_boehm.py
>>
>> I am still working on test_zrpy_gc.py though (p.s. I can pass this if I 
>> disable malloc inlining).
>>
>> I haven't done a full translation yet.
>>
>> ## Logistics
>>
>> I wonder how you would like to review the patches?  I have roughly 73 
>> pending commits.  Each commit has a specific reason for change and 
>> corresponding test cases (if applicable).
>>
>> Is it better to just send one GitHub Pull Request containing all of them?
>>
>> Or, do you prefer one commit per Pull Request?
>>
>> Thank you.
>>
>> Regards,
>> Logan
>


[pypy-dev] Re: Moving to github

2023-12-30 Thread Maciej Fijalkowski
as a non-participating contributor, a non-voting +1 from me

On Fri, 29 Dec 2023 at 23:31, Simon Cross  wrote:
>
> Hi Matti,
>
> On Thu, Dec 28, 2023 at 9:22 AM Matti Picus  wrote:
> > Now that 7.3.14 has been released, I would like to move the canonical
> > repo for pypy and rpython to github. Reasons:
>
> +1 from me too and many thanks for taking on the work.


Re: [pypy-dev] Does PyPy memory use benefit from preforking?

2021-09-24 Thread Maciej Fijalkowski
Hi Tin

As far as I remember, PyPy has GC headers with various bits that get set
and cleared when walking the heap for garbage collection, so while it
does not use reference counting, it would not really benefit from
pre-forking either. There are some ideas about how to make that work, but
none of them have been implemented.

Best,
Maciej Fijalkowski

On Sat, 18 Sept 2021 at 21:58, Tin Tvrtković  wrote:
>
> Hello!
>
> A little bit of context: roughly speaking, preforking is a technique where a 
> (supervisor) process is started, the process performs some initialization and 
> then forks off into child worker processes, which it then supervises. It's 
> usually used to make several worker processes share a server TCP socket 
> (which they inherit from the supervisor).
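The pattern described here can be sketched in a few lines. This is a generic illustration of preforking, not anything PyPy-specific; the workers serve a fixed number of requests so the example terminates.

```python
# Minimal preforking sketch: the supervisor opens the listening socket,
# forks workers that inherit it, and reaps them when they finish.
import os
import socket

def worker(sock, max_requests):
    for _ in range(max_requests):
        conn, _ = sock.accept()
        conn.sendall(b"hello from pid %d\n" % os.getpid())
        conn.close()

def prefork_server(port, num_workers=2, max_requests=1):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("127.0.0.1", port))
    sock.listen(16)
    pids = []
    for _ in range(num_workers):
        pid = os.fork()
        if pid == 0:                 # child: inherits the listening socket
            worker(sock, max_requests)
            os._exit(0)
        pids.append(pid)
    sock.close()                     # supervisor no longer needs it
    for pid in pids:                 # supervise (here: just reap)
        os.waitpid(pid, 0)
```

The memory question in this thread is about what happens after fork(): the children start with copy-on-write views of the supervisor's pages, and the saving depends on how soon the runtime dirties those pages.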
>
> In some runtimes preforking can also be used to save memory since the child 
> processes get copy-on-write access to the supervisor memory pages. My 
> understanding is this doesn't actually yield anything on CPython since 
> essentially everything is reference counted and memory pages get copied 
> quickly.
>
> PyPy doesn't use reference counting though, so I was wondering if preforking 
> could be used with PyPy for memory saving purposes. All of this is a little 
> low-level for me, and I would appreciate any insight from the resident 
> experts :)


[pypy-dev] rpython.org

2020-03-10 Thread Maciej Fijalkowski
Hey everyone, rpython.org expired because I didn't pay on time.

Sorry about that; I will try to be more on top of it. It's fixed now.

Best,
Maciej


[pypy-dev] Pre-release of arm64

2019-07-15 Thread Maciej Fijalkowski
Hey everyone

Apparently it's enough that we put a new image on the download.html page
for docker to officially have an arm64 image of pypy, since that's where
they download it from.

Is it ok if I put there a pre-release for arm64?

Best,
Maciej Fijalkowski


[pypy-dev] More feedback on new pypy.org

2019-02-19 Thread Maciej Fijalkowski
Hi everyone!

Here is the next iteration of pypy.org -
https://baroquesoftware.com/pypy-website/web/ - after incorporating some
feedback. Note that no work has been done on the content just yet.

Feel free to provide more feedback. We also need to decide what to do
with the logo.

Best,
Maciej Fijalkowski


Re: [pypy-dev] Why is pypy slower?

2019-02-13 Thread Maciej Fijalkowski
On Wed, Feb 13, 2019 at 3:57 PM Joseph Reagle  wrote:
>
>
> On 2/13/19 9:38 AM, Maciej Fijalkowski wrote:
> > My first intuition would be to run it for a bit longer (can you run
> > it in a loop couple times and see if it speeds up?) 2s might not be
> > enough for JIT to kick in on something as complicated
>
> It's a single-use utility where each run processes about 100 XML files,
> doing things like regex matching and string munging thousands of times.
>
> Is it possible for pypy to remember optimizations across instantiations?

It is not possible.

Here is the explanation:
http://doc.pypy.org/en/latest/faq.html#couldn-t-the-jit-dump-and-reload-already-compiled-machine-code

Best,
Maciej Fijalkowski


Re: [pypy-dev] Why is pypy slower?

2019-02-13 Thread Maciej Fijalkowski
Hi Joseph

My first intuition would be to run it for a bit longer (can you run it
in a loop a couple of times and see if it speeds up?). 2s might not be
enough for the JIT to kick in on something this complicated.
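The in-process warm-up effect being suggested can be checked with a loop like the following; the workload is a made-up stand-in for fe.py's regex munging, not the real program:

```python
# Time the same workload several times inside one process: on PyPy the
# later iterations are typically much faster once the JIT has warmed up.
import re
import time

def work():
    # stand-in workload: regex matching over many generated strings
    pat = re.compile(r"(\w+)-(\d+)")
    return sum(1 for i in range(20000) if pat.match("item-%d" % i))

for i in range(5):
    t0 = time.time()
    n = work()
    print("run %d: matched %d in %.3fs" % (i, n, time.time() - t0))
```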

On Wed, Feb 13, 2019 at 3:11 PM Joseph Reagle  wrote:
>
> Hello all, thank you for your work on pypy.
>
> I'm a pypy newbie and thought to try it on a program I use a lot and where I 
> appreciate fast response times (especially when running on a webhost). I keep 
> my bibliography notes in interconnected XML-based mindmaps (Freeplane). 
> `fe.py` parses and walks those XML files and generates output (bibtex, YAML, 
> wikipedia, or HTML that highlights a queried search) [1].
>
> [1]: https://github.com/reagle/thunderdell/blob/master/fe.py
>
> Running it with pypy is slower:
>
> ```
> > time python3 fe.py -cq Giddens
> python3 fe.py -cq Giddens  1.46s user 0.16s system 97% cpu 1.649 total
> > time pypy3 fe.py -cq Giddens
> pypy3 fe.py -cq Giddens  2.81s user 0.26s system 93% cpu 3.292 total
> ```
>
> I tried to use the pypy profiler but it would seemingly lock up (and
> vmprof.com seems to be down, to boot). I've attached a cProfile output. As
> you might expect, it spends a lot of time parsing XML, doing the regex
> search on nodes, and parsing citation strings.
>
> Any idea why pypy is slower?
>
>


Re: [pypy-dev] Fwd: Re: Users of PyPy on ARM 32bit

2019-02-12 Thread Maciej Fijalkowski
On Mon, Feb 11, 2019 at 6:18 PM Matti Picus  wrote:
>
>
> On 8/2/19 7:44 pm, Carl Friedrich Bolz-Tereick wrote:
> >
> >
> > 
> > *From:* Gelin Yan 
> > *Sent:* February 8, 2019 3:26:30 PM GMT+01:00
> > *To:* Carl Friedrich Bolz 
> > *Subject:* Re: [pypy-dev] Users of PyPy on ARM 32bit
> >
> > Hi Carl
> >
> >
> >  We are using pypy with raspberry pi 3 for smart irrigation. Pypy
> > works fine to us.
> >
> > Regards
> >
> > gelin yan
> >
>
> Hi gelin yan. The problem we face is that it takes too long to translate
> and test ARM32. We need someone to provide us with a powerful ARM
> machine with at least 4GB of RAM so we can build and test ARM32. Without
> such support, we cannot guarantee continued support for ARM32. Could you
> help us?
>
>
> Matti

Hey.

We can build ARM32 binaries on ARM64 servers (we have one, and we can get
access to more) using a chroot. It's some effort, but not an unreasonable
amount. It would be cool if someone sponsored this work, though.


Re: [pypy-dev] let's clean up open branches

2019-02-11 Thread Maciej Fijalkowski
> Carl Friedrich Bolz  2016-02-23 18:32 +0100 statistics-maps
> Carl Friedrich Bolz  2016-02-27 17:01 +0100
> speed-up-stringsearch
> Carl Friedrich Bolz  2016-03-05 13:32 +0100 global-cell-cache
> Carl Friedrich Bolz  2016-04-29 12:08 +0300
> share-mapdict-methods
> Carl Friedrich Bolz  2016-06-06 17:17 +0200
> applevel-unroll-safe
> Carl Friedrich Bolz  2016-06-13 16:44 +0200
> hypothesis-apptest
> Carl Friedrich Bolz  2016-11-20 22:59 +0100 better-storesink
> Carl Friedrich Bolz  2017-03-29 20:11 +0200 fix-2198
> Carl Friedrich Bolz  2017-08-07 00:09 +0200
> bridgeopt-improvements
> Carl Friedrich Bolz-Tereick  2017-12-03 22:24 +0100
> intbound-improvements
> Carl Friedrich Bolz-Tereick  2018-03-12 14:47 +0100
> parser-tuning
> Catalin Gabriel Manciu  2016-04-14
> 15:52 +0300 detect_cpu_count
> Corbin Simpson  2014-07-03 11:35 -0700
> promote-unicode
> Daniel Roberts  2011-04-10 15:39 -0700 fold_intadd
> Dario Bertini  2011-07-01 12:14 +0200
> int32on64-experiment
> David Malcolm  2015-01-12 12:43 -0500 libgccjit-backend
> Devin Jeanpierre  2016-06-13 15:35 -0700
> gc-forkfriendly
> Edd Barrett  2016-08-15 14:52 +0100 w-xor-x
> Edd Barrett  2016-09-02 16:14 +0100
> asmmemmgr-for-code-only
> fijal 2016-03-29 14:15 +0200 faster-traceback
> fijal 2017-05-11 18:20 +0200 str-measure
> fijal 2017-10-25 09:24 +0200 ssl-context-share
> fijal 2017-11-02 11:37 +0100 canraise-assertionerror
> Hakan Ardo  2011-02-02 06:59 +0100 jit-fromstart
> Hakan Ardo  2011-02-17 14:18 +0100 guard-improvements
> Hakan Ardo  2011-03-27 18:01 +0200 jit-usable_retrace
> Hakan Ardo  2011-08-20 09:57 +0200 jit-limit_peeling
> Hakan Ardo  2012-01-05 10:41 +0100 jit-usable_retrace_2
> Hakan Ardo  2013-02-04 22:39 +0100 jit-usable_retrace_3
> Ilya Osadchiy  2011-10-19 22:51 +0200
> numpy-comparison
> Joannah Nanjekye  2017-02-25 13:30 +0300
> pread/pwrite
> JohnDoe 2016-04-07 12:10 +0300 get-heap-stats
> Justin Peel  2011-09-06 09:38 -0600 gc-trace-faster
> Lars Wassermann  2013-03-08 13:45 +0100
> type-specialized-instances
> Laurence Tratt  2013-11-26 15:45 + more_strategies
> Łukasz Langa  2019-02-08 17:39 +0100 py3.6
> Maciej Fijalkowski  2019-02-09 16:18 + arm64
> Manuel Jacob  2015-03-01 12:35 +0100
> spaceops-are-variables
> Manuel Jacob  2017-03-03 01:47 +0100 gcc-lto
> Manuel Jacob  2017-04-10 23:58 +0200
> llvm-translation-backend
> Manuel Jacob  2019-02-06 16:55 +0100 ann-systemexit
> Matti Picus  2016-05-05 18:04 +0300
> numpy_broadcast_nd
> Matti Picus  2016-05-30 22:11 +0300
> release-pypy3.3-v5
> Matti Picus  2016-06-03 15:44 +0300
> cpyext-inheritance
> Matti Picus  2016-07-10 08:30 -0400
> override-tp_as-methods
> Matti Picus  2016-10-02 09:17 +0300
> numpypy_pickle_compat
> Matti Picus  2016-10-17 22:56 +0300 rmod-radd-slots
> Matti Picus  2016-10-26 20:10 +0300 pypy-config
> Matti Picus  2017-06-30 18:26 -0400 cpyext-injection
> Matti Picus  2017-07-10 22:57 +0300 cpyext-add_newdoc
> Matti Picus  2017-08-04 13:39 +0300
> cpyext-debug-type_dealloc
> Matti Picus  2017-09-11 18:06 +0300
> cpyext-tp_dictoffset
> Matti Picus  2017-09-25 20:42 +0300 win32-slow-tests
> Matti Picus  2017-10-03 13:49 +0300
> release-pypy2.7-5.x
> Matti Picus  2017-10-03 13:53 +0300
> release-pypy3.5-5.x
> Matti Picus  2017-11-03 14:19 +0200 matplotlib
> Matti Picus  2017-11-09 19:58 +0200 win32-vmprof
> Matti Picus  2017-12-19 11:26 +0200
> non-linux-vmprof-stacklet-switch-2
> Matti Picus  2017-12-21 20:00 +0200
> release-pypy2.7-v5.9.x
> Matti Picus  2018-01-10 16:58 +0200
> release-pypy3.5-v5.9.x
> Matti Picus  2018-04-23 01:01 +0300 issue2806_tp_init
> Matti Picus  2018-10-12 10:35 +0300
> release-pypy3.5-6.x
> Matti Picus  2018-10-28 19:35 +0200
> unicode-from-unicode-in-c
> Matti Picus  2019-02-05 17:09 +0200 windowsinstaller
> Matti Picus  2019-02-11 11:10 +0200 unicode-utf8-py3
> mattip  2015-06-26 11:39 +0300 pypy3-release-2.6.x
> mattip  2015-10-19 17:04 +0800 ndarray-promote
> Nicolas Truessel  2017-06-16 15:11 +0200 quad-color-gc
> Philip Jenvey  2014-06-22 18:56 -0700
> pypy3-release-2.3.x
> Philip Jenvey  2014-08-07 17:20 -0700 py3k-qualname
> Philip Jenvey  2014-10-20 16:41 -0700
> pypy3-release-2.4.x
> Philip Jenvey  2016-05-24 23:14 -0700 py3k-osxfix
> Remi Meier  2017-06-14 09:51 +0200 stmgc-c8
> Remi Meier  2018-03-24 16:52 +0100 guard-value-limit
> Richard Plangger  2016-03-15 11:31 +0100
> gcstress-hypothesis
> Richard Plangger  2016-10-11 16:56 +0200 py3.3
> Richard Plangger  2016-12-20 16:57 +0100 interp-opt
> Richard Plangger  2017-03-02 18:31 +0100 cpyext-callopt
> Richard Plangger  2017-04-04 18:03 -0400
> vmprof-multiple-eval-funcs
> Richard Plangger  2017-06-07 10:41 -0400 

Re: [pypy-dev] Feedback on pypy.org website revamp

2019-02-08 Thread Maciej Fijalkowski
Feedback from Nathaniel, so he does not have to write it:

Imagining a skeptical viewer, the two questions that jump to mind: (1)
"what, python 2.7.2? what kind of cooked benchmarks are these", (2)
"yeah sure twisted, whatever, they still can't handle the C API or
python 3"

On Fri, Feb 8, 2019 at 11:17 AM Maciej Fijalkowski  wrote:
>
> Hi everyone
>
> We are looking to redesign the main pypy website, how do people feel
> about the new quick look:
>
> https://baroquesoftware.com/pypy-website/web/
>
> Best,
> Maciej Fijalkowski


[pypy-dev] Feedback on pypy.org website revamp

2019-02-08 Thread Maciej Fijalkowski
Hi everyone

We are looking to redesign the main pypy website, how do people feel
about the new quick look:

https://baroquesoftware.com/pypy-website/web/

Best,
Maciej Fijalkowski


Re: [pypy-dev] tensorflow support

2018-09-05 Thread Maciej Fijalkowski
Hi Shuaib

The Windows 64-bit support is not being worked on due to lack of
commercial interest. We never managed to organize even modest funds to
work on that.

The tensorflow support - frankly, I don't know; maybe someone else can speak up?

PS. If you're based in Cape Town, I'm happy to discuss in person :)

Best regards,
Maciej Fijalkowski

On Tue, Sep 4, 2018 at 2:05 PM, Shuaib Osman
 wrote:
> Hi,
>
>
>
> I’ve been using pypy on windows (32-bit) for some time now, and was
> wondering what the status is on the following:
>
>
>
> 1) Windows x64 support
>
> 2) Tensorflow support on linux
>
>
>
> Now that numpy works quite well, if tensorflow manages to compile and run in
> pypy, it would speed up graph construction time considerably (although
> running a graph should take about the same time as in CPython). My particular
> use case is dynamically constructing complex tensorflow graphs,
> differentiating them and then running them once - so pypy could potentially
> offer a huge speedup.
>
>
>
> Thanks.
>
>
>
>
>
>
>
>


[pypy-dev] PyPy sprint in Poland

2018-01-07 Thread Maciej Fijalkowski
Hi Everyone.

It looks like I would be in Europe through April and maybe May. Anyone
fancy a sprint somewhere in Poland? Potential venues:

* we can have a sprint at my climbing spot - it's quite a problem to
get to (~2-3h by train from either Prague or Wroclaw), but it's
incredibly lovely at this time of the year. There is a venue and
internet that we can use. Limited restaurant options are a con.
Endless hiking options are a plus.

* we can try to organize a venue in Warsaw. It's easy to get to, relatively
cheap, and abundant in places to eat out. We don't have a place to sprint at
just yet, but maybe we can organize something at the Uni.

* Krakow. Might be easier to organize venue. Slightly harder to get to
than Warsaw, quite a bit nicer.

* Wroclaw. We had a tiny sprint there with Armin when we merged the
JIT to pypy :-) A bit harder to get to than Warsaw, a very nice city,
we don't have a venue organized yet (but it's possible I think),
easier to do a few days excursion from there.

Thoughts?
fijal


Re: [pypy-dev] Pypy's facelift and certificate issues

2018-01-01 Thread Maciej Fijalkowski
Hey Daniel

Great, I'm usually at least twice a year in Prague :-)

Let's catch up in February.

Cheers,
fijal

On Sat, Dec 30, 2017 at 2:42 PM, Kotrfa <kot...@gmail.com> wrote:
> Hi,
>
> I am based in Prague (UTC+2), but during January, I am travelling and hence
> I am not very available. It should get better during February, so could we
> schedule a Hangout/Skype call or something then?
>
> Also, depending on the volume of the work on the site, I may not be able to
> work on it before July. But I have no problem if someone else wants to start
> on it themselves and I can join :-) ... But I believe I could find some time
> before then for a simple redesign.
>
> Best,
> Daniel
>
> so 30. 12. 2017 v 11:49 odesílatel Maciej Fijalkowski <fij...@gmail.com>
> napsal:
>>
>> Hi Kotrfa.
>>
>> Great, we would definitely appreciate some help. I have very sketchy
>> internet till next week, maybe we can coordinate something next week
>> on IRC? What timezone are you in?
>>
>> Best regards,
>> Maciej Fijalkowski
>>
>> On Sat, Dec 30, 2017 at 9:41 AM, Kotrfa <kot...@gmail.com> wrote:
>> > Hey,
>> >
>> > as I said, I am not a professional, but I would be willing to help with
>> > the site. I have built a couple of websites in Django/Wagtail (or plain
>> > HTML), and have some basic knowledge about design (+ frontend
>> > CSS/JS/HTML). The site seems quite simple, so if we just did a revamp, it
>> > shouldn't take long.
>> > Also, if we went with Wagtail, adding blog capabilities would be also
>> > trivial.
>> >
>> > Best,
>> > Daniel
>> >
>> > so 30. 12. 2017 v 10:02 odesílatel Maciej Fijalkowski <fij...@gmail.com>
>> > napsal:
>> >>
>> >> Hi Kotrfa
>> >>
>> >> I've even paid for a design of a new pypy logo that we can use on the
>> >> new website :-) Maybe it's a good time for me to spend some effort
>> >> upgrading it. Thanks for the reminder; it's something that's on our
>> >> minds, but it's not like we can just hire someone to do it for us.
>> >>
>> >> Cheers,
>> >> fijal
>> >>
>> >> On Wed, Dec 27, 2017 at 9:35 PM, Kotrfa <kot...@gmail.com> wrote:
>> >> > Hi,
>> >> >
>> >> > thank you for all your hard work, I really appreciate it.
>> >> >
>> >> > Firstly, it seems that the website SSL certificate expired or
>> >> > something. At least, my latest Chromium reports "Not secure".
>> >> >
>> >> > Secondly, I totally understand that there are more pressing issues
>> >> > than
>> >> > cool
>> >> > website, but the current design and functionality is IMO really
>> >> > putting
>> >> > people off. So I am going to throw out some things which seem weird
>> >> > to ME (and I understand it's subjective - please take it as a friendly
>> >> > feedback).
>> >> >
>> >> > The jQuery version suggests November 2012, and the design idea seems
>> >> > even older... There is old information such as "We are soon releasing a
>> >> > beta supporting Python 3.3." or the fact that the donation bars are still
>> >> > present even though they are closed (and hence could be moved into
>> >> > some
>> >> > blog
>> >> > post). Blog is also terrible, this is how it renders on my laptop
>> >> > (the
>> >> > same
>> >> > with the external display): https://imgur.com/a/qkjEa . The logo
>> >> > could
>> >> > be
>> >> > surely polished as well (I like the idea, but it seems a bit
>> >> > childish).
>> >> > Even
>> >> > though I am not professional front end developer, I am quite
>> >> > confident
>> >> > this
>> >> > feedback is not far from what a professional would say. It's also not
>> >> > about
>> >> > having "cool" website for itself, but that a nice projects attracts
>> >> > good
>> >> > people. I myself am questioning how good, from technical point of
>> >> > view,
>> >> > Pypy's project can be if the site looks like like it does and the
>> >> > most
>> >> > convenient way how to reach the devs is through mailing list...
>> >> >
>> >> > Best,
>> >> > Daniel
>> >> >


Re: [pypy-dev] Pypy's facelift and certificate issues

2017-12-30 Thread Maciej Fijalkowski
Hi Kotrfa.

Great, we would definitely appreciate some help. I have very sketchy
internet till next week, maybe we can coordinate something next week
on IRC? What timezone are you in?

Best regards,
Maciej Fijalkowski

On Sat, Dec 30, 2017 at 9:41 AM, Kotrfa <kot...@gmail.com> wrote:
> Hey,
>
> as I said, I am not a professional, but I would be willing to help with the
> site. I have built a couple of websites in Django/Wagtail (or plain HTML)
> and have some basic knowledge about design (+ frontend CSS/JS/HTML). The
> site seems quite simple, so if we just did a revamp, it shouldn't take
> long. Also, if we went with Wagtail, adding blog capabilities would also be
> trivial.
>
> Best,
> Daniel
>
> so 30. 12. 2017 v 10:02 odesílatel Maciej Fijalkowski <fij...@gmail.com>
> napsal:
>>
>> Hi Kotrfa
>>
>> I've even paid for a design of a new pypy logo that we can use on the
>> new website :-) Maybe it's a good time for me to spend some effort
>> upgrading it. Thanks for the reminder; it's something that's on our
>> minds, but it's not like we can just hire someone to do it for us.
>>
>> Cheers,
>> fijal
>>
>> On Wed, Dec 27, 2017 at 9:35 PM, Kotrfa <kot...@gmail.com> wrote:
>> > Hi,
>> >
>> > thank you for all your hard work, I really appreciate it.
>> >
>> > Firstly, it seems that the website SSL certificate expired or something;
>> > at least my latest Chromium reports "Not secure".
>> >
>> > Secondly, I totally understand that there are more pressing issues than a
>> > cool website, but the current design and functionality is IMO really
>> > putting people off. So I am going to throw out some things that seem
>> > weird to ME (and I understand it's subjective - please take it as
>> > friendly feedback).
>> >
>> > The jQuery version suggests November 2012, and the design idea seems
>> > even older... There is outdated information, such as "We are soon
>> > releasing a beta supporting Python 3.3.", and the donation bars are
>> > still present even though the campaigns are closed (and hence could be
>> > moved into some blog post). The blog is also terrible; this is how it
>> > renders on my laptop (the same with the external display):
>> > https://imgur.com/a/qkjEa . The logo could surely be polished as well
>> > (I like the idea, but it seems a bit childish). Even though I am not a
>> > professional front-end developer, I am quite confident this feedback
>> > is not far from what a professional would say. It's also not about
>> > having a "cool" website for its own sake, but that a nice project
>> > attracts good people. I myself question how good, from a technical
>> > point of view, PyPy's project can be if the site looks like it does
>> > and the most convenient way to reach the devs is through a mailing
>> > list...
>> >
>> > Best,
>> > Daniel
>> >
>> > ___
>> > pypy-dev mailing list
>> > pypy-dev@python.org
>> > https://mail.python.org/mailman/listinfo/pypy-dev
>> >
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


Re: [pypy-dev] Pypy's facelift and certificate issues

2017-12-30 Thread Maciej Fijalkowski
Hi Kotrfa

I've even paid for a design of a new pypy logo that we can use on the
new website :-) Maybe it's a good time for me to spend some effort
upgrading it. Thanks for the reminder; it's something that's on our
minds, but it's not like we can just hire someone to do it for us.

Cheers,
fijal

On Wed, Dec 27, 2017 at 9:35 PM, Kotrfa  wrote:
> Hi,
>
> thank you for all your hard work, I really appreciate it.
>
> Firstly, it seems that the website SSL certificate expired or something; at
> least my latest Chromium reports "Not secure".
>
> Secondly, I totally understand that there are more pressing issues than a
> cool website, but the current design and functionality is IMO really putting
> people off. So I am going to throw out some things that seem weird to ME
> (and I understand it's subjective - please take it as friendly feedback).
>
> The jQuery version suggests November 2012, and the design idea seems even
> older... There is outdated information, such as "We are soon releasing a
> beta supporting Python 3.3.", and the donation bars are still present even
> though the campaigns are closed (and hence could be moved into some blog
> post). The blog is also terrible; this is how it renders on my laptop (the
> same with the external display): https://imgur.com/a/qkjEa . The logo could
> surely be polished as well (I like the idea, but it seems a bit childish).
> Even though I am not a professional front-end developer, I am quite
> confident this feedback is not far from what a professional would say. It's
> also not about having a "cool" website for its own sake, but that a nice
> project attracts good people. I myself question how good, from a technical
> point of view, PyPy's project can be if the site looks like it does and the
> most convenient way to reach the devs is through a mailing list...
>
> Best,
> Daniel
>
> ___
> pypy-dev mailing list
> pypy-dev@python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
>
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


[pypy-dev] Dual release v5.10.0 of PyPy2 and PyPy3 is out

2017-12-25 Thread Maciej Fijalkowski
The PyPy team is proud to release both PyPy2.7 v5.10 (an interpreter supporting
Python 2.7 syntax), and a final PyPy3.5 v5.10 (an interpreter for Python
3.5 syntax). The two releases are both based on much the same codebase, thus
the dual release.


This is an incremental release with very few new features, the main one
being the final PyPy3.5 release, which works on Linux and OS X with beta
Windows support. It also includes fixes for `vmprof`_ cooperation with
greenlets.

Compared to 5.9, the 5.10 release contains mostly bugfixes and small
improvements.
We have in the pipeline big new features coming for PyPy 6.0 that did not make
the release cut and should be available within the next couple months.

As always, this release is 100% compatible with the previous one and fixed
several issues and bugs raised by the growing community of PyPy users.
As always, we strongly recommend updating.

There are quite a few important changes in the pipeline that did not make it
into the 5.10 release. Most important are speed improvements to cpyext (which
will make numpy and pandas a bit faster) and the utf8 branch, which changes
the internal representation of unicode to utf8 and should help especially the
Python 3.5 version of PyPy.

This release concludes the Mozilla Open Source `grant`_ for having a compatible
PyPy 3.5 release and we're very grateful for that.  Of course, we will continue
to improve PyPy 3.5 and probably move to 3.6 during the course of 2018.

You can download the v5.10 releases here:

http://pypy.org/download.html

We would like to thank our donors for the continued support of the PyPy project.


We would also like to thank our contributors and
encourage new people to join the project. PyPy has many
layers and we need help with all of them: `PyPy`_ and `RPython`_ documentation
improvements, tweaking popular `modules`_ to run on pypy, or general `help`_
with making RPython's JIT even better.

.. _vmprof: http://vmprof.readthedocs.io
.. _grant: 
https://morepypy.blogspot.com/2016/08/pypy-gets-funding-from-mozilla-for.html
.. _`PyPy`: index.html
.. _`RPython`: https://rpython.readthedocs.org
.. _`modules`: project-ideas.html#make-more-python-modules-pypy-friendly
.. _`help`: project-ideas.html


What is PyPy?
=============


PyPy is a very compliant Python interpreter, almost a drop-in replacement for
CPython 2.7 and CPython 3.5. It's fast (see the `PyPy and CPython 2.7.x`_
performance comparison) due to its integrated tracing JIT compiler.

We also welcome developers of other `dynamic languages`_ to see what
RPython can do for them.

The PyPy release supports:

  * **x86** machines on most common operating systems
(Linux 32/64 bits, Mac OS X 64 bits, Windows 32 bits, OpenBSD, FreeBSD)

  * newer **ARM** hardware (ARMv6 or ARMv7, with VFPv3) running Linux

  * big- and little-endian variants of **PPC64** running Linux,

  * **s390x** running Linux

.. _`PyPy and CPython 2.7.x`: http://speed.pypy.org
.. _`dynamic languages`: http://rpython.readthedocs.io/en/latest/examples.html

Changelog
=========


* improve ssl handling on windows for pypy3 (makes pip work)
* improve unicode handling in various error reporters
* fix vmprof cooperation with greenlets
* fix some things in cpyext
* test and document the cmp(nan, nan) == 0 behaviour
* don't crash when calling sleep with inf or nan
* fix bugs in _io module
* inspect.isbuiltin() now returns True for functions implemented in C
* allow the sequences future-import, docstring, future-import for
CPython bug compatibility
* Issue #2699: non-ascii messages in warnings
* posix.lockf
* fixes for FreeBSD platform
* add .debug files, so builds contain debugging info, instead of being stripped
* improvements to cppyy
* issue #2677 copy pure c PyBuffer_{From,To}Contiguous from cpython
* issue #2682, split firstword on any whitespace in sqlite3
* ctypes: allow ptr[0] = foo when ptr is a pointer to struct
* matplotlib will work with tkagg backend once `matplotlib pr #9356`_ is merged
* improvements to utf32 surrogate handling
* cffi version bump to 1.11.2

.. _`matplotlib pr #9356`: https://github.com/matplotlib/matplotlib/pull/9356
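One changelog item above, "ctypes: allow ptr[0] = foo when ptr is a pointer to struct", can be illustrated with a short snippet. The ``Point`` struct is invented for the example; the same code also runs on CPython, where item assignment through a struct pointer has always worked:

```python
import ctypes

# Invented example struct; any ctypes.Structure would do.
class Point(ctypes.Structure):
    _fields_ = [("x", ctypes.c_int), ("y", ctypes.c_int)]

storage = (Point * 1)()                           # backing memory for one Point
ptr = ctypes.cast(storage, ctypes.POINTER(Point))
ptr[0] = Point(3, 4)                              # the assignment the fix allows
assert storage[0].x == 3 and storage[0].y == 4    # value landed in the backing memory
```

Before the fix, this assignment was not allowed on PyPy.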
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


Re: [pypy-dev] pypy 5.10 release

2017-12-25 Thread Maciej Fijalkowski
Hi Matti

The build on downloads for OS X works only on High Sierra (10.13),
since it expects utimensat in libc, which is only available from High
Sierra onward. I will build a Sierra version on my computer that
(hopefully) works on older OS X too.

Otherwise looks good to go, I will update the website and post on the
blog once I'm done building

Cheers,
fijal


On Sun, Dec 24, 2017 at 4:44 AM, Matti Picus  wrote:
> I have uploaded the pypy 5.10 release tarballs for both pypy2.7 and pypy3.5
> to https://bitbucket.org/pypy/pypy/downloads, sha256 hashes are available
> from
> https://bitbucket.org/pypy/pypy.org/src/41427b24c7395f4a5a6636ef0d072c83982d8945/source/download.txt?at=extradoc=file-view-default
>
> Please download and test before we release to make sure the tarballs are
> valid
> Matti
> ___
> pypy-dev mailing list
> pypy-dev@python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


Re: [pypy-dev] [pypy-commit] pypy default: "eh". On pypy we need to be careful in which order we have pendingblocks.

2017-11-20 Thread Maciej Fijalkowski
ah indeed, that's a much better fix :-)

The original was done a bit haphazardly :-)

On Sun, Nov 19, 2017 at 11:07 PM, Armin Rigo  wrote:
> Hi,
>
> On 9 November 2017 at 10:07, Antonio Cuni  wrote:
>> I suppose that the explanation that you put in the commit message should
>> also go in a comment inside the source code, else when someone sees it it's
>> just obscure.
>
> Done in d00a16ef468f.  I reverted the addition of ShuffleDict, and
> instead made a two-lines tweak to the logic at the place where
> ordering matters---with a long comment :-)
>
>
> A bientôt,
>
> Armin.
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


[pypy-dev] Let's fix buildbots

2017-11-14 Thread Maciej Fijalkowski
Hi everyone

I recently looked here: http://buildbot.pypy.org/summary

and it looks quite grim. Can we stop for a second and fix most of
those so we have some level of greenness around?

Cheers,
fijal
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


Re: [pypy-dev] RPython proposed deprecation: translator.platform .distutils_platform

2017-11-09 Thread Maciej Fijalkowski
+1

On Mon, Nov 6, 2017 at 3:45 PM, Matti Picus  wrote:
> In addition to the arm, bsd, cygwin, darwin, freebsd, linux, maemo (really?)
> netbsd, openbsd, posix, and windows rpython/translator/platforms, we have
> one called "distutils_platform" that is allegedly supposed to allow one to
> specify the target platform and use distutils facilities to compile an
> RPython program for that target. I would like to deprecate (remove) it. The
> trigger for my proposal is the need to import setuptools to enable distutils
> to find the MSVC compiler on a system that uses the "Visual for Python"
> compilers, which I came across when setting up a new win32 build slave.
> Rather than try and fix the problem I would prefer to remove the platform.
>
> Any objections?
> Matti
>
> ___
> pypy-dev mailing list
> pypy-dev@python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
>
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


Re: [pypy-dev] Leysin Winter sprint

2017-11-01 Thread Maciej Fijalkowski
Hi Anto

With climate change, >15/03 is not a very good winter sprint, is it?

On Sun, Oct 29, 2017 at 11:47 PM, Antonio Cuni  wrote:
> Hi Armin,
>
> I would probably be unable to come during the first two weeks of march, so
> for me the preference is basically "anything >= 15/03"
>
> On Sun, Oct 29, 2017 at 11:11 PM, Armin Rigo  wrote:
>>
>> Hi all,
>>
>> I'm trying to organise the next winter sprint a bit in advance.  It
>> might tentatively be occurring in March 2018.  If people have
>> preferences for the exact week, please do tell :-)
>>
>>
>> A bientôt,
>>
>> Armin.
>> ___
>> pypy-dev mailing list
>> pypy-dev@python.org
>> https://mail.python.org/mailman/listinfo/pypy-dev
>
>
>
> ___
> pypy-dev mailing list
> pypy-dev@python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
>
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


Re: [pypy-dev] fiber support

2017-10-17 Thread Maciej Fijalkowski
Heh, sure

Did you look whether such functionality can be replicated with
PyThreadState_New and PyThreadState_Swap?

Cheers,
fijal

On Tue, Oct 17, 2017 at 7:57 PM, David Callahan <dcalla...@fb.com> wrote:
> Hi Maciej, thanks for the reply
>
> I have been warned before about the performance of cpyext but we have 
> hundreds of source dependencies and removing such reliance does not help us 
> in the short term.  Thus, performance is not a concern yet, since we are not
> even at a point where we can run the code and know what is expensive and
> what is irrelevant.
>
> At this point, it is not clear to me how to mimic this code using the cpyext
> layer at all, or if it is even necessary, if PyPy has a different
> architecture for exception handling.
>
> --david
>
> On 10/17/17, 12:10 PM, "Maciej Fijalkowski" <fij...@gmail.com> wrote:
>
> Hi David.
>
> You're probably completely aware that such calls (using cpyext) would
> completely kill performance, right? I didn't look in detail, but in
> order to get performance, you would need to write such calls using
> cffi.
>
> Best regards,
> Maciej Fijalkowski
>
> On Mon, Oct 16, 2017 at 7:18 PM, David Callahan <dcalla...@fb.com> wrote:
> > folly:fibers  
> (https://github.com/facebook/folly/tree/master/folly/fibers )
> > is a C++ package for lightweight, cooperatively scheduled threads.  We 
> have
> > an application which extends this to CPython by adding the following
> > save/restore code around task function invocation:
> >
> >
> >
> >   auto tstate = PyThreadState_Get();
> >
> >   CHECK_NOTNULL(tstate);
> >
> >   auto savedFrame = tstate->frame;
> >
> >   auto savedExcType = tstate->exc_type;
> >
> >   auto savedExcValue = tstate->exc_value;
> >
> >   auto savedExcTraceback = tstate->exc_traceback;
> >
> >   func();
> >
> >   tstate->frame = savedFrame;
> >
> >   tstate->exc_type = savedExcType;
> >
> >   tstate->exc_value = savedExcValue;
> >
> >   tstate->exc_traceback = savedExcTraceback;
> >
> >
> >
> > (here func is a boost::python::object)
> >
> >
> >
> > This does not work in PyPy 5.9 immediately because the thread state 
> object
> > does not expose these fields nor are there accessor methods.
> >
> >
> >
> > Is there a way to get similar effect in PyPy?
> >
> >
> >
> > Thanks
> >
> > david
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > ___
> > pypy-dev mailing list
> > pypy-dev@python.org
> > 
> https://urldefense.proofpoint.com/v2/url?u=https-3A__mail.python.org_mailman_listinfo_pypy-2Ddev=DwIBaQ=5VD0RTtNlTh3ycd41b3MUw=lFyiPUrFdOHdaobP7i4hoA=J_q51G31j5IKPKCE_2ZALuqBrWgdBs58pczmLTt_ml8=ZPehk3U44BJaOoWPII5iCNEv91mTCk_VoMR4Xq9Oh9k=
> >
>
>
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


Re: [pypy-dev] fiber support

2017-10-17 Thread Maciej Fijalkowski
Hi David.

You're probably completely aware that such calls (using cpyext) would
completely kill performance, right? I didn't look in detail, but in
order to get performance, you would need to write such calls using
cffi.

Best regards,
Maciej Fijalkowski

On Mon, Oct 16, 2017 at 7:18 PM, David Callahan <dcalla...@fb.com> wrote:
> folly:fibers  (https://github.com/facebook/folly/tree/master/folly/fibers )
> is a C++ package for lightweight, cooperatively scheduled threads.  We have
> an application which extends this to CPython by adding the following
> save/restore code around task function invocation:
>
>
>
>   auto tstate = PyThreadState_Get();
>
>   CHECK_NOTNULL(tstate);
>
>   auto savedFrame = tstate->frame;
>
>   auto savedExcType = tstate->exc_type;
>
>   auto savedExcValue = tstate->exc_value;
>
>   auto savedExcTraceback = tstate->exc_traceback;
>
>   func();
>
>   tstate->frame = savedFrame;
>
>   tstate->exc_type = savedExcType;
>
>   tstate->exc_value = savedExcValue;
>
>   tstate->exc_traceback = savedExcTraceback;
>
>
>
> (here func is a boost::python::object)
>
>
>
> This does not work in PyPy 5.9 immediately because the thread state object
> does not expose these fields nor are there accessor methods.
>
>
>
> Is there a way to get similar effect in PyPy?
>
>
>
> Thanks
>
> david
>
>
>
>
>
>
>
>
>
>
> ___
> pypy-dev mailing list
> pypy-dev@python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
>
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


Re: [pypy-dev] dist.py:261: UserWarning: Unknown distribution option: 'cffi_modules'

2017-10-06 Thread Maciej Fijalkowski
Hi David

Maybe pypy 3 does not come with the correct distutils/setuptools gimmick
to convince everyone that cffi is installed? The way you described should
generally work with PyPy. I'll have a look tomorrow at the sprint.

Cheers,
fijal

On Fri, Oct 6, 2017 at 4:24 PM, David Callahan  wrote:
> Thanks Alex,
>
>
>
> We are using setuptools 34.2.0; we don't use pip. In our environment, each
> package is built from source and installed into a separate directory, and a
> long PYTHONPATH is constructed. This works fine for CPython but apparently
> hits some snag with PyPy.
>
>
>
> From: Alex Gaynor 
> Date: Friday, October 6, 2017 at 5:32 AM
> To: David Callahan 
> Cc: PyPy Dev 
> Subject: Re: [pypy-dev] dist.py:261: UserWarning: Unknown distribution
> option: 'cffi_modules'
>
>
>
> Hi David,
>
>
>
> We test cryptography against PyPy in our CI, and I install it
> semi-regularly, so I'd expect it to work :-)
>
>
>
> Are you able to reproduce this reliably? If yes, can you include full
> instructions?
>
>
>
> If not: what versions of pip and setuptools do you have?
>
>
>
> Alex
>
>
>
> On Thu, Oct 5, 2017 at 8:30 PM, David Callahan  wrote:
>
> Has anyone experience building the package cryptography-1.9 using PyPy and
> setup.py?
>
>
>
> In the build phase I encounter this diagnostic
>
>
>
> …/python/pypy.5.8/…/lib-python/3/distutils/dist.py:261: UserWarning: Unknown
> distribution option: 'cffi_modules'
>
>
>
> but the stage seems to complete without error.  Is this expected or
> suspicious?
>
>
>
> Thanks david
>
>
>
>
>
>
> ___
> pypy-dev mailing list
> pypy-dev@python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
>
>
>
>
>
> --
>
> "I disapprove of what you say, but I will defend to the death your right to
> say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
> "The people's good is the highest law." -- Cicero
>
> GPG Key fingerprint: D1B3 ADC0 E023 8CA6
>
>
>
>
> ___
> pypy-dev mailing list
> pypy-dev@python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
>
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


Re: [pypy-dev] distutils configuration

2017-10-04 Thread Maciej Fijalkowski
Hi David

As far as I remember, there are two implementations of sysconfig
because the CPython one parses a Makefile or something like that, which
we don't even really have, or more precisely didn't really have at the
time.

Overwriting CC and LDFLAGS can definitely be considered a bug; I would
accept a patch that changes it to set them only conditionally.
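A minimal sketch of what "only conditionally" could mean here; the function name and defaults are invented for illustration, not taken from the actual sysconfig_pypy code:

```python
import os

# Hypothetical sketch: honour CC/LDSHARED from the environment when present,
# falling back to hard-coded defaults only otherwise, instead of
# unconditionally overwriting them.
def init_compiler_vars():
    defaults = {"CC": "cc", "LDSHARED": "cc -shared"}
    # os.environ wins; the built-in default is only a fallback
    return {key: os.environ.get(key, default)
            for key, default in defaults.items()}
```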

PS. If you're using PyPy to compile C extensions using distutils (as
opposed to, say, cffi): while it's generally supported, the resulting
extensions are quite slow.

Best regards,
Maciej Fijalkowski

On Wed, Oct 4, 2017 at 2:52 PM, David Callahan <dcalla...@fb.com> wrote:
> I need to adjust the default configuration used by “distutils” for
> compilation.
>
>
>
> In my default build of branch release-pypy3.5-5.x, the toplevel
> “distutils.sysconfig” dispatches as follows:
>
>
>
> if '__pypy__' in sys.builtin_module_names:
>
> from distutils.sysconfig_pypy import *
>
> from distutils.sysconfig_pypy import _config_vars # needed by setuptools
>
>
>
> and sysconfig_pypy.py unconditionally sets some configuration variables such
> as “CC” and “LDSHARED”.
>
>
>
> CPython reads from a configuration file which can be changed with an
> environment variable, and it is this technique we use.  This does not yet
> seem supported in PyPy's alternate implementation
> “distutils.sysconfig_cpython.py”
>
>
>
> Two questions:
>
> Generally, why are there two implementations of sysconfig, and what
> motivates the default choice?
>
> What changes are recommended to allow a configuration file here?
>
>
>
> Thanks
>
> david
>
>
>
>
>
>
> ___
> pypy-dev mailing list
> pypy-dev@python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
>
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


Re: [pypy-dev] Does pypy 5.8 arm version support numpy?

2017-09-07 Thread Maciej Fijalkowski
Wait for pypy 5.9 :-)

On Sun, Aug 20, 2017 at 6:29 AM, Gelin Yan  wrote:
> Hi All
>
> When I tried to use pip to install numpy, I got some compiler failures.
> I want to know whether the pypy arm version fully supports numpy.
>
> OS: raspbian
> Gcc version: 4.9.2
> Pypy version: 5.8
> Hardware: raspberry pi 3
>
>
> Regards
>
> gelin yan
>
> ___
> pypy-dev mailing list
> pypy-dev@python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
>
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


Re: [pypy-dev] PyBaseExceptionObject has no member named ‘args’

2017-08-29 Thread Maciej Fijalkowski
Hi Stuart.

PyCairo should not abuse the API by directly accessing members of
structures; this is not supported on pypy.

Cheers,
fijal
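For context: the build failure below comes from reading `self->base.args` directly in C. The portable route is the attribute protocol (`PyObject_GetAttrString(exc, "args")` in C), which both CPython and PyPy support. Here is a Python-level sketch, with a hypothetical helper name, of what pycairo's `error_init` is after:

```python
# Hypothetical helper mirroring pycairo's error_init: it wants the second
# element of the exception's args tuple as the cairo status code.  Going
# through the .args attribute works on CPython and PyPy alike, whereas
# poking PyBaseExceptionObject fields in C does not.
def status_from_error(exc):
    if len(exc.args) >= 2:
        return exc.args[1]
    return None
```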

On Tue, Aug 15, 2017 at 12:57 AM, Stuart Axon via pypy-dev
 wrote:
> Found this trying to compile pycairo, is it worth opening a bug about? (I
> realise there is CairoCFFI, but they don't have feature parity - OTOH, CFFI
> probably is the way to go eventually).
>
>
> $ python setup.py install
> running install
> running build
> running build_py
> creating build
> creating build/lib.linux-x86_64-2.7
> creating build/lib.linux-x86_64-2.7/cairo
> copying cairo/__init__.py -> build/lib.linux-x86_64-2.7/cairo
> running build_ext
> building 'cairo._cairo' extension
> creating build/temp.linux-x86_64-2.7
> creating build/temp.linux-x86_64-2.7/cairo
> cc -pthread -DNDEBUG -O2 -fPIC -I/usr/include/cairo -I/usr/include/glib-2.0
> -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/pixman-1
> -I/usr/include/freetype2 -I/usr/include/libpng16
> -I/home/stu/.virtualenvs/pypy-nightly/include -c cairo/device.c -o
> build/temp.linux-x86_64-2.7/cairo/device.o -fno-strict-aliasing
> cc -pthread -DNDEBUG -O2 -fPIC -I/usr/include/cairo -I/usr/include/glib-2.0
> -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/pixman-1
> -I/usr/include/freetype2 -I/usr/include/libpng16
> -I/home/stu/.virtualenvs/pypy-nightly/include -c cairo/bufferproxy.c -o
> build/temp.linux-x86_64-2.7/cairo/bufferproxy.o -fno-strict-aliasing
> cc -pthread -DNDEBUG -O2 -fPIC -I/usr/include/cairo -I/usr/include/glib-2.0
> -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/pixman-1
> -I/usr/include/freetype2 -I/usr/include/libpng16
> -I/home/stu/.virtualenvs/pypy-nightly/include -c cairo/error.c -o
> build/temp.linux-x86_64-2.7/cairo/error.o -fno-strict-aliasing
> In file included from
> /home/stu/.virtualenvs/pypy-nightly/include/Python.h:81:0,
>  from cairo/error.c:32:
> cairo/error.c: In function ‘error_init’:
> cairo/error.c:111:35: error: ‘PyBaseExceptionObject {aka struct
> }’ has no member named ‘args’
>  if(PyTuple_GET_SIZE(self->base.args) >= 2) {
>^
> /home/stu/.virtualenvs/pypy-nightly/include/object.h:61:39: note: in
> definition of macro ‘Py_SIZE’
> #define Py_SIZE(ob)  (((PyVarObject*)(ob))->ob_size)
>^~
> cairo/error.c:111:8: note: in expansion of macro ‘PyTuple_GET_SIZE’
>  if(PyTuple_GET_SIZE(self->base.args) >= 2) {
> ^~~~
> In file included from
> /home/stu/.virtualenvs/pypy-nightly/include/Python.h:108:0,
>  from cairo/error.c:32:
> cairo/error.c:112:49: error: ‘PyBaseExceptionObject {aka struct
> }’ has no member named ‘args’
>  status_obj = PyTuple_GET_ITEM(self->base.args, 1);
>  ^
> /home/stu/.virtualenvs/pypy-nightly/include/tupleobject.h:23:53: note: in
> definition of macro ‘PyTuple_GET_ITEM’
> #define PyTuple_GET_ITEM(op, i) (((PyTupleObject *)(op))->ob_item[i])
>  ^~
> In file included from
> /home/stu/.virtualenvs/pypy-nightly/include/Python.h:81:0,
>  from cairo/error.c:32:
> cairo/error.c: In function ‘error_str’:
> cairo/error.c:152:36: error: ‘PyBaseExceptionObject {aka struct
> }’ has no member named ‘args’
>  if (PyTuple_GET_SIZE(self->base.args) >= 1) {
> ^
> /home/stu/.virtualenvs/pypy-nightly/include/object.h:61:39: note: in
> definition of macro ‘Py_SIZE’
> #define Py_SIZE(ob)  (((PyVarObject*)(ob))->ob_size)
>^~
> cairo/error.c:152:9: note: in expansion of macro ‘PyTuple_GET_SIZE’
>  if (PyTuple_GET_SIZE(self->base.args) >= 1) {
>  ^~~~
> In file included from
> /home/stu/.virtualenvs/pypy-nightly/include/Python.h:108:0,
>  from cairo/error.c:32:
> cairo/error.c:153:56: error: ‘PyBaseExceptionObject {aka struct
> }’ has no member named ‘args’
>  return PyObject_Str(PyTuple_GET_ITEM(self->base.args, 0));
> ^
> /home/stu/.virtualenvs/pypy-nightly/include/tupleobject.h:23:53: note: in
> definition of macro ‘PyTuple_GET_ITEM’
> #define PyTuple_GET_ITEM(op, i) (((PyTupleObject *)(op))->ob_item[i])
>  ^~
> error: command 'cc' failed with exit status 1
>
> ___
> pypy-dev mailing list
> pypy-dev@python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
>
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


Re: [pypy-dev] GIL removal

2017-08-15 Thread Maciej Fijalkowski
Hi Brad

Thanks for your mail! We're pleased to hear that pypy is getting used
and is giving benefits :-)

The interest will be measured in "can we find someone to fund a
$50k/$100k proposal". We already have $15k of backing plus $20k that we
have left over for STM. Can we find someone willing to fund the rest?
I'm more than happy to draft a contract (we have a commercial entity)
to work on that topic.

Best regards,
Maciej Fijalkowski

On Mon, Aug 14, 2017 at 6:10 PM, Brad Kish <brad.k...@arcticwolf.com> wrote:
> The blog post on the GIL removal work seems to indicate that the work is 
> gated on “interest of the community and the commercial partners”.
>
> How is this interest going to be measured?
>
> We use pypy in production at Arctic Wolf Networks, and have several cases 
> where this change could benefit our use cases.
>
> We would be interested in helping support this project.
>
> Brad.
> ___
> pypy-dev mailing list
> pypy-dev@python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


[pypy-dev] Link to paper

2017-07-21 Thread Maciej Fijalkowski
Hi Wim

There was a complaint on reddit, quoting: "The link to the pyhpc
paper does not work for me, does anyone have a mirror? Would love to
read it."

can you fix the link please?

Cheers,
fijal
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


Re: [pypy-dev] okay to rename cppyy -> _cppyy

2017-07-18 Thread Maciej Fijalkowski
that's how cffi works FYI. +1 from me
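For reference, the cffi arrangement Maciej means: the pip-installable pure-Python package delegates to a built-in module with an underscore name (`_cffi_backend` on PyPy), which is exactly what renaming cppyy to _cppyy enables. A hedged sketch of that import split (module names illustrative):

```python
import importlib

# The pip-installed front end imports the interpreter's built-in backend by
# its underscore name; when it is missing, the package can fall back or
# raise a friendly error instead of failing at import time.
def load_backend(name):
    try:
        return importlib.import_module(name)
    except ImportError:
        return None
```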

On Wed, Jul 19, 2017 at 12:33 AM,   wrote:
> Hi,
>
> any objections to renaming cppyy into _cppyy?
>
> I want to be able to do a straight 'pip install cppyy' and then use it
> w/o further gymnastics (this works today for CPython), but then I can't
> have 'cppyy' be a built-in module.
>
> (You can pip install PyPy-cppyy-backend, but then you'd still have to deal
> with LD_LIBRARY_PATH and certain cppyy features are PyPy version dependent
> even as they need not be as they are pure Python.)
>
> The pip-installed cppyy will still use the built-in _cppyy for the PyPy
> specific parts (low-level manipulations etc.).
>
> I'm also moving the cppyy documentation out of the pypy documentation and
> placing it on its own (http://cppyy.readthedocs.io/), given that the
> CPython side of things now works, too.
>
> Yes, no, conditional?
>
> Thanks,
>  Wim
> --
> wlavrij...@lbl.gov--+1 (510) 486 6411--www.lavrijsen.net
> ___
> pypy-dev mailing list
> pypy-dev@python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


Re: [pypy-dev] Call rpython from outside

2017-07-16 Thread Maciej Fijalkowski
Hi Armin

We ended up (Aleksandr is here at PyCon Russia) using rffi_platform to
get the exact shape of the structure from the header file. There were
a few bugs in how exactly this got mapped, so it ended up being a good
way to do it.

On Sat, Jul 15, 2017 at 8:02 AM, Armin Rigo  wrote:
> Hi Aleksandr,
>
> On 11 July 2017 at 18:33, Aleksandr Koshkin  wrote:
>> So ok, I have to specify headers containing my structs and somehow push it
>> to rpython toolchain, if I got you correctly.
>> 0. Why? This structures are already described in the vm file as a bunch of
>> ffi.CStruct objects.
>
> rffi.CStruct() is used to declare the RPython interface for structs
> that are originally defined in C.
>
> You can use lltype.Struct(), but it's not recommended in your case
> because lltype.Struct() is meant to define structs in RPython where
> you *don't* need a precise C-level struct; for example,
> lltype.Struct() could rename and reorder the fields in C if it is more
> efficient.
>
> We don't have a direct way to declare the struct in RPython but also
> force it to generate exactly the C struct declaration you want,
> because we never needed it.  You need to use rffi.CStruct() and write
> the struct in the .h file manually too.
>
>> 1. If I have to, how would I do that, is there any example of embedding
>> rpython into something?
>
> Not really.  Look maybe at tests using rpython.rlib.entrypoint, like
> rpython/translator/c/test/test_genc:test_entrypoints.
>
>
> A bientôt,
>
> Armin.
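The underlying issue in this thread: a C-facing struct must reproduce the header's field order, alignment, and padding exactly, while lltype.Struct makes no such promise. A ctypes illustration (the `VMState` struct is invented for the example; the offsets assume a typical 64-bit ABI where the widest member dictates alignment):

```python
import ctypes

# Invented struct: one 8-byte field followed by one 1-byte field.
class VMState(ctypes.Structure):
    _fields_ = [
        ("pc", ctypes.c_uint64),    # offset 0
        ("flags", ctypes.c_uint8),  # offset 8; tail padding usually follows
    ]

# The C ABI places each field at an offset determined by its alignment.
# A declaration that ignores this layout reads "shifted" fields, which is
# exactly the symptom described above.
assert VMState.pc.offset == 0
assert VMState.flags.offset == ctypes.alignment(ctypes.c_uint64)
```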
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


Re: [pypy-dev] Call rpython from outside

2017-07-11 Thread Maciej Fijalkowski
you're missing #includes

On Tue, Jul 11, 2017 at 8:00 PM, Aleksandr Koshkin
<tinysnipp...@gmail.com> wrote:
> Thanks for your reply. I have reworked my code a bit - now it uses CStruct
> instead of Struct.
> https://github.com/magniff/rere/blob/master/rere/vm/vm_main.py#L92
> Now it fails with a rather obscure error https://pastebin.com/MZkni9bU
> Anyway, Maciej, see you at PyConRu 17)
>
> 2017-07-11 18:20 GMT+03:00 Maciej Fijalkowski <fij...@gmail.com>:
>>
>> Sorry, that was wrong: lltype.GcStruct is GC-managed; lltype.Struct should work.
>>
>> However, please use rffi.CStruct (as it's better defined) and
>> especially rffi.CArray, since lltype.Array contains a length field
>>
>> On Tue, Jul 11, 2017 at 6:49 PM, Maciej Fijalkowski <fij...@gmail.com>
>> wrote:
>> > lltype.Struct is a GC-managed struct; you don't want to have this as
>> > part of the API (use CStruct)
>> >
>> > On Mon, Jul 10, 2017 at 6:15 PM, Aleksandr Koshkin
>> > <tinysnipp...@gmail.com> wrote:
>> >> Here is a link to a function that bugs me.
>> >> https://github.com/magniff/rere/blob/master/rere/vm/vm_main.py#L110
>> >> I am using these headers for CFFI:
>> >> https://github.com/magniff/rere/blob/master/rere/build/vm_headers.h
>> >>
>> >> 2017-07-10 17:03 GMT+03:00 Aleksandr Koshkin <tinysnipp...@gmail.com>:
>> >>>
>> >>> Sup, guys.
>> >>> I want my rpython function to be invocable from the outside world,
>> >>> specifically by Python. I have wrapped my function with
>> >>> entrypoint_highlevel and it appeared in the shared object. So far so
>> >>> good. As its first argument this function takes a pointer to a C
>> >>> struct, and there is a problem. I have precisely recreated this
>> >>> struct in RPython as an lltype.Struct (not rffi.CStruct) and
>> >>> annotated my entrypoint signature with this object, but it seems
>> >>> that some fields of the passed struct are messed up (shifted,
>> >>> basically). Could it be because I used Struct instead of CStruct?
>> >>> I am using CFFI as the binding generator.
>> >>>
>> >>> --
>> >>> Kind regards, Aleksandr Koshkin.
>> >>
>> >>
>> >>
>> >>
>> >> --
>> >> Kind regards, Aleksandr Koshkin.
>> >>
>> >>
>
>
>
>
> --
> Kind regards, Aleksandr Koshkin.


Re: [pypy-dev] Call rpython from outside

2017-07-11 Thread Maciej Fijalkowski
Sorry, that was wrong: lltype.GcStruct is GC-managed; lltype.Struct should work.

However, please use rffi.CStruct (as it's better defined) and
especially rffi.CArray, since lltype.Array contains a length field

On Tue, Jul 11, 2017 at 6:49 PM, Maciej Fijalkowski <fij...@gmail.com> wrote:
> lltype.Struct is a GC-managed struct; you don't want to have this as
> part of the API (use CStruct)
>
> On Mon, Jul 10, 2017 at 6:15 PM, Aleksandr Koshkin
> <tinysnipp...@gmail.com> wrote:
>> Here is a link to a function that bugs me.
>> https://github.com/magniff/rere/blob/master/rere/vm/vm_main.py#L110
>> I am using these headers for CFFI:
>> https://github.com/magniff/rere/blob/master/rere/build/vm_headers.h
>>
>> 2017-07-10 17:03 GMT+03:00 Aleksandr Koshkin <tinysnipp...@gmail.com>:
>>>
>>> Sup, guys.
>>> I want my rpython function to be invocable from the outside world,
>>> specifically by Python. I have wrapped my function with entrypoint_highlevel
>>> and it appeared in the shared object. So far so good. As its first argument
>>> this function takes a pointer to a C struct, and there is a problem. I have
>>> precisely recreated this struct in RPython as an lltype.Struct (not
>>> rffi.CStruct) and annotated my entrypoint signature with this object, but
>>> it seems that some fields of the passed struct are messed up (shifted,
>>> basically). Could it be because I used Struct instead of CStruct? I am
>>> using CFFI as the binding generator.
>>>
>>> --
>>> Kind regards, Aleksandr Koshkin.
>>
>>
>>
>>
>> --
>> Kind regards, Aleksandr Koshkin.
>>
>>


Re: [pypy-dev] Call rpython from outside

2017-07-11 Thread Maciej Fijalkowski
lltype.Struct is a GC-managed struct; you don't want to have this as
part of the API (use CStruct)

On Mon, Jul 10, 2017 at 6:15 PM, Aleksandr Koshkin
 wrote:
> Here is a link to a function that bugs me.
> https://github.com/magniff/rere/blob/master/rere/vm/vm_main.py#L110
> I am using these headers for CFFI:
> https://github.com/magniff/rere/blob/master/rere/build/vm_headers.h
>
> 2017-07-10 17:03 GMT+03:00 Aleksandr Koshkin :
>>
>> Sup, guys.
>> I want my rpython function to be invocable from the outside world,
>> specifically by Python. I have wrapped my function with entrypoint_highlevel
>> and it appeared in the shared object. So far so good. As its first argument
>> this function takes a pointer to a C struct, and there is a problem. I have
>> precisely recreated this struct in RPython as an lltype.Struct (not
>> rffi.CStruct) and annotated my entrypoint signature with this object, but
>> it seems that some fields of the passed struct are messed up (shifted,
>> basically). Could it be because I used Struct instead of CStruct? I am
>> using CFFI as the binding generator.
>>
>> --
>> Kind regards, Aleksandr Koshkin.
>
>
>
>
> --
> Kind regards, Aleksandr Koshkin.
>
>


Re: [pypy-dev] Suggestion : Build Using PyPy page to showcase.

2017-06-13 Thread Maciej Fijalkowski
Hey Phyo

Good job, thanks for the feedback!

On Tue, May 30, 2017 at 9:47 AM, Phyo Arkar  wrote:
> We have launched our chat room for medical consultation in our country.
> It reached 5000 users in the first month and is growing fast, with a
> maximum of 400 concurrent users daily.
>
> We need a page to showcase projects running 100% on pypy runtime!
>
> Regards
>
> Phyo.
>
>


Re: [pypy-dev] win32 buildbot

2017-05-12 Thread Maciej Fijalkowski
Hi Yuri.

While I do not have powers to make a decision, I think it's very
reasonable if pypy pays for such parts. If you can provide cost
estimates, I'll try to go through the appropriate channels to have
this approved, if you're interested.

On Wed, May 10, 2017 at 9:53 PM, Yury V. Zaytsev  wrote:
> On Wed, 10 May 2017, Matti Picus wrote:
>
>> It seems own tests are taking a very long time on this machine. I have
>> explored using jom.exe (https://github.com/qt-labs/jom) instead of nmake to
>> allow parallel compilation, which reduces my run of rpython's
>> translator\c\test\test_typed.py from ~700 secs to ~600 secs, where on the
>> same machine it is under 200 secs with linux. Note that the buildbot
>> requires over 2800 secs for the same test (see
>> http://buildbot.pypy.org/builders/own-win-x86-32/builds/1395/steps/shell_6/logs/stdio)
>
>
> It is simply so that the machine kindly provided by the folks at Dartmouth
> College, to which I migrated the Windows VM, is way slower than my old box,
> and has only 2 physical cores in total...
>
> Also note that the same machine does build & test runs for
> git-annex, which take about an hour each, so that doesn't speed up PyPy
> builds either.
>
> Anyways, in the mean time, it looks like I've found colo in Germany for my
> old machine, but I still need to:
>
>   * Get it shipped to the new datacenter
>   * Partly replace old hardware
>   * Install it & set it up anew
>
> No ETAs, but something is moving in some direction at least. Getting
> hardware is for now an unsolved problem though. I need 2 x 2/3 TB SATA
> enterprise HDDs and 2 x enterprise SSDs + would be good to replace
> unreliable DDR3 RAM sticks... Stuff that I have is 5 years old, and HDDs
> tend to fail when they are old :-/
>
>
> --
> Sincerely yours,
> Yury V. Zaytsev


Re: [pypy-dev] pypy real world example, a django project data processing. but slow...

2017-03-31 Thread Maciej Fijalkowski
What I meant is that the ORM is slow *and* it takes forever to warm up.
Your code might not run long enough for the ORM to be warm. It's also
very likely it'll end up slower on pypy. One thing you can do is to
run PYPYLOG=jit-summary:- pypy  and copy-paste the
summary output.

The only way to store the warmed up state is to keep the process alive
(as a daemon) and rerun it further. You can see if it speeds up after
two or three runs in one process and make decisions accordingly.

On Thu, Mar 30, 2017 at 2:09 PM, Vláďa Macek <ma...@sandbox.cz> wrote:
> Hi Maciej (and others?),
>
> I know I must be one of many who wanted a gain without pain. :-) Just gave
> it a try without having an opportunity for some deeper profiling due to my
> project deadlines. I just thought to get in touch in case I missed
> something apparent to you from the combination I reported.
>
> ORM might be slow, but I compare interpreters, not ORMs. Here's my
> program's final stats of processing the input file (nginx access log):
>
> CPython 2.7.6 32bit
> 130.1 secs, 177492 valid lines (866160 invalid), 8021 l/s, max density 72 l/s
>
> pypy2-v5.7.0-linux32
> 183.0 secs, 177492 valid lines (866160 invalid), 5703 l/s, max density 72 l/s
>
> This is longer run than what I tried previously and surely this is not a
> "double time". But still significantly slower.
>
> Each line is analyzed using a regexp, which I read is slow in pypy.
>
> Both runs have exactly the same input and output. Subjectively, the
> processing's debugging output gradually got faster with pypy, while
> cpython ran at a constant speed. Is it normal for the warmup to take
> minutes? I don't know the details.
>
> In production, this processing is run from cron every five minutes. Is it
> possible to store the warmed-up state between runs? (Note: I have *.pyc
> files disabled at home using PYTHONDONTWRITEBYTECODE=1.)
>
> I know it's annoying I don't share code and I'm sorry. With this mail I
> just wanted to give out some numbers for the possibly curious.
>
> The pypy itself is interesting and I hope I'll return to it someday more
> thoroughly.
>
> Thanks again & have a nice day,
>
> Vláďa
>
>
> On 27.3.2017 17:21, Maciej Fijalkowski wrote:
>> Hi Vlada
>>
>> Generally speaking, if we can't have a look there is incredibly little
>> we can do; "I have a program" can be pretty much anything.
>>
>> It is well known that django ORM is very slow (both on pypy and on
>> cpython) and makes the JIT take forever to warm up. I have absolutely
>> no idea how long your run is at full CPU, but this is definitely one
>> of your suspects
>>
>> On Sun, Mar 26, 2017 at 1:06 PM, Vláďa Macek <ma...@sandbox.cz> wrote:
>>> Hi, recently I asked my friends to run my sort of a benchmark on their
>>> machines (attached). The goal was to test the speed of different data
>>> access in python2 and python3, 32bit and 64bit. One of my friends sent me
>>> the pypy results -- the script ran fast as hell! Astounding.
>>>
>>> At home I have a 64bit Dell laptop running 32bit Ubuntu 14.04. I downloaded
>>> your binary
>>> https://bitbucket.org/pypy/pypy/downloads/pypy2-v5.7.0-linux32.tar.bz2 and
>>> confirmed my friend's results, wow.
>>>
>>> I develop a large Django project, that includes a big amount of background
>>> data processing. Reads large files, computes, issues much SQL to postgresql
>>> via psycopg2, every 5 minutes. Heavily uses memcache daemon between runs.
>>>
>>> I'd welcome a speedup here very much.
>>>
>>> So let's give it a try. Installed psycopg2cffi (via pip in virtualenv), set
>>> up the paths and ran. The computation printouts were the same, very
>>> promising -- taking into account how complicated the project is! The SQL
>>> looked right too. My respect for the compatibility!
>>>
>>> Unfortunately, the time needed to complete was double that of CPython
>>> 2.7 for exactly the same task.
>>>
>>> You mention you might have some tips for why it's slow. Are you interested
>>> in getting in touch? Although I rather can't share the code and data with
>>> you, I'm offering a real world example of significant load that might help
>>> Pypy get better.
>>>
>>> Thank you,
>>>
>>> --
>>> : Vlada Macek  :  http://macek.sandbox.cz  : +420 608 978 164
>>> : UNIX && Dev || Training : Python, Django : PGP key 97330EBD
>>>
>>> (Disclaimer: The opinions expressed herein are not necessarily those
>>> of my employer, not necessarily mine, and probably not necessary.)
>>>
>


Re: [pypy-dev] pypy real world example, a django project data processing. but slow...

2017-03-27 Thread Maciej Fijalkowski
Hi Vlada

Generally speaking, if we can't have a look there is incredibly little
we can do; "I have a program" can be pretty much anything.

It is well known that django ORM is very slow (both on pypy and on
cpython) and makes the JIT take forever to warm up. I have absolutely
no idea how long your run is at full CPU, but this is definitely one
of your suspects

On Sun, Mar 26, 2017 at 1:06 PM, Vláďa Macek  wrote:
> Hi, recently I asked my friends to run my sort of a benchmark on their
> machines (attached). The goal was to test the speed of different data
> access in python2 and python3, 32bit and 64bit. One of my friends sent me
> the pypy results -- the script ran fast as hell! Astounding.
>
> At home I have a 64bit Dell laptop running 32bit Ubuntu 14.04. I downloaded
> your binary
> https://bitbucket.org/pypy/pypy/downloads/pypy2-v5.7.0-linux32.tar.bz2 and
> confirmed my friend's results, wow.
>
> I develop a large Django project, that includes a big amount of background
> data processing. Reads large files, computes, issues much SQL to postgresql
> via psycopg2, every 5 minutes. Heavily uses memcache daemon between runs.
>
> I'd welcome a speedup here very much.
>
> So let's give it a try. Installed psycopg2cffi (via pip in virtualenv), set
> up the paths and ran. The computation printouts were the same, very
> promising -- taking into account how complicated the project is! The SQL
> looked right too. My respect for the compatibility!
>
> Unfortunately, the time needed to complete was double that of CPython
> 2.7 for exactly the same task.
>
> You mention you might have some tips for why it's slow. Are you interested
> in getting in touch? Although I rather can't share the code and data with
> you, I'm offering a real world example of significant load that might help
> Pypy get better.
>
> Thank you,
>
> --
> : Vlada Macek  :  http://macek.sandbox.cz  : +420 608 978 164
> : UNIX && Dev || Training : Python, Django : PGP key 97330EBD
>
> (Disclaimer: The opinions expressed herein are not necessarily those
> of my employer, not necessarily mine, and probably not necessary.)
>
>
>


Re: [pypy-dev] Speeds of various utf8 operations

2017-03-05 Thread Maciej Fijalkowski
Yes sure, I'm aware of that :-)

The problem only shows up with "start" and "end" parameters being used

On Mon, Mar 6, 2017 at 11:13 AM, Armin Rigo <armin.r...@gmail.com> wrote:
> Hi Maciej,
>
> On 5 March 2017 at 20:24, Maciej Fijalkowski <fij...@gmail.com> wrote:
>> This is checking for spaces in unicode (so it's known to be valid utf8)
>
> Ok, then you might have missed another property of UTF-8: when you
> check for "being a substring" in UTF-8, you don't need to do any
> decoding.  Instead you only need to check "being a substring" with the
> two encoded UTF-8 strings.  This always works as expected, i.e. you
> can never get a positive answer by chance.  So for example:
>
> x in y   can be implemented as   x._utf8 in y._utf8
>
> and in this case, you can find spaces in a unicode string just by
> searching for the 10 byte patterns that are spaces-encoded-as-UTF-8
> (11 if you also count '\n\r' as one such pattern).
>
> That's also how the 're' module could be rewritten to directly handle
> UTF-8 strings, instead of decoding it first.
>
>
> A bientôt,
>
> Armin.
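The "substring on the encoded bytes" property described above can be checked in a few lines of plain Python (a sketch using str.encode, not PyPy's internal utf8 storage):

```python
# Sketch in plain Python (using str.encode, not PyPy's internal utf8
# storage): a byte-level substring test on the UTF-8 encodings agrees
# with the unicode-level test, and never matches "by chance" inside a
# multi-byte sequence, because no codepoint's encoding can begin in the
# middle of another's.
def utf8_contains(needle, haystack):
    return needle.encode("utf-8") in haystack.encode("utf-8")

samples = [("\u00e9", "caf\u00e9"), ("\u00fc", "na\u00efve"),
           (" ", "a b"), ("\u00df", "abc")]
for needle, haystack in samples:
    assert utf8_contains(needle, haystack) == (needle in haystack)
```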


Re: [pypy-dev] Speeds of various utf8 operations

2017-03-05 Thread Maciej Fijalkowski
This is checking for spaces in unicode (so it's known to be valid utf8)

On Sun, Mar 5, 2017 at 11:14 AM, Armin Rigo <armin.r...@gmail.com> wrote:
> Hi Maciej,
>
> On 4 March 2017 at 19:01, Maciej Fijalkowski <fij...@gmail.com> wrote:
>> def next_codepoint_pos(code, pos):
>>     chr1 = ord(code[pos])
>>     if chr1 < 0x80:
>>         return pos + 1
>>     if 0xC2 <= chr1 <= 0xDF:
>>         return pos + 2
>>     if chr1 >= 0xE0 and chr1 <= 0xEF:
>>         return pos + 3
>>     return pos + 4
>
> If you don't want error checking, then you can simplify a bit the
> range checks here.  Maybe it gives some more gains, but who knows:
>
> def next_codepoint_pos(code, pos):
> chr1 = ord(code[pos])
> if chr1 < 0x80:
> return pos + 1
> if chr1 <= 0xDF:
> return pos + 2
> if chr1 <= 0xEF:
> return pos + 3
> return pos + 4
>
>
> A bientôt,
>
> Armin.
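For what it's worth, the simplified range checks can be sanity-checked against CPython's own encoder; the sketch below operates on a plain bytes object rather than an RPython string:

```python
# Standalone sanity check in plain Python (operating on a bytes object,
# not an RPython string): the simplified lead-byte ranges agree with the
# length of CPython's own UTF-8 encoding at every width.
def next_codepoint_pos(code, pos):
    chr1 = code[pos]          # lead byte of the codepoint at 'pos'
    if chr1 < 0x80:
        return pos + 1        # ASCII
    if chr1 <= 0xDF:
        return pos + 2        # 2-byte sequence (lead byte 0xC2-0xDF)
    if chr1 <= 0xEF:
        return pos + 3        # 3-byte sequence (lead byte 0xE0-0xEF)
    return pos + 4            # 4-byte sequence

for ch in ("a", "\u00e9", "\u20ac", "\U0001f600"):  # 1- to 4-byte examples
    encoded = ch.encode("utf-8")
    assert next_codepoint_pos(encoded, 0) == len(encoded)
```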


[pypy-dev] revisit web assembly?

2017-03-04 Thread Maciej Fijalkowski
I just found that: https://sourceware.org/ml/binutils/2017-03/msg00044.html

It might be cool to see, maybe we can relatively easily compile to the
web platform. Even interpreter-only version could be quite
interesting.

Cheers,
fijal


Re: [pypy-dev] Speeds of various utf8 operations

2017-03-04 Thread Maciej Fijalkowski
Hi phyo

The mail is about timing operations in C/assembler. I will have more
detailed Python-level benchmarks as I progress with my branch.


On 04 Mar 2017 7:36 PM, "Phyo Arkar" <phyo.arkarl...@gmail.com> wrote:

SSE means https://en.wikipedia.org/wiki/Streaming_SIMD_Extensions?

In comparison to CPython, is this much slower?

On Sun, Mar 5, 2017 at 12:32 AM Maciej Fijalkowski <fij...@gmail.com> wrote:

> Hello everyone
>
> I've been experimenting a bit with faster utf8 operations (and
> conversion that does not do much). I'm writing down the results so
> they don't get forgotten, as well as trying to put them in rpython
> comments.
>
> As far as non-SSE algorithms go, for things like splitlines, split,
> etc., it is important to walk the utf8 string quickly and check
> properties of characters.
>
> So far the current finding has been that a lookup table, for example:
>
>  def next_codepoint_pos(code, pos):
>      chr1 = ord(code[pos])
>      if chr1 < 0x80:
>          return pos + 1
>      return pos + ord(runicode._utf8_code_length[chr1 - 0x80])
>
> is significantly slower than the following code (neither does error checking):
>
> def next_codepoint_pos(code, pos):
>     chr1 = ord(code[pos])
>     if chr1 < 0x80:
>         return pos + 1
>     if 0xC2 <= chr1 <= 0xDF:
>         return pos + 2
>     if chr1 >= 0xE0 and chr1 <= 0xEF:
>         return pos + 3
>     return pos + 4
>
> The exact difference depends on how many multi-byte characters there
> are and how big the strings are. It's up to 40%, but as a general
> rule, the more ascii characters there are, the less of an impact it
> has; also, the larger the strings are, the more impact the
> memory/L2/L3 cache has.
>
> PS. SSE will be faster still, but we might not want SSE for just splitlines
>
> Cheers,
> fijal
>


Re: [pypy-dev] Speeds of various utf8 operations

2017-03-04 Thread Maciej Fijalkowski
Er... why would it be slower than cpython?

Anyway, the speeds I'm reporting on are based on C/assembler programs so far.

On Sat, Mar 4, 2017 at 7:36 PM, Phyo Arkar <phyo.arkarl...@gmail.com> wrote:
> SSE means https://en.wikipedia.org/wiki/Streaming_SIMD_Extensions?
>
> In comparison to CPython, is this much slower?
>
> On Sun, Mar 5, 2017 at 12:32 AM Maciej Fijalkowski <fij...@gmail.com> wrote:
>>
>> Hello everyone
>>
>> I've been experimenting a bit with faster utf8 operations (and
>> conversion that does not do much). I'm writing down the results so
>> they don't get forgotten, as well as trying to put them in rpython
>> comments.
>>
>> As far as non-SSE algorithms go, for things like splitlines, split,
>> etc., it is important to walk the utf8 string quickly and check
>> properties of characters.
>>
>> So far the current finding has been that a lookup table, for example:
>>
>>  def next_codepoint_pos(code, pos):
>>      chr1 = ord(code[pos])
>>      if chr1 < 0x80:
>>          return pos + 1
>>      return pos + ord(runicode._utf8_code_length[chr1 - 0x80])
>>
>> is significantly slower than the following code (neither does error
>> checking):
>>
>> def next_codepoint_pos(code, pos):
>>     chr1 = ord(code[pos])
>>     if chr1 < 0x80:
>>         return pos + 1
>>     if 0xC2 <= chr1 <= 0xDF:
>>         return pos + 2
>>     if chr1 >= 0xE0 and chr1 <= 0xEF:
>>         return pos + 3
>>     return pos + 4
>>
>> The exact difference depends on how many multi-byte characters there
>> are and how big the strings are. It's up to 40%, but as a general
>> rule, the more ascii characters there are, the less of an impact it
>> has; also, the larger the strings are, the more impact the
>> memory/L2/L3 cache has.
>>
>> PS. SSE will be faster still, but we might not want SSE for just
>> splitlines
>>
>> Cheers,
>> fijal


[pypy-dev] Speeds of various utf8 operations

2017-03-04 Thread Maciej Fijalkowski
Hello everyone

I've been experimenting a bit with faster utf8 operations (and
conversion that does not do much). I'm writing down the results so
they don't get forgotten, as well as trying to put them in rpython
comments.

As far as non-SSE algorithms go, for things like splitlines, split,
etc., it is important to walk the utf8 string quickly and check
properties of characters.

So far the current finding has been that a lookup table, for example:

 def next_codepoint_pos(code, pos):
     chr1 = ord(code[pos])
     if chr1 < 0x80:
         return pos + 1
     return pos + ord(runicode._utf8_code_length[chr1 - 0x80])

is significantly slower than the following code (neither does error checking):

def next_codepoint_pos(code, pos):
    chr1 = ord(code[pos])
    if chr1 < 0x80:
        return pos + 1
    if 0xC2 <= chr1 <= 0xDF:
        return pos + 2
    if chr1 >= 0xE0 and chr1 <= 0xEF:
        return pos + 3
    return pos + 4

The exact difference depends on how many multi-byte characters there
are and how big the strings are. It's up to 40%, but as a general
rule, the more ascii characters there are, the less of an impact it
has; also, the larger the strings are, the more impact the
memory/L2/L3 cache has.

PS. SSE will be faster still, but we might not want SSE for just splitlines

Cheers,
fijal


Re: [pypy-dev] Numpy on PyPy : cpyext

2017-03-03 Thread Maciej Fijalkowski
Hi Yash

Is your software open source? I'm happy to check it out for you

I think the C-level profiling for vmprof is relatively new; you would
need to use a pypy nightly in order to get that level of insight.
Additionally, we're working on cpyext improvements *right now*, so stay
tuned.

If there is a good case for speeding up numpy, we can get it a lot
faster than it is right now and seek some funding for that. Neural
networks might be one of those!

Best regards,
Maciej Fijalkowski

On Fri, Mar 3, 2017 at 2:31 AM, Singh, Yashwardhan
<yashwardhan.si...@intel.com> wrote:
> Hi Everyone,
>
> I am using numpy on pypy to train a deep neural network. For my workload
> numpy on pypy is taking twice the time to train as numpy on Cpython. I am
> using Numpy via cpyext.
>
> I read in the documentation, "Performance-wise, the speed is mostly the same
> as CPython's NumPy (it is the same code); the exception is that interactions
> between the Python side and NumPy objects are mediated through the slower
> cpyext layer (which hurts a few benchmarks that do a lot of
> element-by-element array accesses, for example)." Is there any way in
> which I can profile my application to see how much additional overhead the
> cpyext layer is adding, or whether it is numpy via pypy that is slowing
> things down? I have tried vmprof, but I couldn't figure out from it how
> much time the cpyext layer is taking.
>
> Any help will be highly appreciated.
>
> Regards
> Yash
>
>


Re: [pypy-dev] Pandas on PyPy

2017-02-28 Thread Maciej Fijalkowski
Hi Singh.

We're working hard on getting pypy to run pandas at this very moment
:-) At present, you would be unlikely to find significant speedups -
our C API emulation layer is slow (unless you do a lot of work outside
of pandas without pandas objects).

We will work on it, wait for the next release, hopefully!

Best regards,
Maciej Fijalkowski

On Tue, Feb 28, 2017 at 8:06 PM, Singh, Yashwardhan
<yashwardhan.si...@intel.com> wrote:
> Hi Everyone,
>
> I was hoping to use Pandas with PyPy to speed up applications, but ran into
> compatibility issues with PyPy.
> Has anyone tried to make PyPy work with Pandas, by any experiment/hacking
> method?
> If yes, would you share your findings with me?  If not, could someone point
> me to where the obstacles are and any potential approach, or plan by this
> community?
>
> Any help is greatly appreciated.
>
> Yash
>
>


Re: [pypy-dev] VMProf 0.4.0

2017-02-15 Thread Maciej Fijalkowski
There was definitely a massive problem with libunwind & JIT frames,
which made it unsuitable for Windows and OS X.

Another issue was that libunwind made traces ten times bigger, for no
immediate benefit other than "it might be useful some day" and added
complexity.

On Linux I was getting ~7% of stacks that were not correctly rebuilt.
This is not an issue if you assume that the 7% is statistically
distributed evenly, but I strongly doubt that is the case (and there is
no way to check), which made us build a more robust approach. I think I
would like the following properties:

* everything works without libunwind, native=True raises an exception
* with libunwind, we don't lose frames in python just because
libunwind is unable to reconstruct the stack
* we don't pay 10x storage just because there is an option to want native frames

Can we have that?

PS. How does the new approach work? If it always uses libunwind and
ditches the original approach I'm very much -1

Cheers,
fijal


On Wed, Feb 15, 2017 at 12:10 PM, Richard Plangger  wrote:
> Hi,
>
>> Avoiding making it a hard dependency would be a good idea.  Also,
>> libunwind is a hack that showed problems when vmprof previously
>> supported C frames, and it was removed for that reason.  Maybe you
>> should give a word about why re-enabling C frames with libunwind looks
>> ok now?
>
> Can you elaborate on that? My understanding is that Maciej at some point
> complained that there are some issues and removed that feature (which
> Antonio was not particularly happy about). I never learned the issues.
> One complaint I remember (should be on IRC logs some place):
>
> It was along the lines: "... some times libunwind returns garbage ..."
>
> Speaking from my experience during the development:
>
> I have not seen a C stack trace that is totally implausible (even though
> I thought so once, it turned out to be some other error). It was very hard
> to get the trampoline right and it was also not easy to get the stack
> walking right (considering it should work for all platform combinations).
>
> There are several cases where libunwind could say: 'hey, unsure what to
> do, cannot rebuild the stack' and those cases now lead to skipping the
> sample in the signal.
>
> I'm happy to accept the fact that 'libunwind is garbage/hack/...' if
> somebody proves that. So if anyone has some insight on that, please
> speak up. There are alternatives like: use libbacktrace from gcc (which
> is already included in vmprof now).
>
> Cheers,
> Richard
>
>
>
>
>


Re: [pypy-dev] Leysin sprint?

2017-01-16 Thread Maciej Fijalkowski
I have a place to stay in Zurich so it's a bit of a win for me :-)

I would prefer this to be somewhere around early March I think, but I
can't yet commit

Cheers,
fijal

On Mon, Jan 16, 2017 at 8:43 PM, Manuel Jacob  wrote:
> Hi,
>
> Fortunately, this year I can place my exams quite flexibly.  I'm staying in
> Brussels until February 7th, after FOSDEM, so anything after that would
> work perfectly for me.
>
> I'd prefer a Leysin sprint, but mostly for reasons of nostalgia rather than
> more serious considerations.
>
> -Manuel
>
>
> On 2017-01-15 09:41, Armin Rigo wrote:
>>
>> Hi all,
>>
>> I'm starting to organize the Leysin sprint for this winter.
>>
>> The first note is that the Swiss Python Summit will take place on Feb
>> 17th near Zurich (http://www.python-summit.ch/).  I'll give a talk
>> about RevDB, the reverse debugger.
>>
>> One option would be to add a few days of sprint at or near the
>> conference location.  That may be a way to attract more people.  The
>> other option would be a regular sprint in Leysin.  For that case, the
>> dates are still completely open.  (It snowed a lot!)
>>
>> Anyone that thinks about coming, please tell me your preferences and
>> dates!
>>
>>
>> A bientôt,
>>
>> Armin.
>


Re: [pypy-dev] Mercurial "general delta"

2016-12-04 Thread Maciej Fijalkowski
As far as I understood the answer on #mercurial, it seems to be "no".
If the only effect is that it uses less space on bitbucket's servers,
then I presume it's not our problem.

On Sun, Dec 4, 2016 at 1:41 PM, Maciej Fijalkowski <fij...@gmail.com> wrote:
> I don't know :-)
>
> let's see
>
> On Sun, Dec 4, 2016 at 1:28 PM, Armin Rigo <armin.r...@gmail.com> wrote:
>> Hi Maciej,
>>
>> On 4 December 2016 at 12:27, Maciej Fijalkowski <fij...@gmail.com> wrote:
>>> Can we somehow make it the new thing on the bb? That would mean the
>>> download gets smaller.
>>
>> Is that true?
>>
>>
>> Armin


Re: [pypy-dev] Mercurial "general delta"

2016-12-04 Thread Maciej Fijalkowski
I don't know :-)

let's see

On Sun, Dec 4, 2016 at 1:28 PM, Armin Rigo <armin.r...@gmail.com> wrote:
> Hi Maciej,
>
> On 4 December 2016 at 12:27, Maciej Fijalkowski <fij...@gmail.com> wrote:
>> Can we somehow make it the new thing on the bb? That would mean the
>> download gets smaller.
>
> Is that true?
>
>
> Armin


Re: [pypy-dev] Mercurial "general delta"

2016-12-04 Thread Maciej Fijalkowski
Can we somehow make it the new thing on the bb? That would mean the
download gets smaller.

On Sat, Dec 3, 2016 at 8:37 PM, Armin Rigo  wrote:
> Hi all,
>
> A quick note for people that have a PyPy repo since years and are
> using it like me without ever re-cloning from Bitbucket: the format of
> modern mercurial repos is more compact (500MB instead of almost
> 900MB), but conversion of existing repos is not automatic.  The new
> format is called General Delta.
>
> If your directory ".hg/store/" is closer to 900 than 500 MB, try this:
>
> hg clone -U --config format.generaldelta=1 --pull OLDREPO NEWREPO
>
> You can then throw away the OLDREPO.  Or like me you can (1) make sure
> OLDREPO is up-to-date with default without any local changes; (2) only
> throw away OLDREPO/.hg and replace it with NEWREPO/.hg; (3) optionally
> re-modify .hg/hgrc; (4) then do "hg update default" to resynchronize.
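(Editor's aside, not part of the original mail: a quick way to tell whether a local repo already uses the new format is to look for the `generaldelta` requirement. A minimal sketch, assuming the active requirements are listed in `.hg/requires`, which holds for Mercurial releases of that era; newer versions may also use `.hg/store/requires`.)

```python
import os

def uses_generaldelta(repo_path):
    """Return True if the repo at repo_path already uses General Delta.

    Assumption: the active requirements are listed one per line in
    .hg/requires (true for Mercurial around 2016).
    """
    requires = os.path.join(repo_path, ".hg", "requires")
    if not os.path.exists(requires):
        return False
    with open(requires) as f:
        return "generaldelta" in f.read().split()
```

If this returns False, the `hg clone --config format.generaldelta=1 --pull` recipe above is the way to convert.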
>
> I noticed this because Bitbucket says the pypy repo is not General
> Delta.  I don't know how to fix that, or if it really matters at all:
> I guess newly cloned repos from Bitbucket will automatically use the
> latest format anyway.
>
>
> A bientôt,
>
> Armin.


Re: [pypy-dev] PGO Optimized Binary

2016-11-10 Thread Maciej Fijalkowski
Hi

An 8% gain is very good if you can reproduce it across multiple runs
(there is a pretty high variance, I would think).

You can also try running with --jit off. This gives you an indication
of the speed of the interpreter, which is part of warmup

On Wed, Nov 9, 2016 at 12:30 AM, Singh, Yashwardhan
 wrote:
> Hi Armin,
>
>
> Thanks for your feedback.
> We ran one of the program suggested by you as an example for evaluation:
> cd rpython/jit/tl
> non-pgo-pypy ../../bin/rpython -O2 --source targettlr
> pgo-pypy ../../bin/rpython -O2 --source targettlr
>
> We got the following results :
> Non-Pgo pypy -
> [Timer] Timings:
> [Timer] annotate   ---  7.5 s
> [Timer] rtype_lltype   ---  5.8 s
> [Timer] backendopt_lltype  ---  3.6 s
> [Timer] stackcheckinsertion_lltype ---  0.1 s
> [Timer] database_c --- 19.6 s
> [Timer] source_c   ---  2.6 s
> [Timer] =
> [Timer] Total: --- 39.2 s
>
> PGO-pypy :
> [Timer] Timings:
> [Timer] annotate   ---  7.6 s
> [Timer] rtype_lltype   ---  5.1 s
> [Timer] backendopt_lltype  ---  3.1 s
> [Timer] stackcheckinsertion_lltype ---  0.0 s
> [Timer] database_c --- 18.5 s
> [Timer] source_c   ---  2.3 s
> [Timer] =
> [Timer] Total: --- 36.6 s
>
> The delta in performance  between these two is about 8%.
>
> We are working on getting the data to identify the % of interpreted code vs
> the JITed code for both binaries. We are also working on creating a pull
> request to get better feedback on the change.
>
> Regards
> Yash
>
> 
> From: Armin Rigo [armin.r...@gmail.com]
> Sent: Wednesday, November 02, 2016 2:18 AM
> To: Singh, Yashwardhan
> Cc: pypy-dev@python.org
> Subject: Re: [pypy-dev] PGO Optimized Binary
>
> Hi,
>
> On 31 October 2016 at 22:28, Singh, Yashwardhan
>  wrote:
>> We applied compiler assisted optimization technique called PGO or Profile 
>> Guided Optimization while building PyPy, and found performance got improved 
>> by up to 22.4% on the Grand Unified Python Benchmark (GUPB) from “hg clone 
>> https://hg.python.org/benchmarks”.  The below result table shows majority of 
>> 51 micros got performance boost with 8 got performance regression.
>
> The kind of performance improvement you are measuring involves only
> short- or very short-running programs.  A few years ago we'd have
> shrugged it off as irrelevant---"please modify the benchmarks so that
> they run for at least 10 seconds, more if they are larger"---because
> the JIT compiler doesn't have a chance to warm up.  But we'd also have
> shrugged off your whole attempt---"PGO optimization cannot change
> anything to the speed of JIT-produced machine code".
>
> Nowadays we tend to look more seriously at the cold or warming-up
> performance too, or at least we know that we should look there.  There
> are (stalled) plans of setting up a second benchmark suite for PyPy
> which focuses on this.
>
> You can get an estimate of whether you're looking at cold or hot code:
> compare the timings with CPython.  Also, you can set the environment
> variable  ``PYPYLOG=jit-summary:-`` and look at the first 2 lines to
> see how much time was spent warming up the JIT (or attempting to).
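(Editor's sketch, not from the original mail: one way to apply the tip above is to pass the environment variable to a child process. The `pypy` command name in the usage comment is an assumption about your PATH, and `my_script.py` is a hypothetical script.)

```python
import os
import subprocess
import sys

def run_with_jit_summary(cmd):
    """Run cmd with PYPYLOG=jit-summary:- in its environment, so a pypy
    interpreter in cmd dumps JIT warmup statistics (the first two lines
    of the log) to stderr.  Harmless when cmd is plain CPython."""
    env = dict(os.environ, PYPYLOG="jit-summary:-")
    return subprocess.run(cmd, env=env, capture_output=True, text=True)

# usage sketch: run_with_jit_summary(["pypy", "my_script.py"])
```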
>
> Note that we did enable PGO long ago, with modest benefits.  We gave
> up when our JIT compiler became good enough.  Maybe now is the time to
> try again (and also, PGO itself might have improved in the meantime).
>
>> We’d like to get some input on how to contribute our optimization recipe to 
>> the PyPy dev tree, perhaps by creating an item to the PyPy issue tracker?
>
> The best would be to create a pull request so that we can look at your
> changes more easily.
>
>> In addition, we would also appreciate any other benchmark or real world use 
>> based workload as alternatives to evaluate this.
>
> You can take any Python program that runs either very shortly or not
> faster than CPython.  For a larger example (with Python 2.7):
>
> cd rpython/jit/tl
> python ../../bin/rpython -O2 --source targettlr    # 24 secs
> pypy ../../bin/rpython -O2 --source targettlr      # 39 secs
>
>
> A bientôt,
>
> Armin.


Re: [pypy-dev] Performance degradation in the latest pypy

2016-10-29 Thread Maciej Fijalkowski
Hi Saud

copy.copy is a lot slower in the new pypy. Minimal example to reproduce it:


import copy, sys


class A(object):
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c
        self.d = a
        self.e = b


a = [A(1, 2, 3), A(4, 5, 6), A(6, 7, 8)]


def f():
    for i in range(int(sys.argv[1])):
        copy.copy(a[i % 3])


f()


(run it with 1 000 000)

The quick workaround for you is to write a copy function manually;
since you use it in one specific place, that should make things a lot
faster.
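(Editor's illustration of the suggested workaround, not from the original mail: instead of going through `copy.copy`, construct the new instance directly and transfer the attributes. `manual_copy` is a hypothetical helper name.)

```python
class A(object):
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c
        self.d = a
        self.e = b


def manual_copy(obj):
    # Direct construction instead of copy.copy(obj): a cheap attribute
    # transfer with none of the generic copy-protocol overhead.
    new = A(obj.a, obj.b, obj.c)
    new.d = obj.d
    new.e = obj.e
    return new
```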

In the meantime we'll look into the fix.

On Sat, Oct 29, 2016 at 8:02 AM, Saud Alwasly <saudalwa...@gmail.com> wrote:
> here is the new zip file including the missing package
>
>
> On Fri, Oct 28, 2016 at 10:58 AM Maciej Fijalkowski <fij...@gmail.com>
> wrote:
>>
>> (vmprof)[brick:~/Downloads/dir] $ python Simulate.py
>>
>> Traceback (most recent call last):
>>
>>   File "Simulate.py", line 9, in <module>
>>
>> from pypyplotPKG.Ploter import Graph, plot2, hist, bar, bar2
>>
>>   File "/Users/dev/Downloads/dir/pypyplotPKG/Ploter.py", line 1, in
>> <module>
>>
>> import pypyplotPKG.pypyplot as plt
>>
>>   File "/Users/dev/Downloads/dir/pypyplotPKG/pypyplot.py", line 3, in
>> <module>
>>
>> from shareddataPKG import SharedData as SharedData
>>
>> ImportError: No module named shareddataPKG
>>
>> On Fri, Oct 28, 2016 at 12:42 AM, Saud Alwasly <saudalwa...@gmail.com>
>> wrote:
>> > Sure, here it is.
>> > you can run Simulation.py
>> >
>> > On Wed, Oct 26, 2016 at 9:08 AM Armin Rigo <armin.r...@gmail.com> wrote:
>> >>
>> >> Hi Saud,
>> >>
>> >> On 26 October 2016 at 11:19, Saud Alwasly <saudalwa...@gmail.com>
>> >> wrote:
>> >> > I have noticed severe  performance degradation after upgrading pypy
>> >> > from
>> >> > pypy4.0.1 to pypy5.4.1.
>> >> >
>> >> > I attached the call graph for the same script running on both:
>> >> > it seems that the copy module is an issue in the new version.
>> >>
>> >> Please give us an example of runnable code that shows the problem.  We
>> >> (or at least I) can't do anything with a call graph.
>> >>
>> >>
>> >> A bientôt,
>> >>
>> >> Armin.
>> >
>> >
>> >


Re: [pypy-dev] Performance degradation in the latest pypy

2016-10-28 Thread Maciej Fijalkowski
(vmprof)[brick:~/Downloads/dir] $ python Simulate.py

Traceback (most recent call last):

  File "Simulate.py", line 9, in <module>

from pypyplotPKG.Ploter import Graph, plot2, hist, bar, bar2

  File "/Users/dev/Downloads/dir/pypyplotPKG/Ploter.py", line 1, in <module>

import pypyplotPKG.pypyplot as plt

  File "/Users/dev/Downloads/dir/pypyplotPKG/pypyplot.py", line 3, in <module>

from shareddataPKG import SharedData as SharedData

ImportError: No module named shareddataPKG

On Fri, Oct 28, 2016 at 12:42 AM, Saud Alwasly  wrote:
> Sure, here it is.
> you can run Simulation.py
>
> On Wed, Oct 26, 2016 at 9:08 AM Armin Rigo  wrote:
>>
>> Hi Saud,
>>
>> On 26 October 2016 at 11:19, Saud Alwasly  wrote:
>> > I have noticed severe  performance degradation after upgrading pypy from
>> > pypy4.0.1 to pypy5.4.1.
>> >
>> > I attached the call graph for the same script running on both:
>> > it seems that the copy module is an issue in the new version.
>>
>> Please give us an example of runnable code that shows the problem.  We
>> (or at least I) can't do anything with a call graph.
>>
>>
>> A bientôt,
>>
>> Armin.
>
>
>


Re: [pypy-dev] Dontbug: A reversible debugger for PHP (similar in concept to RevDB for Python/PyPy)

2016-10-20 Thread Maciej Fijalkowski
Hi Sidharth

I see dontbug is based on rr - I would like to know how well rr works
for you. We've tried using rr for pypy and it didn't work as
advertised. On the other hand it seems the project is moving fast, so
maybe it works these days.

On Thu, Oct 20, 2016 at 9:50 PM, Sidharth Kshatriya
 wrote:
> Dear All,
>
> There have been some interesting blogs about RevDB a reversible debugger for
> Python on the PyPy blog.
>
> I'd like to tell you about Dontbug, a reversible debugger for PHP that I
> recently released. Like RevDB, it allows you to debug forwards and backwards
> -- but in PHP.
>
> See:
> https://github.com/sidkshatriya/dontbug
>
> For a short (1m35s) demo video:
> https://www.youtube.com/watch?v=DA76z77KtY0
>
> Why am I talking about this in a PyPy mailing list :-) ? Firstly, because I
> think reverse debuggers for dynamic languages are relatively rare -- so it's
> a good idea that we know about each other! Secondly, the fact that there are
> more and more reversible debuggers for various languages every year means
> that reverse debugging is definitely entering the mainstream. We could be at
> an inflexion point here!
>
> Hope you guys find Dontbug interesting!
>
> Thanks,
>
> Sidharth
>
>
>
>


Re: [pypy-dev] NumPy or NumPyPy

2016-09-28 Thread Maciej Fijalkowski
Hi

We're maintaining both, and we are planning to merge (at some point
in the future) the good parts of numpypy (speed of array access) with
the good parts of C numpy (compatibility).

On Tue, Sep 27, 2016 at 11:01 PM, Papa, Florin  wrote:
> Hello,
>
> I read this announcement [1] saying that "over 99% of the upstream numpy test 
> suite" is passed. Is this when using pypy with the upstream numpy (thanks to 
> the incremental improvements brought to cpyext) or is it when using pypy with 
> numpypy?
>
> I also found this link [2], tracking numpypy status. Is numpypy still 
> maintained or does this buildbot track the regression tests for upstream 
> numpy in pypy (using cpyext)?
>
> [1] https://morepypy.blogspot.com/2016/08/pypy2-v54-released-incremental.html
> [2] http://buildbot.pypy.org/numpy-status/latest.html
>
> Thank you,
> Florin
>
>
>


Re: [pypy-dev] improve error message when missing 'self' in method definition

2016-09-28 Thread Maciej Fijalkowski
On Tue, Sep 27, 2016 at 8:33 PM, Ryan Gonzalez  wrote:
> Have you considered bringing this up on python-ideas, too?

python-ideas is generally quite a hostile place. That said, if you
think it's worth your effort to submit it there, feel free to do so;
it's just that the core pypy devs feel their time is better spent
elsewhere than arguing on python-ideas.

>
> On Tue, Sep 27, 2016 at 12:19 PM, Carl Friedrich Bolz  wrote:
>>
>> Hi all,
>>
>> I read this paper today about common mistakes that Python beginners
>> make:
>>
>>
>> https://www.researchgate.net/publication/307088989_Some_Trouble_with_Transparency_An_Analysis_of_Student_Errors_with_Object-oriented_Python
>>
>> The most common one by far is forgetting the "self" parameter in the
>> method definition (which also still happens to me regularly). The error
>> message is not particularly enlightening, if you don't quite understand
>> the explicit self in Python.
>>
>>
>> So I wonder whether we should print a better error message, something
>> like this:
>>
>> $ cat m.py
>> class A(object):
>>     def f(x):
>>         return self.x
>> A().f(1)
>>
>> $ pypy m.py
>> Traceback (application-level):
>>   File "m.py", line 4, in <module>
>> A().f(1)
>> TypeError: f() takes exactly 1 argument (2 given). Did you forget 'self'
>> in the function definition?
>>
>>
>> It's a bit of an open question how clever we would like this to be to reduce
>> false positives, see the attached patch for a very simple approach.
>>
>> Anyone have opinions?
>>
>> Cheers,
>>
>> Carl Friedrich
>>
>>
>
>
>
> --
> Ryan
> [ERROR]: Your autotools build scripts are 200 lines longer than your
> program. Something’s wrong.
> http://kirbyfan64.github.io/
>
>
>


Re: [pypy-dev] improve error message when missing 'self' in method definition

2016-09-27 Thread Maciej Fijalkowski
I'm +1

On Tue, Sep 27, 2016 at 7:19 PM, Carl Friedrich Bolz  wrote:
> Hi all,
>
> I read this paper today about common mistakes that Python beginners
> make:
>
> https://www.researchgate.net/publication/307088989_Some_Trouble_with_Transparency_An_Analysis_of_Student_Errors_with_Object-oriented_Python
>
> The most common one by far is forgetting the "self" parameter in the
> method definition (which also still happens to me regularly). The error
> message is not particularly enlightening, if you don't quite understand
> the explicit self in Python.
>
>
> So I wonder whether we should print a better error message, something
> like this:
>
> $ cat m.py
> class A(object):
>     def f(x):
>         return self.x
> A().f(1)
>
> $ pypy m.py
> Traceback (application-level):
>   File "m.py", line 4, in <module>
> A().f(1)
> TypeError: f() takes exactly 1 argument (2 given). Did you forget 'self'
> in the function definition?
>
>
> It's a bit of an open question how clever we would like this to be to reduce
> false positives, see the attached patch for a very simple approach.
>
> Anyone have opinions?
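(Editor's sketch of the heuristic under discussion, at application level rather than inside the interpreter; the attached patch is not reproduced here. `argument_error_message` is a hypothetical helper, and `inspect.signature` is only a modern stand-in for the argument introspection an interpreter-level check would do.)

```python
import inspect

def argument_error_message(func, num_given):
    """Build the TypeError text, appending the 'self' hint only when the
    call had exactly one argument too many and the function has no
    leading 'self' parameter -- a cheap guard against false positives."""
    params = list(inspect.signature(func).parameters)
    num_expected = len(params)
    msg = "%s() takes exactly %d argument%s (%d given)" % (
        func.__name__, num_expected,
        "" if num_expected == 1 else "s", num_given)
    if num_given == num_expected + 1 and (not params or params[0] != "self"):
        msg += ". Did you forget 'self' in the function definition?"
    return msg
```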
>
> Cheers,
>
> Carl Friedrich
>
>


Re: [pypy-dev] Build Pypy with different Interpreters

2016-09-09 Thread Maciej Fijalkowski
Hi Jan

The short answer is - all of those interpreters are a lot slower than
pypy. Having even relatively good parallelization (and only some steps
can be parallelized) would yield no improvements over using pure pypy
for most examples.

On Thu, Sep 8, 2016 at 2:38 PM, Jan Brohl  wrote:
> Sorry for the typo - I was asking if it is possible to build pypy *with*
> different interpreters instead of just cpython and pypy.
>
> (using eg "ipy64" instead of "pypy" or "python" in the translation step
> described at
> http://doc.pypy.org/en/latest/build.html#run-the-translation )
>
>
> Am 08.09.2016 um 14:07 schrieb William ML Leslie:
>>
>> On 8 September 2016 at 19:40, Jan Brohl wrote:
>>
>>
>> Is it possible to build different interpreters like Stackless,
>> IronPython or Jython?
>>
>>
>> That was actually the original motivation for creating pypy -
>> maintaining all those different python implementations was a lot of
>> unnecessary work.  Stackless support is enabled by default.  Support
>> for translating to CLI and the JVM was eventually dropped for lack of
>> interest.  If someone wanted to re-add that support, they could learn
>> from the mistakes of the previous implementation.
>>
>> http://doc.pypy.org/en/latest/stackless.html
>>
>>
>>
>>
>> If not - why?
>>
>> If yes - is it (in theory) possible to gain a speedup on those
>> without GIL? (Is there multithreading at all the in translation
>> process?)
>>
>>
>> Translation can't be done concurrently at the moment.  I probably
>> should have expanded upon this in my previous email, and maybe I will;
>> there are a number of global structures, registries, and work lists that
>> would need to be refactored before the translation work could be
>> distributed.  If that's the route the pypy team go, we will consider it
>> after pypy itself supports parallelism.
>>
>> There's another route, which is to support separate compilation, and
>> then to hand off the translation of built-in modules to different
>> executors.  This is itself quite a bit of work due to some inherent
>> properties of rpython.
>>
>> --
>> William Leslie
>>
>> Notice:
>> Likely much of this email is, by the nature of copyright, covered under
>> copyright law.  You absolutely MAY reproduce any part of it in
>> accordance with the copyright law of the nation you are reading this
>> in.  Any attempt to DENY YOU THOSE RIGHTS would be illegal without prior
>> contractual agreement.
>


[pypy-dev] jetbrains vulnerability and py3k money

2016-08-15 Thread Maciej Fijalkowski
So apparently someone just made a big donation to the pypy 3k project.

Details here: 
http://blog.saynotolinux.com/blog/2016/08/15/jetbrains-ide-remote-code-execution-and-local-file-disclosure-vulnerability-analysis/

Cheers,
fijal


Re: [pypy-dev] how to extend VTUNE support from pypy for application hot spot analysis

2016-08-04 Thread Maciej Fijalkowski
On Thu, Aug 4, 2016 at 8:37 AM, Armin Rigo <ar...@tunes.org> wrote:
> Hi Peter,
>
> On 4 August 2016 at 08:05, Maciej Fijalkowski <fij...@gmail.com> wrote:
>> The second one can be worked on using the same mechanisms as vmprof -
>> there is C API that given the assembler address will give you the
>> python stack. It's defined in
>> rpython/jit/backend/llsupport/src/codemap.c I believe
>
> I should add that vtune's particular license towards Open Source
> projects, trying to prevent any paid work from occurring on an open
> source project, did change my position about it: I now consider vtune
> as unfriendly to open source, as its license subtly prevents growth of
> the projects using it.  I'm not usually the kind of guy that would go
> on a rant about license A or B, but this one strikes me as unworkable.
> This license is made by considering that commercial interests and open
> source never interact.  I will thus reply to it by saying that I may
> consider helping you anyway---for a fee, with a regular commercial
> contract.
>
>
> A bientôt,
>
> Armin.

I fully support Armin's position here. The notion that Open Source
developers can somehow feed off pure air *or* not use the advantage of
spending endless free hours to make some money strikes me as a very
strange concept.


Re: [pypy-dev] how to extend VTUNE support from pypy for application hot spot analysis

2016-08-04 Thread Maciej Fijalkowski
Hi Peter

The first request is fulfilled by vmprof

The second one can be worked on using the same mechanisms as vmprof -
there is C API that given the assembler address will give you the
python stack. It's defined in
rpython/jit/backend/llsupport/src/codemap.c I believe

On Thu, Aug 4, 2016 at 3:05 AM, Wang, Peter Xihong
<peter.xihong.w...@intel.com> wrote:
> HI Armin and Maciej,
>
> Let us know once you have something we could actually try.  One requirement 
> is on the sampling/profiling overhead, ideally <1%, but >5% could be 
> troublesome.  We'd like to allow people to do performance analysis on 
> production systems.
>
> Meanwhile, could I make this as two separate requests:
> 1.  Application hot spot analysis.  Today I could run cProfile with CPython 
> and get application code profiles running OpenStack Swift, but can't do the 
> same thing with PyPy
> 2.  JITed code (assembly) mapping back to the application Python code.  VTUNE 
> integration with HHVM and node.js are completed and working today, and I'd 
> hope to see same capability with PyPy.
>
> Thanks,
>
> Peter
>
>
>
> -Original Message-
> From: armin.r...@gmail.com [mailto:armin.r...@gmail.com] On Behalf Of Armin 
> Rigo
> Sent: Tuesday, August 02, 2016 1:18 AM
> To: Maciej Fijalkowski <fij...@gmail.com>
> Cc: Wang, Peter Xihong <peter.xihong.w...@intel.com>; pypy-dev@python.org
> Subject: Re: [pypy-dev] how to extend VTUNE support from pypy for application 
> hot spot analysis
>
> Hi,
>
> On 2 August 2016 at 10:09, Maciej Fijalkowski <fij...@gmail.com> wrote:
>>> As far as I know (my team members tried this), vmprof does not allow us to 
>>> attach to a running process? We will evaluate 
>>> https://github.com/vmprof/vmprof-python if you think it's doable.
>>
>> You would need some form of process cooperation (I think) but it does
>> not seem impossible. What I would do is I would run a separate thread
>> that accepts something (e.g. a pipe write) and then starts vmprof.
>> vmprof once started is global to all threads
>
> Also, please note that I mentioned vmprof as a way to get started.
> You would need something *like* what vmprof does; for a clean solution you don't
> want to enable an additional profiler on your vtune code.
>
>
> A bientôt,
>
> Armin.


Re: [pypy-dev] Fwd: NumPyPy vs NumPy

2016-08-02 Thread Maciej Fijalkowski
On Tue, Aug 2, 2016 at 8:02 AM, Eli Stevens (Gmail)
 wrote:
> This was meant to go to the list, whoops.
>
> -- Forwarded message --
> From: Eli Stevens (Gmail) 
> Date: Mon, Aug 1, 2016 at 11:01 PM
> Subject: Re: [pypy-dev] NumPyPy vs NumPy
> To: Armin Rigo 
>
>
> On Mon, Aug 1, 2016 at 2:02 AM, Armin Rigo  wrote:
>> By the way, it would make a cool project for someone new to the pypy
>> code base (<= still trying to recruit help in making numpy, although
>> it turned out to be very difficult in the past).
>
> It's certainly something I'd be interested in attempting, but I feel
> like I'm lacking some concrete direction. Some documentation about how
> to get the proper environment (is it as simple as `~/venv/pypy/bin/pip
> install numpy` using the cpython numpy?) and how new tests should be
> written would be really helpful.
>
> Eli

Hi Eli.

I'm not completely sure what you're asking about. Usually more
real-time communication via IRC helps us clear up those
misunderstandings much more easily than mail.

If you're asking how to install C numpy - yes, that should work (I
believe there is *some* documentation somewhere?), although I would
install a development version. I'm not sure what "new tests" you're
talking about.

Cheers,
fijal


Re: [pypy-dev] how to extend VTUNE support from pypy for application hot spot analysis

2016-08-02 Thread Maciej Fijalkowski
On Tue, Aug 2, 2016 at 12:46 AM, Wang, Peter Xihong
 wrote:
> Hi Armin,
>
> As far as I know (my team members tried this), vmprof does not allow us to 
> attach to a running process? We will evaluate 
> https://github.com/vmprof/vmprof-python if you think it's doable.

You would need some form of process cooperation (I think) but it does
not seem impossible. What I would do is I would run a separate thread
that accepts something (e.g. a pipe write) and then starts vmprof.
vmprof once started is global to all threads
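(Editor's sketch of that pattern, not from the original mail: a daemon thread that blocks on a pipe and starts the profiler when poked. `start_profiler` is a placeholder callback standing in for something like `vmprof.enable(fd)`; the helper name is an assumption, not vmprof API.)

```python
import os
import threading

def arm_profiler_trigger(start_profiler):
    """Spawn a daemon thread that blocks on a pipe read; writing one
    byte to the returned fd makes the thread call start_profiler(),
    which -- for vmprof -- would enable profiling for all threads."""
    read_fd, write_fd = os.pipe()

    def waiter():
        os.read(read_fd, 1)   # block until something is written
        start_profiler()

    t = threading.Thread(target=waiter)
    t.daemon = True
    t.start()
    return write_fd

# usage sketch: fd = arm_profiler_trigger(lambda: vmprof.enable(out_fd))
# ...later, poke it from anywhere: os.write(fd, b"x")
```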


Re: [pypy-dev] NumPyPy vs NumPy

2016-08-01 Thread Maciej Fijalkowski
On Mon, Aug 1, 2016 at 10:02 AM, Papa, Florin  wrote:
> Hi Armin,
>
>>The table also shows that PyPy NumPyPy is really slower, even with 
>>vectorization enabled.
>>It seems that the current focus of our work, on continuing to improve cpyext 
>>instead of
>>numpypy, is a good idea.
>
> Does this mean that the main direction is to support NumPy (through improving 
> cpyext)
> instead of maintaining NumPyPy? Is NumPy (with cpyext) fully supported in 
> PyPy, or are there
> any known compatibility issues?
>
> Regards,
> Florin

Hi Florin

The main direction is to merge the two - we want to support NumPy (via
cpyext) and we want things that are fast in numpypy (array access
predominantly) to be used via numpypy.


Re: [pypy-dev] integration with java

2016-07-14 Thread Maciej Fijalkowski
Hi Andrey

There are no java bindings for PyPy just yet. You would need to write
your own based on cffi. http://jpype.sourceforge.net/ is one example
of how those things can be done, but you would need to use
cffi as opposed to the CPython C API. If you need a quick solution, you
can try compiling jpype against PyPy's CPython C API compatibility
layer, but it would be very slow at best.

Best regards,
Maciej Fijalkowski

On Wed, Jul 13, 2016 at 9:49 PM, Andrey Rubik <tirnota...@gmail.com> wrote:
> Hi guys!
>
> I'am new at pypy development and need help.
>
> I want to integrate pypy and jmeter (a java project:
> http://jmeter.apache.org/).
>
> As I learned on the jmeter dev list, I need some pypy-java bindings. It may be
> some .jar file for example.
>
> Where can I get a pypy-java binding (.jar file)?
>
>
> Thanks in advance!
>
>
> --
> Best Regards,
> Andrey Rubik
>


Re: [pypy-dev] PyParallel-style threads

2016-06-20 Thread Maciej Fijalkowski
no, you misunderstood me:

if you want to use multiple processes, you're not going to start a new
one per task. You'll have a process pool and use that. Also, if you
don't use multiprocessing, you don't use pickling; you use something
sane for communication. PyParallel essentially allows read-only
access to the global state, but read-only is ill defined and ill
enforced (especially in the case of cpy extensions) in Python. So what
do you get as opposed to multiple processes?

On Mon, Jun 20, 2016 at 6:42 PM, Omer Katz  wrote:
> Let's review what forking does in Python from a 10,000ft view:
> 1) It pickles the current state of the process.
> 2) Starts a new Python process
> 3) Unpickles the current state of the process
> There are a lot more memory allocations when forking compared to starting a
> new thread. That makes forking unsuitable for small workloads.
> I'm guessing that PyPy does not save the trace/optimized ASM of the forked
> process in the parent process so each time you start a new process you have
> to trace again which makes small workloads even less suitable and even large
> processing batches will need to be traced again.
>
> In case of pre-forking servers, each PyPy instance has to trace and optimize
> the same code when there is no reason. Threads would allow us to reduce
> warmup time for this case. It will also consume less memory.
>
> ‫בתאריך יום ב׳, 20 ביוני 2016 ב-17:47 מאת ‪Maciej Fijalkowski‬‏
> <‪fij...@gmail.com‬‏>:‬
>>
>> so quick question - what's the win compared to multiple processes?
>>
>> On Mon, Jun 20, 2016 at 8:51 AM, Omer Katz  wrote:
>> > Hi all,
>> > There was an experiment based on CPython's code called PyParallel that
>> > allows running threads in parallel without STM and modifying source code
>> > of
>> > both Python and C extensions. The only limitation is that they disallow
>> > mutation of global state in parallel context.
>> > I briefly mentioned it before on PyPy's freenode channel.
>> > I'd like to discuss why the approach is useful, how it can benefit PyPy
>> > users and how can it be implemented.
>> > Allowing to run in parallel without mutating global state can help
>> > servers
>> > use each thread to handle a request. It can also allow logging in
>> > parallel or
>> > sending an HTTP request (or an AMQP message) without sharing the response
>> > with
>> > the main thread. This is useful in some cases and since PyParallel
>> > managed
>> > to keep the same semantics it (shouldn't) break CPyExt.
>> > If we keep to the following rules:
>> >
>> > No global state mutation is allowed
>> > No new keywords or code modifications required
>> > No CPyExt code is allowed (for now)
>> >
>> > I believe that users can somewhat benefit from this implementation if
>> > done
>> > correctly.
>> > As for implementation, if we can trace the code running in the thread
>> > and
>> > ensure it's not mutating global state and that CPyExt is never used
>> > during
>> > the thread's course we can simply release the GIL when such a thread is
>> > run.
>> > That requires less knowledge than using STM and less code modifications.
>> > However I think that attempting to do so will introduce the same issue
>> > with
>> > caching traces (Armin am I correct here?).
>> >
>> > As for CPyExt, we could copy the same code modifications that
>> > PyParallels
>> > did but I suspect that it will be so slow that the benefit of running in
>> > parallel will be completely lost for all cases but very long threads.
>> >
>> > Is what I'm suggesting even possible? How challenging will it be?
>> >
>> > Thanks,
>> > Omer Katz.
>> >
>> >


Re: [pypy-dev] PyParallel-style threads

2016-06-20 Thread Maciej Fijalkowski
so quick question - what's the win compared to multiple processes?

On Mon, Jun 20, 2016 at 8:51 AM, Omer Katz  wrote:
> Hi all,
> There was an experiment based on CPython's code called PyParallel that
> allows running threads in parallel without STM and modifying source code of
> both Python and C extensions. The only limitation is that they disallow
> mutation of global state in parallel context.
> I briefly mentioned it before on PyPy's freenode channel.
> I'd like to discuss why the approach is useful, how it can benefit PyPy
> users and how can it be implemented.
> Allowing to run in parallel without mutating global state can help servers
> use each thread to handle a request. It can also allow logging in parallel or
> sending an HTTP request (or an AMQP message) without sharing the response with
> the main thread. This is useful in some cases and since PyParallel managed
> to keep the same semantics it (shouldn't) break CPyExt.
> If we keep to the following rules:
>
> No global state mutation is allowed
> No new keywords or code modifications required
> No CPyExt code is allowed (for now)
>
> I believe that users can somewhat benefit from this implementation if done
> correctly.
> As for implementation, if we can trace the code running in the thread and
> ensure it's not mutating global state and that CPyExt is never used during
> the thread's course we can simply release the GIL when such a thread is run.
> That requires less knowledge than using STM and fewer code modifications.
> However I think that attempting to do so will introduce the same issue with
> caching traces (Armin am I correct here?).
>
> As for CPyExt, we could copy the same code modifications that PyParallel
> did but I suspect that it will be so slow that the benefit of running in
> parallel will be completely lost for all cases but very long threads.
>
> Is what I'm suggesting even possible? How challenging will it be?
>
> Thanks,
> Omer Katz.
>


Re: [pypy-dev] asmgcc versus shadowstack

2016-05-30 Thread Maciej Fijalkowski
hi armin

I don't have very deep opinions - but I'm worried about one particular
thing. GCC tends to change its IR with every release; wouldn't parsing
this be a nightmare that has to be updated with each new release
of gcc?

On Mon, May 30, 2016 at 9:18 AM, Armin Rigo  wrote:
> Hi all,
>
> Recently, we've got a few more of the common bug report "cannot find
> gc roots!".  This is caused by asmgcc somehow failing to parse the
> ".s" files produced by gcc on Linux.
>
> I'm investigating what can be done to improve the situation of asmgcc
> in a more definitive way.  There are basically two solutions:
>
>
> 1) we improve shadowstack.  This is the alternative to asmgcc, which
> is used on any non-Linux platform already.  So far it is around 10%
> slower than asmgcc.
>
> 2) we improve asmgcc by finding some better way than parsing assembler files.
>
>
> I worked during the past month in the branch "shadowstack-perf-2".
> This gives a major improvement on the placement of pushing and popping
> GC roots on the shadow stack.  I think it's worth merging that branch
> in any case.  On x86, it gives roughly 3-4% speed improvements; I'd
> guess on arm it is slightly more.  (I'm comparing the performance
> outside JITted machine code; the JITted machine code we produce is
> more similar.)
>
> The problem is that asmgcc used to be ~10% better.  IMHO, 3-4% is not
> quite enough to be happy and kill asmgcc.  Improving beyond these 3-4%
> seems to require some new ideas.
>
>
> So I'm also thinking about ways to fix asmgcc more generally, this
> time focusing on Linux only; asmgcc contains old code that tries to
> parse MSVC output, and I bet we tried with clang at some point, but
> these attempts both failed.  So let's focus on Linux and gcc only.
>
> Asmgcc does two things with the parsed assembler: it computes the
> stack size at every point, and it tracks some marked variables
> backward until the previous "call" instruction.
>
> I think we can assume that the version of gcc is not older than, say,
> the one on tannit32 (Ubuntu 12.04), which is gcc 4.6.  At least from
> that version, both on x86-32 and x86-64, gcc will emit "CFI
> directives" (https://sourceware.org/binutils/docs/as/CFI-directives.html).
> These are a saner way to get the information about the current stack
> size.
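The CFI-directive approach Armin sketches above can be illustrated with a rough parser. This is a hypothetical sketch, not code from the thread: it only handles `.cfi_def_cfa_offset`, while real CFI can also redefine the CFA relative to another register (`.cfi_def_cfa_register`), which a real replacement for asmgcc would need to handle.

```python
import re

# Hypothetical sketch: derive the stack-frame depth at each point of a
# gcc-generated .s file from its CFI directives, instead of decoding
# every push/pop/sub instruction by hand.
CFA_OFFSET = re.compile(r'\s*\.cfi_def_cfa_offset\s+(\d+)')

def stack_sizes(asm_lines, word_size=8):
    """Yield (line, cfa_offset) pairs; the CFA offset tracks how far the
    canonical frame address is above the current stack pointer."""
    offset = word_size            # at entry, only the return address is pushed
    for line in asm_lines:
        m = CFA_OFFSET.match(line)
        if m:
            offset = int(m.group(1))
        yield line, offset

demo = [
    "foo:",
    "\t.cfi_startproc",
    "\tpushq\t%rbp",
    "\t.cfi_def_cfa_offset 16",
    "\tsubq\t$32, %rsp",
    "\t.cfi_def_cfa_offset 48",
    "\tret",
]
print([off for _, off in stack_sizes(demo)])
```

Because gcc emits the directive right after each instruction that moves the stack pointer, the offsets line up with the instructions without any instruction decoding.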
>
> About the backward tracking, we need to have a complete understanding
> of all instructions, even if e.g. for any xmm instruction we just say
> "can't handle GC pointers".  The backward tracking itself is often
> foiled because the assembler is lacking a way to know clearly "this
> call never returns" (e.g. calls to abort(), or to some RPython helper
> that prints stuff and aborts).  In other words, the control flow is
> sometimes hard to get correctly, because a "call" generally returns,
> but not always.  Such mistakes can produce bogus results (including
> "cannot find gc roots!").
>
> What can we do about that?  Maybe we can compile with "-s
> -fdump-final-insns".  This dumps a gcc-specific summary of the RTL,
> which is the final intermediate representation, which looks like it is
> in one-to-one correspondance with the actual assembly.  It would be a
> better input for the backward-tracker, because we don't have to handle
> tons of instructions with unknown effects, and because it contains
> explicit points at which control flow cannot pass.  On the other hand,
> we'd need to parse both the .s and this dump in parallel, matching
> them as we go along.  But I still think it would be better than now.
>
> Of course the best would be to get rid of asmgcc completely...
>
> This mail is meant to be a dump of my current mind's state :-)
>
>
> A bientôt,
>
> Armin.


[pypy-dev] Found on the internet

2016-05-27 Thread Maciej Fijalkowski
Another game boy emulator using RPython https://github.com/Baekalfen/PyBoy


Re: [pypy-dev] Looking into numpy ndarray.flags.writeable

2016-05-20 Thread Maciej Fijalkowski
the option is --withmod-micronumpy or --allworkingmodules

but the tests are in the test directory and *that's* how you should
run tests (not by playing with interactive)

On Fri, May 20, 2016 at 7:44 PM, Eli Stevens (Gmail)
 wrote:
> More questions!  :)
>
> When I run
>
> pypy> /usr/bin/python bin/pyinteractive.py
>
> I get to a (presumably interpreted, given the startup time) pypy
> prompt, but I cannot import numpy. Is the intent that I somehow
> install numpy into the source checkout's site-packages directory (the
> one listed in sys.path from that interpreted pypy prompt)?
>
> Also, it's pretty clear that when running the tests that "import
> numpy" just gets the numpy from the base interpreter, not from the
> micronumpy included in the pypy source. Is it possible to run the
> numpy tests without doing a full translation?
>
> Thanks,
> Eli
>
> On Thu, May 19, 2016 at 1:36 PM, Eli Stevens (Gmail)
>  wrote:
>> Looks like I need to do something along the lines of:
>>
>> def descr_set_writeable(self, space, w_value):
>>     if space.is_true(w_value) != bool(self.flags & NPY.ARRAY_WRITEABLE):
>>         self.flags ^= NPY.ARRAY_WRITEABLE
>>
>> (Though I probably need more robust checking to see if the flag *can*
>> be turned off)
>>
>> def descr_setitem(self, space, w_item, w_value):
>>     # This function already exists, but just contains the last line with the raise
>>     key = space.str_w(w_item)
>>     value = space.bool_w(w_value)
>>     if key == "W" or key == "WRITEABLE":
>>         return self.descr_set_writeable(space, value)
>>     raise oefmt(space.w_KeyError, "Unknown flag")
>>
>> ...
>> writeable = GetSetProperty(W_FlagsObject.descr_get_writeable,
>> W_FlagsObject.descr_set_writeable),
>>
>> However I'm not entirely confident about things like space.bool_w,
>> etc. I've read http://doc.pypy.org/en/latest/objspace.html but am
>> still working on internalizing it.
>>
>> Setting the GetSetProperty still results in the TypeError, which makes
>> me wonder how to tell if I'm getting the right flagsobj.py. I don't
>> think that I am. The results of the tests should be the same no matter
>> what python interpreter I'm using, correct? Would running the tests
>> with a virtualenv that has a stock pypy/numpy installed cause issues?
>> What if the virtualenv is cpython?
>>
>> When I run py.test, I see:
>>
>> pytest-2.5.2 from /Users/elis/edit/play/pypy/pytest.pyc
>>
>> Which looks correct (.../play/pypy is my source checkout). But I get
>> the same thing when using cpython to run test_all.py, and there the
>> test passes, so I don't think it's indicative. When I print out
>> np.__file__ inside the test, I get
>>
>> /Users/elis/venv/droidblue-pypy/site-packages/numpy/__init__.pyc
>>
>> Which is the pypy venv I am using to run the tests in the first place,
>> but I'm not sure what the on-disk relationship between numpy and
>> micronumpy actually is. Is there a way from the test_flagobjs.py file
>> to determine what the on-disk location of micronumpy is?
>>
>> I strongly suspect I've got something basic wrong. I also think that
>> the information at
>> http://doc.pypy.org/en/latest/getting-started-dev.html#running-pypy-s-unit-tests
>> and 
>> http://doc.pypy.org/en/latest/coding-guide.html#command-line-tool-test-all
>> conflict somewhat, or are at least unclear as to which approach is the
>> right way in what situation. I'll attempt to clarify whatever it is
>> that's tripping me up once I've got it sorted out.
>>
>> Some other questions I have, looking at micronumpy/concrete.py line 37:
>>
>> class BaseConcreteArray(object):
>> _immutable_fields_ = ['dtype?', 'storage', 'start', 'size', 'shape[*]',
>>   'strides[*]', 'backstrides[*]', 'order', 
>> 'gcstruct',
>>   'flags']
>> start = 0
>> parent = None
>> flags = 0
>>
>> Does that immutable status cascade down into the objects, or is that
>> saying only that myInstance.flags cannot be reassigned (but
>> myInstance.flags.foo = 3 is fine)?
>>
>> interpreter/typedef.py 221:
>>
>> @specialize.arg(0)
>> def make_objclass_getter(tag, func, cls):
>> if func and hasattr(func, 'im_func'):
>> assert not cls or cls is func.im_class
>> cls = func.im_class
>> return _make_objclass_getter(cls)
>>
>> What's the purpose of the tag argument? It doesn't seem to be used
>> here or in _make_descr_typecheck_wrapper, both of which are called
>> from GetSetProperty init. Based on docstrings on _Specialize, it seems
>> like they might be JIT hints. Is that correct?
>>
>> Matti: If it's okay, I'd like to keep the discussion on the list, as
>> I've actively searched through discussions here to avoid asking
>> questions a second time. Hopefully this thread can help the next
>> person.
>>
>> Sorry for the mega-post; thanks for reading.
>> Eli
>>
>> On Thu, May 19, 2016 at 8:23 AM, Armin Rigo 

Re: [pypy-dev] Forwarding...

2016-05-19 Thread Maciej Fijalkowski
Hi Daniel.

As for kickstarter - it requires you to be american to start with :-P

As for numpy etc. - I can assure you we're working on the support for
those libraries as fast as possible, at the same time looking for
funding through commercial sources.

As for the website modernization - yes, this has to be done at some
point soon (and I started doing steps in that direction), but *that*
sort of things are really difficult to fund :-)

Cheers,
fijal

On Thu, May 19, 2016 at 9:29 PM, Kotrfa <kot...@gmail.com> wrote:
> Thanks for answer Maciej!
>
> I'm glad that this is in progress. It isn't easy to form a picture of
> the situation from what I have found on the web. Your response
> clarifies that a bit. I understand how difficult it can be.
>
> But I disagree with you regarding kickstarter. Pypy is connected to user
> experience. E.g. I am working as datascientists and pypy is running about 3
> times faster on the code I am able to use it on (which is, unfortunately,
> minority - most of it is of course in those 4 libraries which shines red on
> the library support wall - numpy, scipy, pandas, scikit-learn). Similar with
> (py)Spark. I would say there are more data scientists using Python than
> those who likes to use "MicroPython on the ESP8266". The gain this field can
> get from PyPy is quite substantial, even with that conservative estimate
> of about 3 times as fast compared to CPython. And that is just one example.
>
> Of course, I cannot ensure that you might get reasonably funded on
> kickstarter-like sites. But, what can you lose by making a campaign? It
> would be definitely much more visible than on your website, which, to be
> honest, could be a bit modernized as well.  And even if it wouldn't be a
> success, you still get PR basically for free.
>
> I, unfortunately, don't have any insights or recommendation, it just
> scratched my mind.
>
> Thanks for your awesome work,
> Daniel
>
> čt 19. 5. 2016 v 18:12 odesílatel Maciej Fijalkowski <fij...@gmail.com>
> napsal:
>>
>> Hi Daniel.
>>
>> We've done all of the proposed scenarios. We had some success talking
>> to companies, but there is a lot of resistance for various reasons
>> (and the successful proposals I can't talk about), including the
>> inability to pay open source from the engineering budget and instead
>> doing it via the marketing budget (which is orders of magnitude
>> slower). In short - you need to offer them something in exchange,
>> which usually means you need to do a good job, but not good enough (so
>> you can fix it for money). This is a very perverse incentive, but this
>> is how it goes.
>>
>> As for kickstarter - that targets primarily end-user experience and
>> not infrastructure. As such, it's hard to find money from users for
>> infrastructure, because it has relatively few direct users - mostly
>> large companies.
>>
>> As for who is working on this subject - I am. Feel free to get in
>> touch with me via other channels (private mail, gchat, IRC) if you
>> have deeper insights
>>
>> Best regards,
>> Maciej Fijalkowski
>>
>> On Thu, May 19, 2016 at 5:11 PM, Armin Rigo <ar...@tunes.org> wrote:
>> > On 19 May 2016 at 14:58,  <pypy-dev-ow...@python.org> wrote:
>> >> -- Forwarded message --
>> >> From: Daniel Hnyk <hny...@gmail.com>
>> >> To: pypy-dev@python.org
>> >> Cc:
>> >> Date: Thu, 19 May 2016 12:58:36 +
>> >> Subject: Question about funding, again
>> >> Hello,
>> >>
>> >> my question is simple. It strikes me why you don't have more financial
>> >> support, since PyPy might save quite a lot of resources compared to 
>> >> CPython.
>> >> When we witness that e.g. microsoft is able to donate $100k to Jupyter
>> >> (https://ipython.org/microsoft-donation-2013.html), why PyPy, being even
>> >> more generic than Jupyter, has trouble raising a few tens of thousands.
>> >>
>> >> I can find few mentions about this on the internet, but no serious
>> >> article or summary is out there.
>> >>
>> >> Have you tried any of the following?
>> >>
>> >> 1. Trying to get some funding from big companies and organizations such
>> >> as Google, Microsoft, RedHat or some other like Free Software Foundation? 
>> >> If
>> >> not, why not?
>> >> 2. Crowdfunding websites such as Kickstarter or Indiegogo get quite a
>> >> big attention nowadays even for similar projects. There were successful
>> >> campaigns

Re: [pypy-dev] Forwarding...

2016-05-19 Thread Maciej Fijalkowski
Hi Daniel.

We've done all of the proposed scenarios. We had some success talking
to companies, but there is a lot of resistance for various reasons
(and the successful proposals I can't talk about), including the
inability to pay open source from the engineering budget and instead
doing it via the marketing budget (which is orders of magnitude
slower). In short - you need to offer them something in exchange,
which usually means you need to do a good job, but not good enough (so
you can fix it for money). This is a very perverse incentive, but this
is how it goes.

As for kickstarter - that targets primarily end-user experience and
not infrastructure. As such, it's hard to find money from users for
infrastructure, because it has relatively few direct users - mostly
large companies.

As for who is working on this subject - I am. Feel free to get in
touch with me via other channels (private mail, gchat, IRC) if you
have deeper insights

Best regards,
Maciej Fijalkowski

On Thu, May 19, 2016 at 5:11 PM, Armin Rigo <ar...@tunes.org> wrote:
> On 19 May 2016 at 14:58,  <pypy-dev-ow...@python.org> wrote:
>> -- Forwarded message --
>> From: Daniel Hnyk <hny...@gmail.com>
>> To: pypy-dev@python.org
>> Cc:
>> Date: Thu, 19 May 2016 12:58:36 +
>> Subject: Question about funding, again
>> Hello,
>>
>> my question is simple. It strikes me why you don't have more financial 
>> support, since PyPy might save quite a lot of resources compared to CPython. 
>> When we witness that e.g. microsoft is able to donate $100k to Jupyter 
>> (https://ipython.org/microsoft-donation-2013.html), why PyPy, being even 
>> more generic than Jupyter, has trouble raising a few tens of thousands.
>>
>> I can find few mentions about this on the internet, but no serious article 
>> or summary is out there.
>>
>> Have you tried any of the following?
>>
>> 1. Trying to get some funding from big companies and organizations such as 
>> Google, Microsoft, RedHat or some other like Free Software Foundation? If 
>> not, why not?
>> 2. Crowdfunding websites such as Kickstarter or Indiegogo get quite a big
>> attention nowadays even for similar projects. There were successful 
>> campaigns for projects with even smaller target group, such as designers 
>> (https://krita.org/) or video editors (openshot 2). Why haven't you created 
>> a campaign there? Micropython, again, with much smaller target group of 
>> users had got funded as well.
>>
>> Is someone working on this subject? Or is there a general lack of man power 
>> in PyPy's team? Couldn't be someone hired from money already collected?
>>
>> Thanks for an answer,
>> Daniel


Re: [pypy-dev] Game state search dominated by copy.deepcopy

2016-05-16 Thread Maciej Fijalkowski
On Mon, May 16, 2016 at 6:53 PM, Eli Stevens (Gmail)
<wickedg...@gmail.com> wrote:
> On Mon, May 16, 2016 at 9:37 AM, Maciej Fijalkowski <fij...@gmail.com> wrote:
>> I don't think pypy is expected to speed up a program where the majority
>> of the time is spent in copy.deepcopy. The answer to this question is
>> a bit of a boring one: don't write algorithms that copy around so much
>> data.
>
> I hadn't expected the copy time to dominate, but I suspect that's
> because the state object was more complex than I was giving it credit
> for (large sets of tuples all nicely hidden away under an abstraction
> layer, that kind of thing).
>
> I was still surprised about the other 50% of the runtime not
> decreasing much (the actual state transformation part), but I'm not
> particularly concerned with that right now, as I'm in the process of
> restructuring the code to be copy on write (which is turning out to be
> less work than I was originally concerned it would be).
>
>
>> I'm sorry we can't give you a very good answer - if you really *NEED*
>> to do that much copying (maybe you can make the copy more lazy?), then
>> maybe shrinking the data structures would help? I can't answer that
>> without having access to the code though, which I would be happy to
>> look at.
>
> Thank you for the offer. If I still am not seeing results after the
> refactor, I might take you up on that.
>
> Since I don't have IP rights to the game in question, would it be
> acceptable to add you to a private repo on github (should we get to
> that point)?
>
> Thanks,
> Eli

Sure, I don't mind.


Re: [pypy-dev] Game state search dominated by copy.deepcopy

2016-05-16 Thread Maciej Fijalkowski
I don't think pypy is expected to speed up a program where the majority
of the time is spent in copy.deepcopy. The answer to this question is
a bit of a boring one: don't write algorithms that copy around so much
data.

As a general rule - if the vast majority of your work is done by the
runtime there is no way for pypy to speed this up (mostly) and in fact
the exact same algo in C would be as slow or slower.

I'm sorry we can't give you a very good answer - if you really *NEED*
to do that much copying (maybe you can make the copy more lazy?), then
maybe shrinking the data structures would help? I can't answer that
without having access to the code though, which I would be happy to
look at.
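One way to make the copy lazier, as suggested above, is a copy-on-write wrapper around the game state. This is a hypothetical sketch with illustrative names, not code from the thread; a real implementation would track sharers more precisely (e.g. with reference counts) instead of a single flag.

```python
import copy

class CowState:
    """Copy-on-write wrapper: children alias the parent's data until their
    first mutation, so the deepcopy is only paid on branches of the search
    tree that actually change something."""

    def __init__(self, data, shared=False):
        self._data = data
        self._shared = shared

    def branch(self):
        # O(1): the child aliases our data; both sides are now marked shared.
        self._shared = True
        return CowState(self._data, shared=True)

    def set(self, key, value):
        if self._shared:
            # First write after a branch: pay the copy now, once.
            self._data = copy.deepcopy(self._data)
            self._shared = False
        self._data[key] = value

    def get(self, key):
        return self._data[key]

root = CowState({"hp": 10, "units": [1, 2, 3]})
child = root.branch()
child.set("hp", 5)          # triggers the one deferred deepcopy
print(root.get("hp"), child.get("hp"))
```

For a state-space search that explores many branches but mutates few of them, this turns most "copies" into pointer aliasing.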

Cheers,
fijal

On Sat, May 14, 2016 at 1:19 AM, Eli Stevens (Gmail)
 wrote:
> Hello,
>
> I'm in the process of working on a hobby project to have an AI
> searching through a game state space. I recently ran what I have so
> far on pypy (I had been doing initial work on cpython), and got two
> results that were unexpected:
>
> - The total execution time was basically identical between cpython and pypy
> - The runtime on both pythons was about 50% copy.deepcopy (called on
> the main game state object)
>
> The runtime of the script that I've been using is in the 30s to 2m
> range, depending on config details, and the implementation fits the
> following pretty well:
>
> """
>  The JIT is generally good at speeding up straight-forward Python code
> that spends a lot of time in the bytecode dispatch loop, i.e., running
> actual Python code – as opposed to running things that only are
> invoked by Python code. Good examples include numeric calculations or
> any kind of heavily object-oriented program.
> """
>
> I have already pulled out constant game information (static info like
> unit stats, etc.) into an object that isn't copied, and a lot of the
> numerical data that is copied is stored in a numpy array so that I
> don't have hundreds of dicts, ints, etc.
>
> First, is there a good way to speed up object copying? I've tried
> pickling to a cache, and unpickling from there (so as to only pickle
> once), but that didn't make a significant difference.
>
> http://vmprof.com/#/905dfb71d28626bff6341a5848deae73 (deepcopy)
> http://vmprof.com/#/545f1243b345eb9e41d73a9043a85efd (pickle)
>
> Second, what's the best way to start figuring out why pypy isn't able
> to outperform cpython on my program?
>
> Thanks for any pointers,
> Eli


Re: [pypy-dev] PyPy & Django: recommended mySQL module?

2016-05-09 Thread Maciej Fijalkowski
On Sun, May 8, 2016 at 1:31 PM, Yury V. Zaytsev <y...@shurup.com> wrote:
> On Sun, 8 May 2016, Maciej Fijalkowski wrote:
>
>> For the record - if you are using django ORM, then the mysql binding
>> is unlikely to be your bottleneck for accessing the DB.
>
>
> Yep, that's what I'm using it for... good news.
>

No, that's bad news: Django ORM is terrible; no one should be using it
for anything that requires performance ;-)


Re: [pypy-dev] PyPy & Django: recommended mySQL module?

2016-05-08 Thread Maciej Fijalkowski
On Sun, May 8, 2016 at 11:07 AM, Yury V. Zaytsev <y...@shurup.com> wrote:
> Hi Maciej,
>
> Thanks for the feedback!
>
> On Sun, 8 May 2016, Maciej Fijalkowski wrote:
>>
>>
>> I would personally use something cffi-based, like this:
>> https://github.com/andrewsmedina/mysql-cffi
>
>
> Well, neither this seems to be supported by Django, nor it looks like it's
> actively maintained... :-/
>
>> Generally speaking all cpyext-based solutions will be slower (although
>> these days I know MySQL-Python should indeed just work and not crash) than
>> non-cpyext based solutions but depending on the application you might or
>> might not care.
>
>
> Okay, so I started out with CPython and I'll keep an eye on the load; the
> plan is to bring up a second upstream to experiment with PyPy and see if
> this is getting me anywhere. I will then either try the pure Python fork of
> MySQL-Python, or the original and see if that's really the bottleneck.

For the record - if you are using django ORM, then the mysql binding
is unlikely to be your bottleneck for accessing the DB.

>
> On a related note, has anyone tried binary wheels with PyPy, are they known
> to work? Among other dependencies I have is for example Pillow; so far I've
> been building binary wheels on a dedicated development server and deploying
> them to the application server which doesn't have any complier
> infrastructure installed. Will this simply work for PyPy?

It should.

>
>
> --
> Sincerely yours,
> Yury V. Zaytsev


Re: [pypy-dev] PyPy & Django: recommended mySQL module?

2016-05-08 Thread Maciej Fijalkowski
Hi Yury

Sorry for the late response, on holiday.

I would personally use something cffi-based, like this:
https://github.com/andrewsmedina/mysql-cffi

Generally speaking all cpyext-based solutions will be slower (although
these days I know MySQL-Python should indeed just work and not crash)
than non-cpyext based solutions but depending on the application you
might or might not care.

Cheers,
fijal

On Tue, May 3, 2016 at 7:34 PM, Yury V. Zaytsev  wrote:
> Hi,
>
> I'm thinking of trying PyPy on a Python 2.7 application based on Django,
> and I'm wondering what mySQL module would be the recommended one these
> days to use for the ORM backend?
>
> I understand that the lastest version MySQL-Python should just work,
> although will be going through CPyExt, so I'm not sure how stable it's
> gonna be / what's the performance going to look like.
>
> It seems that there is also a pure Python connector called PyMySQL,
> which should work out of the box and not rely on CPyExt.
>
> Has anyone done any recent benchmarks and/or can give me a hint which of
> the two is currently the way to go?
>
> Many thanks!
>
> --
> Sincerely yours,
> Yury V. Zaytsev
>
>


Re: [pypy-dev] Error Messages

2016-04-27 Thread Maciej Fijalkowski
Hi Armin

Any chance we can add this to cpython-differences? Couldn't find it
last time (and misremembered)

On Tue, Apr 26, 2016 at 11:38 PM, Armin Rigo  wrote:
> Hi,
>
> On 26 April 2016 at 19:02, Carl Friedrich Bolz  wrote:
>> Those are simple cases, of course we use the same exception types there.
>> However, if you write exactly the wrong obscure code you sometimes get a
>> different exception type under some conditions.
>
> That's about TypeError versus AttributeError for some attributes of
> built-in types in corner cases.  That's not about ValueError.  For example:
>
> >>> def f(): pass
> >>> del f.__closure__
>
> You get a TypeError in CPython, but an AttributeError in PyPy.  That's
> because PyPy doesn't have the same zoo of built-in attribute types of
> CPython.  For most purposes it doesn't make a difference, but it turns
> out that in this case half of these types raise AttributeError and the
> other half raises TypeError in CPython.
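Armin's example can be demonstrated with a tiny script. This is an illustrative sketch: the thread dates from the Python 2 era, and the exact exception class for deleting a read-only function attribute has varied across interpreters and versions, so the code catches both candidates rather than asserting one.

```python
# __closure__ is a read-only attribute of function objects, so deleting
# it must fail -- but the exception class differs between interpreters
# (the thread reports TypeError on CPython vs AttributeError on PyPy).

def f():
    pass

try:
    del f.__closure__
except (TypeError, AttributeError) as e:
    kind = type(e).__name__

print(kind)
```

Running this under both interpreters is an easy way to spot which side of the "zoo of built-in attribute types" you are on.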
>
>
> A bientôt,
>
> Armin.


Re: [pypy-dev] Error Messages

2016-04-26 Thread Maciej Fijalkowski
Not in *those* cases, but there are cases where CPython would raise
different error classes depending, e.g., on whether something is a
builtin function or not, or where the CPython errors are simply
inconsistent. In those cases the exact error might be a bit different
(I must admit I don't remember examples offhand)

On Tue, Apr 26, 2016 at 6:38 PM, Steven D'Aprano <st...@pearwood.info> wrote:
> On Tue, Apr 26, 2016 at 06:16:39PM +0200, Maciej Fijalkowski wrote:
>
>> Typically the exception type is the same, but there is a bunch of
>> differences, especially around ValueError vs TypeError; no one should
>> rely on that anyway
>
> Do you mean that PyPy might change ValueError to TypeError, or vice
> versa? Like this:
>
> # TypeError in CPython
> len(None)
> => raises ValueError
>
> # ValueError in CPython
> [].index("a")
> => raises TypeError
>
> That doesn't sound good to me.
>
> If I have misunderstood you, and you're just talking about the error
> strings, then that's fine.
>
>
>
> --
> Steve


Re: [pypy-dev] Error Messages

2016-04-26 Thread Maciej Fijalkowski
On Tue, Apr 26, 2016 at 5:30 PM, Steven D'Aprano  wrote:
> On Tue, Apr 26, 2016 at 10:16:29AM -0500, Ryan Gonzalez wrote:
>> I personally think it's fine:
>>
>> 1. CPython has pretty decent error messages. Other than long stack traces
>> with recursion errors, or maybe column offsets, there isn't really anything
>> that could be significantly improved.
>
> Well, I don't know about that... some of the error messages are a bit
> uninformative ("SyntaxError: invalid syntax", although it may be
> difficult to do much about that one). But potential improvements
> should be taken on a case-by-case basis.
>
>
>> 2. Changing the actual errors would likely break...a *lot*!
>
> Do you mean the exception types? Yes, changing the exception types would
> break a lot of code. But changing the error messages shouldn't break
> anything, since the error messages are not part of the function API.
> They've changed before, and they will change again.

Typically the exception type is the same, but there is a bunch of
differences, especially around ValueError vs TypeError; no one should
rely on that anyway


Re: [pypy-dev] PyPY 5.1 release is close, help needed

2016-04-20 Thread Maciej Fijalkowski
can we stress the memory/warmup improvements more? jit-leaner-frontend
makes the warmup quite a bit faster; additionally, we improved memory
use (both should be on the order of 20% compared to 5.0)

On Wed, Apr 20, 2016 at 6:29 AM, Matti Picus  wrote:
> We are close to releasing PyPy 5.1. The builds of version 3260adbeba4a look
> clean, and the release notice is up-to-date.
>
> I need help:
> - please check that the targz bundles of release 3260adbeba4a, available
> here
>   http://buildbot.pypy.org/nightly/release-5.x
>   are usable on your platform
>
> - Could someone with a large upload bandwidth run the repackages script
>   pypy/tool/release/repackage.sh
>   and then upload the renamed packages to bitbucket
>
> - Please review the release notice
>   http://doc.pypy.org/en/latest/release-5.1.0.html
>
> Thanks
> Matti
>


Re: [pypy-dev] Shift and typed array

2016-04-04 Thread Maciej Fijalkowski
right, so there is no way to get wrap-around arithmetic in CPython
without modifications

On Mon, Apr 4, 2016 at 3:48 PM, Tuom Larsen <tuom.lar...@gmail.com> wrote:
> I would like to avoid numpy, if possible. Even if it might be bundled
> with PyPy (is it?) I still would like the code to run in CPython with
> no dependencies.
>
> On Mon, Apr 4, 2016 at 3:45 PM, Maciej Fijalkowski <fij...@gmail.com> wrote:
>> so numpy's int64 will give you wrap-around arithmetic. What else are you
>> looking for? :-)
>>
>> On Mon, Apr 4, 2016 at 3:38 PM, Tuom Larsen <tuom.lar...@gmail.com> wrote:
>>> You mean I should first store the result into numpy's `int64`, and
>>> then to `array.array`? Like:
>>>
>>> x = int64(2**63 << 1)
>>> a[0] = x
>>>
>>> Or:
>>>
>>> x = int64(2**63)
>>> x[0] = x << 1
>>>
>>> What the "real types" goes, is this the only option?
>>>
>>> Thanks in any case!
>>>
>>>
>>> On Mon, Apr 4, 2016 at 3:32 PM, Maciej Fijalkowski <fij...@gmail.com> wrote:
>>>> one option would be to use integers from _numpypy module:
>>>>
>>>> from numpy import int64 after installing numpy.
>>>>
>>>> There are obscure ways to get it without installing numpy. Another
>>>> avenue would be to use __pypy__.intop.int_mul etc.
>>>>
>>>> Feel free to complain "no, I want real types that I can work with" :-)
>>>>
>>>> Cheers,
>>>> fijal
>>>>
>>>> On Mon, Apr 4, 2016 at 3:10 PM, Tuom Larsen <tuom.lar...@gmail.com> wrote:
>>>>> Hello!
>>>>>
>>>>> Suppose I'm on a 64-bit machine and there is an `a = array.array('L',
>>>>> [0])` (item size is 8 bytes). In Python, when an integer does not fit
>>>>> machine width it gets promoted to "long" integer of arbitrary size. So
>>>>> this will fail:
>>>>>
>>>>> a[0] = 2**63 << 1
>>>>>
>>>>> To fix this, one could instead write:
>>>>>
>>>>> a[0] = (2**63 << 1) & (2**64 - 1)
>>>>>
>>>>> My question is, when I know that the result will be stored in
>>>>> `array.array` anyway, how to prevent the promotion to long integers?
>>>>> What is the most performant way to perform such calculations? Is PyPy
>>>>> able to optimize away that `& (2**64 - 1)` when I use `'L'` typecode?
>>>>>
>>>>> I mean, in C I wouldn't have to worry about it as everything above the
>>>>> 63rd bit will be simply cut off. I would like to help PyPy to generate
>>>>> the best possible code, does anyone have some suggestions please?
>>>>>
>>>>> Thanks!
>>>>> ___
>>>>> pypy-dev mailing list
>>>>> pypy-dev@python.org
>>>>> https://mail.python.org/mailman/listinfo/pypy-dev
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


Re: [pypy-dev] Shift and typed array

2016-04-04 Thread Maciej Fijalkowski
but if you don't actually overflow, the cost is really low, btw

On Mon, Apr 4, 2016 at 3:49 PM, Maciej Fijalkowski <fij...@gmail.com> wrote:
> right, so there is no way to get wrap-around arithmetic in CPython
> without modifications
>
> On Mon, Apr 4, 2016 at 3:48 PM, Tuom Larsen <tuom.lar...@gmail.com> wrote:
>> I would like to avoid numpy, if possible. Even if it might be bundled
>> with PyPy (is it?) I still would like the code to run in CPython with
>> no dependencies.
>>
>> On Mon, Apr 4, 2016 at 3:45 PM, Maciej Fijalkowski <fij...@gmail.com> wrote:
>>> so numpy's int64 will give you wrap-around arithmetic. What else are you
>>> looking for? :-)
>>>
>>> On Mon, Apr 4, 2016 at 3:38 PM, Tuom Larsen <tuom.lar...@gmail.com> wrote:
>>>> You mean I should first store the result into numpy's `int64`, and
>>>> then to `array.array`? Like:
>>>>
>>>> x = int64(2**63 << 1)
>>>> a[0] = x
>>>>
>>>> Or:
>>>>
>>>> x = int64(2**63)
>>>> a[0] = x << 1
>>>>
>>>> As far as the "real types" go, is this the only option?
>>>>
>>>> Thanks in any case!
>>>>
>>>>
>>>> On Mon, Apr 4, 2016 at 3:32 PM, Maciej Fijalkowski <fij...@gmail.com> 
>>>> wrote:
>>>>> one option would be to use integers from _numpypy module:
>>>>>
>>>>> from numpy import int64 after installing numpy.
>>>>>
>>>>> There are obscure ways to get it without installing numpy. Another
>>>>> avenue would be to use __pypy__.intop.int_mul etc.
>>>>>
>>>>> Feel free to complain "no, I want real types that I can work with" :-)
>>>>>
>>>>> Cheers,
>>>>> fijal
>>>>>
>>>>> On Mon, Apr 4, 2016 at 3:10 PM, Tuom Larsen <tuom.lar...@gmail.com> wrote:
>>>>>> Hello!
>>>>>>
>>>>>> Suppose I'm on a 64-bit machine and there is an `a = array.array('L',
>>>>>> [0])` (item size is 8 bytes). In Python, when an integer does not fit
>>>>>> machine width it gets promoted to "long" integer of arbitrary size. So
>>>>>> this will fail:
>>>>>>
>>>>>> a[0] = 2**63 << 1
>>>>>>
>>>>>> To fix this, one could instead write:
>>>>>>
>>>>>> a[0] = (2**63 << 1) & (2**64 - 1)
>>>>>>
>>>>>> My question is, when I know that the result will be stored in
>>>>>> `array.array` anyway, how to prevent the promotion to long integers?
>>>>>> What is the most performant way to perform such calculations? Is PyPy
>>>>>> able to optimize away that `& (2**64 - 1)` when I use `'L'` typecode?
>>>>>>
>>>>>> I mean, in C I wouldn't have to worry about it as everything above the
>>>>>> 63rd bit will be simply cut off. I would like to help PyPy to generate
>>>>>> the best possible code, does anyone have some suggestions please?
>>>>>>
>>>>>> Thanks!
>>>>>> ___
>>>>>> pypy-dev mailing list
>>>>>> pypy-dev@python.org
>>>>>> https://mail.python.org/mailman/listinfo/pypy-dev
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


Re: [pypy-dev] Shift and typed array

2016-04-04 Thread Maciej Fijalkowski
so numpy's int64 will give you wrap-around arithmetic. What else are you
looking for? :-)

On Mon, Apr 4, 2016 at 3:38 PM, Tuom Larsen <tuom.lar...@gmail.com> wrote:
> You mean I should first store the result into numpy's `int64`, and
> then to `array.array`? Like:
>
> x = int64(2**63 << 1)
> a[0] = x
>
> Or:
>
> x = int64(2**63)
> a[0] = x << 1
>
> As far as the "real types" go, is this the only option?
>
> Thanks in any case!
>
>
> On Mon, Apr 4, 2016 at 3:32 PM, Maciej Fijalkowski <fij...@gmail.com> wrote:
>> one option would be to use integers from _numpypy module:
>>
>> from numpy import int64 after installing numpy.
>>
>> There are obscure ways to get it without installing numpy. Another
>> avenue would be to use __pypy__.intop.int_mul etc.
>>
>> Feel free to complain "no, I want real types that I can work with" :-)
>>
>> Cheers,
>> fijal
>>
>> On Mon, Apr 4, 2016 at 3:10 PM, Tuom Larsen <tuom.lar...@gmail.com> wrote:
>>> Hello!
>>>
>>> Suppose I'm on a 64-bit machine and there is an `a = array.array('L',
>>> [0])` (item size is 8 bytes). In Python, when an integer does not fit
>>> machine width it gets promoted to "long" integer of arbitrary size. So
>>> this will fail:
>>>
>>> a[0] = 2**63 << 1
>>>
>>> To fix this, one could instead write:
>>>
>>> a[0] = (2**63 << 1) & (2**64 - 1)
>>>
>>> My question is, when I know that the result will be stored in
>>> `array.array` anyway, how to prevent the promotion to long integers?
>>> What is the most performant way to perform such calculations? Is PyPy
>>> able to optimize away that `& (2**64 - 1)` when I use `'L'` typecode?
>>>
>>> I mean, in C I wouldn't have to worry about it as everything above the
>>> 63rd bit will be simply cut off. I would like to help PyPy to generate
>>> the best possible code, does anyone have some suggestions please?
>>>
>>> Thanks!
>>> ___
>>> pypy-dev mailing list
>>> pypy-dev@python.org
>>> https://mail.python.org/mailman/listinfo/pypy-dev
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


Re: [pypy-dev] Shift and typed array

2016-04-04 Thread Maciej Fijalkowski
one option would be to use integers from _numpypy module:

from numpy import int64 after installing numpy.

There are obscure ways to get it without installing numpy. Another
avenue would be to use __pypy__.intop.int_mul etc.

Feel free to complain "no, I want real types that I can work with" :-)

Cheers,
fijal

On Mon, Apr 4, 2016 at 3:10 PM, Tuom Larsen  wrote:
> Hello!
>
> Suppose I'm on a 64-bit machine and there is an `a = array.array('L',
> [0])` (item size is 8 bytes). In Python, when an integer does not fit
> machine width it gets promoted to "long" integer of arbitrary size. So
> this will fail:
>
> a[0] = 2**63 << 1
>
> To fix this, one could instead write:
>
> a[0] = (2**63 << 1) & (2**64 - 1)
>
> My question is, when I know that the result will be stored in
> `array.array` anyway, how to prevent the promotion to long integers?
> What is the most performant way to perform such calculations? Is PyPy
> able to optimize away that `& (2**64 - 1)` when I use `'L'` typecode?
>
> I mean, in C I wouldn't have to worry about it as everything above the
> 63rd bit will be simply cut off. I would like to help PyPy to generate
> the best possible code, does anyone have some suggestions please?
>
> Thanks!
> ___
> pypy-dev mailing list
> pypy-dev@python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev
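The masking approach discussed in this thread can be sketched in plain Python. This example uses the `'Q'` typecode (always an unsigned 64-bit item) rather than the platform-dependent `'L'`, so it behaves the same on 32- and 64-bit builds:

```python
from array import array

MASK64 = (1 << 64) - 1  # keep only the low 64 bits, like C unsigned overflow

a = array('Q', [0])  # 'Q' is guaranteed to be an unsigned 64-bit item

x = 2**63
# Without masking, (x << 1) == 2**64 would not fit the array item and
# the assignment would raise OverflowError.
a[0] = (x << 1) & MASK64
print(a[0])  # 0, just like a C uint64_t shifted past its width

# Masking after each operation emulates wrap-around arithmetic:
a[0] = ((2**63 + 1) << 1) & MASK64
print(a[0])  # 2
```

Whether PyPy elides the `& MASK64` when the result flows into a typed array is exactly the open question of the thread; in CPython the mask is always computed explicitly.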


Re: [pypy-dev] [ANN] Python compilers workshop at SciPy this year

2016-03-24 Thread Maciej Fijalkowski
Hi David

I'm sorry, it was not supposed to come across as rude.

It seems that the blocker here is full numpy support, which we're
working on right now; we can come back to that discussion once that's
ready.

On Thu, Mar 24, 2016 at 6:31 PM, David Edelsohn <dje@gmail.com> wrote:
> Maciej,
>
> How about a little more useful response of "we'll help you find the
> right audience for this discussion and collaborate with you to make
> the case."?
>
> - David
>
> On Thu, Mar 24, 2016 at 11:32 AM, Maciej Fijalkowski <fij...@gmail.com> wrote:
>> Ok fine, but we're not the recipients of such a message.
>>
>> Please lobby PSF for having a JIT, we all support that :-)
>>
>> On Thu, Mar 24, 2016 at 5:23 PM, John Camara <john.m.cam...@gmail.com> wrote:
>>> Hi Fijal,
>>>
>>> I understand where you're coming from and am not trying to convince you to work
>>> on it.  Just mainly trying to point out a need that may not be obvious to
>>> this community.  I don't spend much time on big data and analytics so I
>>> don't have a lot of time to devote to this task.  That could change in the
>>> future so you never know I may end up getting involved with this.
>>>
>>> At the end of the day I think it is the PSF, which needs to do an honest
>>> assessment of the current state of Python and in programming in general, so
>>> that they can help direct the future of Python.  I think with an honest
>>> assessment it should be clear that it is absolutely necessary that a dynamic
>>> language have a JIT. Otherwise, a language like Node would not be growing so
>>> quickly on the server side.  An honest assessment would conclude that Python
>>> needs to play a major role in big data and analytics as we don't want this
>>> to be another area where Python misses the boat.  As with all languages
>>> other than JavaScript we missed playing an important role on web front end.
>>> More recently we missed out on mobile.  I don't think it is good for us to
>>> miss out on big data.  It would be a shame since we had such a strong
>>> scientific community which initially gave us a huge advantage over other
>>> communities.  Missing out on big data might also be the driver that moves
>>> the scientific community in a different direction which would be a big loss
>>> to Python.
>>>
>>> I personally don't see any particular companies or industries that are
>>> willing to fund the tasks needed to solve these issues.  It's not to say
>>> there are no more funds for Python projects its just likely no one company
>>> will be willing to fund these kinds of projects on their own.  It really
>>> needs the PSF to coordinate these efforts but they seamed to be more focus
>>> on trying to make Python 3 a success instead of improving the overall health
>>> of the community.
>>>
>>> I believe that Python is in pretty good shape in being able to solve these
>>> issues but it just needs some funding and focus to get there.
>>>
>>> Hopefully the workshop will be successful and help create some focus.
>>>
>>> John
>>>
>>> On Thu, Mar 24, 2016 at 8:56 AM, Maciej Fijalkowski <fij...@gmail.com>
>>> wrote:
>>>>
>>>> Hi John
>>>>
>>>> Thanks for explaining the current situation of the ecosystem. I'm not
>>>> quite sure what your intention is. PyPy (and CPython) is very easy to
>>>> embed through any C-level API, especially with the latest additions to
>>>> cffi embedding. If someone feels like doing the work to share stuff
>>>> that way (as I presume a lot of data presented in JVM can be
>>>> represented as some pointer and shape how to access it), then he's
>>>> obviously more than free to do so, I'm even willing to help with that.
>>>> Now this seems like a medium-to-big size project that additionally
>>>> will require quite a bit of community will to endorse. Are you willing
>>>> to volunteer to work on such a project and dedicate a lot of time to
>>>> it? If not, then there is no way you can convince us to volunteer our
>>>> own time to do it - it's just too big and quite a bit far out of our
>>>> usual areas of interest. If there is some commercial interest (and I
>>>> think there might be) in pushing python and especially pypy further in
>>>> that area, we might want to have a better story for numpy first, but
>>>> then feel free to send those corporate interest people my way, we can
>>>>

Re: [pypy-dev] [ANN] Python compilers workshop at SciPy this year

2016-03-24 Thread Maciej Fijalkowski
Ok fine, but we're not the recipients of such a message.

Please lobby PSF for having a JIT, we all support that :-)

On Thu, Mar 24, 2016 at 5:23 PM, John Camara <john.m.cam...@gmail.com> wrote:
> Hi Fijal,
>
> I understand where you're coming from and am not trying to convince you to work
> on it.  Just mainly trying to point out a need that may not be obvious to
> this community.  I don't spend much time on big data and analytics so I
> don't have a lot of time to devote to this task.  That could change in the
> future so you never know I may end up getting involved with this.
>
> At the end of the day I think it is the PSF, which needs to do an honest
> assessment of the current state of Python and in programming in general, so
> that they can help direct the future of Python.  I think with an honest
> assessment it should be clear that it is absolutely necessary that a dynamic
> language have a JIT. Otherwise, a language like Node would not be growing so
> quickly on the server side.  An honest assessment would conclude that Python
> needs to play a major role in big data and analytics as we don't want this
> to be another area where Python misses the boat.  As with all languages
> other than JavaScript we missed playing an important role on web front end.
> More recently we missed out on mobile.  I don't think it is good for us to
> miss out on big data.  It would be a shame since we had such a strong
> scientific community which initially gave us a huge advantage over other
> communities.  Missing out on big data might also be the driver that moves
> the scientific community in a different direction which would be a big loss
> to Python.
>
> I personally don't see any particular companies or industries that are
> willing to fund the tasks needed to solve these issues.  It's not to say
> there are no more funds for Python projects; it's just likely that no one company
> will be willing to fund these kinds of projects on their own.  It really
> needs the PSF to coordinate these efforts, but they seem to be more focused
> on trying to make Python 3 a success instead of improving the overall health
> of the community.
>
> I believe that Python is in pretty good shape in being able to solve these
> issues but it just needs some funding and focus to get there.
>
> Hopefully the workshop will be successful and help create some focus.
>
> John
>
> On Thu, Mar 24, 2016 at 8:56 AM, Maciej Fijalkowski <fij...@gmail.com>
> wrote:
>>
>> Hi John
>>
>> Thanks for explaining the current situation of the ecosystem. I'm not
>> quite sure what your intention is. PyPy (and CPython) is very easy to
>> embed through any C-level API, especially with the latest additions to
>> cffi embedding. If someone feels like doing the work to share stuff
>> that way (as I presume a lot of data presented in JVM can be
>> represented as some pointer and shape how to access it), then he's
>> obviously more than free to do so, I'm even willing to help with that.
>> Now this seems like a medium-to-big size project that additionally
>> will require quite a bit of community will to endorse. Are you willing
>> to volunteer to work on such a project and dedicate a lot of time to
>> it? If not, then there is no way you can convince us to volunteer our
>> own time to do it - it's just too big and quite a bit far out of our
>> usual areas of interest. If there is some commercial interest (and I
>> think there might be) in pushing python and especially pypy further in
>> that area, we might want to have a better story for numpy first, but
>> then feel free to send those corporate interest people my way, we can
>> maybe organize something. If you want us to do community service to
>> push Python solutions in the area I have very little clue about
>> however, I would like to politely decline.
>>
>> Cheers,
>> fijal
>>
>> On Thu, Mar 24, 2016 at 2:22 PM, John Camara <john.m.cam...@gmail.com>
>> wrote:
>> > Besides JPype and PyJNIus there is also https://www.py4j.org/.  I
>> > haven't
>> > heard of JPype being used in any recent projects, so I assume it is
>> > outdated by now.  PyJNIus gets used but I tend to only see it used on
>> > Android projects.  The Py4J project gets used often in
>> > numerical/scientific
>> > projects mainly due to its use in PySpark.  The problem with all these
>> > libraries is that they don't have a way to share large amounts of memory
>> > between the JVM and Python VMs and so large chunks of data have to be
>> > copied/serialized when going between the 2 VMs.
>> >
>> > Spark is the de facto standard in clustering co

Re: [pypy-dev] [ANN] Python compilers workshop at SciPy this year

2016-03-24 Thread Maciej Fijalkowski
Hi John

Thanks for explaining the current situation of the ecosystem. I'm not
quite sure what your intention is. PyPy (and CPython) is very easy to
embed through any C-level API, especially with the latest additions to
cffi embedding. If someone feels like doing the work to share stuff
that way (as I presume a lot of data presented in JVM can be
represented as some pointer and shape how to access it), then he's
obviously more than free to do so, I'm even willing to help with that.
Now this seems like a medium-to-big size project that additionally
will require quite a bit of community will to endorse. Are you willing
to volunteer to work on such a project and dedicate a lot of time to
it? If not, then there is no way you can convince us to volunteer our
own time to do it - it's just too big and quite a bit far out of our
usual areas of interest. If there is some commercial interest (and I
think there might be) in pushing python and especially pypy further in
that area, we might want to have a better story for numpy first, but
then feel free to send those corporate interest people my way, we can
maybe organize something. If you want us to do community service to
push Python solutions in the area I have very little clue about
however, I would like to politely decline.

Cheers,
fijal

On Thu, Mar 24, 2016 at 2:22 PM, John Camara  wrote:
> Besides JPype and PyJNIus there is also https://www.py4j.org/.  I haven't
> heard of JPype being used in any recent projects, so I assume it is
> outdated by now.  PyJNIus gets used but I tend to only see it used on
> Android projects.  The Py4J project gets used often in numerical/scientific
> projects mainly due to its use in PySpark.  The problem with all these
> libraries is that they don't have a way to share large amounts of memory
> between the JVM and Python VMs and so large chunks of data have to be
> copied/serialized when going between the 2 VMs.
>
> Spark is the de facto standard in cluster computing at this point in
> time.  At a high level Spark executes code that is distributed throughout a
> cluster so that the code being executed is as close as possible to where the
> data lives so as to minimize transferring of large amounts of data.  The
> code that needs to be executed are packaged up into units called Resilient
> Distributed Dataset (RDD).  RDDs are lazily evaluated and are essentially graphs
> of the operations that need to be performed on the data.  They are capable
> of reading data from many types of sources, outputting to multiple types of
> sources, containing the code that needs to be executed, and are also
> responsible for caching or keeping results in memory for future RDDs that
> may be executed.
>
> If you write all your code in Java or Scala, its execution will be performed
> in JVMs distributed in the cluster.  On the other hand, Spark does not limit
> its use to only Java based languages so Python can be used.  In the case of
> Python the PySpark library is used.  When Python is used, the PySpark
> library can be used to define the RDDs that will be executed under the JVM.
> In this scenario, only if required, the final results of the calculations
> will end up being passed to Python.  I say only if necessary as its possible
> the end results may just be left in memory or to create an output such as an
> hdfs file in hadoop and does not need to be transferred to Python. Under
> this scenario the code is written in Python but effectively all the "real"
> work is performed under the JVM.
>
> Often someone writing Python is also going to want to perform some of the
> operations under Python.  This can be done as the RDDs that are created can
> contain both operations that get performed under the JVM as well as Python
> (and of course other languages are supported).  When Python is involved
> Spark will start up Python VMs on the required nodes so that the Python
> portions of the work can be performed.  The Python VMs can either be
> CPython, PyPy or even a mix of both CPython and PyPy.  The downside to using
> non Java languages is the overhead of passing data between the JVM and the
> Python VM as the memory is not shared between the processes but instead
> copied/serialized between them.
>
> Because this data is copied between the 2 VMs, anyone who writes Python code
> for this environment always has to be conscious of the data being copied
> between the processes so as to not let the amount of the extra overhead
> become a large burden.  Quite often the goal will be to first perform the
> bulk of the operations under the JVM and then hopefully only a smaller
> subset of the data will have to be processed under Python.  If this can be
> done then the overhead can be minimized and then there is essential no down
> sides to using Python in the pipeline of operations.
>
> If you're unfortunate and need to perform some of the processing early in the
> pipeline under Python, and worse yet if there is a need to go back and 
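The copy/serialization overhead described above can be illustrated without Spark at all: `pickle` round-tripping a large list is roughly what happens when data crosses the JVM/Python boundary in PySpark, whereas truly shared memory would cost nothing per crossing. A standalone sketch:

```python
import pickle
import time

# A moderately large dataset; in PySpark, data moving between the JVM
# and a Python worker is serialized much like this rather than shared.
data = list(range(1_000_000))

start = time.perf_counter()
blob = pickle.dumps(data)        # serialize (one side of the boundary)
restored = pickle.loads(blob)    # deserialize (the other side)
elapsed = time.perf_counter() - start

assert restored == data          # same values, but a full copy was made
print(f"round-tripped {len(blob)} bytes in {elapsed:.3f}s")
```

Every boundary crossing in a pipeline pays this cost again, which is why the advice in the thread is to keep as much of the pipeline as possible on one side.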

Re: [pypy-dev] [Jython-dev] [ANN] Python compilers workshop at SciPy this year (fwd)

2016-03-23 Thread Maciej Fijalkowski
We're probably sending myself and Matti

On Thu, Mar 24, 2016 at 12:05 AM, Laura Creighton  wrote:
> This from the Jython mailing list.  Are we sending somebody?  It's the
> first I heard about it, at any rate.
>
> Laura
>
> --- Forwarded Message
>
> Return-Path: 
> Received: from lists.sourceforge.net (lists.sourceforge.net [216.34.181.88])
> From: Nathaniel Smith 
> To: jython-...@lists.sourceforge.net
> Subject: [Jython-dev] [ANN] Python compilers workshop at SciPy this year
>
> Hi Jython folks,
>
> I wanted to give a heads-up to a workshop I'm organizing at SciPy this
> year that might be of interest to you:
>
> What: A two-day workshop bringing together folks working on JIT/AOT
> compilation in Python.
>
> When/where: July 11-12, in Austin, Texas.
>
> (This is co-located with SciPy 2016, at the same time as the tutorial
> sessions, just before the conference proper.)
>
> Website: https://python-compilers-workshop.github.io/
>
> Note that I anticipate that we'll be able to get sponsorship funding
> to cover travel costs for folks who can't get their employers to foot
> the bill.
>
> Cheers,
> - -n
>
> - --
> Nathaniel J. Smith -- https://vorpus.org
>
> - 
> --
> Transform Data into Opportunity.
> Accelerate data analysis in your applications with
> Intel Data Analytics Acceleration Library.
> Click to learn more.
> http://pubads.g.doubleclick.net/gampad/clk?id=278785351=/4140
> ___
> Jython-dev mailing list
> jython-...@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/jython-dev
>
> --- End of Forwarded Message
> ___
> pypy-dev mailing list
> pypy-dev@python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


Re: [pypy-dev] [ANN] Python compilers workshop at SciPy this year

2016-03-23 Thread Maciej Fijalkowski
Hi John

I understand why you're bringing this up, but it's a huge project on
its own, worth at least a couple of months' worth of work. Without a
dedicated effort from someone, I'm worried it would not go anywhere.
It's kind of separated from the other goals of the summit

On Wed, Mar 23, 2016 at 8:16 PM, John Camara  wrote:
> Hi Nathaniel,
>
> I would like to suggest one more topic for the workshop. I see a big need
> for a library (jffi) similar to cffi but that provides a bridge to Java
> instead of C code. The ability to seamlessly work with native Java data/code
> would offer a huge improvement when python code needs to work with the
> Spark/Hadoop ecosystem. The current mechanisms which involve serializing
> data to/from Java can kill performance for some applications and can render
> Python unsuitable for these cases.
>
> John
>
> ___
> pypy-dev mailing list
> pypy-dev@python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
>
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


Re: [pypy-dev] GSoC: Updates on ByteArray?

2016-03-22 Thread Maciej Fijalkowski
Hi Brian

bytearray should be optimized for cases where you e.g. write() it to a
file or use readinto() in a way that does not make any copies. The same
applies if you, say, convert it from ffi.buffer etc. That's probably what's
missing to make it fast

On Tue, Mar 22, 2016 at 8:56 PM, Brian Guo  wrote:
> Hi,
>
>My name is Brian Guo and I am currently an undergraduate at Cornell
> University. I am very interested in working with PyPy as part of Google's
> Summer of Code. In particular, I am interested in working on the bytearray
> project. I noticed that the current status of the ByteArray project is
> unknown, but that there may be updates on the mailing list. I am wondering
> if there is any information I may be able to read on this project, or
> possibly an overview of the project itself and the proposed changes that
> would make bytearray faster (if any have been proposed yet). I am very
> grateful to anyone who is able to point me in the right direction in regards
> to this project.
>
> Thank you all for your time,
>
> -Brian Guo
>
> ___
> pypy-dev mailing list
> pypy-dev@python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
>
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev
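The copy-free pattern mentioned above — filling a preallocated `bytearray` with `readinto()` instead of allocating a fresh `bytes` object per read — looks like this in plain Python (an in-memory `BytesIO` stands in for a real file):

```python
import io

stream = io.BytesIO(b"hello world")

buf = bytearray(5)        # preallocated once, reused for every read
n = stream.readinto(buf)  # fills buf in place; no new bytes object
print(n, bytes(buf))      # 5 b'hello'

# Reuse the same buffer for the next chunk.
stream.seek(6)
n = stream.readinto(buf)
print(n, bytes(buf))      # 5 b'world'
```

The point of the optimization Maciej describes is that the JIT and the I/O layer can cooperate so the bytes land directly in the buffer's storage, with no intermediate copies.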


Re: [pypy-dev] PyPy Ubuntu PPA + a cpyext question

2016-03-21 Thread Maciej Fijalkowski
PPA is usually updated, but as you said we can't demand deadlines

PyByteArray_Check and PyByteArray_CheckExact are not implemented

On Mon, Mar 21, 2016 at 3:43 AM, Tin Tvrtković  wrote:
> Hello,
>
> first question: is the PyPy Ubuntu PPA still a maintained thing? I'm not
> demanding free labor here, just curious whether I should wait a little for
> 5.0 to show up there or change my Dockerfiles to direct download.
>
> second question: does PyPy support PyByteArray_CheckExact? I seem to have
> some Cython-generated code using it and PyPy seems to be refusing to import
> the resulting module.
>
> Cheers!
>
> ___
> pypy-dev mailing list
> pypy-dev@python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
>
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


Re: [pypy-dev] setting attribute of JitHookInterface instance

2016-03-19 Thread Maciej Fijalkowski
It's general. You can do whatever you like before runtime (during
import time, for example) as long as the world presented to RPython is
static enough - in other words, Python is a meta-programming language
for RPython

On Wed, Mar 16, 2016 at 1:34 PM, Magnus Morton  wrote:
> Hi Armin,
>
> Thanks for looking into this. Is this pre-translation code a general thing 
> possible with any RPython based compiler, or is it very PyPy specific?
>
> Cheers,
> Magnus
>
>> On 16 Mar 2016, at 08:45, Armin Rigo  wrote:
>>
>> Hi Magnus,
>>
>> On 16 March 2016 at 01:37, Magnus Morton  wrote:
>>> You can recreate it in PyPy by putting the following two lines pretty much 
>>> anywhere in interpreter level code other than the 
>>> setup_after_space_initialization methods
>>>
>>> from pypy.module.pypyjit.hooks import pypy_hooks
>>> pypy_hooks.foo = “foo”
>>>
>>> What I can’t understand is what is special about the 
>>> setup_after_space_initialization methods that makes it work there.
>>
>> Reproduced and figured it out.  Added some docs in eda9fd6a0601:
>>
>> +# WARNING: You should make a single prebuilt instance of a subclass
>> +# of this class.  You can, before translation, initialize some
>> +# attributes on this instance, and then read or change these
>> +# attributes inside the methods of the subclass.  But this prebuilt
>> +# instance *must not* be seen during the normal annotation/rtyping
>> +# of the program!  A line like ``pypy_hooks.foo = ...`` must not
>> +# appear inside your interpreter's RPython code.
>>
>> In PyPy, setup_after_space_initialization() is not RPython (which means
>> it is executed before translation).
>>
>>
>> A bientôt,
>>
>> Armin.
>
> ___
> pypy-dev mailing list
> pypy-dev@python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev
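The prebuilt-instance pattern described here can be sketched in plain Python (class and attribute names below are hypothetical; real RPython code additionally goes through annotation and translation, after which the instance's attributes are frozen):

```python
class JitHooks:
    """A prebuilt instance whose attributes are fixed before 'translation'."""

    def on_compile_loop(self):
        # Runtime (translated) code only *reads* attributes that were
        # already set on the prebuilt instance.
        return self.prefix + "loop compiled"


# Import time plays the role of "before translation": mutation is fine here.
hooks = JitHooks()
hooks.prefix = "jit: "

# After translation, only reads/calls like this would be allowed; a line
# such as `hooks.foo = ...` inside RPython code is exactly the error
# Magnus hit.
print(hooks.on_compile_loop())  # jit: loop compiled
```

This mirrors the warning Armin added in eda9fd6a0601: configure the instance while plain Python is still running, then never mutate it from the code the annotator sees.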


Re: [pypy-dev] Refcount garbage collector build error

2016-03-19 Thread Maciej Fijalkowski
Hi Florin

The refcount garbage collector is only marginally supported (as far as
our tests go), it's definitely neither tested nor really supported
when translated, it was always very slow for example. (and as you
noticed, there is no support for weakrefs for example)

On Fri, Mar 18, 2016 at 9:57 AM, Papa, Florin  wrote:
> Hi all,
>
>
>
> This is Florin Papa from the Dynamic Scripting Languages Team at Intel
> Corporation.
>
>
>
> I am trying to build pypy to use the refcount garbage collector, for testing
> purposes. I am following the indications here [1], but the following command
> fails:
>
>
>
> pypy ../../rpython/bin/rpython -O2 --gc=ref targetpypystandalone
>
>
>
> with the error:
>
>
>
> [translation:ERROR] OpErrFmt: [ 0x89a68a8>: No module named _weakref]
>
>
>
> When I run pypy in interactive mode, “import _weakref” works fine. I
> encounter the same error if I try to use python to run the rpython script.
> Is the refcount garbage collector still supported?
>
>
>
> [1] http://doc.pypy.org/en/latest/config/translation.gc.html
>
>
>
> Regards,
>
> Florin
>
>
> ___
> pypy-dev mailing list
> pypy-dev@python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
>
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


Re: [pypy-dev] Interest in contributing to PYPY

2016-03-09 Thread Maciej Fijalkowski
Hi!

Good to hear from you :-)

Any chance you can pop in to IRC, so we can discuss the project?
Alternatively you can catch me on gmail on this address

Best regards,
Maciej Fijalkowski

On Wed, Mar 9, 2016 at 3:12 PM, Djimeli Konrad <djkonr...@gmail.com> wrote:
> Hello,
>
> My name is Djimeli Konrad a second year computer science student from the
> University of Buea, Cameroon. I am proficient in c, c++, javascript and
> python. I would like to contribute to PYPY for the Google Summer of Code
> 2016. I am interested in working on the project "Improving the jitviewer". I
> have previous experience developing Django/Python applications (
> https://github.com/MCQuizzer/mcquizzer/graphs/contributors ),  VRML-STL
> parser hosted on github ( https://github.com/djkonro/vrml-stl ) and other
> project (  https://github.com/djkonro ). I would like to work on this
> project within and beyond GSoC, as I have always sought such a
> project ever since I learned Python and web application development. I would
> like to get some pointers to a starting point that could give me a better
> understanding of the project.
>
> Thanks
> Konrad
>
> ___
> pypy-dev mailing list
> pypy-dev@python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
>
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev

