On Tue, 03 Jan 2017, Stephan Hoyer wrote:
> >> testing on stable debian box with elderly numpy, where it does behave
> >> sensibly:
> >> $> python -c "import numpy; print('numpy version: ', numpy.__version__);
> >> a=2; b=-2; print(pow(a,b)); print(pow(numpy.array(a), b))"
> >> ('numpy version:
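For context, the behavior under discussion can be sketched as follows (a minimal example; exact numpy behavior depends on version — recent numpy raises an error for integer arrays raised to negative integer powers, which is what the merged PR mentioned below introduced):

```python
import numpy as np

a, b = 2, -2

# Pure Python: int ** negative int returns a float
py_result = pow(a, b)
print(py_result)  # 0.25

# numpy integer array to a negative integer power: behavior changed over
# time -- recent numpy raises ValueError rather than returning an
# integer (or silently wrong) result.
try:
    print(np.power(np.array(a), b))
except ValueError as exc:
    print("raises:", exc)

# Casting to float first gives the mathematically expected answer on
# any numpy version.
float_result = np.power(np.array(a, dtype=float), b)
print(float_result)  # 0.25
```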
On Tue, 03 Jan 2017, Stephan Hoyer wrote:
>On Tue, Jan 3, 2017 at 9:00 AM, Yaroslav Halchenko <li...@onerussian.com>
>wrote:
> Sorry for coming too late to the discussion and after PR "addressing"
> the issue by issuing an error was merg
On Tue, 11 Oct 2016, Peter Creasey wrote:
> >> I agree with Sebastian and Nathaniel. I don't think we can deviate from
> >> the existing behavior (int ** int -> int) without breaking lots of existing
> >> code, and if we did, yes, we would need a new integer power function.
> >> I think it's
I have no stats on whether anyone is looking at
http://yarikoptic.github.io/numpy-vbench besides me at times, so I
might just be crying into the wild:
I have moved the running of numpy-vbench to a somewhat newer/more powerful box,
and that is why benchmark results are being re-estimated (thus you might
On Mon, 07 Apr 2014, Sturla Molden wrote:
so I would assume that the devil is indeed in R post-processing and would
look
into it (if/when get a chance).
I tried to look into the R source code. It's the worst mess I have ever
seen. I couldn't even find their Mersenne twister.
it is in
Hi NumPy gurus,
We wanted to test some of our code by comparing to the results of an R
implementation which provides bootstrapped results.
R, Python std library, numpy all have Mersenne Twister RNG implementation. But
all of them generate different numbers. This issue was previously discussed in
On Sun, 06 Apr 2014, Sturla Molden wrote:
R, Python std library, numpy all have Mersenne Twister RNG implementation.
But all of them generate different numbers. This issue was previously
discussed in https://github.com/numpy/numpy/issues/4530 : In Python, and numpy generated
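The mismatch is easy to demonstrate: both CPython's `random` and `numpy.random` use MT19937, but their seeding schemes (and the way 53-bit doubles are assembled) differ, so the same seed yields different streams:

```python
import random
import numpy as np

# Feed the same seed into each MT19937 implementation...
random.seed(42)
np.random.seed(42)

py_val = random.random()     # CPython stdlib Mersenne Twister
np_val = np.random.random()  # numpy's Mersenne Twister

# ...the streams do not match, even though the core algorithm is the same.
print(py_val, np_val, py_val == np_val)
```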
On Mon, 25 Nov 2013, Fernando Perez wrote:
ok -- since no negative feedback received -- submitted as is. I will
let you know when it gets rejected or accepted.
Let me know if it's accepted: I'll be keynoting at PyCon'14, and since my
focus will obviously be scientific
On Sun, 24 Nov 2013, Nathaniel Smith wrote:
On this positive note (it is boring to start a new thread, isn't it?) --
would you be interested in me transferring numpy-vbench over to
github.com/numpy ?
If you mean just moving the existing git repo under the numpy
organization, like
On Tue, 15 Oct 2013, Nathaniel Smith wrote:
What do you have to lose?
btw -- fresh results are here http://yarikoptic.github.io/numpy-vbench/ .
I have tuned benchmarking so it now reflects the best performance across
multiple executions of the whole battery, thus eliminating spurious
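Taking the best (minimum) time across repeated runs is a standard way to suppress timing noise; a minimal sketch of the idea using the stdlib (not the actual numpy-vbench code):

```python
import timeit

# Run the whole statement battery several times and keep the minimum:
# the best-of-N time filters out scheduler and cache interference,
# which is what makes spurious slowdowns disappear from the plots.
times = timeit.repeat(stmt="sum(range(1000))", number=10_000, repeat=5)
best = min(times)
print(f"best of {len(times)} runs: {best:.6f} s")
```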
Hi Guys,
PyCon 2014 will be just around the corner from where I am, so I decided
to attend. Being lazy (or busy) I haven't submitted any big talk but am thinking
of submitting a few lightning talks (just 5 min and a 400-character abstract limit),
and I think it might be worth letting people know about my
On Tue, 15 Oct 2013, Nathaniel Smith wrote:
and I think it might be worth letting people know about my little project.
I would really appreciate your sincere feedback (e.g. "not worth it" would
be valuable too). Here is the title/abstract
numpy-vbench -- speed benchmarks for NumPy
ok -- since no negative feedback received -- submitted as is. I will
let you know when it gets rejected or accepted.
cheers,
On Fri, 06 Sep 2013, Daπid wrote:
some old ones are
still there, some might be specific to my CPU here
How long does one run take? Maybe I can run it in my machine (Intel i5)
for comparison.
In current configuration where I target benchmark run to around 200ms
(thus possibly
On Fri, 06 Sep 2013, josef.p...@gmail.com wrote:
On Fri, Sep 6, 2013 at 3:21 PM, Yaroslav Halchenko li...@onerussian.com
wrote:
FWIW -- updated runs of the benchmarks are available at
http://yarikoptic.github.io/numpy-vbench which now include also
maintenance/1.8.x branch (no divergences were detected yet). There are
only recent improvements as I see and no new (but some old ones are
still there, some might be specific to
I am glad to announce that now you can see benchmark timing plots for
multiple branches, thus being able to spot regressions in maintenance
branches and compare enhancements in relation to previous releases.
e.g.
* improving upon 1.7.x but still lagging behind 1.6.x
http://www.onerussian.com/tmp/numpy-vbench/vb_vb_core.html#numpy-ones-100
Cheers,
On Fri, 19 Jul 2013, Yaroslav Halchenko wrote:
I have just added a few more benchmarks, and here they come
http://www.onerussian.com/tmp/numpy-vbench/vb_vb_linalg.html#numpy-linalg-pinv-a-float32
it seems to be very recent so
On Wed, 24 Jul 2013, Pauli Virtanen wrote:
How about splitting doc/sphinxext out from the main Numpy repository to
a separate `numpydoc` repo under Numpy project?
+1
It's a separate Python package, after all. Moreover, this would make it
easier to use it as a git submodule (e.g. in
On Mon, 22 Jul 2013, Benjamin Root wrote:
At some point I hope to tune up the report with an option of viewing the
plot using e.g. nvd3 JS so it could be easier to pin point/analyze
interactively.
shameless plug... the soon-to-be-finalized matplotlib-1.3 has a WebAgg
On Fri, 19 Jul 2013, Warren Weckesser wrote:
Well, this is embarrassing: https://github.com/numpy/numpy/pull/3539
Thanks for benchmarks! I'm now an even bigger fan. :)
Great to see that those came of help! I thought to provide more
detail (benchmarking all recent commits) to provide
On Thu, 18 Jul 2013, Charles R Harris wrote:
yeah... That is how I thought it was working, but I guess it was left
without asanyarraying for additional flexibility/performance so any
array-like object could be used, not just ndarray derived classes.
Speaking of which, there
Hi everyone,
Some of my elderly code stopped working upon upgrades of numpy and
upcoming pandas: https://github.com/pydata/pandas/issues/4290 so I have
looked at the code of
def mean(a, axis=None, dtype=None, out=None, keepdims=False):
    ...
    Parameters
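The relevant detail of that code path: np.mean does not coerce non-ndarray inputs via asanyarray; it delegates to the object's own .mean method, which is why pandas objects take their own path. A minimal illustration (the class name is made up for the sketch):

```python
import numpy as np

class WithMean:
    """Hypothetical array-like whose .mean numpy calls directly."""
    def mean(self, axis=None, dtype=None, out=None):
        return 42.0

# For non-ndarray inputs, np.mean tries a.mean(...) first instead of
# converting the argument to an ndarray.
result = np.mean(WithMean())
print(result)  # 42.0
```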
On Thu, 18 Jul 2013, Skipper Seabold wrote:
Not sure anyways if my direct numpy.mean application to pandas DataFrame is
kosher -- initially I just assumed that any argument is asanyarray'ed first
-- but I think here catching TypeError for those incompatible .mean's
detected performance hit, but in some cases
seems still to reasonably locate commits hitting on performance.
Enjoy,
On Tue, 09 Jul 2013, Yaroslav Halchenko wrote:
Julian Taylor contributed some benchmarks he was concerned about, so
now the collection is even better.
I will keep updating tests
http://www.onerussian.com/tmp/numpy-vbench/vb_vb_reduce.html#numpy-any-fast
Enjoy
On Mon, 01 Jul 2013, Yaroslav Halchenko wrote:
FWIW -- updated plots with contribution from Julian Taylor
http://www.onerussian.com/tmp/numpy-vbench-20130701/vb_vb_indexing.html#mmap-slicing
;-)
On Mon, 01 Jul
Hi Guys,
not quite the recommendations you expressed, but here is my ugly
attempt to improve benchmarks coverage:
http://www.onerussian.com/tmp/numpy-vbench-20130701/index.html
initially I also ran those ufunc benchmarks per each dtype separately,
but then the resulting webpage is too long, which
On Mon, 01 Jul 2013, Yaroslav Halchenko wrote:
Hi Guys,
not quite the recommendations you expressed, but here is my ugly
attempt to improve
On Wed, 01 May 2013, Sebastian Berg wrote:
btw -- is there something like panda's vbench for numpy? i.e. where
it would be possible to track/visualize such performance
improvements/hits?
Sorry if it seemed harsh, but I only skimmed mails and it seemed a bit
like an obvious piece was
On Mon, 06 May 2013, Sebastian Berg wrote:
if you care to tune it up/extend it, then I could fire it up again on
that box (which doesn't do anything else ATM AFAIK). Since the majority
of the time is spent actually building it (did it with ccache though) it
would be neat if you come up with
On Wed, 01 May 2013, Nathaniel Smith wrote:
Thanks everyone for the feedback.
Is it worth me starting a bisection to catch where it was introduced?
Is it a bug, or just typical fp rounding issues? Do we know which answer
is correct?
to ignorant me, even without considering
On Wed, 01 May 2013, Nathaniel Smith wrote:
not sure there is anything to fix here. Third-party code relying on a
certain outcome of rounding error is likely incorrect anyway.
Yeah, seems to just be the standard floating point indeterminism.
Using Matthew's numbers and pure Python floats:
, 2013 at 6:24 PM, Matthew Brett matthew.br...@gmail.com wrote:
HI,
On Wed, May 1, 2013 at 9:09 AM, Yaroslav Halchenko li...@onerussian.com
wrote:
3. they are identical on other architectures (e.g. amd64)
To me that is surprising. I would have guessed that the order is the
same on 32
On Wed, 01 May 2013, Sebastian Berg wrote:
There really is no point discussing here, this has to do with numpy
doing iteration order optimization, and you actually *want* this. Lets
for a second assume that the old behavior was better, then the next guy
is going to ask: Why is
On Thu, 06 Sep 2012, Aron Ahmadia wrote:
Are you running the valgrind test with the Python suppression
file: [1]http://svn.python.org/projects/python/trunk/Misc/valgrind-python.supp
yes -- on Debian there is /usr/lib/valgrind/python.supp which comes
with python package and I believe
Recently Sandro uploaded 1.7.0b1 into Debian experimental so I decided to see
if this bleeding edge version doesn't break some of its dependees... Below is
a copy of
is a, a[:4][:3].base.base is a'
1.6.2:                True False True
1.7.0rc1.dev-ea23de8: True True False
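What changed is whose array a view-of-a-view records as its .base. On numpy since the 1.7-era commit discussed in this thread, the chain is collapsed, which can be checked directly (results shown are for numpy >= 1.7):

```python
import numpy as np

a = np.arange(10)
v = a[:4][:3]  # a view of a view

# The base chain is collapsed: the inner view's .base is the original
# array, not the intermediate a[:4] view, so intermediate views can be
# garbage-collected.
print(v.base is a)       # True on numpy >= 1.7
print(v.base.base is a)  # False: v.base is already the original array
```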
On Wed, 05 Sep 2012, Yaroslav Halchenko wrote:
pymvpa2_2.1.0-1.dscok FAILED
http://www.onerussian.com/Linux/deb/logs/python-numpy_1.7.0~b1-1_amd64.testrdepends.debian-sid
/yoh/python-env/numpy/bin/python)
On Wed, 05 Sep 2012, Nathaniel Smith wrote:
It is an intentional change:
https://github.com/numpy/numpy/commit/b7cc20ad#L5R77
but the benefits aren't necessarily *that* compelling, so it could
certainly be revisited if there are unforeseen downsides. (Mostly it
means that intermediate view
cases separately.
--
=--=
Keep in touch www.onerussian.com
Yaroslav Halchenko www.ohloh.net/accounts/yarikoptic
___
NumPy-Discussion
Dear NumPy People,
First I want to apologize if I misbehaved on NumPy Trac by reopening the
closed ticket
http://projects.scipy.org/numpy/ticket/1362
but I still feel strongly that there is misunderstanding
and the bug/defect is valid. I would appreciate if someone would waste
more of his time
On Thu, 14 Jan 2010, josef.p...@gmail.com wrote:
It looks difficult to construct an object array with only 1 element,
since a tuple is interpreted as different array elements.
yeap
It looks like some convention is necessary for interpreting a tuple in
the array construction, but it doesn't
Hi Warren,
The problem is that the tuple is converted to an array in the
statement that does the comparison, not in the construction of the
array. Numpy attempts
to convert the right hand side of the == operator into an array.
It then does the comparison using the two arrays.
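The conversion described above can be seen directly: when the right-hand side of == is a tuple, numpy turns it into an array first, so even a 1-element object array gets broadcast against it (a minimal sketch):

```python
import numpy as np

# Build a 1-element object array holding a tuple; assigning after
# creation avoids the tuple being unpacked into separate elements.
arr = np.empty(1, dtype=object)
arr[0] = (1, 2)

# The RHS tuple is converted to array([1, 2]) before comparing, so the
# (1,)-shaped object array broadcasts against shape (2,).
result = arr == (1, 2)
print(result)  # elementwise comparison, not "is arr[0] this tuple?"

# Comparing the Python object itself behaves as expected.
print(arr[0] == (1, 2))  # True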
Thanks for