+1 as well.
--
Olivier
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion
2016-04-22 20:17 GMT+02:00 Matthew Brett :
>
> The github releases idea sounds intriguing. Do you have any
> experience with that? Are there good examples other than the API
> documentation?
>
> https://developer.github.com/v3/repos/releases/
I never used it but I
2016-04-20 16:57 GMT+02:00 Matthew Brett <matthew.br...@gmail.com>:
> On Wed, Apr 20, 2016 at 1:59 AM, Olivier Grisel
> <olivier.gri...@ensta.org> wrote:
>> Thanks,
>>
>> I think next we could upgrade the travis configuration of numpy and
>> scipy to bui
Thanks,
I think next we could upgrade the travis configuration of numpy and
scipy to build and upload manylinux1 wheels to
http://travis-dev-wheels.scipy.org/ for downstream project to test
against the master branch of numpy and scipy without having to build
those from source.
However that
Thanks for the clarification, I read your original report too quickly.
I wonder why the travis maintainers built Python 2.7 with a
non-standard unicode option.
Edit (after googling): this is a known issue. The image with Python
2.7.11 will be fixed:
I tried on trusty and it also picked
numpy-1.11.0-cp27-cp27mu-manylinux1_x86_64.whl using the system python
2.7 (in a virtualenv with pip 8.1.1):
>>> import pip
>>> pip.pep425tags.get_abi_tag()
'cp27mu'
Outside of the virtualenv I still have the pip version from ubuntu
trusty and it cannot
\o/
Thank you very much Matthew. I will upload the scikit-learn wheels soon.
--
Olivier
I updated the issue:
https://github.com/xianyi/OpenBLAS-CI/issues/10#issuecomment-206195714
The random test_nanmedian_all_axis failure is unrelated to openblas
and should be ignored.
--
Olivier
Yes sorry I forgot to update the thread. Actually I am no longer sure
how I got this error. I am re-running the full test suite because I
cannot reproduce it when running the test_stats.py module alone.
--
Olivier
2016-04-05 19:44 GMT+02:00 Nathaniel Smith :
>
>> I propose to hold off distributing the OpenBLAS wheels until the
>> OpenBLAS tests are clean on the OpenBLAS buildbots - any objections?
>
> Alternatively, would it make sense to add a local patch to our openblas
> builds to
> Xianyi, the maintainer of OpenBLAS, is very helpfully running the
> OpenBLAS buildbot nightly tests with numpy and scipy:
>
> http://build.openblas.net/builders
>
> There is still one BLAS-related failure on these tests on AMD chips:
>
> https://github.com/xianyi/OpenBLAS-CI/issues/10
>
> I
while we
could not achieve similar results with atlas 3.10.
--
Olivier Grisel
typo:
python -m install --upgrade pip
should read:
python -m pip install --upgrade pip
--
Olivier
The problem with the gfortran failures will be tackled by renaming the
vendored libgfortran.so library, see:
https://github.com/pypa/auditwheel/issues/24
This is orthogonal to the ATLAS vs OpenBLAS decision though.
--
Olivier
has set up a
buildbot based CI to test OpenBLAS on many CPU architectures and is
running the scipy tests continuously to detect regressions early on:
https://github.com/xianyi/OpenBLAS/issues/785
http://build.openblas.net/waterfall
https://github.com/xianyi/OpenBLAS-CI/
--
Olivier Grisel
Thanks Matthew! I just installed it and ran the tests and it all works
(except for test_system_info.py that fails because I am missing a
vcvarsall.bat on that system but this is expected).
--
Olivier
I used docker to run the numpy tests on base/archlinux. I had to
pacman -Sy python-pip openssl and gcc (required by one of the numpy
tests):
```
Ran 5621 tests in 34.482s
OK (KNOWNFAIL=4, SKIP=9)
```
Everything looks fine.
--
Olivier
Also note that all scipy tests pass:
Ran 20180 tests in 366.163s
OK (KNOWNFAIL=97, SKIP=1657)
--
Olivier Grisel
Note that the above segfault was found in a VM (docker-machine
virtualbox guest VM launched on an OSX host). The DYNAMIC_ARCH feature
of OpenBLAS detects a Sandybridge core (using
https://gist.github.com/ogrisel/ad4e547a32d0eb18b4ff).
Here are the flags of the CPU visible from inside the docker
mingw-w64 w.r.t. VS2015 but as
far as I know it's not supported yet either. Once the issue is fixed at the
upstream level, I think mingwpy could be rebuilt to benefit from the fix.
--
Olivier Grisel
2015-07-10 20:20 GMT+02:00 Carl Kleffner cmkleff...@gmail.com:
I could provide you with a debug build of libopenblaspy.dll. The segfault -
if thrown from openblas - could be detected with gdb or with the help of
backtrace.dll.
That would be great thanks. Also can you give the build options /
2015-07-10 22:13 GMT+02:00 Carl Kleffner cmkleff...@gmail.com:
2015-07-10 19:06 GMT+02:00 Olivier Grisel olivier.gri...@ensta.org:
2015-07-10 16:47 GMT+02:00 Carl Kleffner cmkleff...@gmail.com:
Hi Olivier,
yes, this is all explained in
https://github.com/xianyi/OpenBLAS/wiki/Faq
Good news,
The segfaults on the scikit-learn and scipy test suites are caused by a bug
in openblas core type detection: setting the OPENBLAS_CORETYPE
environment variable to Nehalem can make the test suite complete
without any failure for scikit-learn.
I will update my gist with the new test results
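For reference, a minimal sketch (mine, not from the thread) of running such a reproducer with a forced core type. The variable must be set before NumPy is imported, hence the fresh subprocess:

```python
import os
import subprocess
import sys

# Force OpenBLAS kernel selection; it is read at library load time, so it
# has to be in the environment of a fresh interpreter, not set after import.
env = dict(os.environ, OPENBLAS_CORETYPE="Nehalem")
proc = subprocess.run(
    [sys.executable, "-c",
     "import numpy as np; print(np.linalg.svd(np.ones((129, 129)))[1][0])"],
    env=env, capture_output=True, text=True,
)
# The largest singular value of the all-ones 129x129 matrix is 129.
print(proc.stdout.strip())
```

If NumPy is not linked against OpenBLAS, the variable is simply ignored and the command still runs.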
I narrowed down the segfault from the scipy tests on my machine to:
OPENBLAS_CORETYPE='Barcelona' /c/Python34_x64/python -c "import numpy
as np; print(np.linalg.svd(np.ones((129, 129), dtype=np.float64)))"
Barcelona is the architecture detected by OpenBLAS. If I force Nehalem
or if I reduce the
Hi Carl,
Sorry for the slow reply.
I ran some tests with your binstar packages:
I installed numpy, scipy and mingwpy for Python 2.7 32 bit and Python
3.4 64 bit (downloaded from python.org) on a freshly provisioned
windows VM on rackspace.
I then used the mingwpy C/C++ compilers to build the
I have updated my gist with more test reports when
OPENBLAS_CORETYPE=Nehalem is set as an environment variable.
Note that on this machine, OpenBLAS detects the Barcelona core type.
I used the following ctypes based script to introspect the OpenBLAS
runtime:
2015-07-10 18:42 GMT+02:00 Olivier Grisel olivier.gri...@ensta.org:
I assume you've already checked that this is a Windows specific issue?
I am starting a rackspace VM with linux to check. Hopefully it will
also be detected as Barcelona by openblas.
I just built OpenBLAS 0.2.14 and numpy
2015-07-10 18:31 GMT+02:00 Nathaniel Smith n...@pobox.com:
On Jul 10, 2015 10:51 AM, Olivier Grisel olivier.gri...@ensta.org wrote:
I narrowed down the segfault from the scipy tests on my machine to:
OPENBLAS_CORETYPE='Barcelona' /c/Python34_x64/python -c "import numpy
as np; print
2015-07-10 16:47 GMT+02:00 Carl Kleffner cmkleff...@gmail.com:
Hi Olivier,
yes, this is all explained in
https://github.com/xianyi/OpenBLAS/wiki/Faq#choose_target_dynamic as well.
This seems to be necessary for CI systems, right?
The auto detection should work. If not it's a bug and we
Hi Carl,
Could you please provide some details on how you used your
mingw-static toolchain to build OpenBLAS, numpy and scipy? I would like
to replicate but apparently the default Makefile in the openblas
projects expects unix commands such as `uname` and `perl` that are not
part of your archive.
+1 for bundling OpenBLAS both in scipy and numpy in the short term.
Introducing a new dependency project for OpenBLAS sounds like a good
idea but this is probably more work.
--
Olivier
2015-01-23 9:25 GMT+01:00 Carl Kleffner cmkleff...@gmail.com:
All tests for the 64bit builds passed.
Thanks very much Carl. Did you have to patch the numpy / distutils
source to build those wheels or is this using the source code from
the official releases?
--
Olivier
2014-07-31 22:40 GMT+02:00 Matthew Brett matthew.br...@gmail.com:
Sure, I built and uploaded:
scipy-0.12.0 py27
scipy-0.13.0 py27, 33, 34
Are there any others you need?
Thanks, this is already great.
--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
2014-07-31 0:52 GMT+02:00 Matthew Brett matthew.br...@gmail.com:
Hi,
I took the liberty of uploading OSX wheels for some older numpy
versions to pypi. These can be useful for testing, or when building
your own wheels to be compatible with earlier numpy versions - see:
2014-07-29 14:24 GMT+02:00 Colin J. Williams c...@ncf.ca:
This version of Numpy does not appear to be available as an installable
binary. In any event, the LAPACK and other packages do not seem to be
available with the installable versions.
I understand that Visual Studio 2008 is
2014-07-28 15:25 GMT+02:00 Carl Kleffner cmkleff...@gmail.com:
Hi,
on https://bitbucket.org/carlkl/mingw-w64-for-python/downloads I uploaded
7z-archives for mingw-w64 and for OpenBLAS-0.2.10 for 32 bit and for 64 bit.
To use mingw-w64 for Python >= 3.3 you have to manually tweak the so called
The dtype returned by np.where looks right (int64):
>>> import platform
>>> platform.architecture()
('64bit', 'WindowsPE')
>>> import numpy as np
>>> np.__version__
'1.9.0b1'
>>> a = np.zeros(10)
>>> np.where(a == 0)
(array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=int64),)
--
Olivier
2014-07-13 19:05 GMT+02:00 Alexander Belopolsky ndar...@mac.com:
On Sat, Jul 12, 2014 at 8:02 PM, Nathaniel Smith n...@pobox.com wrote:
I feel like for most purposes, what we *really* want is a variable length
string dtype (i.e., where each element can be a different length).
I've been
2014-07-10 0:53 GMT+02:00 Robert McGibbon rmcgi...@gmail.com:
This is an awesome resource for tons of projects.
Thanks.
FYI here is the PR for sklearn to use AppVeyor CI:
https://github.com/scikit-learn/scikit-learn/pull/3363
It's slightly different from the minimalistic sample I wrote for
Feodor updated the AppVeyor nodes to have the Windows SDK matching
MSVC 2008 Express for Python 2. I have updated my sample scripts and
we now have a working example of a free CI system for:
Python 2 and 3 both for 32 and 64 bit architectures.
https://github.com/ogrisel/python-appveyor-demo
Hi!
I gave AppVeyor a try this weekend so as to build a minimalistic Python 3
project with a Cython extension. It works both with 32 and 64 bit
MSVC++ and can generate wheel packages. See:
https://github.com/ogrisel/python-appveyor-demo
However 2008 is not (yet) installed so it cannot be used
Hi Matthew and Ralf,
Has anyone managed to build working whl packages for numpy and scipy
on win32 using the static mingw-w64 toolchain?
--
Olivier
Hi Carl,
All the items you suggest would be very appreciated. Don't hesitate to
ping me if you need me to test new packages.
Also the sklearn project has a free Rackspace Cloud account that
Matthew is already using to make travis upload OSX wheels for the
master branch of various scipy stack
Just successfully tested on Python 3.4 from python.org / OSX 10.9 and
all sklearn tests pass, including a test that involves
multiprocessing and that used to crash with Accelerate.
Thanks very much!
--
Olivier
2014-06-09 14:53 GMT+02:00 Sturla Molden sturla.mol...@gmail.com:
I see an Anaconda user reports Anaconda is affected, but Anaconda is
linked with MKL as well (or used to be?)
Not necessarily. Only if you buy the MKL optimization package:
https://store.continuum.io/cshop/mkl-optimizations/
2014-06-09 15:51 GMT+02:00 Carl Kleffner cmkleff...@gmail.com:
The free windows conda packages are linked against MKL statically, similar
to C. Gohlke's packages.
My guess: the MKL optimization package supports multithreading and SVML, the
free packages only a serial interface to MKL.
That
BLIS looks interesting. Besides threading and runtime configuration,
adding support for building it as a shared library would also be
required to be usable by python packages that have several extension
modules that link against a BLAS implementation.
2014-04-03 14:56 GMT+02:00 Julian Taylor jtaylor.deb...@googlemail.com:
FYI, binaries linking openblas should add this patch in some way:
https://github.com/numpy/numpy/pull/4580
Cliffs: linking OpenBLAS prevents parallelization via threading or
multiprocessing.
just wasted a bunch of time
2014-03-28 23:13 GMT+01:00 Matthew Brett matthew.br...@gmail.com:
Hi,
On Fri, Mar 28, 2014 at 3:09 PM, Olivier Grisel
olivier.gri...@ensta.org wrote:
This is great! Has anyone started to work on OSX whl packages for
scipy? I assume the libgfortran, libquadmath libgcc_s dylibs will
not make
2014-03-31 13:53 GMT+02:00 Olivier Grisel olivier.gri...@ensta.org:
2014-03-28 23:13 GMT+01:00 Matthew Brett matthew.br...@gmail.com:
Hi,
On Fri, Mar 28, 2014 at 3:09 PM, Olivier Grisel
olivier.gri...@ensta.org wrote:
This is great! Has anyone started to work on OSX whl packages for
scipy
2014-03-28 22:18 GMT+01:00 Nathaniel Smith n...@pobox.com:
I thought OpenBLAS is usually used with reference lapack?
I am no longer sure myself. Debian and thus Ubuntu seem to be only
packaging the BLAS part of OpenBLAS for the libblas.so symlink and
uses the reference implementation of lapack for
2014-03-28 22:55 GMT+01:00 Julian Taylor jtaylor.deb...@googlemail.com:
On 28.03.2014 22:38, Olivier Grisel wrote:
2014-03-28 22:18 GMT+01:00 Nathaniel Smith n...@pobox.com:
I thought OpenBLAS is usually used with reference lapack?
I am no longer sure myself. Debian thus Ubuntu seem
This is great! Has anyone started to work on OSX whl packages for
scipy? I assume the libgfortran, libquadmath libgcc_s dylibs will
not make it as easy as for numpy. Would it be possible to use a static
gcc toolchain as Carl Kleffner is using for his experimental windows
whl packages?
--
2014-03-26 16:27 GMT+01:00 Olivier Grisel olivier.gri...@ensta.org:
Hi Carl,
I installed Python 2.7.6 64 bits on a windows server instance from
rackspace cloud and then ran get-pip.py and then could successfully
install the numpy and scipy wheel packages from your google drive
folder. I
2014-03-27 14:55 GMT+01:00 josef.p...@gmail.com:
On Wed, Mar 26, 2014 at 5:17 PM, Olivier Grisel
olivier.gri...@ensta.org wrote:
My understanding of Carl's effort is that the long term goal is to
have official windows whl packages for both numpy and scipy published
on PyPI with a builtin
Hi Carl,
I installed Python 2.7.6 64 bits on a windows server instance from
rackspace cloud and then ran get-pip.py and then could successfully
install the numpy and scipy wheel packages from your google drive
folder. I tested dot products and scipy.linalg.svd and they work as
expected.
Then I
My understanding of Carl's effort is that the long term goal is to
have official windows whl packages for both numpy and scipy published
on PyPI with a builtin BLAS / LAPACK implementation so that users can
do `pip install scipy` under windows and get something that just works
without have to
2014-03-26 22:31 GMT+01:00 Julian Taylor jtaylor.deb...@googlemail.com:
On 26.03.2014 22:17, Olivier Grisel wrote:
The problem with ATLAS is that you need to select the number of threads
at build time AFAIK. But we could set it to a reasonable default (e.g.
4 threads) for the default windows
Indeed I just ran the bench on my Mac and OSX Veclib is more than 2x
faster than OpenBLAS on such a square matrix multiplication (I just
have 2 physical cores on this box).
MKL from Canopy Express is slightly slower than OpenBLAS for this GEMM
bench on that box.
I really wonder why Veclib is faster in
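A minimal sketch of the kind of square-matrix GEMM benchmark discussed here (the matrix size is my assumption; the thread likely used larger matrices):

```python
import time

import numpy as np

n = 512  # assumed size; increase to stress a multithreaded BLAS
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
c = a.dot(b)  # dispatched to the dgemm of whatever BLAS numpy is linked against
elapsed = time.perf_counter() - t0
# A square GEMM costs ~2*n^3 floating point operations.
print("%.2f GFLOP/s" % (2 * n ** 3 / elapsed / 1e9))
```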
2014-02-20 23:56 GMT+01:00 Carl Kleffner cmkleff...@gmail.com:
Hi,
2014-02-20 23:17 GMT+01:00 Olivier Grisel olivier.gri...@ensta.org:
I had a quick look (without running the procedure) but I don't
understand some elements:
- apparently you never tell in the numpy's site.cfg nor
2014-02-20 11:32 GMT+01:00 Julian Taylor jtaylor.deb...@googlemail.com:
On Thu, Feb 20, 2014 at 1:25 AM, Nathaniel Smith n...@pobox.com wrote:
Hey all,
Just a heads up: thanks to the tireless work of Olivier Grisel, the OpenBLAS
development branch is now fork-safe when built with its default
2014-02-20 14:28 GMT+01:00 Sturla Molden sturla.mol...@gmail.com:
Will this mean NumPy, SciPy et al. can start using OpenBLAS in the
official binary packages, e.g. on Windows and Mac OS X? ATLAS is slow and
Accelerate conflicts with fork as well.
This is what I would like to do personally.
FYI: to build scipy against OpenBLAS I used the following site.cfg at
the root of my scipy source folder:
[DEFAULT]
library_dirs = /opt/OpenBLAS-noomp/lib:/usr/local/lib
include_dirs = /opt/OpenBLAS-noomp/include:/usr/local/include
[blas_opt]
libraries = openblas
[lapack_opt]
libraries =
Thanks for sharing, this is all very interesting.
Have you tried to have a look at the memory usage and import time of
numpy when linked against libopenblas.dll?
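The import-time part can be measured with a sketch like this (mine, not from the thread): time `import numpy` in a fresh interpreter, since an in-process re-import would just hit `sys.modules` and measure nothing:

```python
import subprocess
import sys
import time

# Spawn a clean interpreter so the import cost (including any BLAS
# library loading and warmup) is actually paid.
t0 = time.perf_counter()
proc = subprocess.run([sys.executable, "-c", "import numpy"], check=True)
print("numpy import time: %.2fs" % (time.perf_counter() - t0))
```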
--
Olivier
2014-02-20 16:01 GMT+01:00 Julian Taylor jtaylor.deb...@googlemail.com:
this is probably caused by the memory warmup
it can be disabled with NO_WARMUP=1 in some configuration file.
This was it, I now get:
>>> import os, psutil
>>> psutil.Process(os.getpid()).get_memory_info().rss / 1e6
20.324352
I have exactly the same setup as yours and it links to OpenBLAS
correctly (in a venv as well, installed with python setup.py install).
The only difference is that I installed OpenBLAS in the default
folder: /opt/OpenBLAS (and I reflected that in site.cfg).
When you run otool -L, is it in your
I had a quick look (without running the procedure) but I don't
understand some elements:
- apparently you never tell in the numpy's site.cfg nor the scipy.cfg
to use the openblas lib nor set the
library_dirs: how does numpy.distutils know that it should dynamically link
against numpy/core/libopenblas.dll
Congrats and thanks to Andreas and everyone involved in the release,
the website fixes and the online survey setup.
I posted Andreas blog post on HN and reddit:
- http://news.ycombinator.com/item?id=5094319
-
2012/9/23 Nathaniel Smith n...@pobox.com:
On Sat, Sep 22, 2012 at 4:46 PM, Olivier Grisel
olivier.gri...@ensta.org wrote:
There is also a third use case that is problematic on numpy master:
orig = np.memmap('tmp.mmap', dtype=np.float64, shape=100, mode='w+')
orig[:] = np.arange(orig.shape[0
2012/9/23 Olivier Grisel olivier.gri...@ensta.org:
The only clean solution for the collapsed base of numpy 1.7 I see
would be to replace the direct mmap.mmap buffer instance from the
numpy.memmap class to use a custom wrapper of mmap.mmap that would
still implement the buffer python API
2012/9/22 Gael Varoquaux gael.varoqu...@normalesup.org:
Hi list,
I am struggling with offsets on the view of a memmaped array. Consider
the following:
import numpy as np
a = np.memmap('tmp.mmap', dtype=np.float64, shape=50, mode='w+')
a[:] = np.arange(50)
b = a[10:]
Here, I have
There is also a third use case that is problematic on numpy master:
orig = np.memmap('tmp.mmap', dtype=np.float64, shape=100, mode='w+')
orig[:] = np.arange(orig.shape[0]) * -1.0 # negative markers to
detect under / overflows
a = np.memmap('tmp.mmap', dtype=np.float64, shape=50, mode='r+',
A posix dup (http://www.unix.com/man-page/POSIX/3posix/dup/) would not
solve it as the fd is hidden inside the python `mmap.mmap` instance
that is a builtin that just exposes the python buffer interface and
hides the implementation details.
The only clean solution would be to make `numpy.memmap`
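The use case above can be sketched as follows (the tempfile path is mine; the thread used 'tmp.mmap' in the working directory):

```python
import os
import tempfile

import numpy as np

path = os.path.join(tempfile.mkdtemp(), "tmp.mmap")

# Fill the file with negative markers through a first memmap.
orig = np.memmap(path, dtype=np.float64, shape=100, mode="w+")
orig[:] = np.arange(orig.shape[0]) * -1.0

# Map the second half of the same file through a second memmap opened
# with a byte offset; both mappings share the underlying file.
itemsize = np.dtype(np.float64).itemsize
a = np.memmap(path, dtype=np.float64, shape=50, mode="r+",
              offset=50 * itemsize)
a[:] = np.arange(50)
a.flush()
```

After the flush, the first 50 elements seen through `orig` still hold the negative markers while the last 50 reflect the writes made through `a`.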
2012/9/22 Charles R Harris charlesr.har...@gmail.com:
On Sat, Sep 22, 2012 at 11:52 AM, Charles R Harris
charlesr.har...@gmail.com wrote:
On Sat, Sep 22, 2012 at 11:31 AM, Gael Varoquaux
gael.varoqu...@normalesup.org wrote:
On Sat, Sep 22, 2012 at 11:16:27AM -0600, Charles R Harris
2012/6/13 James Bergstra bergs...@iro.umontreal.ca:
Further to the recent discussion on lazy evaluation numba, I moved
what I was doing into a new project:
PyAutoDiff:
https://github.com/jaberg/pyautodiff
It currently works by executing CPython bytecode with a numpy-aware
engine that
2012/6/14 James Bergstra bergs...@iro.umontreal.ca:
On Thu, Jun 14, 2012 at 4:00 AM, Olivier Grisel
olivier.gri...@ensta.org wrote:
2012/6/13 James Bergstra bergs...@iro.umontreal.ca:
Further to the recent discussion on lazy evaluation numba, I moved
what I was doing into a new project
2012-04-02 18:36 GMT+02:00 Frédéric Bastien no...@nouiz.org:
numpy.random is not optimized. If Matlab uses the random numbers from
MKL, they will be much faster.
In that case this is indeed negligible:
In [1]: %timeit np.random.randn(2000, 2000)
1 loops, best of 3: 306 ms per loop
--
2012/1/18 Chao YUE chaoyue...@gmail.com:
Does anybody know if there is a similar chance for training in Paris (or
other places of France)?
The price is nice, just because it's in the US.
The next EuroScipy will take place in Brussels. Just 1h25m train ride
from Paris.
- Plotting with matplotlib - Mike Müller
https://us.pycon.org/2012/schedule/presentation/238/
- Introduction to Interactive Predictive Analytics in Python with
scikit-learn - Olivier Grisel
https://us.pycon.org/2012/schedule/presentation/195/
- High Performance Python II - Travis Oliphant
https
2011/6/22 RadimRehurek radimrehu...@seznam.cz:
Date: Wed, 22 Jun 2011 11:30:47 -0400
From: Alex Flint alex.fl...@gmail.com
Subject: [Numpy-discussion] argmax for top N elements
Is it possible to use argmax or something similar to find the locations of
the largest N elements in a matrix?
I
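For reference, a later-NumPy (1.8+, after this thread) answer: `np.argpartition` selects the flat indices of the N largest values in O(n), and `np.unravel_index` maps them back to matrix coordinates (the helper name is mine):

```python
import numpy as np

def top_n_indices(m, n):
    # Partition so the n largest values occupy the last n flat positions
    # (unsorted among themselves), then recover (row, col) coordinates.
    flat = np.argpartition(m.ravel(), -n)[-n:]
    return np.unravel_index(flat, m.shape)

m = np.array([[1, 9, 3],
              [7, 2, 8]])
rows, cols = top_n_indices(m, 2)  # positions of 9 and 8
```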
2011/3/24 Nadav Horesh nad...@visionsense.com:
I am looking for a partial least squares code refactoring for two (X, Y)
matrices. I found the following, but they do not work for me:
1. MDP: Factors only one matrix (am I wrong?)
2. pychem: Windows only code (I use Linux)
3. chemometrics from
Hi all,
I will be giving a tutorial on machine learning with scikit-learn
tomorrow morning and a talk on text classification on Friday. Then I
will stay until Monday evening.
Regards,
--
Olivier
2011/2/25 Gael Varoquaux gael.varoqu...@normalesup.org:
On Fri, Feb 25, 2011 at 10:36:42AM +0100, Fred wrote:
I have a big array (44 GB) I want to decimate.
But this array has a lot of NaN (only 1/3 has value, in fact, so 2/3 of
NaN).
If I basically decimate it (a la NumPy, ie data[::nx,
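One NaN-aware variant (a sketch under my own assumptions, not the solution from the thread): average fixed-size blocks with `np.nanmean` instead of plain strided slicing, so NaNs within a block are ignored rather than propagated:

```python
import numpy as np

def nan_decimate(data, nx):
    # Drop the tail so the length is a multiple of nx, then average each
    # block of nx samples while ignoring the NaNs inside each block.
    trimmed = data[: (data.shape[0] // nx) * nx]
    return np.nanmean(trimmed.reshape(-1, nx), axis=1)

data = np.array([1.0, np.nan, 3.0, 4.0])
out = nan_decimate(data, 2)  # -> array([1. , 3.5])
```

Note that an all-NaN block still yields NaN (with a RuntimeWarning), which is arguably the right answer for fully missing regions.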
Hello numpy users,
We (Isabel Drost, Nicolas Maillot and I) are organizing a Data Analytics Devroom
that will take place during the next edition of the FOSDEM in Brussels
on Feb. 5. Here is the CFP:
http://datadevroom.couch.it/CFP
You might be interested in attending the event and take the
2010/8/2 John Salvatier jsalv...@u.washington.edu:
Holy cow! I was looking for this exact package for extending pymc! Now I've
found two packages that do basically exactly what I want (Theano and
ALGOPY).
Beware that Theano only does symbolic differentiation, which is very
different from AD.
OpenCL is definitely the way to go for a cross platform solution with
both nvidia and AMD having released beta runtimes to their respective
developer networks (free as in beer subscription required for the beta
download pages). Final public releases are expected around 2009 Q3.
OpenCL is an open
2009/8/6 David Cournapeau da...@ar.media.kyoto-u.ac.jp:
Olivier Grisel wrote:
OpenCL is definitely the way to go for a cross platform solution with
both nvidia and AMD having released beta runtimes to their respective
developer networks (free as in beer subscription required for the beta
Also note: nvidia is about to release the first implementation of an OpenCL
runtime based on cuda. OpenCL is an open standard such as OpenGL but for
numerical computing on stream platforms (GPUs, Cell BE, Larrabee, ...).
--
Olivier
On May 26, 2009 8:54 AM, David Cournapeau
2009/2/20 David Warde-Farley d...@cs.toronto.edu:
Hi Olivier,
There was this idea posted on the Scipy-user list a while back:
http://projects.scipy.org/pipermail/scipy-user/2008-August/017954.html
but it doesn't look like he got anywhere with it, or even got a
response.
I just
Hi numpist people,
I discovered ufuncs and their ability to compute results into
preallocated arrays:
>>> a = arange(10, dtype=float32)
>>> b = arange(10, dtype=float32) + 1
>>> c = add(a, b, a)
>>> c is a
True
>>> a
array([  1.,   3.,   5.,   7.,   9.,  11.,  13.,  15.,  17.,  19.], dtype=float32)
+1
On Feb 6, 2009 12:16 AM, Gael Varoquaux gael.varoqu...@normalesup.org
wrote:
On Thu, Feb 05, 2009 at 05:08:49PM -0600, Travis E. Oliphant wrote:
I've been fairly quiet on this list for awhile due to work and family
schedule, but I think about how things can improve regularly. One
2009/1/16 Gregor Thalhammer gregor.thalham...@gmail.com:
Francesc Alted schrieb:
Wow, pretty nice speed-ups indeed! In fact I was thinking in including
support for threading in Numexpr (I don't think it would be too
difficult, but let's see). BTW, do you know how VML is able to achieve
a
Interesting topic indeed. I think I have been hit with similar problems on
toy experimental scripts. So far the solution was always ad-hoc filesystem caches of
numpy arrays with manual filename management. Maybe the first step for
designing a generic solution would be to list some representative yet simple
Hi list,
Suppose I have array a with dimensions (d1, d3) and array b with
dimensions (d2, d3). I want to compute array c with dimensions (d1,
d2) holding the squared Euclidean distances between the vectors in a
and b of size d3.
My first take was to use a python level loop:
from numpy import *
c =
2008/12/4 Stéfan van der Walt [EMAIL PROTECTED]:
Hi Olivier
2008/12/4 Olivier Grisel [EMAIL PROTECTED]:
To avoid the python level loop I then tried to use broadcasting as follows:
c = sum((a[:,newaxis,:] - b) ** 2, axis=2)
But this builds a useless and huge (d1, d2, d3) temporary array
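The temporary can be avoided with the standard expansion ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2, which turns the computation into one (d1, d2) matrix product plus two row-norm vectors (a sketch, not the answer posted in the thread):

```python
import numpy as np

def squared_distances(a, b):
    # ||a_i - b_j||^2 = ||a_i||^2 - 2 a_i.b_j + ||b_j||^2, so a single
    # (d1, d2) dot product replaces the (d1, d2, d3) broadcast temporary.
    aa = (a ** 2).sum(axis=1)[:, np.newaxis]   # shape (d1, 1)
    bb = (b ** 2).sum(axis=1)[np.newaxis, :]   # shape (1, d2)
    return aa - 2 * a.dot(b.T) + bb

a = np.arange(6.0).reshape(2, 3)
b = np.arange(12.0).reshape(4, 3)
c = squared_distances(a, b)  # shape (2, 4)
```

The dot product goes through BLAS, so this is also much faster than the broadcasting version for large d3.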
2008/12/4 Charles R Harris [EMAIL PROTECTED]:
On Thu, Dec 4, 2008 at 8:26 AM, Olivier Grisel [EMAIL PROTECTED]
wrote:
Hi list,
Suppose I have array a with dimensions (d1, d3) and array b with
dimensions (d2, d3). I want to compute array c with dimensions (d1,
d2) holding the squared