Re: [Numpy-discussion] PyData Barcelona this May

2017-03-21 Thread Jaime Fernández del Río
On Mon, Mar 20, 2017 at 10:13 PM, Chris Barker wrote:

> On Mon, Mar 20, 2017 at 11:58 AM, Jaime Fernández del Río <jaime.f...@gmail.com> wrote:
>
>>  I have just submitted a workshop proposal with the following short
>> description:
>>
>> Taking NumPy In Stride
>> This workshop is aimed at users already familiar with NumPy. We will
>> dissect the NumPy memory model with the help of a very powerful
>> abstraction: strides. Participants will learn how to create different
>> views out of the same data, including multidimensional ones, get a new
>> angle on how and why broadcasting works, and explore related techniques
>> to write faster, more efficient code.
>>
>
> I'd go!
>
> And nice title :-)
>
> Any thoughts on a similar one for SciPy in Austin?
>

I'll be more than happy to share presentations, notebooks and whatnot with
someone wanting to run the tutorial over there. But Austin is a looong way
from Zürich, and the dates conflict with my son's birthday, so I don't
think I will be going...

Jaime
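For readers new to the thread, here is a minimal sketch of the kind of stride trick the workshop abstract alludes to (illustrative only, not taken from the workshop materials): reinterpreting one buffer under a different shape and strides, with no copying.

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

# A 1-D array of 8 int64 values lives in one contiguous buffer.
a = np.arange(8, dtype=np.int64)

# Reinterpret the same buffer as 5 overlapping windows of length 4:
# each row starts one element (itemsize bytes) after the previous one.
windows = as_strided(a, shape=(5, 4), strides=(a.itemsize, a.itemsize))
print(windows[1])                    # [1 2 3 4]
print(np.shares_memory(a, windows))  # True: no data was copied
```

Writing through such a view mutates the original buffer, which is exactly the sort of footgun-and-superpower the workshop would cover.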


>
> -CHB
>
>
>
>
>
>> Let's see what the organizers think of it...
>>
>> Jaime
>>
>>
>> On Fri, Mar 17, 2017 at 10:59 PM, Ralf Gommers wrote:
>>
>>>
>>>
>>> On Sat, Mar 18, 2017 at 8:41 AM, Chris Barker wrote:
>>>
 On Fri, Mar 17, 2017 at 4:37 AM, Jaime Fernández del Río <jaime.f...@gmail.com> wrote:

>
>- many people who use numpy in their daily work don't know what
>strides are; this was a BIG surprise for me.
>
> I'm not surprised at all. To start with, the majority of users are
>>> self-taught programmers who have never used anything lower level than Python
>>> or Matlab. Even talking to them about memory layout presents challenges.
>>>
>>>

>-
>
> Based on that experience, I was thinking that maybe a good topic for a
> workshop would be NumPy's memory model: views, reshaping, strides, some
> hints of buffering in the iterator...
>

>>> This material has been used multiple times in EuroScipy tutorials and
>>> may be of use: http://www.scipy-lectures.org/advanced/advanced_numpy/index.html
>>>
>>> Ralf
>>>
>>>
>>>
 I think this is a great idea. In fact, when I do an intro to numpy, I
 spend a bit of time on those issues, 'cause I think it's key to "Getting"
 numpy, and not something that people end up learning on their own from
 tutorials, etc. However, in my case, I try to jam it into a low-level
 intro, and I think that fails :-(

 So doing it on its own with the assumption that participants already
 know the basics of the high-level python interface is a great idea.

 Maybe an "advanced" numpy tutorial for SciPy 2017 in Austin also???

 Here is my last talk -- maybe it'll be helpful.

 http://uwpce-pythoncert.github.io/SystemDevelopment/scipy.html#scipy

 the strides stuff is covered in a notebook here:

 https://github.com/UWPCE-PythonCert/SystemDevelopment/blob/master/Examples/numpy/stride_tricks.ipynb

 other notebooks here:

 https://github.com/UWPCE-PythonCert/SystemDevelopment/tree/master/Examples/numpy

 and the source for the whole thing is here:

 https://github.com/UWPCE-PythonCert/SystemDevelopment/blob/master/slides_sources/source/scipy.rst


 All licensed under: Creative Commons Attribution-ShareAlike -- so
 please use anything you find useful.

 -CHB
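A taste of what such a tutorial segment typically shows (a sketch of the general idea, not excerpted from the notebooks linked above): transposing and strided slicing only rearrange an array's strides, so no data moves.

```python
import numpy as np

a = np.arange(12, dtype=np.int64).reshape(3, 4)
print(a.strides)                 # (32, 8): one row step = 4 elements * 8 bytes

# Transposing swaps the strides; the underlying buffer is untouched.
t = a.T
print(t.strides)                 # (8, 32)
print(np.shares_memory(a, t))    # True

# Slicing with a step just scales a stride, again without copying.
every_other = a[:, ::2]
print(every_other.strides)       # (32, 16)
```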



 And Julian's temporary work lends itself to a very nice talk, more on
> Python internals than on NumPy, but it's a very cool subject nonetheless.
>
> So my thinking is that I am going to propose those two, as a workshop
> and a talk. Thoughts?
>
> Jaime
>
> On Thu, Mar 9, 2017 at 8:29 PM, Sebastian Berg <sebast...@sipsolutions.net> wrote:
>
>> On Thu, 2017-03-09 at 15:45 +0100, Jaime Fernández del Río wrote:
>> > There will be a PyData conference in Barcelona this May:
>> >
>> > http://pydata.org/barcelona2017/
>> >
>> > I am planning on attending, and was thinking of maybe proposing to
>> > organize a numpy-themed workshop or tutorial.
>> >
>> > My personal inclination would be to look at some advanced topic that
>> > I know well, like writing gufuncs in Cython, but wouldn't mind doing
>> > a more run-of-the-mill thing. Does anyone have thoughts or experiences
>> > on what has worked well in similar situations? Any specific topic you
>> > always wanted to attend a workshop on, but were afraid to ask?
>> >
>> > Alternatively, or on top of the workshop, I could propose to do a
>> > talk: talking last year at PyData Madrid about the new indexing was a
>> > lot of fun! Thing is, I have been quite disconnected from the project
>> > this past year, and can't really think of any 

Re: [Numpy-discussion] PyData Barcelona this May

2017-03-21 Thread Marten van Kerkwijk
"Taking numpy in stride, and the essential role of 0" ;-)

-- Marten
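(Presumably a nod to the stride of 0 that broadcasting relies on: a broadcast axis steps through memory by zero bytes, so one row can stand in for many. A small illustration of that reading of the joke:)

```python
import numpy as np

row = np.arange(4, dtype=np.int64)

# Broadcasting to (3, 4) adds a leading axis whose stride is 0 bytes:
# every "row" of b reads the same four values, with no copy made.
b = np.broadcast_to(row, (3, 4))
print(b.strides)                 # (0, 8)
print(np.shares_memory(row, b))  # True
```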
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Advanced numpy tutorial for SciPy?

2017-03-21 Thread Chris Barker - NOAA Federal
In another thread, there is a discussion of a workshop on "Taking
NumPy In Stride" for PyData Barcelona.

I think it would be great to have something like that at SciPy in
Austin this year.

Jaime can't make it, and I don't think strides are going to fill a
four-hour tutorial, so it would be good as part of an advanced numpy
tutorial.

I don't have the bandwidth to put together an entire tutorial, but
maybe someone would like to join forces?

Or if someone is already planning an advanced numpy tutorial, perhaps
I could contribute.

Not much time left to get a proposal in!

-Chris


Re: [Numpy-discussion] PyData Barcelona this May

2017-03-21 Thread Daπid
On 20 March 2017 at 19:58, Jaime Fernández del Río  wrote:
>
> Taking NumPy In Stride
> This workshop is aimed at users already familiar with NumPy. We will dissect
> the NumPy memory model with the help of a very powerful abstraction:
> strides.
> Participants will learn how to create different views out of the same data,
> including multidimensional ones, get a new angle on how and why broadcasting
> works, and explore related techniques to write faster, more efficient code.

I think I only understand this abstract because I know what views are.
Maybe you could add a line explaining what they are? (I cannot think
of one myself).
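One possible one-liner for the abstract (my phrasing, not Jaime's): a view is a new array object that reinterprets another array's memory instead of owning a copy of it. For example:

```python
import numpy as np

a = np.arange(10)
v = a[2:8]         # a view: new array object, same underlying buffer
v[0] = 99          # writing through the view...
print(a[2])        # ...changes the original: 99

c = a[2:8].copy()  # a copy owns its own buffer
c[0] = -1
print(a[2])        # unaffected: still 99
```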


[Numpy-discussion] Announcing Theano 0.9.0

2017-03-21 Thread Steven Bocco
 Announcing Theano 0.9.0

This is a release for a major version, with lots of new features, bug
fixes, and some interface changes (deprecated or potentially misleading
features were removed).

This release is the last major version that features the old GPU back-end (
theano.sandbox.cuda, accessible through device=gpu*). All GPU users are
encouraged to transition to the new GPU back-end, based on libgpuarray (
theano.gpuarray, accessible through device=cuda*). For more information,
see
https://github.com/Theano/Theano/wiki/Converting-to-the-new-gpu-back-end%28gpuarray%29
.

Upgrading to Theano 0.9.0 is recommended for everyone, but you should first
make sure that your code does not raise deprecation warnings with Theano
0.8*. Otherwise, results may change, or warnings may have been turned
into errors.

For those using the bleeding edge version in the git repository, we
encourage you to update to the rel-0.9.0 tag.
What's New

Highlights (since 0.8.0):


   - Better Python 3.5 support
   - Better numpy 1.12 support
   - Conda packages for Mac, Linux and Windows
   - Support newer Mac and Windows versions
   - More Windows integration:
  - Theano scripts (theano-cache and theano-nose) now work on Windows
  - Better support for Windows line endings in C code
  - Support for spaces in paths on Windows
   - Scan improvements:
  - More scan optimizations, with faster compilation and gradient
  computation
  - Support for checkpoint in scan (trade off between speed and memory
  usage, useful for long sequences)
  - Fixed broadcast checking in scan
   - Graphs improvements:
  - More numerical stability by default for some graphs
  - Better handling of corner cases for theano functions and graph
  optimizations
  - More graph optimizations with faster compilation and execution
  - Smaller and more readable graphs
   - New GPU back-end:
  - Removed warp-synchronous programming to get good results with newer
  CUDA drivers
  - More pooling support on GPU when cuDNN isn't available
  - Full support of ignore_border option for pooling
  - Inplace storage for shared variables
  - float16 storage
  - Using PCI bus ID of graphics cards for a better mapping between
  theano device number and nvidia-smi number
  - Fixed offset error in GpuIncSubtensor
   - Less C code compilation
   - Added support for bool dtype
   - Updated and more complete documentation
   - Bug fixes related to merge optimizer and shape inference
   - Lots of other bug fixes, crash fixes, and warning improvements

Interface changes:


   - Merged CumsumOp/CumprodOp into CumOp
   - In MRG module:
  - Replaced method multinomial_wo_replacement() with new method
  choice()
  - Random generator now tries to infer the broadcast pattern of its
  output
   - New pooling interface
   - Pooling parameters can change at run time
   - Moved softsign out of sandbox to theano.tensor.nnet.softsign
   - Using floatX dtype when converting empty list/tuple
   - Roll now makes the shift modulo the size of the axis we roll on
   - round() now defaults to the same behavior as NumPy: half_to_even

Convolution updates:


   - Support for full and half modes for 2D and 3D convolutions, including
   in conv3d2d
   - Allowed pooling of empty batch
   - Implemented conv2d_transpose convenience function
   - Multi-core convolution and pooling on CPU
   - New abstract 3d convolution interface similar to the 2d convolution
   interface
   - Dilated convolution

GPU:


   - cuDNN: support version 5.1 and wrap batch normalization (2d and 3d)
   and RNN functions
   - Multi-GPU synchronous updates (via platoon, using NCCL)
   - Gemv (matrix-vector product) speedup for special shapes
   - cublas gemv workaround when we reduce on an axis with a dimension
   size of 0
   - Warn user that some cuDNN algorithms may produce unexpected results in
   certain environments for convolution backward filter operations
   - GPUMultinomialFromUniform op now supports multiple dtypes
   - Support for MaxAndArgMax for some axis combinations
   - Support for solve (using cusolver), erfinv and erfcinv
   - Implemented GpuAdvancedSubtensor

New features:


   - OpFromGraph now allows gradient overriding for every input
   - Added Abstract Ops for batch normalization that use cuDNN when
   available and pure Theano CPU/GPU alternatives otherwise
   - Added gradient of solve, tensorinv (CPU), tensorsolve (CPU),
   searchsorted (CPU), DownsampleFactorMaxGradGrad (CPU)
   - Added Multinomial Without Replacement
   - Allowed partial evaluation of compiled functions
   - More Rop support
   - Indexing supports ellipsis: a[..., 3], a[1, ..., 3]
   - Added theano.tensor.{tensor5,dtensor5, ...}
   - compiledir_format supports device
   - Added New Theano flag conv.assert_shape to check user-provided shapes
   at runtime (for debugging)
   - Added new Theano flag cmodule.age_thresh_use
   - Added new Theano flag cuda.enabled
   -