Announcing Theano 0.9.0

This is a major release, with many new features, bug fixes, and some
interface changes (deprecated or potentially misleading features were
removed).

This release is the last major version to feature the old GPU back-end
(theano.sandbox.cuda, accessible through device=gpu*). All GPU users are
encouraged to transition to the new GPU back-end, based on libgpuarray
(theano.gpuarray, accessible through device=cuda*). For more information,
see
https://github.com/Theano/Theano/wiki/Converting-to-the-new-gpu-back-end%28gpuarray%29
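
As a quick, minimal sketch of the difference (the device name cuda0 below
is just an example; substitute the GPU you actually use), the same script
targets either back-end purely through the device flag:

    # Old back-end:  THEANO_FLAGS=device=gpu0  python script.py
    # New back-end:  THEANO_FLAGS=device=cuda0 python script.py
    import theano
    import theano.tensor as T

    x = T.fmatrix('x')
    f = theano.function([x], T.exp(x))  # compiled for the selected device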

Upgrading to Theano 0.9.0 is recommended for everyone, but you should
first make sure that your code does not raise deprecation warnings with
Theano 0.8.x. Otherwise results can change, or warnings may have been
turned into errors.
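
One way to check this (plain Python, nothing Theano-specific) is to turn
deprecation warnings into errors while exercising your code:

    import warnings

    # Fail loudly on deprecated usage instead of silently ignoring it.
    warnings.simplefilter('error', DeprecationWarning)

    import theano  # import after installing the filter
    # ... run your existing Theano code or test suite here ...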

For those using the bleeding edge version in the git repository, we
encourage you to update to the rel-0.9.0 tag.

What's New

Highlights (since 0.8.0):


   - Better Python 3.5 support
   - Better NumPy 1.12 support
   - Conda packages for Mac, Linux and Windows
   - Support for newer Mac and Windows versions
   - More Windows integration:
      - Theano scripts (theano-cache and theano-nose) now work on Windows
      - Better support for Windows line endings in C code
      - Support for spaces in paths on Windows
   - Scan improvements:
      - More scan optimizations, with faster compilation and gradient
      computation
      - Support for checkpointing in scan (a trade-off between speed and
      memory usage, useful for long sequences; see the sketch after this
      list)
      - Fixed broadcast checking in scan
   - Graph improvements:
      - More numerical stability by default for some graphs
      - Better handling of corner cases for Theano functions and graph
      optimizations
      - More graph optimizations, with faster compilation and execution
      - Smaller and more readable graphs
   - New GPU back-end:
      - Removed warp-synchronous programming to get good results with newer
      CUDA drivers
      - More pooling support on GPU when cuDNN isn't available
      - Full support of ignore_border option for pooling
      - Inplace storage for shared variables
      - float16 storage
      - Use of the PCI bus ID of graphics cards for a better mapping
      between Theano device numbers and nvidia-smi numbers
      - Fixed offset error in GpuIncSubtensor
   - Less C code compilation
   - Added support for bool dtype
   - Updated and more complete documentation
   - Bug fixes related to merge optimizer and shape inference
   - Lots of other bug fixes, crash fixes, and warning improvements
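
As a reminder of the interface these scan improvements target, here is a
minimal sketch of a loop with a gradient flowing through it (standard
theano.scan usage; the checkpointing variant mentioned above trades
recomputation time for memory):

    import theano
    import theano.tensor as T

    x = T.vector('x')

    # Running sum over x: each step adds the current element to the
    # accumulator initialized by outputs_info.
    totals, updates = theano.scan(
        fn=lambda elem, acc: acc + elem,
        sequences=x,
        outputs_info=T.zeros_like(x[0]))

    loss = totals[-1] ** 2
    grad = theano.grad(loss, x)  # gradient through the whole loop
    f = theano.function([x], [loss, grad], updates=updates)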

Interface changes:


   - Merged CumsumOp/CumprodOp into CumOp
   - In MRG module:
      - Replaced method multinomial_wo_replacement() with new method
      choice()
      - Random generator now tries to infer the broadcast pattern of its
      output
   - New pooling interface
   - Pooling parameters can change at run time
   - Moved softsign out of sandbox to theano.tensor.nnet.softsign
   - Use of the floatX dtype when converting an empty list/tuple
   - Roll now makes the shift modulo the size of the axis being rolled on
   - round() now defaults to the same mode as NumPy: half_to_even (see the
   example after this list)
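
A small example of what the new rounding default means in practice (ties
go to the nearest even integer, matching numpy.round):

    import numpy as np
    import theano
    import theano.tensor as T

    x = T.dvector('x')
    f = theano.function([x], T.round(x))  # half_to_even by default in 0.9

    print(f(np.array([0.5, 1.5, 2.5])))  # -> [ 0.  2.  2.]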

Convolution updates:


   - Support for full and half modes for 2D and 3D convolutions, including
   in conv3d2d
   - Allowed pooling of an empty batch
   - Implemented the conv2d_transpose convenience function
   - Multi-core convolution and pooling on CPU
   - New abstract 3d convolution interface similar to the 2d convolution
   interface
   - Dilated convolution (see the sketch after this list)
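
A minimal sketch of the dilation support, assuming the filter_dilation
keyword of theano.tensor.nnet.conv2d is the entry point (check the 0.9
documentation for the exact parameter name):

    import theano
    import theano.tensor as T
    from theano.tensor.nnet import conv2d

    images = T.tensor4('images')    # (batch, channels, rows, cols)
    filters = T.tensor4('filters')  # (n_filters, channels, f_rows, f_cols)

    # Dilated (atrous) convolution: filter taps are spaced 2 pixels
    # apart, enlarging the receptive field without adding parameters.
    out = conv2d(images, filters, filter_dilation=(2, 2))
    f = theano.function([images, filters], out)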

GPU:


   - cuDNN: support version 5.1 and wrap batch normalization (2D and 3D)
   and RNN functions
   - Multi-GPU synchronous updates (via Platoon, using NCCL)
   - Gemv (matrix-vector product) speed-up for special shapes
   - cuBLAS gemv workaround when reducing on an axis with a dimension
   size of 0
   - Warn users that some cuDNN algorithms may produce unexpected results
   in certain environments for convolution backward filter operations
   - GPUMultinomialFromUniform op now supports multiple dtypes
   - Support for MaxAndArgMax for some axis combinations
   - Support for solve (using cusolver), erfinv and erfcinv (see the
   sketch after this list)
   - Implemented GpuAdvancedSubtensor
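
A minimal sketch of the solve support via the CPU interface,
theano.tensor.slinalg.solve (assuming, as with scipy.linalg.solve, that a
vector right-hand side is accepted; per the notes above, the GPU version
uses cusolver):

    import numpy as np
    import theano
    import theano.tensor as T
    from theano.tensor.slinalg import solve

    A = T.dmatrix('A')
    b = T.dvector('b')
    f = theano.function([A, b], solve(A, b))  # x such that A.dot(x) == b

    print(f(np.array([[3., 1.], [1., 2.]]), np.array([9., 8.])))
    # -> [ 2.  3.]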

New features:


   - OpFromGraph now allows gradient overriding for every input
   - Added Abstract Ops for batch normalization that use cuDNN when
   available and pure Theano CPU/GPU alternatives otherwise
   - Added gradient of solve, tensorinv (CPU), tensorsolve (CPU),
   searchsorted (CPU), DownsampleFactorMaxGradGrad (CPU)
   - Added Multinomial Without Replacement
   - Allowed partial evaluation of compiled functions
   - More Rop support
   - Indexing supports ellipsis: a[..., 3], a[1, ..., 3] (see the example
   after this list)
   - Added theano.tensor.{tensor5, dtensor5, ...}
   - compiledir_format now supports the device name
   - Added new Theano flag conv.assert_shape to check user-provided shapes
   at runtime (for debugging)
   - Added new Theano flag cmodule.age_thresh_use
   - Added new Theano flag cuda.enabled
   - Added new Theano flag nvcc.cudafe to enable faster compilation and
   import with old CUDA back-end
   - Added new Theano flag print_global_stats to print some global
   statistics (time spent) at the end
   - Added new Theano flag profiling.ignore_first_call, useful to profile
   the new gpu back-end
   - Removed ProfileMode (use the Theano flag profile=True instead)
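
A quick illustration of the ellipsis support (standard NumPy-style
indexing, now available on Theano tensors):

    import theano
    import theano.tensor as T

    a = T.tensor4('a')   # 4-d tensor
    b = a[..., 3]        # same as a[:, :, :, 3]
    c = a[1, ..., 3]     # same as a[1, :, :, 3]
    f = theano.function([a], [b, c])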

Others:


   - Split op now has C code for CPU and GPU
   - theano-cache list now includes compilation times
   - Sped up computing only the argmax on the GPU (without also needing
   the max)
   - More stack trace information in error messages
   - Sped up the gradient of Cholesky
   - log(sum(exp(...))) is now optimized for numerical stability (see the
   sketch after this list)
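
A minimal sketch of what the stability optimization buys: without it,
exp() overflows to inf for large inputs before the log is taken.

    import numpy as np
    import theano
    import theano.tensor as T

    x = T.dvector('x')
    # Theano rewrites this graph into a numerically stable form,
    # roughly max(x) + log(sum(exp(x - max(x)))).
    f = theano.function([x], T.log(T.sum(T.exp(x))))

    print(f(np.array([1000., 1000.])))  # -> ~1000.69, not inf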

Other more detailed changes:


   - Added Jenkins (GPU tests run on pull requests in addition to the
   daily buildbot)
   - Removed old benchmark directory and other old files no longer used
   - Use of 64-bit indexing in sparse ops to allow matrices with more than
   2^31 - 1 elements
   - Allowed more than one output to be computed inplace (destructively)
   - More support for negative axes
   - Added the keepdims parameter to the norm function
   - Made scan gradients more deterministic

Download and Install

You can download Theano from http://pypi.python.org/pypi/Theano

Installation instructions are available at
http://deeplearning.net/software/theano/install.html

Description

Theano is a Python library that allows you to define, optimize, and
efficiently evaluate mathematical expressions involving multi-dimensional
arrays. It is built on top of NumPy. Theano features:


   - tight integration with NumPy: an interface similar to NumPy's;
   numpy.ndarrays are also used internally in Theano-compiled functions.
   - transparent use of a GPU: perform data-intensive computations much
   faster than on a CPU.
   - efficient symbolic differentiation: Theano can compute derivatives for
   functions of one or many inputs.
   - speed and stability optimizations: avoid nasty bugs when computing
   expressions such as log(1 + exp(x)) for large values of x (see the
   example after this list).
   - dynamic C code generation: evaluate expressions faster.
   - extensive unit-testing and self-verification: includes tools for
   detecting and diagnosing bugs and/or potential problems.
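
To make this concrete, here is a minimal end-to-end sketch (define a
symbolic expression, differentiate it, compile, evaluate) using the
log(1 + exp(x)) example from the list above:

    import theano
    import theano.tensor as T

    x = T.dscalar('x')
    y = T.log(1 + T.exp(x))  # softplus; Theano stabilizes this graph
    dy = theano.grad(y, x)   # symbolic derivative, sigmoid(x)

    f = theano.function([x], [y, dy])
    print(f(1000.0))  # -> approximately [1000.0, 1.0], no overflow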

Theano has been powering large-scale computationally intensive scientific
research since 2007, but it is also approachable enough to be used in the
classroom (IFT6266 at the University of Montreal).

Resources

About Theano:

http://deeplearning.net/software/theano/

Theano-related projects:

http://github.com/Theano/Theano/wiki/Related-projects

About NumPy:

http://numpy.scipy.org/

About SciPy:

http://www.scipy.org/

Machine Learning Tutorial with Theano on Deep Architectures:

http://deeplearning.net/tutorial/

Acknowledgments

I would like to thank all contributors of Theano. For this particular
release, many people have helped, notably (in alphabetical order):


   - affanv14
   - Alexander Matyasko
   - Alexandre de Brebisson
   - Amjad Almahairi
   - Andrés Gottlieb
   - Anton Chechetka
   - Arnaud Bergeron
   - Benjamin Scellier
   - Ben Poole
   - Bhavishya Pohani
   - Bryn Keller
   - Caglar
   - Carl Thomé
   - Cesar Laurent
   - Chiheb Trabelsi
   - Chinnadhurai Sankar
   - Christos Tsirigotis
   - Ciyong Chen
   - David Bau
   - Dimitar Dimitrov
   - Evelyn Mitchell
   - Fábio Perez
   - Faruk Ahmed
   - Fei Wang
   - Fei Zhan
   - Florian Bordes
   - Francesco Visin
   - Frederic Bastien
   - Fuchai
   - Gennadiy Tupitsin
   - Gijs van Tulder
   - Gilles Louppe
   - Gokula Krishnan
   - Greg Ciccarelli
   - gw0 [http://gw.tnode.com/]
   - happygds
   - Harm de Vries
   - He
   - hexahedria
   - hsintone
   - Huan Zhang
   - Ilya Kulikov
   - Iulian Vlad Serban
   - jakirkham
   - Jakub Sygnowski
   - Jan Schlüter
   - Jesse Livezey
   - Jonas Degrave
   - joncrall
   - Kaixhin
   - Karthik Karanth
   - Kelvin Xu
   - Kevin Keraudren
   - khaotik
   - Kirill Bobyrev
   - Kumar Krishna Agrawal
   - Kv Manohar
   - Liwei Cai
   - Lucas Beyer
   - Maltimore
   - Marc-Alexandre Cote
   - Marco
   - Marius F. Killinger
   - Martin Drawitsch
   - Mathieu Germain
   - Matt Graham
   - Maxim Kochurov
   - Micah Bojrab
   - Michael Harradon
   - Mikhail Korobov
   - mockingjamie
   - Mohammad Pezeshki
   - Morgan Stuart
   - Nan Rosemary Ke
   - Neil
   - Nicolas Ballas
   - Nizar Assaf
   - Olivier Mastropietro
   - Ozan Çağlayan
   - p
   - Pascal Lamblin
   - Pierre Luc Carrier
   - RadhikaG
   - Ramana Subramanyam
   - Ray Donnelly
   - Rebecca N. Palmer
   - Reyhane Askari
   - Rithesh Kumar
   - Rizky Luthfianto
   - Robin Millette
   - Roman Ring
   - root
   - Ruslana Makovetsky
   - Saizheng Zhang
   - Samira Ebrahimi Kahou
   - Samira Shabanian
   - Sander Dieleman
   - Sebastin Santy
   - Shawn Tan
   - Simon Lefrancois
   - Sina Honari
   - Steven Bocco
   - superantichrist
   - Taesup (TS) Kim
   - texot
   - Thomas George
   - tillahoffmann
   - Tim Cooijmans
   - Tim Gasper
   - valtron
   - Vincent Dumoulin
   - Vincent Michalski
   - Vitaliy Kurlin
   - Wazeer Zulfikar
   - wazeerzulfikar
   - Wojciech Głogowski
   - Xavier Bouthillier
   - Yang Zhang
   - Yann N. Dauphin
   - Yaroslav Ganin
   - Ying Zhang
   - you-n-g
   - Zhouhan LIN

Also, thank you to all NumPy and SciPy developers, as Theano builds on
their strengths.

All questions/comments are always welcome on the Theano mailing-lists (
http://deeplearning.net/software/theano/#community )