[theano-users] Re: Non-machine learning applications for Theano?

2017-07-14 Thread Adam Becker
I think an older version of the Nengo spiking neural network simulator used Theano as a backend. It's neuroscience, but still related to ML. Those guys were behind a functional brain model called "Spaun". Btw, I personally used Theano to render fractals on GPU. On Friday, July 14, 2017 at 9:06:20 AM

[theano-users] Re: Theano sort - new GPU implementation

2017-06-29 Thread Adam Becker
Has there been any progress? I'm in need of a sorted TopK on GPU. I can go with a CPU sort, but it seems a bit slow. On Thursday, June 15, 2017 at 4:01:00 AM UTC+8, Victor Campmany wrote: > > Hi, > > We are planning to implement a new GPU accelerated sorting algorithm. We'd > like to know which are the
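(For reference, a minimal sketch of the CPU-sort fallback mentioned above: sorted top-k via a full T.sort, with a hypothetical k; untested against any particular Theano version.)

    import numpy as np
    import theano
    import theano.tensor as T

    x = T.fvector('x')
    k = 5  # hypothetical k
    # sorted top-k via a full sort: sort ascending, reverse, slice
    topk = T.sort(x)[::-1][:k]
    f = theano.function([x], topk)
    print(f(np.random.rand(100).astype('float32')))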

[theano-users] Re: How to convert scipy cdist in theano operation?

2017-06-29 Thread Adam Becker
The role of as_op is to convert a numerical function into a symbolic Op. You don't need @as_op inside a perform method. On Thursday, June 29, 2017 at 9:00:43 PM UTC+8, Giuseppe Angora wrote: > > I need to convert the 'scipy.distance.cdist' in a theano operation. I > start using theano 'as_op': >
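A minimal sketch of the intended usage (assuming scipy.spatial.distance.cdist and double-precision inputs; the decorated function body is plain numeric code, and no perform method is needed):

    import theano
    import theano.tensor as T
    from theano.compile.ops import as_op
    from scipy.spatial.distance import cdist

    # as_op wraps the numeric function; Theano generates perform() for you
    @as_op(itypes=[T.dmatrix, T.dmatrix], otypes=[T.dmatrix])
    def cdist_op(a, b):
        return cdist(a, b)

    x, y = T.dmatrices('x', 'y')
    f = theano.function([x, y], cdist_op(x, y))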

[theano-users] Re: Theano sort - new GPU implementation

2017-06-14 Thread Adam Becker
I'd prefer a gpuarray implementation with an interface similar to numpy's: gpuarray.sort(arr, [axis=-1], [kind='radixsort'], [order='inc']) Deep learning folks would need a fast batched version, especially for float32 / int32 tensors on GPU. But anyway, there should be a general algorithm that deals with all
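For comparison, the closest numpy spelling today (numpy has no order= flag, so a decreasing sort is usually written with a reversed slice):

    import numpy as np

    a = np.random.rand(4, 8).astype('float32')
    inc = np.sort(a, axis=-1)             # increasing sort along the last axis
    dec = np.sort(a, axis=-1)[..., ::-1]  # decreasing: sort, then reverse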

[theano-users] Re: Theano same computation not optimized?

2017-05-20 Thread Adam Becker
> when it can just reuse that computation That's what optimization does. Try running it with device=cpu and optimizer=fast_run On Saturday, May 20, 2017 at 11:55:19 PM UTC+8, Alexander Botev wrote: > > I have the following code: > > >>> a = T.fmatrix() > >>> b = T.sqr(a) > >>> c =
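(A quick way to check that the duplicated computation really gets merged; the second T.sqr line is a stand-in for the code quoted above.)

    import theano
    import theano.tensor as T

    a = T.fmatrix('a')
    b = T.sqr(a)
    c = T.sqr(a)  # same computation, built twice
    f = theano.function([a], [b, c], mode='FAST_RUN')
    # the merge optimizer should leave a single Elemwise{sqr} node:
    theano.printing.debugprint(f)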

[theano-users] Re: Theano accept data on GPU?

2017-05-09 Thread Adam Becker
> effect. One way to modify an input `x` to a function evaluating f(x) is to > define a new input `y` and use `theano.function([y], f(x), givens={x: > g(y)})`. Another solution consists in using `theano.clone`, e.g. like this: > `theano.function([x], theano.clone(f(x), replace={x: g(x)
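A minimal sketch of the theano.clone approach quoted above, with made-up stand-ins for f and g:

    import theano
    import theano.tensor as T

    x = T.fvector('x')
    f_x = T.sum(x ** 2)  # stand-in for f(x)
    g_x = 2 * x          # stand-in for g(x)
    # compile f(g(x)) by swapping x for g(x) inside the existing graph
    h = theano.function([x], theano.clone(f_x, replace={x: g_x}))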

[theano-users] Re: Theano accept data on GPU?

2017-05-09 Thread Adam Becker
In the main graph, replace the input variables with ones of type theano.gpuarray.GpuArrayType (this can be done using the givens parameter of theano.function). Then feed a pygpu.gpuarray.GpuArray object directly to the compiled function. pygpu.gpuarray.asarray can be used to move a numpy array to the GPU. On
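An untested sketch of those three steps, assuming a configured gpuarray backend (e.g. device=cuda) and the default context:

    import numpy as np
    import theano
    import theano.tensor as T
    from theano.gpuarray.type import GpuArrayType, get_context
    import pygpu

    x = T.fmatrix('x')
    y = x * 2  # stand-in for the main graph

    # a GPU-typed stand-in for x, substituted via givens
    x_gpu = GpuArrayType('float32', (False, False))('x_gpu')
    f = theano.function([x_gpu], y, givens={x: x_gpu})

    # move a numpy array to the GPU and call the compiled function
    ctx = get_context(None)
    data = pygpu.gpuarray.asarray(np.ones((3, 3), 'float32'), context=ctx)
    print(f(data))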

[theano-users] Re: Variable number of arguments in scan

2017-05-04 Thread Adam Becker
I don't think this works. The inner function of scan will be converted to a graph, then compiled inside the ScanOp. If you change the nonlocal variable "order" on the fly, the change won't be reflected in the compiled function. If the inner loop itself can be written as a scan, you can just make
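A minimal sketch of the nested-scan alternative (a made-up inner reduction, not the poster's actual loop):

    import numpy as np
    import theano
    import theano.tensor as T

    xs = T.fmatrix('xs')

    def inner_step(row):
        # the inner loop expressed as its own scan over one row
        partials, _ = theano.scan(lambda v, acc: acc + v,
                                  sequences=row,
                                  outputs_info=np.float32(0))
        return partials[-1]

    sums, _ = theano.scan(inner_step, sequences=xs)
    f = theano.function([xs], sums)
    print(f(np.ones((2, 3), 'float32')))  # [3. 3.]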

[theano-users] Re: theano.OpFromGraph doesn't work

2017-04-24 Thread Adam Becker
Hi, I got the expected result [1., 8., 3.] on my machine, on both the cpu and gpuarray backends. The gradient override feature requires Theano 0.9rc1 or later. Which version do you have installed? Also, you may want to pass inline=True to the constructor, since OpFromGraph does not support GPU-only graphs when
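(A minimal sketch of the inline=True suggestion, with a trivial wrapped graph:)

    import theano
    import theano.tensor as T

    x = T.fvector('x')
    # inline=True expands the wrapped graph back into the caller at
    # optimization time, sidestepping the GPU limitation mentioned above
    op = theano.OpFromGraph([x], [2 * x], inline=True)

    y = T.fvector('y')
    f = theano.function([y], op(y))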

[theano-users] Re: TypeError: Cannot convert Type TensorType(float64, matrix) (of Variable Elemwise{add,no_inplace}.0) into Type TensorType(float32, matrix)

2017-04-18 Thread Adam Becker
I encountered something similar a few days ago. It turned out that float32 divided by int32 gives float64. This is consistent with numpy behavior; a bit weird, but true. Not sure if it's related. In general, traversing the graph (prior to compilation) to find the problematic nodes can be helpful. On Sunday,
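The promotion rule is easy to check in both libraries:

    import numpy as np
    import theano.tensor as T

    a = T.fvector('a')  # float32
    b = T.ivector('b')  # int32
    print((a / b).dtype)                        # float64
    print((np.float32(1) / np.int32(2)).dtype)  # float64, same numpy rule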

[theano-users] Re: OpFromGraph on GPU

2017-03-23 Thread Adam Becker
OpFromGraph is still under development. If you want to use it on GPU, the safest way is to set inline=True at the constructor (requires 0.9rc1+). This will cause more compilation time, though. Or you can try constructing a GPU-only graph by hand and building OfG with that; I didn't test that

Re: [theano-users] custom elemwise Op not getting fused

2017-03-14 Thread Adam Becker
code (you have). Another is if the node is used by more than 1 other node > in the graph. We don't want to duplicate computation, so we don't fuse > them. I would need a working example to investigate more. > > Fred > > On Thu, Mar 9, 2017 at 8:51 AM Adam Becker <junkk...@gmail.c
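(A small illustration of the multiple-consumers rule Fred describes, with a made-up graph:)

    import theano
    import theano.tensor as T

    a = T.fvector('a')
    b = T.exp(a)
    # b feeds two consumers, so exp is not fused into either of them;
    # fusing would duplicate the exp computation
    f = theano.function([a], [b + 1, b * 2])
    theano.printing.debugprint(f)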

[theano-users] custom elemwise Op not getting fused

2017-03-09 Thread Adam Becker
Hi, I'm close to a working PoC for the generalized elemwise Op (CPU only for now). However, it appears the Op is not getting properly fused with other elemwise Ops. There are two new scalar Ops, ElemIdx and ElemAt, with respective Elemwise subclasses TensorIdx and TensorAt. The definitions of the
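(The actual ElemIdx/ElemAt definitions are cut off above; below is only a generic sketch of how a scalar Op pairs with an Elemwise wrapper, using a placeholder identity op:)

    import theano
    from theano.scalar import ScalarOp, same_out
    from theano.tensor.elemwise import Elemwise

    class MyScalarOp(ScalarOp):
        """Placeholder scalar Op; not the ElemIdx/ElemAt from the thread."""
        nin = 1

        def impl(self, x):
            return x  # numeric fallback

        def c_code(self, node, name, inputs, outputs, sub):
            (x,), (z,) = inputs, outputs
            return "%(z)s = %(x)s;" % locals()

    # the tensor-level op is the scalar op lifted through Elemwise
    my_elemwise = Elemwise(MyScalarOp(same_out, name='my_scalar_op'))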

[theano-users] Re: how do I force an elemwise op to be included within the tensor elemwise loop

2017-03-05 Thread Adam Becker
Never mind, I figured it out. I had to disable constant folding and wrap with tensor.elemwise. On Sunday, March 5, 2017 at 10:28:39 AM UTC+8, Adam Becker wrote: > > Hi, > > I'm writing an elemwise Op for a special purpose; its c_code should be > different when it's working with
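(For reference, a graph-level way to disable constant folding; the thread's actual fix may have been per-Op, which isn't shown:)

    import theano
    import theano.tensor as T

    mode = theano.compile.get_default_mode().excluding('constant_folding')
    c = T.constant(2.0) * T.constant(3.0)  # would normally fold to 6.0
    f = theano.function([], c, mode=mode)
    theano.printing.debugprint(f)  # the Elemwise{mul} node survives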

[theano-users] how do I force an elemwise op to be included within the tensor elemwise loop

2017-03-04 Thread Adam Becker
Hi, I'm writing an elemwise Op for a special purpose; its c_code should be different when it's working with tensor objects. (Actually, it's for gh:#5471.) Currently I'm only working with the CPU version. Here's the approach I'm taking so far, I

[theano-users] Re: Help with constructing basic theano op

2017-03-03 Thread Adam Becker
In the perform method, you should call numpy directly instead of using symbolic expressions. Or, if you intend to build an Op out of existing ones, you should use theano.OpFromGraph. On Friday, March 3, 2017 at 1:25:31 AM UTC+8, Peter St. John wrote: > > I'm having an issue making a basic op that
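A minimal toy Op making the point (the usual DoubleOp-style example, not the poster's code): perform() runs plain numpy, never symbolic T.* calls.

    import numpy as np
    import theano
    import theano.tensor as T

    class DoubleOp(theano.Op):
        __props__ = ()

        def make_node(self, x):
            x = T.as_tensor_variable(x)
            return theano.Apply(self, [x], [x.type()])

        def perform(self, node, inputs, output_storage):
            (x,) = inputs
            # numeric code only: numpy in, numpy out
            output_storage[0][0] = np.asarray(2 * x)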

[theano-users] Re: Speed difference between backends

2017-02-17 Thread Adam Becker
I noticed this problem in late 2016 while investigating scan overhead. Check here. The comment also has a link to a script that helps reproduce the problem. On Friday, February 17, 2017 at 7:43:39 PM UTC+8, Ozan Çağlayan

Re: [theano-users] Re: Multiple matrix product in theano

2017-02-16 Thread Adam Becker
_reduce(T.batched_dot(A_[::2], A_[1::2]), K_ // 2) On Thursday, February 16, 2017 at 10:52:28 PM UTC+8, Adam Becker wrote: > > Just for the dot reduction part, using batched_dot should give an O(log(n)) > graph depth: > > # assumes K > 1 > B = A > for i in reversed(bin(K)[3:]): &

Re: [theano-users] Re: Multiple matrix product in theano

2017-02-16 Thread Adam Becker
Just for the dot reduction part, using batched_dot should give an O(log(n)) graph depth:

    # assumes K > 1
    B = A
    for i in reversed(bin(K)[3:]):
        if not int(i):
            B = T.batched_dot(B[::2], B[1::2])
        else:
            B = T.dot(T.batched_dot(B[:-1:2], B[1::2]), B[-1])

This works if K is

[theano-users] How to partially override gradient computation

2016-11-15 Thread Adam Becker
Asked here. Encountered the problem while trying to implement an approximation method for Real-Time Recurrent Learning.
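(For later readers: the OpFromGraph gradient-override feature mentioned in the 2017 threads above is one way to do this. A minimal sketch with a made-up replacement gradient:)

    import theano
    import theano.tensor as T

    x = T.fvector('x')

    def custom_grad(inputs, output_grads):
        (x_,), (g,) = inputs, output_grads
        return [g * 0.5]  # hypothetical approximate gradient

    op = theano.OpFromGraph([x], [x ** 2], grad_overrides=custom_grad)
    cost = op(x).sum()
    grad = theano.grad(cost, x)  # uses custom_grad, not d(x**2)/dx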