It definitely works, no modifications necessary.  Lasagne is a pretty thin 
layer on top of Theano, so you can take it or leave it.  It adds layer 
classes that you can plug together, which is pretty convenient, but you can 
certainly write a U-Net in raw Theano if you want.  If you're a beginner 
and just want to experiment with the U-Net architecture, I'd suggest trying 
out Lasagne; it likely implements things that you'd end up writing yourself 
anyway.

Here's a (tested) toy example using Lasagne to get you started.  If you do 
want to go with raw Theano, note that all of the lasagne.layers.* classes 
are just thin wrappers around Theano functions.

import theano
import theano.tensor as T
import lasagne
import numpy

# Custom softmax function to support fully convolutional networks:
# normalizes over the channel axis (axis=1) of a 4D tensor
def softmax(x):
    e_x = T.exp(x - x.max(axis=1, keepdims=True))
    return e_x / e_x.sum(axis=1, keepdims=True)


# image dimensions are [batch, channels, y, x]
inputImage = numpy.ones([1, 1, 32, 32], numpy.float64)
batchsize, channels, y, x = inputImage.shape
padding = 'valid'

input_var = theano.shared(inputImage)
# you don't have to set the batch or spatial dimensions, but I've
# experienced a lot of memory fragmentation on the GPU if they
# aren't specified
inLayer = lasagne.layers.InputLayer(shape=(batchsize, channels, y, x),
                                    input_var=input_var)

# Build the U-Net
# Downward arm - convolution followed by maxpooling, two levels
down1 = net = lasagne.layers.Conv2DLayer(inLayer, num_filters=10,
                                         filter_size=(5, 5), pad=padding)
net = lasagne.layers.MaxPool2DLayer(net, pool_size=(2, 2))

down2 = net = lasagne.layers.Conv2DLayer(net, num_filters=20,
                                         filter_size=(5, 5), pad=padding)
net = lasagne.layers.MaxPool2DLayer(net, pool_size=(2, 2))

# Now back up - upscale -> concatenate with skip connections -> (de)convolve
net = lasagne.layers.Upscale2DLayer(net, 2)
net = lasagne.layers.concat([net, down2], axis=1)
net = lasagne.layers.Deconv2DLayer(net, num_filters=20, filter_size=(5, 5),
                                   crop=padding)

net = lasagne.layers.Upscale2DLayer(net, 2)
net = lasagne.layers.concat([net, down1], axis=1)
net = lasagne.layers.Deconv2DLayer(net, num_filters=10, filter_size=(5, 5),
                                   crop=padding)

# Softmax output layer - change num_filters if you want more classes
net = lasagne.layers.Conv2DLayer(net, num_filters=2, filter_size=(5, 5),
                                 pad=padding, nonlinearity=softmax)
output = lasagne.layers.get_output(net)

outputImage = output.eval()
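
If you want to sanity-check the shape bookkeeping (with 'valid' padding the 
skip connections only line up for certain input sizes), Lasagne can report 
every layer's output shape without evaluating anything.  This snippet is my 
own addition, not part of the tested example above:

# print each layer's output shape to confirm the skip connections line up
for layer in lasagne.layers.get_all_layers(net):
    print(type(layer).__name__, lasagne.layers.get_output_shape(layer))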


Note that you need a custom softmax function because the built-in one 
doesn't support multiple spatial dimensions.  For training, you're probably 
going to want to replace 

input_var = theano.shared(inputImage)

with 

input_var = T.tensor4(name='input')

so that you (or your training function) can pass in arbitrary examples.
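
As a rough illustration of what that training setup could look like, here's 
a sketch under my own assumptions (per-pixel integer labels, the Adam 
update rule, and two classes; target_var, num_classes, and train_fn are 
names I made up):

input_var = T.tensor4(name='input')
target_var = T.itensor3(name='target')  # per-pixel labels, [batch, y, x]
num_classes = 2

# rebuild the network on the symbolic input_var as above, then:
pred = lasagne.layers.get_output(net)  # [batch, classes, y, x]
# flatten to [batch*y*x, classes] so the standard cross-entropy applies
pred_flat = pred.dimshuffle(0, 2, 3, 1).reshape((-1, num_classes))
loss = lasagne.objectives.categorical_crossentropy(
    pred_flat, target_var.reshape((-1,))).mean()

params = lasagne.layers.get_all_params(net, trainable=True)
updates = lasagne.updates.adam(loss, params)
train_fn = theano.function([input_var, target_var], loss, updates=updates)

Keep in mind that the target image has to match the network's output 
resolution, which is smaller than the input here because of the 'valid' 
padding (28x28 for a 32x32 input).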

Regarding your question about Keras vs. Lasagne, I believe Keras' main 
advantage is that it supports both Theano and TensorFlow.  If that's 
important to you, you should probably use Keras.  I use mostly Lasagne 
because it's simpler to create your own layers, and doing the above in 3D 
does require some custom stuff at the moment.
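
To give you an idea of what "creating your own layers" involves, here's a 
minimal sketch of the pattern: subclass lasagne.layers.Layer and implement 
get_output_for (plus get_output_shape_for if the shape changes).  The layer 
itself (ChannelScaleLayer, a learned per-channel scale) is just an invented 
example of mine, not anything from the library:

class ChannelScaleLayer(lasagne.layers.Layer):
    def __init__(self, incoming, **kwargs):
        super(ChannelScaleLayer, self).__init__(incoming, **kwargs)
        num_channels = self.input_shape[1]
        # add_param registers a trainable parameter with the layer
        self.scales = self.add_param(lasagne.init.Constant(1.0),
                                     (num_channels,), name='scales')

    def get_output_for(self, input, **kwargs):
        # broadcast the per-channel scale over batch and spatial axes
        return input * self.scales.dimshuffle('x', 0, 'x', 'x')

The output shape equals the input shape here, so the default 
get_output_shape_for is fine.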
