AFAIK deep learning in general has no problem with redundant inputs. If 
your first layer has fewer nodes than there are inputs, then the redundant 
(or nearly redundant) input nodes will effectively be combined into one 
node (... more or less). And there are approaches that favor so-called 
overcomplete representations, with more hidden nodes per layer than input 
nodes.
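
A toy sketch of that combining effect (made-up numbers, not from any 
particular framework): if two input features are identical, folding their 
two first-layer weights into a single weight gives the same pre-activation, 
so a narrower first layer loses nothing:

```julia
# Toy illustration: when feature 3 duplicates feature 2, any pair of
# first-layer weights (w2, w3) on them acts like the single weight
# (w2 + w3) on one copy of the feature.
x = [1.0, 3.0, 3.0]            # feature 3 duplicates feature 2
W = [0.4 0.7 -0.2]             # 1 hidden unit, 3 inputs

x_reduced = [1.0, 3.0]         # drop the duplicate feature
W_reduced = [0.4 (0.7 - 0.2)]  # fold w3 into w2

W * x ≈ W_reduced * x_reduced  # same hidden pre-activation
```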

Cédric

On Saturday, January 30, 2016 at 9:46:06 AM UTC-5, [email protected] 
wrote:
>
> Thanks, that's pretty much my understanding. Scaling the inputs seems to 
> be important, too, from what I read. I'm also interested in a framework 
> that will trim off redundant inputs. 
>
> I have run the Mocha tutorial examples, and it looks very promising 
> because the structure is clear, and there are C++ and CUDA backends. The 
> C++ backend, with OpenMP, gives me a good performance boost over the pure 
> Julia backend. However, I'm not so sure that it will allow for trimming 
> redundant inputs. Also, I have some ideas on how to restrict the net to 
> remove observationally equivalent configurations, which should aid in 
> training, and I don't think I could implement those ideas with Mocha.
>
> From what I see, the focus of much recent work in neural nets seems to be 
> on classification and labeling of images, and regression examples using the 
> modern tools seem to be scarce. I'm wondering if that's because other tools 
> work better for regression, or simply because it's an old problem that is 
> considered to be well studied. I would like to see some examples of 
> regression nets that work well, using the modern tools, though, if there 
> are any out there.
>
> On Saturday, January 30, 2016 at 2:32:16 PM UTC+1, Jason Eckstein wrote:
>>
>> I've been using NNs for regression and I've experimented with Mocha.  I 
>> ended up coding my own network for speed, but in general you simply 
>> leave the final output of the network as a linear combination, without 
>> applying an activation function.  That way the output can represent any 
>> real number, rather than being compressed into a 0-to-1 or -1-to-1 range 
>> as for classification.  You can leave the rest of the network unchanged.
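>>
>> A minimal pure-Julia sketch of that setup (made-up layer sizes and 
>> random weights; not Mocha code):
>>
>> ```julia
>> # One tanh hidden layer; the output layer is a plain linear map
>> # (no activation), so predictions can be any real numbers.
>> W1, b1 = randn(8, 3), zeros(8)    # 3 inputs  -> 8 hidden units
>> W2, b2 = randn(2, 8), zeros(2)    # 8 hidden  -> 2 real-valued outputs
>>
>> predict(x) = W2 * tanh.(W1 * x .+ b1) .+ b2
>>
>> y = predict([0.1, -0.4, 2.0])     # 2-element vector of reals
>> ```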
>>
>> On Saturday, January 30, 2016 at 3:45:27 AM UTC-7, [email protected] 
>> wrote:
>>>
>>> I'm interested in using neural networks (deep learning) for multivariate 
>>> multiple regression, with multiple real valued inputs and multiple real 
>>> valued outputs. At the moment, the mocha.jl package looks very promising, 
>>> but the examples seem to be all for classification problems. Does anyone 
>>> have examples of use of mocha (or other deep learning packages for Julia) 
>>> for regression problems? Or any tips for deep learning and regression?
>>>
>>
