Re: [scikit-learn] MLPClassifier as a feature selector

2018-01-03 Thread Maciek Wójcikowski
I agree with Gael on this one and am happy to help with the PR if you need
any assistance.

Best,
Maciek




Pozdrawiam,  |  Best regards,
Maciek Wójcikowski
mac...@wojcikowski.pl

___
scikit-learn mailing list
scikit-learn@python.org
https://mail.python.org/mailman/listinfo/scikit-learn


Re: [scikit-learn] MLPClassifier as a feature selector

2017-12-29 Thread Gael Varoquaux
I think that a transform method would be good. We would have to add a parameter 
to the constructor to specify which layer is used for the transform. It should 
default to "-1", in my opinion.

Cheers,

Gaël

Sent from my phone. Please forgive typos and briefness.



Re: [scikit-learn] MLPClassifier as a feature selector

2017-12-29 Thread Javier López
Hi Thomas,

It is possible to obtain the activation values of any hidden layer, but the
procedure is not completely straightforward. If you look at the code of
the `_predict` method of the MLPs you can see the following:

```python
def _predict(self, X):
    """Predict using the trained model

    Parameters
    ----------
    X : {array-like, sparse matrix}, shape (n_samples, n_features)
        The input data.

    Returns
    -------
    y_pred : array-like, shape (n_samples,) or (n_samples, n_outputs)
        The decision function of the samples for each class in the model.
    """
    X = check_array(X, accept_sparse=['csr', 'csc', 'coo'])

    # Make sure self.hidden_layer_sizes is a list
    hidden_layer_sizes = self.hidden_layer_sizes
    if not hasattr(hidden_layer_sizes, "__iter__"):
        hidden_layer_sizes = [hidden_layer_sizes]
    hidden_layer_sizes = list(hidden_layer_sizes)

    layer_units = [X.shape[1]] + hidden_layer_sizes + \
        [self.n_outputs_]

    # Initialize layers
    activations = [X]

    for i in range(self.n_layers_ - 1):
        activations.append(np.empty((X.shape[0],
                                     layer_units[i + 1])))
    # forward propagate
    self._forward_pass(activations)
    y_pred = activations[-1]

    return y_pred
```

The line `y_pred = activations[-1]` is responsible for extracting the values
of the last layer, but the `activations` variable contains the values for all
the neurons.

You can turn this function into your own external function (replacing the
`self` attribute with a proper parameter) and add an extra argument which
specifies the layer(s) that you want. I have done this myself in order to
make an AutoEncoderNetwork out of the MLP implementation.

This makes me wonder: would it be worth adding this to sklearn?
A very simple way would be to refactor the `_predict` method, with the
additional layer argument, into a new method `_predict_layer`; then we can
have `_predict` simply call `_predict_layer(..., layer=-1)` and add a new
method (perhaps a `transform`?) that allows getting (raveled) values for an
arbitrary subset of the layers.
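The refactor sketched above can already be approximated from outside the class, using only the public `coefs_` and `intercepts_` attributes. Here is a hypothetical sketch (the function name `predict_layer` and its `layer` convention are mine, not sklearn's), assuming the default 'relu' hidden activation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

def predict_layer(mlp, X, layer=-1):
    """Return the activations of one layer of a fitted MLP.

    `layer` indexes the non-input layers: 0 is the first hidden layer,
    -1 is the output layer. Hypothetical helper, not part of sklearn;
    assumes the default 'relu' hidden activation.
    """
    weights = list(zip(mlp.coefs_, mlp.intercepts_))
    layer = layer % len(weights)          # normalize negative indices
    h = X
    for i, (W, b) in enumerate(weights[: layer + 1]):
        h = h @ W + b
        if i < len(weights) - 1:          # hidden layers use relu here
            h = np.maximum(h, 0)
        # (the real output layer would apply mlp.out_activation_ instead)
    return h

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500,
                    random_state=0).fit(X, y)
hidden = predict_layer(clf, X, layer=0)   # activations of the hidden layer
print(hidden.shape)                        # (200, 10)
```

Note that for the output layer this sketch returns pre-activation values; a proper in-library `_predict_layer` would reuse `_forward_pass` and get the output activation right for free.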

I'd be happy to submit a PR if you guys think it would be interesting for
the project.

Javier





Re: [scikit-learn] MLPClassifier as a feature selector

2017-12-29 Thread Thomas Evangelidis
Alright, with these attributes I can get the weights and biases, but what
about the values on the nodes of the last hidden layer? Do I have to work
them out myself, or is there a straightforward way to get them?



-- 

==

Dr Thomas Evangelidis

Post-doctoral Researcher
CEITEC - Central European Institute of Technology
Masaryk University
Kamenice 5/A35/2S049,
62500 Brno, Czech Republic

email: tev...@pharm.uoa.gr

  teva...@gmail.com


website: https://sites.google.com/site/thomasevangelidishomepage/


Re: [scikit-learn] MLPClassifier as a feature selector

2017-12-06 Thread Manoj Kumar
Hi,

The weights and intercepts are available in the coefs_ and intercepts_
attributes, respectively.

See
https://github.com/scikit-learn/scikit-learn/blob/a24c8b46/sklearn/neural_network/multilayer_perceptron.py#L835
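For concreteness, a minimal sketch of what those attributes look like on a fitted one-hidden-layer MLPClassifier (shapes shown for 20 input features, 10 hidden units, and a binary target):

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=100, n_features=20, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=300,
                    random_state=0).fit(X, y)

# One weight matrix and one bias vector per layer transition:
print([W.shape for W in clf.coefs_])       # [(20, 10), (10, 1)]
print([b.shape for b in clf.intercepts_])  # [(10,), (1,)]
```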



-- 
Manoj,
http://github.com/MechCoder


Re: [scikit-learn] MLPClassifier as a feature selector

2017-12-06 Thread Brown J.B. via scikit-learn
I am also very interested in knowing if there is an sklearn cookbook
solution for getting the weights of a one-hidden-layer MLPClassifier.
J.B.



[scikit-learn] MLPClassifier as a feature selector

2017-12-06 Thread Thomas Evangelidis
Greetings,

I want to train an MLPClassifier with one hidden layer and use it as a
feature selector for an MLPRegressor.
Is it possible to get the values of the neurons from the last hidden layer
of the MLPClassifier to pass them as input to the MLPRegressor?

If it is not possible with scikit-learn, is anyone aware of any
scikit-compatible NN library that offers this functionality? For example
this one:

http://scikit-neuralnetwork.readthedocs.io/en/latest/index.html

I wouldn't like to do this in TensorFlow because the MLP there is much
slower than scikit-learn's implementation.
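The setup described above can be sketched by hand with the public `coefs_` and `intercepts_` attributes: fit the classifier, compute its hidden-layer activations manually, then fit the regressor on them. A hedged sketch, assuming the default 'relu' hidden activation and a synthetic regression target:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier, MLPRegressor

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
y_reg = y + 0.1 * np.random.RandomState(0).randn(200)  # toy regression target

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500,
                    random_state=0).fit(X, y)

# Manual forward pass to the (single) hidden layer; assumes the
# default 'relu' activation.
hidden = np.maximum(X @ clf.coefs_[0] + clf.intercepts_[0], 0)

# Use the hidden-layer activations as input features for the regressor.
reg = MLPRegressor(hidden_layer_sizes=(10,), max_iter=500,
                   random_state=0).fit(hidden, y_reg)
print(reg.predict(hidden).shape)  # (200,)
```

With more than one hidden layer, the same matrix-multiply-then-relu step would be repeated per layer up to the one you want.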


Thomas

