Javier, thank you for the detailed explanation. Indeed, it would be very
useful to add such a function to the official scikit-learn package instead
of keeping our own modified versions of the MLP. It would make our code
more portable.

On 29 Dec 2017 at 17:47, "Javier López" <jlo...@ende.cc> wrote:

> Hi Thomas,
>
> It is possible to obtain the activation values of any hidden layer, but the
> procedure is not completely straightforward. If you look at the code of
> the `_predict` method of MLPs, you will see the following:
>
> ```python
>     def _predict(self, X):
>         """Predict using the trained model
>
>         Parameters
>         ----------
>         X : {array-like, sparse matrix}, shape (n_samples, n_features)
>             The input data.
>
>         Returns
>         -------
>         y_pred : array-like, shape (n_samples,) or (n_samples, n_outputs)
>             The decision function of the samples for each class in the
> model.
>         """
>         X = check_array(X, accept_sparse=['csr', 'csc', 'coo'])
>
>         # Make sure self.hidden_layer_sizes is a list
>         hidden_layer_sizes = self.hidden_layer_sizes
>         if not hasattr(hidden_layer_sizes, "__iter__"):
>             hidden_layer_sizes = [hidden_layer_sizes]
>         hidden_layer_sizes = list(hidden_layer_sizes)
>
>         layer_units = [X.shape[1]] + hidden_layer_sizes + \
>             [self.n_outputs_]
>
>         # Initialize layers
>         activations = [X]
>
>         for i in range(self.n_layers_ - 1):
>             activations.append(np.empty((X.shape[0],
>                                          layer_units[i + 1])))
>         # forward propagate
>         self._forward_pass(activations)
>         y_pred = activations[-1]
>
>         return y_pred
> ```
>
> The line `y_pred = activations[-1]` extracts the values for the last
> layer, but the `activations` list contains the values for the neurons
> in every layer.
>
> You can turn this method into your own external function (replacing the
> `self` attribute with a proper parameter) and add an extra argument that
> specifies the layer(s) you want; a rough sketch is below. I have done this
> myself in order to make an AutoEncoderNetwork out of the MLP
> implementation.
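>
> For reference, something along these lines (an untested sketch; `predict_layer`
> is just a name I made up, and it still goes through the private
> `_forward_pass` method of the fitted model):
>
> ```python
> import numpy as np
> from sklearn.utils import check_array
>
>
> def predict_layer(model, X, layer=-1):
>     """Forward-pass X through a fitted MLP and return the activations
>     of one layer: -1 is the output layer, -2 the last hidden layer,
>     0 the input itself."""
>     X = check_array(X, accept_sparse=['csr', 'csc', 'coo'])
>
>     # Make sure hidden_layer_sizes is a list
>     hidden_layer_sizes = model.hidden_layer_sizes
>     if not hasattr(hidden_layer_sizes, "__iter__"):
>         hidden_layer_sizes = [hidden_layer_sizes]
>     hidden_layer_sizes = list(hidden_layer_sizes)
>
>     layer_units = [X.shape[1]] + hidden_layer_sizes + [model.n_outputs_]
>
>     # Pre-allocate one array per layer; activations[0] is the input
>     activations = [X]
>     for i in range(model.n_layers_ - 1):
>         activations.append(np.empty((X.shape[0], layer_units[i + 1])))
>
>     # Fill the arrays in place, exactly as `_predict` does
>     model._forward_pass(activations)
>     return activations[layer]
> ```
>
> With that, feeding the last hidden layer of an MLPClassifier into an
> MLPRegressor would just be `hidden = predict_layer(clf, X, layer=-2)`.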
>
> This makes me wonder: would it be worth adding this to sklearn?
> A very simple way would be to refactor the `_predict` method, with the
> additional layer argument, into a new method `_predict_layer`. Then
> `_predict` could simply call `_predict_layer(..., layer=-1)`, and we could
> add a new method (perhaps a `transform`?) that returns the (raveled)
> values for an arbitrary subset of the layers. A rough skeleton of what I
> mean is below.
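>
> Just as a strawman (not actual sklearn code; the body of `_predict_layer`
> would be the current `_predict` body, only returning a different element
> of `activations`):
>
> ```python
>     def _predict_layer(self, X, layer=-1):
>         """Same forward pass as today's `_predict`, but return the
>         activations of an arbitrary layer (-1 = output layer)."""
>         # ... identical set-up and self._forward_pass(activations) ...
>         return activations[layer]
>
>     def _predict(self, X):
>         # unchanged behaviour: return the output layer
>         return self._predict_layer(X, layer=-1)
>
>     def transform(self, X):
>         # e.g. expose the last hidden layer as features
>         return self._predict_layer(X, layer=-2)
> ```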
>
> I'd be happy to submit a PR if you guys think it would be interesting for
> the project.
>
> Javier
>
>
>
> On Thu, Dec 7, 2017 at 12:51 AM Thomas Evangelidis <teva...@gmail.com>
> wrote:
>
>> Greetings,
>>
>> I want to train an MLPClassifier with one hidden layer and use it as a
>> feature selector for an MLPRegressor.
>> Is it possible to get the values of the neurons from the last hidden
>> layer of the MLPClassifier to pass them as input to the MLPRegressor?
>>
>> If it is not possible with scikit-learn, is anyone aware of any
>> scikit-compatible NN library that offers this functionality? For example
>> this one:
>>
>> http://scikit-neuralnetwork.readthedocs.io/en/latest/index.html
>>
>> I wouldn't like to do this in TensorFlow because the MLP there is much
>> slower than scikit-learn's implementation.
>>
>
_______________________________________________
scikit-learn mailing list
scikit-learn@python.org
https://mail.python.org/mailman/listinfo/scikit-learn
