I agree with Gael on this one and am happy to help with the PR if you need
any assistance.
Best,
Maciek
Best regards,
Maciek Wójcikowski
mac...@wojcikowski.pl
2017-12-29 18:14 GMT+01:00 Gael Varoquaux :
I think that a transform method would be good. We would have to add a parameter
to the constructor to specify which layer is used for the transform. It should
default to "-1", in my opinion.
Cheers,
Gaël
Sent from my phone. Please forgive typos and briefness.
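For illustration only, here is a rough sketch of how such a transform could behave. The `HiddenLayerTransformer` name and its `layer` parameter are hypothetical, not part of scikit-learn, and the sketch assumes the default ReLU hidden activation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

class HiddenLayerTransformer:
    """Hypothetical wrapper exposing a fitted MLP's hidden-layer activations."""

    def __init__(self, mlp, layer=-1):
        self.mlp = mlp      # an already-fitted MLPClassifier/MLPRegressor
        self.layer = layer  # which hidden layer to expose; -1 = the last one

    def transform(self, X):
        a = np.asarray(X, dtype=float)
        # coefs_/intercepts_ hold one (weights, bias) pair per connection;
        # the last pair belongs to the output layer, so skip it.
        hidden = list(zip(self.mlp.coefs_[:-1], self.mlp.intercepts_[:-1]))
        stop = self.layer % len(hidden) + 1  # layer=-1 -> all hidden layers
        for W, b in hidden[:stop]:
            a = np.maximum(a @ W + b, 0.0)  # ReLU, the MLP default
        return a

X, y = make_classification(n_samples=80, n_features=10, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(7, 4), max_iter=400,
                    random_state=0).fit(X, y)
print(HiddenLayerTransformer(clf, layer=-1).transform(X).shape)  # (80, 4)
```

A built-in version would of course live on the estimator itself rather than in a wrapper; the wrapper is just the smallest way to show the intended semantics of the layer index.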
On Dec 29, 2017, at 17:48:
Hi Thomas,
it is possible to obtain the activation values of any hidden layer, but the
procedure is not completely straightforward. If you look at the code of
the `_predict` method of the MLP classes, you can see the following:
```python
def _predict(self, X):
    """Predict using the trained model"""
    ...
```
Alright, with these attributes I can get the weights and biases, but what
about the values on the nodes of the last hidden layer? Do I have to work
them out myself, or is there a straightforward way to get them?
On 7 December 2017 at 04:25, Manoj Kumar wrote:
Hi,
The weights and intercepts are available in the `coefs_` and `intercepts_`
attributes, respectively.
See https://github.com/scikit-learn/scikit-learn/blob/a24c8b46/sklearn/neural_network/multilayer_perceptron.py#L835
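Concretely, both attributes are plain lists with one entry per layer-to-layer connection. A quick look at their shapes for a single-hidden-layer binary classifier (a sketch; a binary MLPClassifier uses a single logistic output unit):

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=60, n_features=8, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(5,), max_iter=300,
                    random_state=0).fit(X, y)

# One weight matrix and one bias vector per connection:
# input->hidden, then hidden->output.
print([W.shape for W in clf.coefs_])       # [(8, 5), (5, 1)]
print([b.shape for b in clf.intercepts_])  # [(5,), (1,)]
```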
On Wed, Dec 6, 2017 at 4:56 PM, Brown J.B. via scikit-learn wrote:
I am also very interested in knowing whether there is a scikit-learn cookbook
solution for getting the weights of a one-hidden-layer MLPClassifier.
J.B.
2017-12-07 8:49 GMT+09:00 Thomas Evangelidis :
Greetings,
I want to train an MLPClassifier with one hidden layer and use it as a
feature selector for an MLPRegressor.
Is it possible to get the values of the neurons from the last hidden layer
of the MLPClassifier to pass them as input to the MLPRegressor?
If it is not possible with
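For what it's worth, the idea in the question can be wired up by hand today: compute the classifier's last-hidden-layer activations from `coefs_`/`intercepts_` and feed them to an MLPRegressor. A sketch on toy data (the classification labels are derived from the regression target purely for illustration; the default ReLU activation is assumed):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPClassifier, MLPRegressor

X, y_reg = make_regression(n_samples=120, n_features=10, random_state=0)
y_cls = (y_reg > np.median(y_reg)).astype(int)  # toy labels for the classifier

clf = MLPClassifier(hidden_layer_sizes=(6,), max_iter=500,
                    random_state=0).fit(X, y_cls)

# Last-hidden-layer activations become the regressor's input features.
H = X
for W, b in zip(clf.coefs_[:-1], clf.intercepts_[:-1]):
    H = np.maximum(H @ W + b, 0)  # ReLU, the default hidden activation

reg = MLPRegressor(hidden_layer_sizes=(6,), max_iter=500,
                   random_state=0).fit(H, y_reg)
print(reg.predict(H).shape)  # (120,)
```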