> >> ___
> >> scikit-learn mailing list
> >> scikit-learn@python.org
> >> https://mail.python.org/mailman/listinfo/scikit-learn
I think that a transform method would be good. We would have to add a parameter
to the constructor to specify which layer is used for the transform. It should
default to "-1", in my opinion.
Cheers,
Gaël
Sent from my phone. Please forgive typos and briefness.
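As a rough illustration, the suggested transform could look something like the sketch below: a minimal subclass that forward-propagates through the hidden layers only, with the layer selector defaulting to -1. The class and parameter names (`MLPTransformer`, `transform_layer`) are made up for illustration; this is not an existing scikit-learn API.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier


class MLPTransformer(MLPClassifier):
    """Hypothetical sketch: an MLP whose transform() returns the
    activations of one hidden layer.  `transform_layer` indexes the
    hidden layers and defaults to -1 (the last hidden layer)."""

    def __init__(self, hidden_layer_sizes=(100,), activation='relu',
                 random_state=None, max_iter=200, transform_layer=-1):
        super().__init__(hidden_layer_sizes=hidden_layer_sizes,
                         activation=activation,
                         random_state=random_state, max_iter=max_iter)
        self.transform_layer = transform_layer

    def transform(self, X):
        # coefs_[i] / intercepts_[i] map layer i to layer i + 1; the last
        # pair belongs to the output layer, so it is skipped here.
        act = {'identity': lambda a: a,
               'logistic': lambda a: 1.0 / (1.0 + np.exp(-a)),
               'tanh': np.tanh,
               'relu': lambda a: np.maximum(a, 0)}[self.activation]
        values = np.asarray(X, dtype=float)
        hidden = []
        for W, b in zip(self.coefs_[:-1], self.intercepts_[:-1]):
            values = act(values @ W + b)
            hidden.append(values)
        return hidden[self.transform_layer]


# Toy usage: XOR-like data, two hidden layers of 5 and 3 units.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 1, 1, 0])
est = MLPTransformer(hidden_layer_sizes=(5, 3), random_state=0).fit(X, y)
print(est.transform(X).shape)  # one row per sample, 3 hidden units
```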
On Dec 29, 2017, 17:48, at
Hi Thomas,
it is possible to obtain the activation values of any hidden layer, but the
procedure is not completely straightforward. If you look at the code of
the `_predict` method of MLPs you can see the following:
```python
def _predict(self, X):
    """Predict using the trained model"""
    ...
```
Alright, with these attributes I can get the weights and biases, but what
about the values on the nodes of the last hidden layer? Do I have to work
them out myself, or is there a straightforward way to get them?
On 7 December 2017 at 04:25, Manoj Kumar wrote: