Hello,

You could use the following code:

    import math
    import numpy as np

    X_weight = []
    for x in X:
        # propagate through every layer except the output layer
        for i in range(len(mlp.coefs_) - 1):
            x = np.array([math.tanh(v)
                          for v in (x.dot(mlp.coefs_[i]) + mlp.intercepts_[i])])
        X_weight.append(x)

    Here it is assumed that mlp is your trained MLPClassifier and that it
    was trained with the tanh activation function. X is the matrix for which
    you want to compute the features, and x iterates over its row vectors.
    X_weight is a list of vectors holding the activations of the last hidden
    layer.
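    For larger matrices the same computation can be vectorised, avoiding the
per-element Python loop. Below is a minimal, self-contained sketch:
`hidden_activations` is a hypothetical helper name, and the small random
`coefs`/`intercepts` lists merely stand in for a trained tanh model's
`coefs_` and `intercepts_` attributes:

```python
import numpy as np

def hidden_activations(X, coefs, intercepts):
    """Forward-propagate X through every layer except the output layer,
    applying tanh, and return the last hidden layer's activations."""
    a = np.asarray(X, dtype=float)
    for W, b in zip(coefs[:-1], intercepts[:-1]):
        a = np.tanh(a @ W + b)  # affine map followed by tanh
    return a

# Toy weights standing in for mlp.coefs_ / mlp.intercepts_:
# 4 inputs -> 3 hidden units -> 2 outputs
rng = np.random.default_rng(0)
coefs = [rng.standard_normal((4, 3)), rng.standard_normal((3, 2))]
intercepts = [rng.standard_normal(3), rng.standard_normal(2)]

X = rng.standard_normal((5, 4))  # 5 samples, 4 features
H = hidden_activations(X, coefs, intercepts)
print(H.shape)  # one 3-dimensional hidden vector per sample
```

    The resulting matrix can then be passed as the feature matrix to a
downstream estimator such as an MLPRegressor.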

Kind regards
Orges Leka

2017-12-29 17:46 GMT+01:00 <scikit-learn-requ...@python.org>:

> Send scikit-learn mailing list submissions to
>         scikit-learn@python.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         https://mail.python.org/mailman/listinfo/scikit-learn
> or, via email, send a message with subject or body 'help' to
>         scikit-learn-requ...@python.org
>
> You can reach the person managing the list at
>         scikit-learn-ow...@python.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of scikit-learn digest..."
>
>
> Today's Topics:
>
>    1. Re: MLPClassifier as a feature selector (Thomas Evangelidis)
>    2. Re: MLPClassifier as a feature selector (Javier López)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Fri, 29 Dec 2017 12:09:00 +0100
> From: Thomas Evangelidis <teva...@gmail.com>
> To: Scikit-learn mailing list <scikit-learn@python.org>
> Subject: Re: [scikit-learn] MLPClassifier as a feature selector
> Message-ID:
>         <CAACvdx0gO+5B7L6EyQbQSTWtoGBZKZKKPc0bGsxM
> gkbde6d...@mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Alright, with these attributes I can get the weights and biases, but what
> about the values on the nodes of the last hidden layer? Do I have to work
> them out myself, or is there a straightforward way to get them?
>
> On 7 December 2017 at 04:25, Manoj Kumar <manojkumarsivaraj...@gmail.com>
> wrote:
>
> > Hi,
> >
> > The weights and intercepts are available in the coefs_ and intercepts_
> > attribute respectively.
> >
> > See https://github.com/scikit-learn/scikit-learn/blob/a24c8b46/sklearn/neural_network/multilayer_perceptron.py#L835
> >
> > On Wed, Dec 6, 2017 at 4:56 PM, Brown J.B. via scikit-learn <
> > scikit-learn@python.org> wrote:
> >
> >> I am also very interested in knowing if there is a sklearn cookbook
> >> solution for getting the weights of a one-hidden-layer MLPClassifier.
> >> J.B.
> >>
> >> 2017-12-07 8:49 GMT+09:00 Thomas Evangelidis <teva...@gmail.com>:
> >>
> >>> Greetings,
> >>>
> >>> I want to train a MLPClassifier with one hidden layer and use it as a
> >>> feature selector for an MLPRegressor.
> >>> Is it possible to get the values of the neurons from the last hidden
> >>> layer of the MLPClassifier to pass them as input to the MLPRegressor?
> >>>
> >>> If it is not possible with scikit-learn, is anyone aware of any
> >>> scikit-compatible NN library that offers this functionality? For example
> >>> this one:
> >>>
> >>> http://scikit-neuralnetwork.readthedocs.io/en/latest/index.html
> >>>
> >>> I wouldn't like to do this in Tensorflow because the MLP there is much
> >>> slower than scikit-learn's implementation.
> >>>
> >>>
> >>> Thomas
> >>>
> >>>
> >>> --
> >>>
> >>> ======================================================================
> >>>
> >>> Dr Thomas Evangelidis
> >>>
> >>> Post-doctoral Researcher
> >>> CEITEC - Central European Institute of Technology
> >>> Masaryk University
> >>> Kamenice 5/A35/2S049,
> >>> 62500 Brno, Czech Republic
> >>>
> >>> email: tev...@pharm.uoa.gr
> >>>
> >>>           teva...@gmail.com
> >>>
> >>>
> >>> website: https://sites.google.com/site/thomasevangelidishomepage/
> >>>
> >>>
> >>> _______________________________________________
> >>> scikit-learn mailing list
> >>> scikit-learn@python.org
> >>> https://mail.python.org/mailman/listinfo/scikit-learn
> >>>
> >>>
> >>
> >
> >
> > --
> > Manoj,
> > http://github.com/MechCoder
> >
>
>
>
> ------------------------------
>
> Message: 2
> Date: Fri, 29 Dec 2017 16:45:49 +0000
> From: Javier López <jlo...@ende.cc>
> To: Scikit-learn mailing list <scikit-learn@python.org>
> Subject: Re: [scikit-learn] MLPClassifier as a feature selector
> Message-ID:
>         <CAJn5T5VTwy5q7VM5Dvg5i1qmT8Y9=mvEzL07GmGBhDJ+Bu2aag@mail.
> gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi Thomas,
>
> it is possible to obtain the activation values of any hidden layer, but the
> procedure is not completely straightforward. If you look at the code of
> the `_predict` method of the MLPs you can see the following:
>
> ```python
>     def _predict(self, X):
>         """Predict using the trained model
>
>         Parameters
>         ----------
>         X : {array-like, sparse matrix}, shape (n_samples, n_features)
>             The input data.
>
>         Returns
>         -------
>         y_pred : array-like, shape (n_samples,) or (n_samples, n_outputs)
>             The decision function of the samples for each class in the
> model.
>         """
>         X = check_array(X, accept_sparse=['csr', 'csc', 'coo'])
>
>         # Make sure self.hidden_layer_sizes is a list
>         hidden_layer_sizes = self.hidden_layer_sizes
>         if not hasattr(hidden_layer_sizes, "__iter__"):
>             hidden_layer_sizes = [hidden_layer_sizes]
>         hidden_layer_sizes = list(hidden_layer_sizes)
>
>         layer_units = [X.shape[1]] + hidden_layer_sizes + \
>             [self.n_outputs_]
>
>         # Initialize layers
>         activations = [X]
>
>         for i in range(self.n_layers_ - 1):
>             activations.append(np.empty((X.shape[0],
>                                          layer_units[i + 1])))
>         # forward propagate
>         self._forward_pass(activations)
>         y_pred = activations[-1]
>
>         return y_pred
> ```
>
> the line `y_pred = activations[-1]` is responsible for extracting the
> values for the last layer,
> but the `activations` variable contains the values for all the neurons.
>
> You can turn this function into your own external method (replacing the
> `self` attribute with a proper parameter) and add an extra argument that
> specifies the layer(s) you want. I have done this myself in order to make
> an AutoEncoderNetwork out of the MLP implementation.
>
> This makes me wonder: would it be worth adding this to sklearn?
> A very simple way would be to refactor the `_predict` method, with the
> additional layer argument, into a new method `_predict_layer`; then
> `_predict` could simply call `_predict_layer(..., layer=-1)`, and we could
> add a new method (perhaps a `transform`?) that allows getting the (raveled)
> values for an arbitrary subset of the layers.
>
> I'd be happy to submit a PR if you guys think it would be interesting for
> the project.
>
> Javier
>
>
>
>
> ------------------------------
>
>
> ------------------------------
>
> End of scikit-learn Digest, Vol 21, Issue 29
> ********************************************
>