2012/2/21 Gael Varoquaux:
>
> Agreed. Jenkins would be great. One thing that I worry about is the
> build time that it is going to take us. In addition, it is time to take a
> close look at that during the pull requests, and not let a pull request
> go through if it increases the number of build errors.
On Tue, Feb 21, 2012 at 12:59:40AM +0100, Andreas wrote:
> Finally, I got the docs in a state where they should
> build without any warnings or errors.
Wow! I am truly impressed.
> (I distinctly remember Gael promising a beer for this one ;)
Well, next time we meet. You can come to Paris any time.
Nice work Andy! I agree that any PR now should meet this standard
before merge.
And I'll certainly buy you a beer for this next time we meet!
Jake
Andreas wrote:
> Hey everybody.
> Finally, I got the docs in a state where they should
> build without any warnings or errors.
>
> (I distinctly remember Gael promising a beer for this one ;)
Hey everybody.
Finally, I got the docs in a state where they should
build without any warnings or errors.
(I distinctly remember Gael promising a beer for this one ;)
I think it would be great if we could keep it this way.
Maybe now would be a good time to let Jenkins watch
the doc building process.
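A minimal sketch of the kind of check a Jenkins job could run, assuming the
docs are built with Sphinx from a doc/ directory (the paths and builder here
are illustrative, not our actual job config):

    # check_docs.py -- fail if the Sphinx build emits any warning.
    # Sphinx's -W flag turns warnings into errors, so the exit
    # status alone tells Jenkins whether the docs are clean.
    import subprocess
    import sys

    ret = subprocess.call(
        ["sphinx-build", "-W", "-b", "html", "doc", "doc/_build/html"])
    sys.exit(ret)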
http://pycon-au.org/2012/
Australia's official Python conference, which will be held August 18th and
19th, 2012 in Hobart, Tasmania (the island south of the mainland :)
I'm working on arrangements through work to go, and was wondering if anyone
else reading had plans to attend.
Thanks,
Robert
2012/2/20 Nicholas Pilkington:
> I was wondering if there were any immediate plans to implement Multiple
> Kernel SVMs in sklearn? There is a lot of SVM functionality and it would be
> nice to extend to MKL.
> If not, what would be the best way to get involved in implementing them?
Alex started a
I don't need to use one; I was more interested in getting one implemented,
as SHOGUN seems to be the only other ML toolkit offering one, and it would
be nice to have one in sklearn.
Andreas, in response to you: I mean learning the weights of individual
kernels. Though the improvements o
Hello
On 20/02/2012 15:41, Andreas wrote:
Hi Nick.
AFAIK no one plans to work on that at the moment.
I was thinking about it a while ago, but then decided not to do it.
By MKL you mean learning the weights of different kernels, right?
Or do you mean inducing some group sparsity on the different kernels?
On 02/20/2012 03:53 PM, xinfan meng wrote:
> BTW, the feature matrix at the bottom of their home page is very
> interesting; maybe sklearn should provide one.
Thanks for pointing to this matrix. That is interesting. I don't think
we should duplicate that, but we
could link to it.
BTW, it seems the
BTW, the feature matrix at the bottom of their home page is very
interesting; maybe sklearn should provide one.
On Mon, Feb 20, 2012 at 10:50 PM, xinfan meng wrote:
> If you just need an MKL implementation, you should take a look at Shogun
> (http://www.shogun-toolbox.org/). It also provides a Python interface.
If you just need an MKL implementation, you should take a look at Shogun
(http://www.shogun-toolbox.org/). It also provides a Python interface.
On Mon, Feb 20, 2012 at 10:36 PM, Nicholas Pilkington <
[email protected]> wrote:
> I was wondering if there were any immediate plans to implement Multiple
> Kernel SVMs in sklearn?
Hi Nick.
AFAIK no one plans to work on that at the moment.
I was thinking about it a while ago, but then decided not to do it.
By MKL you mean learning the weights of different kernels, right?
Or do you mean inducing some group sparsity on the different kernels?
From what I heard recently, le
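To make the weight-learning variant concrete: with fixed weights, a combined
kernel already works with the existing SVC via kernel='precomputed'; MKL
proper would learn the weights w below instead of fixing them. A minimal
sketch on toy data (the weights and kernel choices are illustrative
assumptions):

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import linear_kernel, rbf_kernel

    rng = np.random.RandomState(0)
    X = rng.randn(100, 5)                 # toy data
    y = rng.randint(0, 2, 100)            # toy binary labels

    # Fixed convex combination of two base kernels; an MKL solver
    # would optimize these weights jointly with the SVM instead.
    w = [0.7, 0.3]
    K = w[0] * rbf_kernel(X, gamma=0.1) + w[1] * linear_kernel(X)

    clf = SVC(kernel='precomputed').fit(K, y)
    print(clf.predict(K[:5]))             # rows of K = kernel vs. training set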
I was wondering if there were any immediate plans to implement Multiple
Kernel SVMs in sklearn? There is a lot of SVM functionality and it would
be nice to extend to MKL.
If not, what would be the best way to get involved in implementing them?
Nick
--
Nicholas C.V. Pilkington
University of Cambridge
2012/2/20 David Warde-Farley:
> On 2012-02-20, at 3:46 AM, Olivier Grisel wrote:
>
>> How high dimensional is this? GraphLasso works on the empirical
>> covariance matrix, which is implemented as a 2D numpy array with shape
>> (n_features, n_features). It won't fit in memory for n_features >
>> 10
On 2012-02-20, at 3:46 AM, Olivier Grisel wrote:
> How high dimensional is this? GraphLasso works on the empirical
> covariance matrix, which is implemented as a 2D numpy array with shape
> (n_features, n_features). It won't fit in memory for n_features >
> 1 and it might be intractably too long
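For a rough sense of scale, the footprint is easy to estimate (the feature
counts below are illustrative):

    # Memory for a dense float64 empirical covariance matrix.
    n_features = 10000
    n_bytes = n_features ** 2 * 8      # 8 bytes per float64 entry
    print(n_bytes / 1e9, "GB")         # 0.8 GB at 10,000 features;
                                       # 100,000 features would need 80 GB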
Thanks! And does it make sense to use L1 regularisation here (irrespective
of the graph structure)?
Best,
Mathias
On Mon, Feb 20, 2012 at 11:06 AM, Gael Varoquaux <
[email protected]> wrote:
> On Mon, Feb 20, 2012 at 10:35:51AM +0100, Mathias Verbeke wrote:
> > I would have around
On Mon, Feb 20, 2012 at 10:35:51AM +0100, Mathias Verbeke wrote:
> I would have around 1 features. I'm working on a sentence
> classification problem,
Graph lasso won't work on such a problem.
> I would like to do feature selection, to reduce the number of
> dimensions, and was thinking to ta
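For the feature-selection route, one common approach is an L1-penalized
linear model, which zeroes out most coefficients. A minimal sketch on toy
sparse data (all shapes and parameters below are illustrative assumptions,
not a recommendation for this dataset):

    import numpy as np
    import scipy.sparse as sp
    from sklearn.svm import LinearSVC

    rng = np.random.RandomState(0)
    X = sp.rand(200, 10000, density=0.01, format='csr')  # toy sparse features
    y = rng.randint(0, 2, 200)                           # toy labels

    # The L1 penalty drives many coefficients to exactly zero; the
    # surviving columns form the selected feature subset. C trades
    # off sparsity against fit.
    clf = LinearSVC(penalty='l1', dual=False, C=1.0).fit(X, y)
    kept = np.where(np.any(clf.coef_ != 0, axis=0))[0]
    X_reduced = X[:, kept]
    print(X_reduced.shape)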
Hi Olivier,
Thanks for the fast reply!
> I have a high-dimensional feature set, where the features originate from
> graphs. I was wondering if the use of GraphLasso applies and would be a
> good idea in this case? And if it would be, can I then just apply it on
> the feature vectors or do I need to input the originating graph structure
2012/2/20 Mathias Verbeke:
> Hi all,
>
> I have a high-dimensional feature set, where the features originate from
> graphs. I was wondering if the use of GraphLasso applies and would be a good
> idea in this case? And if it would be, can I then just apply it on the
> feature vectors or do I need to input the originating graph structure
Hi all,
I have a high-dimensional feature set, where the features originate from
graphs. I was wondering if the use of GraphLasso applies and would be a
good idea in this case? And if it would be, can I then just apply it on the
feature vectors or do I need to input the originating graph structure