Congrats to Vivek! And to Andreas for making this "challenge-winning"
package possible :)
Best,
Wei
On Sun, Sep 23, 2012 at 3:41 AM, Vivek Sharma wrote:
>
> Thanks Olivier, Andreas. And thanks again to the text classification
> module authors. sklearn rocks!
> I think I was quite lucky, but I'm
Dear All:
Does scikit-learn suppress repeated instances of the same user warning by
design, to avoid emitting it multiple times? For example, with the following
gist https://gist.github.com/3926327, I actually get three warnings rather
than four, so I think it is due to the warning filter. This is a little
problematic whe
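A minimal sketch of the Python behaviour I suspect is at work (under the
default warning filter, a given warning is printed only once per source
location):

import warnings

def noisy_step():
    # Under Python's "default" filter, a warning is shown only once
    # per unique (message, category, module, line) combination.
    warnings.warn("divide by zero encountered", RuntimeWarning)

for _ in range(4):
    noisy_step()  # prints a single warning, not four

with warnings.catch_warnings():
    warnings.simplefilter("always")  # show every occurrence instead
    for _ in range(4):
        noisy_step()  # now all four warnings appear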
Dear Nicolas:
In my understanding, the "partitions" assign discrete labels to each
of the samples. So [0,1,2,3] actually assigns the first sample to cluster
0, the second to cluster 1, etc., while [0,0,0,0] assigns all four samples
to cluster 0 (see the small sketch below). Not sure whether this answers
your problem. Note that
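A small sketch (adjusted_rand_score is just an illustrative choice here; I
am not sure which comparison you are actually computing):

import numpy as np
from sklearn.metrics import adjusted_rand_score

# One label per sample: entry i is the cluster assigned to sample i.
all_separate = np.array([0, 1, 2, 3])  # every sample in its own cluster
all_together = np.array([0, 0, 0, 0])  # all four samples in cluster 0

# Comparing the two partitions with an illustrative metric:
print(adjusted_rand_score(all_separate, all_together))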
Sorry for the duplicate sentences in the last email...
On Fri, Nov 2, 2012 at 11:26 AM, Wei LI wrote:
> Dear Nicolas:
>
> In my understanding, the "partitions" assign discrete labels to each
> of the samples. So [0,1,2,3] actually assigns the first sample to cluster
>
Could you please paste the y_true and y_probas you use to calculate the
roc_curve? Sometimes the curve may look very strange, because the
denominator for precision is the number of retrieved documents, which
varies with the threshold. Could you please also check whether recall is
decreasing in your case?
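For instance, a minimal check along these lines (the y_true/y_probas below
are made-up stand-ins for your data):

import numpy as np
from sklearn.metrics import roc_curve, precision_recall_curve

# Made-up stand-ins for the data in question.
y_true = np.array([0, 0, 1, 1, 0, 1])
y_probas = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7])

fpr, tpr, thresholds = roc_curve(y_true, y_probas)

# Recall should be non-increasing along the returned curve;
# precision need not be monotonic.
precision, recall, pr_thresholds = precision_recall_curve(y_true, y_probas)
print(recall)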
O
voted most algorithms from the contrib repo are included (I
think this is just what R has done?). For those users who use it for
research purposes, it may give them a new place to share benchmark
algorithms or their own algorithms, rather than matlabcentral.
Best Regards,
Wei LI
On Mon, Jan 14, 2013 at
ot
seem to support one-class SVM probability output up to now. You can try
calibrating the SVM output, e.g. with isotonic regression:
http://scikit-learn.org/dev/modules/isotonic.html.
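Roughly what I mean, as an untested sketch (it assumes a small labelled
calibration set is available, which may not hold in a strict one-class
setting; the data below is made up):

import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.isotonic import IsotonicRegression

rng = np.random.RandomState(0)
X_train = rng.randn(200, 2)                      # unlabelled training data
X_calib = np.vstack([rng.randn(50, 2),           # labelled calibration set
                     rng.uniform(-6, 6, (50, 2))])
y_calib = np.array([1] * 50 + [0] * 50)          # 1 = inlier, 0 = outlier

ocsvm = OneClassSVM(nu=0.1, gamma=0.5).fit(X_train)
scores = ocsvm.decision_function(X_calib).ravel()

# Monotonically map the raw decision scores into [0, 1].
iso = IsotonicRegression(out_of_bounds="clip").fit(scores, y_calib)
probas = iso.predict(scores)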
Best Regards,
Wei LI
On Sun, Jan 20, 2013 at 7:14 PM, hp wrote:
> Hello, I am a new user of scikit
Congrats to all and special thanks to Andy !
Best Regards,
Wei LI
On Tue, Jan 22, 2013 at 7:04 AM, Jake Vanderplas <
vanderp...@astro.washington.edu> wrote:
> Congrats!
> Thanks for the hard work, Andy
> Jake
>
>
> On 01/21/2013 03:02 PM, Andreas Mueller wrote:
>
clean API and clear docs.
Best Regards,
Wei LI
On Wed, Jan 23, 2013 at 7:11 AM, Andreas Mueller
wrote:
>
> > Yes, a documentation standard is probably a good idea. I would also push
> > for API compatibility. And that brings me to the point raised in this
> > thread: I really hav
Hi Ariel:
There is a Matlab implementation of spherical k-means:
http://www.mathworks.com/matlabcentral/fileexchange/28902-spherical-k-means
and you can have a look at it :) It seems quite simple and may be easily
translated into a Python function.
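Something like this rough, untested sketch of the basic iteration:

import numpy as np

def spherical_kmeans(X, k, n_iter=50, random_state=0):
    # Rows are projected onto the unit sphere; assignment uses cosine
    # similarity and centroids are re-normalized after each update.
    rng = np.random.RandomState(random_state)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmax(X.dot(centers.T), axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                c = members.sum(axis=0)
                centers[j] = c / np.linalg.norm(c)
    return labels, centers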
Best,
Wei
On Fri, Jan
I am not sure whether it can be randomly initialized many times, picking
the best result, just like in k-means? As an approximation to an integer
programming problem, I think it may be subject to poor local minima,
especially when the problem is quite complex.
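The generic restart pattern I have in mind, sketched here with plain
k-means as a stand-in (sklearn's KMeans already does this internally via
its n_init parameter):

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

best_model, best_inertia = None, float("inf")
for seed in range(10):              # 10 random restarts
    km = KMeans(n_clusters=4, n_init=1, random_state=seed).fit(X)
    if km.inertia_ < best_inertia:  # keep the run with the best objective
        best_model, best_inertia = km, km.inertia_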
Best,
Wei
On Thu, Jan 31, 2013 at 2:29 PM, Gae
If you mean the reference for the option, it is "assign_labels", and if you
mean the reference for the algorithm, I think you can have a look at:
www.cs.berkeley.edu/~malik/papers/SM-ncut.pdf
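A quick sketch of where the option sits (toy data, just for illustration):

from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=200, noise=0.05, random_state=0)

# assign_labels controls how cluster labels are extracted from the
# spectral embedding ("kmeans" is the default, "discretize" the option).
model = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                           assign_labels="discretize", random_state=0)
labels = model.fit_predict(X)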
Best Regards,
Wei
On Fri, Feb 1, 2013 at 4:40 PM, Vince Fernando wrote:
> Could you give me a reference f
Can you provide some data to investigate, if possible? One possibility is
that the subspace coordinates may be shuffled if they have the same
canonical correlation.
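What I mean by that, as a sketch (made-up data; when two components have
nearly equal canonical correlation, their order and sign are not uniquely
determined; in current versions CCA lives in sklearn.cross_decomposition):

import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.RandomState(0)
X = rng.randn(100, 4)
Y = X[:, :2] + 0.1 * rng.randn(100, 2)  # Y correlated with part of X

cca = CCA(n_components=2).fit(X, Y)
X_c, Y_c = cca.transform(X, Y)

# Per-component canonical correlations; ties here make the
# component order/sign implementation-dependent.
print([np.corrcoef(X_c[:, i], Y_c[:, i])[0, 1] for i in range(2)])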
Best,
Wei
On Tue, Feb 5, 2013 at 9:04 AM, ML Fan wrote:
> Hello,
>
> I was trying the CCA module in PLS and comparing the results wit
warm start after we have trained models. I do not have any
sound theory about this, but for SVM in particular, as the global optimum
is guaranteed, maybe a warm start will accelerate convergence
without biasing the trained model?
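sklearn's SVC itself does not expose a warm start as far as I know, but the
idea can be sketched with SGDClassifier (hinge loss, i.e. a linear SVM),
which does have a warm_start flag:

from sklearn.linear_model import SGDClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, random_state=0)

# warm_start=True makes the next fit() start from the previously
# learned coefficients instead of re-initializing them.
clf = SGDClassifier(loss="hinge", warm_start=True, random_state=0)
clf.fit(X[:500], y[:500])  # initial model
clf.fit(X, y)              # continues from the previous solution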
Best Regards,
Wei LI
On Mon, Feb 11, 2013 at 11:03
Congrats! Proud of you :)
On Wed, Apr 17, 2013 at 2:11 PM, Gilles Louppe wrote:
> Congratulations are in order :-)
>
>
> On 17 April 2013 08:06, Peter Prettenhofer
> wrote:
>
>> That's great - congratulations Olivier!
>>
>> Definitely, no pressure ;-)
>>
>>
>> 2013/4/17 Ronnie Ghose
>>
>>> wow
@Andy What do you mean by "blackbox" algorithm? Does that mean something
similar to pylearn2?
@Issam, it seems to me that scalability is a key factor in training deep
models and making them work. Do you have any suggestions on how to make it
scalable while still fitting the sklearn framework? I think sklearn ca
For a Mahalanobis metric, maybe we can do a Cholesky decomposition of the
learned metric and make it a transformer? Then we can chain a kNN
classifier after the transform.
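A sketch of that chaining (the metric M below is a made-up placeholder for
whatever the metric-learning step produces; it must be positive definite
for the Cholesky factor to exist):

import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

class MahalanobisTransformer(BaseEstimator, TransformerMixin):
    # If M = L L^T, then (x - y)^T M (x - y) = ||L^T x - L^T y||^2,
    # so mapping x -> x L turns Mahalanobis distance into Euclidean.
    def __init__(self, M):
        self.M = M
    def fit(self, X, y=None):
        self.L_ = np.linalg.cholesky(self.M)
        return self
    def transform(self, X):
        return X.dot(self.L_)

M = np.array([[2.0, 0.3],   # made-up learned metric (positive definite)
              [0.3, 1.0]])
rng = np.random.RandomState(0)
X, y = rng.randn(100, 2), rng.randint(0, 2, 100)

pipe = make_pipeline(MahalanobisTransformer(M), KNeighborsClassifier(5))
pipe.fit(X, y)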
Best,
Wei
On Tue, Apr 23, 2013 at 3:59 PM, Robert McGibbon wrote:
> Input to such algorithms is usually given as:
>