> I'd love to add non-negative lasso to this mix. However, I noticed
> that cd_fast.pyx is missing the positive=True option in multitask
> lasso (as well as the sparse variant). Is there any other reason for
> this or just that nobody needed it?
indeed nobody needed it :)
thanks for looking into it!
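For what it's worth, the positive=True path is essentially a one-line change to the coordinate update: the soft-threshold gets clamped at zero. A pure-NumPy sketch of non-negative lasso by coordinate descent (a didactic illustration only; the function name and the unnormalized objective are my own choices, this is not the cd_fast.pyx code):

```python
import numpy as np

def nn_lasso_cd(X, y, alpha, n_iter=200):
    """Non-negative lasso via cyclic coordinate descent.

    Minimizes 0.5 * ||y - X w||^2 + alpha * sum(w)  subject to w >= 0.
    Under the positivity constraint the usual soft-threshold
    sign(rho) * max(|rho| - alpha, 0) collapses to max(rho - alpha, 0).
    """
    n_features = X.shape[1]
    w = np.zeros(n_features)
    col_sq = (X ** 2).sum(axis=0)        # per-feature squared norms
    for _ in range(n_iter):
        for j in range(n_features):
            if col_sq[j] == 0.0:
                continue
            # correlation of feature j with the residual, excluding j itself
            rho = X[:, j] @ (y - X @ w) + col_sq[j] * w[j]
            w[j] = max(rho - alpha, 0.0) / col_sq[j]
    return w
```

The multitask and sparse variants would need the same clamp applied to their respective update rules.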
On 11/06/2013 05:00 PM, Jim wrote:
>> Then you don't need a OneVsRestClassifier as OvR is the default
>> strategy for SGD. You do need to put a maximum on the number of
>> classes before you start learning, though.
I see. Thank you for the advice. This was an initial, novice iteration of the
solution and needs improvement, of course.
Hi Paolo.
Could you please give a link to the reference paper? I couldn't find it.
Could you maybe also give a quick description of the algorithm? I'm
afraid I'm not familiar with it (by that name).
Thanks,
Andy
On 11/06/2013 07:05 AM, Paolo Di Prodi wrote:
> Hello there,
I am a new user of Scikit
Thanks for the interest, guys; I'll try to address some of your comments.
I haven't pushed the code anywhere yet. Putting aside potential API issues,
there are currently no tests, there may be some numerical issues that still
need to be ironed out, and some data types were specialized for Cython for the
[Mistakenly posted as separate thread before, please ignore previous post]
I see. Thank you for the advice. This was an initial, novice iteration of the
solution and needs improvement, of course. In terms of which, in order to
keep the behaviour of the classifier consistent, instead of a single
cla
Jacques, is your LambdaMART implementation available somewhere?
Mathieu
On Thu, Nov 7, 2013 at 12:09 AM, Mathieu Blondel wrote:
> On a related note, I implemented NDCG with a slightly different interface
> than Olivier's implementation:
> https://gist.github.com/mblondel/7337391
>
> My implementation
> Then you don't need a OneVsRestClassifier as OvR is the default
> strategy for SGD. You do need to put a maximum on the number of
> classes before you start learning, though.
I see. Thank you for the advice. This was an initial, novice iteration of the
solution and needs improvement, of course. In t
Hi everybody,
I just updated the gist quite a lot, please take a look:
http://nbviewer.ipython.org/7224672
I'll go to sleep and interpret it with a fresh eye tomorrow, but
what's interesting at the moment is:
KKT's performance is quite constant,
PG with sparsity penalties (the new, simpler ones,
It'd be nice to merge #2199
Cheers,
N
On 6 November 2013 16:16, Peter Prettenhofer wrote:
> Given that snow will arrive late, I too should be able to get some stuff
> done.
>
> I want to get #2570 to MRG within one week so that we have plenty of
> time to review and tweak.
>
> Furthermo
Hello there,
I am a new user of Scikit and I was wondering if somebody has implemented, or
is willing to implement, the prototype extraction algorithm (Gonzales 1985),
a linear-time algorithm very useful for incremental data set clustering.
In case this algorithm is not implemented, should I go ahead an
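If "Gonzales 1985" refers to Gonzalez's farthest-point clustering (the classic 2-approximation for the k-center problem), the core is a farthest-first traversal. That identification is my assumption, as is everything in this NumPy sketch:

```python
import numpy as np

def farthest_first_prototypes(X, k, rng=None):
    """Farthest-first traversal (Gonzalez-style k-center heuristic).

    Starts from a random point, then repeatedly adds the point farthest
    from the prototypes chosen so far. Each point's nearest-prototype
    distance is updated incrementally, so the cost is O(n * k).
    """
    X = np.asarray(X, dtype=float)
    rng = np.random.RandomState(rng)
    idx = [rng.randint(len(X))]
    # distance from every point to its nearest chosen prototype
    d = np.linalg.norm(X - X[idx[0]], axis=1)
    for _ in range(1, k):
        nxt = int(np.argmax(d))
        idx.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(idx)
```

Because the nearest-prototype distances are kept between iterations, new points can also be streamed in, which would fit the incremental-clustering use case mentioned.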
Given that snow will arrive late, I too should be able to get some stuff
done.
I want to get #2570 to MRG within one week so that we have plenty of time
to review and tweak.
Furthermore, I wanted to have a look at supporting different dtypes for SGD.
@Olivier: I will team up with you on r
On a related note, I implemented NDCG with a slightly different interface
than Olivier's implementation:
https://gist.github.com/mblondel/7337391
My implementation takes y_true and y_pred as arguments and so is more
consistent with other metrics in scikit-learn. However y_pred might not be
availab
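To make the interface being discussed concrete, here is a small NDCG in the (y_true, y_pred) style; this is my own sketch, not the gist's code, and it assumes the exponential-gain convention:

```python
import numpy as np

def dcg_at_k(relevances, k):
    """Discounted cumulative gain of a relevance ordering, top k."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))
    return np.sum((2 ** rel - 1) / discounts)

def ndcg_score(y_true, y_pred, k=10):
    """NDCG with a (y_true, y_pred) signature, like other sklearn metrics.

    Items are ranked by predicted score; the true relevances in that
    order are discounted and normalized by the ideal ordering's DCG.
    """
    y_true = np.asarray(y_true, dtype=float)
    order = np.argsort(y_pred)[::-1]           # best predicted first
    ideal = dcg_at_k(np.sort(y_true)[::-1], k)
    if ideal == 0:
        return 0.0
    return dcg_at_k(y_true[order], k) / ideal
```

Note that only the relative order of y_pred matters, which is exactly why a (y_true, y_pred) signature works even when the ranker outputs uncalibrated scores.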
2013/11/6 Olivier Grisel :
> I can help prepare the release by going through the open issues and
> pull requests on github and making a summary next week.
>
> All three PRs highlighted by Gilles seem very important to me. I
> started reading the ESLII chapter on MARS to help with the review
I can help prepare the release by going through the open issues and
pull requests on github and making a summary next week.
All three PRs highlighted by Gilles seem very important to me. I
started reading the ESLII chapter on MARS to help with the review
of the PR (I got interrupted by 2 co
This is very interesting. I have been playing recently with learning
to rank. Right now I just use point-wise regressors, and I implemented
NDCG as a ranking metric to compare the models. I tried to
experiment with parallelizing extra trees here:
http://nbviewer.ipython.org/urls/raw.github.c
2013/11/6 Jim <[email protected]>:
> No I am primarily working on multiclass classification with constantly
> increasing number of classes
Then you don't need a OneVsRestClassifier as OvR is the default
strategy for SGD. You do need to put a maximum on the number of
classes before you start learni
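Concretely, "put a maximum on the number of classes" means passing the full label set to the first partial_fit call; a sketch with made-up data (assuming scikit-learn's SGDClassifier API):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
clf = SGDClassifier(random_state=0)

# Declare every class that may ever appear, even if early
# mini-batches only contain a subset of them.
all_classes = np.arange(10)

X1, y1 = rng.randn(20, 4), rng.randint(0, 3, size=20)   # classes 0-2 only
clf.partial_fit(X1, y1, classes=all_classes)

X2, y2 = rng.randn(20, 4), rng.randint(3, 10, size=20)  # new classes later
clf.partial_fit(X2, y2)              # classes= not needed after the first call
```

The upper bound is needed because the OvR weight matrix is allocated once, with one row per declared class.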
Hi Jacques,
Very exciting -- this was on my wish list for quite a while.
Maybe we should create a PR upfront so that we can discuss things
there -- better than using the mailing list (quite a lot of traffic
already).
The most important part of adding LambdaMART to sklearn is fleshing out a
Hi,
Thanks for pointing this out, Andy! I think it would indeed help to set a
coarse deadline for the next release. This would help us gain momentum and
get things done. End of December or beginning of January would be best for me.
On my side, I don't plan to contribute anything big in the meantime.
Hello scikit-learn,
I recently wrote up an implementation of the LambdaMART algorithm on top of
the existing gradient boosting code (thanks for the great base of code to
work with, btw). It currently only supports NDCG, but it would be easy to
generalize. That's kind of beside the point, however. Be
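For anyone who hasn't seen the method: LambdaMART fits gradient boosting to pairwise "lambda" gradients, i.e. RankNet gradients weighted by the NDCG change of swapping a pair. A rough per-query sketch of the lambda computation (my own didactic O(n^2) version, not the implementation described above):

```python
import numpy as np

def lambda_gradients(y_true, scores, sigma=1.0):
    """Pairwise 'lambda' gradients in the LambdaRank/LambdaMART style.

    For each pair (i, j) with y_true[i] > y_true[j], the RankNet
    gradient is scaled by |delta NDCG|, the change in NDCG if the two
    documents swapped ranks. One query's documents at a time.
    """
    y_true = np.asarray(y_true, dtype=float)
    scores = np.asarray(scores, dtype=float)
    n = len(y_true)
    order = np.argsort(scores)[::-1]
    rank = np.empty(n, dtype=int)
    rank[order] = np.arange(n)                     # 0-based current ranks
    ideal = np.sum((2 ** np.sort(y_true)[::-1] - 1)
                   / np.log2(np.arange(2, n + 2)))
    lam = np.zeros(n)
    if ideal == 0:
        return lam                                 # all labels zero
    gain = (2 ** y_true - 1) / ideal
    disc = 1.0 / np.log2(rank + 2.0)
    for i in range(n):
        for j in range(n):
            if y_true[i] <= y_true[j]:
                continue
            # |NDCG change| if documents i and j swapped positions
            delta = abs((gain[i] - gain[j]) * (disc[i] - disc[j]))
            rho = 1.0 / (1.0 + np.exp(sigma * (scores[i] - scores[j])))
            lam[i] += sigma * delta * rho
            lam[j] -= sigma * delta * rho
    return lam
```

Each boosting iteration then fits a regression tree to these lambdas, which is why the existing gradient boosting code is such a natural base.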