On 04/16/2015 05:49 PM, Joel Nothman wrote:
I more or less agree. Certainly we only need to do one searchsorted per
query per tree, and then do linear scans. There is a question of how close
we stay to the original LSHForest algorithm, which relies on matching
prefixes rather than Hamming distance. Hamming distance is easier to
calculate in
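(As an aside on the Hamming-distance point: a generic numpy sketch of how it can be computed cheaply over packed binary hashes, via XOR and a byte-wise popcount table. This is illustrative only, not the LSHForest code.)

```python
import numpy as np

# Popcount lookup table for one byte: number of set bits in 0..255.
_POPCOUNT = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint8)

def hamming(a, b):
    # a, b: uint8 arrays of packed hash bits with the same shape.
    # XOR marks differing bits; the table counts them per byte.
    return int(_POPCOUNT[np.bitwise_xor(a, b)].sum())

print(hamming(np.array([0b1010], dtype=np.uint8),
              np.array([0b0110], dtype=np.uint8)))  # → 2
```

The table-lookup trick vectorises well in numpy; for prefix matching, by contrast, one has to compare bits in order from the most significant end.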
GoDec might not have the citations (yet) to be added to scikit-learn.
But I think a basic ALM-based RPCA would be a great addition, along
with a cool demo. Smart background subtraction would be my
vote but might be too heavyweight - I could see a cool example of
something like colored b
Never mind my question. I forgot gridsearch was the actual object.
Thanks,
From: Pagliari, Roberto [rpagli...@appcomsci.com]
Sent: Thursday, April 16, 2015 12:50 PM
To: scikit-learn-general@lists.sourceforge.net
Subject: [Scikit-learn-general] gradient boost class
How about something like this:
1. Basic implementation of ALM uses arpack (not ideal but it means sklearn
can have RPCA available)
2. Option to use randomized SVD if desired
3. Option to use propack if desired and it's available (or if/when scipy
begins to use it)
4. GoDec implementation for lo
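(To make step 1 concrete: a minimal sketch of inexact-ALM Robust PCA in the style of Candès et al.'s Principal Component Pursuit and Lin, Chen & Ma's inexact ALM solver. Function name, defaults, and stopping rule are illustrative, not scikit-learn API, and this uses dense SVD rather than arpack/propack.)

```python
import numpy as np

def rpca_ialm(M, lam=None, tol=1e-7, max_iter=500):
    """Decompose M into low-rank L plus sparse S via inexact ALM
    (Principal Component Pursuit): min ||L||_* + lam*||S||_1 s.t. M = L + S."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))  # standard PCP default
    norm_M = np.linalg.norm(M)
    two_norm = np.linalg.norm(M, 2)
    Y = M / max(two_norm, np.abs(M).max() / lam)  # dual variable init
    mu, rho = 1.25 / two_norm, 1.5
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # L-update: singular value thresholding of M - S + Y/mu
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ (np.maximum(s - 1.0 / mu, 0.0)[:, None] * Vt)
        # S-update: entrywise soft thresholding
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Z = M - L - S                 # constraint residual
        Y = Y + mu * Z
        mu *= rho
        if np.linalg.norm(Z) < tol * norm_M:
            break
    return L, S
```

Recovery quality depends on the usual PCP conditions (incoherent low-rank part, sufficiently spread-out sparse support); swapping the dense SVD for a truncated arpack/propack/randomized SVD is exactly where options 1-3 above would plug in.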
Is feature_importances_ available from gradient boosting?
It is mentioned in the documentation, but it doesn't exist when I try to access
it (after fitting via grid search).
I printed 'dir' of the object and can't see it.
Thanks,
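(For the archives, a minimal sketch of the usual resolution: the attribute lives on the refitted estimator inside the grid search, not on the grid-search object itself. This uses the modern sklearn.model_selection import path; at the time of this thread it was sklearn.grid_search.)

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=100, random_state=0)
grid = GridSearchCV(GradientBoostingClassifier(random_state=0),
                    {"n_estimators": [10, 20]})
grid.fit(X, y)

# The GridSearchCV object itself does not expose feature_importances_;
# with refit=True (the default), the fitted estimator inside it does:
importances = grid.best_estimator_.feature_importances_
print(importances.shape)  # one importance per feature: (20,) here
```

So `dir(grid)` will not show the attribute; `dir(grid.best_estimator_)` will.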
This is the Sphinx LaTeX build, not a script of ours.
I'm not sure; you can consult the Sphinx documentation:
http://sphinx-doc.org/
On 04/16/2015 07:48 AM, Tim wrote:
> Thanks again!
>
> Can your scripts also create pdf bookmarks of third or lower levels?
> E.g.
> ...
> 4.1.1 Ordinary Least Squares
Interestingly, this time I didn't get any errors (I got them before).
But you get a PDF even with the errors.
On 04/16/2015 06:26 AM, Joel Nothman wrote:
Although I note that I've got LaTeX compilation errors, so I'm not
sure how Andy compiles this.
On 16 April 2015 at 20:25, Joel Nothman wrote:
Hi Joel,
To extend your analysis:
- when n_samples*n_indices is large enough, the bottleneck is the use of
the index, as you say.
- when n_dimensions*n_candidates is large enough, the bottleneck is
computation of true distances between DB points and the query.
To serve well both kinds of use ca
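(As an illustration of the per-tree query pattern discussed in this thread - one searchsorted, then linear scans outward - here is a generic numpy sketch. It is not the actual LSHForest code; the candidate-window policy and names are made up.)

```python
import numpy as np

# Each tree stores its hashed database keys in sorted order.
rng = np.random.default_rng(0)
sorted_hashes = np.sort(rng.integers(0, 2**16, size=1000))

def candidates_for(query_hash, n_candidates=10):
    # One binary search per query per tree...
    i = np.searchsorted(sorted_hashes, query_hash)
    # ...then a linear scan around the insertion point to gather
    # candidates, to be re-ranked later by true distance to the query.
    lo = max(i - n_candidates // 2, 0)
    hi = min(lo + n_candidates, len(sorted_hashes))
    return sorted_hashes[lo:hi]

print(len(candidates_for(12345)))  # → 10
```

In this framing, the n_samples*n_indices cost is the searchsorted plus scan per tree, and the n_dimensions*n_candidates cost is the true-distance re-ranking of the gathered candidates.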
Thanks again!
Can your scripts also create PDF bookmarks of third or lower levels?
E.g.
...
4.1.1 Ordinary Least Squares
4.1.2 Ridge Regression
Ridge Complexity
Setting the regularization parameter: generalized Cross-Validation
4.1.3 Lasso
Setting regularization parameter
Using cross-validation
Although I note that I've got LaTeX compilation errors, so I'm not sure how
Andy compiles this.
On 16 April 2015 at 20:25, Joel Nothman wrote:
> I've proposed a better chapter ordering at
> https://github.com/scikit-learn/scikit-learn/pull/4602...
>
> On 16 April 2015 at 03:48, Andreas Mueller wrote:
I've proposed a better chapter ordering at
https://github.com/scikit-learn/scikit-learn/pull/4602...
On 16 April 2015 at 03:48, Andreas Mueller wrote:
> Hi.
> Yes, run "make latexpdf" in the "doc" folder.
>
> Best,
> Andy
>
>
> On 04/15/2015 01:11 PM, Tim wrote:
> > Thanks, Andy!
> >
> > How do