Hi Nelson.
There will be a scikit-learn sprint :)
Not sure how many other core-devs will be there, though.
Cheers,
Andy
On 02/22/2016 05:35 PM, Nelson Liu wrote:
Hi all,
I might be attending, is there going to be a scikit-learn sprint? I'd
also be interested in helping put together a tutorial
Hi all,
I might be attending, is there going to be a scikit-learn sprint? I'd also
be interested in helping put together a tutorial :)
Nelson Liu
On Mon, Feb 22, 2016, 9:20 AM Sebastian Raschka wrote:
> After missing all the fun last year, I am also planning on attending — I’d
> also be happy t
Ah, thanks! Much better solution to what I did :)
Regards,
Stelios
2016-02-22 16:51 GMT+00:00 Andreas Mueller :
> You can just do this via a CV object. For example, use
> StratifiedShuffleSplit(train_size=.1, test_size=.1, n_iter=5)
> and your training and test sets will be randomly sampled, disjoi
On 02/22/2016 02:04 PM, Guillaume Lemaitre wrote:
> Maybe the simplest one would be to use textons (9x9 patches) with a PCA
> followed by the clustering. That would be the one without skimage dependencies.
>
Yeah... but what dataset? Actually, one that has unequally sized images
would be nice, but
Maybe the simplest one would be to use textons (9x9 patches) with a PCA followed
by the clustering. That would be the one without skimage dependencies.
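For concreteness, a rough sketch of that pipeline (the sample image, patch count, PCA dimensionality, and cluster count below are placeholders, not a proposal for the example dataset):

import numpy as np
from sklearn.datasets import load_sample_image
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import PCA
from sklearn.cluster import MiniBatchKMeans

# Any 2D image-like array would do; convert the sample image to grayscale.
image = load_sample_image("china.jpg").mean(axis=2)

# Sample 9x9 patches and flatten each one into a row vector.
patches = extract_patches_2d(image, (9, 9), max_patches=5000, random_state=0)
patches = patches.reshape(len(patches), -1)

# Compress the patches with PCA, then cluster to get the textons.
reduced = PCA(n_components=20).fit_transform(patches)
textons = MiniBatchKMeans(n_clusters=32, random_state=0).fit(reduced)
print(np.bincount(textons.labels_, minlength=32))  # how often each texton is used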
Guillaume Lemaitre
PhD candidate
MSc Erasmus Mundus in Vision and Robotics (ViBOT)
Master in Business Innovation and Technology Management (BITM)
Un
I think trees use 32-bit floats for X and 64-bit floats for y.
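A quick way to check this on toy data (a sketch, nothing from this thread):

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.rand(100, 5)   # float64 input; the tree casts X to float32 internally
y = rng.rand(100)      # targets are kept as float64

tree = DecisionTreeRegressor(random_state=0).fit(X, y)
print(tree.tree_.value.dtype)      # float64 node values
print(tree.tree_.threshold.dtype)  # thresholds stored as float64, but splits
                                   # are computed on the float32 copy of X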
On Mon, Feb 22, 2016 at 9:18 AM, Andreas Mueller wrote:
>
>
> On 02/17/2016 02:25 PM, muhammad waseem wrote:
> > @Sebastian: I have tried running it with n_jobs=2 and you were
> > right, it uses around 27% of the RAM.
> > Does this mean I can o
On 02/22/2016 12:41 PM, Gael Varoquaux wrote:
>> For any particular application (I did bag of visual words), creating an
>> implementation using the kmeans or sparse coding in scikit-learn
>> is only a couple of lines (you can find my visual bow for per-superpixel
>> descriptors here https://gith
> For any particular application (I did bag of visual words), creating an
> implementation using the kmeans or sparse coding in scikit-learn
> is only a couple of lines (you can find my visual bow for per-superpixel
> descriptors here https://github.com/amueller/segmentation/blob/master/bow.py#L
After missing all the fun last year, I am also planning on attending — I’d also
be happy to help if there’s a shortage in core devs for the tutorials ;)
Cheers,
Sebastian
> On Feb 22, 2016, at 12:11 PM, Manoj Kumar wrote:
>
> Hi everyone.
>
> I'll definitely be happy to help on the tutori
On 02/17/2016 02:25 PM, muhammad waseem wrote:
> @Sebastian: I have tried running it with n_jobs=2 and you were
> right, it uses around 27% of the RAM.
> Does this mean I can only use max n_jobs=8 for my case (obviously this
> will also depend on the number of estimators, more will require m
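If it helps to picture the setting being discussed, here is a sketch; with the multiprocessing backend each worker process can end up holding its own copy of the training data, so memory use grows roughly with n_jobs (the data shape and n_estimators below are made up):

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.RandomState(0)
X = rng.rand(100000, 50)
y = rng.rand(100000)

# If n_jobs=2 already uses ~27% of the RAM, then only a handful of workers
# (the thread suggests around 8) will fit before memory runs out.
forest = RandomForestRegressor(n_estimators=100, n_jobs=2)
forest.fit(X, y)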
Hi everyone.
I'll definitely be happy to help on the tutorial!
On Mon, Feb 22, 2016 at 11:41 AM, Andreas Mueller wrote:
> Who's going?
> I'll definitely be there and am happy to do a tutorial.
> Who's in?
>
>
>
> On 02/22/2016 04:15 AM, Nelle Varoquaux wrote:
>
>
> Dear all,
>
> SciPy 2016, the
Hi Guillaume.
I was a big user of BoW myself, but I don't think it should go into
scikit-learn.
BoW doesn't really operate on a "flat" dataset, as scikit-learn usually
does. It works on groups of data points.
Each sample is usually a concatenation of feature vectors, which you
summarize as a histogram.
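Something like this minimal sketch (the descriptor shapes and cluster count are made up, and it assumes local descriptors are already extracted per image):

import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.RandomState(0)
# Three "images", each a group of local descriptors (rows), not one flat row.
descriptor_groups = [rng.rand(rng.randint(50, 200), 64) for _ in range(3)]

# Learn the codebook on all descriptors pooled together.
codebook = MiniBatchKMeans(n_clusters=16, random_state=0)
codebook.fit(np.vstack(descriptor_groups))

# Summarize each group as a histogram over codebook assignments:
# this histogram is the single "flat" feature vector per image.
bow = np.array([np.bincount(codebook.predict(group), minlength=16)
                for group in descriptor_groups])
print(bow.shape)  # (3, 16)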
You can just do this via a CV object. For example, use
StratifiedShuffleSplit(train_size=.1, test_size=.1, n_iter=5)
and your training and test sets will be randomly sampled, disjoint 10% subsets of
the data, repeated 5 times.
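For example, a sketch with the pre-0.18 cross_validation API that was current at the time of this thread (in the newer model_selection module the class takes n_splits instead, and y goes to split()); the data and classifier here are placeholders:

import numpy as np
from sklearn.cross_validation import StratifiedShuffleSplit, cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.rand(1000, 10)
y = rng.randint(0, 2, size=1000)

# Each iteration draws disjoint random 10% train and 10% test subsets,
# stratified by class, repeated 5 times.
cv = StratifiedShuffleSplit(y, n_iter=5, train_size=0.1, test_size=0.1,
                            random_state=0)
print(cross_val_score(LogisticRegression(), X, y, cv=cv))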
On 02/19/2016 11:42 AM, Gael Varoquaux wrote:
> That won't work, as it is modi
Hi Atharva.
I think the consensus among the core people and possible mentors is that
we would only accept a small number of students (probably 0 or 1).
I don't think we currently have a list of projects, and it will likely
depend on the interests of the applicants.
We have few people that have e
Hi Devashish.
I think we're still interested, though it is a bunch of work to include
pyearth, and there are probably some non-trivial decisions to make
on what to include.
Cheers,
Andy
On 02/20/2016 02:40 AM, Devashish Deshpande wrote:
Hi everyone,
I was browsing through the projects that
Who's going?
I'll definitely be there and am happy to do a tutorial.
Who's in?
On 02/22/2016 04:15 AM, Nelle Varoquaux wrote:
Dear all,
SciPy 2016, the Fifteenth Annual Conference on Python in Science,
takes place in Austin, TX on July 11th to 17th. The conference
features two days of tuto
Hi,
I am collaborating with a medical research group. We performed some
analysis on a medical dataset using Random Forests, the Boruta algorithm, and
partial dependence plots.
We wrote a paper with our findings containing some interesting medical
information. The novelty from a machine learning poin
Dear all,
SciPy 2016, the Fifteenth Annual Conference on Python in Science, takes
place in Austin, TX on July 11th to 17th. The conference features two days
of tutorials followed by three days of presentations, and concludes with
two days of developer sprints on projects of interest to attende