On 02/10/2018 at 16:46, Andreas Mueller wrote:
> Thank you for your feedback Alex!
Thanks for answering!
>
> On 10/02/2018 09:28 AM, Alex Garel wrote:
>>
>> * chunk processing (a kind of streaming-data handling): when
>> dealing with lots of data, the ability...
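For chunk processing, scikit-learn already exposes an incremental-learning API through partial_fit on estimators such as SGDClassifier. A minimal sketch, assuming a hypothetical generator that yields (X, y) chunks in place of a real data stream:

```python
# Sketch: chunk-wise ("out-of-core") learning via partial_fit.
# The chunks() generator is a toy stand-in for a data stream too
# large to hold in memory at once.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)

def chunks(n_chunks=5, chunk_size=100, n_features=20):
    # Toy stream: label depends on the sign of the first feature.
    for _ in range(n_chunks):
        X = rng.randn(chunk_size, n_features)
        y = (X[:, 0] > 0).astype(int)
        yield X, y

clf = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # all classes must be given on the first call

for X_chunk, y_chunk in chunks():
    clf.partial_fit(X_chunk, y_chunk, classes=classes)

X_test = rng.randn(50, 20)
pred = clf.predict(X_test)
```

Each call updates the model in place, so memory use stays bounded by the chunk size rather than the full dataset.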
On 26/09/2018 at 21:59, Joel Nothman wrote:
> And for those interested in what's in the pipeline, we are trying to
> draft a
> roadmap...
> https://github.com/scikit-learn/scikit-learn/wiki/Draft-Roadmap-2018
Hello,
First of all, thanks for the incredible work on scikit-learn.
I found the Roadmap...
Hello,
First, thanks for the fantastic scikit-learn library.
I have the following use case: for a classification problem, I have a
list of sentences and use word2vec plus a pooling method (e.g. mean,
weighted mean, or attention and mean) to transform sentences into
vectors. Because my dataset is very noisy...
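The mean-pooling step described above can be sketched as follows. The word_vectors dict here is a hypothetical stand-in for a trained word2vec model's lookup table (in gensim, for example, the model's keyed vectors would play this role):

```python
# Sketch: turning a sentence into one vector by averaging the
# word2vec vectors of its tokens (the "mean" pooling option).
import numpy as np

word_vectors = {  # hypothetical 3-d embeddings for illustration
    "the": np.array([0.1, 0.0, 0.2]),
    "cat": np.array([0.9, 0.4, 0.1]),
    "sat": np.array([0.3, 0.8, 0.5]),
}

def sentence_vector(tokens, vectors, dim=3):
    # Average the vectors of the tokens we know; fall back to a
    # zero vector when no token is in the vocabulary.
    known = [vectors[t] for t in tokens if t in vectors]
    if not known:
        return np.zeros(dim)
    return np.mean(known, axis=0)

v = sentence_vector(["the", "cat", "sat"], word_vectors)
```

A weighted mean would simply replace np.mean with a weighted average (e.g. TF-IDF weights per token).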
I'm not totally sure what you're trying to do, but here are some
remarks that may help you:
1. In modelfit = model.fit(count_vect, enc), the enc parameter is not
used; only the count_vect matrix is used.
2. When you use kneighbors, the indices you get back refer to rows of
the matrix built from wiki['text'], not to wiki['name'] directly.
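To make remark 2 concrete, here is a minimal sketch with toy data standing in for the wiki dataset: kneighbors returns row indices into the fitted matrix, which you then map back to names yourself.

```python
# Sketch: kneighbors returns row indices into the fitted matrix;
# those indices must be mapped back to wiki['name'] explicitly.
# The wiki dict below is a toy stand-in for the real dataset.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neighbors import NearestNeighbors

wiki = {
    "name": ["A", "B", "C"],
    "text": ["apple banana", "banana cherry", "cherry date"],
}

vect = CountVectorizer()
count_vect = vect.fit_transform(wiki["text"])  # rows align with wiki['text']

model = NearestNeighbors(n_neighbors=2).fit(count_vect)
dist, idx = model.kneighbors(vect.transform(["banana"]))

# idx[0] holds row indices; look up the corresponding names:
neighbor_names = [wiki["name"][i] for i in idx[0]]
```

Note that fit only ever sees the count matrix, which is also why an extra label argument (remark 1) would be silently irrelevant to a NearestNeighbors-style model.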