2012/7/10 Olivier Grisel <[email protected]>:
> 2012/7/10 Lars Buitinck <[email protected]>:
>> 2012/7/10 Olivier Grisel <[email protected]>:
>>> When doing single-node, multi-CPU parallel machine learning (e.g. grid
>>> search, one-vs-all SGD, random forests), it would be great to avoid
>>> duplicating memory, especially for the input dataset, which is used as a
>>> read-only resource in most of our common use cases.
>>
>> I may be entirely mistaken here, but I always assumed copy-on-write
>> semantics would apply in joblib, so getting data from the master to
>> the workers would be free as long as the workers don't change the
>> data. Is that not the case?
>
> Indeed, it might be the case if the input data already has the right
> memory layout (e.g. Fortran-ordered np.float32).

=> I meant for the RandomForestClassifier input data.
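For illustration, a minimal sketch of what "right memory layout" means here: converting the array once in the parent process, before forking workers, so the children can share the same physical pages via copy-on-write instead of triggering a per-worker copy. The array name and shape are hypothetical; the assumption is that the estimator wants Fortran-ordered np.float32 input.

```python
import numpy as np

# Hypothetical input: NumPy defaults to C-ordered float64.
X = np.random.rand(1000, 10)

# Convert once in the parent process to the layout the estimator is
# assumed to expect (Fortran-ordered float32). If workers are forked
# after this point, they inherit the converted pages read-only and no
# per-worker conversion (and hence no duplication) is needed.
X_shared = np.asfortranarray(X, dtype=np.float32)

print(X_shared.flags['F_CONTIGUOUS'])  # True
print(X_shared.dtype)                  # float32
```

If the conversion instead happens inside each worker (because the parent passed a C-ordered float64 array), every process ends up holding its own converted copy, which is exactly the duplication being discussed.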

-- 
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
