Thanks for that explanation, Josef. sklearn uses __init__ and its __all__
to define a public API whose compatibility is maintained across versions. It
abstracts the on-disk path away from the import path by having each top-level
submodule export all public names within that submodule (whether it's a deeper
package or a module).
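As a rough sketch of that pattern (hypothetical package and file names, not sklearn's actual layout): the package __init__.py re-exports names from wherever they live on disk and declares the supported surface in __all__, so user-facing imports stay stable across refactorings.

# mypkg/__init__.py, a hypothetical package illustrating the __init__/__all__ pattern.
# Users write: from mypkg import Estimator. The module that actually defines
# Estimator (mypkg/_impl/estimator.py here) can move without breaking that import.
from mypkg._impl.estimator import Estimator
from mypkg._impl.helpers import normalize

# __all__ declares the public names: what "from mypkg import *" exposes and
# what is treated as the compatibility-maintained API.
__all__ = ["Estimator", "normalize"]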
On Mon, Dec 2, 2013 at 4:17 PM, Gael Varoquaux wrote:
> On Tue, Dec 03, 2013 at 06:56:14AM +1100, Joel Nothman wrote:
>> As for "There should be one-- and preferably only one --obvious way to
>> do it," Gaël, I feel there are times where the one obvious way to do it
>> should be conditioned on wh
On Tue, Dec 03, 2013 at 06:56:14AM +1100, Joel Nothman wrote:
> As for "There should be one-- and preferably only one --obvious way to
> do it," Gaël, I feel there are times where the one obvious way to do it
> should be conditioned on whether you're building an application or
> writing a quick sc
Hi all,
I've created a pull request to make Pipeline compatible with AdaBoost. It
is here:
https://github.com/scikit-learn/scikit-learn/pull/2630
Best,
Jason
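For context, a sketch of the usage such compatibility is presumably meant to enable (not taken from the PR itself; the estimator choices are arbitrary). AdaBoost fits each copy of its base estimator with sample_weight, so a boosted Pipeline has to accept and forward that argument in fit:

# Hypothetical usage: boosting a whole Pipeline as the base estimator.
from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("stump", DecisionTreeClassifier(max_depth=1)),
])
# AdaBoost passes sample_weight to pipe.fit on every boosting round.
clf = AdaBoostClassifier(base_estimator=pipe, n_estimators=50)
clf.fit(iris.data, iris.target)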
PS: in terms of any conflicting names, I've only encountered
sklearn.hmm.normalize and sklearn.preprocessing.normalize at master...
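For a more systematic check, here is a rough sketch (my own, untested across versions) that walks the top-level submodules and reports any public name exported by more than one of them:

# Sketch: list public names exported by more than one top-level sklearn
# submodule, as a sanity check before flattening them into one namespace.
import importlib
import pkgutil
from collections import defaultdict

import sklearn

owners = defaultdict(list)
for _, name, _ in pkgutil.iter_modules(sklearn.__path__):
    if name.startswith("_") or name == "tests":
        continue
    try:
        mod = importlib.import_module("sklearn." + name)
    except ImportError:
        continue
    # Prefer the declared public API; fall back to non-underscore names.
    public = getattr(mod, "__all__", None) or [
        n for n in vars(mod) if not n.startswith("_")
    ]
    for attr in public:
        owners[attr].append(name)

for attr, modules in sorted(owners.items()):
    if len(modules) > 1:
        print(attr, "->", ", ".join(modules))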
On Tue, Dec 3, 2013 at 6:56 AM, Joel Nothman wrote:
> I find myself agreeing with basically everything said here.
>
> As for "There should be one-- and preferably
I find myself agreeing with basically everything said here.
As for "There should be one-- and preferably only one --obvious way to do
it," Gaël, I feel there are times where the one obvious way to do it should
be conditioned on whether you're building an application or writing a quick
script / in
On Mon, Dec 2, 2013 at 3:21 PM, Nelle Varoquaux wrote:
>
> On 2 December 2013 16:11, Gael Varoquaux wrote:
> > On Mon, Dec 02, 2013 at 02:50:45PM +, Robert Kern wrote:
> >> > +1. "Import *" is a really really bad habit. And hacked up interactive
> >> > environments (with crazy start up script
On 2 December 2013 16:11, Gael Varoquaux wrote:
> On Mon, Dec 02, 2013 at 02:50:45PM +, Robert Kern wrote:
>> > +1. "Import *" is a really really bad habit. And hacked up interactive
>> > environments (with crazy start up scripts) make it really hard to teach,
>> > because beginners don't make
On Mon, Dec 2, 2013 at 3:11 PM, Gael Varoquaux <gael.varoqu...@normalesup.org> wrote:
> On Mon, Dec 02, 2013 at 02:50:45PM +, Robert Kern wrote:
> > > +1. "Import *" is a really really bad habit. And hacked up interactive
> > > environments (with crazy start up scripts) make it really hard to
On Mon, Dec 02, 2013 at 02:50:45PM +, Robert Kern wrote:
> > +1. "Import *" is a really really bad habit. And hacked up interactive
> > environments (with crazy start up scripts) make it really hard to teach,
> > because beginners don't make the difference between a hack and Python
> > proper a
On Mon, Dec 2, 2013 at 2:16 PM, Gael Varoquaux <gael.varoqu...@normalesup.org> wrote:
>
> On Mon, Dec 02, 2013 at 12:49:19PM +0100, Vlad Niculae wrote:
> > Personally I'd rather be a bit frustrated but have tab completion and
> > pyflakes warnings. I avoid using star imports even in hackish script
2013/12/2 abdalrahman eweiwi :
> Hi,
>
> You are right, in fact I spent almost 1 month reviewing the code base of PLS
> and CCA implementation in sklearn. I should say that the (old) code base in
> my opinion should be somehow refactored to get into a simpler shape. I
> remember I had some difficult
On Mon, Dec 02, 2013 at 12:49:19PM +0100, Vlad Niculae wrote:
> Personally I'd rather be a bit frustrated but have tab completion and
> pyflakes warnings. I avoid using star imports even in hackish scripts.
> I assume the warning will create unnecessary confusion when people
> learn to use the star
2013/12/2 Andrea Bravi :
>
> Hi everybody,
>
>
> quick question: do I need to do anything to enable closing the pull request
> for the feature selection algorithm I wrote? (Minimal redundancy, maximal
> relevance)
>
> Sorry, I am new to this world and not entirely sure of whether it is just
> process
Hi everybody,
quick question: do I need to do anything to enable closing the pull request
for the feature selection algorithm I wrote? (Minimal redundancy, maximal
relevance)
Sorry, I am new to this world and not entirely sure of whether it is just
processing time, or there is something missing fr
I like the:
import statsmodels.api as sm
pattern.
We could have:
import sklearn.api as skl
However, we need to check that we don't have conflicting class names or
public functions.
--
Olivier
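A rough sketch of what that could look like if the conflict check passes (sklearn.api does not exist; the names below are only examples, using the current module locations): one flat module with explicit imports, so tab completion and pyflakes keep working.

# sklearn/api.py, hypothetical, following the statsmodels.api pattern.
# Explicit imports rather than star imports: a curated, flat entry point.
from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.svm import LinearSVC

__all__ = [
    "Pipeline", "GridSearchCV", "SelectKBest", "chi2",
    "LogisticRegression", "SGDClassifier", "LinearSVC",
]

Quick scripts would then start with "import sklearn.api as skl" and use skl.Pipeline, skl.GridSearchCV, and so on.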
For IPython sessions, startup scripts may also be useful:
http://stackoverflow.com/questions/17915385/load-ipython-with-custom-packages-imported
http://ipython.org/ipython-doc/stable/config/overview.html
M.
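Concretely (paths per the linked docs; the imports here are only an illustration), any .py file placed in the profile's startup directory runs each time IPython starts:

# ~/.ipython/profile_default/startup/00-imports.py
# Files in this directory are executed in lexical order at IPython startup,
# so the interactive session begins with these names already available.
import numpy as np

from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV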
On Mon, Dec 2, 2013 at 8:49 PM, Vlad Niculae wrote:
> Personally I'd rather be a bit f
On Mon, Dec 2, 2013 at 11:29 AM, Joel Nothman wrote:
> I think it's great that scikit-learn keeps its objects in modular
> namespaces, and doesn't litter one namespace as numpy, pyplot, etc. do. Yet,
> when writing quick scripts it can be frustrating to have to import from
> pipeline, grid_search, li
Fair enough; but it also enables:
from sklearn.all import Pipeline, GridSearchCV, LinearSVC, SelectKBest, chi2, load_iris
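One way such a module could be assembled (hypothetical; sklearn.all does not exist) is to aggregate whatever each top-level submodule already declares public in its __all__, instead of hand-curating an explicit list:

# sklearn/all.py, a hypothetical aggregator module.
# Each star import pulls in the names the submodule declares in __all__
# (falling back to its non-underscore names), so the single line
# "from sklearn.all import Pipeline, GridSearchCV, ..." works.
from sklearn.pipeline import *           # noqa: F401,F403
from sklearn.grid_search import *        # noqa: F401,F403
from sklearn.feature_selection import *  # noqa: F401,F403
from sklearn.svm import *                # noqa: F401,F403
from sklearn.datasets import *           # noqa: F401,F403

The star imports stay confined to this one module; user code would still import explicit names from it.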
On Mon, Dec 2, 2013 at 10:49 PM, Vlad Niculae wrote:
> Personally I'd rather be a bit frustrated but have tab completion and
> pyflakes warnings. I avoid using star imports
Personally I'd rather be a bit frustrated but have tab completion and
pyflakes warnings. I avoid using star imports even in hackish scripts.
I assume the warning will create unnecessary confusion when people
learn to use the star import first. These users will probably feel
that the warning is a s
I think it's great that scikit-learn keeps its objects in modular
namespaces, and doesn't litter one namespace as numpy, pyplot, etc. do. Yet,
when writing quick scripts it can be frustrating to have to import from
pipeline, grid_search, linear_model and feature_extraction before I can
mock something
Hi List,
I was counting the number of non-zero coefficients of an SGDClassifier and
got a very strange ValueError after calling predict() again.
After some research it seems that our own sklearn.utils.fixes.count_nonzero
has a side effect that changes the type of the coef_ matrix. Any further
call
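In the meantime, a non-mutating way to get the same count (a sketch; it assumes coef_ is a dense array, which it is for SGDClassifier) is to let numpy do the counting instead of the backported helper:

# Sketch: count non-zero coefficients without modifying coef_ in place.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier

iris = load_iris()
clf = SGDClassifier(penalty="l1").fit(iris.data, iris.target)

n_nonzero = np.count_nonzero(clf.coef_)  # pure read; coef_ keeps its type
print(n_nonzero)
clf.predict(iris.data)                   # predict still works afterwards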
Hi,
You are right, in fact I spent almost 1 month reviewing the code base of PLS
and CCA implementation in sklearn. I should say that the (old) code base in
my opinion should be somehow refactored to get into a simpler shape. I
remember I had some difficulties in analyzing that code. Also the CCA
r