On Thu, Nov 03, 2011 at 09:46:53PM +0100, Andreas Mueller wrote:
> > Again, thanks for reporting Andreas.
> Thank you for fixing this so quickly!
Awesome work on both sides!
G
On 11/03/2011 08:58 PM, Peter Prettenhofer wrote:
> I fixed the bug: there was an overflow of the `offset` variable which
> occurred when X is too large (more than 250M elements).
That was in the direction I was guessing ;)
> Again, thanks for reporting Andreas.
Thank you for fixing this so quickly!
I fixed the bug: there was an overflow of the `offset` variable which
occurred when X is too large (more than 250M elements).
Again, thanks for reporting Andreas.
best,
Peter
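The actual patch isn't quoted in the thread; what follows is only a minimal
numpy sketch (not the real `sgd_fast` Cython code) of the failure mode
described above, assuming the overflowing `offset` counts bytes: a 32-bit
signed integer can address at most 2**31 - 1 bytes, which for float64 data is
reached just above 268 million elements, consistent with the ~250M figure::

    import numpy as np

    # Illustrative shapes only: ~276M float64 elements, i.e. more than
    # 2**31 bytes in total.
    n_samples, n_features = 20000, 13824
    row = 19500

    # Byte offset of row `row` in 32-bit signed arithmetic (a C `int`):
    # 19500 * 13824 * 8 = 2_156_544_000 > 2**31 - 1, so the product wraps
    # (numpy emits a RuntimeWarning and the value goes negative).
    offset_32 = np.int32(row) * np.int32(n_features) * np.int32(8)
    print(offset_32)   # negative -> out-of-bounds pointer -> segfault

    # The same offset in 64-bit arithmetic (a C `long long` / npy_intp):
    offset_64 = np.int64(row) * np.int64(n_features) * np.int64(8)
    print(offset_64)   # 2156544000, well within range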
2011/11/3 Peter Prettenhofer :
> Andreas,
>
> I just wanted to say that I can reproduce the problem on synthetic
> data - I'm on it.
Andreas,
I just wanted to say that I can reproduce the problem on synthetic
data - I'm on it.
thanks for reporting!
best,
Peter
2011/11/3 Peter Prettenhofer :
> I have to admit I haven't compiled extensions with debug information for
> a while. AFAIK you can add the '-g' flag to the
> `extra_compile_args` list of sgd_fast's extension in
> `sklearn/linear_model/setup.py`.
I have to admit I haven't compiled extensions with debug information for
a while. AFAIK you can add the '-g' flag to the
`extra_compile_args` list of sgd_fast's extension in
`sklearn/linear_model/setup.py`. But maybe a simpler solution is to
add a `--debug` flag to the build_ext command::

    python setup.py build_ext --debug
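For reference, a rough sketch of what the '-g' route could look like; the
real `sklearn/linear_model/setup.py` of that release may be laid out
differently, so the extension name and sources below are placeholders::

    import numpy

    def configuration(parent_package='', top_path=None):
        # Hypothetical excerpt in the style of a numpy.distutils setup.py;
        # the only relevant part is extra_compile_args=['-g'], which keeps
        # debug symbols in the compiled extension so gdb can resolve them.
        from numpy.distutils.misc_util import Configuration
        config = Configuration('linear_model', parent_package, top_path)
        config.add_extension('sgd_fast',
                             sources=['sgd_fast.c'],
                             include_dirs=[numpy.get_include()],
                             extra_compile_args=['-g'])
        return config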
On 11/03/2011 06:22 PM, Alexandre Passos wrote:
> Can you try running it under valgrind and sending back the memory
> errors you get? I have no idea how noisy valgrind's output on the
> scikit is, but this should help narrow things down.
>
This might be a stupid question but is there an easy way to
On 11/03/2011 06:19 PM, Peter Prettenhofer wrote:
> Can you try if it segfaults on X[5000:6000] and if so, it would be
> great if you could send me the ndarray - then I can reproduce the
> error and see what's wrong.
>
As I said, the features consist of three distinct parts and if
I use any of those
Can you try running it under valgrind and sending back the memory
errors you get? I have no idea how noisy valgrind's output on the
scikit is, but this should help narrow things down.
On Thu, Nov 3, 2011 at 12:17, Andreas Müller wrote:
> Hi folks.
> Today I ran across a segfault doing sgd multi class classification.
Can you try if it segfaults on X[5000:6000] and if so, it would be
great if you could send me the ndarray - then I can reproduce the
error and see what's wrong.
2011/11/3 Andreas Müller :
> On 11/03/2011 06:10 PM, Peter Prettenhofer wrote:
>> Hi Andreas,
>>
>> can you run the following line and send me the results (assume your
>> data is stored in `X`)::
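Not from the thread itself, just a self-contained sketch of the
slicing-and-sharing step Peter suggests above, with placeholder data standing
in for the real arrays from the failing run::

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    # Placeholder data; the shapes are illustrative, not the reported ones.
    rng = np.random.RandomState(0)
    X = rng.rand(10000, 100)
    y = rng.randint(0, 5, size=10000)

    clf = SGDClassifier(loss="hinge", penalty="l2")

    # Try the suspect slice on its own: if this call already segfaults,
    # the slice alone is enough to reproduce the bug.
    X_slice, y_slice = X[5000:6000], y[5000:6000]
    clf.fit(X_slice, y_slice)

    # Save the offending rows so they can be attached to a report
    # and loaded elsewhere with np.load.
    np.save("bad_rows.npy", X_slice)
    np.save("bad_labels.npy", y_slice)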
On 11/03/2011 06:10 PM, Peter Prettenhofer wrote:
> Hi Andreas,
>
> can you run the following line and send me the results (assume your
> data is stored in `X`)::
>
> print X.flags
>
train_data.flags
C_CONTIGUOUS : True
F_CONTIGUOUS : False
OWNDATA : True
WRITEABLE : True
ALIGNED : True
Hi Andreas,
can you run the following line and send me the results (assume your
data is stored in `X`)::
print X.flags
thx,
Peter
2011/11/3 Andreas Müller :
> Hi folks.
> Today I ran across a segfault doing sgd multi class classification.
> I tracked the error down to sgd_fast but don't know how to proceed.
Hi folks.
Today I ran across a segfault doing sgd multi class classification.
I tracked the error down to sgd_fast but don't know how to proceed.
My data has shape (5, 13824) and is dense, the parameters
of the classifier are:
SGDClassifier(loss="hinge", penalty="l2")
I ran it through scaler a