Hi Andy,
Yes, it is a regression problem, so that explains it.
Here is the script and data that produced the output:
https://dl.dropbox.com/u/74279156/accuracy.zip
Thanks,
Zach
On 13 August 2012 16:21, Andreas Mueller wrote:
Hi Zach.
If this is related to your previous problems, let me just
answer (1): the values depend on which error score is used.
If your problem is a regression problem, the standard score is r2,
which can become negative.
That the CV values vary so much is really a bit odd.
Could you post a gist with
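[A minimal sketch of the point above, assuming a plain least-squares regressor; note the module path `sklearn.model_selection` is the modern one, at the time of this thread `cross_val_score` lived in `sklearn.cross_validation`:]

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
X = rng.rand(50, 5)
y = rng.rand(50)  # pure noise: y has no real relationship to X

# The default score for a regressor is r2, which goes negative whenever
# the model predicts the held-out fold worse than just predicting mean(y).
scores = cross_val_score(LinearRegression(), X, y, cv=5)
print(scores)
```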
Changing the cv parameter (the number of folds) in cross_val_score()
really changes the returned scores. Increasing cv doesn't
necessarily mean that the returned scores stabilise. Instead, they get
worse, and only get better later. I have included the output of
increasing the CV below.
M
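[The fluctuation described above can be reproduced with a sketch along these lines; the synthetic data and Ridge estimator here are stand-ins, not the poster's actual script. Larger cv means smaller test sets, so per-fold scores get noisier rather than settling monotonically:]

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=60, n_features=10, noise=10.0, random_state=0)

# With more folds each held-out set shrinks, so individual fold scores
# (and their spread) fluctuate instead of converging smoothly.
for cv in (2, 3, 5, 10):
    scores = cross_val_score(Ridge(), X, y, cv=cv)
    print(cv, round(scores.mean(), 3), round(scores.std(), 3))
```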
Cool thanks, hope it's not too much of a pain.
On Mon, Aug 13, 2012 at 3:26 PM, Andreas Müller wrote:
> Hi Nick.
> I think it is not possible at the moment, though I'm not so familiar with
> this part of the code.
> I opened an issue here:
> https://github.com/scikit-learn/scikit-learn/issues/101
On 08/13/2012 08:29 PM, Abhi wrote:
Andreas Müller writes:
>
> Alternatively you could look at the output of "decision_function" in
> LinearSVC.
> These do not represent probabilities, though.
>
> Andy
>
Hi Andy, thanks for pointing me towards that. I looked around online but I'm
still not sure how I can use the decision_funct
Hi Abhi.
As I said above, you can just use "decision_function" of LibLinear,
which gives you the distance to the separating hyperplane.
Alternatively you can use "LogisticRegression" from the "linear_model"
module.
Best,
Andy
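[A small sketch of both options mentioned above, on synthetic data; the dataset and estimator settings are illustrative assumptions, not anything from the original thread:]

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# decision_function gives the signed distance to the separating
# hyperplane: larger magnitude means a more confident decision,
# but the values are not probabilities.
svc = LinearSVC().fit(X, y)
margins = svc.decision_function(X[:3])

# LogisticRegression offers predict_proba, i.e. per-class probabilities.
lr = LogisticRegression().fit(X, y)
probs = lr.predict_proba(X[:3])

print(margins)
print(probs)
```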
---
Gael Varoquaux writes:
>
> On Thu, Aug 09, 2012 at 01:02:21AM, Abhi wrote:
> > I am using sklearn.svm.LinearSVC for document classification and I get a
> > good accuracy [98%] on predict. Is there a way to find the confidence of
> > match (like predict_proba() in SGDClassifier)?
Hi Michael.
Thanks for getting back.
As I just started working on a Kaggle submission, I implemented some new
features and bugfixes for that,
and I'll probably be working on those in the next couple of days.
So don't get your hopes up too much.
I think the new submission system somehow lets you subscri
Hi Andreas,
I doubt I could be of great use on the algorithm itself, but I'm happy to
help out with example(s) when it's up and running. I'll keep an eye on the
PR (Is there a way to "watch" a specific issue on github?), or feel free to
ping me when the tests are looking good.
Cheers,
Michael
Hi Nick.
I think it is not possible at the moment, though I'm not so familiar with this
part of the code.
I opened an issue here:
https://github.com/scikit-learn/scikit-learn/issues/1018
I might fix this this week, no guarantee, though.
Cheers,
Andy
----- Original Message -----
From: "Nicholas
Is it possible to use the RFE SVM with a Sparse feature matrix without
converting it to dense representation?
Nick
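[Side note for readers of the archive: the answer above reflects scikit-learn as of 2012; later releases did grow sparse-matrix support in RFE. A sketch assuming such a later version, with an illustrative synthetic dataset:]

```python
from scipy.sparse import csr_matrix
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=100, n_features=30,
                           n_informative=5, random_state=0)
X_sparse = csr_matrix(X)  # CSR sparse representation, never densified here

# RFE repeatedly refits the linear SVM and drops the lowest-weight
# features until n_features_to_select remain.
rfe = RFE(LinearSVC(), n_features_to_select=5).fit(X_sparse, y)
print(rfe.support_.sum())
```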