Gabriel,

your problem is that you are applying LDA, which is a classifier, to a
regression-style target: your y is filled with continuous values, so every
sample ends up in its own class, n_classes equals n_samples, and fit()
divides by zero in fac = 1 / (n_samples - n_classes). Note also that y
should be a 1-D array with one label per sample, not one value per feature.
Make y discrete (e.g. filled with 0s and 1s) and it should work.
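
For your data above that would mean something like y = np.array([0, 1]),
one label per row of X, but with two samples and two classes you would hit
the same division by zero, so you need more samples than classes. Here is
an untested sketch with made-up toy numbers, just to show the expected
shapes (note also that predict takes a single 2-D array of samples, not
two scalars like predict(2, 1)):

import numpy as np
from sklearn.lda import LDA

# Toy data (made up): 4 samples, 3 features, 2 classes. n_samples must
# exceed n_classes, otherwise fit() divides by zero in
# fac = 1 / (n_samples - n_classes).
X = np.array([[0.0, 0.1, 0.2],
              [0.1, 0.1, 0.3],
              [0.5, 0.4, 0.3],
              [0.6, 0.5, 0.2]])
y = np.array([0, 0, 1, 1])  # one integer class label per row of X

clf = LDA()
clf.fit(X, y)
print(clf.predict(np.array([[0.1, 0.1, 0.25]])))  # 2-D array of samples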

A
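
P.S. For your second question: if the two .txt files just contain plain
whitespace-separated numbers, np.loadtxt will read them into arrays. A
rough, untested sketch, assuming one sample per line with the class label
in the last column (the file names and layout here are made up, adjust
them to your files):

import numpy as np
from sklearn.lda import LDA

# Hypothetical layout: features first, integer label in the last column.
train = np.loadtxt('training.txt')
test = np.loadtxt('test.txt')

X_train = train[:, :-1]
y_train = train[:, -1].astype(int)  # discrete labels, as above
X_test = test[:, :-1]

clf = LDA()
clf.fit(X_train, y_train)
print(clf.predict(X_test))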


On Sat, Nov 9, 2013 at 7:14 PM, Gabriel Peschl <gabrielpes...@gmail.com> wrote:
> Hi,
>
> I am trying to use the LDA algorithm from sklearn, in Python.
>
> The code is:
>
> import numpy as np
> from sklearn.lda import LDA
>
> X = np.array([[0.000000, 0.000000, 0.000000, 0.000000, 0.001550,
>                0.000000, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.000000, 0.201550, 0.011111, 0.077778,
>                0.011111, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.092732, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.035659, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.066667, 0.000000, 0.000000, 0.010853,
>                0.000000, 0.033333, 0.055556, 0.055556, 0.077778,
>                0.000000, 0.000000, 0.000000, 0.268170, 0.000000,
>                0.000000, 0.000000, 0.000000, 0.130233, 0.000000,
>                0.000000, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.034109, 0.077778, 0.055556, 0.011111,
>                0.000000, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.155388, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.181395, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.001550, 0.007752, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.000000, 0.011111, 0.088889, 0.033333,
>                0.000000, 0.000000, 0.142857, 0.000000, 0.000000,
>                0.000000, 0.000000, 0.093023, 0.000000, 0.000000,
>                0.000000, 0.000000, 0.000000, 0.009302, 0.010853,
>                0.000000, 0.100000, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.022222, 0.088889, 0.033333, 0.238095,
>                0.000000, 0.000000, 0.000000, 0.000000, 0.032558,
>                0.000000, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.182946, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.000000, 0.022222, 0.077778, 0.055556,
>                0.000000, 0.102757],
>               [0.000000, 0.000000, 0.000000, 0.000000, 0.001550,
>                0.000000, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.000000, 0.201550, 0.011111, 0.077778,
>                0.011111, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.092732, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.035659, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.066667, 0.000000, 0.000000, 0.010853,
>                0.000000, 0.033333, 0.055556, 0.055556, 0.077778,
>                0.000000, 0.000000, 0.000000, 0.268170, 0.000000,
>                0.000000, 0.000000, 0.000000, 0.130233, 0.000000,
>                0.000000, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.034109, 0.077778, 0.055556, 0.011111,
>                0.000000, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.155388, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.181395, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.001550, 0.007752, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.000000, 0.011111, 0.088889, 0.033333,
>                0.000000, 0.000000, 0.142857, 0.000000, 0.000000,
>                0.000000, 0.000000, 0.093023, 0.000000, 0.000000,
>                0.000000, 0.000000, 0.000000, 0.009302, 0.010853,
>                0.000000, 0.100000, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.022222, 0.088889, 0.033333, 0.238095,
>                0.000000, 0.000000, 0.000000, 0.000000, 0.032558,
>                0.000000, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.182946, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.000000, 0.022222, 0.077778, 0.055556,
>                0.000000, 0.102757]])
>
> y = np.array([[0.000000, 0.000000, 0.008821, 0.000000, 0.000000,
>                0.000000, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.000000, 0.179631, 0.010471, 0.036649,
>                0.026178, 0.000000, 0.000000, 0.020942, 0.010471,
>                0.000000, 0.109215, 0.000000, 0.000000, 0.060144,
>                0.000000, 0.042502, 0.000000, 0.005613, 0.000000,
>                0.000000, 0.018444, 0.000000, 0.000000, 0.013633,
>                0.020942, 0.031414, 0.083770, 0.015707, 0.041885,
>                0.041885, 0.057592, 0.010471, 0.233788, 0.000000,
>                0.000000, 0.018444, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.000000, 0.090617, 0.000000, 0.000000,
>                0.000000, 0.104250, 0.005236, 0.020942, 0.031414,
>                0.000000, 0.000000, 0.010471, 0.015707, 0.005236,
>                0.056314, 0.000000, 0.000000, 0.026464, 0.000000,
>                0.004010, 0.000000, 0.031275, 0.007217, 0.036889,
>                0.007217, 0.013633, 0.000000, 0.000000, 0.005236,
>                0.047120, 0.057592, 0.015707, 0.010471, 0.047120,
>                0.062827, 0.005236, 0.262799, 0.000000, 0.000000,
>                0.000000, 0.000000, 0.000802, 0.000000, 0.000000,
>                0.000000, 0.001604, 0.000000, 0.052927, 0.000000,
>                0.039294, 0.026178, 0.041885, 0.031414, 0.000000,
>                0.000000, 0.041885, 0.073298, 0.000000, 0.308874,
>                0.000000, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.236568, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.000000, 0.000000, 0.020942, 0.015707,
>                0.000000, 0.029010],
>               [0.000000, 0.000000, 0.008821, 0.000000, 0.000000,
>                0.000000, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.000000, 0.179631, 0.010471, 0.036649,
>                0.026178, 0.000000, 0.000000, 0.020942, 0.010471,
>                0.000000, 0.109215, 0.000000, 0.000000, 0.060144,
>                0.000000, 0.042502, 0.000000, 0.005613, 0.000000,
>                0.000000, 0.018444, 0.000000, 0.000000, 0.013633,
>                0.020942, 0.031414, 0.083770, 0.015707, 0.041885,
>                0.041885, 0.057592, 0.010471, 0.233788, 0.000000,
>                0.000000, 0.018444, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.000000, 0.090617, 0.000000, 0.000000,
>                0.000000, 0.104250, 0.005236, 0.020942, 0.031414,
>                0.000000, 0.000000, 0.010471, 0.015707, 0.005236,
>                0.056314, 0.000000, 0.000000, 0.026464, 0.000000,
>                0.004010, 0.000000, 0.031275, 0.007217, 0.036889,
>                0.007217, 0.013633, 0.000000, 0.000000, 0.005236,
>                0.047120, 0.057592, 0.015707, 0.010471, 0.047120,
>                0.062827, 0.005236, 0.262799, 0.000000, 0.000000,
>                0.000000, 0.000000, 0.000802, 0.000000, 0.000000,
>                0.000000, 0.001604, 0.000000, 0.052927, 0.000000,
>                0.039294, 0.026178, 0.041885, 0.031414, 0.000000,
>                0.000000, 0.041885, 0.073298, 0.000000, 0.308874,
>                0.000000, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.236568, 0.000000, 0.000000, 0.000000, 0.000000,
>                0.000000, 0.000000, 0.000000, 0.020942, 0.015707,
>                0.000000, 0.029010]])
>
> clf = LDA()
> clf.fit(X, y)
> print(clf.predict(2, 1))
>
>
> But I got this error message:
>
>   clf.fit(X, y)
>   fac = 1 / (n_samples - n_classes)
> ZeroDivisionError: float division by zero
>
> What do I do to solve this error?
>
> I am using this version of LDA, from scikit-learn:
> http://scikit-learn.org/stable/modules/generated/sklearn.lda.LDA.html
>
> The second question is:
>
> Can I use sklearn.lda with 2 .txt files? The files are 68.830 kB and
> 174.317 kB. The first one is a test file and the second is the training
> file.
>
> How can I use them? Any suggestions?
>
> Thank you very much!
