Hi, the way you have installed Mahout may be wrong. Can you check whether the out-of-the-box Mahout examples are working fine?

Syed Abdul Kather
sent from Samsung S3

On Jul 28, 2012 5:35 AM, "k6.amruta [via Lucene]" <[email protected]> wrote:
> I am trying to run the Wikipedia Bayes example from
> https://cwiki.apache.org/confluence/...+Bayes+Example
>
> When I ran the following command:
>
>   $MAHOUT_HOME/bin/mahout wikipediaXMLSplitter -d $MAHOUT_HOME/examples/temp/enwiki-latest-pages-articles10.xml -o wikipedia/chunks -c 64
>
> I am getting this error:
>
>   Exception in thread "main" java.lang.NoClassDefFoundError: classpath
>   Caused by: java.lang.ClassNotFoundException: classpath
>       at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
>       at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
>       at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
>       at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
>       at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:336)
>
>   Running on hadoop, using /x/home/hadoop_adm/opt/hadoop/bin/hadoop and HADOOP_CONF_DIR=
>   MAHOUT-JOB: /x/home/user/mahout-distribution-0.7/mahout-examples-0.7-job.jar
>   12/07/27 16:28:02 WARN driver.MahoutDriver: Unable to add class: wikipediaXMLSplitter
>   12/07/27 16:28:02 WARN driver.MahoutDriver: No wikipediaXMLSplitter.props found on classpath, will use command-line arguments only
>   Unknown program 'wikipediaXMLSplitter' chosen.
>   Valid program names are:
>     arff.vector: : Generate Vectors from an ARFF file or directory
>     baumwelch: : Baum-Welch algorithm for unsupervised HMM training
>     canopy: : Canopy clustering
>     cat: : Print a file or resource as the logistic regression models would see it
>     cleansvd: : Cleanup and verification of SVD output
>     clusterdump: : Dump cluster output to text
>     clusterpp: : Groups Clustering Output In Clusters
>     cmdump: : Dump confusion matrix in HTML or text formats
>     cvb: : LDA via Collapsed Variation Bayes (0th deriv. approx)
>     cvb0_local: : LDA via Collapsed Variation Bayes, in memory locally.
>     dirichlet: : Dirichlet Clustering
>     eigencuts: : Eigencuts spectral clustering
>     evaluateFactorization: : compute RMSE and MAE of a rating matrix factorization against probes
>     fkmeans: : Fuzzy K-means clustering
>     fpg: : Frequent Pattern Growth
>     hmmpredict: : Generate random sequence of observations by given HMM
>     itemsimilarity: : Compute the item-item-similarities for item-based collaborative filtering
>     kmeans: : K-means clustering
>     lucene.vector: : Generate Vectors from a Lucene index
>     matrixdump: : Dump matrix in CSV format
>     matrixmult: : Take the product of two matrices
>     meanshift: : Mean Shift clustering
>     minhash: : Run Minhash clustering
>     parallelALS: : ALS-WR factorization of a rating matrix
>     recommendfactorized: : Compute recommendations using the factorization of a rating matrix
>     recommenditembased: : Compute recommendations using item-based collaborative filtering
>     regexconverter: : Convert text files on a per line basis based on regular expressions
>     rowid: : Map SequenceFile<Text,VectorWritable> to {SequenceFile<IntWritable,VectorWritable>, SequenceFile<IntWritable,Text>}
>     rowsimilarity: : Compute the pairwise similarities of the rows of a matrix
>     runAdaptiveLogistic: : Score new production data using a probably trained and validated AdaptivelogisticRegression model
>     runlogistic: : Run a logistic regression model against CSV data
>     seq2encoded: : Encoded Sparse Vector generation from Text sequence files
>     seq2sparse: : Sparse Vector generation from Text sequence files
>     seqdirectory: : Generate sequence files (of Text) from a directory
>     seqdumper: : Generic Sequence File dumper
>     seqmailarchives: : Creates SequenceFile from a directory containing gzipped mail archives
>     seqwiki: : Wikipedia xml dump to sequence file
>     spectralkmeans: : Spectral k-means clustering
>     split: : Split Input data into test and train sets
>     splitDataset: : split a rating dataset into training and probe parts
>     ssvd: : Stochastic SVD
>     svd: : Lanczos Singular Value Decomposition
>     testnb: : Test the Vector-based Bayes classifier
>     trainAdaptiveLogistic: : Train an AdaptivelogisticRegression model
>     trainlogistic: : Train a logistic regression using stochastic gradient descent
>     trainnb: : Train the Vector-based Bayes classifier
>     transpose: : Take the transpose of a matrix
>     validateAdaptiveLogistic: : Validate an AdaptivelogisticRegression model against hold-out data set
>     vecdist: : Compute the distances between a set of Vectors (or Cluster or Canopy, they must fit in memory) and a list of Vectors
>     vectordump: : Dump vectors from a sequence file to text
>     viterbi: : Viterbi decoding of hidden states from given output states sequence
>
> ------------------------------
> If you reply to this email, your message will be added to the discussion below:
> http://lucene.472066.n3.nabble.com/Exception-in-thread-main-java-lang-NoClassDefFoundError-classpath-tp3997816.html

-----
THANKS AND REGARDS,
SYED ABDUL KATHER

--
View this message in context: http://lucene.472066.n3.nabble.com/Re-Exception-in-thread-main-java-lang-NoClassDefFoundError-classpath-tp3997866.html
Sent from the Mahout User List mailing list archive at Nabble.com.
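[Editor's note] One commonly reported cause of `ClassNotFoundException: classpath` in this setup (a guess, not confirmed anywhere in the thread): the `bin/mahout` launcher can invoke `hadoop classpath` to assemble its classpath, and Hadoop releases that predate that subcommand instead try to load a class literally named `classpath`, producing exactly this stack trace. A minimal probe under that assumption (the function name and messages are hypothetical, not part of Mahout):

```shell
# Sketch, assuming the failure comes from a Hadoop release lacking the
# `classpath` subcommand. The launcher to probe is passed as $1 so the
# check can also be exercised without a real Hadoop install.
check_hadoop_classpath() {
  if "$1" classpath >/dev/null 2>&1; then
    echo "supported"    # `hadoop classpath` printed a classpath and exited 0
  else
    echo "unsupported"  # subcommand missing: Hadoop tried to load class "classpath"
  fi
}

check_hadoop_classpath hadoop  # "unsupported" when hadoop is absent or too old
```

If the probe reports `unsupported`, upgrading Hadoop would be the next thing to try (or running Mahout without the cluster, if this release's launcher supports a local mode; that is an assumption to verify against the shipped `bin/mahout` script).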
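[Editor's note] Two further observations. First, the 0.7 program list above does include `seqwiki: : Wikipedia xml dump to sequence file`, so `seqwiki` may be the closest built-in replacement if `wikipediaXMLSplitter` is simply not shipped in this release. Second, for Syed's suggestion to verify the installation itself, a quick layout check along these lines can rule out a broken `MAHOUT_HOME` (the expected file names are assumptions based on the 0.7 binary distribution reported in the log):

```shell
# Sketch: a usable MAHOUT_HOME should at least contain an executable
# bin/mahout and the examples job jar that MahoutDriver reports loading.
check_mahout_home() {
  [ -x "$1/bin/mahout" ] || return 1                   # launcher present and executable?
  ls "$1"/mahout-examples-*-job.jar >/dev/null 2>&1    # examples job jar present?
}

if check_mahout_home "${MAHOUT_HOME:-/nonexistent}"; then
  echo "MAHOUT_HOME looks plausible"
else
  echo "MAHOUT_HOME is missing pieces; re-download or rebuild the distribution"
fi
```

This only checks layout, not correctness; if it passes, running `$MAHOUT_HOME/bin/mahout` with no arguments should print the same program list quoted above rather than a stack trace.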
