See <https://builds.apache.org/job/Mahout-Examples-Cluster-Reuters/127/changes>

Changes:

[ssc] MAHOUT-979 cleanup: removing unused imports

[ssc] MAHOUT-979 RowSimilarityJob should be able to infer the number of columns 
from the input matrix if not specified

------------------------------------------
[...truncated 5894 lines...]
[INFO] META-INF/maven/ already added, skipping
[INFO] META-INF/maven/org.apache.commons/ already added, skipping
[INFO] META-INF/ already added, skipping
[INFO] META-INF/MANIFEST.MF already added, skipping
[INFO] javax/ already added, skipping
[INFO] org/ already added, skipping
[INFO] org/apache/ already added, skipping
[INFO] org/w3c/ already added, skipping
[INFO] org/w3c/dom/ already added, skipping
[INFO] org/w3c/dom/html/ already added, skipping
[INFO] META-INF/ already added, skipping
[INFO] META-INF/MANIFEST.MF already added, skipping
[INFO] org/ already added, skipping
[INFO] org/uncommons/ already added, skipping
[INFO] org/uncommons/watchmaker/ already added, skipping
[INFO] META-INF/ already added, skipping
[INFO] META-INF/MANIFEST.MF already added, skipping
[INFO] org/ already added, skipping
[INFO] org/jfree/ already added, skipping
[INFO] META-INF/ already added, skipping
[INFO] META-INF/MANIFEST.MF already added, skipping
[INFO] org/ already added, skipping
[INFO] org/apache/ already added, skipping
[INFO] org/apache/mahout/ already added, skipping
[INFO] org/apache/mahout/text/ already added, skipping
[INFO] org/apache/mahout/fpm/ already added, skipping
[INFO] org/apache/mahout/fpm/pfpgrowth/ already added, skipping
[INFO] org/apache/mahout/cf/ already added, skipping
[INFO] org/apache/mahout/cf/taste/ already added, skipping
[INFO] org/apache/mahout/cf/taste/hadoop/ already added, skipping
[INFO] org/apache/mahout/clustering/ already added, skipping
[INFO] org/apache/mahout/clustering/minhash/ already added, skipping
[INFO] org/apache/mahout/classifier/ already added, skipping
[INFO] org/apache/mahout/classifier/sequencelearning/ already added, skipping
[INFO] org/apache/mahout/classifier/sequencelearning/hmm/ already added, 
skipping
[INFO] org/apache/mahout/classifier/df/ already added, skipping
[INFO] org/apache/mahout/classifier/df/mapreduce/ already added, skipping
[INFO] org/apache/mahout/classifier/sgd/ already added, skipping
[INFO] org/apache/mahout/classifier/bayes/ already added, skipping
[INFO] org/apache/mahout/ga/ already added, skipping
[INFO] org/apache/mahout/ga/watchmaker/ already added, skipping
[INFO] META-INF/maven/ already added, skipping
[INFO] META-INF/maven/org.apache.mahout/ already added, skipping
[INFO] [source:jar-no-fork {execution: attach-sources}]
[INFO] Building jar: 
<https://builds.apache.org/job/Mahout-Examples-Cluster-Reuters/ws/trunk/examples/target/mahout-examples-0.7-SNAPSHOT-sources.jar>
[INFO] [install:install {execution: default-install}]
[INFO] Installing 
<https://builds.apache.org/job/Mahout-Examples-Cluster-Reuters/ws/trunk/examples/target/mahout-examples-0.7-SNAPSHOT.jar>
 to 
/home/jenkins/.m2/repository/org/apache/mahout/mahout-examples/0.7-SNAPSHOT/mahout-examples-0.7-SNAPSHOT.jar
[INFO] Installing 
<https://builds.apache.org/job/Mahout-Examples-Cluster-Reuters/ws/trunk/examples/target/mahout-examples-0.7-SNAPSHOT-job.jar>
 to 
/home/jenkins/.m2/repository/org/apache/mahout/mahout-examples/0.7-SNAPSHOT/mahout-examples-0.7-SNAPSHOT-job.jar
[INFO] Installing 
<https://builds.apache.org/job/Mahout-Examples-Cluster-Reuters/ws/trunk/examples/target/mahout-examples-0.7-SNAPSHOT-sources.jar>
 to 
/home/jenkins/.m2/repository/org/apache/mahout/mahout-examples/0.7-SNAPSHOT/mahout-examples-0.7-SNAPSHOT-sources.jar
[INFO] ------------------------------------------------------------------------
[INFO] Building Mahout Release Package
[INFO]    task-segment: [clean, install]
[INFO] ------------------------------------------------------------------------
[INFO] [clean:clean {execution: default-clean}]
[INFO] [site:attach-descriptor {execution: default-attach-descriptor}]
[INFO] [assembly:single {execution: bin-assembly}]
[INFO] Assemblies have been skipped per configuration of the skipAssembly 
parameter.
[INFO] [assembly:single {execution: src-assembly}]
[INFO] Assemblies have been skipped per configuration of the skipAssembly 
parameter.
[INFO] [install:install {execution: default-install}]
[INFO] Installing 
<https://builds.apache.org/job/Mahout-Examples-Cluster-Reuters/ws/trunk/distribution/pom.xml>
 to 
/home/jenkins/.m2/repository/org/apache/mahout/mahout-distribution/0.7-SNAPSHOT/mahout-distribution-0.7-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] ------------------------------------------------------------------------
[INFO] Apache Mahout ......................................... SUCCESS [2.651s]
[INFO] Mahout Build Tools .................................... SUCCESS [0.795s]
[INFO] Mahout Math ........................................... SUCCESS [5.611s]
[INFO] Mahout Core ........................................... SUCCESS [22.148s]
[INFO] Mahout Integration .................................... SUCCESS [5.585s]
[INFO] Mahout Examples ....................................... SUCCESS [13.123s]
[INFO] Mahout Release Package ................................ SUCCESS [0.014s]
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 50 seconds
[INFO] Finished at: Wed May 09 19:37:56 UTC 2012
[INFO] Final Memory: 107M/394M
[INFO] ------------------------------------------------------------------------
[Mahout-Examples-Cluster-Reuters] $ /bin/bash -xe 
/tmp/hudson53588128879884388.sh
+ cd trunk
+ ./examples/bin/cluster-reuters.sh 1
ok. You chose 1 and we'll use kmeans Clustering
creating work directory at /tmp/mahout-work-jenkins
hadoop binary is not in PATH,HADOOP_HOME/bin,HADOOP_PREFIX/bin, running locally
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:<https://builds.apache.org/job/Mahout-Examples-Cluster-Reuters/ws/trunk/examples/target/mahout-examples-0.7-SNAPSHOT-job.jar!/org/slf4j/impl/StaticLoggerBinder.class>]
SLF4J: Found binding in 
[jar:<https://builds.apache.org/job/Mahout-Examples-Cluster-Reuters/ws/trunk/examples/target/dependency/slf4j-jcl-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class>]
SLF4J: Found binding in 
[jar:<https://builds.apache.org/job/Mahout-Examples-Cluster-Reuters/ws/trunk/examples/target/dependency/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class>]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
12/05/09 19:37:57 INFO vectorizer.SparseVectorsFromSequenceFiles: Maximum 
n-gram size is: 1
12/05/09 19:37:57 INFO vectorizer.SparseVectorsFromSequenceFiles: Minimum LLR 
value: 1.0
12/05/09 19:37:57 INFO vectorizer.SparseVectorsFromSequenceFiles: Number of 
reduce tasks: 1
12/05/09 19:37:57 INFO common.HadoopUtil: Deleting 
/tmp/mahout-work-jenkins/reuters-out-seqdir-sparse-kmeans/tokenized-documents
12/05/09 19:37:58 INFO input.FileInputFormat: Total input paths to process : 4
12/05/09 19:37:58 INFO mapred.JobClient: Running job: job_local_0001
12/05/09 19:37:59 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. 
And is in the process of commiting
12/05/09 19:37:59 INFO mapred.LocalJobRunner: 
12/05/09 19:37:59 INFO mapred.Task: Task attempt_local_0001_m_000000_0 is 
allowed to commit now
12/05/09 19:37:59 INFO output.FileOutputCommitter: Saved output of task 
'attempt_local_0001_m_000000_0' to 
/tmp/mahout-work-jenkins/reuters-out-seqdir-sparse-kmeans/tokenized-documents
12/05/09 19:37:59 INFO mapred.JobClient:  map 0% reduce 0%
12/05/09 19:38:01 INFO mapred.LocalJobRunner: 
12/05/09 19:38:01 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
12/05/09 19:38:01 INFO mapred.Task: Task:attempt_local_0001_m_000001_0 is done. 
And is in the process of commiting
12/05/09 19:38:01 INFO mapred.LocalJobRunner: 
12/05/09 19:38:01 INFO mapred.Task: Task attempt_local_0001_m_000001_0 is 
allowed to commit now
12/05/09 19:38:01 INFO output.FileOutputCommitter: Saved output of task 
'attempt_local_0001_m_000001_0' to 
/tmp/mahout-work-jenkins/reuters-out-seqdir-sparse-kmeans/tokenized-documents
12/05/09 19:38:02 INFO mapred.JobClient:  map 50% reduce 0%
12/05/09 19:38:04 INFO mapred.LocalJobRunner: 
12/05/09 19:38:04 INFO mapred.Task: Task 'attempt_local_0001_m_000001_0' done.
12/05/09 19:38:04 INFO mapred.Task: Task:attempt_local_0001_m_000002_0 is done. 
And is in the process of commiting
12/05/09 19:38:04 INFO mapred.LocalJobRunner: 
12/05/09 19:38:04 INFO mapred.Task: Task attempt_local_0001_m_000002_0 is 
allowed to commit now
12/05/09 19:38:04 INFO output.FileOutputCommitter: Saved output of task 
'attempt_local_0001_m_000002_0' to 
/tmp/mahout-work-jenkins/reuters-out-seqdir-sparse-kmeans/tokenized-documents
12/05/09 19:38:05 INFO mapred.JobClient:  map 66% reduce 0%
12/05/09 19:38:07 INFO mapred.LocalJobRunner: 
12/05/09 19:38:07 INFO mapred.Task: Task 'attempt_local_0001_m_000002_0' done.
12/05/09 19:38:07 INFO mapred.Task: Task:attempt_local_0001_m_000003_0 is done. 
And is in the process of commiting
12/05/09 19:38:07 INFO mapred.LocalJobRunner: 
12/05/09 19:38:07 INFO mapred.Task: Task attempt_local_0001_m_000003_0 is 
allowed to commit now
12/05/09 19:38:07 INFO output.FileOutputCommitter: Saved output of task 
'attempt_local_0001_m_000003_0' to 
/tmp/mahout-work-jenkins/reuters-out-seqdir-sparse-kmeans/tokenized-documents
12/05/09 19:38:08 INFO mapred.JobClient:  map 75% reduce 0%
12/05/09 19:38:10 INFO mapred.LocalJobRunner: 
12/05/09 19:38:10 INFO mapred.Task: Task 'attempt_local_0001_m_000003_0' done.
12/05/09 19:38:11 INFO mapred.JobClient:  map 100% reduce 0%
12/05/09 19:38:11 INFO mapred.JobClient: Job complete: job_local_0001
12/05/09 19:38:11 INFO mapred.JobClient: Counters: 8
12/05/09 19:38:11 INFO mapred.JobClient:   File Output Format Counters 
12/05/09 19:38:11 INFO mapred.JobClient:     Bytes Written=15194699
12/05/09 19:38:11 INFO mapred.JobClient:   FileSystemCounters
12/05/09 19:38:11 INFO mapred.JobClient:     FILE_BYTES_READ=144600100
12/05/09 19:38:11 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=136808819
12/05/09 19:38:11 INFO mapred.JobClient:   File Input Format Counters 
12/05/09 19:38:11 INFO mapred.JobClient:     Bytes Read=18547538
12/05/09 19:38:11 INFO mapred.JobClient:   Map-Reduce Framework
12/05/09 19:38:11 INFO mapred.JobClient:     Map input records=21578
12/05/09 19:38:11 INFO mapred.JobClient:     Spilled Records=0
12/05/09 19:38:11 INFO mapred.JobClient:     Map output records=21578
12/05/09 19:38:11 INFO mapred.JobClient:     SPLIT_RAW_BYTES=484
12/05/09 19:38:11 INFO common.HadoopUtil: Deleting 
/tmp/mahout-work-jenkins/reuters-out-seqdir-sparse-kmeans/wordcount
12/05/09 19:38:11 INFO input.FileInputFormat: Total input paths to process : 4
12/05/09 19:38:11 INFO mapred.JobClient: Running job: job_local_0002
12/05/09 19:38:11 INFO mapred.MapTask: io.sort.mb = 100
12/05/09 19:38:11 INFO mapred.MapTask: data buffer = 79691776/99614720
12/05/09 19:38:11 INFO mapred.MapTask: record buffer = 262144/327680
12/05/09 19:38:12 INFO mapred.MapTask: Spilling map output: record full = true
12/05/09 19:38:12 INFO mapred.MapTask: bufstart = 0; bufend = 3822789; bufvoid 
= 99614720
12/05/09 19:38:12 INFO mapred.MapTask: kvstart = 0; kvend = 262144; length = 
327680
12/05/09 19:38:12 INFO mapred.JobClient:  map 0% reduce 0%
12/05/09 19:38:12 INFO mapred.MapTask: Finished spill 0
12/05/09 19:38:12 INFO mapred.MapTask: Starting flush of map output
12/05/09 19:38:12 INFO mapred.MapTask: Finished spill 1
12/05/09 19:38:12 INFO mapred.Merger: Merging 2 sorted segments
12/05/09 19:38:12 INFO mapred.Merger: Down to the last merge-pass, with 2 
segments left of total size: 883020 bytes
12/05/09 19:38:13 INFO mapred.Task: Task:attempt_local_0002_m_000000_0 is done. 
And is in the process of commiting
12/05/09 19:38:14 WARN mapred.Task: Could not find output size 
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find 
output/file.out in any of the configured local directories
        at 
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:429)
        at 
org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:160)
        at 
org.apache.hadoop.mapred.MapOutputFile.getOutputFile(MapOutputFile.java:56)
        at org.apache.hadoop.mapred.Task.calculateOutputSize(Task.java:889)
        at org.apache.hadoop.mapred.Task.sendLastUpdate(Task.java:869)
        at org.apache.hadoop.mapred.Task.done(Task.java:820)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:374)
        at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
12/05/09 19:38:14 INFO mapred.LocalJobRunner: 
12/05/09 19:38:14 INFO mapred.Task: Task 'attempt_local_0002_m_000000_0' done.
12/05/09 19:38:14 INFO mapred.MapTask: io.sort.mb = 100
12/05/09 19:38:14 INFO mapred.MapTask: data buffer = 79691776/99614720
12/05/09 19:38:14 INFO mapred.MapTask: record buffer = 262144/327680
12/05/09 19:38:14 INFO mapred.MapTask: Spilling map output: record full = true
12/05/09 19:38:14 INFO mapred.MapTask: bufstart = 0; bufend = 3827681; bufvoid 
= 99614720
12/05/09 19:38:14 INFO mapred.MapTask: kvstart = 0; kvend = 262144; length = 
327680
12/05/09 19:38:15 INFO mapred.MapTask: Finished spill 0
12/05/09 19:38:15 INFO mapred.MapTask: Starting flush of map output
12/05/09 19:38:15 INFO mapred.JobClient:  map 100% reduce 0%
12/05/09 19:38:15 INFO mapred.MapTask: Finished spill 1
12/05/09 19:38:15 INFO mapred.Merger: Merging 2 sorted segments
12/05/09 19:38:15 INFO mapred.Merger: Down to the last merge-pass, with 2 
segments left of total size: 883196 bytes
12/05/09 19:38:15 INFO mapred.Task: Task:attempt_local_0002_m_000001_0 is done. 
And is in the process of commiting
12/05/09 19:38:17 INFO mapred.LocalJobRunner: 
12/05/09 19:38:17 INFO mapred.Task: Task 'attempt_local_0002_m_000001_0' done.
12/05/09 19:38:17 INFO mapred.MapTask: io.sort.mb = 100
12/05/09 19:38:17 INFO mapred.MapTask: data buffer = 79691776/99614720
12/05/09 19:38:17 INFO mapred.MapTask: record buffer = 262144/327680
12/05/09 19:38:17 INFO mapred.MapTask: Spilling map output: record full = true
12/05/09 19:38:17 INFO mapred.MapTask: bufstart = 0; bufend = 3833037; bufvoid 
= 99614720
12/05/09 19:38:17 INFO mapred.MapTask: kvstart = 0; kvend = 262144; length = 
327680
12/05/09 19:38:18 INFO mapred.MapTask: Finished spill 0
12/05/09 19:38:18 INFO mapred.MapTask: Starting flush of map output
12/05/09 19:38:18 INFO mapred.MapTask: Finished spill 1
12/05/09 19:38:18 INFO mapred.Merger: Merging 2 sorted segments
12/05/09 19:38:18 INFO mapred.Merger: Down to the last merge-pass, with 2 
segments left of total size: 873620 bytes
12/05/09 19:38:18 INFO mapred.Task: Task:attempt_local_0002_m_000002_0 is done. 
And is in the process of commiting
12/05/09 19:38:20 INFO mapred.LocalJobRunner: 
12/05/09 19:38:20 INFO mapred.Task: Task 'attempt_local_0002_m_000002_0' done.
12/05/09 19:38:20 INFO mapred.MapTask: io.sort.mb = 100
12/05/09 19:38:20 INFO mapred.MapTask: data buffer = 79691776/99614720
12/05/09 19:38:20 INFO mapred.MapTask: record buffer = 262144/327680
12/05/09 19:38:21 INFO mapred.MapTask: Spilling map output: record full = true
12/05/09 19:38:21 INFO mapred.MapTask: bufstart = 0; bufend = 3828324; bufvoid 
= 99614720
12/05/09 19:38:21 INFO mapred.MapTask: kvstart = 0; kvend = 262144; length = 
327680
12/05/09 19:38:21 INFO mapred.MapTask: Starting flush of map output
12/05/09 19:38:21 INFO mapred.MapTask: Finished spill 0
12/05/09 19:38:21 INFO mapred.MapTask: Finished spill 1
12/05/09 19:38:21 INFO mapred.Merger: Merging 2 sorted segments
12/05/09 19:38:21 INFO mapred.Merger: Down to the last merge-pass, with 2 
segments left of total size: 716933 bytes
12/05/09 19:38:21 INFO mapred.Task: Task:attempt_local_0002_m_000003_0 is done. 
And is in the process of commiting
12/05/09 19:38:23 INFO mapred.LocalJobRunner: 
12/05/09 19:38:23 INFO mapred.Task: Task 'attempt_local_0002_m_000003_0' done.
12/05/09 19:38:23 WARN mapred.LocalJobRunner: job_local_0002
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find 
output/file.out in any of the configured local directories
        at 
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:429)
        at 
org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:160)
        at 
org.apache.hadoop.mapred.MapOutputFile.getOutputFile(MapOutputFile.java:56)
        at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:236)
12/05/09 19:38:24 INFO mapred.JobClient: Job complete: job_local_0002
12/05/09 19:38:24 INFO mapred.JobClient: Counters: 11
12/05/09 19:38:24 INFO mapred.JobClient:   FileSystemCounters
12/05/09 19:38:24 INFO mapred.JobClient:     FILE_BYTES_READ=315361258
12/05/09 19:38:24 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=273017150
12/05/09 19:38:24 INFO mapred.JobClient:   File Input Format Counters 
12/05/09 19:38:24 INFO mapred.JobClient:     Bytes Read=15194699
12/05/09 19:38:24 INFO mapred.JobClient:   Map-Reduce Framework
12/05/09 19:38:24 INFO mapred.JobClient:     Map output materialized 
bytes=3356777
12/05/09 19:38:24 INFO mapred.JobClient:     Combine output records=190518
12/05/09 19:38:24 INFO mapred.JobClient:     Map input records=21578
12/05/09 19:38:24 INFO mapred.JobClient:     Spilled Records=381036
12/05/09 19:38:24 INFO mapred.JobClient:     Map output bytes=22501851
12/05/09 19:38:24 INFO mapred.JobClient:     Combine input records=1540960
12/05/09 19:38:24 INFO mapred.JobClient:     Map output records=1540960
12/05/09 19:38:24 INFO mapred.JobClient:     SPLIT_RAW_BYTES=640
Exception in thread "main" java.lang.IllegalStateException: Job failed!
        at 
org.apache.mahout.vectorizer.DictionaryVectorizer.startWordCounting(DictionaryVectorizer.java:367)
        at 
org.apache.mahout.vectorizer.DictionaryVectorizer.createTermFrequencyVectors(DictionaryVectorizer.java:179)
        at 
org.apache.mahout.vectorizer.SparseVectorsFromSequenceFiles.run(SparseVectorsFromSequenceFiles.java:271)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at 
org.apache.mahout.vectorizer.SparseVectorsFromSequenceFiles.main(SparseVectorsFromSequenceFiles.java:55)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at 
org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
        at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
        at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:188)
Build step 'Execute shell' marked build as failure