I'm running into an error when trying a modified version of the "grades"
example created by Naama (which is a fantastic example). The wrinkle with
mine is that I'm trying to compute the average field value length by field
name, and I'm doing this using the 0.20 API. Here's the error I'm getting
(near the bottom):

$ java -Xmx256M -jar compare.jar 27
09/09/29 10:57:38 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
09/09/29 10:57:39 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.2.0--1, built on 05/15/2009 06:05 GMT
09/09/29 10:57:39 INFO zookeeper.ZooKeeper: Client environment:host.name=kanwlap151786.na.srcp.net
09/09/29 10:57:39 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_10
09/09/29 10:57:39 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
09/09/29 10:57:39 INFO zookeeper.ZooKeeper: Client environment:java.home=c:\jdk1.6.0_10\jre
09/09/29 10:57:39 INFO zookeeper.ZooKeeper: Client environment:java.class.path=compare.jar
09/09/29 10:57:39 INFO zookeeper.ZooKeeper: Client environment:java.library.path=c:\jdk1.6.0_10\bin;.;C:\WINDOWS\Sun\Java\bin;C:\WINDOWS\system32;C:\WINDOWS;c:\jdk1.6.0_10\bin;C:\cygwin\usr\local\bin;C:\cygwin\bin;C:\cygwin\bin;C:\cygwin\usr\X11R6\bin;c:\oracle;c:\WINDOWS\system32;c:\WINDOWS;c:\WINDOWS\System32\Wbem;c:\Program Files\Intel\WiFi\bin\;c:\Program Files\Common Files\Roxio Shared\DLLShared\;c:\Program Files\Common Files\Roxio Shared\9.0\DLLShared\;c:\Program Files\IDM Computer Solutions\UltraEdit-32
09/09/29 10:57:39 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=C:\DOCUME~1\terryg\LOCALS~1\Temp\
09/09/29 10:57:39 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
09/09/29 10:57:39 INFO zookeeper.ZooKeeper: Client environment:os.name=Windows XP
09/09/29 10:57:39 INFO zookeeper.ZooKeeper: Client environment:os.arch=x86
09/09/29 10:57:39 INFO zookeeper.ZooKeeper: Client environment:os.version=5.1
09/09/29 10:57:39 INFO zookeeper.ZooKeeper: Client environment:user.name=tgruenewald
09/09/29 10:57:39 INFO zookeeper.ZooKeeper: Client environment:user.home=C:\Documents and Settings\terryg
09/09/29 10:57:39 INFO zookeeper.ZooKeeper: Client environment:user.dir=c:\Documents and Settings\terryg\Desktop
09/09/29 10:57:39 INFO zookeeper.ZooKeeper: Initiating client connection, host=localhost:2181 sessionTimeout=60000 watcher=org.apache.hadoop.hbase.client.HConnectionmanager$clientzkwatc...@ba6c83
09/09/29 10:57:39 INFO zookeeper.ClientCnxn: zookeeper.disableAutoWatchReset is false
09/09/29 10:57:39 INFO zookeeper.ClientCnxn: Attempting connection to server localhost/127.0.0.1:2181
09/09/29 10:57:39 INFO zookeeper.ClientCnxn: Priming connection to java.nio.channels.SocketChannel[connected local=/127.0.0.1:3151 remote=localhost/127.0.0.1:2181]
09/09/29 10:57:39 INFO zookeeper.ClientCnxn: Server connection successful
09/09/29 10:57:39 INFO mapreduce.TableInputFormatBase: split: 0->kanwlap151786.na.srcp.net:,
09/09/29 10:57:39 INFO mapred.JobClient: Running job: job_local_0001
09/09/29 10:57:39 INFO mapreduce.TableInputFormatBase: split: 0->kanwlap151786.na.srcp.net:,
09/09/29 10:57:39 INFO mapred.MapTask: io.sort.mb = 100
09/09/29 10:57:39 INFO mapred.MapTask: data buffer = 79691776/99614720
09/09/29 10:57:39 INFO mapred.MapTask: record buffer = 262144/327680
09/09/29 10:57:39 INFO mapred.MapTask: Starting flush of map output
09/09/29 10:57:39 INFO mapred.MapTask: Finished spill 0
09/09/29 10:57:39 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
09/09/29 10:57:39 INFO mapred.LocalJobRunner:
09/09/29 10:57:39 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000000_0' done.
09/09/29 10:57:39 INFO mapred.LocalJobRunner:
09/09/29 10:57:39 INFO mapred.Merger: Merging 1 sorted segments
09/09/29 10:57:39 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 418 bytes
09/09/29 10:57:39 INFO mapred.LocalJobRunner:
09/09/29 10:57:39 WARN mapred.LocalJobRunner: job_local_0001
java.lang.ClassCastException: org.apache.hadoop.hbase.io.ImmutableBytesWritable cannot be cast to org.apache.hadoop.io.Text
        at Compare$MyReducer.reduce(Compare.java:1)
        at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:174)
        at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:543)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:410)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:215)
09/09/29 10:57:40 INFO mapred.JobClient:  map 100% reduce 0%
09/09/29 10:57:40 INFO mapred.JobClient: Job complete: job_local_0001
09/09/29 10:57:40 INFO mapred.JobClient: Counters: 12
09/09/29 10:57:40 INFO mapred.JobClient:   FileSystemCounters
09/09/29 10:57:40 INFO mapred.JobClient:     FILE_BYTES_READ=20937031
09/09/29 10:57:40 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=21121188
09/09/29 10:57:40 INFO mapred.JobClient:   Map-Reduce Framework
09/09/29 10:57:40 INFO mapred.JobClient:     Reduce input groups=0
09/09/29 10:57:40 INFO mapred.JobClient:     Combine output records=0
09/09/29 10:57:40 INFO mapred.JobClient:     Map input records=2
09/09/29 10:57:40 INFO mapred.JobClient:     Reduce shuffle bytes=0
09/09/29 10:57:40 INFO mapred.JobClient:     Reduce output records=0
09/09/29 10:57:40 INFO mapred.JobClient:     Spilled Records=2
09/09/29 10:57:40 INFO mapred.JobClient:     Map output bytes=410
09/09/29 10:57:40 INFO mapred.JobClient:     Combine input records=0
09/09/29 10:57:40 INFO mapred.JobClient:     Map output records=2
09/09/29 10:57:40 INFO mapred.JobClient:     Reduce input records=0


Here's my code:
http://www.nabble.com/file/p25665901/Compare.java
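
In case the attachment doesn't come through, here's a condensed sketch of the
kind of thing I'm trying to do. The class names, the field-length averaging,
and the use of Result.raw()/KeyValue below are illustrative placeholders, not
the exact contents of Compare.java:

import java.io.IOException;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class CompareSketch {

    // Map: for every cell in the scanned row, emit (field/qualifier name, value length).
    public static class LengthMapper extends TableMapper<Text, IntWritable> {
        @Override
        protected void map(ImmutableBytesWritable row, Result values, Context context)
                throws IOException, InterruptedException {
            for (KeyValue kv : values.raw()) {
                context.write(new Text(Bytes.toString(kv.getQualifier())),
                              new IntWritable(kv.getValueLength()));
            }
        }
    }

    // Reduce: average the value lengths collected for each field name.
    public static class AvgLengthReducer
            extends Reducer<Text, IntWritable, Text, DoubleWritable> {
        @Override
        protected void reduce(Text field, Iterable<IntWritable> lengths, Context context)
                throws IOException, InterruptedException {
            long total = 0;
            long count = 0;
            for (IntWritable len : lengths) {
                total += len.get();
                count++;
            }
            context.write(field, new DoubleWritable((double) total / count));
        }
    }
}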

What I'm not getting is how the "plumbing" works when the mapper's output
key/value types are different from the input types the reducer expects. If
anyone could offer any suggestions, I would appreciate it.
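
For what it's worth, my rough understanding of where the 0.20 API wants those
types declared is the driver below. It refers to the placeholder classes from
the sketch above rather than my real code, and the table name and output path
are made up. As far as I can tell, the key/value classes passed to
TableMapReduceUtil.initTableMapperJob become the job's map output classes, so
they need to line up with what the reducer's reduce() signature declares:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class CompareDriverSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new HBaseConfiguration();
        Job job = new Job(conf, "average field value length");
        job.setJarByClass(CompareDriverSketch.class);

        // Declares the map output key/value classes on the job via
        // initTableMapperJob; these have to match what the reducer's
        // reduce() signature expects (Text / IntWritable here).
        Scan scan = new Scan();
        TableMapReduceUtil.initTableMapperJob(
                "mytable",                          // placeholder table name
                scan,
                CompareSketch.LengthMapper.class,   // mapper from the sketch above
                Text.class,                         // map output key class
                IntWritable.class,                  // map output value class
                job);

        job.setReducerClass(CompareSketch.AvgLengthReducer.class);
        job.setOutputKeyClass(Text.class);          // final (reduce) output types
        job.setOutputValueClass(DoubleWritable.class);
        FileOutputFormat.setOutputPath(job, new Path("/tmp/field-lengths")); // placeholder path

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Is that the right way to think about it, or is there some other place the
intermediate types have to be declared?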
