I got the errors after untarring the 0.2.0 RC1 and running an MR job on it.

I downloaded trunk now and it seems the errors went away.
I think HBASE-770 fixed my first errors,
but now I am getting a different error:

2008-07-23 17:30:40,300 INFO org.apache.hadoop.hbase.client.HConnectionManager$TableServers: Attempt 0 of 5 failed with <java.net.SocketTimeoutException: timed out waiting for rpc response>. Retrying after sleep of 10000
2008-07-23 17:31:50,307 INFO org.apache.hadoop.hbase.client.HConnectionManager$TableServers: Attempt 1 of 5 failed with <java.net.SocketTimeoutException: timed out waiting for rpc response>. Retrying after sleep of 10000
2008-07-23 17:33:00,312 INFO org.apache.hadoop.hbase.client.HConnectionManager$TableServers: Attempt 2 of 5 failed with <java.net.SocketTimeoutException: timed out waiting for rpc response>. Retrying after sleep of 10000
2008-07-23 17:34:10,318 INFO org.apache.hadoop.hbase.client.HConnectionManager$TableServers: Attempt 3 of 5 failed with <java.net.SocketTimeoutException: timed out waiting for rpc response>. Retrying after sleep of 10000
2008-07-23 17:35:20,391 WARN org.apache.hadoop.mapred.TaskTracker: Error running child
org.apache.hadoop.hbase.MasterNotRunningException: xx.xx.xx.xx:60000
at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:219)
at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:431)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:124)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:109)
at com.compspy.mapred.RecordImport$MapClass.getTable(RecordImport.java:50)
at com.compspy.mapred.RecordImport$MapClass.map(RecordImport.java:76)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:47)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:219)
at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2124)

I masked the IP above; it was correct, and the port is correct. I checked, and the master is alive and well when I get these errors while running an MR job to import records.
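
For context, here is a minimal sketch of what a mapper like the one in the trace does, assuming the 0.2-era client API (HBaseConfiguration and an HTable(conf, tableName) constructor). The class name, table name, and the commented-out hbase.master property are illustrative assumptions, not the actual RecordImport code. The point is that the map task's HBaseConfiguration has to resolve the real master address, typically from an hbase-site.xml on the task's classpath; if it cannot, the client keeps timing out against the wrong address, much like the log above.

import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

// Sketch only: lazily open an HTable from inside a map task, the same pattern
// as the RecordImport$MapClass.getTable frame in the stack trace.
public class RecordImportSketch {

  private HTable table;

  private HTable getTable() throws IOException {
    if (table == null) {
      // Reads hbase-default.xml and hbase-site.xml from the classpath; if the
      // task's classpath has no hbase-site.xml, the master address falls back
      // to the default and the connection attempts time out as shown above.
      HBaseConfiguration conf = new HBaseConfiguration();
      // Assumed property name for the 0.2-era client; normally set in hbase-site.xml:
      // conf.set("hbase.master", "host:60000");
      table = new HTable(conf, "records");   // "records" is a made-up table name
    }
    return table;
  }
}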

Billy



"Jean-Daniel Cryans" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
Billy,

I looked at the code where the exceptions were thrown and something is
weird. For the NPE, line 291 in HbaseObjectWritable looks like this:
  public static Object readObject(DataInput in,
      HbaseObjectWritable objectWritable, Configuration conf)
  throws IOException {
    Class<?> declaredClass = CODE_TO_CLASS.get(in.readByte());
    Object instance;
    if (declaredClass.isPrimitive()) {             // primitive types
      if (declaredClass == Boolean.TYPE) {         // boolean
        instance = Boolean.valueOf(in.readBoolean());

The chance that declaredClass would be null is low. Also, regarding the other
exception, line 821 of HCM reads:

    server.getRegionInfo(HRegionInfo.ROOT_REGIONINFO.getRegionName());
    if (LOG.isDebugEnabled()) {
      LOG.debug("Found ROOT " + HRegionInfo.ROOT_REGIONINFO);
    }

So I clearly see that the "at $Proxy3.getRegionInfo(Unknown Source)" call that
comes after it is not what is called there. So I'm wondering: exactly which
version of HBase are you running?
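
To make the mismatch theory concrete, here is a small self-contained illustration (not HBase code; the class and values are invented for the example) of how a CODE_TO_CLASS-style lookup returns null when the byte read off the wire was written by a peer using a different set of class codes, so the very next isPrimitive() call throws the NullPointerException seen in the trace:

import java.util.HashMap;
import java.util.Map;

// Illustration only: why CODE_TO_CLASS.get(in.readByte()) followed by
// declaredClass.isPrimitive() can throw NPE when the two sides of the RPC
// do not agree on the code-to-class mapping (e.g. mixed HBase versions).
public class CodeToClassMismatch {

  private static final Map<Byte, Class<?>> CODE_TO_CLASS = new HashMap<Byte, Class<?>>();
  static {
    CODE_TO_CLASS.put((byte) 1, Boolean.TYPE);   // codes this side knows about
  }

  public static void main(String[] args) {
    byte codeFromWire = 42;                      // a code only the other side uses
    Class<?> declaredClass = CODE_TO_CLASS.get(codeFromWire);
    // declaredClass is null here, so this line throws NullPointerException,
    // just like HbaseObjectWritable.readObject at line 291 in the report.
    System.out.println(declaredClass.isPrimitive());
  }
}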

Thx for looking at this and thx for testing!

J-D

On Tue, Jul 22, 2008 at 10:10 PM, Billy Pearson <[EMAIL PROTECTED]>
wrote:

I get this when trying to run an MR job on the release candidate:

2008-07-22 21:05:00,237 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=MAP, sessionId=
2008-07-22 21:05:00,254 WARN org.apache.hadoop.fs.FileSystem: "64.69.33.145:9000" is a deprecated filesystem name. Use "hdfs://64.69.33.145:9000/" instead.
2008-07-22 21:05:00,378 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 0
2008-07-22 21:05:00,379 WARN org.apache.hadoop.fs.FileSystem: "64.69.33.145:9000" is a deprecated filesystem name. Use "hdfs://64.69.33.145:9000/" instead.
2008-07-22 21:05:00,797 INFO org.apache.hadoop.ipc.Client:
java.lang.NullPointerException
at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:291)
at org.apache.hadoop.hbase.io.HbaseObjectWritable.readFields(HbaseObjectWritable.java:166)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:306)

2008-07-22 21:06:00,804 WARN org.apache.hadoop.mapred.TaskTracker: Error running child
java.lang.reflect.UndeclaredThrowableException
at $Proxy3.getRegionInfo(Unknown Source)
at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:821)
at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:458)
at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:440)
at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:575)
at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:468)
at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:432)
at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:511)
at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:472)
at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:432)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:124)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:109)
at com.compspy.mapred.RecordImport$MapClass.getTable(RecordImport.java:50)
at com.compspy.mapred.RecordImport$MapClass.map(RecordImport.java:76)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:47)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:219)
at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2124)
Caused by: java.net.SocketTimeoutException: timed out waiting for rpc response
at org.apache.hadoop.ipc.Client.call(Client.java:559)
at org.apache.hadoop.hbase.ipc.HbaseRPC$Invoker.invoke(HbaseRPC.java:213)
... 17 more


Not sure what the problem is here; it's the same job I've been running for over a month, and now today I get this from RC1 and trunk on clean installs of Hadoop
0.17.1 and HBase.

Billy

"stack" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]

 The first 0.2.0 release candidate is available for download:


http://people.apache.org/~stack/hbase-0.2.0-candidate-1/

Please take this release candidate for a spin. Check the documentation, verify
that the unit tests all complete on your platform, etc.

Should we release this candidate as hbase 0.2.0?  Vote yes or no before
Friday, July 25th.

Release 0.2.0 has over 240 issues resolved [1] since the branch for 0.1
hbase was made. Be warned that hbase 0.2.0 is not backward compatible with
the hbase 0.1 API. See Izaak Rubins' notes [2] on the high-level API
differences between 0.1 and 0.2. For notes on how to migrate your 0.1-era
hbase data to 0.2, see Izaak's migration guide [3].

Yours,
The HBase Team

1.
https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12312955&styleName=Html&projectId=12310753&Create=Create
2. http://wiki.apache.org/hadoop/Hbase/Plan-0.2/APIChanges
3. http://wiki.apache.org/hadoop/Hbase/HowToMigrate






