Hello,

I'm trying to start HBase, using both HBase and Hadoop from trunk.

When I start HBase, I get the following error:

------------cut-------------
2009-01-29 16:42:12,609 INFO org.apache.hadoop.hbase.master.HMaster: vmName=Java HotSpot(TM) Server VM, vmVendor=Sun Microsystems Inc., vmVersion=11.0-b15
2009-01-29 16:42:12,609 INFO org.apache.hadoop.hbase.master.HMaster: vmInputArguments=[-Xmx1000m, -XX:+HeapDumpOnOutOfMemoryError, -Dhbase.log.dir=/opt/hbase/bin/../logs, -Dhbase.log.file=hbase-hadoop-master-tobeThink.log, -Dhbase.home.dir=/opt/hbase/bin/.., -Dhbase.id.str=hadoop, -Dhbase.root.logger=INFO,DRFA, -Djava.library.path=/opt/hbase/bin/../lib/native/Linux-i386-32]
2009-01-29 16:42:13,059 ERROR org.apache.hadoop.hbase.master.HMaster: Can not start master
java.io.IOException: Call to tobethink.pappiptek.lipi.go.id/192.168.107.119:54310 failed on local exception: null
        at org.apache.hadoop.ipc.Client.call(Client.java:699)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
        at $Proxy0.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:319)
        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:104)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:177)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:74)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1367)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:56)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1379)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:215)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:120)
        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:186)
        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:156)
        at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:96)
        at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:78)
        at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:978)
        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1022)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:375)
        at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:493)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:438)
---------cut--------------------------------


Here is my hbase-site.xml:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://tobethink.pappiptek.lipi.go.id:54310/user/hadoop/hbase</value>
    <description>The directory shared by region servers.
    Should be fully-qualified to include the filesystem to use.
    E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR
    </description>
  </property>

  <property>
    <name>hbase.master</name>
    <value>local</value>
    <description>The host and port that the HBase master runs at.
    A value of 'local' runs the master and a regionserver in
    a single process.
    </description>
  </property>

  <property>
    <name>hbase.master.info.port</name>
    <value>60010</value>
    <description>The port for the hbase master web UI
    Set to -1 if you do not want the info server to run.
    </description>
  </property>

</configuration>
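
For reference, my understanding is that the host and port in hbase.rootdir above are supposed to match the namenode address configured on the Hadoop side (fs.default.name, in core-site.xml or hadoop-site.xml depending on the trunk revision). I'm assuming something along these lines on the Hadoop side; please correct me if that expectation is wrong:

<!-- Assumed Hadoop-side configuration, shown only for comparison:
     the authority here should match the hbase.rootdir value above. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://tobethink.pappiptek.lipi.go.id:54310</value>
  </property>
</configuration>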

Thanks!

Best Regards,
Wildan

-- 
---
tobeThink!
www.tobethink.com

Aligning IT and Education

>> 021-99325243
Y! : hawking_123
LinkedIn : http://www.linkedin.com/in/wildanmaulana