Hi, I'm trying to connect from a Java client running on a Windows desktop machine to a remote HBase cluster running in distributed mode (atop a 2-node Hadoop cluster).
1. On the master (namenode) node:

```
hduser@cldx-1139-1033:~/hadoop_ecosystem/apache_hbase/hbase_installation/hbase-0.94.6.1/bin$ jps
9161 HMaster
4536 SecondaryNameNode
4368 DataNode
4645 JobTracker
8395 Jps
4813 TaskTracker
4179 NameNode
7717 Main
```

2. On the slave (datanode) node:

```
hduser@cldx-1140-1034:~$ jps
5677 HRegionServer
5559 HQuorumPeer
2634 TaskTracker
3260 Jps
2496 DataNode
```

3. I also connected to the shell, created an HBase table, and was able to scan it:

```
hbase(main):004:0> scan 'CUSTOMERS'
ROW           COLUMN+CELL
 CUSTID12345  column=CUSTOMER_INFO:EMAIL, timestamp=1365600369284, [email protected]
 CUSTID12345  column=CUSTOMER_INFO:NAME, timestamp=1365600052104, value=Omkar Joshi
 CUSTID614    column=CUSTOMER_INFO:NAME, timestamp=1365601350972, value=Prachi Shah
2 row(s) in 0.8760 seconds
```

4. The hbase-site.xml has the following configuration:

```xml
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>cldx-1140-1034</value>
    <description>The directory shared by RegionServers.</description>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hduser/hadoop_ecosystem/apache_hbase/zk_datadir</value>
    <description>Property from ZooKeeper's config zoo.cfg.
    The directory where the snapshot is stored.</description>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://cldx-1139-1033:9000/hbase</value>
    <description>The directory shared by RegionServers.</description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in. Possible values are
    false: standalone and pseudo-distributed setups with managed ZooKeeper
    true: fully-distributed with unmanaged ZooKeeper Quorum (see hbase-env.sh)</description>
  </property>
</configuration>
```

5. Hadoop's core-site.xml has the following settings:

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://cldx-1139-1033:9000</value>
  </property>
</configuration>
```
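As a side note on how the client resolves this configuration: since hbase-site.xml above does not set hbase.zookeeper.property.clientPort, the client will combine each quorum host with the default ZooKeeper client port, 2181. The small stdlib-only sketch below (QuorumCheck and its expandQuorum helper are hypothetical names, not HBase API) mimics that expansion, just to make explicit which host:port the Windows client must be able to resolve and reach:

```java
import java.util.ArrayList;
import java.util.List;

public class QuorumCheck {

    // Expand an hbase.zookeeper.quorum value into host:port pairs,
    // applying a default client port (2181 is ZooKeeper's default)
    // to any host that does not specify one explicitly.
    static List<String> expandQuorum(String quorum, int defaultPort) {
        List<String> servers = new ArrayList<>();
        for (String host : quorum.split(",")) {
            host = host.trim();
            servers.add(host.contains(":") ? host : host + ":" + defaultPort);
        }
        return servers;
    }

    public static void main(String[] args) {
        // The quorum value from the hbase-site.xml above.
        System.out.println(expandQuorum("cldx-1140-1034", 2181));
        // prints [cldx-1140-1034:2181]
    }
}
```

So the Windows machine needs a hosts-file (or DNS) entry for the quorum node cldx-1140-1034 as well, not only for the master.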
6. My Java client class is:

```java
package client.hbase;

import java.io.IOException;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.RegexStringComparator;
import org.apache.hadoop.hbase.filter.ValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseCRUD {

    private static Configuration config;
    private static HBaseAdmin hbaseAdmin;
    private static HTablePool hTablePool;

    static {
        config = HBaseConfiguration.create();
        config.set("hbase.zookeeper.quorum", "172.25.6.69");
        hTablePool = new HTablePool(config, 2);
    }

    public static void main(String[] args) throws IOException {
        HBaseCRUD hbaseCRUD = new HBaseCRUD();
        /* hbaseCRUD.createTables("CUSTOMERS", "CUSTOMER_INFO");
           hbaseCRUD.populateTableData("CUSTOMERS"); */
        hbaseCRUD.scanTable("CUSTOMERS", "CUSTOMER_INFO", "EMAIL");
    }

    private void createTables(String tableName, String... columnFamilyNames)
            throws IOException {
        HTableDescriptor tableDesc = new HTableDescriptor(tableName);
        if (!(columnFamilyNames == null || columnFamilyNames.length == 0)) {
            for (String columnFamilyName : columnFamilyNames) {
                HColumnDescriptor columnFamily = new HColumnDescriptor(columnFamilyName);
                tableDesc.addFamily(columnFamily);
            }
        }
        hbaseAdmin.createTable(tableDesc);
    }

    private void populateTableData(String tableName) throws IOException {
        HTableInterface tbl = hTablePool.getTable(Bytes.toBytes(tableName));
        List<Put> tblRows = getTableData(tableName);
        tbl.close();
    }

    private List<Put> getTableData(String tableName) {
        if (tableName == null || tableName.isEmpty())
            return null;
        /* Pull data from wherever located */
        if (tableName.equalsIgnoreCase("CUSTOMERS")) {
            /* Put p1 = new Put();
               p1.add(Bytes.toBytes("CUSTOMER_INFO"), Bytes.toBytes("NAME"), value); */
        }
        return null;
    }

    private void scanTable(String tableName, String columnFamilyName,
            String... columnNames) throws IOException {
        System.out.println("In HBaseCRUD.scanTable(...)");
        Scan scan = new Scan();
        if (!(columnNames == null || columnNames.length <= 0)) {
            for (String columnName : columnNames) {
                scan.addColumn(Bytes.toBytes(columnFamilyName),
                        Bytes.toBytes(columnName));
            }
            Filter filter = new ValueFilter(CompareFilter.CompareOp.EQUAL,
                    new RegexStringComparator("lntinfotech"));
            scan.setFilter(filter);
        }
        HTableInterface tbl = hTablePool.getTable(Bytes.toBytes(tableName));
        ResultScanner scanResults = tbl.getScanner(scan);
        for (Result result : scanResults) {
            System.out.println("The result is " + result);
        }
        tbl.close();
    }
}
```

7. The exception I get is:

```
In HBaseCRUD.scanTable(...)
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Apr 10, 2013 4:24:54 PM org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper <init>
INFO: The identifier of this process is 3648@INFVA03351
Apr 10, 2013 4:24:56 PM org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper retryOrThrow
WARNING: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
Apr 10, 2013 4:24:56 PM org.apache.hadoop.hbase.util.RetryCounter sleepUntilNextRetry
INFO: Sleeping 2000ms before retry #1...
Apr 10, 2013 4:24:58 PM org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper retryOrThrow
WARNING: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
Apr 10, 2013 4:24:58 PM org.apache.hadoop.hbase.util.RetryCounter sleepUntilNextRetry
INFO: Sleeping 4000ms before retry #2...
```

I have also made the master's entry in my local machine's hosts file. What can be the error?

Thanks and regards!
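P.S. Since the ConnectionLoss retries suggest the client never reaches ZooKeeper at all, one quick check from the Windows machine is raw TCP reachability of the quorum node on the ZooKeeper client port (assumed to be the default 2181, since hbase-site.xml does not set hbase.zookeeper.property.clientPort). A stdlib-only sketch (ZkReachability and isReachable are hypothetical names, not HBase API):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ZkReachability {

    // Returns true if a plain TCP connection to host:port succeeds
    // within timeoutMs; false on refusal, timeout, or DNS failure.
    static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Host and port taken from the cluster config above:
        // the quorum node and ZooKeeper's default client port.
        System.out.println(isReachable("cldx-1140-1034", 2181, 3000));
    }
}
```

If this prints false, the problem is DNS/hosts resolution or a firewall between the Windows desktop and the quorum node, not the HBase client code itself.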
