Re: spark with standalone HBase

2015-04-30 Thread Akshat Aranya
Looking at your classpath, it looks like you've compiled Spark yourself.
Depending on which version of Hadoop you've compiled against (it looks like
Hadoop 2.2 in your case), Spark will pull in its own version of
protobuf.  You should try making sure that both HBase and Spark are
compiled against the same version of Hadoop.
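
For example, something along these lines (a sketch only; the exact Maven
profiles and versions depend on your Spark and HBase releases):

  # Build Spark against Hadoop 2.2 (Spark 1.x-style build)
  mvn -Phadoop-2.2 -Dhadoop.version=2.2.0 -DskipTests clean package

  # Build HBase against the same Hadoop 2 line (assuming a release that
  # supports the hadoop.profile switch)
  mvn -Dhadoop.profile=2.0 -DskipTests clean install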

On Thu, Apr 30, 2015 at 6:54 AM, Ted Yu  wrote:

> The error indicates incompatible protobuf versions.
>
> Please take a look at 4.1.1 under
> http://hbase.apache.org/book.html#basic.prerequisites
>
> Cheers
>
> On Thu, Apr 30, 2015 at 3:49 AM, Saurabh Gupta 
> wrote:
>
>> [quoted message and stack trace trimmed; the full text appears in
>> Saurabh's message below]

Re: spark with standalone HBase

2015-04-30 Thread Ted Yu
The error indicates incompatible protobuf versions.

Please take a look at 4.1.1 under
http://hbase.apache.org/book.html#basic.prerequisites
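
If you want to confirm the mismatch, a small probe along these lines (an
illustrative sketch, not part of HBase; run it with the same classpath as
the Spark job) prints which jars supply the protobuf runtime and the
generated HBase classes:

public class ProtobufProbe {
    public static void main(String[] args) {
        // Where the protobuf runtime is loaded from.
        System.out.println(com.google.protobuf.Message.class
                .getProtectionDomain().getCodeSource().getLocation());
        // Where the HBase-generated protobuf classes are loaded from. If the
        // two jars target different protobuf lineages (2.4.x vs 2.5.x), a
        // VerifyError like the one above is the typical symptom.
        System.out.println(
                org.apache.hadoop.hbase.protobuf.generated.ClientProtos.class
                        .getProtectionDomain().getCodeSource().getLocation());
    }
}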

Cheers

On Thu, Apr 30, 2015 at 3:49 AM, Saurabh Gupta 
wrote:

> [quoted message and stack trace trimmed; the full text appears in
> Saurabh's message below]

Re: spark with standalone HBase

2015-04-30 Thread Saurabh Gupta
I was able to solve the earlier issue by setting:

SparkConf sconf = new SparkConf().setAppName("App").setMaster("local")
and

conf.set("zookeeper.znode.parent", "/hbase-unsecure")
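
Put together, the relevant wiring looks roughly like this (a minimal Java
sketch in the spirit of HBaseTest; the class name and the INPUT_TABLE line
are illustrative, not from the stock example):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class HBaseCount {
    public static void main(String[] args) {
        SparkConf sconf = new SparkConf().setAppName("App").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(sconf);

        Configuration conf = HBaseConfiguration.create();
        conf.set("zookeeper.znode.parent", "/hbase-unsecure");
        // TableInputFormat needs the table name; without it, getSplits()
        // later fails with "No table was provided."
        conf.set(TableInputFormat.INPUT_TABLE, "test");

        JavaPairRDD<ImmutableBytesWritable, Result> rdd =
                sc.newAPIHadoopRDD(conf, TableInputFormat.class,
                        ImmutableBytesWritable.class, Result.class);
        System.out.println("rows: " + rdd.count());
        sc.stop();
    }
}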

Standalone HBase has a table 'test':

hbase(main):001:0> scan 'test'
ROW                 COLUMN+CELL
 row1               column=cf:a, timestamp=1430234895637, value=value1
 row2               column=cf:b, timestamp=1430234907537, value=value2
 row3               column=cf:c, timestamp=1430234918284, value=value3

Now I am facing this issue:

ERROR TableInputFormat: java.io.IOException:
java.lang.reflect.InvocationTargetException
at
org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:416)
at
org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:393)
at
org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:274)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:194)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:156)
at
org.apache.hadoop.hbase.mapreduce.TableInputFormat.setConf(TableInputFormat.java:101)
at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:91)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1632)
at org.apache.spark.rdd.RDD.count(RDD.scala:1012)
at org.apache.spark.examples.HBaseTest$.main(HBaseTest.scala:58)
at org.apache.spark.examples.HBaseTest.main(HBaseTest.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:607)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:167)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:190)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at
org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:414)
... 23 more
Caused by: java.lang.VerifyError: class
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$Result overrides
final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at
org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:176)
at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
at
org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:69)
at
org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:83)
at
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.retrieveClusterId(HConnectionManager.java:857)
at
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:662)
... 28 more

Exception in thread "main" java.io.IOException: No table was provided.
at
org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:154)
at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:95)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1632)
at org.apache.spark.rdd.RDD.count(RDD.scala:1012)
at org.apache.spark.ex

Re: spark with standalone HBase

2015-04-30 Thread Saurabh Gupta
I am using HBase 0.94.8.

On Wed, Apr 29, 2015 at 11:56 PM, Ted Yu  wrote:

> Can you enable HBase DEBUG logging in log4j.properties so that we can have
> more clues?
>
> What HBase release are you using?
>
> Cheers
>
> On Wed, Apr 29, 2015 at 4:27 AM, Saurabh Gupta 
> wrote:
>
>> [quoted message and startup log trimmed; the full text appears in the
>> original message below]


Re: spark with standalone HBase

2015-04-29 Thread Ted Yu
Can you enable HBase DEBUG logging in log4j.properties so that we can have
more clues?
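
For example, in conf/log4j.properties (assuming the stock log4j setup):

  log4j.logger.org.apache.hadoop.hbase=DEBUG
  log4j.logger.org.apache.zookeeper=DEBUG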

What HBase release are you using?

Cheers

On Wed, Apr 29, 2015 at 4:27 AM, Saurabh Gupta 
wrote:

> [quoted message and startup log trimmed; the full text appears in the
> original message below]


spark with standalone HBase

2015-04-29 Thread Saurabh Gupta
Hi,

I am working with a standalone HBase, and I want to execute HBaseTest.scala
(from the Scala examples).

I have created a test table with three rows, and I just want to get the
row count using HBaseTest.scala.

I am getting this issue:

15/04/29 11:17:10 INFO BlockManagerMaster: Registered BlockManager
15/04/29 11:17:11 INFO ZooKeeper: Client
environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
15/04/29 11:17:11 INFO ZooKeeper: Client environment:host.name
=ip-10-144-185-113
15/04/29 11:17:11 INFO ZooKeeper: Client environment:java.version=1.7.0_79
15/04/29 11:17:11 INFO ZooKeeper: Client environment:java.vendor=Oracle
Corporation
15/04/29 11:17:11 INFO ZooKeeper: Client
environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre
15/04/29 11:17:11 INFO ZooKeeper: Client
environment:java.class.path=/home/ubuntu/sparkfolder/conf/:/home/ubuntu/sparkfolder/assembly/target/scala-2.10/spark-assembly-1.4.0-SNAPSHOT-hadoop2.2.0.jar:/home/ubuntu/sparkfolder/lib_managed/jars/datanucleus-core-3.2.10.jar:/home/ubuntu/sparkfolder/lib_managed/jars/datanucleus-api-jdo-3.2.6.jar:/home/ubuntu/sparkfolder/lib_managed/jars/datanucleus-rdbms-3.2.9.jar
15/04/29 11:17:11 INFO ZooKeeper: Client
environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
15/04/29 11:17:11 INFO ZooKeeper: Client environment:java.io.tmpdir=/tmp
15/04/29 11:17:11 INFO ZooKeeper: Client environment:java.compiler=<NA>
15/04/29 11:17:11 INFO ZooKeeper: Client environment:os.name=Linux
15/04/29 11:17:11 INFO ZooKeeper: Client environment:os.arch=amd64
15/04/29 11:17:11 INFO ZooKeeper: Client
environment:os.version=3.13.0-49-generic
15/04/29 11:17:11 INFO ZooKeeper: Client environment:user.name=root
15/04/29 11:17:11 INFO ZooKeeper: Client environment:user.home=/root
15/04/29 11:17:11 INFO ZooKeeper: Client
environment:user.dir=/home/ubuntu/sparkfolder
15/04/29 11:17:11 INFO ZooKeeper: Initiating client connection,
connectString=localhost:2181 sessionTimeout=90000
watcher=hconnection-0x2711025f, quorum=localhost:2181, baseZNode=/hbase
15/04/29 11:17:11 INFO RecoverableZooKeeper: Process
identifier=hconnection-0x2711025f connecting to ZooKeeper
ensemble=localhost:2181
15/04/29 11:17:11 INFO ClientCnxn: Opening socket connection to server
ip-10-144-185-113/10.144.185.113:2181. Will not attempt to authenticate
using SASL (unknown error)
15/04/29 11:17:11 INFO ClientCnxn: Socket connection established to
ip-10-144-185-113/10.144.185.113:2181, initiating session
15/04/29 11:17:11 INFO ClientCnxn: Session establishment complete on server
ip-10-144-185-113/10.144.185.113:2181, sessionid = 0x14d04d506da0005,
negotiated timeout = 40000
15/04/29 11:17:11 INFO ZooKeeperRegistry: ClusterId read in ZooKeeper is
null

It's just stuck, not showing any error. There is no Hadoop on my machine.
What could be the issue?

Here is hbase-site.xml:

<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase</value>
  </property>
</configuration>