Re: i can not stop hbase

2013-08-21 Thread 闫昆
Thanks for your help. Using hbase-daemon.sh stop master/regionserver does stop it.
I will check my hbase script.



2013/8/21 Stas Maksimov maksi...@gmail.com

 Hello,

 Can you revert to the original bin/hbase script to make sure there are no
 errors in the script?

 Typically you don't need to change any of the HBase scripts, but you can
 control the HBase environment by changing conf/hbase-env.sh (you can set
 your extra HBASE_OPTS there).

 Thanks,
 Stas


 On 21 August 2013 07:37, 闫昆 yankunhad...@gmail.com wrote:

  Hi all, I use HBase 0.94, and I hit the error below when I stop HBase.
  Last week I added this configuration to $HBASE_HOME/bin/hbase:
 
  elif [ "$COMMAND" = "master" ] ; then
    CLASS='org.apache.hadoop.hbase.master.HMaster'
    HBASE_OPTS="$HBASE_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=10444"
 
  I added the HBASE_OPTS because I want to remote-debug HBase,
  but now I cannot stop HBase:
  stopping hbase.../stop-hbase.sh: line 58: 28454 Aborted
  (core dumped) nohup nice -n ${HBASE_NICENESS:-0} $HBASE_HOME/bin/hbase
  --config ${HBASE_CONF_DIR} master stop "$@" > "$logout" 2>&1 < /dev/null
 
 
 
 
 




-- 

In the Hadoop world I am just a novice exploring the entire Hadoop
ecosystem; I hope one day I can contribute my own code.

YanBit
yankunhad...@gmail.com
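
A minimal sketch of Stas's suggestion, assuming a stock 0.94 setup: put the
debug options in conf/hbase-env.sh, scoped to the master daemon, instead of
editing bin/hbase. The stock script applies HBASE_MASTER_OPTS only when the
argument is not "stop", so the short-lived "hbase master stop" process should
no longer try to bind the JDWP port that the running master already holds
(one plausible reading of the "Aborted" above):

# conf/hbase-env.sh -- applied to the HMaster daemon only, not to
# one-off commands such as "hbase master stop"
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=10444"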


TableMapReduceUtil addDependencyJars question

2013-08-21 Thread Amit Sela
Hi all,
I'm using HBase 0.94.2.
Looking at TableMapReduceUtil.addDependencyJars(Job job), I see that
org.apache.zookeeper.ZooKeeper.class and com.google.protobuf.Message.class
are hard-coded as jars to send to the cluster.
I can understand why all the others (map output key/value class, etc.) are
sent to the cluster, but if you are using HBase you are supposed to have
ZooKeeper deployed in your cluster, right? And protobuf is part of that
installation, isn't it?
Are these two jars necessary?

In general, if I have all the classes I need deployed in the cluster
(including the map output key/value class, etc.), can I skip
addDependencyJars? I'm using an extended HFileOutputFormat I wrote...

Thanks,

Amit.
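
For context, a sketch of how addDependencyJars usually enters the picture in
0.94: TableMapReduceUtil.initTableMapperJob has overloads taking a boolean
addDependencyJars flag, so jar shipping can be skipped when everything is
already on the cluster classpath. The table and mapper names below are
hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

Configuration conf = HBaseConfiguration.create();
Job job = new Job(conf, "example");
// MyMapper is a hypothetical TableMapper subclass.
TableMapReduceUtil.initTableMapperJob(
    "mytable",                    // input table
    new Scan(),                   // scan to run over it
    MyMapper.class,               // mapper class
    ImmutableBytesWritable.class, // mapper output key
    Result.class,                 // mapper output value
    job,
    false);                       // addDependencyJars: skip shipping jars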


Re: Zookeeper tries to connect to localhost when i have specified another clearly.

2013-08-21 Thread manoj p
Check your /etc/hosts file to verify you have the correct mapping of
localhost to 127.0.0.1. Also ensure that you have hbase.zookeeper.quorum in
your configuration, and check that the HBase classpath is appended to the
Hadoop classpath.


BR/Manoj


On Wed, Aug 21, 2013 at 4:10 PM, Pavan Sudheendra pavan0...@gmail.comwrote:

 Hadoop Namenode reports the following error which is unusual :


 2013-08-21 09:21:12,328 INFO org.apache.zookeeper.ClientCnxn: Opening socket
 connection to server localhost/127.0.0.1:2181. Will not attempt to
 authenticate using SASL (Unable to locate a login configuration)
 java.net.ConnectException: Connection refused
 at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
 at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
 2013-08-21 09:33:11,033 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper:
 Possibly transient ZooKeeper exception:
 org.apache.zookeeper.KeeperException$ConnectionLossException:
 KeeperErrorCode = ConnectionLoss for /hbase
 2013-08-21 09:33:11,033 INFO org.apache.hadoop.hbase.util.RetryCounter:
 Sleeping 8000ms before retry #3...
 2013-08-21 09:33:11,043 WARN org.apache.hadoop.mapred.Task: Parent died.
 Exiting attempt_201307181246_0548_m_22_2


 Because I have specified the address in the Java code:
 Configuration conf = HBaseConfiguration.create();
 conf.set("hbase.zookeeper.quorum", "10.34.187.170");
 conf.set("hbase.zookeeper.property.clientPort", "2181");
 conf.set("hbase.master", "10.34.187.170");



 All my map tasks fail like this! Please help.. I'm on a timebomb
 --
 Regards-
 Pavan
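
For the classpath point manoj raises, one common way to append HBase's
classpath to Hadoop's (a sketch only; the exact file and paths depend on the
install):

# e.g. in hadoop-env.sh on each node; "hbase classpath" prints the
# full client classpath of the local HBase install
export HADOOP_CLASSPATH="$($HBASE_HOME/bin/hbase classpath):$HADOOP_CLASSPATH"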



Re: Zookeeper tries to connect to localhost when i have specified another clearly.

2013-08-21 Thread manoj p
 For your code to run, please ensure that you use the correct HBase/Hadoop
 jar versions when compiling your program.

BR/Manoj





Re: Zookeeper tries to connect to localhost when i have specified another clearly.

2013-08-21 Thread Pavan Sudheendra
Yes. My /etc/hosts has the correct mapping to localhost:

127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

I've added the HBase jars to the Hadoop classpath as well. Not sure why..
I'm running this on a 6-node Cloudera cluster which consists of 1
jobtracker and 5 tasktrackers..

After a while all my map jobs fail.. Completely baffled, because the map
tasks were doing the required work..






-- 
Regards-
Pavan


Re: Setting zookeeper timeout to higher value

2013-08-21 Thread Vimal Jain
Can someone please give advice on this?


On Tue, Aug 20, 2013 at 12:41 PM, Vimal Jain vkj...@gmail.com wrote:

 Hi,
 I am running HBase in pseudo-distributed mode on top of HDFS.
 Recently, I have been facing problems related to long GC pauses.
 When I read the official documentation, it suggested increasing the
 zookeeper timeout.

 I am planning to make it 10 minutes. I understand the risk of increasing
 the timeout: it takes a while for the master to notice a failed
 regionserver and reassign its regions.
 But as I am running pseudo-distributed mode, there is only one RS, and
 even if it goes down my entire system is down, so increasing the timeout
 does not seem to be an issue in my case. Still, I would like expert
 advice on this.


 Here is my hbase-site.xml :-

 <configuration>
   <property>
     <name>zookeeper.session.timeout</name>
     <value>600000</value>
   </property>
   <property>
     <name>hbase.zookeeper.property.tickTime</name>
     <value>30000</value>
   </property>
   <property>
     <name>hbase.rootdir</name>
     <value>hdfs://192.168.20.30:9000/hbase</value>
   </property>
   <property>
     <name>hbase.cluster.distributed</name>
     <value>true</value>
   </property>
   <property>
     <name>hbase.zookeeper.quorum</name>
     <value>192.168.20.30</value>
   </property>
   <property>
     <name>dfs.replication</name>
     <value>1</value>
   </property>
   <property>
     <name>hbase.zookeeper.property.clientPort</name>
     <value>2181</value>
   </property>
   <property>
     <name>hbase.zookeeper.property.dataDir</name>
     <value>/home/hadoop/HbaseData/zookeeper</value>
   </property>
 </configuration>

 --
 Thanks and Regards,
 Vimal Jain




-- 
Thanks and Regards,
Vimal Jain
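
(One note on the config above: ZooKeeper caps a client's requested session
timeout at 20 x tickTime, which is presumably why tickTime is raised
alongside the session timeout here; 20 x 30000 ms = 600000 ms, exactly the
10 minutes Vimal is aiming for.)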


Re: Zookeeper tries to connect to localhost when i have specified another clearly.

2013-08-21 Thread Jean-Marc Spaggiari
Hi Pavan,

I don't think Cloudera Manager assigns the address to your computer. When CM
is down, your computer still has an IP, and even if you uninstall CM, you
will still have an IP assigned to your computer.

If you have not configured anything there, then you most probably have
DHCP. Just give a try to what I told you in the other message.

JM

2013/8/21 Pavan Sudheendra pavan0...@gmail.com

 @Manoj I have set hbase.zookeeper.quorum in my M-R application..

 @Jean The Cloudera Manager picks up the IP address automatically..


 On Wed, Aug 21, 2013 at 5:07 PM, manoj p eors...@gmail.com wrote:

  Can you try passing the argument -Dhbase.zookeeper.quorum=10.34.187.170
  while running the program?

  If this doesn't work either, please check if HBASE_HOME and HBASE_CONF_DIR
  are set correctly.
 
  BR/Manoj
 
 



Re: Zookeeper tries to connect to localhost when i have specified another clearly.

2013-08-21 Thread Jean-Marc Spaggiari
Can you paste your hosts file here again, with the modifications you have
made?

Also, can you share a bit more of your code? What are you doing with the
config object afterwards, how do you create your table object, etc.?

Thanks,

JM

2013/8/21 Pavan Sudheendra pavan0...@gmail.com

 @Jean tried your method didn't work..

 2013-08-21 12:17:10,908 INFO org.apache.zookeeper.ClientCnxn: Opening
 socket connection to server localhost/127.0.0.1:2181. Will not attempt to
 authenticate using SASL (Unable to locate a login configuration)
 2013-08-21 12:17:10,908 WARN org.apache.zookeeper.ClientCnxn: Session 0x0
 for server null, unexpected error, closing socket connection and attempting
 reconnect
 java.net.ConnectException: Connection refused
 at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
 at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
 2013-08-21 12:17:11,009 WARN
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
 ZooKeeper exception:
 org.apache.zookeeper.KeeperException$ConnectionLossException:
 KeeperErrorCode = ConnectionLoss for /hbase
 2013-08-21 12:17:11,009 INFO org.apache.hadoop.hbase.util.RetryCounter:
 Sleeping 8000ms before retry #3...

 Any tips?




Re: Setting zookeeper timeout to higher value

2013-08-21 Thread Jean-Marc Spaggiari
Hi Vimal,

I'm talking about your HBase installation. I guessed that a pseudo-distributed
HBase installation is for testing purposes, and not to sustain a production
load.

JM

2013/8/21 Vimal Jain vkj...@gmail.com

 Hi Jean,
 Sorry, but I did not get you completely.
 When you say it's for testing, what exactly does that mean?


 On Wed, Aug 21, 2013 at 5:28 PM, Jean-Marc Spaggiari 
 jean-m...@spaggiari.org wrote:

  Hi Vimal,
 
   It's for testing only, so you can put whatever value you want ;)

   It might just take longer because HBase sees that the RS is down. But as
   I said, it's for testing. So it's not really critical.

   You might also want to play with GC options to make it faster.
 
  JM
 



Re: Zookeeper tries to connect to localhost when i have specified another clearly.

2013-08-21 Thread Pavan Sudheendra
Sure..
/etc/hosts file:

127.0.0.1 localhost
10.34.187.170 ip-10-34-187-170
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum", "10.34.187.170");
conf.set("hbase.zookeeper.property.clientPort", "2181");
conf.set("hbase.master", "10.34.187.170");
Job job = new Job(conf, "ViewersTable");

I'm trying to process table data which has 19 million rows.. It runs fine
for a while, although I don't see the map percent completion change from 0%
.. After a while it says

Task attempt_201304161625_0028_m_00_0 failed to report status for
600 seconds. Killing!
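
On the "failed to report status for 600 seconds" kill: Hadoop kills a task
that neither emits output nor reports status within mapred.task.timeout
(600000 ms by default). This will not fix the localhost ZooKeeper retries
above, but while debugging, a long-running map() can keep its attempt alive
by reporting progress. A minimal sketch, inside a hypothetical TableMapper
subclass (new MapReduce API):

import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;

@Override
protected void map(ImmutableBytesWritable key, Result row, Context context)
    throws IOException, InterruptedException {
  // ... long-running per-row work ...
  context.progress(); // tells the tasktracker this attempt is still alive
}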






Re: Zookeeper tries to connect to localhost when i have specified another clearly.

2013-08-21 Thread Jean-Marc Spaggiari
And what about:
# cat /etc/hostname

and
# hostname

?



Re: Setting zookeeper timeout to higher value

2013-08-21 Thread Vimal Jain
Yeah.. it's mentioned so in the official guide, but currently I am using it
in production and would later convert it to a 3-node cluster.





-- 
Thanks and Regards,
Vimal Jain


Re: Zookeeper tries to connect to localhost when i have specified another clearly.

2013-08-21 Thread Pavan Sudheendra
Hi Jean,

ubuntu@ip-10-34-187-170:~$ cat /etc/hostname
ip-10-34-187-170
ubuntu@ip-10-34-187-170:~$ hostname
ip-10-34-187-170






Re: Zookeeper tries to connect to localhost when i have specified another clearly.

2013-08-21 Thread Pavan Sudheendra
Is this a zookeeper specific error or something?



Re: Zookeeper tries to connect to localhost when i have specified another clearly.

2013-08-21 Thread Jean-Marc Spaggiari
Hum.

Things seem to be correct there.

Can you try something simple like:

Configuration config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "ip-10-34-187-170");
HTable table = new HTable(config, Bytes.toBytes("TABLE_NAME"));

And see if it works?

JM
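
A self-contained version of that test, in case it helps (quorum host as in
this thread; "TABLE_NAME" is a placeholder):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

Configuration config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "ip-10-34-187-170");
// If this opens without the localhost/127.0.0.1:2181 retries, the quorum
// setting is reaching the client; if not, a different hbase-site.xml is
// being picked up from the classpath.
HTable table = new HTable(config, Bytes.toBytes("TABLE_NAME"));
table.close();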


Re: Zookeeper tries to connect to localhost when i have specified another clearly.

2013-08-21 Thread Pavan Sudheendra
Yes.. I can do everything.. But I do not want my Hadoop Namenode to report
logs like this.. Also, it says

KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$ConnectionLossException:
KeeperErrorCode = ConnectionLoss for /hbase/





On Wed, Aug 21, 2013 at 6:14 PM, Jean-Marc Spaggiari 
jean-m...@spaggiari.org wrote:

 Sounds correct. You are able to start the shell and scan the first few
 rows of the tables, right?

 2013/8/21 Pavan Sudheendra pavan0...@gmail.com

  This is my hbase-site.xml file if it helps:

  <?xml version="1.0" encoding="UTF-8"?>

  <!--Autogenerated by Cloudera CM on 2013-07-09T09:26:49.841Z-->
  <configuration>
    <property>
      <name>hbase.rootdir</name>
      <value>hdfs://ip-10-34-187-170.eu-west-1.compute.internal:8020/hbase</value>
    </property>
    <property>
      <name>hbase.client.write.buffer</name>
      <value>2097152</value>
    </property>
    <property>
      <name>hbase.client.pause</name>
      <value>1000</value>
    </property>
    <property>
      <name>hbase.client.retries.number</name>
      <value>10</value>
    </property>
    <property>
      <name>hbase.client.scanner.caching</name>
      <value>1</value>
    </property>
    <property>
      <name>hbase.client.keyvalue.maxsize</name>
      <value>10485760</value>
    </property>
    <property>
      <name>hbase.rpc.timeout</name>
      <value>60000</value>
    </property>
    <property>
      <name>hbase.security.authentication</name>
      <value>simple</value>
    </property>
    <property>
      <name>zookeeper.session.timeout</name>
      <value>60000</value>
    </property>
    <property>
      <name>zookeeper.znode.parent</name>
      <value>/hbase</value>
    </property>
    <property>
      <name>zookeeper.znode.rootserver</name>
      <value>root-region-server</value>
    </property>
    <property>
      <name>hbase.zookeeper.quorum</name>
      <value>ip-10-34-187-170.eu-west-1.compute.internal</value>
    </property>
    <property>
      <name>hbase.zookeeper.property.clientPort</name>
      <value>2181</value>
    </property>
  </configuration>
 
 
 

Re: Zookeeper tries to connect to localhost when i have specified another clearly.

2013-08-21 Thread Jean-Marc Spaggiari
Are you able to connect to your ZK server shell and list the nodes?
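
For reference, that check could look like this from the quorum host (the
/hbase path matches zookeeper.znode.parent in the config above):

$ hbase zkcli        # or: zkCli.sh -server 10.34.187.170:2181
[zk: ...] ls /hbase

A healthy 0.94 setup should list child znodes such as master and rs.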


Re: Zookeeper tries to connect to localhost when i have specified another clearly.

2013-08-21 Thread Pavan Sudheendra
Yes .. The zookeeper server is also 10.34.187.170 ..


On Wed, Aug 21, 2013 at 6:21 PM, Jean-Marc Spaggiari 
jean-m...@spaggiari.org wrote:

 Are you able to connect to your ZK server shell and list the nodes?

 2013/8/21 Pavan Sudheendra pavan0...@gmail.com

   Yes, I can do everything. But I do not want my Hadoop Namenode to report
   logs like this. Also, it says:
 
  KeeperException, re-throwing exception
  org.apache.zookeeper.KeeperException$ConnectionLossException:
  KeeperErrorCode = ConnectionLoss for /hbase/
 
 
 
 
 
  On Wed, Aug 21, 2013 at 6:14 PM, Jean-Marc Spaggiari 
  jean-m...@spaggiari.org wrote:
 
    Sounds correct. You are able to start the shell and scan the first few
    rows of the tables, right?
  
   2013/8/21 Pavan Sudheendra pavan0...@gmail.com
  
 This is my hbase-site.xml file if it helps:

 <?xml version="1.0" encoding="UTF-8"?>
 <!--Autogenerated by Cloudera CM on 2013-07-09T09:26:49.841Z-->
 <configuration>
   <property>
     <name>hbase.rootdir</name>
     <value>hdfs://ip-10-34-187-170.eu-west-1.compute.internal:8020/hbase</value>
   </property>
   <property>
     <name>hbase.client.write.buffer</name>
     <value>2097152</value>
   </property>
   <property>
     <name>hbase.client.pause</name>
     <value>1000</value>
   </property>
   <property>
     <name>hbase.client.retries.number</name>
     <value>10</value>
   </property>
   <property>
     <name>hbase.client.scanner.caching</name>
     <value>1</value>
   </property>
   <property>
     <name>hbase.client.keyvalue.maxsize</name>
     <value>10485760</value>
   </property>
   <property>
     <name>hbase.rpc.timeout</name>
     <value>6</value>
   </property>
   <property>
     <name>hbase.security.authentication</name>
     <value>simple</value>
   </property>
   <property>
     <name>zookeeper.session.timeout</name>
     <value>6</value>
   </property>
   <property>
     <name>zookeeper.znode.parent</name>
     <value>/hbase</value>
   </property>
   <property>
     <name>zookeeper.znode.rootserver</name>
     <value>root-region-server</value>
   </property>
   <property>
     <name>hbase.zookeeper.quorum</name>
     <value>ip-10-34-187-170.eu-west-1.compute.internal</value>
   </property>
   <property>
     <name>hbase.zookeeper.property.clientPort</name>
     <value>2181</value>
   </property>
 </configuration>
   
   
   
On Wed, Aug 21, 2013 at 6:09 PM, Jean-Marc Spaggiari 
jean-m...@spaggiari.org wrote:
   
 Hum.

  Things seem to be correct there.

 Can you try something simple like:

  Configuration config = HBaseConfiguration.create();
  config.set("hbase.zookeeper.quorum", "ip-10-34-187-170");
  HTable table = new HTable(config, Bytes.toBytes(TABLE_NAME));

 And see if it works?

 JM

 2013/8/21 Pavan Sudheendra pavan0...@gmail.com

  Is this a zookeeper specific error or something?
 
 
  On Wed, Aug 21, 2013 at 6:06 PM, Pavan Sudheendra 
   pavan0...@gmail.com
  wrote:
 
   Hi Jean,
  
   ubuntu@ip-10-34-187-170:~$ cat /etc/hostname
   ip-10-34-187-170
   ubuntu@ip-10-34-187-170:~$ hostname
   ip-10-34-187-170
  
  
  
   On Wed, Aug 21, 2013 at 6:01 PM, Jean-Marc Spaggiari 
   jean-m...@spaggiari.org wrote:
  
   And what about:
   # cat /etc/hostname
  
   and
   # hostname
  
   ?
  
   2013/8/21 Pavan Sudheendra pavan0...@gmail.com
  
Sure..
/etc/hosts file:
   
127.0.0.1 localhost
10.34.187.170 ip-10-34-187-170
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
   
 Configuration conf = HBaseConfiguration.create();
 conf.set("hbase.zookeeper.quorum", "10.34.187.170");
 conf.set("hbase.zookeeper.property.clientPort", "2181");
 conf.set("hbase.master", "10.34.187.170");
 Job job = new Job(conf, "ViewersTable");
   
 I'm trying to process table data which has 19 million rows. It runs fine
 for a while, although I don't see the map completion percentage change
 from 0%. After a while it says:

 Task attempt_201304161625_0028_m_00_0 failed to report status for
 600 seconds. Killing!
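
 The usual remedy for that 600-second kill is to report progress from the
 mapper while doing long per-row work. A minimal sketch (class name and
 output types are hypothetical, not from this thread):

 import java.io.IOException;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.mapreduce.TableMapper;
 import org.apache.hadoop.io.LongWritable;
 import org.apache.hadoop.io.Text;

 public class ViewersMapper extends TableMapper<Text, LongWritable> {
   @Override
   protected void map(ImmutableBytesWritable row, Result value, Context context)
       throws IOException, InterruptedException {
     // ... expensive per-row work ...
     context.progress(); // heartbeat so the TaskTracker does not kill the task
   }
 }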
   
   
   
   
   
On Wed, Aug 21, 2013 at 5:52 PM, Jean-Marc Spaggiari 
jean-m...@spaggiari.org wrote:
   
 Can you paste your hosts file here again with the modification you have
 done?

 Also, can you share a bit more of your code? What are you doing with the
 config object after, and how do you create your table object, etc.?

Re: Zookeeper tries to connect to localhost when i have specified another clearly.

2013-08-21 Thread Pavan Sudheendra
@Jean, here is the log I got at the start of running the hadoop jar; maybe
you can spot something:

11:51:44,431  INFO ZooKeeper:100 - Client
environment:java.library.path=/usr/lib/hadoop/lib/native
11:51:44,432  INFO ZooKeeper:100 - Client environment:java.io.tmpdir=/tmp
11:51:44,432  INFO ZooKeeper:100 - Client environment:java.compiler=NA
11:51:44,432  INFO ZooKeeper:100 - Client environment:os.name=Linux
11:51:44,432  INFO ZooKeeper:100 - Client environment:os.arch=amd64
11:51:44,432  INFO ZooKeeper:100 - Client
environment:os.version=3.2.0-23-virtual
11:51:44,432  INFO ZooKeeper:100 - Client environment:user.name=root
11:51:44,433  INFO ZooKeeper:100 - Client environment:user.home=/root
11:51:44,433  INFO ZooKeeper:100 - Client
environment:user.dir=/home/ubuntu/pasudhee/ActionDataInterpret
11:51:44,437  INFO ZooKeeper:438 - Initiating client connection,
connectString=localhost:2181 sessionTimeout=18 watcher=hconnection
11:51:44,493  INFO ClientCnxn:966 - Opening socket connection to server
localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL
(Unable to locate a login configuration)
11:51:44,500  INFO RecoverableZooKeeper:104 - The identifier of this
process is 19...@ip-10-34-187-170.eu-west-1.compute.internal
11:51:44,513  INFO ClientCnxn:849 - Socket connection established to
localhost/127.0.0.1:2181, initiating session
11:51:44,532  INFO ClientCnxn:1207 - Session establishment complete on
server localhost/127.0.0.1:2181, sessionid = 0x13ff1cff71bb167, negotiated
timeout = 6
11:51:44,743  INFO ZooKeeper:438 - Initiating client connection,
connectString=localhost:2181 sessionTimeout=18 watcher=hconnection
11:51:44,747  INFO ClientCnxn:966 - Opening socket connection to server
localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL
(Unable to locate a login configuration)
11:51:44,747  INFO ClientCnxn:849 - Socket connection established to
localhost/127.0.0.1:2181, initiating session
11:51:44,747  INFO RecoverableZooKeeper:104 - The identifier of this
process is 19...@ip-10-34-187-170.eu-west-1.compute.internal
11:51:44,749  INFO ClientCnxn:1207 - Session establishment complete on
server localhost/127.0.0.1:2181, sessionid = 0x13ff1cff71bb168, negotiated
timeout = 6
11:51:44,803  WARN Configuration:824 - hadoop.native.lib is deprecated.
Instead, use io.native.lib.available
11:51:45,051  INFO HConnectionManager$HConnectionImplementation:1789 -
Closed zookeeper sessionid=0x13ff1cff71bb168
11:51:45,054  INFO ZooKeeper:684 - Session: 0x13ff1cff71bb168 closed
11:51:45,054  INFO ClientCnxn:509 - EventThread shut down
11:51:45,057  INFO ZooKeeper:438 - Initiating client connection,
connectString=localhost:2181 sessionTimeout=18 watcher=hconnection
11:51:45,059  INFO ClientCnxn:966 - Opening socket connection to server
localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL
(Unable to locate a login configuration)
11:51:45,060  INFO ClientCnxn:849 - Socket connection established to
localhost/127.0.0.1:2181, initiating session
11:51:45,061  INFO ClientCnxn:1207 - Session establishment complete on
server localhost/127.0.0.1:2181, sessionid = 0x13ff1cff71bb169, negotiated
timeout = 6
11:51:45,065  INFO RecoverableZooKeeper:104 - The identifier of this
process is 19...@ip-10-34-187-170.eu-west-1.compute.internal
11:51:45,135  INFO ZooKeeper:438 - Initiating client connection,
connectString=10.34.187.170:2181 sessionTimeout=18 watcher=hconnection
11:51:45,137  INFO ClientCnxn:966 - Opening socket connection to server
ip-10-34-187-170.eu-west-1.compute.internal/10.34.187.170:2181. Will not
attempt to authenticate using SASL (Unable to locate a login configuration)
11:51:45,138  INFO ClientCnxn:849 - Socket connection established to
ip-10-34-187-170.eu-west-1.compute.internal/10.34.187.170:2181, initiating
session
11:51:45,138  INFO RecoverableZooKeeper:104 - The identifier of this
process is 19...@ip-10-34-187-170.eu-west-1.compute.internal
11:51:45,140  INFO ClientCnxn:1207 - Session establishment complete on
server ip-10-34-187-170.eu-west-1.compute.internal/10.34.187.170:2181,
sessionid = 0x13ff1cff71bb16a, negotiated timeout = 6
11:51:45,173  INFO ZooKeeper:438 - Initiating client connection,
connectString=10.34.187.170:2181 sessionTimeout=18
watcher=catalogtracker-on-org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@7444f787
11:51:45,176  INFO ClientCnxn:966 - Opening socket connection to server
ip-10-34-187-170.eu-west-1.compute.internal/10.34.187.170:2181. Will not
attempt to authenticate using SASL (Unable to locate a login configuration)
11:51:45,176  INFO ClientCnxn:849 - Socket connection established to
ip-10-34-187-170.eu-west-1.compute.internal/10.34.187.170:2181, initiating
session
11:51:45,178  INFO ClientCnxn:1207 - Session establishment complete on
server ip-10-34-187-170.eu-west-1.compute.internal/10.34.187.170:2181,
sessionid = 0x13ff1cff71bb16b, negotiated timeout = 6
11:51:45,180  INFO 

Re: Zookeeper tries to connect to localhost when i have specified another clearly.

2013-08-21 Thread Jean-Marc Spaggiari
All fine on those logs too.

So everything is working fine: ZK, HBase, and the job is working fine too. The
only issue is this INFO regarding SASL, correct?

I think you should simply ignore it.

If it's annoying you, just turn the org.apache.zookeeper.ClientCnxn log level
to WARN in log4j.properties. (It's the setting I have on my own cluster.)
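
For reference, that is a one-line change (log4j 1.x properties syntax):

log4j.logger.org.apache.zookeeper.ClientCnxn=WARN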

JM

2013/8/21 Pavan Sudheendra pavan0...@gmail.com


Re: Zookeeper tries to connect to localhost when i have specified another clearly.

2013-08-21 Thread Pavan Sudheendra
it doesn't pose any real threats?


On Wed, Aug 21, 2013 at 6:30 PM, Jean-Marc Spaggiari 
jean-m...@spaggiari.org wrote:

 All fine on those logs too.

 So everything is working fine, ZK, HBase, the job is working fine too. The
 only issue is this INFO regarding SASL, correct?

 I think you should simply ignore it.

 If it's annoying you, just turn org.apache.zookeeper.ClientCnxn loglevel to
 WARN on log4j.properties. (It's the setting I have on my own cluster).

 JM

 2013/8/21 Pavan Sudheendra pavan0...@gmail.com


Re: Zookeeper tries to connect to localhost when i have specified another clearly.

2013-08-21 Thread Jean-Marc Spaggiari
I'm running with this INFO for more than a year now ;) So no, I don't think
this is going to pose any real threats. You have everything configured
correctly and everything seems to be working fine.

JM

2013/8/21 Pavan Sudheendra pavan0...@gmail.com

 it doesn't pose any real threats?


 On Wed, Aug 21, 2013 at 6:30 PM, Jean-Marc Spaggiari 
 jean-m...@spaggiari.org wrote:

  All fine on those logs too.
 
  So everything is working fine, ZK, HBase, the job is working fine too.
 The
  only issue is this INFO regarding SASL, correct?
 
  I think you should simply ignore it.
 
  If it's annoying you, just turn org.apache.zookeeper.ClientCnxn loglevel
 to
  WARN on log4j.properties. (It's the setting I have on my own cluster).
 
  JM
 
  2013/8/21 Pavan Sudheendra pavan0...@gmail.com
 

Re: Zookeeper tries to connect to localhost when i have specified another clearly.

2013-08-21 Thread Jean-Marc Spaggiari
Hi Pavan,

How many namenodes do you have? Are you running HA and that's why you have
more than 1 namenode?

Also, yes, if you don't have DNS, you need to have the hosts file configured.

First try to ping from one server to the other one using the name only; if
it works, then you don't need to update the hosts file.
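
A quick way to also check what a JVM on the box resolves that name to
(hostname taken from this thread):

System.out.println(java.net.InetAddress.getByName("ip-10-34-187-170"));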

JM

2013/8/21 Pavan Sudheendra pavan0...@gmail.com

 I should update the /etc/hosts file on every namenode correct?




 On Wed, Aug 21, 2013 at 7:09 PM, Pavan Sudheendra pavan0...@gmail.com
 wrote:

  But Jean all my namenodes log the same thing..
  2013-08-21 13:38:55,815 INFO org.apache.zookeeper.ClientCnxn: Opening
  socket connection to server localhost/127.0.0.1:2181. Will not attempt
 to
  authenticate using SASL (Unable to locate a login configuration)
  java.net.ConnectException: Connection refused
 
    Although I believe you, I'm starting to worry a bit. It's taking a lot of
   time for HBase to process 1 million entries in the cluster, let alone 19
   million. So, huge performance blow there.
 
 
 
 
  On Wed, Aug 21, 2013 at 6:56 PM, Jean-Marc Spaggiari 
  jean-m...@spaggiari.org wrote:
 
  I'm running with this INFO for more than a year now ;) So no, I don't
  think
  this is going to pose any real threats. You have everything configured
  correctly and everything seems to be working fine.
 
  JM
 
  2013/8/21 Pavan Sudheendra pavan0...@gmail.com
 
   it doesn't pose any real threats?
  
  
   On Wed, Aug 21, 2013 at 6:30 PM, Jean-Marc Spaggiari 
   jean-m...@spaggiari.org wrote:
  
All fine on those logs too.
   
So everything is working fine, ZK, HBase, the job is working fine
 too.
   The
 only issue is this INFO regarding SASL, correct?
   
I think you should simply ignore it.
   
If it's annoying you, just turn org.apache.zookeeper.ClientCnxn
  loglevel
   to
WARN on log4j.properties. (It's the setting I have on my own
 cluster).
   
JM
   
2013/8/21 Pavan Sudheendra pavan0...@gmail.com
   

Re: Zookeeper tries to connect to localhost when i have specified another clearly.

2013-08-21 Thread Pavan Sudheendra
But Jean all my namenodes log the same thing..
2013-08-21 13:38:55,815 INFO org.apache.zookeeper.ClientCnxn: Opening
socket connection to server localhost/127.0.0.1:2181. Will not attempt to
authenticate using SASL (Unable to locate a login configuration)
java.net.ConnectException: Connection refused

 Although I believe you, I'm starting to worry a bit. It's taking a lot of
time for HBase to process 1 million entries in the cluster, let alone 19
million. So, huge performance blow there.




On Wed, Aug 21, 2013 at 6:56 PM, Jean-Marc Spaggiari 
jean-m...@spaggiari.org wrote:

 I'm running with this INFO for more than a year now ;) So no, I don't think
 this is going to pose any real threats. You have everything configured
 correctly and everything seems to be working fine.

 JM

 2013/8/21 Pavan Sudheendra pavan0...@gmail.com

  it doesn't pose any real threats?
 
 
  On Wed, Aug 21, 2013 at 6:30 PM, Jean-Marc Spaggiari 
  jean-m...@spaggiari.org wrote:
 
   All fine on those logs too.
  
   So everything is working fine, ZK, HBase, the job is working fine too.
  The
    only issue is this INFO regarding SASL, correct?
  
   I think you should simply ignore it.
  
   If it's annoying you, just turn org.apache.zookeeper.ClientCnxn
 loglevel
  to
   WARN on log4j.properties. (It's the setting I have on my own cluster).
  
   JM
  
   2013/8/21 Pavan Sudheendra pavan0...@gmail.com
  

Re: Chocolatey package for Windows

2013-08-21 Thread Enis Söztutar
Hi,

Agreed with what Nick said. There is also an MSI-based installation for
HBase as part of the HDP-1.3 package. You can check it out here:
http://hortonworks.com/products/hdp-windows/

Enis


On Tue, Aug 20, 2013 at 2:54 PM, Nick Dimiduk ndimi...@gmail.com wrote:

 Hi Andrew,

 I don't think the homebrew recipes are managed by an HBase developer.
 Rather, someone in the community has taken it upon themselves to
 provide the project through brew. Likewise, the Apache HBase project does
 not provide RPM or DEB packages, but you're likely to find them if you look
 around.

 Maybe you can find a willing maintainer on the users@ list? (I don't run
 Windows very often so I won't make a good volunteer)

 Thanks,
 Nick

 On Tuesday, August 20, 2013, Andrew Pennebaker wrote:

  Could we automate the installation process for Windows with a Chocolatey
  (http://chocolatey.org/) package, the way we offer a Homebrew formula
  (https://github.com/mxcl/homebrew/blob/master/Library/Formula/hbase.rb)
  for Mac OS X?
 



Re: Zookeeper tries to connect to localhost when i have specified another clearly.

2013-08-21 Thread Pavan Sudheendra
We have 5 tasktrackers and 1 job tracker.
I don't know why we have this; this architecture already existed when I
started working on this.

I just tried pinging the datanode from a namenode using ip-10-34-187-170,
and it is able to ping it.
Not sure why it's not able to pick that up and show it in the namenode logs.
Frankly, I thought the namenode logs would be more useful and specific
instead of just giving out warning messages like this. Let me know if you
need any more files to look at.

thanks for all the help :)


On Wed, Aug 21, 2013 at 7:13 PM, Jean-Marc Spaggiari 
jean-m...@spaggiari.org wrote:

 Hi Pavan,

  How many namenodes do you have? Are you running HA and that's why you have
 more than 1 namenode?

 Also, yes, if you don't have DNS, you need to have hosts file configured.

 First try to ping from one server to the other one using the name only, if
  it works, then you don't need to update the hosts file.

 JM

 2013/8/21 Pavan Sudheendra pavan0...@gmail.com

  I should update the /etc/hosts file on every namenode correct?
 
 
 
 
  On Wed, Aug 21, 2013 at 7:09 PM, Pavan Sudheendra pavan0...@gmail.com
  wrote:
 
   But Jean all my namenodes log the same thing..
   2013-08-21 13:38:55,815 INFO org.apache.zookeeper.ClientCnxn: Opening
   socket connection to server localhost/127.0.0.1:2181. Will not attempt
  to
   authenticate using SASL (Unable to locate a login configuration)
   java.net.ConnectException: Connection refused
  
 Although I believe you, I'm starting to worry a bit. It's taking a lot of
    time for HBase to process 1 million entries in the cluster, let alone 19
    million. So, huge performance blow there.
  
  
  
  
   On Wed, Aug 21, 2013 at 6:56 PM, Jean-Marc Spaggiari 
   jean-m...@spaggiari.org wrote:
  
   I'm running with this INFO for more than a year now ;) So no, I don't
   think
   this is going to pose any real threats. You have everything configured
   correctly and everything seems to be working fine.
  
   JM
  
   2013/8/21 Pavan Sudheendra pavan0...@gmail.com
  
it doesn't pose any real threats?
   
   
On Wed, Aug 21, 2013 at 6:30 PM, Jean-Marc Spaggiari 
jean-m...@spaggiari.org wrote:
   
 All fine on those logs too.

 So everything is working fine, ZK, HBase, the job is working fine
  too.
The
 only issue is this INFO regarding SASL, correct?

 I think you should simply ignore it.

 If it's annoying you, just turn org.apache.zookeeper.ClientCnxn
   loglevel
to
 WARN on log4j.properties. (It's the setting I have on my own
  cluster).

 JM

 2013/8/21 Pavan Sudheendra pavan0...@gmail.com


Re: Coprocessors - failure

2013-08-21 Thread Ted Yu
Are tables A and B colocated ?
Meaning, is your coprocessor making an RPC call to another server ?

Cheers


On Wed, Aug 21, 2013 at 6:00 AM, Federico Gaule fga...@despegar.com wrote:

 Hi everyone,
 Let's say I have a table (A) where, every time I write, a coprocessor
 hooked to postPut writes to another table (B). I can't find anywhere what
 happens if the coprocessor fails or can't connect to B.
 Does the client become aware of that? In case it doesn't, how can I get
 notified about the failure?
 Does the Put get rolled back (I think coprocessors work outside the row
 transaction)?

 Thanks!
 Federico
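
 For context, a minimal sketch of the setup being described, against the
 0.94-era coprocessor API (class and table names are hypothetical). Note
 that postPut fires after the Put to table A has been applied, so a failure
 inside the hook cannot undo that write:

 import java.io.IOException;
 import org.apache.hadoop.hbase.client.HTableInterface;
 import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
 import org.apache.hadoop.hbase.coprocessor.ObserverContext;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
 import org.apache.hadoop.hbase.util.Bytes;

 public class MirrorToBObserver extends BaseRegionObserver {
   private static final byte[] TABLE_B = Bytes.toBytes("B");

   @Override
   public void postPut(ObserverContext<RegionCoprocessorEnvironment> ctx,
       Put put, WALEdit edit, boolean writeToWAL) throws IOException {
     // The write to table A is already durable at this point; if the put
     // to B below throws, A is NOT rolled back.
     HTableInterface tableB = ctx.getEnvironment().getTable(TABLE_B);
     try {
       tableB.put(new Put(put.getRow())); // copy whatever columns are needed
     } finally {
       tableB.close();
     }
   }
 }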





Re: Coprocessors - failure

2013-08-21 Thread Federico Gaule


Tables A and B are within the same HBase cluster. I can't guarantee they
are on the same physical machine.


Thanks
On 08/21/2013 10:54 AM, Ted Yu wrote:

Are tables A and B colocated ?
Meaning, is your coprocessor making an RPC call to another server ?

Cheers


On Wed, Aug 21, 2013 at 6:00 AM, Federico Gaule fga...@despegar.com wrote:


Hi everyone,
Let's say I have a table (A) where, every time I write, a coprocessor
hooked to postPut writes to another table (B). I can't find anywhere what
happens if the coprocessor fails or can't connect to B.
Does the client become aware of that? In case it doesn't, how can I get
notified about the failure?
Does the Put get rolled back (I think coprocessors work outside the row
transaction)?

Thanks!
Federico







Re: Coprocessors - failure

2013-08-21 Thread Ted Yu
See http://search-hadoop.com/m/XtAi5Fogw32


On Wed, Aug 21, 2013 at 6:56 AM, Federico Gaule fga...@despegar.com wrote:


  Tables A and B are within the same HBase cluster. I can't guarantee they
  are on the same physical machine.

 Thanks

 On 08/21/2013 10:54 AM, Ted Yu wrote:

 Are tables A and B colocated ?
 Meaning, is your coprocessor making an RPC call to another server ?

 Cheers


 On Wed, Aug 21, 2013 at 6:00 AM, Federico Gaule fga...@despegar.com
 wrote:

   Hi everyone,
  Let's say I have a table (A) where, every time I write, a coprocessor
  hooked to postPut writes to another table (B). I can't find anywhere what
  happens if the coprocessor fails or can't connect to B.
  Does the client become aware of that? In case it doesn't, how can I get
  notified about the failure?
  Does the Put get rolled back (I think coprocessors work outside the row
  transaction)?

 Thanks!
 Federico







Re: Coprocessors - failure

2013-08-21 Thread Federico Gaule

Thanks Ted.
Basically, it's recommended to avoid RPC calls when writing from table A
to B by having the regions in the same RegionServer; that way I exclude
network failures. Am I right?



On 08/21/2013 11:02 AM, Ted Yu wrote:

See http://search-hadoop.com/m/XtAi5Fogw32


On Wed, Aug 21, 2013 at 6:56 AM, Federico Gaule fga...@despegar.com wrote:


Tables A and B are within the same HBase cluster. I can't guarantee they
are on the same physical machine.

Thanks

On 08/21/2013 10:54 AM, Ted Yu wrote:


Are tables A and B colocated ?
Meaning, is your coprocessor making an RPC call to another server ?

Cheers


On Wed, Aug 21, 2013 at 6:00 AM, Federico Gaule fga...@despegar.com
wrote:

  Hi everyone,

Let's say I have a table (A) where, every time I write, a coprocessor
hooked to postPut writes to another table (B). I can't find anywhere what
happens if the coprocessor fails or can't connect to B.
Does the client become aware of that? In case it doesn't, how can I get
notified about the failure?
Does the Put get rolled back (I think coprocessors work outside the row
transaction)?

Thanks!
Federico








Region server is getting disconnected and becomes unreachable.

2013-08-21 Thread Vamshi Krishna
I set up an HBase cluster on two machines. One machine has the master as
well as a regionserver, and the other has only an RS. After running
./start-hbase.sh all daemons start perfectly. But the 2nd machine, which
runs only an RS, gets disconnected after some time, and whatever data I
inserted into the HBase table resides only on the master machine.
I see the following error in the region server log:

2013-08-21 16:16:17,243 INFO org.apache.zookeeper.ZooKeeper: Initiating
client connection, connectString=vamshi_RS:2181 sessionTimeout=18
watcher=regionserver:60020
2013-08-21 16:16:17,253 INFO
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of
this process is 31047@vamshi
2013-08-21 16:16:17,258 INFO org.apache.zookeeper.ClientCnxn: Opening
socket connection to server vamshi_RS/192.168.1.57:2181. Will not attempt
to authenticate using SASL (Unable to locate a login configuration)
2013-08-21 16:17:20,347 WARN org.apache.zookeeper.ClientCnxn: Session 0x0
for server null, unexpected error, closing socket connection and attempting
reconnect
java.net.ConnectException: Connection timed out
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
at
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
2013-08-21 16:17:20,463 WARN
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
ZooKeeper exception:
org.apache.zookeeper.KeeperException$ConnectionLossException:
KeeperErrorCode = ConnectionLoss for /hbase/master
2013-08-21 16:17:20,463 INFO org.apache.hadoop.hbase.util.RetryCounter:
Sleeping 2000ms before retry #1...
2013-08-21 16:17:21,458 INFO org.apache.zookeeper.ClientCnxn: Opening
socket connection to server vamshi_RS/192.168.1.57:2181. Will not attempt
to authenticate using SASL (Unable to locate a login configuration)
2013-08-21 16:18:24,601 WARN org.apache.zookeeper.ClientCnxn: Session 0x0
for server null, unexpected error, closing socket connection and attempting
reconnect
java.net.ConnectException: Connection timed out
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
at
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
2013-08-21 16:18:24,702 WARN
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
ZooKeeper exception:
org.apache.zookeeper.KeeperException$ConnectionLossException:
KeeperErrorCode = ConnectionLoss for /hbase/master
2013-08-21 16:18:24,702 INFO org.apache.hadoop.hbase.util.RetryCounter:
Sleeping 4000ms before retry #2...
2013-08-21 16:18:25,702 INFO org.apache.zookeeper.ClientCnxn: Opening
socket connection to server vamshi_RS/192.168.1.57:2181. Will not attempt
to authenticate using SASL (Unable to locate a login configuration)
2013-08-21 16:19:28,857 WARN org.apache.zookeeper.ClientCnxn: Session 0x0
for server null, unexpected error, closing socket connection and attempting
reconnect
java.net.ConnectException: Connection timed out
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
at
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
.
.
..
2013-08-21 16:20:33,217 FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort:
loaded coprocessors are: []
2013-08-21 16:20:33,217 INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Unexpected
exception during initialization, aborting
2013-08-21 16:20:34,214 INFO org.apache.zookeeper.ClientCnxn: Opening
socket connection to server vamshi_RS/192.168.1.57:2181. Will not attempt
to authenticate using SASL (Unable to locate a login configuration)
2013-08-21 16:20:36,220 INFO org.apache.hadoop.ipc.HBaseServer: Stopping
server on 60020
2013-08-21 16:20:36,221 FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server
vamshi,60020,1377081977160: Initialization of RS failed.  Hence aborting RS.
java.io.IOException: Received the shutdown message while waiting.
at
org.apache.hadoop.hbase.regionserver.HRegionServer.blockAndCheckIfStopped(HRegionServer.java:680)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:649)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:609)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:735)
at java.lang.Thread.run(Thread.java:662)
2013-08-21 16:20:36,222 FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort:
loaded coprocessors are: []
2013-08-21 16:20:36,222 INFO

Re: Using HBase timestamps as natural versioning

2013-08-21 Thread Michael Segel
I would have to disagree with Lars on this one... 

It's really a bad design.

To your point, your data is temporal in nature. That is to say, time is an 
element of your data and it should be part of your schema. 

You have to remember that time is relative. 

When a row is entered into HBase, which time is used in the timestamp?
The client(s)? The RS? Unless I am mistaken or the API has changed, you can
set any arbitrary long value as the timestamp for a given row/cell.
Like I said, it's relative.
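
(For reference, the client API indeed accepts any long as the version
timestamp; the row, family, and qualifier names below are placeholders.)

Put p = new Put(Bytes.toBytes("row1"));
p.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), 1234567890L, Bytes.toBytes("v"));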

Since your data is temporal, what is the difference if the event happened at
TS 0x10 vs 0x11 (the point is that the TS differs by 1 in the least
significant bit)?
You could be trying to reference the same event.

To Lars' point, if you make time part of your key, you could end up with hot
spots. It depends on your key design. If it's the least significant portion of
the key, it's less of an issue. (clientX | action | TS) would be an example that
would sort the data by client, by action type, then by timestamp. (EPOCH - TS)
would put the most current first.
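
A minimal sketch of that key layout (all names hypothetical; the fields are
assumed fixed-width so the lexicographic sort matches the intent):

byte[] rowKey = Bytes.add(
    Bytes.toBytes(clientId),                    // fixed-width client id
    Bytes.toBytes(actionType),                  // fixed-width action code
    Bytes.toBytes(Long.MAX_VALUE - eventTs));   // reversed TS: newest first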

When you try to take a short cut, it usually will bite you in the ass. 

TANSTAAFL applies!

HTH

-Mike

On Aug 11, 2013, at 12:21 AM, lars hofhansl la...@apache.org wrote:

 If you want deletes to work correctly you should enable KEEP_DELETED_CELLS
 for your column families (I still think that should be the default anyway).
 Otherwise time-range queries will not be correct w.r.t. deleted data
 (specifically, you cannot get back at deleted data even if you specify a time
 range before the delete, and even if your column family has unlimited versions).
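
 A sketch of switching this on with the 0.94 client API (table and family
 names are placeholders):

 HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
 HColumnDescriptor cf = new HColumnDescriptor("cf");
 cf.setKeepDeletedCells(true);
 cf.setMaxVersions(Integer.MAX_VALUE); // "unlimited" versions
 admin.disableTable("mytable");
 admin.modifyColumn("mytable", cf);
 admin.enableTable("mytable");
 admin.close();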
 
 
 Depending on what your typical queries are, you might run into performance
 issues. HBase sorts all versions of a KeyValue adjacent to each other.
 If you now want to query only the latest data (the last version), HBase
 will have to skip a lot of other versions. In the worst case the latest
 versions of all KeyValues are on separate (HFile) blocks.
 
 The question of whether to use the built-in timestamps or model the time as
 part of the row keys (or even a time-column) is an interesting one.
 Generally the row key identifies your row. If you want a new row for each TS
 in your logical model, you should manage the time dimension yourself.
 Otherwise, if you have identities (i.e. rows) with many versions, the
 built-in TS might be better.
 
 -- Lars
 
 
 From: Henning Blohm henning.bl...@zfabrik.de
 To: user user@hbase.apache.org 
 Sent: Saturday, August 10, 2013 6:26 AM
 Subject: Using HBase timestamps as natural versioning
 
 
 Hi,
 
 we are managing some naturally time versioned data in HBase. That is, 
 there are change events that have a specific time set and when such 
 event is handled, data in HBase, pertaining to the exact same point in 
 time, is updated.
 
 So far we are using HBase time stamps to model the time dimension. All 
 columns have unlimited number of versions. That worked ok so far, and 
 HBase's way of providing access to data at a given time or time range 
 seemed a natural fit.
 
 We are aware of some tricky issues around timestamp handling (e.g. in 
 particular in conjunction with deletes). As we need to migrate HBase 
 stored data (for other reasons) shortly we are wondering, if our 
 approach has some long-term drawbacks that we should pay attention to 
 now and possibly re-design our timestamp handling as well.
 
 So my question is:
 
 * Is there problematic experience with using HBase timestamps as time 
 dimension of your data (assuming it has some natural time-based versioning)?
 
 * Is it generally better to model time-based versioning of data within 
 the data structure itself (e.g. in the row key) and why?
 
 * In case you used HBase timestamps similar to the way we use them, 
 feedback on how that worked is welcome as well!
 
 Thanks,
 Henning
 

The opinions expressed here are mine, while they may reflect a cognitive 
thought, that is purely accidental. 
Use at your own risk. 
Michael Segel
michael_segel (AT) hotmail.com







Re: Chocolatey package for Windows

2013-08-21 Thread Andrew Pennebaker
Cool! Is there an MSI for just HBase?

On Wed, Aug 21, 2013 at 9:48 AM, Enis Söztutar e...@hortonworks.com wrote:

 Hi,

 Agreed with what Nick said. There is also an MSI based installation for
 HBase as a part of HDP-1.3 package. You can check it out here:
 http://hortonworks.com/products/hdp-windows/

 Enis


 On Tue, Aug 20, 2013 at 2:54 PM, Nick Dimiduk ndimi...@gmail.com wrote:

  Hi Andrew,
 
  I don't think the homebrew recipes are managed by an HBase developer.
  Rather, someone in the community has taken it upon themselves to
  provide the project through brew. Likewise, the Apache HBase project does
  not provide RPM or DEB packages, but you're likely to find them if you
 look
  around.
 
  Maybe you can find a willing maintainer on the users@ list? (I don't run
  Windows very often so I won't make a good volunteer)
 
  Thanks,
  Nick
 
  On Tuesday, August 20, 2013, Andrew Pennebaker wrote:
 
    Could we automate the installation process for Windows with a Chocolatey
    (http://chocolatey.org/) package, the way we offer a Homebrew formula
    (https://github.com/mxcl/homebrew/blob/master/Library/Formula/hbase.rb)
    for Mac OS X?
  
 



Re: TableMapReduceUtil addDependencyJars question

2013-08-21 Thread Ted Yu
bq. you are supposed to have ZooKeeper deployed in your cluster, right?

We need to consider whether the versions (of ZooKeeper) in the two clusters
match or not.

Cheers
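
If everything the job needs is already on the cluster classpath, note that
0.94 also has initTableMapperJob/initTableReducerJob overloads that take an
addDependencyJars flag, so the jar shipping can be skipped entirely. A sketch
(table and mapper names hypothetical):

Job job = new Job(conf, "MyJob");
TableMapReduceUtil.initTableMapperJob(
    "mytable", new Scan(), MyMapper.class,
    ImmutableBytesWritable.class, Result.class,
    job, false /* addDependencyJars */);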

On Wed, Aug 21, 2013 at 2:00 AM, Amit Sela am...@infolinks.com wrote:

 Hi all,
 I'm using HBase 0.94.2.
 Looking at TableMapReduceUtil.addDependencyJars(Job job) I see that
 org.apache.zookeeper.ZooKeeper.class and com.google.protobuf.Message.class
 are hard-coded as jars to send to cluster.
 I can understand why all the others (map output key/value class, etc.) are
 sent to cluster but if you are using HBase you are supposed to have
 ZooKeeper deployed in your cluster, right? and protobuf is a part of that
 installation, isn't it ?
 Are these two jars necessary ?

 In general, If I have all the classes I need deployed in the cluster
 (including map output key/value class, etc.), can I skip
 addDependencyJars ? I'm using an extended HFileOutputFormat I wrote...

 Thanks,

 Amit.



Re: Zookeeper tries to connect to localhost when i have specified another clearly.

2013-08-21 Thread Jean-Marc Spaggiari
TaskTrackers and Job trackers are MR nodes. You also have HDFS nodes and
HBase nodes.

What's the name of the file where you got that from?

JM

2013/8/21 Pavan Sudheendra pavan0...@gmail.com

 We have 5 tasktrackers and 1 job tracker..
 I don't know why we have this.. This architecture was already existing when
 i started working on this..

 I just tried pinging the datanode from a namenode using ip-10-34-187-170
 and it is able to ping it..
 Not sure its not able to pick it up and show it in the namenode logs...
 Frankly i thought the namenode logs should be more useful and specific
 instead of just giving out warning messages like this.. Let me know if you
 need any more files to look at ..

 thanks for all the help :)


 On Wed, Aug 21, 2013 at 7:13 PM, Jean-Marc Spaggiari 
 jean-m...@spaggiari.org wrote:

  Hi Pavan,
 
   How many namenodes do you have? Are you running HA and that's why you
 have
  more than 1 namenode?
 
  Also, yes, if you don't have DNS, you need to have hosts file configured.
 
  First try to ping from one server to the other one using the name only,
 if
   it works, then you don't need to update the hosts file.
 
  JM
 
  2013/8/21 Pavan Sudheendra pavan0...@gmail.com
 
   I should update the /etc/hosts file on every namenode correct?
  
  
  
  
   On Wed, Aug 21, 2013 at 7:09 PM, Pavan Sudheendra pavan0...@gmail.com
   wrote:
  
But Jean all my namenodes log the same thing..
2013-08-21 13:38:55,815 INFO org.apache.zookeeper.ClientCnxn: Opening
socket connection to server localhost/127.0.0.1:2181. Will not
 attempt
   to
authenticate using SASL (Unable to locate a login configuration)
java.net.ConnectException: Connection refused
   
  Although I believe you, I'm starting to worry a bit. It's taking a lot of
 time for HBase to process 1 million entries in the cluster, let alone 19
 million. So, huge performance blow there.
   
   
   
   
On Wed, Aug 21, 2013 at 6:56 PM, Jean-Marc Spaggiari 
jean-m...@spaggiari.org wrote:
   
 I'm running with this INFO for more than a year now ;) So no, I don't
 think this is going to pose any real threats. You have everything
 configured correctly and everything seems to be working fine.
   
JM
   
2013/8/21 Pavan Sudheendra pavan0...@gmail.com
   
 it doesn't pose any real threats?


 On Wed, Aug 21, 2013 at 6:30 PM, Jean-Marc Spaggiari 
 jean-m...@spaggiari.org wrote:

   All fine on those logs too.

   So everything is working fine, ZK, HBase, the job is working fine too.
   The only issue is this INFO regarding SASL, correct?

   I think you should simply ignore it.

   If it's annoying you, just turn the org.apache.zookeeper.ClientCnxn
   loglevel to WARN in log4j.properties. (It's the setting I have on my own
   cluster).
 
  JM
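
For reference, a sketch of the log4j.properties line being described, in
standard log4j 1.x syntax:

# log4j.properties: demote the ZooKeeper client logger so the benign
# SASL INFO message no longer shows up
log4j.logger.org.apache.zookeeper.ClientCnxn=WARN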
 
  2013/8/21 Pavan Sudheendra pavan0...@gmail.com
 
    @Jean the log which i got at the start of running the hadoop jar:
    Maybe you can spot something
  
    11:51:44,431  INFO ZooKeeper:100 - Client environment:java.library.path=/usr/lib/hadoop/lib/native
    11:51:44,432  INFO ZooKeeper:100 - Client environment:java.io.tmpdir=/tmp
    11:51:44,432  INFO ZooKeeper:100 - Client environment:java.compiler=NA
    11:51:44,432  INFO ZooKeeper:100 - Client environment:os.name=Linux
    11:51:44,432  INFO ZooKeeper:100 - Client environment:os.arch=amd64
    11:51:44,432  INFO ZooKeeper:100 - Client environment:os.version=3.2.0-23-virtual
    11:51:44,432  INFO ZooKeeper:100 - Client environment:user.name=root
    11:51:44,433  INFO ZooKeeper:100 - Client environment:user.home=/root
    11:51:44,433  INFO ZooKeeper:100 - Client environment:user.dir=/home/ubuntu/pasudhee/ActionDataInterpret
    11:51:44,437  INFO ZooKeeper:438 - Initiating client connection,
    connectString=localhost:2181 sessionTimeout=18 watcher=hconnection
    11:51:44,493  INFO ClientCnxn:966 - Opening socket connection to server
    localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL
    (Unable to locate a login configuration)
    11:51:44,500  INFO RecoverableZooKeeper:104 - The identifier of this
    process is 19...@ip-10-34-187-170.eu-west-1.compute.internal
    11:51:44,513  INFO ClientCnxn:849 - Socket connection established to
    localhost/127.0.0.1:2181, initiating session
    11:51:44,532  INFO ClientCnxn:1207 - Session establishment complete on
    server localhost/127.0.0.1:2181, sessionid = 0x13ff1cff71bb167,
    negotiated timeout = 6
    11:51:44,743  INFO ZooKeeper:438 - Initiating client connection,
    connectString=localhost:2181 sessionTimeout=18 watcher=hconnection
    11:51:44,747  INFO ClientCnxn:966 - Opening socket connection to server

Re: Performance penalty: Custom Filter names serialization

2013-08-21 Thread Jean-Marc Spaggiari
Have you guys tried with > 0.94? Are you facing the same issue with
ProtoBuf?

JM

2013/8/20 Federico Gaule fga...@despegar.com

 Hi everyone,

 I'm facing the same issue as Pablo. Renaming my classes used in the HBase
 context improved network usage by more than 20%. It would be really nice
 to have an improvement around this.




 On 08/20/2013 01:15 PM, Jean-Marc Spaggiari wrote:

 But even if we are using Protobuf, he is going to face the same issue,
 right?

 We should have a way to send the filter once, with a number, telling the
 regions that this filter will from then on be represented by that number.
 There is some risk of re-using a number already assigned to another
 filter, but I'm sure we can come up with some mechanism to avoid that.

 2013/8/20 Ted Yu yuzhih...@gmail.com

  Are you using HBase 0.92 or 0.94?

 In 0.95 and later releases, HbaseObjectWritable doesn't exist. Protobuf is
 used for communication.

 Cheers


 On Tue, Aug 20, 2013 at 8:56 AM, Pablo Medina pablomedin...@gmail.com
 wrote:

  Hi all,

  I'm using custom filters to retrieve filtered data from HBase using the
  native api. I noticed that the full class names of those custom filters
  are being sent as the bytes representation of the string using
  Text.writeString(). This consumes a lot of network bandwidth in my case,
  due to using 5 custom filters per Get and issuing 1.5 million Gets per
  minute.

  I took a look at the code (org.apache.hadoop.hbase.io.HbaseObjectWritable)
  and it seems that HBase registers its known classes (Get, Put, etc...)
  and associates them with an Integer (CODE_TO_CLASS and CLASS_TO_CODE).
  That integer is sent instead of the full class name for those known
  classes. I did a test reducing my custom filter class names to 2 or 3
  letters and it improved my performance by 25%.
  Is there any way to register my custom filter classes to behave the same
  as HBase's classes? If not, does it make sense to introduce a change to
  do that? Is there any other workaround for this issue?

  Thanks!
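
A minimal sketch of the renaming workaround, against the 0.94
Writable-based filter API; the class body is a placeholder (a real filter
would carry matching logic and serialize its state), and as with any
custom filter the class must be deployed on the region servers' classpath
under exactly this name.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.filter.FilterBase;

// Deliberately terse class name: pre-0.96, HbaseObjectWritable writes the
// full class name string with every Get/Scan that carries a custom filter,
// so "F1" costs far fewer bytes on the wire than a long qualified name.
public class F1 extends FilterBase {

  @Override
  public ReturnCode filterKeyValue(KeyValue kv) {
    return ReturnCode.INCLUDE; // placeholder: real matching logic goes here
  }

  @Override
  public void write(DataOutput out) throws IOException {
    // placeholder: serialize the filter's state for the RPC
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    // placeholder: rebuild the filter's state on the region server
  }
}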





Re: Coprocessors - failure

2013-08-21 Thread Jung-Yup Lee

On Wednesday, August 21, 2013, Ted Yu wrote:

 See http://search-hadoop.com/m/XtAi5Fogw32


 On Wed, Aug 21, 2013 at 6:56 AM, Federico Gaule fga...@despegar.com
 wrote:

 
  Tables A and B are within the same HBase cluster. I can't guarantee they
  are on the same physical machine.
 
  Thanks
 
  On 08/21/2013 10:54 AM, Ted Yu wrote:
 
  Are tables A and B colocated?
  Meaning, is your coprocessor making an RPC call to another server?
 
  Cheers
 
 
  On Wed, Aug 21, 2013 at 6:00 AM, Federico Gaule fga...@despegar.com
  wrote:
 
   Hi everyone,
   Let's say I have a table (A) where, every time I write, a coprocessor
   hooked to postPut writes into another table (B). I can't find anywhere
   what happens if the coprocessor fails or can't connect to B.
   Does the client become aware of that? If it doesn't, how can I get
   notified about the failure?
   Does the Put get rolled back (I think coprocessors work outside the row
   transaction)?

   Thanks!
   Federico
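
For concreteness, a sketch of the pattern on the 0.94 observer API;
derivePut and the column names are hypothetical placeholders, and the
failure behaviour noted in the comments is how I read the 0.94 coprocessor
host: an IOException thrown from the hook reaches the client, but only
after the Put to A has already been applied, so nothing is rolled back.

import java.io.IOException;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
import org.apache.hadoop.hbase.util.Bytes;

// Attached to table A; mirrors every Put into table B.
public class MirrorToB extends BaseRegionObserver {

  @Override
  public void postPut(ObserverContext<RegionCoprocessorEnvironment> ctx,
      Put put, WALEdit edit, boolean writeToWAL) throws IOException {
    // getTable() goes through the normal client path, so this is an RPC
    // unless B's region happens to live on the same region server.
    HTableInterface b = ctx.getEnvironment().getTable(Bytes.toBytes("B"));
    try {
      // A failure here (e.g. B unreachable) throws *after* the Put to A
      // has been applied: A is not rolled back.
      b.put(derivePut(put));
    } finally {
      b.close();
    }
  }

  // Hypothetical helper: derive B's row from A's Put (schema-specific).
  private Put derivePut(Put original) {
    Put p = new Put(original.getRow());
    p.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), original.getRow());
    return p;
  }
}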
 
 
 
 
 



Re: Coprocessors - failure

2013-08-21 Thread Ted Yu
Certain work is needed on the load balancer side to colocate regions of
table A with those of table B.

Do they have similar schemas?

For reference, take a look
at 
secondaryindex/src/main/java/org/apache/hadoop/hbase/index/SecIndexLoadBalancer.java
from HBASE-9203

Cheers


On Wed, Aug 21, 2013 at 7:09 AM, Federico Gaule fga...@despegar.com wrote:

 Thanks Ted.
 Basically, it's recommended to avoid RPC calls when writing from table A
 to B by having the regions on the same RegionServer. That way I would
 rule out network failures. Am I right?



 On 08/21/2013 11:02 AM, Ted Yu wrote:

  See http://search-hadoop.com/m/XtAi5Fogw32


 On Wed, Aug 21, 2013 at 6:56 AM, Federico Gaule fga...@despegar.com
 wrote:

  Tables A and B are within the same HBase cluster. I can't guarantee they
  are on the same physical machine.

 Thanks

 On 08/21/2013 10:54 AM, Ted Yu wrote:

   Are tables A and B colocated?
  Meaning, is your coprocessor making an RPC call to another server?

 Cheers


 On Wed, Aug 21, 2013 at 6:00 AM, Federico Gaule fga...@despegar.com
 wrote:

    Hi everyone,

  Let's say I have a table (A) where, every time I write, a coprocessor
  hooked to postPut writes into another table (B). I can't find anywhere
  what happens if the coprocessor fails or can't connect to B.
  Does the client become aware of that? If it doesn't, how can I get
  notified about the failure?
  Does the Put get rolled back (I think coprocessors work outside the row
  transaction)?

  Thanks!
  Federico








Re: Coprocessors - failure

2013-08-21 Thread fgaule
Yes, the schemas are similar; both are flat-wide. 'A' has half of the 'B'
row key, with the other half as a column (hope that's clear).
I'll take a look at that class.

Cheers
Sent from my Personal BlackBerry (http://www.personal.com.ar/)

-Original Message-
From: Ted Yu yuzhih...@gmail.com
Date: Wed, 21 Aug 2013 16:11:01 
To: Federico Gaulefga...@despegar.com
Reply-To: user@hbase.apache.org
Cc: user@hbase.apache.org
Subject: Re: Coprocessors - failure

Certain work is needed on the load balancer side to colocate regions of
table A with those of table B.

Do they have similar schemas?

For reference, take a look
at 
secondaryindex/src/main/java/org/apache/hadoop/hbase/index/SecIndexLoadBalancer.java
from HBASE-9203

Cheers


On Wed, Aug 21, 2013 at 7:09 AM, Federico Gaule fga...@despegar.com wrote:

 Thanks Ted.
 Basically, it's recommended to avoid RPC calls when writing from table A
 to B by having the regions on the same RegionServer. That way I would
 rule out network failures. Am I right?



 On 08/21/2013 11:02 AM, Ted Yu wrote:

 See http://search-hadoop.com/m/XtAi5Fogw32


 On Wed, Aug 21, 2013 at 6:56 AM, Federico Gaule fga...@despegar.com
 wrote:

  Tables A and B are within the same HBase cluster. I can't guarantee they
  are on the same physical machine.

 Thanks

 On 08/21/2013 10:54 AM, Ted Yu wrote:

  Are tables A and B colocated?
  Meaning, is your coprocessor making an RPC call to another server?

 Cheers


 On Wed, Aug 21, 2013 at 6:00 AM, Federico Gaule fga...@despegar.com
 wrote:

    Hi everyone,

  Let's say I have a table (A) where, every time I write, a coprocessor
  hooked to postPut writes into another table (B). I can't find anywhere
  what happens if the coprocessor fails or can't connect to B.
  Does the client become aware of that? If it doesn't, how can I get
  notified about the failure?
  Does the Put get rolled back (I think coprocessors work outside the row
  transaction)?

  Thanks!
  Federico









Re: Zookeeper tries to connect to localhost when i have specified another clearly.

2013-08-21 Thread Pavan Sudheendra
Sorry, what files are you talking about?

Regards,
Pavan
On Aug 22, 2013 12:04 AM, Jean-Marc Spaggiari jean-m...@spaggiari.org
wrote:

 TaskTrackers and Job trackers are MR nodes. You also have HDFS nodes and
 HBase nodes.

 What's the file name where you got that from?

 JM

 2013/8/21 Pavan Sudheendra pavan0...@gmail.com

  We have 5 tasktrackers and 1 job tracker..
  I don't know why we have this.. This architecture was already existing
 when
  i started working on this..
 
  I just tried pinging the datanode from a namenode using ip-10-34-187-170
  and it is able to ping it..
   Not sure why it's not able to pick it up and show it in the namenode logs...
  Frankly i thought the namenode logs should be more useful and specific
  instead of just giving out warning messages like this.. Let me know if
 you
  need any more files to look at ..
 
  thanks for all the help :)
 
 
  On Wed, Aug 21, 2013 at 7:13 PM, Jean-Marc Spaggiari 
  jean-m...@spaggiari.org wrote:
 
   Hi Pavan,
  
    How many namenodes do you have? Are you running HA and that's why you
    have more than 1 namenode?
  
   Also, yes, if you don't have DNS, you need to have hosts file
 configured.
  
    First try to ping from one server to the other one using the name only;
    if it works, then you don't need to update the hosts file.
  
   JM
  
   2013/8/21 Pavan Sudheendra pavan0...@gmail.com
  
I should update the /etc/hosts file on every namenode correct?
   
   
   
   
On Wed, Aug 21, 2013 at 7:09 PM, Pavan Sudheendra 
 pavan0...@gmail.com
wrote:
   
  But Jean all my namenodes log the same thing..
  2013-08-21 13:38:55,815 INFO org.apache.zookeeper.ClientCnxn: Opening
  socket connection to server localhost/127.0.0.1:2181. Will not attempt to
  authenticate using SASL (Unable to locate a login configuration)
  java.net.ConnectException: Connection refused

  Although i believe you, i'm starting to worry a bit.. It's taking a lot of
  time for hbase to process 1 million entries in the cluster, let alone 19
  million.. So, huge performance blow there..




 On Wed, Aug 21, 2013 at 6:56 PM, Jean-Marc Spaggiari 
 jean-m...@spaggiari.org wrote:

  I'm running with this INFO for more than a year now ;) So no, I don't
  think this is going to pose any real threats. You have everything
  configured correctly and everything seems to be working fine.

 JM

 2013/8/21 Pavan Sudheendra pavan0...@gmail.com

  it doesn't pose any real threats?
 
 
  On Wed, Aug 21, 2013 at 6:30 PM, Jean-Marc Spaggiari 
  jean-m...@spaggiari.org wrote:
 
    All fine on those logs too.

    So everything is working fine, ZK, HBase, the job is working fine too.
    The only issue is this INFO regarding SASL, correct?

    I think you should simply ignore it.

    If it's annoying you, just turn the org.apache.zookeeper.ClientCnxn
    loglevel to WARN in log4j.properties. (It's the setting I have on my own
    cluster).
  
   JM
  
   2013/8/21 Pavan Sudheendra pavan0...@gmail.com
  
     @Jean the log which i got at the start of running the hadoop jar:
     Maybe you can spot something

     11:51:44,431  INFO ZooKeeper:100 - Client environment:java.library.path=/usr/lib/hadoop/lib/native
     11:51:44,432  INFO ZooKeeper:100 - Client environment:java.io.tmpdir=/tmp
     11:51:44,432  INFO ZooKeeper:100 - Client environment:java.compiler=NA
     11:51:44,432  INFO ZooKeeper:100 - Client environment:os.name=Linux
     11:51:44,432  INFO ZooKeeper:100 - Client environment:os.arch=amd64
     11:51:44,432  INFO ZooKeeper:100 - Client environment:os.version=3.2.0-23-virtual
     11:51:44,432  INFO ZooKeeper:100 - Client environment:user.name=root
     11:51:44,433  INFO ZooKeeper:100 - Client environment:user.home=/root
     11:51:44,433  INFO ZooKeeper:100 - Client environment:user.dir=/home/ubuntu/pasudhee/ActionDataInterpret
     11:51:44,437  INFO ZooKeeper:438 - Initiating client connection,
     connectString=localhost:2181 sessionTimeout=18 watcher=hconnection
     11:51:44,493  INFO ClientCnxn:966 - Opening socket connection to server
     localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL
     (Unable to locate a login configuration)
     11:51:44,500  INFO RecoverableZooKeeper:104 - The identifier of this
     process is 19...@ip-10-34-187-170.eu-west-1.compute.internal
     11:51:44,513  INFO ClientCnxn:849 - Socket connection established to
     localhost/127.0.0.1:2181, initiating session
     11:51:44,532  INFO ClientCnxn:1207 - Session establishment complete on
     server localhost/127.0.0.1:2181, sessionid =
  

Hbase region server disconnecting with master after some time.

2013-08-21 Thread Vamshi Krishna
Hi all,
 I'm facing a problem with an HBase region server disconnecting from the
master after some time. I set up an HBase cluster with 2 machines, where
Machine-1 (M1) is the master and a region server and M2 is only a region
server. After running start-hbase.sh, all the daemons start perfectly, but
after some time I see that the M2 region server is dead. I am running
ZooKeeper on M1 alone.

The error I found in the M2 region server log is pasted below.

2013-08-22 10:31:38,554 INFO org.apache.zookeeper.ZooKeeper: Initiating
client connection, connectString=vamshi_RS:2181 sessionTimeout=18
watcher=regionserver:60020
2013-08-22 10:31:38,564 INFO
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of
this process is 4076@vamshi
2013-08-22 10:31:38,568 INFO org.apache.zookeeper.ClientCnxn: Opening
socket connection to server vamshi_RS/192.168.1.57:2181. Will not attempt
to authenticate using SASL (Unable to locate a login configuration)
2013-08-22 10:32:41,675 WARN org.apache.zookeeper.ClientCnxn: Session 0x0
for server null, unexpected error, closing socket connection and attempting
reconnect
java.net.ConnectException: Connection timed out
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
at
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
2013-08-22 10:32:41,791 WARN
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
ZooKeeper exception:
org.apache.zookeeper.KeeperException$ConnectionLossException:
KeeperErrorCode = ConnectionLoss for /hbase/master
2013-08-22 10:32:41,791 INFO org.apache.hadoop.hbase.util.RetryCounter:
Sleeping 2000ms before retry #1...
2013-08-22 10:32:42,789 INFO org.apache.zookeeper.ClientCnxn: Opening
socket connection to server vamshi_RS/192.168.1.57:2181. Will not attempt
to authenticate using SASL (Unable to locate a login configuration)
2013-08-22 10:33:45,929 WARN org.apache.zookeeper.ClientCnxn: Session 0x0
for server null, unexpected error, closing socket connection and attempting
reconnect
.
.
..
2013-08-22 10:35:54,542 WARN org.apache.hadoop.hbase.zookeeper.ZKUtil:
regionserver:60020 Unable to set watcher on znode /hbase/master
org.apache.zookeeper.KeeperException$ConnectionLossException:
KeeperErrorCode = ConnectionLoss for /hbase/master
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041)
at
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:172)
at
org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:420)
at
org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:76)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:648)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:609)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:735)
at java.lang.Thread.run(Thread.java:662)
.
..
2013-08-22 10:35:57,549 FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server
vamshi,60020,1377147698472: Initialization of RS failed.  Hence aborting RS.
java.io.IOException: Received the shutdown message while waiting.
at
org.apache.hadoop.hbase.regionserver.HRegionServer.blockAndCheckIfStopped(HRegionServer.java:680)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:649)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:609)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:735)
at java.lang.Thread.run(Thread.java:662)
2013-08-22 10:35:57,550 FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort:
loaded coprocessors are: []
2013-08-22 10:35:57,550 INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Initialization
of RS failed.  Hence aborting RS.
2013-08-22 10:35:57,552 INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Registered RegionServer
MXBean
2013-08-22 10:35:57,553 FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server
vamshi,60020,1377147698472: Unhandled exception: null
java.lang.NullPointerException
at
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:756)
at java.lang.Thread.run(Thread.java:662)
2013-08-22 10:35:57,553 FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort:
loaded coprocessors are: []
2013-08-22 10:35:57,553 INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Unhandled
exception: null
2013-08-22 10:35:57,554 INFO

Fwd: Hbase region server disconnecting with master after some time.

2013-08-21 Thread anil gupta
Reply Inline..

-- Forwarded message --
From: Vamshi Krishna vamshi2...@gmail.com
Date: Wed, Aug 21, 2013 at 10:42 PM
Subject: Hbase region server disconnecting with master after some time.
To: user@hbase.apache.org


Hi all,
 I'm facing a problem with an HBase region server disconnecting from the
master after some time. I set up an HBase cluster with 2 machines, where
Machine-1 (M1) is the master and a region server and M2 is only a region
server. After running start-hbase.sh, all the daemons start perfectly, but
after some time I see that the M2 region server is dead. I am running
ZooKeeper on M1 alone.

The error I found in the M2 region server log is pasted below.

2013-08-22 10:31:38,554 INFO org.apache.zookeeper.ZooKeeper: Initiating
client connection, connectString=vamshi_RS:2181 sessionTimeout=18
watcher=regionserver:60020
*Anil: The line above means that the RS and ZK were unable to communicate
for 3 minutes (180 sec), hence the RS is deemed dead by ZK. It seems like
you have some networking/firewall problem.*
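
A quick way to test that hypothesis from the M2 host; the hostname and
port are taken from the log above, and "ruok" is one of ZooKeeper's
standard four-letter-word commands (a healthy server replies "imok"):

# Run on M2: is the ZooKeeper host reachable, and is the client port open?
ping -c 3 vamshi_RS
echo ruok | nc vamshi_RS 2181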
2013-08-22 10:31:38,564 INFO
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of
this process is 4076@vamshi
2013-08-22 10:31:38,568 INFO org.apache.zookeeper.ClientCnxn: Opening
socket connection to server vamshi_RS/192.168.1.57:2181. Will not attempt
to authenticate using SASL (Unable to locate a login configuration)
2013-08-22 10:32:41,675 WARN org.apache.zookeeper.ClientCnxn: Session 0x0
for server null, unexpected error, closing socket connection and attempting
reconnect
java.net.ConnectException: Connection timed out
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
at
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
2013-08-22 10:32:41,791 WARN
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
ZooKeeper exception:
org.apache.zookeeper.KeeperException$ConnectionLossException:
KeeperErrorCode = ConnectionLoss for /hbase/master
2013-08-22 10:32:41,791 INFO org.apache.hadoop.hbase.util.RetryCounter:
Sleeping 2000ms before retry #1...
2013-08-22 10:32:42,789 INFO org.apache.zookeeper.ClientCnxn: Opening
socket connection to server vamshi_RS/192.168.1.57:2181. Will not attempt
to authenticate using SASL (Unable to locate a login configuration)
2013-08-22 10:33:45,929 WARN org.apache.zookeeper.ClientCnxn: Session 0x0
for server null, unexpected error, closing socket connection and attempting
reconnect
.
.
..
2013-08-22 10:35:54,542 WARN org.apache.hadoop.hbase.zookeeper.ZKUtil:
regionserver:60020 Unable to set watcher on znode /hbase/master
org.apache.zookeeper.KeeperException$ConnectionLossException:
KeeperErrorCode = ConnectionLoss for /hbase/master
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041)
at
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:172)
at
org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:420)
at
org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:76)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:648)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:609)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:735)
at java.lang.Thread.run(Thread.java:662)
.
..
2013-08-22 10:35:57,549 FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server
vamshi,60020,1377147698472: Initialization of RS failed.  Hence aborting RS.
java.io.IOException: Received the shutdown message while waiting.
at
org.apache.hadoop.hbase.regionserver.HRegionServer.blockAndCheckIfStopped(HRegionServer.java:680)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:649)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:609)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:735)
at java.lang.Thread.run(Thread.java:662)
2013-08-22 10:35:57,550 FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort:
loaded coprocessors are: []
2013-08-22 10:35:57,550 INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Initialization
of RS failed.  Hence aborting RS.
2013-08-22 10:35:57,552 INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Registered RegionServer
MXBean
2013-08-22 10:35:57,553 FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server
vamshi,60020,1377147698472: Unhandled exception: null
java.lang.NullPointerException
at