Is this a ZooKeeper-specific error, or something else?

On Wed, Aug 21, 2013 at 6:06 PM, Pavan Sudheendra <[email protected]>wrote:

> Hi Jean,
>
> ubuntu@ip-10-34-187-170:~$ cat /etc/hostname
> ip-10-34-187-170
> ubuntu@ip-10-34-187-170:~$ hostname
> ip-10-34-187-170
>
>
>
> On Wed, Aug 21, 2013 at 6:01 PM, Jean-Marc Spaggiari <
> [email protected]> wrote:
>
>> And what about:
>> # cat /etc/hostname
>>
>> and
>> # hostname
>>
>> ?
>>
>> 2013/8/21 Pavan Sudheendra <[email protected]>
>>
>> > Sure..
>> > /etc/hosts file:
>> >
>> > 127.0.0.1 localhost
>> > 10.34.187.170 ip-10-34-187-170
>> > # The following lines are desirable for IPv6 capable hosts
>> > ::1 ip6-localhost ip6-loopback
>> > fe00::0 ip6-localnet
>> > ff00::0 ip6-mcastprefix
>> > ff02::1 ip6-allnodes
>> > ff02::2 ip6-allrouters
>> > ff02::3 ip6-allhosts
>> >
>> > Configuration conf = HBaseConfiguration.create();
>> > conf.set("hbase.zookeeper.quorum", "10.34.187.170");
>> > conf.set("hbase.zookeeper.property.clientPort", "2181");
>> > conf.set("hbase.master", "10.34.187.170");
>> > Job job = new Job(conf, ViewersTable);
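As an alternative sketch (not from the thread itself): rather than hard-coding the quorum in the driver, the same settings can live in an hbase-site.xml on the client classpath, where HBaseConfiguration.create() picks them up automatically. The values below are the ones quoted in this thread; the file location depends on your install (e.g. $HBASE_CONF_DIR).

```xml
<!-- hbase-site.xml sketch, using the quorum address quoted in this thread -->
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>10.34.187.170</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```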
>> >
>> > I'm trying to process a table which has 19 million rows. It runs fine
>> > for a while, although I don't see the map completion percentage move
>> > from 0%. After a while it says:
>> >
>> > Task attempt_201304161625_0028_m_000000_0 failed to report status for
>> > 600 seconds. Killing!
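For context on the 600 seconds: that is the MRv1 task timeout (mapred.task.timeout, 600000 ms by default); a task that neither reports progress nor emits output within it is killed. Long-running map tasks should call context.progress() periodically. As a stopgap, the timeout can be raised in mapred-site.xml; the 30-minute value below is an illustrative example, not a recommendation from the thread.

```xml
<!-- mapred-site.xml fragment: example value only -->
<property>
  <name>mapred.task.timeout</name>
  <value>1800000</value>
</property>
```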
>> >
>> >
>> >
>> >
>> >
>> > On Wed, Aug 21, 2013 at 5:52 PM, Jean-Marc Spaggiari <
>> > [email protected]> wrote:
>> >
>> > > Can you paste your hosts file here again with the modifications you
>> > > have made?
>> > >
>> > > Also, can you share a bit more of your code? What are you doing with
>> > > the config object afterwards? How do you create your table object,
>> > > etc.?
>> > >
>> > > Thanks,
>> > >
>> > > JM
>> > >
>> > > 2013/8/21 Pavan Sudheendra <[email protected]>
>> > >
>> > > > @Jean I tried your method; it didn't work..
>> > > >
>> > > > 2013-08-21 12:17:10,908 INFO org.apache.zookeeper.ClientCnxn: Opening
>> > > > socket connection to server localhost/127.0.0.1:2181. Will not
>> > > > attempt to authenticate using SASL (Unable to locate a login
>> > > > configuration)
>> > > > 2013-08-21 12:17:10,908 WARN org.apache.zookeeper.ClientCnxn: Session
>> > > > 0x0 for server null, unexpected error, closing socket connection and
>> > > > attempting reconnect
>> > > > java.net.ConnectException: Connection refused
>> > > >     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>> > > >     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>> > > >     at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
>> > > >     at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
>> > > > 2013-08-21 12:17:11,009 WARN
>> > > > org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly
>> > > > transient ZooKeeper exception:
>> > > > org.apache.zookeeper.KeeperException$ConnectionLossException:
>> > > > KeeperErrorCode = ConnectionLoss for /hbase
>> > > > 2013-08-21 12:17:11,009 INFO org.apache.hadoop.hbase.util.RetryCounter:
>> > > > Sleeping 8000ms before retry #3...
>> > > >
>> > > > Any tips?
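The "Connection refused" above means nothing was listening at the address the client actually tried (localhost:2181), so the quorum setting is not reaching the task. A stdlib-only TCP probe like the following can confirm whether ZooKeeper is reachable from a given node before digging further into HBase configuration. This is an editorial sketch; the class name is made up, and the default host/port are simply the values quoted in the thread.

```java
import java.net.InetSocketAddress;
import java.net.Socket;

// Minimal TCP probe: checks whether anything is listening on a
// given host:port (e.g. a ZooKeeper server on 2181).
public class ZkProbe {
    public static boolean canConnect(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Defaults taken from the thread; pass your own host/port as args.
        String host = args.length > 0 ? args[0] : "10.34.187.170";
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 2181;
        System.out.println(host + ":" + port + " reachable: "
                + canConnect(host, port, 3000));
    }
}
```

Run it from the node whose task logs show the error; if it prints `false` for the quorum address, the problem is network/ZooKeeper-side, not the HBase client config.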
>> > > >
>> > > >
>> > > >
>> > > > On Wed, Aug 21, 2013 at 5:15 PM, Jean-Marc Spaggiari <
>> > > > [email protected]> wrote:
>> > > >
>> > > > > Hi Pavan,
>> > > > >
>> > > > > I don't think Cloudera Manager assigns the address to your
>> > > > > computer. When CM is down, your computer still has an IP, and even
>> > > > > if you uninstall CM, you will still have an IP assigned to your
>> > > > > computer.
>> > > > >
>> > > > > If you have not configured anything there, then you most probably
>> > > > > have DHCP. Just try what I told you in the other message.
>> > > > >
>> > > > > JM
>> > > > >
>> > > > > 2013/8/21 Pavan Sudheendra <[email protected]>
>> > > > >
>> > > > > > @Manoj I have set hbase.zookeeper.quorum in my M-R application..
>> > > > > >
>> > > > > > @Jean Cloudera Manager picks up the IP address automatically..
>> > > > > >
>> > > > > >
>> > > > > > On Wed, Aug 21, 2013 at 5:07 PM, manoj p <[email protected]>
>> > wrote:
>> > > > > >
>> > > > > > > Can you try passing the argument
>> > > > > > > -Dhbase.zookeeper.quorum=10.34.187.170 while running the
>> > > > > > > program?
>> > > > > > >
>> > > > > > > If this doesn't work either, please check that HBASE_HOME and
>> > > > > > > HBASE_CONF_DIR are set correctly.
>> > > > > > >
>> > > > > > > BR/Manoj
>> > > > > > >
>> > > > > > >
>> > > > > > > On Wed, Aug 21, 2013 at 4:48 PM, Pavan Sudheendra <
>> > > > [email protected]
>> > > > > > > >wrote:
>> > > > > > >
>> > > > > > > > Yes. My /etc/hosts has the correct mapping to localhost
>> > > > > > > >
>> > > > > > > > 127.0.0.1    localhost
>> > > > > > > >
>> > > > > > > > # The following lines are desirable for IPv6 capable hosts
>> > > > > > > > ::1     ip6-localhost ip6-loopback
>> > > > > > > > fe00::0 ip6-localnet
>> > > > > > > > ff00::0 ip6-mcastprefix
>> > > > > > > > ff02::1 ip6-allnodes
>> > > > > > > > ff02::2 ip6-allrouters
>> > > > > > > >
>> > > > > > > > I've added the HBase jars to the Hadoop classpath as well.
>> > > > > > > > Not sure why this happens. I'm running this on a 6-node
>> > > > > > > > Cloudera cluster which consists of 1 jobtracker and 5
>> > > > > > > > tasktrackers.
>> > > > > > > >
>> > > > > > > > After a while all my map jobs fail. Completely baffled,
>> > > > > > > > because the map tasks were doing the required work.
>> > > > > > > >
>> > > > > > > >
>> > > > > > > >
>> > > > > > > > On Wed, Aug 21, 2013 at 4:45 PM, manoj p <[email protected]
>> >
>> > > > wrote:
>> > > > > > > >
>> > > > > > > > > For your code to run, please ensure that you use the
>> > > > > > > > > correct HBase/Hadoop jar versions when compiling your
>> > > > > > > > > program.
>> > > > > > > > >
>> > > > > > > > > BR/Manoj
>> > > > > > > > >
>> > > > > > > > >
>> > > > > > > > > On Wed, Aug 21, 2013 at 4:38 PM, manoj p <
>> [email protected]>
>> > > > > wrote:
>> > > > > > > > >
>> > > > > > > > > > Check your /etc/hosts file for the correct mapping of
>> > > > > > > > > > 127.0.0.1 to localhost. Also ensure that you have
>> > > > > > > > > > hbase.zookeeper.quorum in your configuration, and check
>> > > > > > > > > > that the HBase classpath is appended to the Hadoop
>> > > > > > > > > > classpath.
>> > > > > > > > > >
>> > > > > > > > > >
>> > > > > > > > > > BR/Manoj
>> > > > > > > > > >
>> > > > > > > > > >
>> > > > > > > > > > On Wed, Aug 21, 2013 at 4:10 PM, Pavan Sudheendra <
>> > > > > > > [email protected]
>> > > > > > > > > >wrote:
>> > > > > > > > > >
>> > > > > > > > > >> The Hadoop namenode reports the following error, which
>> > > > > > > > > >> is unusual:
>> > > > > > > > > >>
>> > > > > > > > > >>
>> > > > > > > > > >> 2013-08-21 09:21:12,328 INFO org.apache.zookeeper.ClientCnxn:
>> > > > > > > > > >> Opening socket connection to server localhost/127.0.0.1:2181.
>> > > > > > > > > >> Will not attempt to authenticate using SASL (Unable to
>> > > > > > > > > >> locate a login configuration)
>> > > > > > > > > >> java.net.ConnectException: Connection refused
>> > > > > > > > > >>     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>> > > > > > > > > >>     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>> > > > > > > > > >>     at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
>> > > > > > > > > >>     at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
>> > > > > > > > > >> 2013-08-21 09:33:11,033 WARN
>> > > > > > > > > >> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper:
>> > > > > > > > > >> Possibly transient ZooKeeper exception:
>> > > > > > > > > >> org.apache.zookeeper.KeeperException$ConnectionLossException:
>> > > > > > > > > >> KeeperErrorCode = ConnectionLoss for /hbase
>> > > > > > > > > >> 2013-08-21 09:33:11,033 INFO org.apache.hadoop.hbase.util.RetryCounter:
>> > > > > > > > > >> Sleeping 8000ms before retry #3...
>> > > > > > > > > >> 2013-08-21 09:33:11,043 WARN org.apache.hadoop.mapred.Task:
>> > > > > > > > > >> Parent died. Exiting attempt_201307181246_0548_m_000022_2
>> > > > > > > > > >>
>> > > > > > > > > >>
>> > > > > > > > > >> Because I have specified the address in the Java file:
>> > > > > > > > > >>     Configuration conf = HBaseConfiguration.create();
>> > > > > > > > > >>     conf.set("hbase.zookeeper.quorum", "10.34.187.170");
>> > > > > > > > > >>     conf.set("hbase.zookeeper.property.clientPort", "2181");
>> > > > > > > > > >>     conf.set("hbase.master", "10.34.187.170");
>> > > > > > > > > >>
>> > > > > > > > > >>
>> > > > > > > > > >>
>> > > > > > > > > >> All my map tasks fail like this! Please help.. I'm on a
>> > > > > > > > > >> time bomb.
>> > > > > > > > > >> --
>> > > > > > > > > >> Regards-
>> > > > > > > > > >> Pavan
>> > > > > > > > > >>
>> > > > > > > > > >
>> > > > > > > > > >
>> > > > > > > > >
>> > > > > > > >
>> > > > > > > >
>> > > > > > > >
>> > > > > > > > --
>> > > > > > > > Regards-
>> > > > > > > > Pavan
>> > > > > > > >
>> > > > > > >
>> > > > > >
>> > > > > >
>> > > > > >
>> > > > > > --
>> > > > > > Regards-
>> > > > > > Pavan
>> > > > > >
>> > > > >
>> > > >
>> > > >
>> > > >
>> > > > --
>> > > > Regards-
>> > > > Pavan
>> > > >
>> > >
>> >
>> >
>> >
>> > --
>> > Regards-
>> > Pavan
>> >
>>
>
>
>
> --
> Regards-
> Pavan
>



-- 
Regards-
Pavan
