Hi Stack,

Just FYI: when I change to only one ZooKeeper IP, I get the stacktrace
as above, and then the master will not shut down.


Cheers

Tim


On Mon, Jul 20, 2009 at 8:12 PM, tim robertson<[email protected]> wrote:
> Hi Stack,
>
> Thanks, I now have it loading via MapReduce on my (cough cough)
> cluster of 4 Mac minis and a Mac Pro (each mini is an HDFS slave and a
> regionserver).  We have some Dell R300s being installed now, so this
> is my dev environment.  As a MapReduce cluster it runs OK and will
> scan and do basic counts of 200 million records in 20 minutes or so.  I
> will probably be killing it now though ;o)
>
> I did need to use IPs for the ZooKeeper setup, and will change it to
> use only 3 nodes.  Honestly, I don't really know a thing about
> ZooKeeper other than having read its homepage.
>
> I built trunk about 4 hours ago, Stack, and will keep an eye on it
> and report if I see misbehaving shutdowns - I expect to be
> doing a lot of starting and stopping.
>
> Cheers
>
> Tim
> - got to head home so done for the day
>
> On Mon, Jul 20, 2009 at 7:58 PM, stack<[email protected]> wrote:
>> You should have an odd number of members in your zk quorum Tim.
>>
>> Having to use IPs seems extreme.  What's your networking setup, Tim?
>> (Is this on EC2?)
>>
>> On the cluster not going down, are you on recent TRUNK?  (I ask because
>> help for this condition went in last Friday.)
>>
>> St.Ack
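
[Editor's note: Stack's advice about an odd-sized quorum would mean listing only three of the four nodes. A minimal sketch of the corresponding hbase-site.xml change, reusing the hostnames from Tim's configuration quoted below; this fragment is an illustration, not from the thread itself.]

```xml
<!-- Sketch: a three-node quorum instead of four, so a strict majority
     is well defined. Hostnames and port are those from the thread. -->
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2222</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>node1.local,node2.local,node3.local</value>
</property>
```

With an even member count, losing half the nodes leaves no majority, so an odd count tolerates the same number of failures with one fewer machine.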
>>
>> On Mon, Jul 20, 2009 at 10:01 AM, tim robertson
>> <[email protected]>wrote:
>>
>>> Hi all,
>>>
>>> (Thanks for the help on the other thread - I moved this to a new one
>>> since I was diverging...)
>>>
>>> Following the instructions in the README from trunk (0.20.0), I
>>> have this in my configuration:
>>>
>>> hbase-env.sh
>>>  export HBASE_MANAGES_ZK=true
>>>
>>> hbase-site.xml
>>>  <property>
>>>    <name>hbase.zookeeper.property.clientPort</name>
>>>    <value>2222</value>
>>>  </property>
>>>  <property>
>>>    <name>hbase.zookeeper.quorum</name>
>>>    <value>node1.local,node2.local,node3.local,node4.local</value>
>>>  </property>
>>>
>>> The errors I get suggest the configuration is being read, since I see
>>> ...org.apache.zookeeper.ZooKeeper: Initiating client connection,
>>> host=node4.local:2222
>>> but I then get Connection refused errors.
>>>
>>> Are there some steps I am missing?
>>> Sorry, but I have never used ZooKeeper - how do I check whether it is
>>> happily running?
>>>
>>> Thanks again,
>>> (sorry for the bombardment of newbie questions)
>>>
>>> Tim
>>>
>>>
>>> The error:
>>>
>>> 2009-07-20 18:54:45,729 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
>>> Initializing JVM Metrics with processName=MAP, sessionId=
>>> 2009-07-20 18:54:46,331 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb =
>>> 100
>>> 2009-07-20 18:54:46,574 INFO org.apache.hadoop.mapred.MapTask: data
>>> buffer = 79691776/99614720
>>> 2009-07-20 18:54:46,574 INFO org.apache.hadoop.mapred.MapTask: record
>>> buffer = 262144/327680
>>> 2009-07-20 18:54:46,909 INFO org.apache.zookeeper.ZooKeeper: Client
>>> environment:zookeeper.version=3.2.0--1, built on 05/15/2009 06:05 GMT
>>> 2009-07-20 18:54:46,909 INFO org.apache.zookeeper.ZooKeeper: Client
>>> environment:host.name=192.168.2.12
>>> 2009-07-20 18:54:46,909 INFO org.apache.zookeeper.ZooKeeper: Client
>>> environment:java.version=1.6.0_13
>>> 2009-07-20 18:54:46,909 INFO org.apache.zookeeper.ZooKeeper: Client
>>> environment:java.vendor=Apple Inc.
>>> 2009-07-20 18:54:46,909 INFO org.apache.zookeeper.ZooKeeper: Client
>>>
>>> environment:java.home=/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home
>>> 2009-07-20 18:54:46,909 INFO org.apache.zookeeper.ZooKeeper: Client
>>>
>>> environment:java.class.path=/var/root/hadoop-0.20.0/bin/../conf:/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home/lib/tools.jar:/var/root/hadoop-0.20.0/bin/..:/var/root/hadoop-0.20.0/bin/../hadoop-0.20.0-core.jar:/var/root/hadoop-0.20.0/bin/../lib/commons-cli-2.0-SNAPSHOT.jar:/var/root/hadoop-0.20.0/bin/../lib/commons-codec-1.3.jar:/var/root/hadoop-0.20.0/bin/../lib/commons-el-1.0.jar:/var/root/hadoop-0.20.0/bin/../lib/commons-httpclient-3.0.1.jar:/var/root/hadoop-0.20.0/bin/../lib/commons-logging-1.0.4.jar:/var/root/hadoop-0.20.0/bin/../lib/commons-logging-api-1.0.4.jar:/var/root/hadoop-0.20.0/bin/../lib/commons-net-1.4.1.jar:/var/root/hadoop-0.20.0/bin/../lib/core-3.1.1.jar:/var/root/hadoop-0.20.0/bin/../lib/hsqldb-1.8.0.10.jar:/var/root/hadoop-0.20.0/bin/../lib/jasper-compiler-5.5.12.jar:/var/root/hadoop-0.20.0/bin/../lib/jasper-runtime-5.5.12.jar:/var/root/hadoop-0.20.0/bin/../lib/jets3t-0.6.1.jar:/var/root/hadoop-0.20.0/bin/../lib/jetty-6.1.14.jar:/var/root/hadoop-0.20.0/bin/../lib/jetty-util-6.1.14.jar:/var/root/hadoop-0.20.0/bin/../lib/junit-3.8.1.jar:/var/root/hadoop-0.20.0/bin/../lib/kfs-0.2.2.jar:/var/root/hadoop-0.20.0/bin/../lib/log4j-1.2.15.jar:/var/root/hadoop-0.20.0/bin/../lib/oro-2.0.8.jar:/var/root/hadoop-0.20.0/bin/../lib/servlet-api-2.5-6.1.14.jar:/var/root/hadoop-0.20.0/bin/../lib/slf4j-api-1.4.3.jar:/var/root/hadoop-0.20.0/bin/../lib/slf4j-log4j12-1.4.3.jar:/var/root/hadoop-0.20.0/bin/../lib/xmlenc-0.52.jar:/var/root/hadoop-0.20.0/bin/../lib/jsp-2.1/jsp-2.1.jar:/var/root/hadoop-0.20.0/bin/../lib/jsp-2.1/jsp-api-2.1.jar:/var/root/hbase-0.20.0/hbase-0.20.0-dev.jar:/var/root/hbase-0.20.0/lib/zookeeper-r785019-hbase-1329.jar:/var/root/hbase-0.20.0/conf::/tmp/hadoop/taskTracker/jobcache/job_200907201850_0001/jars/classes:/tmp/hadoop/taskTracker/jobcache/job_200907201850_0001/jars:/tmp/hadoop/taskTracker/jobcache/job_200907201850_0001/attempt_200907201850_0001_m_000000_0/work
>>> 2009-07-20 18:54:46,910 INFO org.apache.zookeeper.ZooKeeper: Client
>>>
>>> environment:java.library.path=/var/root/hadoop-0.20.0/bin/../lib/native/Mac_OS_X-x86_64-64:/tmp/hadoop/taskTracker/jobcache/job_200907201850_0001/attempt_200907201850_0001_m_000000_0/work
>>> 2009-07-20 18:54:46,910 INFO org.apache.zookeeper.ZooKeeper: Client
>>>
>>> environment:java.io.tmpdir=/tmp/hadoop/taskTracker/jobcache/job_200907201850_0001/attempt_200907201850_0001_m_000000_0/work/tmp
>>> 2009-07-20 18:54:46,910 INFO org.apache.zookeeper.ZooKeeper: Client
>>> environment:java.compiler=<NA>
>>> 2009-07-20 18:54:46,910 INFO org.apache.zookeeper.ZooKeeper: Client
>>> environment:os.name=Mac OS X
>>> 2009-07-20 18:54:46,910 INFO org.apache.zookeeper.ZooKeeper: Client
>>> environment:os.arch=x86_64
>>> 2009-07-20 18:54:46,910 INFO org.apache.zookeeper.ZooKeeper: Client
>>> environment:os.version=10.5.7
>>> 2009-07-20 18:54:46,910 INFO org.apache.zookeeper.ZooKeeper: Client
>>> environment:user.name=root
>>> 2009-07-20 18:54:46,910 INFO org.apache.zookeeper.ZooKeeper: Client
>>> environment:user.home=/var/root
>>> 2009-07-20 18:54:46,910 INFO org.apache.zookeeper.ZooKeeper: Client
>>>
>>> environment:user.dir=/private/tmp/hadoop/taskTracker/jobcache/job_200907201850_0001/attempt_200907201850_0001_m_000000_0/work
>>> 2009-07-20 18:54:46,912 INFO org.apache.zookeeper.ZooKeeper:
>>> Initiating client connection,
>>> host=node4.local:2222,node3.local:2222,node2.local:2222,node1.local:2222
>>> sessionTimeout=30000
>>>
>>> watcher=org.apache.hadoop.hbase.client.hconnectionmanager$tableserv...@7e2bd615
>>> 2009-07-20 18:54:46,913 INFO org.apache.zookeeper.ClientCnxn:
>>> zookeeper.disableAutoWatchReset is false
>>> 2009-07-20 18:54:46,925 INFO org.apache.zookeeper.ClientCnxn:
>>> Attempting connection to server node1.local/192.168.2.12:2222
>>> 2009-07-20 18:54:46,927 WARN org.apache.zookeeper.ClientCnxn:
>>> Exception closing session 0x0 to sun.nio.ch.selectionkeyi...@6908b095
>>> java.net.ConnectException: Connection refused
>>>        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>>        at
>>> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>>>        at
>>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:885)
>>> 2009-07-20 18:54:46,929 WARN org.apache.zookeeper.ClientCnxn: Ignoring
>>> exception during shutdown input
>>> java.nio.channels.ClosedChannelException
>>>        at
>>> sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
>>>        at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
>>>        at
>>> org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:951)
>>>        at
>>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:922)
>>> 2009-07-20 18:54:46,930 WARN org.apache.zookeeper.ClientCnxn: Ignoring
>>> exception during shutdown output
>>> java.nio.channels.ClosedChannelException
>>>        at
>>> sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
>>>        at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>>>        at
>>> org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:956)
>>>        at
>>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:922)
>>> 2009-07-20 18:54:47,108 WARN
>>> org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper: Failed to create
>>> /hbase:
>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>>> KeeperErrorCode = ConnectionLoss for /hbase
>>>        at
>>> org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
>>>        at
>>> org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
>>>        at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:522)
>>>        at
>>> org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.ensureExists(ZooKeeperWrapper.java:342)
>>>        at
>>> org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.ensureParentExists(ZooKeeperWrapper.java:363)
>>>        at
>>> org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.checkOutOfSafeMode(ZooKeeperWrapper.java:476)
>>>        at
>>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:848)
>>>        at
>>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:517)
>>>        at
>>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:493)
>>>        at
>>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:567)
>>>        at
>>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:526)
>>>        at
>>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:493)
>>>        at
>>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:567)
>>>        at
>>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:530)
>>>        at
>>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:493)
>>>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:124)
>>>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:102)
>>>        at
>>> org.gbif.occurrencestore.mapreduce.DwCTabFileLoader$MapLoad.setup(DwCTabFileLoader.java:64)
>>>        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
>>>        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:518)
>>>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:303)
>>>        at org.apache.hadoop.mapred.Child.main(Child.java:170)
>>>
>>
>
