It can be, but in this case you first need to troubleshoot why ZooKeeper is not running, as it acts as the interface between Hadoop and HBase.
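Since the advice above hinges on whether ZooKeeper is actually up, a quick probe can save digging through logs. This is only a sketch: it assumes ZooKeeper's default client port 2181 on localhost and that `nc` (netcat) is installed. `ruok` is one of ZooKeeper's standard four-letter admin commands; a live server answers `imok`.

```shell
# Probe a local ZooKeeper on the default client port (2181).
# A healthy server answers the four-letter command "ruok" with "imok".
reply=$(echo ruok | nc -w 2 localhost 2181 2>/dev/null)
if [ "$reply" = "imok" ]; then
    echo "zookeeper: running on port 2181"
else
    echo "zookeeper: no response on port 2181"
fi
```

If nothing answers, check whether an `HQuorumPeer` process (or the in-process MiniZK server that `start-hbase.sh` launches in pseudo-distributed mode) shows up in `jps` output.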
On Mon, Nov 28, 2011 at 3:13 PM, Mohammad Tariq <[email protected]> wrote:
> Is there any possibility that this is happening because of improper
> forward and reverse DNS resolution?
>
> Regards,
> Mohammad Tariq
>
>
> On Mon, Nov 28, 2011 at 7:39 PM, Mohammad Tariq <[email protected]> wrote:
> > Hello :)
> >
> > I am not starting ZooKeeper manually and yes, I am using bin/start-hbase.sh
> >
> > Regards,
> > Mohammad Tariq
> >
> >
> > On Mon, Nov 28, 2011 at 7:36 PM, Dejan Menges <[email protected]> wrote:
> >> Hi again :)
> >>
> >> Looks to me like ZooKeeper is not started?
> >>
> >> Are you starting and managing it manually or through HBase?
> >>
> >> How are you starting HBase, using the $HBASE_HOME/bin/start-hbase.sh script or manually?
> >>
> >> Tnx,
> >> Dejan
> >>
> >> On Mon, Nov 28, 2011 at 3:00 PM, Mohammad Tariq <[email protected]> wrote:
> >>
> >>> These are the contents of the datanode log file -
> >>>
> >>> 2011-11-28 19:27:50,669 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> >>> /************************************************************
> >>> STARTUP_MSG: Starting DataNode
> >>> STARTUP_MSG:   host = ubuntu/127.0.1.1
> >>> STARTUP_MSG:   args = []
> >>> STARTUP_MSG:   version = 0.20.205.0
> >>> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-205 -r 1179940; compiled by 'hortonfo' on Fri Oct 7 06:20:32 UTC 2011
> >>> ************************************************************/
> >>> 2011-11-28 19:27:50,766 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
> >>> 2011-11-28 19:27:50,774 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
> >>> 2011-11-28 19:27:50,775 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
> >>> 2011-11-28 19:27:50,775 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
> >>> 2011-11-28 19:27:50,876 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
> >>> 2011-11-28 19:27:50,879 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
> >>> 2011-11-28 19:27:56,127 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
> >>> 2011-11-28 19:27:56,139 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
> >>> 2011-11-28 19:27:56,140 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
> >>> 2011-11-28 19:28:01,195 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
> >>> 2011-11-28 19:28:01,235 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> >>> 2011-11-28 19:28:01,238 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075
> >>> 2011-11-28 19:28:01,239 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075
> >>> 2011-11-28 19:28:01,239 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
> >>> 2011-11-28 19:28:01,239 INFO org.mortbay.log: jetty-6.1.26
> >>> 2011-11-28 19:28:01,449 INFO org.mortbay.log: Started [email protected]:50075
> >>> 2011-11-28 19:28:01,452 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
> >>> 2011-11-28 19:28:01,453 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source DataNode registered.
> >>>
> >>> And these are the contents of the namenode log file -
> >>>
> >>> 2011-11-28 19:27:49,306 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> >>> /************************************************************
> >>> STARTUP_MSG: Starting NameNode
> >>> STARTUP_MSG:   host = ubuntu/127.0.1.1
> >>> STARTUP_MSG:   args = []
> >>> STARTUP_MSG:   version = 0.20.205.0
> >>> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-205 -r 1179940; compiled by 'hortonfo' on Fri Oct 7 06:20:32 UTC 2011
> >>> ************************************************************/
> >>> 2011-11-28 19:27:49,403 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
> >>> 2011-11-28 19:27:49,411 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
> >>> 2011-11-28 19:27:49,411 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
> >>> 2011-11-28 19:27:49,411 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
> >>> 2011-11-28 19:27:49,535 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
> >>> 2011-11-28 19:27:49,538 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
> >>> 2011-11-28 19:27:49,541 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
> >>> 2011-11-28 19:27:49,542 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
> >>> 2011-11-28 19:27:49,561 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
> >>> 2011-11-28 19:27:49,562 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 17.77875 MB
> >>> 2011-11-28 19:27:49,562 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^21 = 2097152 entries
> >>> 2011-11-28 19:27:49,562 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
> >>> 2011-11-28 19:27:49,575 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=solr
> >>> 2011-11-28 19:27:49,575 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> >>> 2011-11-28 19:27:49,575 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
> >>> 2011-11-28 19:27:49,578 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
> >>> 2011-11-28 19:27:49,578 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> >>> 2011-11-28 19:27:49,731 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
> >>> 2011-11-28 19:27:49,744 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
> >>> 2011-11-28 19:27:49,751 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
> >>> 2011-11-28 19:27:49,754 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
> >>> 2011-11-28 19:27:49,754 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 110 loaded in 0 seconds.
> >>> 2011-11-28 19:27:49,754 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /home/solr/hdfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
> >>> 2011-11-28 19:27:49,755 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 110 saved in 0 seconds.
> >>> 2011-11-28 19:27:50,328 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 110 saved in 0 seconds.
> >>> 2011-11-28 19:27:50,747 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
> >>> 2011-11-28 19:27:50,747 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 1176 msecs
> >>> 2011-11-28 19:27:50,755 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
> >>> 2011-11-28 19:27:50,755 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
> >>> 2011-11-28 19:27:50,755 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
> >>> 2011-11-28 19:27:50,755 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of over-replicated blocks = 0
> >>> 2011-11-28 19:27:50,755 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 8 msec
> >>> 2011-11-28 19:27:50,755 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 1 secs.
> >>> 2011-11-28 19:27:50,755 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
> >>> 2011-11-28 19:27:50,755 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
> >>> 2011-11-28 19:27:50,760 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
> >>> 2011-11-28 19:27:50,760 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
> >>> 2011-11-28 19:27:50,760 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 1 msec processing time, 1 msec clock time, 1 cycles
> >>> 2011-11-28 19:27:50,760 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
> >>> 2011-11-28 19:27:50,760 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
> >>> 2011-11-28 19:27:50,763 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
> >>> 2011-11-28 19:27:50,776 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
> >>> 2011-11-28 19:27:50,778 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort9000 registered.
> >>> 2011-11-28 19:27:50,779 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort9000 registered.
> >>> 2011-11-28 19:27:50,782 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: localhost/127.0.0.1:9000
> >>> 2011-11-28 19:27:55,829 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
> >>> 2011-11-28 19:27:55,868 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> >>> 2011-11-28 19:27:55,873 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
> >>> 2011-11-28 19:27:55,874 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
> >>> 2011-11-28 19:27:55,874 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
> >>> 2011-11-28 19:27:55,874 INFO org.mortbay.log: jetty-6.1.26
> >>> 2011-11-28 19:27:56,049 INFO org.mortbay.log: Started [email protected]:50070
> >>> 2011-11-28 19:27:56,049 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
> >>> 2011-11-28 19:27:56,049 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> >>> 2011-11-28 19:27:56,050 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
> >>> 2011-11-28 19:27:56,051 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9000: starting
> >>> 2011-11-28 19:27:56,051 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9000: starting
> >>> 2011-11-28 19:27:56,051 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9000: starting
> >>> 2011-11-28 19:27:56,051 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000: starting
> >>> 2011-11-28 19:27:56,051 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000: starting
> >>> 2011-11-28 19:27:56,051 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9000: starting
> >>> 2011-11-28 19:27:56,051 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9000: starting
> >>> 2011-11-28 19:27:56,051 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000: starting
> >>> 2011-11-28 19:27:56,051 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9000: starting
> >>> 2011-11-28 19:27:56,052 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000: starting
> >>> 2011-11-28 19:28:06,628 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:50010 storage DS-1097957079-127.0.1.1-50010-1322465103376
> >>> 2011-11-28 19:28:06,633 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:50010
> >>> 2011-11-28 19:28:06,646 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.processReport: from 127.0.0.1:50010, blocks: 0, processing time: 2 msecs
> >>>
> >>> Regards,
> >>> Mohammad Tariq
> >>>
> >>>
> >>> On Mon, Nov 28, 2011 at 7:23 PM, Mohammad Tariq <[email protected]> wrote:
> >>> > Hi Dejan,
> >>> > Here is the o/p of jps -
> >>> > solr@ubuntu:~$ jps
> >>> > 14792 NameNode
> >>> > 17899 HMaster
> >>> > 15014 DataNode
> >>> > 18001 Jps
> >>> > 15251 SecondaryNameNode
> >>> >
> >>> > Regards,
> >>> > Mohammad Tariq
> >>> >
> >>> >
> >>> > On Mon, Nov 28, 2011 at 7:11 PM, Dejan Menges <[email protected]> wrote:
> >>> >> Hi Mohammad,
> >>> >>
> >>> >> Looks to me like your hosts file is OK, but HDFS/Namenode is not running, and something is trying to connect to the Namenode on port 9000?
> >>> >>
> >>> >> Can you list your local java processes with 'jps' here and check your Namenode/Datanode logs?
> >>> >>
> >>> >> Tnx,
> >>> >> Dejan
> >>> >>
> >>> >> On Mon, Nov 28, 2011 at 2:36 PM, Mohammad Tariq <[email protected]> wrote:
> >>> >>
> >>> >>> Could anyone who has used HBase in pseudo-distributed mode share his/her hosts file? I am getting the following error -
> >>> >>>
> >>> >>> Mon Nov 28 19:03:20 IST 2011 Starting master on ubuntu
> >>> >>> ulimit -n 32768
> >>> >>> 2011-11-28 19:03:21,038 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:zookeeper.version=3.3.2-1031432, built on 11/05/2010 05:32 GMT
> >>> >>> 2011-11-28 19:03:21,038 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:host.name=solr@ubuntu
> >>> >>> 2011-11-28 19:03:21,038 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:java.version=1.6.0_26
> >>> >>> 2011-11-28 19:03:21,038 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:java.vendor=Sun Microsystems Inc.
> >>> >>> 2011-11-28 19:03:21,038 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:java.home=/usr/lib/jvm/java-6-sun-1.6.0.26/jre
> >>> >>> 2011-11-28 19:03:21,038 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:java.class.path=/home/solr/hbase-0.90.4/bin/../conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/solr/hbase-0.90.4/bin/..:/home/solr/hbase-0.90.4/bin/../hbase-0.90.4.jar:/home/solr/hbase-0.90.4/bin/../hbase-0.90.4-tests.jar:/home/solr/hbase-0.90.4/bin/../lib/activation-1.1.jar:/home/solr/hbase-0.90.4/bin/../lib/asm-3.1.jar:/home/solr/hbase-0.90.4/bin/../lib/avro-1.3.3.jar:/home/solr/hbase-0.90.4/bin/../lib/commons-cli-1.2.jar:/home/solr/hbase-0.90.4/bin/../lib/commons-codec-1.4.jar:/home/solr/hbase-0.90.4/bin/../lib/commons-el-1.0.jar:/home/solr/hbase-0.90.4/bin/../lib/commons-httpclient-3.1.jar:/home/solr/hbase-0.90.4/bin/../lib/commons-lang-2.5.jar:/home/solr/hbase-0.90.4/bin/../lib/commons-logging-1.1.1.jar:/home/solr/hbase-0.90.4/bin/../lib/commons-net-1.4.1.jar:/home/solr/hbase-0.90.4/bin/../lib/core-3.1.1.jar:/home/solr/hbase-0.90.4/bin/../lib/guava-r06.jar:/home/solr/hbase-0.90.4/bin/../lib/hadoop-core-0.20-append-r1056497.jar:/home/solr/hbase-0.90.4/bin/../lib/jackson-core-asl-1.5.5.jar:/home/solr/hbase-0.90.4/bin/../lib/jackson-jaxrs-1.5.5.jar:/home/solr/hbase-0.90.4/bin/../lib/jackson-mapper-asl-1.4.2.jar:/home/solr/hbase-0.90.4/bin/../lib/jackson-xc-1.5.5.jar:/home/solr/hbase-0.90.4/bin/../lib/jasper-compiler-5.5.23.jar:/home/solr/hbase-0.90.4/bin/../lib/jasper-runtime-5.5.23.jar:/home/solr/hbase-0.90.4/bin/../lib/jaxb-api-2.1.jar:/home/solr/hbase-0.90.4/bin/../lib/jaxb-impl-2.1.12.jar:/home/solr/hbase-0.90.4/bin/../lib/jersey-core-1.4.jar:/home/solr/hbase-0.90.4/bin/../lib/jersey-json-1.4.jar:/home/solr/hbase-0.90.4/bin/../lib/jersey-server-1.4.jar:/home/solr/hbase-0.90.4/bin/../lib/jettison-1.1.jar:/home/solr/hbase-0.90.4/bin/../lib/jetty-6.1.26.jar:/home/solr/hbase-0.90.4/bin/../lib/jetty-util-6.1.26.jar:/home/solr/hbase-0.90.4/bin/../lib/jruby-complete-1.6.0.jar:/home/solr/hbase-0.90.4/bin/../lib/jsp-2.1-6.1.14.jar:/home/solr/hbase-0.90.4/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/solr/hbase-0.90.4/bin/../lib/jsr311-api-1.1.1.jar:/home/solr/hbase-0.90.4/bin/../lib/log4j-1.2.16.jar:/home/solr/hbase-0.90.4/bin/../lib/protobuf-java-2.3.0.jar:/home/solr/hbase-0.90.4/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/solr/hbase-0.90.4/bin/../lib/slf4j-api-1.5.8.jar:/home/solr/hbase-0.90.4/bin/../lib/slf4j-log4j12-1.5.8.jar:/home/solr/hbase-0.90.4/bin/../lib/stax-api-1.0.1.jar:/home/solr/hbase-0.90.4/bin/../lib/thrift-0.2.0.jar:/home/solr/hbase-0.90.4/bin/../lib/xmlenc-0.52.jar:/home/solr/hbase-0.90.4/bin/../lib/zookeeper-3.3.2.jar
> >>> >>> 2011-11-28 19:03:21,038 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:java.library.path=/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/server:/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64:/usr/lib/jvm/java-6-sun-1.6.0.26/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
> >>> >>> 2011-11-28 19:03:21,038 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:java.io.tmpdir=/tmp
> >>> >>> 2011-11-28 19:03:21,038 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:java.compiler=<NA>
> >>> >>> 2011-11-28 19:03:21,038 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:os.name=Linux
> >>> >>> 2011-11-28 19:03:21,038 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:os.arch=amd64
> >>> >>> 2011-11-28 19:03:21,038 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:os.version=3.2.0-1-generic
> >>> >>> 2011-11-28 19:03:21,038 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:user.name=solr
> >>> >>> 2011-11-28 19:03:21,038 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:user.home=/home/solr
> >>> >>> 2011-11-28 19:03:21,039 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:user.dir=/home/solr/hbase-0.90.4
> >>> >>> 2011-11-28 19:03:21,049 INFO org.apache.zookeeper.server.ZooKeeperServer: Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /tmp/hbase-solr/zookeeper/zookeeper/version-2 snapdir /tmp/hbase-solr/zookeeper/zookeeper/version-2
> >>> >>> 2011-11-28 19:03:21,067 INFO org.apache.zookeeper.server.NIOServerCnxn: binding to port 0.0.0.0/0.0.0.0:2181
> >>> >>> 2011-11-28 19:03:21,071 INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog: Snapshotting: 0
> >>> >>> 2011-11-28 19:03:21,121 INFO org.apache.zookeeper.server.NIOServerCnxn: Accepted socket connection from /192.168.2.106:58263
> >>> >>> 2011-11-28 19:03:21,123 INFO org.apache.zookeeper.server.NIOServerCnxn: Processing stat command from /192.168.2.106:58263
> >>> >>> 2011-11-28 19:03:21,124 INFO org.apache.zookeeper.server.NIOServerCnxn: Stat command output
> >>> >>> 2011-11-28 19:03:21,126 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /192.168.2.106:58263 (no session established for client)
> >>> >>> 2011-11-28 19:03:21,126 INFO org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster: Started MiniZK Server on client port: 2181
> >>> >>> 2011-11-28 19:03:21,176 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HMasterCommandLine$LocalHMaster, port=40697
> >>> >>> 2011-11-28 19:03:21,197 INFO org.apache.hadoop.hbase.security.User: Skipping login, not running on secure Hadoop
> >>> >>> 2011-11-28 19:03:21,198 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server Responder: starting
> >>> >>> 2011-11-28 19:03:21,198 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server listener on 40697: starting
> >>> >>> 2011-11-28 19:03:21,206 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 40697: starting
> >>> >>> 2011-11-28 19:03:21,206 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 40697: starting
> >>> >>> 2011-11-28 19:03:21,206 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 40697: starting
> >>> >>> 2011-11-28 19:03:21,206 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 40697: starting
> >>> >>> 2011-11-28 19:03:21,206 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 40697: starting
> >>> >>> 2011-11-28 19:03:21,206 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 40697: starting
> >>> >>> 2011-11-28 19:03:21,207 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 40697: starting
> >>> >>> 2011-11-28 19:03:21,207 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 40697: starting
> >>> >>> 2011-11-28 19:03:21,207 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 40697: starting
> >>> >>> 2011-11-28 19:03:21,207 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 40697: starting
> >>> >>> 2011-11-28 19:03:21,216 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.3.2-1031432, built on 11/05/2010 05:32 GMT
> >>> >>> 2011-11-28 19:03:21,216 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=solr@ubuntu
> >>> >>> 2011-11-28 19:03:21,216 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.6.0_26
> >>> >>> 2011-11-28 19:03:21,216 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
> >>> >>> 2011-11-28 19:03:21,216 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-6-sun-1.6.0.26/jre
> >>> >>> 2011-11-28 19:03:21,216 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.class.path=/home/solr/hbase-0.90.4/bin/../conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/solr/hbase-0.90.4/bin/..:/home/solr/hbase-0.90.4/bin/../hbase-0.90.4.jar:/home/solr/hbase-0.90.4/bin/../hbase-0.90.4-tests.jar:/home/solr/hbase-0.90.4/bin/../lib/activation-1.1.jar:/home/solr/hbase-0.90.4/bin/../lib/asm-3.1.jar:/home/solr/hbase-0.90.4/bin/../lib/avro-1.3.3.jar:/home/solr/hbase-0.90.4/bin/../lib/commons-cli-1.2.jar:/home/solr/hbase-0.90.4/bin/../lib/commons-codec-1.4.jar:/home/solr/hbase-0.90.4/bin/../lib/commons-el-1.0.jar:/home/solr/hbase-0.90.4/bin/../lib/commons-httpclient-3.1.jar:/home/solr/hbase-0.90.4/bin/../lib/commons-lang-2.5.jar:/home/solr/hbase-0.90.4/bin/../lib/commons-logging-1.1.1.jar:/home/solr/hbase-0.90.4/bin/../lib/commons-net-1.4.1.jar:/home/solr/hbase-0.90.4/bin/../lib/core-3.1.1.jar:/home/solr/hbase-0.90.4/bin/../lib/guava-r06.jar:/home/solr/hbase-0.90.4/bin/../lib/hadoop-core-0.20-append-r1056497.jar:/home/solr/hbase-0.90.4/bin/../lib/jackson-core-asl-1.5.5.jar:/home/solr/hbase-0.90.4/bin/../lib/jackson-jaxrs-1.5.5.jar:/home/solr/hbase-0.90.4/bin/../lib/jackson-mapper-asl-1.4.2.jar:/home/solr/hbase-0.90.4/bin/../lib/jackson-xc-1.5.5.jar:/home/solr/hbase-0.90.4/bin/../lib/jasper-compiler-5.5.23.jar:/home/solr/hbase-0.90.4/bin/../lib/jasper-runtime-5.5.23.jar:/home/solr/hbase-0.90.4/bin/../lib/jaxb-api-2.1.jar:/home/solr/hbase-0.90.4/bin/../lib/jaxb-impl-2.1.12.jar:/home/solr/hbase-0.90.4/bin/../lib/jersey-core-1.4.jar:/home/solr/hbase-0.90.4/bin/../lib/jersey-json-1.4.jar:/home/solr/hbase-0.90.4/bin/../lib/jersey-server-1.4.jar:/home/solr/hbase-0.90.4/bin/../lib/jettison-1.1.jar:/home/solr/hbase-0.90.4/bin/../lib/jetty-6.1.26.jar:/home/solr/hbase-0.90.4/bin/../lib/jetty-util-6.1.26.jar:/home/solr/hbase-0.90.4/bin/../lib/jruby-complete-1.6.0.jar:/home/solr/hbase-0.90.4/bin/../lib/jsp-2.1-6.1.14.jar:/home/solr/hbase-0.90.4/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/solr/hbase-0.90.4/bin/../lib/jsr311-api-1.1.1.jar:/home/solr/hbase-0.90.4/bin/../lib/log4j-1.2.16.jar:/home/solr/hbase-0.90.4/bin/../lib/protobuf-java-2.3.0.jar:/home/solr/hbase-0.90.4/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/solr/hbase-0.90.4/bin/../lib/slf4j-api-1.5.8.jar:/home/solr/hbase-0.90.4/bin/../lib/slf4j-log4j12-1.5.8.jar:/home/solr/hbase-0.90.4/bin/../lib/stax-api-1.0.1.jar:/home/solr/hbase-0.90.4/bin/../lib/thrift-0.2.0.jar:/home/solr/hbase-0.90.4/bin/../lib/xmlenc-0.52.jar:/home/solr/hbase-0.90.4/bin/../lib/zookeeper-3.3.2.jar
> >>> >>> 2011-11-28 19:03:21,216 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/server:/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64:/usr/lib/jvm/java-6-sun-1.6.0.26/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
> >>> >>> 2011-11-28 19:03:21,216 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
> >>> >>> 2011-11-28 19:03:21,216 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
> >>> >>> 2011-11-28 19:03:21,216 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
> >>> >>> 2011-11-28 19:03:21,216 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
> >>> >>> 2011-11-28 19:03:21,216 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=3.2.0-1-generic
> >>> >>> 2011-11-28 19:03:21,216 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=solr
> >>> >>> 2011-11-28 19:03:21,216 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/home/solr
> >>> >>> 2011-11-28 19:03:21,216 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/home/solr/hbase-0.90.4
> >>> >>> 2011-11-28 19:03:21,217 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=master:40697
> >>> >>> 2011-11-28 19:03:21,224 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/192.168.2.106:2181
> >>> >>> 2011-11-28 19:03:21,224 INFO org.apache.zookeeper.server.NIOServerCnxn: Accepted socket connection from /192.168.2.106:58264
> >>> >>> 2011-11-28 19:03:21,224 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to localhost/192.168.2.106:2181, initiating session
> >>> >>> 2011-11-28 19:03:21,228 INFO org.apache.zookeeper.server.NIOServerCnxn: Client attempting to establish new session at /192.168.2.106:58264
> >>> >>> 2011-11-28 19:03:21,230 INFO org.apache.zookeeper.server.persistence.FileTxnLog: Creating new log file: log.1
> >>> >>> 2011-11-28 19:03:21,335 INFO org.apache.zookeeper.server.NIOServerCnxn: Established session 0x133ea613d340000 with negotiated timeout 40000 for client /192.168.2.106:58264
> >>> >>> 2011-11-28 19:03:21,335 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server localhost/192.168.2.106:2181, sessionid = 0x133ea613d340000, negotiated timeout = 40000
> >>> >>> 2011-11-28 19:03:21,473 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=Master, sessionId=solr@ubuntu:40697
> >>> >>> 2011-11-28 19:03:21,483 INFO org.apache.hadoop.hbase.metrics: MetricsString added: revision
> >>> >>> 2011-11-28 19:03:21,483 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUser
> >>> >>> 2011-11-28 19:03:21,483 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsDate
> >>> >>> 2011-11-28 19:03:21,483 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUrl
> >>> >>> 2011-11-28 19:03:21,483 INFO org.apache.hadoop.hbase.metrics: MetricsString added: date
> >>> >>> 2011-11-28 19:03:21,483 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsRevision
> >>> >>> 2011-11-28 19:03:21,483 INFO org.apache.hadoop.hbase.metrics: MetricsString added: user
> >>> >>> 2011-11-28 19:03:21,483 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsVersion
> >>> >>> 2011-11-28 19:03:21,483 INFO org.apache.hadoop.hbase.metrics: MetricsString added: url
> >>> >>> 2011-11-28 19:03:21,483 INFO org.apache.hadoop.hbase.metrics: MetricsString added: version
> >>> >>> 2011-11-28 19:03:21,483 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
> >>> >>> 2011-11-28 19:03:21,484 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
> >>> >>> 2011-11-28 19:03:21,484 INFO org.apache.hadoop.hbase.master.metrics.MasterMetrics: Initialized
> >>> >>> 2011-11-28 19:03:21,504 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
> >>> >>> 2011-11-28 19:03:21,504 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/192.168.2.106:2181
> >>> >>> 2011-11-28 19:03:21,505 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to localhost/192.168.2.106:2181, initiating session
> >>> >>> 2011-11-28 19:03:21,505 INFO org.apache.zookeeper.server.NIOServerCnxn: Accepted socket connection from /192.168.2.106:58265
> >>> >>> 2011-11-28 19:03:21,505 INFO org.apache.zookeeper.server.NIOServerCnxn: Client attempting to establish new session at /192.168.2.106:58265
> >>> >>> 2011-11-28 19:03:21,539 INFO org.apache.zookeeper.server.NIOServerCnxn: Established session 0x133ea613d340001 with negotiated timeout 40000 for client /192.168.2.106:58265
> >>> >>> 2011-11-28 19:03:21,539 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server localhost/192.168.2.106:2181, sessionid = 0x133ea613d340001, negotiated timeout = 40000
> >>> >>> 2011-11-28 19:03:21,557 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HRegionServer, port=50557
> >>> >>> 2011-11-28 19:03:21,574 INFO org.apache.hadoop.hbase.security.User: Skipping login, not running on secure Hadoop
> >>> >>> 2011-11-28 19:03:21,575 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=regionserver:50557
> >>> >>> 2011-11-28 19:03:21,576 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/192.168.2.106:2181
> >>> >>> 2011-11-28 19:03:21,576 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to localhost/192.168.2.106:2181, initiating session
> >>> >>> 2011-11-28 19:03:21,577 INFO org.apache.zookeeper.server.NIOServerCnxn: Accepted socket connection from /192.168.2.106:58266
> >>> >>> 2011-11-28 19:03:21,577 INFO org.apache.zookeeper.server.NIOServerCnxn: Client attempting to establish new session at /192.168.2.106:58266
> >>> >>> 2011-11-28 19:03:21,603 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Master=solr@ubuntu:40697
> >>> >>> 2011-11-28 19:03:21,631 INFO org.apache.zookeeper.server.NIOServerCnxn: Established session 0x133ea613d340002 with negotiated timeout 40000 for client /192.168.2.106:58266
> >>> >>> 2011-11-28 19:03:21,631 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server localhost/192.168.2.106:2181, sessionid = 0x133ea613d340002, negotiated timeout = 40000
> >>> >>> 2011-11-28 19:03:21,793 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
> >>> >>> java.io.IOException: Call to localhost/192.168.2.106:9000 failed on local exception: java.io.EOFException
> >>> >>>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
> >>> >>>     at org.apache.hadoop.ipc.Client.call(Client.java:743)
> >>> >>>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
> >>> >>>     at $Proxy6.getProtocolVersion(Unknown Source)
> >>> >>>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
> >>> >>>     at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:113)
> >>> >>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:215)
> >>> >>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:177)
> >>> >>>     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
> >>> >>>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
> >>> >>>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
> >>> >>>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
> >>> >>>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
> >>> >>>     at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
> >>> >>>     at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:364)
> >>> >>>     at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:81)
> >>> >>>     at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:346)
> >>> >>>     at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:282)
> >>> >>>     at org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.run(HMasterCommandLine.java:193)
> >>> >>>     at java.lang.Thread.run(Thread.java:662)
> >>> >>> Caused by: java.io.EOFException
> >>> >>>     at java.io.DataInputStream.readInt(DataInputStream.java:375)
> >>> >>>     at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
> >>> >>>     at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
> >>> >>> 2011-11-28 19:03:21,795 INFO org.apache.hadoop.hbase.master.HMaster: Aborting
> >>> >>> 2011-11-28 19:03:21,795 DEBUG org.apache.hadoop.hbase.master.HMaster: Stopping service threads
> >>> >>> 2011-11-28 19:03:21,795 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 40697
> >>> >>> 2011-11-28 19:03:21,795 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 40697: exiting
> >>> >>> 2011-11-28 19:03:21,795 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 40697: exiting
> >>> >>> 2011-11-28 19:03:21,795 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 40697: exiting
> >>> >>> 2011-11-28 19:03:21,795 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 40697: exiting
> >>> >>> 2011-11-28 19:03:21,795 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 40697: exiting
> >>> >>> 2011-11-28 19:03:21,795 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 40697: exiting
> >>> >>> 2011-11-28 19:03:21,795 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server listener on 40697
> >>> >>> 2011-11-28 19:03:21,795 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 40697: exiting
> >>> >>> 2011-11-28 19:03:21,795 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 40697: exiting
> >>> >>> 2011-11-28 19:03:21,795 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 40697: exiting
> >>> >>> 2011-11-28 19:03:21,795 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on
40697: exiting > >>> >>> 2011-11-28 19:03:21,795 INFO org.apache.hadoop.ipc.HBaseServer: > >>> >>> Stopping IPC Server Responder > >>> >>> 2011-11-28 19:03:21,826 INFO > >>> >>> org.apache.zookeeper.server.PrepRequestProcessor: Processed session > >>> >>> termination for sessionid: 0x133ea613d340000 > >>> >>> 2011-11-28 19:03:21,855 INFO org.apache.zookeeper.ZooKeeper: > Session: > >>> >>> 0x133ea613d340000 closed > >>> >>> 2011-11-28 19:03:21,856 INFO org.apache.zookeeper.ClientCnxn: > >>> >>> EventThread shut down > >>> >>> 2011-11-28 19:03:21,856 INFO > org.apache.hadoop.hbase.master.HMaster: > >>> >>> HMaster main thread exiting > >>> >>> 2011-11-28 19:03:21,856 INFO > >>> >>> org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection > >>> >>> for client /192.168.2.106:58264 which had sessionid > 0x133ea613d340000 > >>> >>> 2011-11-28 19:03:21,857 INFO > >>> >>> org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection > >>> >>> for client /192.168.2.106:58265 which had sessionid > 0x133ea613d340001 > >>> >>> 2011-11-28 19:03:21,858 INFO org.apache.zookeeper.ClientCnxn: > Unable > >>> >>> to read additional data from server sessionid 0x133ea613d340001, > >>> >>> likely server has closed socket, closing socket connection and > >>> >>> attempting reconnect > >>> >>> 2011-11-28 19:03:21,858 INFO > >>> >>> org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection > >>> >>> for client /192.168.2.106:58266 which had sessionid > 0x133ea613d340002 > >>> >>> 2011-11-28 19:03:21,859 INFO org.apache.zookeeper.ClientCnxn: > Unable > >>> >>> to read additional data from server sessionid 0x133ea613d340002, > >>> >>> likely server has closed socket, closing socket connection and > >>> >>> attempting reconnect > >>> >>> 2011-11-28 19:03:21,859 INFO > >>> >>> org.apache.zookeeper.server.NIOServerCnxn: NIOServerCnxn factory > >>> >>> exited run method > >>> >>> 2011-11-28 19:03:21,859 INFO > >>> >>> org.apache.zookeeper.server.PrepRequestProcessor: 
> PrepRequestProcessor > >>> >>> exited loop! > >>> >>> 2011-11-28 19:03:21,859 INFO > >>> >>> org.apache.zookeeper.server.SyncRequestProcessor: > SyncRequestProcessor > >>> >>> exited! > >>> >>> 2011-11-28 19:03:21,859 INFO > >>> >>> org.apache.zookeeper.server.FinalRequestProcessor: shutdown of > request > >>> >>> processor complete > >>> >>> 2011-11-28 19:03:22,000 INFO > >>> >>> org.apache.zookeeper.server.SessionTrackerImpl: SessionTrackerImpl > >>> >>> exited loop! > >>> >>> 2011-11-28 19:03:23,149 INFO org.apache.zookeeper.ClientCnxn: > Opening > >>> >>> socket connection to server localhost/192.168.2.106:2181 > >>> >>> 2011-11-28 19:03:23,150 WARN org.apache.zookeeper.ClientCnxn: > Session > >>> >>> 0x133ea613d340001 for server null, unexpected error, closing socket > >>> >>> connection and attempting reconnect > >>> >>> java.net.ConnectException: Connection refused > >>> >>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > >>> >>> at > >>> >>> > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) > >>> >>> at > >>> >>> > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) > >>> >>> 2011-11-28 19:03:23,775 INFO org.apache.zookeeper.ClientCnxn: > Opening > >>> >>> socket connection to server localhost/192.168.2.106:2181 > >>> >>> 2011-11-28 19:03:23,776 WARN org.apache.zookeeper.ClientCnxn: > Session > >>> >>> 0x133ea613d340002 for server null, unexpected error, closing socket > >>> >>> connection and attempting reconnect > >>> >>> java.net.ConnectException: Connection refused > >>> >>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > >>> >>> at > >>> >>> > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) > >>> >>> at > >>> >>> > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) > >>> >>> 2011-11-28 19:03:24,462 INFO org.apache.zookeeper.ClientCnxn: > Opening > >>> >>> socket connection to server localhost/192.168.2.106:2181 > >>> >>> 2011-11-28 
19:03:24,463 WARN org.apache.zookeeper.ClientCnxn: > Session > >>> >>> 0x133ea613d340001 for server null, unexpected error, closing socket > >>> >>> connection and attempting reconnect > >>> >>> java.net.ConnectException: Connection refused > >>> >>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > >>> >>> at > >>> >>> > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) > >>> >>> at > >>> >>> > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) > >>> >>> 2011-11-28 19:03:25,369 INFO org.apache.zookeeper.ClientCnxn: > Opening > >>> >>> socket connection to server localhost/192.168.2.106:2181 > >>> >>> 2011-11-28 19:03:25,370 WARN org.apache.zookeeper.ClientCnxn: > Session > >>> >>> 0x133ea613d340002 for server null, unexpected error, closing socket > >>> >>> connection and attempting reconnect > >>> >>> java.net.ConnectException: Connection refused > >>> >>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > >>> >>> at > >>> >>> > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) > >>> >>> at > >>> >>> > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) > >>> >>> 2011-11-28 19:03:25,910 INFO org.apache.zookeeper.ClientCnxn: > Opening > >>> >>> socket connection to server localhost/192.168.2.106:2181 > >>> >>> 2011-11-28 19:03:25,911 WARN org.apache.zookeeper.ClientCnxn: > Session > >>> >>> 0x133ea613d340001 for server null, unexpected error, closing socket > >>> >>> connection and attempting reconnect > >>> >>> java.net.ConnectException: Connection refused > >>> >>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > >>> >>> at > >>> >>> > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) > >>> >>> at > >>> >>> > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) > >>> >>> 2011-11-28 19:03:27,036 INFO org.apache.zookeeper.ClientCnxn: > Opening > >>> >>> socket connection to server localhost/192.168.2.106:2181 > >>> 
>>> 2011-11-28 19:03:27,036 WARN org.apache.zookeeper.ClientCnxn: > Session > >>> >>> 0x133ea613d340001 for server null, unexpected error, closing socket > >>> >>> connection and attempting reconnect > >>> >>> java.net.ConnectException: Connection refused > >>> >>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > >>> >>> at > >>> >>> > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) > >>> >>> at > >>> >>> > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) > >>> >>> 2011-11-28 19:03:27,386 INFO org.apache.zookeeper.ClientCnxn: > Opening > >>> >>> socket connection to server localhost/192.168.2.106:2181 > >>> >>> 2011-11-28 19:03:27,386 WARN org.apache.zookeeper.ClientCnxn: > Session > >>> >>> 0x133ea613d340002 for server null, unexpected error, closing socket > >>> >>> connection and attempting reconnect > >>> >>> java.net.ConnectException: Connection refused > >>> >>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > >>> >>> at > >>> >>> > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) > >>> >>> at > >>> >>> > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) > >>> >>> 2011-11-28 19:03:28,212 INFO org.apache.zookeeper.ClientCnxn: > Opening > >>> >>> socket connection to server localhost/192.168.2.106:2181 > >>> >>> 2011-11-28 19:03:28,213 WARN org.apache.zookeeper.ClientCnxn: > Session > >>> >>> 0x133ea613d340001 for server null, unexpected error, closing socket > >>> >>> connection and attempting reconnect > >>> >>> java.net.ConnectException: Connection refused > >>> >>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > >>> >>> at > >>> >>> > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) > >>> >>> at > >>> >>> > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) > >>> >>> 2011-11-28 19:03:29,039 INFO org.apache.zookeeper.ClientCnxn: > Opening > >>> >>> socket connection to server 
localhost/192.168.2.106:2181 > >>> >>> 2011-11-28 19:03:29,039 WARN org.apache.zookeeper.ClientCnxn: > Session > >>> >>> 0x133ea613d340002 for server null, unexpected error, closing socket > >>> >>> connection and attempting reconnect > >>> >>> java.net.ConnectException: Connection refused > >>> >>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > >>> >>> at > >>> >>> > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) > >>> >>> at > >>> >>> > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) > >>> >>> 2011-11-28 19:03:29,780 INFO org.apache.zookeeper.ClientCnxn: > Opening > >>> >>> socket connection to server localhost/192.168.2.106:2181 > >>> >>> 2011-11-28 19:03:29,781 WARN org.apache.zookeeper.ClientCnxn: > Session > >>> >>> 0x133ea613d340001 for server null, unexpected error, closing socket > >>> >>> connection and attempting reconnect > >>> >>> java.net.ConnectException: Connection refused > >>> >>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > >>> >>> at > >>> >>> > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) > >>> >>> at > >>> >>> > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) > >>> >>> 2011-11-28 19:03:30,671 INFO org.apache.zookeeper.ClientCnxn: > Opening > >>> >>> socket connection to server localhost/192.168.2.106:2181 > >>> >>> 2011-11-28 19:03:30,672 WARN org.apache.zookeeper.ClientCnxn: > Session > >>> >>> 0x133ea613d340002 for server null, unexpected error, closing socket > >>> >>> connection and attempting reconnect > >>> >>> java.net.ConnectException: Connection refused > >>> >>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > >>> >>> at > >>> >>> > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) > >>> >>> at > >>> >>> > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) > >>> >>> 2011-11-28 19:03:30,991 INFO org.apache.zookeeper.ClientCnxn: > Opening > >>> >>> socket 
connection to server localhost/192.168.2.106:2181 > >>> >>> 2011-11-28 19:03:30,992 WARN org.apache.zookeeper.ClientCnxn: > Session > >>> >>> 0x133ea613d340001 for server null, unexpected error, closing socket > >>> >>> connection and attempting reconnect > >>> >>> java.net.ConnectException: Connection refused > >>> >>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > >>> >>> at > >>> >>> > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) > >>> >>> at > >>> >>> > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) > >>> >>> 2011-11-28 19:03:31,955 INFO org.apache.zookeeper.ClientCnxn: > Opening > >>> >>> socket connection to server localhost/192.168.2.106:2181 > >>> >>> 2011-11-28 19:03:31,956 WARN org.apache.zookeeper.ClientCnxn: > Session > >>> >>> 0x133ea613d340002 for server null, unexpected error, closing socket > >>> >>> connection and attempting reconnect > >>> >>> java.net.ConnectException: Connection refused > >>> >>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > >>> >>> at > >>> >>> > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) > >>> >>> at > >>> >>> > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) > >>> >>> 2011-11-28 19:03:32,604 INFO org.apache.zookeeper.ClientCnxn: > Opening > >>> >>> socket connection to server localhost/192.168.2.106:2181 > >>> >>> 2011-11-28 19:03:32,604 WARN org.apache.zookeeper.ClientCnxn: > Session > >>> >>> 0x133ea613d340001 for server null, unexpected error, closing socket > >>> >>> connection and attempting reconnect > >>> >>> java.net.ConnectException: Connection refused > >>> >>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > >>> >>> at > >>> >>> > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) > >>> >>> at > >>> >>> > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) > >>> >>> 2011-11-28 19:03:33,715 INFO org.apache.zookeeper.ClientCnxn: > Opening > 
>>> >>> socket connection to server localhost/192.168.2.106:2181 > >>> >>> 2011-11-28 19:03:33,716 WARN org.apache.zookeeper.ClientCnxn: > Session > >>> >>> 0x133ea613d340001 for server null, unexpected error, closing socket > >>> >>> connection and attempting reconnect > >>> >>> java.net.ConnectException: Connection refused > >>> >>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > >>> >>> at > >>> >>> > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) > >>> >>> at > >>> >>> > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) > >>> >>> 2011-11-28 19:03:33,765 INFO org.apache.zookeeper.ClientCnxn: > Opening > >>> >>> socket connection to server localhost/192.168.2.106:2181 > >>> >>> 2011-11-28 19:03:33,765 WARN org.apache.zookeeper.ClientCnxn: > Session > >>> >>> 0x133ea613d340002 for server null, unexpected error, closing socket > >>> >>> connection and attempting reconnect > >>> >>> java.net.ConnectException: Connection refused > >>> >>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > >>> >>> at > >>> >>> > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) > >>> >>> at > >>> >>> > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) > >>> >>> > >>> >>> Many thanks in advance > >>> >>> > >>> >>> Regards, > >>> >>> Mohammad Tariq > >>> >>> > >>> >> > >>> > > >>> > >> > > >
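For what it's worth, the FATAL entry above ("Call to localhost/192.168.2.106:9000 failed on local exception: java.io.EOFException" thrown from getProtocolVersion) means the master could open a socket to port 9000 but the reply was cut short, which commonly indicates either that the NameNode is not actually serving there or that the Hadoop client jar under $HBASE_HOME/lib does not match the running 0.20.205.0 cluster; the ZooKeeper "Connection refused" storm afterwards is just fallout from HBase shutting down its embedded ZooKeeper along with the master. Before chasing DNS, it may be worth confirming what is actually listening. A minimal sketch, assuming the host/ports from the log and a made-up helper `port_open` (not part of Hadoop or HBase):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds, else False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Ports taken from the pasted log: HDFS NameNode RPC (9000),
    # ZooKeeper client port (2181).
    for name, port in [("NameNode RPC", 9000), ("ZooKeeper", 2181)]:
        state = "listening" if port_open("localhost", port) else "not reachable"
        print(f"{name} on localhost:{port}: {state}")
```

If 9000 turns out to be reachable but the EOFException persists, that points toward the jar/RPC-version mismatch rather than a connectivity or DNS problem.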
