Does anyone have a working basic test for HBase 0.94.0? I would like to have a reference for setups...
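For reference, the kind of skeleton I'm after looks something like this (a sketch only, untested here — it assumes the 0.94-era HBaseTestingUtility and JUnit 4 APIs; the table and family names are placeholders):

```java
import static org.junit.Assert.assertArrayEquals;

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class BasicMiniClusterTest {

  // Use the utility by composition rather than extending the deprecated
  // HBaseClusterTestCase; it spins up an in-process HDFS + ZooKeeper +
  // HBase "mini cluster". Its getConfiguration() is already an
  // HBaseConfiguration, so the ZooKeeper quorum is set correctly.
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUpCluster() throws Exception {
    TEST_UTIL.startMiniCluster(); // starts ZK, a master and a regionserver
  }

  @AfterClass
  public static void tearDownCluster() throws Exception {
    TEST_UTIL.shutdownMiniCluster();
  }

  @Test
  public void putThenGet() throws Exception {
    HTable table = TEST_UTIL.createTable(Bytes.toBytes("testtable"),
        Bytes.toBytes("f"));
    Put put = new Put(Bytes.toBytes("row1"));
    put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
    table.put(put);

    byte[] value = table.get(new Get(Bytes.toBytes("row1")))
        .getValue(Bytes.toBytes("f"), Bytes.toBytes("q"));
    assertArrayEquals(Bytes.toBytes("v"), value);
    table.close();
  }
}
```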
Thanks.

On Sat, Jun 2, 2012 at 9:42 PM, Andrew Purtell <[email protected]> wrote:

> It's highly likely you are not starting up ZooKeeper at all, then. The
> security warning is a red herring. I filed a ZooKeeper issue to change the
> log level from warn to info so it doesn't send folks like yourself down
> the wrong path.
>
> I'd advise cloning one of the working 0.94 tests that spins up a full
> cluster and going from there.
>
> - Andy
>
> On Jun 2, 2012, at 6:26 PM, Amit Sela <[email protected]> wrote:
>
>> I don't know about a local ZooKeeper running.
>> I'm trying to run a test that extends HBaseTestingUtility.
>> It worked fine with the old versions, when it extended HBaseClusterTestCase.
>> Since that is deprecated (same goes for HBaseTestCase) I adjusted the
>> test a little bit (things like conf, fs, dfs etc. became private and use
>> getters now) - maybe what I'm missing is here?
>>
>> On Sat, Jun 2, 2012 at 2:43 PM, Andrew Purtell <[email protected]> wrote:
>>
>>> Do you have a local ZooKeeper running? Does "telnet localhost 2181"
>>> connect to anything?
>>>
>>> Obviously we run 0.94 with no security setup with no problem. Hence
>>> looking for basic setup problems.
>>>
>>> - Andy
>>>
>>> On Jun 2, 2012, at 1:20 PM, Amit Sela <[email protected]> wrote:
>>>
>>>> I still get the same error.
>>>> This is the contents of the configuration as it is set right before
>>>> calling "new HBaseAdmin(getConfiguration())":
>>>>
>>>> hbase.auth.token.max.lifetime = 604800000
>>>> hbase.thrift.maxQueuedRequests = 1000
>>>> io.seqfile.compress.blocksize = 1000000
>>>> hbase.hstore.compactionThreshold = 3
>>>> hbase.coprocessor.abortonerror = false
>>>> hadoop.log.dir = /tmp
>>>> hbase.master.port = 60000
>>>> webinterface.private.actions = false
>>>> dfs.support.append = true
>>>> hbase.rpc.engine = org.apache.hadoop.hbase.ipc.WritableRpcEngine
>>>> hbase.auth.key.update.interval = 86400000
>>>> fs.s3.impl = org.apache.hadoop.fs.s3.S3FileSystem
>>>> hbase.zookeeper.leaderport = 3888
>>>> hadoop.native.lib = true
>>>> fs.checkpoint.edits.dir = ${fs.checkpoint.dir}
>>>> ipc.server.listen.queue.size = 128
>>>> hbase.regionserver.hlog.reader.impl = org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader
>>>> hbase.regionserver.info.bindAddress = 0.0.0.0
>>>> hadoop.security.authorization = false
>>>> hbase.mapreduce.hfileoutputformat.blocksize = 65536
>>>> hbase.regionserver.hlog.writer.impl = org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter
>>>> hbase.regionserver.logroll.errors.tolerated = 2
>>>> hbase.regionserver.nbreservationblocks = 4
>>>> hbase.tmp.dir = /tmp/hbase-${user.name}
>>>> hbase.zookeeper.dns.nameserver = default
>>>> hbase.hregion.memstore.mslab.enabled = true
>>>> io.file.buffer.size = 4096
>>>> hbase.zookeeper.property.dataDir = ${hbase.tmp.dir}/zookeeper
>>>> hbase.data.umask.enable = false
>>>> hadoop.logfile.size = 10000000
>>>> hbase.client.retries.number = 10
>>>> fs.webhdfs.impl = org.apache.hadoop.hdfs.web.WebHdfsFileSystem
>>>> ipc.client.kill.max = 10
>>>> hbase.regionserver.lease.period = 60000
>>>> hbase.defaults.for.version.skip = false
>>>> hbase.zookeeper.property.clientPort = 2181
>>>> zookeeper.znode.acl.parent = acl
>>>> hbase.regionserver.dns.nameserver = default
>>>> ipc.server.tcpnodelay = false
>>>> hbase.balancer.period = 300000
>>>> hbase.rest.readonly = false
>>>> hbase.master.info.bindAddress = 0.0.0.0
>>>> hbase.regionserver.global.memstore.upperLimit = 0.4
>>>> hadoop.logfile.count = 10
>>>> hbase.hregion.majorcompaction = 86400000
>>>> hbase.client.keyvalue.maxsize = 10485760
>>>> hadoop.security.uid.cache.secs = 14400
>>>> fs.ftp.impl = org.apache.hadoop.fs.ftp.FTPFileSystem
>>>> hbase.cluster.distributed = false
>>>> hbase.client.pause = 1000
>>>> hbase.hregion.preclose.flush.size = 5242880
>>>> fs.file.impl = org.apache.hadoop.fs.LocalFileSystem
>>>> hbase.regionserver.global.memstore.lowerLimit = 0.35
>>>> hbase.regionserver.handler.count = 10
>>>> ipc.client.connection.maxidletime = 10000
>>>> hbase.online.schema.update.enable = false
>>>> hbase.hash.type = murmur
>>>> hbase.hregion.max.filesize = 10737418240
>>>> hbase.hregion.memstore.block.multiplier = 2
>>>> hadoop.policy.file = hbase-policy.xml
>>>> hbase.hstore.blockingWaitTime = 90000
>>>> hbase.zookeeper.quorum = localhost
>>>> hbase.hregion.memstore.flush.size = 134217728
>>>> hbase.zookeeper.property.syncLimit = 5
>>>> fs.checkpoint.size = 67108864
>>>> io.skip.checksum.errors = false
>>>> fs.s3n.impl = org.apache.hadoop.fs.s3native.NativeS3FileSystem
>>>> hbase.zookeeper.dns.interface = default
>>>> fs.s3.maxRetries = 4
>>>> hbase.regionserver.logroll.period = 3600000
>>>> hbase.metrics.showTableName = true
>>>> hbase.offheapcache.percentage = 0
>>>> hbase.client.scanner.caching = 1
>>>> hfile.format.version = 2
>>>> hbase.regionserver.port = 60020
>>>> fs.default.name = file:///
>>>> ipc.client.idlethreshold = 4000
>>>> fs.hsftp.impl = org.apache.hadoop.hdfs.HsftpFileSystem
>>>> hadoop.tmp.dir = /tmp/hadoop-${user.name}
>>>> fs.checkpoint.dir = ${hadoop.tmp.dir}/dfs/namesecondary
>>>> fs.s3.block.size = 67108864
>>>> hbase.rs.cacheblocksonwrite = false
>>>> hbase.rootdir = file:///tmp/hbase-${user.name}/hbase
>>>> hbase.regionserver.class = org.apache.hadoop.hbase.ipc.HRegionInterface
>>>> hbase.regionserver.info.port = 60030
>>>> io.serializations = org.apache.hadoop.io.serializer.WritableSerialization
>>>> hbase.regionserver.msginterval = 3000
>>>> hbase.regionserver.dns.interface = default
>>>> hadoop.util.hash.type = murmur
>>>> io.seqfile.lazydecompress = true
>>>> hbase.rest.port = 8080
>>>> hbase.defaults.for.version = 0.94.0
>>>> hbase.zookeeper.peerport = 2888
>>>> zookeeper.znode.rootserver = root-region-server
>>>> io.mapfile.bloom.size = 1048576
>>>> io.storefile.bloom.block.size = 131072
>>>> fs.s3.buffer.dir = ${hadoop.tmp.dir}/s3
>>>> hbase.zookeeper.property.maxClientCnxns = 300
>>>> hbase.master.dns.interface = default
>>>> hbase.server.versionfile.writeattempts = 3
>>>> hbase.thrift.minWorkerThreads = 16
>>>> io.compression.codecs = org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec
>>>> topology.script.number.args = 100
>>>> fs.har.impl = org.apache.hadoop.fs.HarFileSystem
>>>> io.seqfile.sorter.recordlimit = 1000000
>>>> zookeeper.session.timeout = 1800000
>>>> fs.trash.interval = 0
>>>> local.cache.size = 10737418240
>>>> hadoop.security.authentication = simple
>>>> hadoop.security.group.mapping = org.apache.hadoop.security.ShellBasedUnixGroupsMapping
>>>> hbase.regions.slop = 0.2
>>>> hadoop.security.token.service.use_ip = true
>>>> ipc.client.connect.max.retries = 10
>>>> fs.ramfs.impl = org.apache.hadoop.fs.InMemoryFileSystem
>>>> hadoop.rpc.socket.factory.class.default = org.apache.hadoop.net.StandardSocketFactory
>>>> fs.kfs.impl = org.apache.hadoop.fs.kfs.KosmosFileSystem
>>>> hfile.block.index.cacheonwrite = false
>>>> hbase.master.dns.nameserver = default
>>>> hbase.bulkload.retries.number = 0
>>>> hbase.hstore.compaction.max = 10
>>>> fs.checkpoint.period = 3600
>>>> topology.node.switch.mapping.impl = org.apache.hadoop.net.ScriptBasedMapping
>>>> zookeeper.znode.parent = /hbase
>>>> mapred.output.dir = /tmp/hadoop-amits
>>>> hbase.server.thread.wakefrequency = 10000
>>>> hbase.master.info.port = 60010
>>>> hfile.index.block.max.size = 131072
>>>> hbase.regionserver.optionallogflushinterval = 1000
>>>> fs.hdfs.impl = org.apache.hadoop.hdfs.DistributedFileSystem
>>>> hbase.thrift.maxWorkerThreads = 1000
>>>> io.storefile.bloom.cacheonwrite = false
>>>> hbase.hstore.blockingStoreFiles = 7
>>>> hfile.block.cache.size = 0.25
>>>> io.mapfile.bloom.error.rate = 0.005
>>>> io.bytes.per.checksum = 512
>>>> hbase.zookeeper.property.initLimit = 10
>>>> fs.har.impl.disable.cache = true
>>>> ipc.client.tcpnodelay = false
>>>> fs.hftp.impl = org.apache.hadoop.hdfs.HftpFileSystem
>>>> hbase.data.umask = 000
>>>> hbase.master.logcleaner.plugins = org.apache.hadoop.hbase.master.TimeToLiveLogCleaner
>>>> hbase.master.logcleaner.ttl = 600000
>>>> hbase.regionserver.regionSplitLimit = 2147483647
>>>> fs.s3.sleepTimeSeconds = 10
>>>> hbase.client.write.buffer = 2097152
>>>> hbase.regionserver.info.port.auto = false
>>>>
>>>> Am I missing something? Because it looks OK to me.
>>>>
>>>> On Thu, May 31, 2012 at 11:05 PM, Andrew Purtell <[email protected]> wrote:
>>>>
>>>>> Great, now remove any security-related ZooKeeper properties that you
>>>>> added in hbase-site.xml. Only keep hbase.zookeeper.quorum.
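In other words, a minimal client-side hbase-site.xml for this kind of laptop test would contain nothing more than the quorum (a sketch; drop any JAAS/SASL/Kerberos-related properties entirely):

```xml
<configuration>
  <!-- The only property a local test client needs; everything
       security-related should be removed, per the advice above. -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
</configuration>
```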
>>>>> On May 31, 2012, at 9:52 PM, Amit Sela <[email protected]> wrote:
>>>>>
>>>>>> I did some debugging and the code does call HBaseConfiguration.create() -
>>>>>> since my test extends HBaseTestingUtility -
>>>>>> and conf.properties.get("hbase.zookeeper.quorum") returns "localhost".
>>>>>>
>>>>>> Is that properly set, or should it be something else? Keep in mind it's
>>>>>> a test running on my laptop, so it seems OK to me.
>>>>>>
>>>>>> On Thu, May 31, 2012 at 10:01 PM, Andrew Purtell <[email protected]> wrote:
>>>>>>
>>>>>>> I mean, of course, server null means that the hbase.zookeeper.quorum
>>>>>>> config property is unset. And the two most common reasons are:
>>>>>>>
>>>>>>> 1. Not defined in the site file
>>>>>>>
>>>>>>> 2. Configuration object not created with HBaseConfiguration.create()
>>>>>>>
>>>>>>> I hope this is clearer.
>>>>>>>
>>>>>>> On May 31, 2012, at 8:59 PM, Andrew Purtell <[email protected]> wrote:
>>>>>>>
>>>>>>>> Server null usually means you haven't configured hbase.zookeeper.quorum
>>>>>>>> in your client's hbase-site.xml file. And that is usually because you
>>>>>>>> are using a Configuration not created by HBaseConfiguration.create().
>>>>>>>>
>>>>>>>> If so, the JAAS warning is a red herring.
>>>>>>>>
>>>>>>>> On May 31, 2012, at 8:52 PM, Amit Sela <[email protected]> wrote:
>>>>>>>>
>>>>>>>>> I'm trying to run a test for HBase (something we wrote, internal) on
>>>>>>>>> my laptop - it runs perfectly with the old versions of Hadoop, HBase
>>>>>>>>> and ZooKeeper.
>>>>>>>>>
>>>>>>>>> After deploying the new versions and re-compiling our code, I ran the test.
>>>>>>>>> When I try to instantiate "new HBaseAdmin(getConfiguration())" - where
>>>>>>>>> the configuration is from HBaseTestingUtility - I get the following on
>>>>>>>>> the console output:
>>>>>>>>>
>>>>>>>>> 2012-05-31 21:36:51.728 [main-SendThread(localhost.localdomain:2181)] WARN
>>>>>>>>> org.apache.zookeeper.client.ZooKeeperSaslClient -
>>>>>>>>> SecurityException: java.lang.SecurityException: Unable to locate a login
>>>>>>>>> configuration occurred when trying to find JAAS configuration.
>>>>>>>>> 2012-05-31 21:36:51.741 [main-SendThread(localhost.localdomain:2181)] WARN
>>>>>>>>> org.apache.zookeeper.ClientCnxn - Session 0x0
>>>>>>>>> for server null, unexpected error, closing socket connection and
>>>>>>>>> attempting reconnect
>>>>>>>>> java.net.ConnectException: Connection refused
>>>>>>>>>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.6.0_31]
>>>>>>>>>   at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) ~[na:1.6.0_31]
>>>>>>>>>   at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286) ~[zookeeper-3.4.3.jar:3.4.3-1240972]
>>>>>>>>>   at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1035) ~[zookeeper-3.4.3.jar:3.4.3-1240972]
>>>>>>>>> 2012-05-31 21:36:51.852 [main] WARN
>>>>>>>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper - Possibly
>>>>>>>>> transient ZooKeeper exception:
>>>>>>>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>>>>>>>>> KeeperErrorCode = ConnectionLoss for /hbase/master
>>>>>>>>> 2012-05-31 21:36:52.847 [main-SendThread(localhost.localdomain:2181)] WARN
>>>>>>>>> org.apache.zookeeper.client.ZooKeeperSaslClient -
>>>>>>>>> SecurityException: java.lang.SecurityException: Unable to locate a login
>>>>>>>>> configuration occurred when trying to find JAAS configuration.
>>>>>>>>> 2012-05-31 21:36:52.848 [main-SendThread(localhost.localdomain:2181)] WARN
>>>>>>>>> org.apache.zookeeper.ClientCnxn - Session 0x0
>>>>>>>>> for server null, unexpected error, closing socket connection and
>>>>>>>>> attempting reconnect
>>>>>>>>> java.net.ConnectException: Connection refused
>>>>>>>>>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.6.0_31]
>>>>>>>>>   at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) ~[na:1.6.0_31]
>>>>>>>>>   at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286) ~[zookeeper-3.4.3.jar:3.4.3-1240972]
>>>>>>>>>   at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1035) ~[zookeeper-3.4.3.jar:3.4.3-1240972]
>>>>>>>>> 2012-05-31 21:36:53.949 [main-SendThread(localhost.localdomain:2181)] WARN
>>>>>>>>> org.apache.zookeeper.client.ZooKeeperSaslClient -
>>>>>>>>> SecurityException: java.lang.SecurityException: Unable to locate a login
>>>>>>>>> configuration occurred when trying to find JAAS configuration.
>>>>>>>>> 2012-05-31 21:36:53.951 [main-SendThread(localhost.localdomain:2181)] WARN
>>>>>>>>> org.apache.zookeeper.ClientCnxn - Session 0x0
>>>>>>>>> for server null, unexpected error, closing socket connection and
>>>>>>>>> attempting reconnect
>>>>>>>>> java.net.ConnectException: Connection refused
>>>>>>>>>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.6.0_31]
>>>>>>>>>   at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) ~[na:1.6.0_31]
>>>>>>>>>   at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286) ~[zookeeper-3.4.3.jar:3.4.3-1240972]
>>>>>>>>>   at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1035) ~[zookeeper-3.4.3.jar:3.4.3-1240972]
>>>>>>>>> 2012-05-31 21:36:54.052 [main] WARN
>>>>>>>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper - Possibly
>>>>>>>>> transient ZooKeeper exception:
>>>>>>>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>>>>>>>>> KeeperErrorCode = ConnectionLoss for /hbase/master
>>>>>>>>> 2012-05-31 21:36:55.052 [main-SendThread(localhost.localdomain:2181)] WARN
>>>>>>>>> org.apache.zookeeper.client.ZooKeeperSaslClient -
>>>>>>>>> SecurityException: java.lang.SecurityException: Unable to locate a login
>>>>>>>>> configuration occurred when trying to find JAAS configuration.
>>>>>>>>> 2012-05-31 21:36:55.053 [main-SendThread(localhost.localdomain:2181)] WARN
>>>>>>>>> org.apache.zookeeper.ClientCnxn - Session 0x0
>>>>>>>>> for server null, unexpected error, closing socket connection and
>>>>>>>>> attempting reconnect
>>>>>>>>> java.net.ConnectException: Connection refused
>>>>>>>>>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.6.0_31]
>>>>>>>>>   at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) ~[na:1.6.0_31]
>>>>>>>>>   at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286) ~[zookeeper-3.4.3.jar:3.4.3-1240972]
>>>>>>>>>   at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1035) ~[zookeeper-3.4.3.jar:3.4.3-1240972]
>>>>>>>>> 2012-05-31 21:36:56.155 [main-SendThread(localhost.localdomain:2181)] WARN
>>>>>>>>> org.apache.zookeeper.client.ZooKeeperSaslClient -
>>>>>>>>> SecurityException: java.lang.SecurityException: Unable to locate a login
>>>>>>>>> configuration occurred when trying to find JAAS configuration.
>>>>>>>>> 2012-05-31 21:36:56.156 [main-SendThread(localhost.localdomain:2181)] WARN
>>>>>>>>> org.apache.zookeeper.ClientCnxn - Session 0x0
>>>>>>>>> for server null, unexpected error, closing socket connection and
>>>>>>>>> attempting reconnect
>>>>>>>>> java.net.ConnectException: Connection refused
>>>>>>>>>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.6.0_31]
>>>>>>>>>   at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) ~[na:1.6.0_31]
>>>>>>>>>   at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286) ~[zookeeper-3.4.3.jar:3.4.3-1240972]
>>>>>>>>>   at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1035) ~[zookeeper-3.4.3.
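For anyone landing on this thread later: Andy's "telnet localhost 2181" suggestion can be scripted. A small pure-JDK probe (a sketch; the host and port defaults are the values from the config dump above, hbase.zookeeper.quorum=localhost and hbase.zookeeper.property.clientPort=2181):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Programmatic equivalent of "telnet localhost 2181": returns true
// iff something accepts a TCP connection on host:port.
public class ZkPortCheck {

  static boolean isListening(String host, int port, int timeoutMillis) {
    Socket socket = new Socket();
    try {
      socket.connect(new InetSocketAddress(host, port), timeoutMillis);
      return true;
    } catch (IOException e) {
      return false; // connection refused / timed out: ZooKeeper is not up
    } finally {
      try { socket.close(); } catch (IOException ignored) { }
    }
  }

  public static void main(String[] args) {
    String host = args.length > 0 ? args[0] : "localhost";
    int port = args.length > 1 ? Integer.parseInt(args[1]) : 2181;
    if (isListening(host, port, 2000)) {
      System.out.println("ZooKeeper is listening on " + host + ":" + port);
    } else {
      System.out.println("Nothing listening on " + host + ":" + port
          + " - the JAAS warning is a red herring; start ZooKeeper first.");
    }
  }
}
```

The companion check on the configuration side is the one Andy spelled out: the client Configuration must come from HBaseConfiguration.create() (or from HBaseTestingUtility.getConfiguration(), which wraps it), because a plain new Configuration() never loads hbase-default.xml/hbase-site.xml and leaves hbase.zookeeper.quorum null - exactly the "server null" in the ClientCnxn log.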
