Daniel/All,

Not sure if this is the full solution, but I found an interesting post that might 
push things along:
http://coheigea.blogspot.com/2017/04/securing-apache-hadoop-distributed-file_21.html

In particular, two additional environment variables need to be set so that local 
standalone instances of HBase and Solr are started (each with its own ZooKeeper 
instance, I presume):
export MANAGE_LOCAL_HBASE=true
export MANAGE_LOCAL_SOLR=true

If you read through the atlas_start.py script, you can see that local mode is 
enabled when these are set to true.
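As a rough sketch of the sequence (untested on my end; it assumes you run from the unpacked Atlas distribution directory, and the port check is just my own quick sanity test, not part of Atlas):

```shell
# Sketch only: assumes the current directory is the unpacked Atlas distribution.
# Tell atlas_start.py to manage local standalone HBase and Solr instances:
export MANAGE_LOCAL_HBASE=true
export MANAGE_LOCAL_SOLR=true

# Optional sanity check before starting: the "Connection refused" errors below
# mean nothing was listening on ZooKeeper's default port (2181) at all.
if nc -z localhost 2181 2>/dev/null; then
  echo "something is already listening on localhost:2181"
else
  echo "port 2181 is free - the embedded HBase/ZooKeeper should be able to bind it"
fi

# Then start Atlas (commented out here since it needs the full distribution):
# bin/atlas_start.py
```

If the port check reports something already listening before you start, a stray HBase or ZooKeeper from an earlier attempt may be holding the port.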
I hope this helps.
Thanks,
-Anthony


> On Oct 5, 2017, at 3:18 PM, Daniel Lee <[email protected]> wrote:
> 
> As a follow on, I tried doing the following:
> 
> Uncommented
> export HBASE_MANAGES_ZK=true
> 
> in hbase/conf/hbase-env.sh
> 
> and set
> 
> 
> atlas.server.run.setup.on.start=true
> 
> in conf/atlas-application.properties
> 
> Thanks
> 
> Daniel Lee
> 
> 
> On Thu, Oct 5, 2017 at 2:52 PM, Daniel Lee <[email protected]> wrote:
> Hey guys,
> 
> Still running into problems starting up a fully functional standalone-mode 
> Atlas instance. This is all on macOS 10.12.6. After borking out with the 
> lock error running against BerkeleyDB, I followed the instructions at
> 
> http://atlas.apache.org/InstallationSteps.html
> 
> to try the embedded-hbase-solr profile.
> 
> mvn clean package -Pdist,embedded-hbase-solr
> works fine and even starts and runs a local instance during the testing 
> phases.
> 
> The line:
> Using the embedded-hbase-solr profile will configure Atlas so that an HBase 
> instance and a Solr instance will be started and stopped along with the Atlas 
> server by default.
> 
> implies I should be able to start the whole shebang with
> 
> bin/atlas_start.py
> 
> But I get some pretty ugly error messages in both application.log and *.out. I 
> won't post it all, but it should be easily reproducible. The relevant portions 
> of application.log are:
> 
> 2017-10-05 14:45:35,349 WARN  - [main-SendThread(localhost:2181):] ~ Session 
> 0x0 for server null, unexpected error, closing socket connection and 
> attempting reconnect (ClientCnxn$SendThread:1102)
> java.net.ConnectException: Connection refused
>         at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>         at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
>         at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
>         at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
> and
> 
> 2017-10-05 14:45:52,059 WARN  - [main:] ~ hconnection-0x5e9f73b0x0, 
> quorum=localhost:2181, baseZNode=/hbase Unable to set watcher on znode 
> (/hbase/hbaseid) (ZKUtil:544)
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss for /hbase/hbaseid
>         at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>         at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>         at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
> 
> Both of which point to a failure to reach ZooKeeper. Do I need to start up 
> my own ZooKeeper instance locally?
> 
> Thanks!
> 
> Daniel Lee
> 
> 
