Hi, Santosh
    The log shows that you cannot connect to ZooKeeper, whose default port
is 2181.
    Can you please check that HBase has been started correctly, and that the
hbase-site.xml on your HBase classpath has the correct ZooKeeper
configuration for your VM.
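For reference, a minimal hbase-site.xml sketch of the two ZooKeeper settings to check; the "localhost" quorum value below is an assumption for a single-node setup, so substitute your VM's actual ZooKeeper host:

```xml
<configuration>
  <!-- Comma-separated hosts of the ZooKeeper quorum;
       "localhost" is a placeholder for a single-node setup -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <!-- Port that ZooKeeper clients connect to; 2181 is the default -->
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```

As a quick sanity check that ZooKeeper is actually listening, `echo ruok | nc localhost 2181` should print `imok` when the server is healthy.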

Best Regards
Zhou QianHao





On 2/13/15, 5:08 PM, "Santoshakhilesh" <[email protected]> wrote:

>Hi Zhou,
>    The exceptions I see are below, so it seems the Kylin instance is not
>deployed correctly:
>
>WARNING: Failed to scan [file:/contrib/capacity-scheduler/*.jar] from
>classloader hierarchy
>java.io.FileNotFoundException: /contrib/capacity-scheduler/*.jar (No such
>file or directory)
> at java.util.zip.ZipFile.open(Native Method)
> at java.util.zip.ZipFile.<init>(ZipFile.java:215)
> at java.util.zip.ZipFile.<init>(ZipFile.java:145)
> at java.util.jar.JarFile.<init>(JarFile.java:154)
> at java.util.jar.JarFile.<init>(JarFile.java:91)
>
>Feb 13, 2015 10:27:32 PM org.apache.catalina.startup.ContextConfig
>processResourceJARs
>SEVERE: Failed to processes JAR found at URL
>[jar:file:/contrib/capacity-scheduler/*.jar!/] for static resources to be
>included in context with name
>[jar:file:/contrib/capacity-scheduler/*.jar!/]
>
>resource loaded through InputStream
>[localhost-startStop-1]:[2015-02-13
>22:27:34,475][WARN][org.apache.kylin.common.KylinConfig.getKylinProperties
>(KylinConfig.java:527)] - KYLIN_CONF_HOME has not been set
>2015-02-13 22:27:34,919 INFO  [localhost-startStop-1]
>zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2b5a652e
>connecting to ZooKeeper ensemble=localhost:2181
>
>2015-02-13 22:27:34,958 INFO
>[localhost-startStop-1-SendThread(localhost:2181)] zookeeper.ClientCnxn:
>Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will
>not attempt to authenticate using SASL (unknown error)
>2015-02-13 22:27:34,965 WARN
>[localhost-startStop-1-SendThread(localhost:2181)] zookeeper.ClientCnxn:
>Session 0x0 for server null, unexpected error, closing socket connection
>and attempting reconnect
>java.net.ConnectException: Connection refused
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
> at 
>org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.j
>ava:361)
> at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
>2015-02-13 22:27:35,070 INFO
>[localhost-startStop-1-SendThread(localhost:2181)] zookeeper.ClientCnxn:
>Opening socket connection to server localhost/127.0.0.1:2181. Will not
>attempt to authenticate using SASL (unknown error)
>
>2015-02-13 22:27:51,920 WARN  [localhost-startStop-1]
>zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper,
>quorum=localhost:2181,
>exception=org.apache.zookeeper.KeeperException$ConnectionLossException:
>KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
>2015-02-13 22:27:51,921 ERROR [localhost-startStop-1]
>zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 4 attempts
>2015-02-13 22:27:51,921 WARN  [localhost-startStop-1] zookeeper.ZKUtil:
>hconnection-0x2b5a652e, quorum=localhost:2181, baseZNode=/hbase Unable to
>set watcher on znode (/hbase/hbaseid)
>org.apache.zookeeper.KeeperException$ConnectionLossException:
>KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
> at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
> at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
> at 
>org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZ
>ooKeeper.java:222)
> at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:479)
> at 
>org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKCluster
>Id.java:65)
> at 
>org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperReg
>istry.java:83)
> at 
>org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementatio
>n.retrieveClusterId(HConnectionManager.java:897)
> at 
>org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementatio
>n.<init>(HConnectionManager.java:694)
>
>
>Regards,
>Santosh Akhilesh
>Bangalore R&D
>HUAWEI TECHNOLOGIES CO.,LTD.
>
>www.huawei.com
>--------------------------------------------------------------------------
>-----------------------------------------------------------
>This e-mail and its attachments contain confidential information from
>HUAWEI, which
>is intended only for the person or entity whose address is listed above.
>Any use of the
>information contained herein in any way (including, but not limited to,
>total or partial
>disclosure, reproduction, or dissemination) by persons other than the
>intended
>recipient(s) is prohibited. If you receive this e-mail in error, please
>notify the sender by
>phone or email immediately and delete it!
>
>________________________________________
>From: Zhou, Qianhao [[email protected]]
>Sent: Friday, February 13, 2015 2:07 PM
>To: [email protected]
>Subject: Re: About building kylin
>
>Hi, Santosh
>Please check tomcat/logs/kylin.log to see if there is any exception
>
>Best Regards
>Zhou QianHao
>
>
>
>
>
>On 2/13/15, 4:24 PM, "Santoshakhilesh" <[email protected]>
>wrote:
>
>>Hi Zhou,
>>      I am able to access http://127.0.0.1:7070/kylin using Firefox.
>>      All I get is a blank page; it's not a 404, so it means I could connect
>>to the web server successfully but there is some issue in rendering. I was
>>not asked for a user name and password.
>>       When I run the start-kylin shell file, does the following prompt
>>really mean everything went OK, or should I check some log files to know
>>the real status?
>>A new Kylin instance is started by root, stop it using "stop-kylin.sh"
>>Please visit http://<your_sandbox_ip>:7070/kylin to play with the cubes!
>>
>>Regards,
>>Santosh Akhilesh
>>Bangalore R&D
>>HUAWEI TECHNOLOGIES CO.,LTD.
>>
>>www.huawei.com
>>
>>________________________________________
>>From: Zhou, Qianhao [[email protected]]
>>Sent: Friday, February 13, 2015 1:04 PM
>>To: [email protected]
>>Cc: Kulbhushan Rana
>>Subject: Re: About building kylin
>>
>>Hi, Santosh
>>    You will see a kylin.war inside ${KYLIN_HOME}/tomcat/webapps. That is
>>the server war package. The tomcat version is 7.0.59.
>>    If you want to bind to 0.0.0.0:7070, please refer to the Tomcat
>>documentation; there should be a configuration option that lets you do
>>that.
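For reference, the bind address is controlled by the `address` attribute on the HTTP Connector in Tomcat's conf/server.xml. A sketch, with the surrounding attribute values being assumptions to be checked against your actual server.xml:

```xml
<!-- conf/server.xml: bind the HTTP connector to all network interfaces -->
<Connector port="7070"
           protocol="HTTP/1.1"
           address="0.0.0.0"
           connectionTimeout="20000" />
```

Note that when no `address` attribute is present, Tomcat binds to all interfaces by default, so if the page is unreachable from outside the machine a firewall rule is also worth checking.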
>>
>>Best Regards
>>Zhou QianHao
>>
>>
>>
>>
>>
>>On 2/13/15, 1:46 PM, "Santoshakhilesh" <[email protected]>
>>wrote:
>>
>>>Hi Zhou,
>>>     Due to some practical constraints, I have to set up the cluster
>>>manually.
>>>     I have downloaded all the dependencies like Hadoop, HBase, Hive, and
>>>Tomcat, and after check-env.sh passed I ran start-kylin.sh.
>>>     Following is the console log after running ./start-kylin.sh.
>>>     Does this mean everything is OK, or should I check some other logs?
>>>Currently I am unable to log in to http://<your_sandbox_ip>:7070/kylin
>>>from outside the machine.
>>>    Is it possible to bind it to 0.0.0.0:7070 so that I can connect from
>>>outside the lab network? Where can I configure this?
>>>
>>> ./start-kylin.sh
>>>Checking KYLIN_HOME...
>>>KYLIN_HOME is set to /opt/kylinb
>>>Checking hbase...
>>>hbase check passed
>>>Checking hive...
>>>hive check passed
>>>Checking hadoop...
>>>hadoop check passed
>>>Logging initialized using configuration in
>>>jar:file:/opt/hive/apache-hive-0.14.0-bin/lib/hive-common-0.14.0.jar!/hi
>>>v
>>>e
>>>-log4j.properties
>>>SLF4J: Class path contains multiple SLF4J bindings.
>>>SLF4J: Found binding in
>>>[jar:file:/opt/hadoop/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12
>>>-
>>>1
>>>.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>SLF4J: Found binding in
>>>[jar:file:/opt/hive/apache-hive-0.14.0-bin/lib/hive-jdbc-0.14.0-standalo
>>>n
>>>e
>>>.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>>explanation.
>>>SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>>hive dependency:
>>>/usr/hdp/current/hive-client/conf/:/hive/lib/*:/hive-hcatalog/share/hcat
>>>a
>>>l
>>>og/*
>>>A new Kylin instance is started by root, stop it using "stop-kylin.sh"
>>>Please visit http://<your_sandbox_ip>:7070/kylin to play with the cubes!
>>>(Useranme: ADMIN, Password: KYLIN)
>>>You can check the log at ./../tomcat/logs/kylin.log
>>>
>>>Regards,
>>>Santosh Akhilesh
>>>Bangalore R&D
>>>HUAWEI TECHNOLOGIES CO.,LTD.
>>>
>>>www.huawei.com
>>>
>>>________________________________________
>>>From: Zhou, Qianhao [[email protected]]
>>>Sent: Thursday, February 12, 2015 2:59 PM
>>>To: [email protected]
>>>Subject: Re: About building kylin
>>>
>>>Hi Santosh
>>>   Surely you can run Kylin on a single node.
>>>   Currently Kylin depends on:
>>>   1. Hive (where the source data comes from)
>>>   2. HBase (where metadata and pre-computed data are stored)
>>>   However, once you have the Hadoop environment ready, it does not
>>>matter how many VMs are running in the cluster.
>>>   As for your circumstances, a sandbox is strongly recommended, as it
>>>will save you a lot of time setting up the environment. Hortonworks or
>>>Cloudera would be a good choice.
>>>
>>>Best Regards
>>>Zhou QianHao
