No HTTP connection errors found. Here is the relevant excerpt from the
"job engine" node's kylin.log:
+------------------------------------------------------------------------------------------------------+
| Update Cube Info
+------------------------------------------------------------------------------------------------------+
[pool-7-thread-10]:[2015-05-06 17:36:21,904][DEBUG][org.apache.kylin.common.persistence.ResourceStore.putResource(ResourceStore.java:171)] - Saving resource /execute_output/f03d3c3c-68f4-4593-b01d-2778fcaf098b-15 (Store kylin_metadata@hbase)
[pool-7-thread-10]:[2015-05-06 17:36:21,905][INFO][org.apache.kylin.job.manager.ExecutableManager.updateJobOutput(ExecutableManager.java:222)] - job id:f03d3c3c-68f4-4593-b01d-2778fcaf098b-15 from READY to RUNNING
[pool-7-thread-10]:[2015-05-06 17:36:21,908][INFO][org.apache.kylin.cube.CubeManager.promoteNewlyBuiltSegments(CubeManager.java:500)] - Promoting cube CUBE[name=lbs_map_new_user_fact], new segments [Lorg.apache.kylin.cube.CubeSegment;@5d7bf62a
[pool-7-thread-10]:[2015-05-06 17:36:21,908][DEBUG][org.apache.kylin.common.persistence.ResourceStore.putResource(ResourceStore.java:171)] - Saving resource /cube/lbs_map_new_user_fact.json (Store kylin_metadata@hbase)
[pool-8-thread-1]:[2015-05-06 17:36:21,910][INFO][org.apache.kylin.common.restclient.Broadcaster$1.run(Broadcaster.java:71)] - new broadcast event:BroadcastEvent{type=cube, name=lbs_map_new_user_fact, action=update}
[pool-7-thread-10]:[2015-05-06 17:36:21,911][DEBUG][org.apache.kylin.common.persistence.ResourceStore.putResource(ResourceStore.java:171)] - Saving resource /execute_output/f03d3c3c-68f4-4593-b01d-2778fcaf098b-15 (Store kylin_metadata@hbase)
[pool-7-thread-10]:[2015-05-06 17:36:21,921][DEBUG][org.apache.kylin.common.persistence.ResourceStore.putResource(ResourceStore.java:171)] - Saving resource /execute_output/f03d3c3c-68f4-4593-b01d-2778fcaf098b-15 (Store kylin_metadata@hbase)
[http-bio-8081-exec-5]:[2015-05-06 17:36:21,922][INFO][org.apache.kylin.rest.controller.CacheController.wipeCache(CacheController.java:63)] - wipe cache type: CUBE event:UPDATE name:lbs_map_new_user_fact
[pool-7-thread-10]:[2015-05-06 17:36:21,922][INFO][org.apache.kylin.job.manager.ExecutableManager.updateJobOutput(ExecutableManager.java:222)] - job id:f03d3c3c-68f4-4593-b01d-2778fcaf098b-15 from RUNNING to SUCCEED
[pool-7-thread-10]:[2015-05-06 17:36:21,950][DEBUG][org.apache.kylin.common.persistence.ResourceStore.putResource(ResourceStore.java:171)] - Saving resource /execute_output/f03d3c3c-68f4-4593-b01d-2778fcaf098b (Store kylin_metadata@hbase)
[pool-7-thread-10]:[2015-05-06 17:36:21,971][DEBUG][org.apache.kylin.common.persistence.ResourceStore.putResource(ResourceStore.java:171)] - Saving resource /execute_output/f03d3c3c-68f4-4593-b01d-2778fcaf098b (Store kylin_metadata@hbase)
[pool-7-thread-10]:[2015-05-06 17:36:21,973][DEBUG][org.apache.kylin.common.persistence.ResourceStore.putResource(ResourceStore.java:171)] - Saving resource /execute_output/f03d3c3c-68f4-4593-b01d-2778fcaf098b (Store kylin_metadata@hbase)
[pool-7-thread-10]:[2015-05-06 17:36:21,974][INFO][org.apache.kylin.job.manager.ExecutableManager.updateJobOutput(ExecutableManager.java:222)] - job id:f03d3c3c-68f4-4593-b01d-2778fcaf098b from RUNNING to SUCCEED
[pool-6-thread-1]:[2015-05-06 17:37:21,884][INFO][org.apache.kylin.job.impl.threadpool.DefaultScheduler$FetcherRunner.run(DefaultScheduler.java:117)] - Job Fetcher: 0 running, 0 actual running, 0 ready, 92 others
[http-bio-8081-exec-4]:[2015-05-06 17:37:26,409][DEBUG][org.apache.kylin.rest.service.AdminService.getConfigAsString(AdminService.java:91)] - Get Kylin Runtime Config
[pool-6-thread-1]:[2015-05-06 17:38:21,885][INFO][org.apache.kylin.job.impl.threadpool.DefaultScheduler$FetcherRunner.run(DefaultScheduler.java:117)] - Job Fetcher: 0 running, 0 actual running, 0 ready, 92 others
[pool-6-thread-1]:[2015-05-06 17:39:21,885][INFO][org.apache.kylin.job.impl.threadpool.DefaultScheduler$FetcherRunner.run(DefaultScheduler.java:117)] - Job Fetcher: 0 running, 0 actual running, 0 ready, 92 others
[pool-6-thread-1]:[2015-05-06 17:40:21,893][INFO][org.apache.kylin.job.impl.threadpool.DefaultScheduler$FetcherRunner.run(DefaultScheduler.java:117)] - Job Fetcher: 0 running, 0 actual running, 0 ready, 92 others
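For what it's worth, the wipe-cache call that the CacheController line above records can also be replayed by hand against a query node to check node-to-node connectivity. This is only a sketch: the endpoint path is assumed from CacheController's request mapping in this Kylin version, and ADMIN:KYLIN are the default credentials, so adjust both for your setup.

```shell
# Hypothetical manual replay of the cache-sync call the job engine sends
# to each entry in kylin.rest.servers. The URL path is an assumption
# based on CacheController; verify it against your Kylin version.
HOST="localhost:8086"
CUBE="lbs_map_new_user_fact"
URL="http://$HOST/kylin/api/cache/cube/$CUBE/update"
echo "$URL"
# Uncomment to actually send the request:
# curl -s -X PUT --user ADMIN:KYLIN "$URL"
```

If the query node logs a "wipe cache" line like the one above after this call, HTTP connectivity between the nodes is fine and the problem lies elsewhere.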
2015-05-06 17:47 GMT+08:00 Shi, Shaofeng <[email protected]>:
> Okay… is there any http connection error in the “job engine” node’s
> kylin.log? If it failed to notify other nodes, there should be some errors;
>
> On 5/6/15, 5:44 PM, "Tao Wong" <[email protected]> wrote:
>
> >Yes.
> >
> >All three instances are configured with
> >*kylin.rest.servers=localhost:8080,localhost:8085,localhost:8086*
> >The query-mode nodes can get the job state changes but can't get the
> >cube's state change.
> >
> >
> >2015-05-06 17:30 GMT+08:00 Shi, Shaofeng <[email protected]>:
> >
> >> In this case it should be 8086; you should be able to access this Kylin
> >> instance with http://<host>:8086/kylin/ , right?
> >>
> >> On 5/6/15, 5:04 PM, "Tao Wong" <[email protected]> wrote:
> >>
> >> >Cube status on query mode instance does not change automatically.
> >> >
> >> >BTW:the port you mean ?
> >> >for example this is my server.xml .
> >> >which port should i set behind hostname? 9007? 8080? 9011?
> >> >
> >> >
> >> >server.xml
> >> >
> >> > <?xml version='1.0' encoding='utf-8'?>
> >> > <!--
> >> >   Licensed to the Apache Software Foundation (ASF) under one or more
> >> >   contributor license agreements.  See the NOTICE file distributed with
> >> >   this work for additional information regarding copyright ownership.
> >> >   The ASF licenses this file to You under the Apache License, Version 2.0
> >> >   (the "License"); you may not use this file except in compliance with
> >> >   the License.  You may obtain a copy of the License at
> >> >
> >> >       http://www.apache.org/licenses/LICENSE-2.0
> >> >
> >> >   Unless required by applicable law or agreed to in writing, software
> >> >   distributed under the License is distributed on an "AS IS" BASIS,
> >> >   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> >> >   See the License for the specific language governing permissions and
> >> >   limitations under the License.
> >> > -->
> >> > <!-- Note: A "Server" is not itself a "Container", so you may not
> >> >      define subcomponents such as "Valves" at this level.
> >> >      Documentation at /docs/config/server.html
> >> > -->
> >> > <Server *port="9007"* shutdown="SHUTDOWN">
> >> >   <!-- Security listener. Documentation at /docs/config/listeners.html
> >> >   <Listener className="org.apache.catalina.security.SecurityListener" />
> >> >   -->
> >> >   <!--APR library loader. Documentation at /docs/apr.html -->
> >> >   <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
> >> >   <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html -->
> >> >   <Listener className="org.apache.catalina.core.JasperListener" />
> >> >   <!-- Prevent memory leaks due to use of particular java/javax APIs-->
> >> >   <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
> >> >   <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
> >> >   <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />
> >> >
> >> >   <!-- Global JNDI resources
> >> >        Documentation at /docs/jndi-resources-howto.html
> >> >   -->
> >> >   <GlobalNamingResources>
> >> >     <!-- Editable user database that can also be used by
> >> >          UserDatabaseRealm to authenticate users
> >> >     -->
> >> >     <Resource name="UserDatabase" auth="Container"
> >> >               type="org.apache.catalina.UserDatabase"
> >> >               description="User database that can be updated and saved"
> >> >               factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
> >> >               pathname="conf/tomcat-users.xml" />
> >> >   </GlobalNamingResources>
> >> >
> >> >   <!-- A "Service" is a collection of one or more "Connectors" that share
> >> >        a single "Container" Note:  A "Service" is not itself a "Container",
> >> >        so you may not define subcomponents such as "Valves" at this level.
> >> >        Documentation at /docs/config/service.html
> >> >   -->
> >> >   <Service name="Catalina">
> >> >
> >> >     <!--The connectors can use a shared executor, you can define one or more named thread pools-->
> >> >     <!--
> >> >     <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
> >> >         maxThreads="150" minSpareThreads="4"/>
> >> >     -->
> >> >
> >> >     <!-- A "Connector" represents an endpoint by which requests are received
> >> >          and responses are returned. Documentation at :
> >> >          Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
> >> >          Java AJP  Connector: /docs/config/ajp.html
> >> >          APR (HTTP/AJP) Connector: /docs/apr.html
> >> >          Define a non-SSL HTTP/1.1 Connector on port 8080
> >> >     -->
> >> >     <Connector port="8086" protocol="HTTP/1.1"
> >> >                connectionTimeout="20000"
> >> >                redirectPort="9443"
> >> >                compression="on"
> >> >                compressionMinSize="2048"
> >> >                noCompressionUserAgents="gozilla,traviata"
> >> >                compressableMimeType="text/html,text/xml,text/javascript,application/javascript,application/json,text/css,text/plain"
> >> >     />
> >> >     <!-- A "Connector" using the shared thread pool-->
> >> >     <!--
> >> >     <Connector executor="tomcatThreadPool"
> >> >                *port="8080"* protocol="HTTP/1.1"
> >> >                connectionTimeout="20000"
> >> >                redirectPort="8443" />
> >> >     -->
> >> >     <!-- Define a SSL HTTP/1.1 Connector on port 8443
> >> >          This connector uses the BIO implementation that requires the JSSE
> >> >          style configuration. When using the APR/native implementation, the
> >> >          OpenSSL style configuration is required as described in the APR/native
> >> >          documentation -->
> >> >     <!--
> >> >     <Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol"
> >> >                maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
> >> >                clientAuth="false" sslProtocol="TLS" />
> >> >     -->
> >> >
> >> >     <!-- Define an AJP 1.3 Connector on port 8009 -->
> >> >     <Connector *port="9011"* protocol="AJP/1.3" redirectPort="9443" />
> >> >
> >> >     <!-- An Engine represents the entry point (within Catalina) that processes
> >> >          every request.  The Engine implementation for Tomcat stand alone
> >> >          analyzes the HTTP headers included with the request, and passes them
> >> >          on to the appropriate Host (virtual host).
> >> >          Documentation at /docs/config/engine.html -->
> >> >
> >> >     <!-- You should set jvmRoute to support load-balancing via AJP ie :
> >> >     <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
> >> >     -->
> >> >     <Engine name="Catalina" defaultHost="localhost">
> >> >
> >> >       <!--For clustering, please take a look at documentation at:
> >> >           /docs/cluster-howto.html  (simple how to)
> >> >           /docs/config/cluster.html (reference documentation) -->
> >> >       <!--
> >> >       <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
> >> >       -->
> >> >
> >> >       <!-- Use the LockOutRealm to prevent attempts to guess user passwords
> >> >            via a brute-force attack -->
> >> >       <Realm className="org.apache.catalina.realm.LockOutRealm">
> >> >         <!-- This Realm uses the UserDatabase configured in the global JNDI
> >> >              resources under the key "UserDatabase".  Any edits
> >> >              that are performed against this UserDatabase are immediately
> >> >              available for use by the Realm.  -->
> >> >         <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
> >> >                resourceName="UserDatabase"/>
> >> >       </Realm>
> >> >
> >> >       <Host name="localhost" appBase="webapps"
> >> >             unpackWARs="true" autoDeploy="true">
> >> >
> >> >         <!-- SingleSignOn valve, share authentication between web applications
> >> >              Documentation at: /docs/config/valve.html -->
> >> >         <!--
> >> >         <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
> >> >         -->
> >> >
> >> >         <!-- Access log processes all example.
> >> >              Documentation at: /docs/config/valve.html
> >> >              Note: The pattern used is equivalent to using pattern="common" -->
> >> >         <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
> >> >                prefix="localhost_access_log." suffix=".txt"
> >> >                pattern="%h %l %u %t &quot;%r&quot; %s %b" />
> >> >
> >> >       </Host>
> >> >     </Engine>
> >> >   </Service>
> >> > </Server>
> >> >
> >> >2015-05-06 16:45 GMT+08:00 Shi, Shaofeng <[email protected]>:
> >> >
> >> >> Hi Dong, I'm asking Tao whether the "QUERY" mode instances can get the
> >> >> "Cube" status change automatically (not job status);
> >> >>
> >> >> For example, I create a new cube, its initial status is "Disabled"; Then I
> >> >> trigger a build job, which will be executed in the "job engine" instance;
> >> >> When the job build is completed, the job engine will update this cube's
> >> >> status to "Active", and also send REST calls to the instances in
> >> >> "kylin.rest.servers"; The instances in "kylin.rest.servers" will flush
> >> >> their caches on receiving this REST call, so they will get the cube's
> >> >> latest status "Active"; All these things happen automatically;
> >> >>
> >> >> If you observed that the "query" mode nodes didn't get the cube's state
> >> >> change, while the "job engine" instance got it successfully, that indicates
> >> >> the configuration may be wrong for the other instances, because the "job
> >> >> engine" node also depends on this REST call to flush its own cache;
> >> >>
> >> >> If all your instances are on the same machine, you can use localhost as
> >> >> the hostname. I'm not sure this can solve your problem, but you can give
> >> >> it a try; if the problem is still there, welcome to open a JIRA for us.
> >> >>
> >> >> Thanks for the input to Kylin.
> >> >>
> >> >> On 5/6/15, 4:23 PM, "dong wang" <[email protected]> wrote:
> >> >>
> >> >> >yes, all the "QUERY" mode instances got the job status for the cube
> >> >> >correctly, and as checked the
> >> >> >kylin.rest.servers=all-machine:port1,query-1-machine:port2,query-3-machine:port3
> >> >> >should be correct as well
> >> >>
> >> >>
> >>
> >>
>
>
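To tie the thread together: in the server.xml above, 9007 is Tomcat's shutdown port and 9011 its AJP port; the value that belongs in kylin.rest.servers is each node's HTTP/1.1 Connector port (8086 for this instance, as Shaofeng confirmed). A sketch of the matching kylin.properties entry, using the localhost ports mentioned earlier in the thread (adjust hostnames and ports to your own nodes):

```properties
# kylin.properties -- identical on every instance. List every node's
# HTTP Connector port; never the shutdown (9007) or AJP (9011) ports.
kylin.rest.servers=localhost:8080,localhost:8085,localhost:8086
```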