I guess the problem is in the KafkaSource class, where enrichSourcePartitionBeforeBuild() 
fetches the partition offsets; that is where it errors out. Do we know how we 
can test this to find out why both the start and end offsets come back as 0?
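
Something like the following standalone check is what I have in mind: a sketch 
using the same 0.10 Java client, which only mirrors the 
beginningOffsets()/endOffsets() calls that KafkaSource makes (the class name is 
just for illustration).

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OffsetCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Same broker the Kylin consumer config shows (bootstrap.servers = [localhost:9092])
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "offset-check");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            List<TopicPartition> partitions = new ArrayList<>();
            for (PartitionInfo p : consumer.partitionsFor("kylin_demo")) {
                partitions.add(new TopicPartition(p.topic(), p.partition()));
            }
            // Both calls ask the broker directly, without consuming any messages
            System.out.println("start offsets: " + consumer.beginningOffsets(partitions));
            System.out.println("end offsets:   " + consumer.endOffsets(partitions));
        }
    }
}

If this also prints 0 for the end offset, the broker that Kylin talks to really 
has no data on the topic, and the problem is upstream of Kylin.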

Regards,
Manoj

From: Kumar, Manoj H
Sent: Thursday, October 12, 2017 3:35 PM
To: '[email protected]'
Subject: RE: Kafka Streaming data - Error while building the Cube

Yes, it's there. I can see the messages.

Regards,
Manoj

From: Billy Liu [mailto:[email protected]]
Sent: Thursday, October 12, 2017 3:11 PM
To: user
Subject: Re: Kafka Streaming data - Error while building the Cube

The STREAMING_SALES_TABLE table reads messages from the Kafka topic kylin_demo, 
but got 0 messages.

Could you check whether the topic has incoming messages:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic kylin_demo
(note: --zookeeper selects the old consumer and should not be combined with 
--bootstrap-server)
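
If the console consumer shows nothing, you could also push one test record and 
then retry the build. Below is a sketch with the 0.10 Java producer; the 
order_time field matches the tsColName in your KafkaConfig, and the amount 
field is only a placeholder.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SendTestMessage {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // TimedJsonStreamParser is configured with tsColName=order_time,
            // so the JSON must carry that field as a timestamp
            String msg = "{\"order_time\": " + System.currentTimeMillis() + ", \"amount\": 1}";
            // get() blocks until the broker acknowledges the write
            producer.send(new ProducerRecord<>("kylin_demo", msg)).get();
        }
    }
}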

2017-10-12 17:19 GMT+08:00 Kumar, Manoj H <[email protected]>:
Please find below the consumer config information from the Kylin log file.

2017-10-11 02:11:43,787 INFO  [pool-8-thread-1] threadpool.DefaultScheduler:123 
: Job Fetcher: 0 should running, 0 actual running, 0 stopped, 0 ready, 1 
already succeed, 0 error, 0 discarded, 0 others
2017-10-11 02:12:13,783 INFO  [pool-8-thread-1] threadpool.DefaultScheduler:123 
: Job Fetcher: 0 should running, 0 actual running, 0 stopped, 0 ready, 1 
already succeed, 0 error, 0 discarded, 0 others
2017-10-11 02:12:40,734 INFO  [http-bio-7070-exec-3] 
streaming.StreamingManager:222 : Reloading Streaming Metadata from folder 
kylin_metadata(key='/streaming')@kylin_metadata@hbase
2017-10-11 02:12:40,760 DEBUG [http-bio-7070-exec-3] 
streaming.StreamingManager:247 : Loaded 1 StreamingConfig(s)
2017-10-11 02:12:43,789 INFO  [pool-8-thread-1] threadpool.DefaultScheduler:123 
: Job Fetcher: 0 should running, 0 actual running, 0 stopped, 0 ready, 1 
already succeed, 0 error, 0 discarded, 0 others
2017-10-11 02:13:13,788 INFO  [pool-8-thread-1] threadpool.DefaultScheduler:123 
: Job Fetcher: 0 should running, 0 actual running, 0 stopped, 0 ready, 1 
already succeed, 0 error, 0 discarded, 0 others
2017-10-11 02:13:43,785 INFO  [pool-8-thread-1] threadpool.DefaultScheduler:123 
: Job Fetcher: 0 should running, 0 actual running, 0 stopped, 0 ready, 1 
already succeed, 0 error, 0 discarded, 0 others
2017-10-11 02:14:13,789 INFO  [pool-8-thread-1] threadpool.DefaultScheduler:123 
: Job Fetcher: 0 should running, 0 actual running, 0 stopped, 0 ready, 1 
already succeed, 0 error, 0 discarded, 0 others
2017-10-11 02:14:43,796 INFO  [pool-8-thread-1] threadpool.DefaultScheduler:123 
: Job Fetcher: 0 should running, 0 actual running, 0 stopped, 0 ready, 1 
already succeed, 0 error, 0 discarded, 0 others
2017-10-11 02:15:03,911 DEBUG [http-bio-7070-exec-1] 
controller.StreamingController:255 : Saving StreamingConfig 
{"uuid":"8613b0e1-40ac-438c-bdf5-72be4d91c230","last_modified":1507705685859,"version":"2.1.0","name":"DEFAULT.STREAMING_SALES_TABLE","type":"kafka"}
2017-10-11 02:15:03,913 DEBUG [http-bio-7070-exec-1] 
controller.StreamingController:273 : Saving KafkaConfig 
{"uuid":"87dc6ab5-5141-4bd8-8e00-c16ec86dce41","last_modified":1507705685916,"version":"2.1.0","name":"DEFAULT.STREAMING_SALES_TABLE","clusters":[{"brokers":[{"id":"1","host":"sandbox","port":"9092"}]}],"topic":"kylin_demo","timeout":60000,"parserName":"org.apache.kylin.source.kafka.TimedJsonStreamParser","parserTimeStampField":null,"margin":0,"parserProperties":"tsColName=order_time"}
2017-10-11 02:15:03,963 DEBUG [pool-7-thread-1] cachesync.Broadcaster:132 : 
Servers in the cluster: [localhost:7070]
2017-10-11 02:15:04,000 DEBUG [pool-7-thread-1] cachesync.Broadcaster:139 : 
Announcing new broadcast event: BroadcastEvent{entity=streaming, event=update, 
cacheKey=DEFAULT.STREAMING_SALES_TABLE}
2017-10-11 02:15:04,009 DEBUG [pool-7-thread-1] cachesync.Broadcaster:132 : 
Servers in the cluster: [localhost:7070]
2017-10-11 02:15:04,009 DEBUG [pool-7-thread-1] cachesync.Broadcaster:139 : 
Announcing new broadcast event: BroadcastEvent{entity=kafka, event=update, 
cacheKey=DEFAULT.STREAMING_SALES_TABLE}
2017-10-11 02:15:04,164 DEBUG [http-bio-7070-exec-9] cachesync.Broadcaster:236 
: Done broadcasting metadata change: entity=streaming, event=UPDATE, 
cacheKey=DEFAULT.STREAMING_SALES_TABLE
2017-10-11 02:15:04,192 DEBUG [http-bio-7070-exec-10] cachesync.Broadcaster:236 
: Done broadcasting metadata change: entity=kafka, event=UPDATE, 
cacheKey=DEFAULT.STREAMING_SALES_TABLE
2017-10-11 02:15:13,789 INFO  [pool-8-thread-1] threadpool.DefaultScheduler:123 
: Job Fetcher: 0 should running, 0 actual running, 0 stopped, 0 ready, 1 
already succeed, 0 error, 0 discarded, 0 others
2017-10-11 02:15:23,780 DEBUG [http-bio-7070-exec-7] kafka.KafkaSource:83 : 
Last segment doesn't exist, and didn't initiate the start offset, will seek 
from topic's earliest offset.

2017-10-11 20:50:42,558 INFO  [http-bio-7070-exec-8] utils.AppInfoParser:83 : 
Kafka version : 0.10.2-kafka-2.2.0
2017-10-11 20:50:42,563 INFO  [http-bio-7070-exec-8] utils.AppInfoParser:84 : 
Kafka commitId : unknown
2017-10-11 20:50:42,570 DEBUG [http-bio-7070-exec-8] kafka.KafkaSource:105 : 
Seek end offsets from topic
2017-10-11 20:50:42,570 INFO  [http-bio-7070-exec-8] 
consumer.ConsumerConfig:196 : ConsumerConfig values:
        auto.commit.interval.ms = 5000
        auto.offset.reset = latest
        bootstrap.servers = [localhost:9092]
        check.crcs = true
        client.id =
        connections.max.idle.ms = 540000
        enable.auto.commit = false
        exclude.internal.topics = true
        fetch.max.bytes = 52428800
        fetch.max.wait.ms = 500
        fetch.min.bytes = 1
        group.id = streaming_cube
        heartbeat.interval.ms = 3000
        interceptor.classes = null
        internal.leave.group.on.close = true
        key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
        max.partition.fetch.bytes = 1048576
        max.poll.interval.ms = 300000
        max.poll.records = 500
        metadata.max.age.ms = 300000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
        receive.buffer.bytes = 65536
        reconnect.backoff.ms = 50
        request.timeout.ms = 305000
        retry.backoff.ms = 100
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.mechanism = GSSAPI
        security.protocol = PLAINTEXT
        send.buffer.bytes = 131072
        session.timeout.ms = 10000
        ssl.cipher.suites = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.endpoint.identification.algorithm = null
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.protocol = TLS
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
        value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer

2017-10-11 20:50:42,573 INFO  [http-bio-7070-exec-8] utils.AppInfoParser:83 : 
Kafka version : 0.10.2-kafka-2.2.0
2017-10-11 20:50:42,573 INFO  [http-bio-7070-exec-8] utils.AppInfoParser:84 : 
Kafka commitId : unknown
2017-10-11 20:50:42,586 DEBUG [http-bio-7070-exec-8] kafka.KafkaSource:107 : 
The end offsets are {0=0}
2017-10-11 20:50:42,588 ERROR [http-bio-7070-exec-8] 
controller.CubeController:305 : No new message comes, startOffset = endOffset:0
java.lang.IllegalArgumentException: No new message comes, startOffset = endOffset:0
        at org.apache.kylin.source.kafka.KafkaSource.enrichSourcePartitionBeforeBuild(KafkaSource.java:134)
        at org.apache.kylin.rest.service.JobService.submitJobInternal(JobService.java:236)
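
From the trace, the failing guard at KafkaSource.java:134 presumably has 
roughly this shape (my paraphrase, not the actual Kylin source):

import java.util.Map;

class OffsetGuard {
    // Paraphrase of the check behind the error above, not the real Kylin code:
    // the build is rejected when the broker reports nothing between start and end.
    static void check(Map<Integer, Long> startOffsets, Map<Integer, Long> endOffsets) {
        long totalStart = 0, totalEnd = 0;
        for (Map.Entry<Integer, Long> e : endOffsets.entrySet()) {
            totalStart += startOffsets.get(e.getKey());
            totalEnd += e.getValue();
        }
        // With end offsets {0=0} and a fresh cube, both totals are 0, so this throws
        if (totalStart >= totalEnd) {
            throw new IllegalArgumentException(
                    "No new message comes, startOffset = endOffset:" + totalStart);
        }
    }
}

So an end-offset map of {0=0} means the broker Kylin queried reports partition 
0 as empty: either no message ever reached that broker, or Kylin is connected 
to a different broker than the one the console consumer reads from.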
Regards,
Manoj

From: Billy Liu [mailto:[email protected]]
Sent: Thursday, October 12, 2017 1:06 PM
To: user
Subject: Re: Kafka Streaming data - Error while building the Cube

Hi Kumar,

Could you paste more of the Kafka consumer related log from kylin.log? And also 
check, from the Kafka broker side, whether the Kylin client has connected to 
the broker.

2017-10-12 14:29 GMT+08:00 Kumar, Manoj H <[email protected]>:
Building the cube from the Kylin UI - messages are there in the Kafka topic, 
but Kylin is not able to read the offsets. Can someone help with this?

2017-10-11 20:50:42,573 INFO  [http-bio-7070-exec-8] utils.AppInfoParser:83 : 
Kafka version : 0.10.2-kafka-2.2.0
2017-10-11 20:50:42,573 INFO  [http-bio-7070-exec-8] utils.AppInfoParser:84 : 
Kafka commitId : unknown
2017-10-11 20:50:42,586 DEBUG [http-bio-7070-exec-8] kafka.KafkaSource:107 : 
The end offsets are {0=0}
2017-10-11 20:50:42,588 ERROR [http-bio-7070-exec-8] 
controller.CubeController:305 : No new message comes, startOffset = endOffset:0
java.lang.IllegalArgumentException: No new message comes, startOffset = endOffset:0
        at org.apache.kylin.source.kafka.KafkaSource.enrichSourcePartitionBeforeBuild(KafkaSource.java:134)
        at org.apache.kylin.rest.service.JobService.submitJobInternal(JobService.java:236)
        at org.apache.kylin.rest.service.JobService.submitJob(JobService.java:208)
        at org.apache.kylin.rest.service.JobService$$FastClassBySpringCGLIB$$83a44b2a.invoke(<generated>)

Regards,
Manoj


