Hi,

 

I am attaching the MapReduce job log of the application, but I am not able to find 
what the issue is. Please help in this regard.

Regards,

Prasanna.P

 

From: ShaoFeng Shi [mailto:[email protected]] 
Sent: 11 September 2018 19:13
To: user
Subject: Re: Kafka streaming cube build error.

 

Hi, please check the log of the MapReduce job; the detailed error message should 
be there.
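
For example, on a YARN cluster the full log of the failed MR job can usually be 
retrieved with the command below (the application ID placeholder is illustrative; 
substitute the ID of the failed job, visible in the YARN ResourceManager UI):

    yarn logs -applicationId <application_id>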

 

Prasanna <[email protected]> wrote on Tue, Sep 11, 2018 at 9:34 PM:

Dear Team,

 

I am using Kylin 2.3.1 and Kafka 0.10.1. I am trying to create a streaming cube 
with Kafka as the source. I can create the cube, but when I build it, the very 
first step fails. I checked the logs, but they show no error message. I am 
attaching the logs; please go through them and suggest where I went wrong. I 
followed the documentation.
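
For reference, one way to confirm that the topic actually contains messages is 
the console consumer that ships with Kafka; the broker and topic names below are 
taken from the log that follows:

    bin/kafka-console-consumer.sh --bootstrap-server master01.kylinmobility.local:6667 --topic kylin_streaming_topic --from-beginning --max-messages 10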

 

Log:

 

2018-09-11 18:35:13,296 DEBUG [http-bio-7070-exec-3] kafka.KafkaSource:78 : 
Last segment exists, continue from last segment 2581_9574's end position: 
{0=3178, 1=3218, 2=3178}

2018-09-11 18:35:13,297 INFO  [http-bio-7070-exec-3] 
consumer.ConsumerConfig:180 : ConsumerConfig values: 

                auto.commit.interval.ms = 5000

                auto.offset.reset = latest

                bootstrap.servers = [master01.kylinmobility.local:6667, 
master02.kylinmobility.local:6667, slave01.kylinmobility.local:6667]

                check.crcs = true

                client.id = 

                connections.max.idle.ms = 540000

                enable.auto.commit = false

                exclude.internal.topics = true

                fetch.max.bytes = 52428800

                fetch.max.wait.ms = 500

                fetch.min.bytes = 1

                group.id = streaming_cube

                heartbeat.interval.ms = 3000

                interceptor.classes = null

                key.deserializer = class 
org.apache.kafka.common.serialization.StringDeserializer

                max.partition.fetch.bytes = 1048576

                max.poll.interval.ms = 300000

                max.poll.records = 500

                metadata.max.age.ms = 300000

                metric.reporters = []

                metrics.num.samples = 2

                metrics.sample.window.ms = 30000

                partition.assignment.strategy = [class 
org.apache.kafka.clients.consumer.RangeAssignor]

                receive.buffer.bytes = 65536

                reconnect.backoff.ms = 50

                request.timeout.ms = 305000

                retry.backoff.ms = 100

                sasl.kerberos.kinit.cmd = /usr/bin/kinit

                sasl.kerberos.min.time.before.relogin = 60000

                sasl.kerberos.service.name = null

                sasl.kerberos.ticket.renew.jitter = 0.05

                sasl.kerberos.ticket.renew.window.factor = 0.8

                sasl.mechanism = GSSAPI

                security.protocol = PLAINTEXT

                send.buffer.bytes = 131072

                session.timeout.ms = 10000

                ssl.cipher.suites = null

                ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]

                ssl.endpoint.identification.algorithm = null

                ssl.key.password = null

                ssl.keymanager.algorithm = SunX509

                ssl.keystore.location = null

                ssl.keystore.password = null

                ssl.keystore.type = JKS

                ssl.protocol = TLS

                ssl.provider = null

                ssl.secure.random.implementation = null

                ssl.trustmanager.algorithm = PKIX

                ssl.truststore.location = null

                ssl.truststore.password = null

                ssl.truststore.type = JKS

                value.deserializer = class 
org.apache.kafka.common.serialization.StringDeserializer

 

2018-09-11 18:35:13,298 INFO  [http-bio-7070-exec-3] 
consumer.ConsumerConfig:180 : ConsumerConfig values: [same as the block above, 
except client.id = consumer-3]

 

2018-09-11 18:35:13,300 INFO  [http-bio-7070-exec-3] utils.AppInfoParser:83 : 
Kafka version : 0.10.1.2.6.3.0-235

2018-09-11 18:35:13,300 INFO  [http-bio-7070-exec-3] utils.AppInfoParser:84 : 
Kafka commitId : ba0af6800a08d2f8

2018-09-11 18:35:13,302 INFO  [http-bio-7070-exec-3] kafka.KafkaSource:96 : Get 
3 partitions for topic kylin_streaming_topic 

2018-09-11 18:35:13,303 DEBUG [http-bio-7070-exec-3] kafka.KafkaSource:107 : 
Seek end offsets from topic kylin_streaming_topic

2018-09-11 18:35:13,304 INFO  [http-bio-7070-exec-3] 
consumer.ConsumerConfig:180 : ConsumerConfig values: [same as the first block 
above, with client.id empty]

 

2018-09-11 18:35:13,304 INFO  [http-bio-7070-exec-3] 
consumer.ConsumerConfig:180 : ConsumerConfig values: [same as the first block 
above, except client.id = consumer-4]

 

2018-09-11 18:35:13,305 INFO  [http-bio-7070-exec-3] utils.AppInfoParser:83 : 
Kafka version : 0.10.1.2.6.3.0-235

2018-09-11 18:35:13,305 INFO  [http-bio-7070-exec-3] utils.AppInfoParser:84 : 
Kafka commitId : ba0af6800a08d2f8

2018-09-11 18:35:13,419 DEBUG [http-bio-7070-exec-3] kafka.KafkaSource:109 : 
The end offsets are {0=4429, 1=4436, 2=4413}

2018-09-11 18:35:13,420 INFO  [http-bio-7070-exec-3] cube.CubeManager:297 : 
Updating cube instance 'streaming_cube'

2018-09-11 18:35:13,421 DEBUG [http-bio-7070-exec-3] 
cachesync.CachedCrudAssist:190 : Saving CubeInstance at 
/cube/streaming_cube.json

2018-09-11 18:35:13,426 DEBUG [pool-6-thread-1] cachesync.Broadcaster:113 : 
Servers in the cluster: [192.168.1.135:7070, 192.168.1.136:7070]

2018-09-11 18:35:13,427 DEBUG [pool-6-thread-1] cachesync.Broadcaster:123 : 
Announcing new broadcast to all: BroadcastEvent{entity=cube, event=update, 
cacheKey=streaming_cube}

2018-09-11 18:35:13,429 INFO  [http-bio-7070-exec-3] 
mr.BatchCubingJobBuilder2:54 : MR_V2 new job to BUILD segment 
streaming_cube[9574_13278]
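
(Note on the segment name: 9574_13278 appears to be the Kafka offsets summed 
across partitions; the start offsets {0=3178, 1=3218, 2=3178} add up to 9574 and 
the end offsets {0=4429, 1=4436, 2=4413} add up to 13278, matching the positions 
logged above.)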

2018-09-11 18:35:13,443 DEBUG [http-bio-7070-exec-2] cachesync.Broadcaster:247 
: Broadcasting UPDATE, cube, streaming_cube

2018-09-11 18:35:13,446 DEBUG [http-bio-7070-exec-2] cachesync.Broadcaster:247 
: Broadcasting UPDATE, project_data, test

2018-09-11 18:35:13,447 INFO  [http-bio-7070-exec-2] service.CacheService:120 : 
cleaning cache for project test (currently remove all entries)

2018-09-11 18:35:13,448 DEBUG [http-bio-7070-exec-2] cachesync.Broadcaster:281 
: Done broadcasting UPDATE, project_data, test

2018-09-11 18:35:13,454 DEBUG [http-bio-7070-exec-2] cachesync.Broadcaster:281 
: Done broadcasting UPDATE, cube, streaming_cube

2018-09-11 18:35:13,560 DEBUG [http-bio-7070-exec-3] hbase.HBaseConnection:181 
: Using the working dir FS for HBase: hdfs://kylinbdcluster
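
For readers less familiar with the ConsumerConfig dump above, here is a minimal 
standalone sketch that creates a consumer with the same key settings. This is an 
illustration against the plain Kafka 0.10.x client API, not Kylin's actual code; 
the class name TopicCheck and the throwaway group ID topic_check are made up:

    import java.util.Arrays;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class TopicCheck {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Broker list and deserializers as in the ConsumerConfig dump above.
            props.put("bootstrap.servers",
                    "master01.kylinmobility.local:6667,"
                    + "master02.kylinmobility.local:6667,"
                    + "slave01.kylinmobility.local:6667");
            // A throwaway group id, so this check does not disturb
            // Kylin's "streaming_cube" consumer group.
            props.put("group.id", "topic_check");
            props.put("enable.auto.commit", "false");
            // Read from the beginning so the check sees existing messages.
            props.put("auto.offset.reset", "earliest");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Arrays.asList("kylin_streaming_topic"));
                // poll(long) is the 0.10.x API; the first poll may return
                // nothing while the group is still rebalancing.
                ConsumerRecords<String, String> records = consumer.poll(5000L);
                System.out.println("Fetched " + records.count() + " records");
            }
        }
    }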

 

 

 






 

-- 

Best regards,

 

Shaofeng Shi 史少锋

 

[Attachment (binary, not readable as plain text): a YARN aggregated log in 
org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController 
(IFile) format, covering containers container_e23_1536306347237_0101_01_000001 
and container_e23_1536306347237_0101_02_000001 on slave01.xyz.local:45454.]
