A question:

(input order)
test1
test2
test3
test 2017-08-10
2017-08-10 test1
2017-08-10 test2


If I consume them using *--from-beginning*:
(received order)
test1
test 2017-08-10
2017-08-10 test1
test2
test3
2017-08-10 test2

Any idea how to get the messages back in the same order as they were input?
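An editor's note, not from the thread: Kafka guarantees ordering only within a single partition. The topic here has multiple partitions (num.partitions=20 in the server.properties quoted below), the console producer spreads unkeyed messages across them, and the consumer interleaves the per-partition streams, which explains the shuffled output. A minimal shell sketch of the effect, assuming simple round-robin partitioning across 3 partitions:

```shell
# Simulate round-robin partitioning of 6 messages over 3 partitions:
# order is preserved inside each partition, but not globally.
msgs="test1 test2 test3 test4 test5 test6"
i=0
p0=""; p1=""; p2=""
for m in $msgs; do
  case $((i % 3)) in
    0) p0="$p0 $m" ;;
    1) p1="$p1 $m" ;;
    2) p2="$p2 $m" ;;
  esac
  i=$((i + 1))
done
# A consumer may drain the partitions in any order, e.g. p0 then p1 then p2:
echo "received:$p0$p1$p2"
```

Recreating the topic with --partitions 1, or producing every message with the same key so everything routes to one partition, preserves the input order end to end (at the cost of parallelism).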







On Thu, Aug 10, 2017 at 6:37 PM, Ascot Moss <ascot.m...@gmail.com> wrote:

> Works!
> Many thanks
>
> On Thu, Aug 10, 2017 at 4:33 PM, M. Manna <manme...@gmail.com> wrote:
>
>> you missed port - comment that out too.
>>
>> Debugging can be enabled by:
>>
>> 1) Setting the root logger to DEBUG - more information on your cluster
>> 2) SSL debugging - edit kafka-run-class to add
>> -Djavax.security.debug=all
>>  (see how some other values are configured there for examples)
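For reference (an editor's sketch, not from the thread): the stock kafka-run-class.sh appends whatever is in the KAFKA_OPTS environment variable to the java command line, so the flag can also be injected without editing the script:

```shell
# kafka-run-class.sh (and the start scripts that call it) append $KAFKA_OPTS
# to the java invocation, so extra JVM flags can be set from the environment:
export KAFKA_OPTS="-Djavax.security.debug=all"
echo "$KAFKA_OPTS"
# then start the broker as usual:
#   bin/kafka-server-start.sh config/server.properties
```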
>>
>> could you please set:
>> zookeeper.connection.timeout.ms = 15000
>> zookeeper.sync.time.ms=10000
>> retries=10
>>
>> It seems that your group metadata is expiring all the time. Try the above
>> and see if it improves.
>>
>>
>> On 10 August 2017 at 00:17, Ascot Moss <ascot.m...@gmail.com> wrote:
>>
>> > I commented out both #host.name and #advertised.host.name
>> >
>> > (new server.properties)
>> > broker.id=11
>> > port=9093
>> > #host.name=n1.test.com
>> > #advertised.host.name=192.168.0.11
>> > allow.everyone.if.no.acl.found=true
>> > super.users=User:CN=n1.test.com,OU=TEST,O=TEST,L=TEST,ST=TEST,C=TEST
>> > listeners=SSL://n1.test.com:9093
>> > advertised.listeners=SSL://n1.test.com:9093
>> > ssl.client.auth=required
>> > ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
>> > ssl.keystore.type=JKS
>> > ssl.truststore.type=JKS
>> > security.inter.broker.protocol=SSL
>> > ssl.keystore.location=/home/kafka/kafka.server.keystore.jks
>> > ssl.keystore.password=Test2017
>> > ssl.key.password=Test2017
>> > ssl.truststore.location=/home/kafka/kafka.server.truststore.jks
>> > ssl.truststore.password=Test2017
>> > authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
>> > principal.builder.class=org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
>> > num.replica.fetchers=4
>> > replica.fetch.max.bytes=1048576
>> > replica.fetch.wait.max.ms=500
>> > replica.high.watermark.checkpoint.interval.ms=5000
>> > replica.socket.timeout.ms=30000
>> > replica.socket.receive.buffer.bytes=65536
>> > replica.lag.time.max.ms=10000
>> > controller.socket.timeout.ms=30000
>> > controller.message.queue.size=10
>> > default.replication.factor=3
>> > log.dirs=/usr/log/kafka
>> > kafka.logs.dir=/usr/log/kafka
>> > num.partitions=20
>> > message.max.bytes=1000000
>> > auto.create.topics.enable=true
>> > log.index.interval.bytes=4096
>> > log.index.size.max.bytes=10485760
>> > log.retention.hours=720
>> > log.flush.interval.ms=10000
>> > log.flush.interval.messages=20000
>> > log.flush.scheduler.interval.ms=2000
>> > log.roll.hours=168
>> > log.retention.check.interval.ms=300000
>> > log.segment.bytes=1073741824
>> > delete.topic.enable=true
>> > socket.request.max.bytes=104857600
>> > socket.receive.buffer.bytes=1048576
>> > socket.send.buffer.bytes=1048576
>> > num.io.threads=8
>> > num.network.threads=8
>> > queued.max.requests=16
>> > fetch.purgatory.purge.interval.requests=100
>> > producer.purgatory.purge.interval.requests=100
>> > zookeeper.connect=n1:2181,n2:2181,n3:2181
>> > zookeeper.connection.timeout.ms=2000
>> > zookeeper.sync.time.ms=2000
>> >
>> >
>> > (producer.properties)
>> > bootstrap.servers=n1.test.com:9093
>> > security.protocol=SSL
>> > ssl.truststore.location=/home/kafka/kafka.client.truststore.jks
>> > ssl.truststore.password=testkafka
>> > ssl.keystore.location=/home/kafka/kafka.client.keystore.jks
>> > ssl.keystore.password=testkafka
>> > ssl.key.password=testkafka
>> >
>> >
>> > (run producer)
>> > ./bin/kafka-console-producer.sh \
>> > --broker-list n1:9093 \
>> > --producer.config /home/kafka/config/producer.n1.properties \
>> > --sync --topic test02
>> >
>> >
>> > (got error)
>> >
>> > [2017-08-10 07:10:31,881] ERROR Error when sending message to topic test02
>> > with key: null, value: 0 bytes with error:
>> > (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
>> > org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s) for
>> > test02-0: 1518 ms has passed since batch creation plus linger time
>> >
>> > [2017-08-10 07:10:32,230] ERROR Error when sending message to topic test02
>> > with key: null, value: 0 bytes with error:
>> > (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
>> > org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s) for
>> > test02-1: 1543 ms has passed since batch creation plus linger time
>> >
>> >
>> >
>> > By the way, where do I set "-Djavax.security.debug=all" for Kafka?
>> >
>> >
>> > On Thu, Aug 10, 2017 at 5:25 AM, M. Manna <manme...@gmail.com> wrote:
>> >
>> > > if you remove host.name, advertised.host.name and port from
>> > > server.properties, does it work for you?
>> > >
>> > > I am using SSL without ACL. It seems to be working fine.
>> > >
>> > > On 9 August 2017 at 22:03, Ascot Moss <ascot.m...@gmail.com> wrote:
>> > >
>> > > > About:
>> > > > zookeeper-shell.sh localhost:2181
>> > > > get /brokers/ids/11
>> > > >
>> > > >
>> > > > The result:
>> > > >
>> > > > zookeeper-shell.sh n1.test.com:2181
>> > > >
>> > > > Connecting to n1.test.com:2181
>> > > >
>> > > > Welcome to ZooKeeper!
>> > > >
>> > > > JLine support is disabled
>> > > >
>> > > > WATCHER::
>> > > >
>> > > > WatchedEvent state:SyncConnected type:None path:null
>> > > >
>> > > > WATCHER::
>> > > >
>> > > >
>> > > >
>> > > >
>> > > > get /brokers/ids/11
>> > > >
>> > > > WatchedEvent state:SaslAuthenticated type:None path:null
>> > > >
>> > > > {"listener_security_protocol_map":{"SSL":"SSL"},"endpoints":["SSL://n1.test.com:9093"],"jmx_port":-1,"host":null,"timestamp":"1502310695312","port":-1,"version":4}
>> > > >
>> > > > cZxid = 0x40002787d
>> > > >
>> > > > ctime = Thu Aug 10 04:31:37 HKT 2017
>> > > >
>> > > > mZxid = 0x40002787d
>> > > >
>> > > > mtime = Thu Aug 10 04:31:37 HKT 2017
>> > > >
>> > > > pZxid = 0x40002787d
>> > > >
>> > > > cversion = 0
>> > > >
>> > > > dataVersion = 0
>> > > >
>> > > > aclVersion = 0
>> > > >
>> > > > ephemeralOwner = 0x35d885c689c00a6
>> > > >
>> > > > dataLength = 168
>> > > >
>> > > > numChildren = 0
>> > > >
>> > > > On Thu, Aug 10, 2017 at 4:46 AM, Ascot Moss <ascot.m...@gmail.com>
>> > > wrote:
>> > > >
>> > > > > About:  zookeeper-shell.sh localhost:2181
>> > > > > get /brokers/ids/11
>> > > > >
>> > > > > The result:
>> > > > >
>> > > > > zookeeper-shell.sh n1.test.com:2181
>> > > > >
>> > > > > Connecting to n1.test.com:2181
>> > > > >
>> > > > > Welcome to ZooKeeper!
>> > > > >
>> > > > > JLine support is disabled
>> > > > >
>> > > > > WATCHER::
>> > > > >
>> > > > > WatchedEvent state:SyncConnected type:None path:null
>> > > > >
>> > > > > WATCHER::
>> > > > >
>> > > > > WatchedEvent state:SaslAuthenticated type:None path:null
>> > > > >
>> > > > >
>> > > > > On Thu, Aug 10, 2017 at 4:43 AM, Ascot Moss <ascot.m...@gmail.com> wrote:
>> > > > >
>> > > > >> FYI, about zookeeper: I used my existing zookeeper (it is already up
>> > > > >> and running, and is also used for hbase).
>> > > > >>
>> > > > >> zookeeper version: 3.4.10
>> > > > >>
>> > > > >> zoo.cfg
>> > > > >> ######
>> > > > >>
>> > > > >> tickTime=2000
>> > > > >>
>> > > > >> initLimit=10
>> > > > >>
>> > > > >> syncLimit=5
>> > > > >>
>> > > > >> dataDir=/usr/local/zookeeper/data
>> > > > >>
>> > > > >> dataLogDir=/usr/local/zookeeper/datalog
>> > > > >>
>> > > > >> clientPort=2181
>> > > > >>
>> > > > >> maxClientCnxns=60
>> > > > >>
>> > > > >> server.1=n1.test.com:2888:3888
>> > > > >>
>> > > > >> server.2=n2.test.com:2888:3888
>> > > > >>
>> > > > >> server.3=n3.test.com:2888:3888
>> > > > >>
>> > > > >> authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
>> > > > >>
>> > > > >> jaasLoginRenew=3600000
>> > > > >>
>> > > > >> requireClientAuthScheme=sasl
>> > > > >>
>> > > > >> zookeeper.allowSaslFailedClients=false
>> > > > >>
>> > > > >> kerberos.removeHostFromPrincipal=true
>> > > > >>
>> > > > >> ######
>> > > > >>
>> > > > >>
>> > > > >>
>> > > > >> On Thu, Aug 10, 2017 at 4:35 AM, Ascot Moss <ascot.m...@gmail.com> wrote:
>> > > > >>
>> > > > >>> server.properties
>> > > > >>>
>> > > > >>> ######
>> > > > >>>
>> > > > >>> broker.id=11
>> > > > >>>
>> > > > >>> port=9093
>> > > > >>>
>> > > > >>> host.name=n1
>> > > > >>>
>> > > > >>> advertised.host.name=192.168.0.11
>> > > > >>>
>> > > > >>> allow.everyone.if.no.acl.found=true
>> > > > >>>
>> > > > >>> super.users=User:CN=n1.test.com,OU=TEST,O=TEST,L=TEST,ST=TEST,C=TEST
>> > > > >>>
>> > > > >>> listeners=SSL://n1.test.com:9093
>> > > > >>>
>> > > > >>> advertised.listeners=SSL://n1.test.com:9093
>> > > > >>>
>> > > > >>> ssl.client.auth=required
>> > > > >>>
>> > > > >>> ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
>> > > > >>>
>> > > > >>> ssl.keystore.type=JKS
>> > > > >>>
>> > > > >>> ssl.truststore.type=JKS
>> > > > >>>
>> > > > >>> security.inter.broker.protocol=SSL
>> > > > >>>
>> > > > >>> ssl.keystore.location=/home/kafka/kafka.server.keystore.jks
>> > > > >>>
>> > > > >>> ssl.keystore.password=Test2017
>> > > > >>>
>> > > > >>> ssl.key.password=Test2017
>> > > > >>>
>> > > > >>> ssl.truststore.location=/home/kafka/kafka.server.truststore.jks
>> > > > >>>
>> > > > >>> ssl.truststore.password=Test2017
>> > > > >>>
>> > > > >>> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
>> > > > >>>
>> > > > >>> principal.builder.class=org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
>> > > > >>>
>> > > > >>> num.replica.fetchers=4
>> > > > >>>
>> > > > >>> replica.fetch.max.bytes=1048576
>> > > > >>>
>> > > > >>> replica.fetch.wait.max.ms=500
>> > > > >>>
>> > > > >>> replica.high.watermark.checkpoint.interval.ms=5000
>> > > > >>>
>> > > > >>> replica.socket.timeout.ms=30000
>> > > > >>>
>> > > > >>> replica.socket.receive.buffer.bytes=65536
>> > > > >>>
>> > > > >>> replica.lag.time.max.ms=10000
>> > > > >>>
>> > > > >>> controller.socket.timeout.ms=30000
>> > > > >>>
>> > > > >>> controller.message.queue.size=10
>> > > > >>>
>> > > > >>> default.replication.factor=3
>> > > > >>>
>> > > > >>> log.dirs=/usr/log/kafka
>> > > > >>>
>> > > > >>> kafka.logs.dir=/usr/log/kafka
>> > > > >>>
>> > > > >>> num.partitions=20
>> > > > >>>
>> > > > >>> message.max.bytes=1000000
>> > > > >>>
>> > > > >>> auto.create.topics.enable=true
>> > > > >>>
>> > > > >>> log.index.interval.bytes=4096
>> > > > >>>
>> > > > >>> log.index.size.max.bytes=10485760
>> > > > >>>
>> > > > >>> log.retention.hours=720
>> > > > >>>
>> > > > >>> log.flush.interval.ms=10000
>> > > > >>>
>> > > > >>> log.flush.interval.messages=20000
>> > > > >>>
>> > > > >>> log.flush.scheduler.interval.ms=2000
>> > > > >>>
>> > > > >>> log.roll.hours=168
>> > > > >>>
>> > > > >>> log.retention.check.interval.ms=300000
>> > > > >>>
>> > > > >>> log.segment.bytes=1073741824
>> > > > >>>
>> > > > >>> delete.topic.enable=true
>> > > > >>>
>> > > > >>> socket.request.max.bytes=104857600
>> > > > >>>
>> > > > >>> socket.receive.buffer.bytes=1048576
>> > > > >>>
>> > > > >>> socket.send.buffer.bytes=1048576
>> > > > >>>
>> > > > >>> num.io.threads=8
>> > > > >>>
>> > > > >>> num.network.threads=8
>> > > > >>>
>> > > > >>> queued.max.requests=16
>> > > > >>>
>> > > > >>> fetch.purgatory.purge.interval.requests=100
>> > > > >>>
>> > > > >>> producer.purgatory.purge.interval.requests=100
>> > > > >>>
>> > > > >>> zookeeper.connect=n1:2181,n2:2181,n3:2181
>> > > > >>>
>> > > > >>> zookeeper.connection.timeout.ms=2000
>> > > > >>>
>> > > > >>> zookeeper.sync.time.ms=2000
>> > > > >>>
>> > > > >>> ######
>> > > > >>>
>> > > > >>>
>> > > > >>>
>> > > > >>>
>> > > > >>>
>> > > > >>> producer.properties
>> > > > >>>
>> > > > >>> ######
>> > > > >>>
>> > > > >>> bootstrap.servers=n1.test.com:9093
>> > > > >>>
>> > > > >>> security.protocol=SSL
>> > > > >>>
>> > > > >>> ssl.truststore.location=/home/kafka/kafka.client.truststore.jks
>> > > > >>>
>> > > > >>> ssl.truststore.password=testkafka
>> > > > >>>
>> > > > >>> ssl.keystore.location=/home/kafka/kafka.client.keystore.jks
>> > > > >>>
>> > > > >>> ssl.keystore.password=testkafka
>> > > > >>>
>> > > > >>> ssl.key.password=testkafka
>> > > > >>> #####
>> > > > >>>
>> > > > >>>
>> > > > >>> (I had tried to switch to another port; 9093 is the correct port)
>> > > > >>>
>> > > > >>> On Thu, Aug 10, 2017 at 4:28 AM, M. Manna <manme...@gmail.com>
>> > > wrote:
>> > > > >>>
>> > > > >>>> Your openssl test shows a connection on port 9092, but your previous
>> > > > >>>> messages show 9093 - is there a typo somewhere? Where is SSL running?
>> > > > >>>>
>> > > > >>>> Please share the following and don't leave any details out -
>> > > > >>>> leaving details out will only lead to more assumptions.
>> > > > >>>>
>> > > > >>>> 1) server.properties
>> > > > >>>> 2) Zookeeper.properties
>> > > > >>>>
>> > > > >>>> Also, run the following command (when the cluster is running)
>> > > > >>>> zookeeper-shell.sh localhost:2181
>> > > > >>>> get /brokers/ids/11
>> > > > >>>>
>> > > > >>>> Does it show that your broker #11 is connected?
>> > > > >>>>
>> > > > >>>>
>> > > > >>>>
>> > > > >>>>
>> > > > >>>> On 9 August 2017 at 21:17, Ascot Moss <ascot.m...@gmail.com>
>> > wrote:
>> > > > >>>>
>> > > > >>>> > Dear Manna,
>> > > > >>>> >
>> > > > >>>> >
>> > > > >>>> > What's the status of your SSL? Have you verified that the
>> > > > >>>> > setup is working?
>> > > > >>>> > Yes, I used "
>> > > > >>>> >
>> > > > >>>> > openssl s_client -debug -connect n1.test.com:9092 -tls1
>> > > > >>>> > Output:
>> > > > >>>> >
>> > > > >>>> > CONNECTED(00000003)
>> > > > >>>> >
>> > > > >>>> > write to 0x853e70 [0x89fd43] (155 bytes => 155 (0x9B))
>> > > > >>>> >
>> > > > >>>> > 0000 - 16 03 01 00 96 01 00 00-92 03 01 59 8b 6d 0d b1
>> > > > >>>>  ...........Y.m..
>> > > > >>>> > ...
>> > > > >>>> >
>> > > > >>>> > Server certificate
>> > > > >>>> >
>> > > > >>>> > -----BEGIN CERTIFICATE-----
>> > > > >>>> >
>> > > > >>>> > CwwCSEsxGT............
>> > > > >>>> >
>> > > > >>>> > -----END CERTIFICATE-----
>> > > > >>>> >
>> > > > >>>> > ---
>> > > > >>>> >
>> > > > >>>> > SSL handshake has read 2470 bytes and written 161 bytes
>> > > > >>>> >
>> > > > >>>> > ---
>> > > > >>>> >
>> > > > >>>> > New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
>> > > > >>>> >
>> > > > >>>> >     PSK identity hint: None
>> > > > >>>> >
>> > > > >>>> >     Start Time: 1502309645
>> > > > >>>> >
>> > > > >>>> >     Timeout   : 7200 (sec)
>> > > > >>>> >
>> > > > >>>> >     Verify return code: 19 (self signed certificate in certificate chain)
>> > > > >>>> >
>> > > > >>>> > ---
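An editor's aside on the "Verify return code: 19" above: that code only means openssl itself does not trust the self-signed CA, not that the handshake failed. Assuming the CA certificate that signed the broker certificate is available locally (the file name ca-cert here is hypothetical), passing it with -CAfile should yield return code 0:

```shell
# -CAfile supplies the self-signed CA so openssl can verify the full chain;
# ca-cert is a placeholder for wherever the CA certificate was saved.
openssl s_client -connect n1.test.com:9093 -tls1 -CAfile ca-cert </dev/null
```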
>> > > > >>>> >
>> > > > >>>> > Regards
>> > > > >>>> >
>> > > > >>>> > On Wed, Aug 9, 2017 at 10:29 PM, M. Manna <manme...@gmail.com> wrote:
>> > > > >>>> >
>> > > > >>>> > > Hi,
>> > > > >>>> > >
>> > > > >>>> > > What's the status of your SSL? Have you verified that the
>> > setup
>> > > is
>> > > > >>>> > working?
>> > > > >>>> > >
>> > > > >>>> > > You can enable debug logging using the log4j.properties file
>> > > > >>>> > > supplied with kafka and set the root logging level to DEBUG.
>> > > > >>>> > > This prints out more info to trace things. Also, you can
>> > > > >>>> > > enable security logging by adding
>> > > > >>>> > > -Djavax.security.debug=all
>> > > > >>>> > >
>> > > > >>>> > > Please share your producer/broker configs with us.
>> > > > >>>> > >
>> > > > >>>> > > Kindest Regards,
>> > > > >>>> > > M. Manna
>> > > > >>>> > >
>> > > > >>>> > > On 9 August 2017 at 14:38, Ascot Moss <ascot.m...@gmail.com> wrote:
>> > > > >>>> > >
>> > > > >>>> > > > Hi,
>> > > > >>>> > > >
>> > > > >>>> > > >
>> > > > >>>> > > > I have setup Kafka 0.10.2.1 with SSL.
>> > > > >>>> > > >
>> > > > >>>> > > >
>> > > > >>>> > > > Check Status:
>> > > > >>>> > > >
>> > > > >>>> > > > openssl s_client -debug -connect n1:9093 -tls1
>> > > > >>>> > > >
>> > > > >>>> > > > New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
>> > > > >>>> > > >
>> > > > >>>> > > > ... SSL-Session:
>> > > > >>>> > > >
>> > > > >>>> > > >     Protocol  : TLSv1
>> > > > >>>> > > >
>> > > > >>>> > > >     PSK identity hint: None
>> > > > >>>> > > >
>> > > > >>>> > > >     Start Time: 1502285690
>> > > > >>>> > > >
>> > > > >>>> > > >     Timeout   : 7200 (sec)
>> > > > >>>> > > >
>> > > > >>>> > > >     Verify return code: 19 (self signed certificate in certificate chain)
>> > > > >>>> > > >
>> > > > >>>> > > >
>> > > > >>>> > > > Create Topic:
>> > > > >>>> > > >
>> > > > >>>> > > > kafka-topics.sh --create --zookeeper n1:2181,n2:2181,n3:2181
>> > > > >>>> > > > --replication-factor 3 --partitions 3 --topic test02
>> > > > >>>> > > >
>> > > > >>>> > > > ERROR [ReplicaFetcherThread-2-111], Error for partition [test02,2] to
>> > > > >>>> > > > broker 1:org.apache.kafka.common.errors.UnknownTopicOrPartitionException:
>> > > > >>>> > > > This server does not host this topic-partition.
>> > > > >>>> > > > (kafka.server.ReplicaFetcherThread)
>> > > > >>>> > > >
>> > > > >>>> > > > However, if I run describe topic, I can see it is created
>> > > > >>>> > > >
>> > > > >>>> > > >
>> > > > >>>> > > >
>> > > > >>>> > > > Describe Topic:
>> > > > >>>> > > >
>> > > > >>>> > > > kafka-topics.sh --zookeeper n1:2181,n2:2181,n3:2181 --describe --topic test02
>> > > > >>>> > > >
>> > > > >>>> > > > Topic:test02 PartitionCount:3 ReplicationFactor:3 Configs:
>> > > > >>>> > > >
>> > > > >>>> > > > Topic: test02 Partition: 0 Leader: 12 Replicas: 12,13,11 Isr: 12,13,11
>> > > > >>>> > > >
>> > > > >>>> > > > Topic: test02 Partition: 1 Leader: 13 Replicas: 13,11,12 Isr: 13,11,12
>> > > > >>>> > > >
>> > > > >>>> > > > Topic: test02 Partition: 2 Leader: 11 Replicas: 11,12,13 Isr: 11,12,13
>> > > > >>>> > > >
>> > > > >>>> > > >
>> > > > >>>> > > > Consumer:
>> > > > >>>> > > >
>> > > > >>>> > > > kafka-console-consumer.sh --bootstrap-server n1:9093 --consumer.config
>> > > > >>>> > > > /home/kafka/config/consumer.n1.properties --topic test02 --from-beginning
>> > > > >>>> > > >
>> > > > >>>> > > >
>> > > > >>>> > > >
>> > > > >>>> > > > Producer:
>> > > > >>>> > > >
>> > > > >>>> > > > kafka-console-producer.sh --broker-list n1:9093 --producer.config
>> > > > >>>> > > > /home/kafka/config/producer.n1.properties --sync --topic test02
>> > > > >>>> > > >
>> > > > >>>> > > > ERROR Error when sending message to topic test02 with key: null,
>> > > > >>>> > > > value: 0 bytes with error:
>> > > > >>>> > > > (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
>> > > > >>>> > > >
>> > > > >>>> > > > org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for
>> > > > >>>> > > > test02-1: 1506 ms has passed since batch creation plus linger time
>> > > > >>>> > > >
>> > > > >>>> > > >
>> > > > >>>> > > > How to resolve it?
>> > > > >>>> > > >
>> > > > >>>> > > > Regards
>> > > > >>>> > > >
>> > > > >>>> > >
>> > > > >>>> >
>> > > > >>>>
>> > > > >>>
>> > > > >>>
>> > > > >>
>> > > > >
>> > > >
>> > >
>> >
>>
>
>
