Re: Create Topic Error: Create Topic Error and cannot write to console producer

2017-08-13 Thread Ascot Moss
Hi,


Without changing any configuration, I got the error again:

[2017-08-13 20:09:52,727] ERROR Error when sending message to topic test02
with key: null, value: 5 bytes with error:
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)

org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for
test02-1: 1542 ms has passed since batch creation plus linger time

[2017-08-13 20:09:53,835] ERROR Error when sending message to topic test02
with key: null, value: 5 bytes with error:
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)

org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for
test02-0: 1532 ms has passed since batch creation plus linger time


Producer:

kafka-console-producer.sh \
--broker-list n1:9093  \
--producer.config /home/kafka/config/producer.n1.properties \
--sync --topic test02


Consumer:

kafka-console-consumer.sh \
--bootstrap-server n1:9093  \
--consumer.config /home/kafka/config/consumer.n1.properties \
--topic test02 --from-beginning
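
(note: this TimeoutException is raised when a batch waits in the producer's
buffer longer than request.timeout.ms without the partition leader becoming
reachable - usually a connectivity or SSL handshake problem rather than
broker load)

(illustrative tuning sketch for producer.n1.properties, to give the producer
more time to connect and retry; the values are assumptions, not recommendations)

request.timeout.ms=30000
retries=10
max.block.ms=60000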


Re: Create Topic Error: Create Topic Error and cannot write to console producer

2017-08-10 Thread Ascot Moss
Could you point me to where the document is?

On Thu, Aug 10, 2017 at 6:50 PM, M. Manna  wrote:

> This is due to the partitions you are consuming from. The documentation
> section explains what needs to be done.
>
>
>
>
> On 10 August 2017 at 11:43, Ascot Moss  wrote:
>
> > A question:
> >
> > (input order)
> > test1
> > test2
> > test3
> > test 2017-08-10
> > |2017-08-10 test1
> > 2017-08-10 test2
> >
> >
> > If I get them using
> > *--from-beginning*
> > (received order)
> > test1
> > test 2017-08-10
> > 2017-08-10 test1
> > test2
> > test3
> > 2017-08-10 test2
> >
> > Any idea how to get the messages in the original input order?
> >
>


Re: Create Topic Error: Create Topic Error and cannot write to console producer

2017-08-10 Thread M. Manna
This is due to the partitions you are consuming from. The documentation
section explains what needs to be done.
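
(background: Kafka preserves message order only within a single partition,
not across a whole topic, so a multi-partition topic interleaves messages on
read; the partition count can be checked with the describe command used
elsewhere in this thread)

kafka-topics.sh --zookeeper n1:2181,n2:2181,n3:2181 --describe --topic test02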




On 10 August 2017 at 11:43, Ascot Moss  wrote:

> A question:
>
> (input order)
> test1
> test2
> test3
> test 2017-08-10
> |2017-08-10 test1
> 2017-08-10 test2
>
>
> If I get them using
> *--from-beginning*
> (received order)
> test1
> test 2017-08-10
> 2017-08-10 test1
> test2
> test3
> 2017-08-10 test2
>
> Any idea how to get the messages in the original input order?
>


Re: Create Topic Error: Create Topic Error and cannot write to console producer

2017-08-10 Thread Ascot Moss
A question:

(input order)
test1
test2
test3
test 2017-08-10
|2017-08-10 test1
2017-08-10 test2


If I get them using
*--from-beginning*
(received order)
test1
test 2017-08-10
2017-08-10 test1
test2
test3
2017-08-10 test2

Any idea how to get the messages in the original input order?
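
(sketch of one way to get total ordering, using a hypothetical new topic
test03: create it with a single partition so all messages land in one ordered
log; the trade-off is losing consumer parallelism for that topic)

kafka-topics.sh --create --zookeeper n1:2181,n2:2181,n3:2181 \
--replication-factor 3 --partitions 1 --topic test03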


Re: Create Topic Error: Create Topic Error and cannot write to console producer

2017-08-10 Thread Ascot Moss
A question:

(input order)
test1
test2
test3
test 2017-08-10
|2017-08-10 test1
2017-08-10 test2


If I get them using
*--from-beginning*
(received order)
test1
test 2017-08-10
2017-08-10 test1
test2
test3
2017-08-10 test2

Any idea how to get the messages in the same order as they were input?







On Thu, Aug 10, 2017 at 6:37 PM, Ascot Moss  wrote:

> Works!
> Many thanks
>
> On Thu, Aug 10, 2017 at 4:33 PM, M. Manna  wrote:
>
>> you missed port - comment that out too.
>>
>> Debugging can be enabled by
>>
>> 1) Setting root logger to DEBUG - more information on your cluster
>> 2) SSL debugging - edit kafka-run-class - to add
>> -Djavax.security.debug=all
>>  (see some examples of how some other values are configured)
>>
>> Could you please set:
>> zookeeper.connection.timeout.ms = 15000
>> zookeeper.sync.time.ms=1
>> retries=10
>>
>> It seems that your group metadata is expiring all the time. Try with the above
>> and see if it improves.
>>
>>
>> On 10 August 2017 at 00:17, Ascot Moss  wrote:
>>
>> > I commented out both #host.name and #advertised.host.name
>> >
>> > (new server.properties)
>> > broker.id=11
>> > port=9093
>> > #host.name=n1.test.com
>> > #advertised.host.name=192.168.0.11
>> > allow.everyone.if.no.acl.found=true
>> > super.users=User:CN=n1.test.com,OU=TEST,O=TEST,L=TEST,ST=TEST,C=TEST
>> > listeners=SSL://n1.test.com:9093
>> > advertised.listeners=SSL://n1.test.com:9093
>> > ssl.client.auth=required
>> > ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
>> > ssl.keystore.type=JKS
>> > ssl.truststore.type=JKS
>> > security.inter.broker.protocol=SSL
>> > ssl.keystore.location=/home/kafka/kafka.server.keystore.jks
>> > ssl.keystore.password=Test2017
>> > ssl.key.password=Test2017
>> > ssl.truststore.location=/home/kafka/kafka.server.truststore.jks
>> > ssl.truststore.password=Test2017
>> > authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
>> > principal.builder.class=org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
>> > num.replica.fetchers=4
>> > replica.fetch.max.bytes=1048576
>> > replica.fetch.wait.max.ms=500
>> > replica.high.watermark.checkpoint.interval.ms=5000
>> > replica.socket.timeout.ms=3
>> > replica.socket.receive.buffer.bytes=65536
>> > replica.lag.time.max.ms=1
>> > controller.socket.timeout.ms=3
>> > controller.message.queue.size=10
>> > default.replication.factor=3
>> > log.dirs=/usr/log/kafka
>> > kafka.logs.dir=/usr/log/kafka
>> > num.partitions=20
>> > message.max.bytes=100
>> > auto.create.topics.enable=true
>> > log.index.interval.bytes=4096
>> > log.index.size.max.bytes=10485760
>> > log.retention.hours=720
>> > log.flush.interval.ms=1
>> > log.flush.interval.messages=2
>> > log.flush.scheduler.interval.ms=2000
>> > log.roll.hours=168
>> > log.retention.check.interval.ms=30
>> > log.segment.bytes=1073741824
>> > delete.topic.enable=true
>> > socket.request.max.bytes=104857600
>> > socket.receive.buffer.bytes=1048576
>> > socket.send.buffer.bytes=1048576
>> > num.io.threads=8
>> > num.network.threads=8
>> > queued.max.requests=16
>> > fetch.purgatory.purge.interval.requests=100
>> > producer.purgatory.purge.interval.requests=100
>> > zookeeper.connect=n1:2181,n2:2181,n3:2181
>> > zookeeper.connection.timeout.ms=2000
>> > zookeeper.sync.time.ms=2000
>> >
>> >
>> > (producer.properties)
>> > bootstrap.servers=n1.test.com:9093
>> > security.protocol=SSL
>> > ssl.truststore.location=/home/kafka/kafka.client.truststore.jks
>> > ssl.truststore.password=testkafka
>> > ssl.keystore.location=/home/kafka/kafka.client.keystore.jks
>> > ssl.keystore.password=testkafka
>> > ssl.key.password=testkafka
>> >
>> >
>> > (run producer)
>> > ./bin/kafka-console-producer.sh \
>> > --broker-list n1:9093 \
>> > --producer.config /home/kafka/config/producer.n1.properties \
>> > --sync --topic test02
>> >
>> >
>> > (got error)
>> >
>> > [2017-08-10 07:10:31,881] ERROR Error when sending message to topic
>> test02
>> > with key: null, value: 0 bytes with error:
>> > (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
>> > org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s)
>> for
>> > test02-0: 1518 ms has passed since batch creation plus linger time
>> >
>> > [2017-08-10 07:10:32,230] ERROR Error when sending message to topic
>> test02
>> > with key: null, value: 0 bytes with error:
>> > (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
>> > org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s)
>> for
>> > test02-1: 1543 ms has passed since batch creation plus linger time
>> >
>> >
>> >
>> > By the way, where should I set "-Djavax.security.debug=all" for Kafka?
>> >
>> >
>> > On Thu, Aug 10, 2017 at 5:25 AM, M. Manna  wrote:
>> >
>> > > if you remove host.name, advertised.host.name and port from
>> > > server.properties, does it work for you?
>> > >
>> > > I am using SSL without ACL. It seems to be working fine.

Re: Create Topic Error: Create Topic Error and cannot write to console producer

2017-08-10 Thread Ascot Moss
Works!
Many thanks

On Thu, Aug 10, 2017 at 4:33 PM, M. Manna  wrote:

> you missed port - comment that out too.
>
> Debugging can be enabled by
>
> 1) Setting root logger to DEBUG - more information on your cluster
> 2) SSL debugging - edit kafka-run-class - to add -Djavax.security.debug=all
>  (see some examples of how some other values are configured)
>
> Could you please set:
> zookeeper.connection.timeout.ms = 15000
> zookeeper.sync.time.ms=1
> retries=10
>
> It seems that your group metadata is expiring all the time. Try with the above
> and see if it improves.
>
>
> On 10 August 2017 at 00:17, Ascot Moss  wrote:
>
> > I commented out both #host.name and #advertised.host.name
> >
> > (new server.properties)
> > broker.id=11
> > port=9093
> > #host.name=n1.test.com
> > #advertised.host.name=192.168.0.11
> > allow.everyone.if.no.acl.found=true
> > super.users=User:CN=n1.test.com,OU=TEST,O=TEST,L=TEST,ST=TEST,C=TEST
> > listeners=SSL://n1.test.com:9093
> > advertised.listeners=SSL://n1.test.com:9093
> > ssl.client.auth=required
> > ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
> > ssl.keystore.type=JKS
> > ssl.truststore.type=JKS
> > security.inter.broker.protocol=SSL
> > ssl.keystore.location=/home/kafka/kafka.server.keystore.jks
> > ssl.keystore.password=Test2017
> > ssl.key.password=Test2017
> > ssl.truststore.location=/home/kafka/kafka.server.truststore.jks
> > ssl.truststore.password=Test2017
> > authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> > principal.builder.class=org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
> > num.replica.fetchers=4
> > replica.fetch.max.bytes=1048576
> > replica.fetch.wait.max.ms=500
> > replica.high.watermark.checkpoint.interval.ms=5000
> > replica.socket.timeout.ms=3
> > replica.socket.receive.buffer.bytes=65536
> > replica.lag.time.max.ms=1
> > controller.socket.timeout.ms=3
> > controller.message.queue.size=10
> > default.replication.factor=3
> > log.dirs=/usr/log/kafka
> > kafka.logs.dir=/usr/log/kafka
> > num.partitions=20
> > message.max.bytes=100
> > auto.create.topics.enable=true
> > log.index.interval.bytes=4096
> > log.index.size.max.bytes=10485760
> > log.retention.hours=720
> > log.flush.interval.ms=1
> > log.flush.interval.messages=2
> > log.flush.scheduler.interval.ms=2000
> > log.roll.hours=168
> > log.retention.check.interval.ms=30
> > log.segment.bytes=1073741824
> > delete.topic.enable=true
> > socket.request.max.bytes=104857600
> > socket.receive.buffer.bytes=1048576
> > socket.send.buffer.bytes=1048576
> > num.io.threads=8
> > num.network.threads=8
> > queued.max.requests=16
> > fetch.purgatory.purge.interval.requests=100
> > producer.purgatory.purge.interval.requests=100
> > zookeeper.connect=n1:2181,n2:2181,n3:2181
> > zookeeper.connection.timeout.ms=2000
> > zookeeper.sync.time.ms=2000
> >
> >
> > (producer.properties)
> > bootstrap.servers=n1.test.com:9093
> > security.protocol=SSL
> > ssl.truststore.location=/home/kafka/kafka.client.truststore.jks
> > ssl.truststore.password=testkafka
> > ssl.keystore.location=/home/kafka/kafka.client.keystore.jks
> > ssl.keystore.password=testkafka
> > ssl.key.password=testkafka
> >
> >
> > (run producer)
> > ./bin/kafka-console-producer.sh \
> > --broker-list n1:9093 \
> > --producer.config /home/kafka/config/producer.n1.properties \
> > --sync --topic test02
> >
> >
> > (got error)
> >
> > [2017-08-10 07:10:31,881] ERROR Error when sending message to topic
> test02
> > with key: null, value: 0 bytes with error:
> > (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> > org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s)
> for
> > test02-0: 1518 ms has passed since batch creation plus linger time
> >
> > [2017-08-10 07:10:32,230] ERROR Error when sending message to topic
> test02
> > with key: null, value: 0 bytes with error:
> > (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> > org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s)
> for
> > test02-1: 1543 ms has passed since batch creation plus linger time
> >
> >
> >
> > By the way, where should I set "-Djavax.security.debug=all" for Kafka?
> >
> >
> > On Thu, Aug 10, 2017 at 5:25 AM, M. Manna  wrote:
> >
> > > if you remove host.name, advertised.host.name and port from
> > > server.properties, does it work for you?
> > >
> > > I am using SSL without ACL. It seems to be working fine.
> > >
> > > On 9 August 2017 at 22:03, Ascot Moss  wrote:
> > >
> > > > About:
> > > > zookeeper-shell.sh localhost:2181
> > > > get /brokers/ids/11
> > > >
> > > >
> > > > The result:
> > > >
> > > > zookeeper-shell.sh n1.test.com:2181
> > > >
> > > > Connecting to n1.test.com:2181
> > > >
> > > > Welcome to ZooKeeper!
> > > >
> > > > JLine support is disabled
> > > >
> > > > WATCHER::
> > > >
> > > > WatchedEvent state:SyncConnected type:None path:null
> > > >
> > > > 

Re: Create Topic Error: Create Topic Error and cannot write to console producer

2017-08-10 Thread M. Manna
You missed port - comment that out too.

Debugging can be enabled by

1) Setting root logger to DEBUG - more information on your cluster
2) SSL debugging - edit kafka-run-class - to add -Djavax.security.debug=all
 (see some examples of how some other values are configured)
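
(sketch of (1) and (2), assuming the stock file layout: change the root
logger line in config/log4j.properties, and export KAFKA_OPTS - which
bin/kafka-run-class.sh appends to the java command line - instead of editing
the script)

log4j.rootLogger=DEBUG, stdout

export KAFKA_OPTS="-Djavax.security.debug=all"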

Could you please set:
zookeeper.connection.timeout.ms = 15000
zookeeper.sync.time.ms=1
retries=10

It seems that your group metadata is expiring all the time. Try with the above
and see if it improves.
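
(for reference: once listeners and advertised.listeners are set, the legacy
host.name, advertised.host.name and port keys are superseded and can all stay
commented out - a minimal sketch of the relevant block)

listeners=SSL://n1.test.com:9093
advertised.listeners=SSL://n1.test.com:9093
#port=9093
#host.name=n1.test.com
#advertised.host.name=192.168.0.11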


On 10 August 2017 at 00:17, Ascot Moss  wrote:

> I commented out both #host.name and #advertised.host.name
>
> (new server.properties)
> broker.id=11
> port=9093
> #host.name=n1.test.com
> #advertised.host.name=192.168.0.11
> allow.everyone.if.no.acl.found=true
> super.users=User:CN=n1.test.com,OU=TEST,O=TEST,L=TEST,ST=TEST,C=TEST
> listeners=SSL://n1.test.com:9093
> advertised.listeners=SSL://n1.test.com:9093
> ssl.client.auth=required
> ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
> ssl.keystore.type=JKS
> ssl.truststore.type=JKS
> security.inter.broker.protocol=SSL
> ssl.keystore.location=/home/kafka/kafka.server.keystore.jks
> ssl.keystore.password=Test2017
> ssl.key.password=Test2017
> ssl.truststore.location=/home/kafka/kafka.server.truststore.jks
> ssl.truststore.password=Test2017
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> principal.builder.class=org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
> num.replica.fetchers=4
> replica.fetch.max.bytes=1048576
> replica.fetch.wait.max.ms=500
> replica.high.watermark.checkpoint.interval.ms=5000
> replica.socket.timeout.ms=3
> replica.socket.receive.buffer.bytes=65536
> replica.lag.time.max.ms=1
> controller.socket.timeout.ms=3
> controller.message.queue.size=10
> default.replication.factor=3
> log.dirs=/usr/log/kafka
> kafka.logs.dir=/usr/log/kafka
> num.partitions=20
> message.max.bytes=100
> auto.create.topics.enable=true
> log.index.interval.bytes=4096
> log.index.size.max.bytes=10485760
> log.retention.hours=720
> log.flush.interval.ms=1
> log.flush.interval.messages=2
> log.flush.scheduler.interval.ms=2000
> log.roll.hours=168
> log.retention.check.interval.ms=30
> log.segment.bytes=1073741824
> delete.topic.enable=true
> socket.request.max.bytes=104857600
> socket.receive.buffer.bytes=1048576
> socket.send.buffer.bytes=1048576
> num.io.threads=8
> num.network.threads=8
> queued.max.requests=16
> fetch.purgatory.purge.interval.requests=100
> producer.purgatory.purge.interval.requests=100
> zookeeper.connect=n1:2181,n2:2181,n3:2181
> zookeeper.connection.timeout.ms=2000
> zookeeper.sync.time.ms=2000
>
>
> (producer.properties)
> bootstrap.servers=n1.test.com:9093
> security.protocol=SSL
> ssl.truststore.location=/home/kafka/kafka.client.truststore.jks
> ssl.truststore.password=testkafka
> ssl.keystore.location=/home/kafka/kafka.client.keystore.jks
> ssl.keystore.password=testkafka
> ssl.key.password=testkafka
>
>
> (run producer)
> ./bin/kafka-console-producer.sh \
> --broker-list n1:9093 \
> --producer.config /home/kafka/config/producer.n1.properties \
> --sync --topic test02
>
>
> (got error)
>
> [2017-08-10 07:10:31,881] ERROR Error when sending message to topic test02
> with key: null, value: 0 bytes with error:
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s) for
> test02-0: 1518 ms has passed since batch creation plus linger time
>
> [2017-08-10 07:10:32,230] ERROR Error when sending message to topic test02
> with key: null, value: 0 bytes with error:
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s) for
> test02-1: 1543 ms has passed since batch creation plus linger time
>
>
>
> By the way, where should I set "-Djavax.security.debug=all" for Kafka?
>
>
> On Thu, Aug 10, 2017 at 5:25 AM, M. Manna  wrote:
>
> > if you remove host.name, advertised.host.name and port from
> > server.properties, does it work for you?
> >
> > I am using SSL without ACL. It seems to be working fine.
> >
> > On 9 August 2017 at 22:03, Ascot Moss  wrote:
> >
> > > About:
> > > zookeeper-shell.sh localhost:2181
> > > get /brokers/ids/11
> > >
> > >
> > > The result:
> > >
> > > zookeeper-shell.sh n1.test.com:2181
> > >
> > > Connecting to n1.test.com:2181
> > >
> > > Welcome to ZooKeeper!
> > >
> > > JLine support is disabled
> > >
> > > WATCHER::
> > >
> > > WatchedEvent state:SyncConnected type:None path:null
> > >
> > > WATCHER::
> > >
> > >
> > >
> > >
> > > get /brokers/ids/11
> > >
> > > WatchedEvent state:SaslAuthenticated type:None path:null
> > >
> > > {"listener_security_protocol_map":{"SSL":"SSL"},"endpoints":["SSL://
> > > n1.test.com:9093
> > > "],"jmx_port":-1,"host":null,"timestamp":"1502310695312","
> > > port":-1,"version":4}
> > >
> > > cZxid = 0x40002787d
> > >
>> > > > ctime = Thu Aug 10 04:31:37 HKT 2017

Re: Create Topic Error: Create Topic Error and cannot write to console producer

2017-08-09 Thread Ascot Moss
I commented out both #host.name and #advertised.host.name

(new server.properties)
broker.id=11
port=9093
#host.name=n1.test.com
#advertised.host.name=192.168.0.11
allow.everyone.if.no.acl.found=true
super.users=User:CN=n1.test.com,OU=TEST,O=TEST,L=TEST,ST=TEST,C=TEST
listeners=SSL://n1.test.com:9093
advertised.listeners=SSL://n1.test.com:9093
ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type=JKS
ssl.truststore.type=JKS
security.inter.broker.protocol=SSL
ssl.keystore.location=/home/kafka/kafka.server.keystore.jks
ssl.keystore.password=Test2017
ssl.key.password=Test2017
ssl.truststore.location=/home/kafka/kafka.server.truststore.jks
ssl.truststore.password=Test2017
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
principal.builder.class=org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
num.replica.fetchers=4
replica.fetch.max.bytes=1048576
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.socket.timeout.ms=3
replica.socket.receive.buffer.bytes=65536
replica.lag.time.max.ms=1
controller.socket.timeout.ms=3
controller.message.queue.size=10
default.replication.factor=3
log.dirs=/usr/log/kafka
kafka.logs.dir=/usr/log/kafka
num.partitions=20
message.max.bytes=100
auto.create.topics.enable=true
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.hours=720
log.flush.interval.ms=1
log.flush.interval.messages=2
log.flush.scheduler.interval.ms=2000
log.roll.hours=168
log.retention.check.interval.ms=30
log.segment.bytes=1073741824
delete.topic.enable=true
socket.request.max.bytes=104857600
socket.receive.buffer.bytes=1048576
socket.send.buffer.bytes=1048576
num.io.threads=8
num.network.threads=8
queued.max.requests=16
fetch.purgatory.purge.interval.requests=100
producer.purgatory.purge.interval.requests=100
zookeeper.connect=n1:2181,n2:2181,n3:2181
zookeeper.connection.timeout.ms=2000
zookeeper.sync.time.ms=2000


(producer.properties)
bootstrap.servers=n1.test.com:9093
security.protocol=SSL
ssl.truststore.location=/home/kafka/kafka.client.truststore.jks
ssl.truststore.password=testkafka
ssl.keystore.location=/home/kafka/kafka.client.keystore.jks
ssl.keystore.password=testkafka
ssl.key.password=testkafka


(run producer)
./bin/kafka-console-producer.sh \
--broker-list n1:9093 \
--producer.config /home/kafka/config/producer.n1.properties \
--sync --topic test02


(got error)

[2017-08-10 07:10:31,881] ERROR Error when sending message to topic test02
with key: null, value: 0 bytes with error:
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s) for
test02-0: 1518 ms has passed since batch creation plus linger time

[2017-08-10 07:10:32,230] ERROR Error when sending message to topic test02
with key: null, value: 0 bytes with error:
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s) for
test02-1: 1543 ms has passed since batch creation plus linger time



By the way, where should I set "-Djavax.security.debug=all" for Kafka?


On Thu, Aug 10, 2017 at 5:25 AM, M. Manna  wrote:

> if you remove host.name, advertised.host.name and port from
> server.properties, does it work for you?
>
> I am using SSL without ACL. It seems to be working fine.
>
> On 9 August 2017 at 22:03, Ascot Moss  wrote:
>
> > About:
> > zookeeper-shell.sh localhost:2181
> > get /brokers/ids/11
> >
> >
> > The result:
> >
> > zookeeper-shell.sh n1.test.com:2181
> >
> > Connecting to n1.test.com:2181
> >
> > Welcome to ZooKeeper!
> >
> > JLine support is disabled
> >
> > WATCHER::
> >
> > WatchedEvent state:SyncConnected type:None path:null
> >
> > WATCHER::
> >
> >
> >
> >
> > get /brokers/ids/11
> >
> > WatchedEvent state:SaslAuthenticated type:None path:null
> >
> > {"listener_security_protocol_map":{"SSL":"SSL"},"endpoints":["SSL://
> > n1.test.com:9093
> > "],"jmx_port":-1,"host":null,"timestamp":"1502310695312","
> > port":-1,"version":4}
> >
> > cZxid = 0x40002787d
> >
> > ctime = Thu Aug 10 04:31:37 HKT 2017
> >
> > mZxid = 0x40002787d
> >
> > mtime = Thu Aug 10 04:31:37 HKT 2017
> >
> > pZxid = 0x40002787d
> >
> > cversion = 0
> >
> > dataVersion = 0
> >
> > aclVersion = 0
> >
> > ephemeralOwner = 0x35d885c689c00a6
> >
> > dataLength = 168
> >
> > numChildren = 0
> >
> > On Thu, Aug 10, 2017 at 4:46 AM, Ascot Moss 
> wrote:
> >
> > > About:  zookeeper-shell.sh localhost:2181
> > > get /brokers/ids/11
> > >
> > > The result:
> > >
> > > zookeeper-shell.sh n1.test.com:2181
> > >
> > > Connecting to n1.test.com:2181
> > >
> > > Welcome to ZooKeeper!
> > >
> > > JLine support is disabled
> > >
> > > WATCHER::
> > >
> > > WatchedEvent state:SyncConnected type:None path:null
> > >
> > > WATCHER::
> > >
> > > WatchedEvent state:SaslAuthenticated type:None path:null
> > >
> > >
> 

Re: Create Topic Error: Create Topic Error and cannot write to console producer

2017-08-09 Thread M. Manna
if you remove host.name, advertised.host.name and port from
server.properties, does it work for you?

I am using SSL without ACL. It seems to be working fine.

On 9 August 2017 at 22:03, Ascot Moss  wrote:

> About:
> zookeeper-shell.sh localhost:2181
> get /brokers/ids/11
>
>
> The result:
>
> zookeeper-shell.sh n1.test.com:2181
>
> Connecting to n1.test.com:2181
>
> Welcome to ZooKeeper!
>
> JLine support is disabled
>
> WATCHER::
>
> WatchedEvent state:SyncConnected type:None path:null
>
> WATCHER::
>
>
>
>
> get /brokers/ids/11
>
> WatchedEvent state:SaslAuthenticated type:None path:null
>
> {"listener_security_protocol_map":{"SSL":"SSL"},"endpoints":["SSL://
> n1.test.com:9093
> "],"jmx_port":-1,"host":null,"timestamp":"1502310695312","
> port":-1,"version":4}
>
> cZxid = 0x40002787d
>
> ctime = Thu Aug 10 04:31:37 HKT 2017
>
> mZxid = 0x40002787d
>
> mtime = Thu Aug 10 04:31:37 HKT 2017
>
> pZxid = 0x40002787d
>
> cversion = 0
>
> dataVersion = 0
>
> aclVersion = 0
>
> ephemeralOwner = 0x35d885c689c00a6
>
> dataLength = 168
>
> numChildren = 0
>
> On Thu, Aug 10, 2017 at 4:46 AM, Ascot Moss  wrote:
>
> > About:  zookeeper-shell.sh localhost:2181
> > get /brokers/ids/11
> >
> > The result:
> >
> > zookeeper-shell.sh n1.test.com:2181
> >
> > Connecting to n1.test.com:2181
> >
> > Welcome to ZooKeeper!
> >
> > JLine support is disabled
> >
> > WATCHER::
> >
> > WatchedEvent state:SyncConnected type:None path:null
> >
> > WATCHER::
> >
> > WatchedEvent state:SaslAuthenticated type:None path:null
> >
> >
> > On Thu, Aug 10, 2017 at 4:43 AM, Ascot Moss 
> wrote:
> >
> >> FYI, about zookeeper, I used my existing zookeeper (as I have existing
> >> zookeeper up and running, which is also used for hbase)
> >>
>> ZooKeeper version: 3.4.10
> >>
> >> zoo.cfg
> >> ##
> >>
> >> tickTime=2000
> >>
> >> initLimit=10
> >>
> >> syncLimit=5
> >>
> >> dataDir=/usr/local/zookeeper/data
> >>
> >> dataLogDir=/usr/local/zookeeper/datalog
> >>
> >> clientPort=2181
> >>
> >> maxClientCnxns=60
> >>
> >> server.1=n1.test.com:2888:3888
> >>
> >> server.2=n2.test.com:2888:3888
> >>
> >> server.3=n3.test.com:2888:3888
> >>
> >> authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
> >>
> >> jaasLoginRenew=360
> >>
> >> requireClientAuthScheme=sasl
> >>
> >> zookeeper.allowSaslFailedClients=false
> >>
> >> kerberos.removeHostFromPrincipal=true
> >>
> >> ##
> >>
> >>
> >>
> >> On Thu, Aug 10, 2017 at 4:35 AM, Ascot Moss 
> wrote:
> >>
> >>> server.properties
> >>>
> >>> ##
> >>>
> >>> broker.id=11
> >>>
> >>> port=9093
> >>>
> >>> host.name=n1
> >>>
> >>> advertised.host.name=192.168.0.11
> >>>
> >>> allow.everyone.if.no.acl.found=true
> >>>
> >>> super.users=User:CN=n1.test.com,OU=TEST,O=TEST,L=TEST,ST=TEST,C=TEST
> >>>
> >>> listeners=SSL://n1.test.com:9093 
> >>>
> >>> advertised.listeners=SSL://n1.test.com:9093 
> >>>
> >>> ssl.client.auth=required
> >>>
> >>> ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
> >>>
> >>> ssl.keystore.type=JKS
> >>>
> >>> ssl.truststore.type=JKS
> >>>
> >>> security.inter.broker.protocol=SSL
> >>>
> >>> ssl.keystore.location=/home/kafka/kafka.server.keystore.jks
> >>>
> >>> ssl.keystore.password=Test2017
> >>>
> >>> ssl.key.password=Test2017
> >>>
> >>> ssl.truststore.location=/home/kafka/kafka.server.truststore.jks
> >>>
> >>> ssl.truststore.password=Test2017
> >>>
> >>> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> >>>
> >>> principal.builder.class=org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
> >>>
> >>> num.replica.fetchers=4
> >>>
> >>> replica.fetch.max.bytes=1048576
> >>>
> >>> replica.fetch.wait.max.ms=500
> >>>
> >>> replica.high.watermark.checkpoint.interval.ms=5000
> >>>
> >>> replica.socket.timeout.ms=3
> >>>
> >>> replica.socket.receive.buffer.bytes=65536
> >>>
> >>> replica.lag.time.max.ms=1
> >>>
> >>> controller.socket.timeout.ms=3
> >>>
> >>> controller.message.queue.size=10
> >>>
> >>> default.replication.factor=3
> >>>
> >>> log.dirs=/usr/log/kafka
> >>>
> >>> kafka.logs.dir=/usr/log/kafka
> >>>
> >>> num.partitions=20
> >>>
> >>> message.max.bytes=100
> >>>
> >>> auto.create.topics.enable=true
> >>>
> >>> log.index.interval.bytes=4096
> >>>
> >>> log.index.size.max.bytes=10485760
> >>>
> >>> log.retention.hours=720
> >>>
> >>> log.flush.interval.ms=1
> >>>
> >>> log.flush.interval.messages=2
> >>>
> >>> log.flush.scheduler.interval.ms=2000
> >>>
> >>> log.roll.hours=168
> >>>
> >>> log.retention.check.interval.ms=30
> >>>
> >>> log.segment.bytes=1073741824
> >>>
> >>> delete.topic.enable=true
> >>>
> >>> socket.request.max.bytes=104857600
> >>>
> >>> socket.receive.buffer.bytes=1048576
> >>>
> >>> socket.send.buffer.bytes=1048576
> >>>
> >>> num.io.threads=8
> >>>
> >>> num.network.threads=8
> >>>
> >>> queued.max.requests=16
> >>>
> 

Re: Create Topic Error: Create Topic Error and cannot write to console producer

2017-08-09 Thread Ascot Moss
Dear Manna,

Where can I set "-Djavax.security.debug=all" for Kafka?

Regards
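
(one answer: bin/kafka-run-class.sh appends $KAFKA_OPTS to the java command
line, so the flag can be exported before starting the broker or client; note
that the JSSE switch for tracing the TLS handshake itself is
-Djavax.net.debug=all - a sketch)

export KAFKA_OPTS="-Djavax.net.debug=all"
./bin/kafka-console-producer.sh --broker-list n1:9093 \
--producer.config /home/kafka/config/producer.n1.properties --sync --topic test02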

On Thu, Aug 10, 2017 at 5:08 AM, Ascot Moss  wrote:

> (I have 3 test nodes)
>
> get /brokers/ids/11
>
> {"listener_security_protocol_map":{"SSL":"SSL"},"endpoints":["SSL://
> n1.test.com:9093"],"jmx_port":-1,"host":null,"timestamp":"1502310695312","
> port":-1,"version":4}
>
> cZxid = 0x40002787d
>
> ctime = Thu Aug 10 04:31:37 HKT 2017
>
> mZxid = 0x40002787d
>
> mtime = Thu Aug 10 04:31:37 HKT 2017
>
> pZxid = 0x40002787d
>
> cversion = 0
>
> dataVersion = 0
>
> aclVersion = 0
>
> ephemeralOwner = 0x35d885c689c00a6
>
> dataLength = 168
>
> numChildren = 0
>
>
> get /brokers/ids/12
>
> {"listener_security_protocol_map":{"SSL":"SSL"},"endpoints":["SSL://
> n2.test.com:9093"],"jmx_port":-1,"host":null,"timestamp":"1502284073115","
> port":-1,"version":4}
>
> cZxid = 0x400026c66
>
> ctime = Wed Aug 09 21:07:53 HKT 2017
>
> mZxid = 0x400026c66
>
> mtime = Wed Aug 09 21:07:53 HKT 2017
>
> pZxid = 0x400026c66
>
> cversion = 0
>
> dataVersion = 0
>
> aclVersion = 0
>
> ephemeralOwner = 0x25d6b41469a0110
>
> dataLength = 168
>
> numChildren = 0
>
>
> get /brokers/ids/13
>
> {"listener_security_protocol_map":{"SSL":"SSL"},"endpoints":["SSL://
> n3.test.com:9093"],"jmx_port":-1,"host":null,"timestamp":"1502284080461","
> port":-1,"version":4}
>
> cZxid = 0x400026c6c
>
> ctime = Wed Aug 09 21:07:59 HKT 2017
>
> mZxid = 0x400026c6c
>
> mtime = Wed Aug 09 21:07:59 HKT 2017
>
> pZxid = 0x400026c6c
>
> cversion = 0
>
> dataVersion = 0
>
> aclVersion = 0
>
> ephemeralOwner = 0x35d885c689c00a2
>
> dataLength = 168
>
> numChildren = 0
>
> On Thu, Aug 10, 2017 at 5:03 AM, Ascot Moss  wrote:
>
>>
>> About:
>> zookeeper-shell.sh localhost:2181
>> get /brokers/ids/11
>>
>>
>> The result:
>>
>> zookeeper-shell.sh n1.test.com:2181
>>
>> Connecting to n1.test.com:2181
>>
>> Welcome to ZooKeeper!
>>
>> JLine support is disabled
>>
>> WATCHER::
>>
>> WatchedEvent state:SyncConnected type:None path:null
>>
>> WATCHER::
>>
>>
>>
>>
>> get /brokers/ids/11
>>
>> WatchedEvent state:SaslAuthenticated type:None path:null
>>
>> {"listener_security_protocol_map":{"SSL":"SSL"},"endpoints":["SSL://
>> n1.test.com:9093"],"jmx_port":-1,"host":null,"timest
>> amp":"1502310695312","port":-1,"version":4}
>>
>> cZxid = 0x40002787d
>>
>> ctime = Thu Aug 10 04:31:37 HKT 2017
>>
>> mZxid = 0x40002787d
>>
>> mtime = Thu Aug 10 04:31:37 HKT 2017
>>
>> pZxid = 0x40002787d
>>
>> cversion = 0
>>
>> dataVersion = 0
>>
>> aclVersion = 0
>>
>> ephemeralOwner = 0x35d885c689c00a6
>>
>> dataLength = 168
>>
>> numChildren = 0
>>
>> On Thu, Aug 10, 2017 at 4:46 AM, Ascot Moss  wrote:
>>
>>> About:  zookeeper-shell.sh localhost:2181
>>> get /brokers/ids/11
>>>
>>> The result:
>>>
>>> zookeeper-shell.sh n1.test.com:2181
>>>
>>> Connecting to n1.test.com:2181
>>>
>>> Welcome to ZooKeeper!
>>>
>>> JLine support is disabled
>>>
>>> WATCHER::
>>>
>>> WatchedEvent state:SyncConnected type:None path:null
>>>
>>> WATCHER::
>>>
>>> WatchedEvent state:SaslAuthenticated type:None path:null
>>>
>>>
>>> On Thu, Aug 10, 2017 at 4:43 AM, Ascot Moss 
>>> wrote:
>>>
 FYI, about zookeeper, I used my existing zookeeper (as I have existing
 zookeeper up and running, which is also used for hbase)

 ZooKeeper version: 3.4.10

 zoo.cfg
 ##

 tickTime=2000

 initLimit=10

 syncLimit=5

 dataDir=/usr/local/zookeeper/data

 dataLogDir=/usr/local/zookeeper/datalog

 clientPort=2181

 maxClientCnxns=60

 server.1=n1.test.com:2888:3888

 server.2=n2.test.com:2888:3888

 server.3=n3.test.com:2888:3888

 authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider

 jaasLoginRenew=360

 requireClientAuthScheme=sasl

 zookeeper.allowSaslFailedClients=false

 kerberos.removeHostFromPrincipal=true

 ##



 On Thu, Aug 10, 2017 at 4:35 AM, Ascot Moss 
 wrote:

> server.properties
>
> ##
>
> broker.id=11
>
> port=9093
>
> host.name=n1
>
> advertised.host.name=192.168.0.11
>
> allow.everyone.if.no.acl.found=true
>
> super.users=User:CN=n1.test.com,OU=TEST,O=TEST,L=TEST,ST=TEST,C=TEST
>
> listeners=SSL://n1.test.com:9093 
>
> advertised.listeners=SSL://n1.test.com:9093 
>
> ssl.client.auth=required
>
> ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
>
> ssl.keystore.type=JKS
>
> ssl.truststore.type=JKS
>
> security.inter.broker.protocol=SSL
>
> ssl.keystore.location=/home/kafka/kafka.server.keystore.jks
>
> ssl.keystore.password=Test2017
>
> ssl.key.password=Test2017
>
> 

Re: Create Topic Error: Create Topic Error and cannot write to console producer

2017-08-09 Thread Ascot Moss
(I have 3 test nodes)

get /brokers/ids/11

{"listener_security_protocol_map":{"SSL":"SSL"},"endpoints":["SSL://
n1.test.com:9093
"],"jmx_port":-1,"host":null,"timestamp":"1502310695312","port":-1,"version":4}

cZxid = 0x40002787d

ctime = Thu Aug 10 04:31:37 HKT 2017

mZxid = 0x40002787d

mtime = Thu Aug 10 04:31:37 HKT 2017

pZxid = 0x40002787d

cversion = 0

dataVersion = 0

aclVersion = 0

ephemeralOwner = 0x35d885c689c00a6

dataLength = 168

numChildren = 0


get /brokers/ids/12

{"listener_security_protocol_map":{"SSL":"SSL"},"endpoints":["SSL://
n2.test.com:9093
"],"jmx_port":-1,"host":null,"timestamp":"1502284073115","port":-1,"version":4}

cZxid = 0x400026c66

ctime = Wed Aug 09 21:07:53 HKT 2017

mZxid = 0x400026c66

mtime = Wed Aug 09 21:07:53 HKT 2017

pZxid = 0x400026c66

cversion = 0

dataVersion = 0

aclVersion = 0

ephemeralOwner = 0x25d6b41469a0110

dataLength = 168

numChildren = 0


get /brokers/ids/13

{"listener_security_protocol_map":{"SSL":"SSL"},"endpoints":["SSL://
n3.test.com:9093
"],"jmx_port":-1,"host":null,"timestamp":"1502284080461","port":-1,"version":4}

cZxid = 0x400026c6c

ctime = Wed Aug 09 21:07:59 HKT 2017

mZxid = 0x400026c6c

mtime = Wed Aug 09 21:07:59 HKT 2017

pZxid = 0x400026c6c

cversion = 0

dataVersion = 0

aclVersion = 0

ephemeralOwner = 0x35d885c689c00a2

dataLength = 168

numChildren = 0
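
(note: host is null and port is -1 in these registrations, so clients rely
solely on the endpoints entry; n1/n2/n3.test.com must therefore resolve and
be reachable on 9093 from the producer machine - a quick check, assuming nc
is installed)

nc -vz n1.test.com 9093
nc -vz n2.test.com 9093
nc -vz n3.test.com 9093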

On Thu, Aug 10, 2017 at 5:03 AM, Ascot Moss  wrote:

>
> About:
> zookeeper-shell.sh localhost:2181
> get /brokers/ids/11
>
>
> The result:
>
> zookeeper-shell.sh n1.test.com:2181
>
> Connecting to n1.test.com:2181
>
> Welcome to ZooKeeper!
>
> JLine support is disabled
>
> WATCHER::
>
> WatchedEvent state:SyncConnected type:None path:null
>
> WATCHER::
>
>
>
>
> get /brokers/ids/11
>
> WatchedEvent state:SaslAuthenticated type:None path:null
>
> {"listener_security_protocol_map":{"SSL":"SSL"},"endpoints":["SSL://
> n1.test.com:9093"],"jmx_port":-1,"host":null,"timestamp":"1502310695312","
> port":-1,"version":4}
>
> cZxid = 0x40002787d
>
> ctime = Thu Aug 10 04:31:37 HKT 2017
>
> mZxid = 0x40002787d
>
> mtime = Thu Aug 10 04:31:37 HKT 2017
>
> pZxid = 0x40002787d
>
> cversion = 0
>
> dataVersion = 0
>
> aclVersion = 0
>
> ephemeralOwner = 0x35d885c689c00a6
>
> dataLength = 168
>
> numChildren = 0
>
> On Thu, Aug 10, 2017 at 4:46 AM, Ascot Moss  wrote:
>
>> About:  zookeeper-shell.sh localhost:2181
>> get /brokers/ids/11
>>
>> The result:
>>
>> zookeeper-shell.sh n1.test.com:2181
>>
>> Connecting to n1.test.com:2181
>>
>> Welcome to ZooKeeper!
>>
>> JLine support is disabled
>>
>> WATCHER::
>>
>> WatchedEvent state:SyncConnected type:None path:null
>>
>> WATCHER::
>>
>> WatchedEvent state:SaslAuthenticated type:None path:null
>>
>>
>> On Thu, Aug 10, 2017 at 4:43 AM, Ascot Moss  wrote:
>>
>>> FYI, about zookeeper, I used my existing zookeeper (as I have existing
>>> zookeeper up and running, which is also used for hbase)
>>>
>>> ZooKeeper version: 3.4.10
>>>
>>> zoo.cfg
>>> ##
>>>
>>> tickTime=2000
>>>
>>> initLimit=10
>>>
>>> syncLimit=5
>>>
>>> dataDir=/usr/local/zookeeper/data
>>>
>>> dataLogDir=/usr/local/zookeeper/datalog
>>>
>>> clientPort=2181
>>>
>>> maxClientCnxns=60
>>>
>>> server.1=n1.test.com:2888:3888
>>>
>>> server.2=n2.test.com:2888:3888
>>>
>>> server.3=n3.test.com:2888:3888
>>>
>>> authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
>>>
>>> jaasLoginRenew=360
>>>
>>> requireClientAuthScheme=sasl
>>>
>>> zookeeper.allowSaslFailedClients=false
>>>
>>> kerberos.removeHostFromPrincipal=true
>>>
>>> ##
>>>
>>>
>>>
>>> On Thu, Aug 10, 2017 at 4:35 AM, Ascot Moss 
>>> wrote:
>>>
 server.properties

 ##

 broker.id=11

 port=9093

 host.name=n1

 advertised.host.name=192.168.0.11

 allow.everyone.if.no.acl.found=true

 super.users=User:CN=n1.test.com,OU=TEST,O=TEST,L=TEST,ST=TEST,C=TEST

 listeners=SSL://n1.test.com:9093 

 advertised.listeners=SSL://n1.test.com:9093 

 ssl.client.auth=required

 ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1

 ssl.keystore.type=JKS

 ssl.truststore.type=JKS

 security.inter.broker.protocol=SSL

 ssl.keystore.location=/home/kafka/kafka.server.keystore.jks

 ssl.keystore.password=Test2017

 ssl.key.password=Test2017

 ssl.truststore.location=/home/kafka/kafka.server.truststore.jks

 ssl.truststore.password=Test2017

 authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

 principal.builder.class=org.apache.kafka.common.security.auth.DefaultPrincipalBuilder

 num.replica.fetchers=4

 replica.fetch.max.bytes=1048576

 replica.fetch.wait.max.ms=500

 replica.high.watermark.checkpoint.interval.ms=5000

 replica.socket.timeout.ms=3

Re: Create Topic Error: Create Topic Error and cannot write to console producer

2017-08-09 Thread Ascot Moss
About:
zookeeper-shell.sh localhost:2181
get /brokers/ids/11


The result:

zookeeper-shell.sh n1.test.com:2181

Connecting to n1.test.com:2181

Welcome to ZooKeeper!

JLine support is disabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null

WATCHER::




get /brokers/ids/11

WatchedEvent state:SaslAuthenticated type:None path:null

{"listener_security_protocol_map":{"SSL":"SSL"},"endpoints":["SSL://
n1.test.com:9093
"],"jmx_port":-1,"host":null,"timestamp":"1502310695312","port":-1,"version":4}

cZxid = 0x40002787d

ctime = Thu Aug 10 04:31:37 HKT 2017

mZxid = 0x40002787d

mtime = Thu Aug 10 04:31:37 HKT 2017

pZxid = 0x40002787d

cversion = 0

dataVersion = 0

aclVersion = 0

ephemeralOwner = 0x35d885c689c00a6

dataLength = 168

numChildren = 0

On Thu, Aug 10, 2017 at 4:46 AM, Ascot Moss  wrote:

> About:  zookeeper-shell.sh localhost:2181
> get /brokers/ids/11
>
> The result:
>
> zookeeper-shell.sh n1.test.com:2181
>
> Connecting to n1.test.com:2181
>
> Welcome to ZooKeeper!
>
> JLine support is disabled
>
> WATCHER::
>
> WatchedEvent state:SyncConnected type:None path:null
>
> WATCHER::
>
> WatchedEvent state:SaslAuthenticated type:None path:null
>
>
> On Thu, Aug 10, 2017 at 4:43 AM, Ascot Moss  wrote:
>
>> FYI, about zookeeper, I used my existing zookeeper (as I have existing
>> zookeeper up and running, which is also used for hbase)
>>
>> ZooKeeper version: 3.4.10
>>
>> zoo.cfg
>> ##
>>
>> tickTime=2000
>>
>> initLimit=10
>>
>> syncLimit=5
>>
>> dataDir=/usr/local/zookeeper/data
>>
>> dataLogDir=/usr/local/zookeeper/datalog
>>
>> clientPort=2181
>>
>> maxClientCnxns=60
>>
>> server.1=n1.test.com:2888:3888
>>
>> server.2=n2.test.com:2888:3888
>>
>> server.3=n3.test.com:2888:3888
>>
>> authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
>>
>> jaasLoginRenew=360
>>
>> requireClientAuthScheme=sasl
>>
>> zookeeper.allowSaslFailedClients=false
>>
>> kerberos.removeHostFromPrincipal=true
>>
>> ##
>>
>>
>>
>> On Thu, Aug 10, 2017 at 4:35 AM, Ascot Moss  wrote:
>>
>>> server.properties
>>>
>>> ##
>>>
>>> broker.id=11
>>>
>>> port=9093
>>>
>>> host.name=n1
>>>
>>> advertised.host.name=192.168.0.11
>>>
>>> allow.everyone.if.no.acl.found=true
>>>
>>> super.users=User:CN=n1.test.com,OU=TEST,O=TEST,L=TEST,ST=TEST,C=TEST
>>>
>>> listeners=SSL://n1.test.com:9093 
>>>
>>> advertised.listeners=SSL://n1.test.com:9093 
>>>
>>> ssl.client.auth=required
>>>
>>> ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
>>>
>>> ssl.keystore.type=JKS
>>>
>>> ssl.truststore.type=JKS
>>>
>>> security.inter.broker.protocol=SSL
>>>
>>> ssl.keystore.location=/home/kafka/kafka.server.keystore.jks
>>>
>>> ssl.keystore.password=Test2017
>>>
>>> ssl.key.password=Test2017
>>>
>>> ssl.truststore.location=/home/kafka/kafka.server.truststore.jks
>>>
>>> ssl.truststore.password=Test2017
>>>
>>> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
>>>
>>> principal.builder.class=org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
>>>
>>> num.replica.fetchers=4
>>>
>>> replica.fetch.max.bytes=1048576
>>>
>>> replica.fetch.wait.max.ms=500
>>>
>>> replica.high.watermark.checkpoint.interval.ms=5000
>>>
>>> replica.socket.timeout.ms=3
>>>
>>> replica.socket.receive.buffer.bytes=65536
>>>
>>> replica.lag.time.max.ms=1
>>>
>>> controller.socket.timeout.ms=3
>>>
>>> controller.message.queue.size=10
>>>
>>> default.replication.factor=3
>>>
>>> log.dirs=/usr/log/kafka
>>>
>>> kafka.logs.dir=/usr/log/kafka
>>>
>>> num.partitions=20
>>>
>>> message.max.bytes=100
>>>
>>> auto.create.topics.enable=true
>>>
>>> log.index.interval.bytes=4096
>>>
>>> log.index.size.max.bytes=10485760
>>>
>>> log.retention.hours=720
>>>
>>> log.flush.interval.ms=1
>>>
>>> log.flush.interval.messages=2
>>>
>>> log.flush.scheduler.interval.ms=2000
>>>
>>> log.roll.hours=168
>>>
>>> log.retention.check.interval.ms=30
>>>
>>> log.segment.bytes=1073741824
>>>
>>> delete.topic.enable=true
>>>
>>> socket.request.max.bytes=104857600
>>>
>>> socket.receive.buffer.bytes=1048576
>>>
>>> socket.send.buffer.bytes=1048576
>>>
>>> num.io.threads=8
>>>
>>> num.network.threads=8
>>>
>>> queued.max.requests=16
>>>
>>> fetch.purgatory.purge.interval.requests=100
>>>
>>> producer.purgatory.purge.interval.requests=100
>>>
>>> zookeeper.connect=n1:2181,n2:2181,n3:2181
>>>
>>> zookeeper.connection.timeout.ms=2000
>>>
>>> zookeeper.sync.time.ms=2000
>>>
>>> ##
>>>
>>>
>>>
>>>
>>>
>>> producer.properties
>>>
>>> ##
>>>
>>> bootstrap.servers=n1.test.com:9093 
>>>
>>> security.protocol=SSL
>>>
>>> ssl.truststore.location=/home/kafka/kafka.client.truststore.jks
>>>
>>> ssl.truststore.password=testkafka
>>>
>>> ssl.keystore.location=/home/kafka/kafka.client.keystore.jks
>>>
>>> ssl.keystore.password=testkafka
>>>
>>> ssl.key.password=testkafka
>>> 

Re: Create Topic Error: Create Topic Error and cannot write to console producer

2017-08-09 Thread Ascot Moss
About:  zookeeper-shell.sh localhost:2181
get /brokers/ids/11

The result:

zookeeper-shell.sh n1.test.com:2181

Connecting to n1.test.com:2181

Welcome to ZooKeeper!

JLine support is disabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null

WATCHER::

WatchedEvent state:SaslAuthenticated type:None path:null
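
(sketch: all registered broker ids can be listed from the same shell, which
should show 11, 12 and 13 if every broker is connected)

zookeeper-shell.sh n1.test.com:2181
ls /brokers/ids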


On Thu, Aug 10, 2017 at 4:43 AM, Ascot Moss  wrote:

> FYI, about zookeeper, I used my existing zookeeper (as I have existing
> zookeeper up and running, which is also used for hbase)
>
> ZooKeeper version: 3.4.10
>
> zoo.cfg
> ##
>
> tickTime=2000
>
> initLimit=10
>
> syncLimit=5
>
> dataDir=/usr/local/zookeeper/data
>
> dataLogDir=/usr/local/zookeeper/datalog
>
> clientPort=2181
>
> maxClientCnxns=60
>
> server.1=n1.test.com:2888:3888
>
> server.2=n2.test.com:2888:3888
>
> server.3=n3.test.com:2888:3888
>
> authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
>
> jaasLoginRenew=360
>
> requireClientAuthScheme=sasl
>
> zookeeper.allowSaslFailedClients=false
>
> kerberos.removeHostFromPrincipal=true
>
> ##
>
>
>
> On Thu, Aug 10, 2017 at 4:35 AM, Ascot Moss  wrote:
>
>> server.properties
>>
>> ##
>>
>> broker.id=11
>>
>> port=9093
>>
>> host.name=n1
>>
>> advertised.host.name=192.168.0.11
>>
>> allow.everyone.if.no.acl.found=true
>>
>> super.users=User:CN=n1.test.com,OU=TEST,O=TEST,L=TEST,ST=TEST,C=TEST
>>
>> listeners=SSL://n1.test.com:9093 
>>
>> advertised.listeners=SSL://n1.test.com:9093 
>>
>> ssl.client.auth=required
>>
>> ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
>>
>> ssl.keystore.type=JKS
>>
>> ssl.truststore.type=JKS
>>
>> security.inter.broker.protocol=SSL
>>
>> ssl.keystore.location=/home/kafka/kafka.server.keystore.jks
>>
>> ssl.keystore.password=Test2017
>>
>> ssl.key.password=Test2017
>>
>> ssl.truststore.location=/home/kafka/kafka.server.truststore.jks
>>
>> ssl.truststore.password=Test2017
>>
>> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
>>
>> principal.builder.class=org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
>>
>> num.replica.fetchers=4
>>
>> replica.fetch.max.bytes=1048576
>>
>> replica.fetch.wait.max.ms=500
>>
>> replica.high.watermark.checkpoint.interval.ms=5000
>>
>> replica.socket.timeout.ms=3
>>
>> replica.socket.receive.buffer.bytes=65536
>>
>> replica.lag.time.max.ms=1
>>
>> controller.socket.timeout.ms=3
>>
>> controller.message.queue.size=10
>>
>> default.replication.factor=3
>>
>> log.dirs=/usr/log/kafka
>>
>> kafka.logs.dir=/usr/log/kafka
>>
>> num.partitions=20
>>
>> message.max.bytes=100
>>
>> auto.create.topics.enable=true
>>
>> log.index.interval.bytes=4096
>>
>> log.index.size.max.bytes=10485760
>>
>> log.retention.hours=720
>>
>> log.flush.interval.ms=1
>>
>> log.flush.interval.messages=2
>>
>> log.flush.scheduler.interval.ms=2000
>>
>> log.roll.hours=168
>>
>> log.retention.check.interval.ms=30
>>
>> log.segment.bytes=1073741824
>>
>> delete.topic.enable=true
>>
>> socket.request.max.bytes=104857600
>>
>> socket.receive.buffer.bytes=1048576
>>
>> socket.send.buffer.bytes=1048576
>>
>> num.io.threads=8
>>
>> num.network.threads=8
>>
>> queued.max.requests=16
>>
>> fetch.purgatory.purge.interval.requests=100
>>
>> producer.purgatory.purge.interval.requests=100
>>
>> zookeeper.connect=n1:2181,n2:2181,n3:2181
>>
>> zookeeper.connection.timeout.ms=2000
>>
>> zookeeper.sync.time.ms=2000
>>
>> ##
>>
>>
>>
>>
>>
>> producer.properties
>>
>> ##
>>
>> bootstrap.servers=n1.test.com:9093 
>>
>> security.protocol=SSL
>>
>> ssl.truststore.location=/home/kafka/kafka.client.truststore.jks
>>
>> ssl.truststore.password=testkafka
>>
>> ssl.keystore.location=/home/kafka/kafka.client.keystore.jks
>>
>> ssl.keystore.password=testkafka
>>
>> ssl.key.password=testkafka
>> #
>>
>>
>> (I had tried switching to another port; 9093 is the correct port.)
>>
>> On Thu, Aug 10, 2017 at 4:28 AM, M. Manna  wrote:
>>
>>> Your openssl test shows a connection on port 9092, but your previous
>>> messages show 9093 - is there a typo somewhere? Where is SSL running?
>>>
>>> Please share the following and don't leave any details out. This will
>>> only
>>> create more assumptions.
>>>
>>> 1) server.properties
>>> 2) Zookeeper.properties
>>>
>>> Also, run the following command (when the cluster is running)
>>> zookeeper-shell.sh localhost:2181
>>> get /brokers/ids/11
>>>
>>> Does it show that your broker #11 is connected?
>>>
>>>
>>>
>>>
>>> On 9 August 2017 at 21:17, Ascot Moss  wrote:
>>>
>>> > Dear Manna,
>>> >
>>> >
>>> > What's the status of your SSL? Have you verified that the setup is
>>> > working?
>>> > Yes, I used:
>>> >
>>> > openssl s_client -debug -connect n1.test.com:9092 -tls1
>>> > Output:
>>> >
>>> > CONNECTED(0003)
>>> >
>>> > write to 0x853e70 [0x89fd43] (155 bytes => 155 (0x9B))

Re: Create Topic Error: Create Topic Error and cannot write to console producer

2017-08-09 Thread Ascot Moss
FYI, about ZooKeeper: I am using my existing ZooKeeper ensemble (it is
already up and running, and is also used for HBase).

ZooKeeper version: 3.4.10

zoo.cfg
##

tickTime=2000

initLimit=10

syncLimit=5

dataDir=/usr/local/zookeeper/data

dataLogDir=/usr/local/zookeeper/datalog

clientPort=2181

maxClientCnxns=60

server.1=n1.test.com:2888:3888

server.2=n2.test.com:2888:3888

server.3=n3.test.com:2888:3888

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider

jaasLoginRenew=360

requireClientAuthScheme=sasl

zookeeper.allowSaslFailedClients=false

kerberos.removeHostFromPrincipal=true

##
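
(sketch of a quick ensemble health check using ZooKeeper's four-letter
commands, assuming nc is available on the node)

echo ruok | nc n1.test.com 2181
echo stat | nc n1.test.com 2181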



On Thu, Aug 10, 2017 at 4:35 AM, Ascot Moss  wrote:

> server.properties
>
> ##
>
> broker.id=11
>
> port=9093
>
> host.name=n1
>
> advertised.host.name=192.168.0.11
>
> allow.everyone.if.no.acl.found=true
>
> super.users=User:CN=n1.test.com,OU=TEST,O=TEST,L=TEST,ST=TEST,C=TEST
>
> listeners=SSL://n1.test.com:9093 
>
> advertised.listeners=SSL://n1.test.com:9093 
>
> ssl.client.auth=required
>
> ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
>
> ssl.keystore.type=JKS
>
> ssl.truststore.type=JKS
>
> security.inter.broker.protocol=SSL
>
> ssl.keystore.location=/home/kafka/kafka.server.keystore.jks
>
> ssl.keystore.password=Test2017
>
> ssl.key.password=Test2017
>
> ssl.truststore.location=/home/kafka/kafka.server.truststore.jks
>
> ssl.truststore.password=Test2017
>
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
>
> principal.builder.class=org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
>
> num.replica.fetchers=4
>
> replica.fetch.max.bytes=1048576
>
> replica.fetch.wait.max.ms=500
>
> replica.high.watermark.checkpoint.interval.ms=5000
>
> replica.socket.timeout.ms=3
>
> replica.socket.receive.buffer.bytes=65536
>
> replica.lag.time.max.ms=1
>
> controller.socket.timeout.ms=3
>
> controller.message.queue.size=10
>
> default.replication.factor=3
>
> log.dirs=/usr/log/kafka
>
> kafka.logs.dir=/usr/log/kafka
>
> num.partitions=20
>
> message.max.bytes=100
>
> auto.create.topics.enable=true
>
> log.index.interval.bytes=4096
>
> log.index.size.max.bytes=10485760
>
> log.retention.hours=720
>
> log.flush.interval.ms=1
>
> log.flush.interval.messages=2
>
> log.flush.scheduler.interval.ms=2000
>
> log.roll.hours=168
>
> log.retention.check.interval.ms=30
>
> log.segment.bytes=1073741824
>
> delete.topic.enable=true
>
> socket.request.max.bytes=104857600
>
> socket.receive.buffer.bytes=1048576
>
> socket.send.buffer.bytes=1048576
>
> num.io.threads=8
>
> num.network.threads=8
>
> queued.max.requests=16
>
> fetch.purgatory.purge.interval.requests=100
>
> producer.purgatory.purge.interval.requests=100
>
> zookeeper.connect=n1:2181,n2:2181,n3:2181
>
> zookeeper.connection.timeout.ms=2000
>
> zookeeper.sync.time.ms=2000
>
> ##
>
>
>
>
>
> producer.properties
>
> ##
>
> bootstrap.servers=n1.test.com:9093 
>
> security.protocol=SSL
>
> ssl.truststore.location=/home/kafka/kafka.client.truststore.jks
>
> ssl.truststore.password=testkafka
>
> ssl.keystore.location=/home/kafka/kafka.client.keystore.jks
>
> ssl.keystore.password=testkafka
>
> ssl.key.password=testkafka
> #
>
>
> (I had tried switching to another port; 9093 is the correct port.)
>
> On Thu, Aug 10, 2017 at 4:28 AM, M. Manna  wrote:
>
>> Your openssl test shows a connection on port 9092, but your previous
>> messages show 9093 - is there a typo somewhere? Where is SSL running?
>>
>> Please share the following and don't leave any details out. This will only
>> create more assumptions.
>>
>> 1) server.properties
>> 2) Zookeeper.properties
>>
>> Also, run the following command (when the cluster is running)
>> zookeeper-shell.sh localhost:2181
>> get /brokers/ids/11
>>
>> Does it show that your broker #11 is connected?
>>
>>
>>
>>
>> On 9 August 2017 at 21:17, Ascot Moss  wrote:
>>
>> > Dear Manna,
>> >
>> >
>> > What's the status of your SSL? Have you verified that the setup is
>> > working?
>> > Yes, I used:
>> >
>> > openssl s_client -debug -connect n1.test.com:9092 -tls1
>> > Output:
>> >
>> > CONNECTED(0003)
>> >
>> > write to 0x853e70 [0x89fd43] (155 bytes => 155 (0x9B))
>> >
>> >  - 16 03 01 00 96 01 00 00-92 03 01 59 8b 6d 0d b1
>>  ...Y.m..
>> > ...
>> >
>> > Server certificate
>> >
>> > -BEGIN CERTIFICATE-
>> >
>> > CwwCSEsxGT
>> >
>> > -END CERTIFICATE-
>> >
>> > ---
>> >
>> > SSL handshake has read 2470 bytes and written 161 bytes
>> >
>> > ---
>> >
>> > New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
>> >
>> > PSK identity hint: None
>> >
>> > Start Time: 1502309645
>> >
>> > Timeout   : 7200 (sec)
>> >
>> > Verify return code: 19 (self signed certificate in certificate chain)
>> >
>> > ---
>> >
>> > Regards
>> >
> > On Wed, Aug 9, 2017 at 10:29 PM, M. Manna  wrote:

Re: Create Topic Error: Create Topic Error and cannot write to console producer

2017-08-09 Thread Ascot Moss
server.properties

##

broker.id=11

port=9093

host.name=n1

advertised.host.name=192.168.0.11

allow.everyone.if.no.acl.found=true

super.users=User:CN=n1.test.com,OU=TEST,O=TEST,L=TEST,ST=TEST,C=TEST

listeners=SSL://n1.test.com:9093 

advertised.listeners=SSL://n1.test.com:9093 

ssl.client.auth=required

ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1

ssl.keystore.type=JKS

ssl.truststore.type=JKS

security.inter.broker.protocol=SSL

ssl.keystore.location=/home/kafka/kafka.server.keystore.jks

ssl.keystore.password=Test2017

ssl.key.password=Test2017

ssl.truststore.location=/home/kafka/kafka.server.truststore.jks

ssl.truststore.password=Test2017

authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

principal.builder.class=org.apache.kafka.common.security.auth.DefaultPrincipalBuilder

num.replica.fetchers=4

replica.fetch.max.bytes=1048576

replica.fetch.wait.max.ms=500

replica.high.watermark.checkpoint.interval.ms=5000

replica.socket.timeout.ms=3

replica.socket.receive.buffer.bytes=65536

replica.lag.time.max.ms=1

controller.socket.timeout.ms=3

controller.message.queue.size=10

default.replication.factor=3

log.dirs=/usr/log/kafka

kafka.logs.dir=/usr/log/kafka

num.partitions=20

message.max.bytes=100

auto.create.topics.enable=true

log.index.interval.bytes=4096

log.index.size.max.bytes=10485760

log.retention.hours=720

log.flush.interval.ms=1

log.flush.interval.messages=2

log.flush.scheduler.interval.ms=2000

log.roll.hours=168

log.retention.check.interval.ms=30

log.segment.bytes=1073741824

delete.topic.enable=true

socket.request.max.bytes=104857600

socket.receive.buffer.bytes=1048576

socket.send.buffer.bytes=1048576

num.io.threads=8

num.network.threads=8

queued.max.requests=16

fetch.purgatory.purge.interval.requests=100

producer.purgatory.purge.interval.requests=100

zookeeper.connect=n1:2181,n2:2181,n3:2181

zookeeper.connection.timeout.ms=2000

zookeeper.sync.time.ms=2000

##





producer.properties

##

bootstrap.servers=n1.test.com:9093 

security.protocol=SSL

ssl.truststore.location=/home/kafka/kafka.client.truststore.jks

ssl.truststore.password=testkafka

ssl.keystore.location=/home/kafka/kafka.client.keystore.jks

ssl.keystore.password=testkafka

ssl.key.password=testkafka
#


(I had tried switching to another port; 9093 is the correct port.)
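
(sketch: to confirm the handshake on 9093 directly and clear the "self signed
certificate" verify error, point openssl at the CA certificate that signed
the broker keys; ca-cert below is a hypothetical path to that file)

openssl s_client -connect n1.test.com:9093 -tls1 -CAfile ca-cert < /dev/null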

On Thu, Aug 10, 2017 at 4:28 AM, M. Manna  wrote:

> Your openssl test shows a connection on port 9092, but your previous
> messages show 9093 - is there a typo somewhere? Where is SSL running?
>
> Please share the following and don't leave any details out. This will only
> create more assumptions.
>
> 1) server.properties
> 2) Zookeeper.properties
>
> Also, run the following command (when the cluster is running)
> zookeeper-shell.sh localhost:2181
> get /brokers/ids/11
>
> Does it show that your broker #11 is connected?
>
>
>
>
> On 9 August 2017 at 21:17, Ascot Moss  wrote:
>
> > Dear Manna,
> >
> >
> > What's the status of your SSL? Have you verified that the setup is
> > working?
> > Yes, I used:
> >
> > openssl s_client -debug -connect n1.test.com:9092 -tls1
> > Output:
> >
> > CONNECTED(0003)
> >
> > write to 0x853e70 [0x89fd43] (155 bytes => 155 (0x9B))
> >
> >  - 16 03 01 00 96 01 00 00-92 03 01 59 8b 6d 0d b1   ...Y.m..
> > ...
> >
> > Server certificate
> >
> > -BEGIN CERTIFICATE-
> >
> > CwwCSEsxGT
> >
> > -END CERTIFICATE-
> >
> > ---
> >
> > SSL handshake has read 2470 bytes and written 161 bytes
> >
> > ---
> >
> > New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
> >
> > PSK identity hint: None
> >
> > Start Time: 1502309645
> >
> > Timeout   : 7200 (sec)
> >
> > Verify return code: 19 (self signed certificate in certificate chain)
> >
> > ---
> >
> > Regards
> >
> > On Wed, Aug 9, 2017 at 10:29 PM, M. Manna  wrote:
> >
> > > Hi,
> > >
> > > What's the status of your SSL? Have you verified that the setup is
> > working?
> > >
> > > You can enable more detailed logging using the log4j.properties file
> > > supplied with Kafka and set the root logging level to DEBUG. This
> > > prints out more info to trace things. Also, you can enable security
> > > logging by adding -Djavax.security.debug=all
> > >
> > > Please share your producer/broker configs with us.
> > >
> > > Kindest Regards,
> > > M. Manna
> > >
> > > On 9 August 2017 at 14:38, Ascot Moss  wrote:
> > >
> > > > Hi,
> > > >
> > > >
> > > > I have setup Kafka 0.10.2.1 with SSL.
> > > >
> > > >
> > > > Check Status:
> > > >
> > > > openssl s_client -debug -connect n1:9093 -tls1
> > > >
> > > > New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
> > > >
> > > > ... SSL-Session:
> > > >
> > > > Protocol  : TLSv1
> > > >
> > > > PSK identity hint: None
> > > >
> > > > Start Time: 1502285690

Re: Create Topic Error: Create Topic Error and cannot write to console producer

2017-08-09 Thread M. Manna
Your openssl test shows a connection on port 9092, but your previous
messages show 9093 - is there a typo somewhere? Where is SSL running?

Please share the following and don't leave any details out. This will only
create more assumptions.

1) server.properties
2) Zookeeper.properties

Also, run the following command (when the cluster is running)
zookeeper-shell.sh localhost:2181
get /brokers/ids/11

Does it show that your broker #11 is connected?
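
If it is, the shell should print a JSON blob for the znode shaped roughly like
the following (illustrative only - the exact fields vary by Kafka version);
the endpoints entry is the part that must match the host and port your
clients connect to:

{"jmx_port":-1,"timestamp":"1502309645000","endpoints":["SSL://n1.test.com:9092"],"host":null,"version":3,"port":-1}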




On 9 August 2017 at 21:17, Ascot Moss  wrote:

> Dear Manna,
>
>
> What's the status of your SSL? Have you verified that the setup is working?
> Yes, I used:
>
> openssl s_client -debug -connect n1.test.com:9092 -tls1
> Output:
>
> CONNECTED(0003)
>
> write to 0x853e70 [0x89fd43] (155 bytes => 155 (0x9B))
>
>  - 16 03 01 00 96 01 00 00-92 03 01 59 8b 6d 0d b1   ...Y.m..
> ...
>
> Server certificate
>
> -BEGIN CERTIFICATE-
>
> CwwCSEsxGT
>
> -END CERTIFICATE-
>
> ---
>
> SSL handshake has read 2470 bytes and written 161 bytes
>
> ---
>
> New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
>
> PSK identity hint: None
>
> Start Time: 1502309645
>
> Timeout   : 7200 (sec)
>
> Verify return code: 19 (self signed certificate in certificate chain)
>
> ---
>
> Regards
>
> On Wed, Aug 9, 2017 at 10:29 PM, M. Manna  wrote:
>
> > Hi,
> >
> > What's the status of your SSL? Have you verified that the setup is
> > working?
> >
> > You can enable verbose logging using the log4j.properties file supplied
> > with Kafka and set the root logging level to DEBUG. This prints out more
> > info to trace things. Also, you can enable SSL debug logging by adding
> > -Djavax.net.debug=all
> >
> > Please share your producer/broker configs with us.
> >
> > Kindest Regards,
> > M. Manna
> >
> > On 9 August 2017 at 14:38, Ascot Moss  wrote:
> >
> > > Hi,
> > >
> > >
> > > I have setup Kafka 0.10.2.1 with SSL.
> > >
> > >
> > > Check Status:
> > >
> > > openssl s_client -debug -connect n1:9093 -tls1
> > >
> > > New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
> > >
> > > ... SSL-Session:
> > >
> > > Protocol  : TLSv1
> > >
> > > PSK identity hint: None
> > >
> > > Start Time: 1502285690
> > >
> > > Timeout   : 7200 (sec)
> > >
> > > Verify return code: 19 (self signed certificate in certificate chain)
> > >
> > >
> > > Create Topic:
> > >
> > > kafka-topics.sh --create --zookeeper n1:2181,n2:2181,n3:2181
> > > --replication-factor 3 --partitions 3 --topic test02
> > >
> > > ERROR [ReplicaFetcherThread-2-111], Error for partition [test02,2] to
> > > broker 1:org.apache.kafka.common.errors.UnknownTopicOrPartitionException:
> > > This server does not host this topic-partition.
> > > (kafka.server.ReplicaFetcherThread)
> > >
> > > However, if I run describe topic, I can see it is created
> > >
> > >
> > >
> > > Describe Topic:
> > >
> > > kafka-topics.sh --zookeeper n1:2181,n2:2181,n3:2181 --describe --topic
> > > test02
> > >
> > > Topic:test02 PartitionCount:3 ReplicationFactor:3 Configs:
> > >
> > > Topic: test02 Partition: 0 Leader: 12 Replicas: 12,13,11 Isr: 12,13,11
> > >
> > > Topic: test02 Partition: 1 Leader: 13 Replicas: 13,11,12 Isr: 13,11,12
> > >
> > > Topic: test02 Partition: 2 Leader: 11 Replicas: 11,12,13 Isr: 11,12,13
> > >
> > >
> > > Consumer:
> > >
> > > kafka-console-consumer.sh --bootstrap-server n1:9093  --consumer.config
> > > /home/kafka/config/consumer.n1.properties --topic test02
> > > --from-beginning
> > >
> > >
> > >
> > > Producer:
> > >
> > > kafka-console-producer.sh --broker-list n1:9093  --producer.config
> > > /homey/kafka/config/producer.n1.properties --sync --topic test02
> > >
> > > ERROR Error when sending message to topic test02 with key: null, value: 0
> > > bytes with error:
> > > (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> > >
> > > org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for
> > > test02-1: 1506 ms has passed since batch creation plus linger time
> > >
> > >
> > > How to resolve it?
> > >
> > > Regards
> > >
> >
>


Re: Create Topic Error: Create Topic Error and cannot write to console producer

2017-08-09 Thread Ascot Moss
And,

server.properties
##

broker.id=11

port=9092

host.name=n1

advertised.host.name=192.168.0.11

allow.everyone.if.no.acl.found=true

super.users=User:CN=n1.test.com,OU=TEST,O=TEST,L=TEST,ST=TEST,C=TEST

listeners=SSL://n1.test.com:9092

advertised.listeners=SSL://n1.test.com:9092

ssl.client.auth=required

ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1

ssl.keystore.type=JKS

ssl.truststore.type=JKS

security.inter.broker.protocol=SSL

ssl.keystore.location=/home/kafka/kafka.server.keystore.jks

ssl.keystore.password=Test2017

ssl.key.password=Test2017

ssl.truststore.location=/home/kafka/kafka.server.truststore.jks

ssl.truststore.password=Test2017

authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

principal.builder.class=org.apache.kafka.common.security.auth.DefaultPrincipalBuilder

num.replica.fetchers=4

replica.fetch.max.bytes=1048576

replica.fetch.wait.max.ms=500

replica.high.watermark.checkpoint.interval.ms=5000

replica.socket.timeout.ms=3

replica.socket.receive.buffer.bytes=65536

replica.lag.time.max.ms=1

controller.socket.timeout.ms=3

controller.message.queue.size=10

default.replication.factor=3

log.dirs=/usr/log/kafka

kafka.logs.dir=/usr/log/kafka

num.partitions=20

message.max.bytes=100

auto.create.topics.enable=true

log.index.interval.bytes=4096

log.index.size.max.bytes=10485760

log.retention.hours=720

log.flush.interval.ms=1

log.flush.interval.messages=2

log.flush.scheduler.interval.ms=2000

log.roll.hours=168

log.retention.check.interval.ms=30

log.segment.bytes=1073741824

delete.topic.enable=true

socket.request.max.bytes=104857600

socket.receive.buffer.bytes=1048576

socket.send.buffer.bytes=1048576

num.io.threads=8

num.network.threads=8

queued.max.requests=16

fetch.purgatory.purge.interval.requests=100

producer.purgatory.purge.interval.requests=100

zookeeper.connect=n1:2181,n2:2181,n3:2181

zookeeper.connection.timeout.ms=2000

zookeeper.sync.time.ms=2000
##




producer.properties
##

bootstrap.servers=n1.test.com:9092

security.protocol=SSL

ssl.truststore.location=/home/kafka/kafka.client.truststore.jks

ssl.truststore.password=testkafka

ssl.keystore.location=/home/kafka/kafka.client.keystore.jks

ssl.keystore.password=testkafka

ssl.key.password=testkafka
#
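
A quick consistency check between the two files - a sketch assuming they live
under /home/kafka/config as in the commands earlier in the thread - is to
confirm that the broker's listener port and the client's bootstrap port agree:

# broker side: the advertised SSL listener
grep -E '^(listeners|advertised\.listeners)=' /home/kafka/config/server.properties

# client side: where the producer actually connects
grep '^bootstrap\.servers=' /home/kafka/config/producer.properties

Both should name the same port (9092 here); a producer pointed at 9093 while
the broker only listens on 9092 would time out exactly as in the batch-expiry
errors above.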


On Thu, Aug 10, 2017 at 4:17 AM, Ascot Moss  wrote:

> Dear Manna,
>
>
> What's the status of your SSL? Have you verified that the setup is working?
> Yes, I used:
>
> openssl s_client -debug -connect n1.test.com:9092 -tls1
> Output:
>
> CONNECTED(0003)
>
> write to 0x853e70 [0x89fd43] (155 bytes => 155 (0x9B))
>
>  - 16 03 01 00 96 01 00 00-92 03 01 59 8b 6d 0d b1   ...Y.m..
> ...
>
> Server certificate
>
> -BEGIN CERTIFICATE-
>
> CwwCSEsxGT
>
> -END CERTIFICATE-
>
> ---
>
> SSL handshake has read 2470 bytes and written 161 bytes
>
> ---
>
> New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
>
> PSK identity hint: None
>
> Start Time: 1502309645
>
> Timeout   : 7200 (sec)
>
> Verify return code: 19 (self signed certificate in certificate chain)
>
> ---
>
> Regards
>
> On Wed, Aug 9, 2017 at 10:29 PM, M. Manna  wrote:
>
>> Hi,
>>
>> What's the status of your SSL? Have you verified that the setup is
>> working?
>>
>> You can enable verbose logging using the log4j.properties file supplied
>> with Kafka and set the root logging level to DEBUG. This prints out more
>> info to trace things. Also, you can enable SSL debug logging by adding
>> -Djavax.net.debug=all
>>
>> Please share your producer/broker configs with us.
>>
>> Kindest Regards,
>> M. Manna
>>
>> On 9 August 2017 at 14:38, Ascot Moss  wrote:
>>
>> > Hi,
>> >
>> >
>> > I have setup Kafka 0.10.2.1 with SSL.
>> >
>> >
>> > Check Status:
>> >
>> > openssl s_client -debug -connect n1:9093 -tls1
>> >
>> > New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
>> >
>> > ... SSL-Session:
>> >
>> > Protocol  : TLSv1
>> >
>> > PSK identity hint: None
>> >
>> > Start Time: 1502285690
>> >
>> > Timeout   : 7200 (sec)
>> >
>> > Verify return code: 19 (self signed certificate in certificate chain)
>> >
>> >
>> > Create Topic:
>> >
>> > kafka-topics.sh --create --zookeeper n1:2181,n2:2181,n3:2181
>> > --replication-factor 3 --partitions 3 --topic test02
>> >
>> > ERROR [ReplicaFetcherThread-2-111], Error for partition [test02,2] to
>> > broker 1:org.apache.kafka.common.errors.UnknownTopicOrPartitionException:
>> > This server does not host this topic-partition.
>> > (kafka.server.ReplicaFetcherThread)
>> >
>> > However, if I run describe topic, I can see it is created
>> >
>> >
>> >
>> > Describe Topic:
>> >
>> > kafka-topics.sh --zookeeper n1:2181,n2:2181,n3:2181 --describe --topic
>> > test02
>> >
>> > Topic:test02 PartitionCount:3 ReplicationFactor:3 Configs:
>> >
>> > Topic: test02 Partition: 0 Leader: 12 Replicas: 12,13,11 Isr: 12,13,11
>> >
>> > 

Re: Create Topic Error: Create Topic Error and cannot write to console producer

2017-08-09 Thread Ascot Moss
Dear Manna,


What's the status of your SSL? Have you verified that the setup is working?
Yes, I used:

openssl s_client -debug -connect n1.test.com:9092 -tls1
Output:

CONNECTED(0003)

write to 0x853e70 [0x89fd43] (155 bytes => 155 (0x9B))

 - 16 03 01 00 96 01 00 00-92 03 01 59 8b 6d 0d b1   ...Y.m..
...

Server certificate

-BEGIN CERTIFICATE-

CwwCSEsxGT

-END CERTIFICATE-

---

SSL handshake has read 2470 bytes and written 161 bytes

---

New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA

PSK identity hint: None

Start Time: 1502309645

Timeout   : 7200 (sec)

Verify return code: 19 (self signed certificate in certificate chain)

---
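
The verify return code of 19 only means that openssl itself does not trust
the self-signed CA; it says nothing about Kafka. To make the chain verify
cleanly, the CA certificate can be passed in explicitly - a sketch assuming
the CA cert used to sign the broker key sits in a local file named ca-cert:

openssl s_client -connect n1.test.com:9092 -tls1 -CAfile ca-cert </dev/null
# with the CA supplied, 'Verify return code: 0 (ok)' confirms the chain is sound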

Regards

On Wed, Aug 9, 2017 at 10:29 PM, M. Manna  wrote:

> Hi,
>
> What's the status of your SSL? Have you verified that the setup is working?
>
> You can enable verbose logging using the log4j.properties file supplied with
> Kafka and set the root logging level to DEBUG. This prints out more info to
> trace things. Also, you can enable SSL debug logging by adding
> -Djavax.net.debug=all
>
> Please share your producer/broker configs with us.
>
> Kindest Regards,
> M. Manna
>
> On 9 August 2017 at 14:38, Ascot Moss  wrote:
>
> > Hi,
> >
> >
> > I have setup Kafka 0.10.2.1 with SSL.
> >
> >
> > Check Status:
> >
> > openssl s_client -debug -connect n1:9093 -tls1
> >
> > New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
> >
> > ... SSL-Session:
> >
> > Protocol  : TLSv1
> >
> > PSK identity hint: None
> >
> > Start Time: 1502285690
> >
> > Timeout   : 7200 (sec)
> >
> > Verify return code: 19 (self signed certificate in certificate chain)
> >
> >
> > Create Topic:
> >
> > kafka-topics.sh --create --zookeeper n1:2181,n2:2181,n3:2181
> > --replication-factor 3 --partitions 3 --topic test02
> >
> > ERROR [ReplicaFetcherThread-2-111], Error for partition [test02,2] to
> > broker 1:org.apache.kafka.common.errors.UnknownTopicOrPartitionException:
> > This server does not host this topic-partition.
> > (kafka.server.ReplicaFetcherThread)
> >
> > However, if I run describe topic, I can see it is created
> >
> >
> >
> > Describe Topic:
> >
> > kafka-topics.sh --zookeeper n1:2181,n2:2181,n3:2181 --describe --topic
> > test02
> >
> > Topic:test02 PartitionCount:3 ReplicationFactor:3 Configs:
> >
> > Topic: test02 Partition: 0 Leader: 12 Replicas: 12,13,11 Isr: 12,13,11
> >
> > Topic: test02 Partition: 1 Leader: 13 Replicas: 13,11,12 Isr: 13,11,12
> >
> > Topic: test02 Partition: 2 Leader: 11 Replicas: 11,12,13 Isr: 11,12,13
> >
> >
> > Consumer:
> >
> > kafka-console-consumer.sh --bootstrap-server n1:9093  --consumer.config
> > /home/kafka/config/consumer.n1.properties --topic test02
> > --from-beginning
> >
> >
> >
> > Producer:
> >
> > kafka-console-producer.sh --broker-list n1:9093  --producer.config
> > /homey/kafka/config/producer.n1.properties --sync --topic test02
> >
> > ERROR Error when sending message to topic test02 with key: null, value: 0
> > bytes with error:
> > (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> >
> > org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for
> > test02-1: 1506 ms has passed since batch creation plus linger time
> >
> >
> > How to resolve it?
> >
> > Regards
> >
>


Re: Create Topic Error: Create Topic Error and cannot write to console producer

2017-08-09 Thread M. Manna
Hi,

What's the status of your SSL? Have you verified that the setup is working?

You can enable verbose logging using the log4j.properties file supplied with
Kafka and set the root logging level to DEBUG. This prints out more info to
trace things. Also, you can enable SSL debug logging by adding
-Djavax.net.debug=all
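
The console tools pick these JVM flags up from the KAFKA_OPTS environment
variable - a minimal sketch, assuming the stock kafka-run-class.sh launcher:

export KAFKA_OPTS="-Djavax.net.debug=all"
kafka-console-producer.sh --broker-list n1:9093 --producer.config \
  /homey/kafka/config/producer.n1.properties --sync --topic test02
# the TLS handshake is now traced to stdout, showing whether the connection
# stalls before or after the SSL negotiation completes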

Please share your producer/broker configs with us.

Kindest Regards,
M. Manna

On 9 August 2017 at 14:38, Ascot Moss  wrote:

> Hi,
>
>
> I have setup Kafka 0.10.2.1 with SSL.
>
>
> Check Status:
>
> openssl s_client -debug -connect n1:9093 -tls1
>
> New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
>
> ... SSL-Session:
>
> Protocol  : TLSv1
>
> PSK identity hint: None
>
> Start Time: 1502285690
>
> Timeout   : 7200 (sec)
>
> Verify return code: 19 (self signed certificate in certificate chain)
>
>
> Create Topic:
>
> kafka-topics.sh --create --zookeeper n1:2181,n2:2181,n3:2181
> --replication-factor 3 --partitions 3 --topic test02
>
> ERROR [ReplicaFetcherThread-2-111], Error for partition [test02,2] to
> broker 1:org.apache.kafka.common.errors.UnknownTopicOrPartitionException:
> This server does not host this topic-partition.
> (kafka.server.ReplicaFetcherThread)
>
> However, if I run describe topic, I can see it is created
>
>
>
> Describe Topic:
>
> kafka-topics.sh --zookeeper n1:2181,n2:2181,n3:2181 --describe --topic
> test02
>
> Topic:test02 PartitionCount:3 ReplicationFactor:3 Configs:
>
> Topic: test02 Partition: 0 Leader: 12 Replicas: 12,13,11 Isr: 12,13,11
>
> Topic: test02 Partition: 1 Leader: 13 Replicas: 13,11,12 Isr: 13,11,12
>
> Topic: test02 Partition: 2 Leader: 11 Replicas: 11,12,13 Isr: 11,12,13
>
>
> Consumer:
>
> kafka-console-consumer.sh --bootstrap-server n1:9093  --consumer.config
> /home/kafka/config/consumer.n1.properties --topic test02 --from-beginning
>
>
>
> Producer:
>
> kafka-console-producer.sh --broker-list n1:9093  --producer.config
> /homey/kafka/config/producer.n1.properties --sync --topic test02
>
> ERROR Error when sending message to topic test02 with key: null, value: 0
> bytes with error:
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
>
> org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for
> test02-1: 1506 ms has passed since batch creation plus linger time
>
>
> How to resolve it?
>
> Regards
>