Thanks Isuru! I’ve updated the properties to match those from the current dev 
servers. I don’t see the error now, but I do see Kafka going down unexpectedly 
sometimes; just restarting Kafka seems to fix it in that case. As you suggested, 
I’ll check on the permissions for Kafka and see if that’s where the issue is.
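For what it’s worth, the restart/permissions checks could be sketched roughly like this; the systemd unit name "kafka" and the data directory path are assumptions, not the Ansible role’s actual layout:

```shell
# Hedged sketch: unit name "kafka" and the data dir are assumptions;
# adjust both to the actual deployment.
# Inspect why the broker died:
#   sudo systemctl status kafka
#   sudo journalctl -u kafka -n 50 --no-pager
# A common cause of silent deaths is the broker user losing write
# access to its log/data directory:
KAFKA_DATA_DIR="${KAFKA_DATA_DIR:-/tmp/kafka-logs}"
mkdir -p "$KAFKA_DATA_DIR"
if [ -w "$KAFKA_DATA_DIR" ]; then
  echo "data dir writable: $KAFKA_DATA_DIR"
else
  echo "data dir NOT writable: $KAFKA_DATA_DIR"
fi
```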

Here’s the PR with the latest ansible scripts:
https://github.com/apache/airavata-custos/pull/290

Thanks,
Abhinav

From: Isuru Ranawaka <irjan...@gmail.com>
Date: Wednesday, July 27, 2022 at 9:07 AM
To: Airavata Dev <dev@airavata.apache.org>
Subject: Re: Custos Baremetal Deployment Ansible - Kafka error - Broker may not 
be available
Hi Abhinav,

You need to install Kafka and add the Kafka URL to the properties file.
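For example, the entry might look like the following; the exact key name depends on what the Custos properties template actually reads, so treat it as an assumption:

```properties
# Hypothetical key name - use whichever key the Custos
# application.properties template actually reads for Kafka.
# "bootstrap.servers" is the standard Kafka client setting.
kafka.bootstrap.servers=kafka-host.example.org:9092
```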

On Tue, Jul 26, 2022 at 7:20 PM Abhinav Sinha 
<abhinav7.si...@gmail.com> wrote:
Hi Isuru, all,

When I try to run Custos on a remote server, I get the following message:


WARN [Custos-Core-Services-Server,,,] 140067 --- [sEventPublisher] 
org.apache.kafka.clients.NetworkClient   : [Producer 
clientId=custosEventPublisher] Connection to node -1 could not be established. 
Broker may not be available.
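That warning generally means nothing accepted the TCP connection on the configured address. A quick reachability check from the remote server might look like this (port taken from bootstrap.servers; this uses bash’s /dev/tcp redirection, and `nc -zv localhost 9092` is an equivalent alternative):

```shell
# Check whether anything is listening where the producer connects
# (localhost:9092, per bootstrap.servers). bash-specific /dev/tcp.
if timeout 2 bash -c 'exec 3<>/dev/tcp/localhost/9092' 2>/dev/null; then
  BROKER_STATUS="reachable"
else
  BROKER_STATUS="unreachable"
fi
echo "localhost:9092 is $BROKER_STATUS"
```

Note that `bootstrap.servers = [localhost:9092]` only works if the broker runs on the same host as Custos; on a remote deployment it may need to point at the actual Kafka host.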

Here are the producer config values at runtime:
       acks = 1
       batch.size = 16384
       bootstrap.servers = [localhost:9092]
       buffer.memory = 33554432
       client.id = custosEventPublisher
       compression.type = none
       connections.max.idle.ms = 540000
       enable.idempotence = false
       interceptor.classes = null
       key.serializer = class 
org.apache.kafka.common.serialization.StringSerializer
       linger.ms = 0
       max.block.ms = 60000
       max.in.flight.requests.per.connection = 5
       max.request.size = 1048576
       metadata.max.age.ms = 300000
       metric.reporters = []
       metrics.num.samples = 2
       metrics.recording.level = INFO
       metrics.sample.window.ms = 30000
       partitioner.class = class 
org.apache.kafka.clients.producer.internals.DefaultPartitioner
       receive.buffer.bytes = 32768
       reconnect.backoff.max.ms = 1000
       reconnect.backoff.ms = 50
       request.timeout.ms = 30000
       retries = 0
       retry.backoff.ms = 100
       sasl.jaas.config = null
       sasl.kerberos.kinit.cmd = /usr/bin/kinit
       sasl.kerberos.min.time.before.relogin = 60000
       sasl.kerberos.service.name = null
       sasl.kerberos.ticket.renew.jitter = 0.05
       sasl.kerberos.ticket.renew.window.factor = 0.8
       sasl.mechanism = GSSAPI
       security.protocol = PLAINTEXT
       send.buffer.bytes = 131072
       ssl.cipher.suites = null
       ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
       ssl.endpoint.identification.algorithm = null
       ssl.key.password = null
       ssl.keymanager.algorithm = SunX509
       ssl.keystore.location = null
       ssl.keystore.password = null
       ssl.keystore.type = JKS
       ssl.protocol = TLS
       ssl.provider = null
       ssl.secure.random.implementation = null
       ssl.trustmanager.algorithm = PKIX
       ssl.truststore.location = null
       ssl.truststore.password = null
       ssl.truststore.type = JKS
       transaction.timeout.ms = 60000
       transactional.id = null
       value.serializer = class 
org.apache.custos.messaging.events.model.MessageSerializer

Here’s the link to the application properties template:
https://github.com/abhinav7sinha/airavata-custos/blob/ansible-baremetal/ansible/roles/custos/templates/custos-core-services/application.properties.j2

Do you know what could cause this?
Thanks,
Abhinav


--
Research Software Engineer
Indiana University, IN
