Hi Dietmar,

The short answer is that there is currently no PLC4X driver that communicates with ControlLogix PLCs using CIP. You could use Kepware and then the PLC4X OPC UA driver if you wanted.
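If you go that route, the source.properties side would look roughly like the sketch below. Please treat it only as a sketch: the property names follow the example config that ships with the PLC4X Kafka integration and may differ slightly in your version, and the Kepware host, port, namespace index and tag ids are placeholders you would replace with whatever your Kepware server actually exposes for the 1756-EN2T rack.

    name=plc4x-opcua-source
    connector.class=org.apache.plc4x.kafka.Plc4xSourceConnector
    tasks.max=1

    # Default topic for records whose job does not set its own topic.
    default.topic=plc4x-default

    # One source per PLC connection. Note the connection string points at the
    # Kepware OPC UA endpoint (49320 is Kepware's usual default), not at the PLC.
    sources=masterPlc
    sources.masterPlc.connectionString=opcua:tcp://<kepware-host>:49320
    sources.masterPlc.jobReferences=temperatures
    sources.masterPlc.jobReferences.temperatures.topic=machine-data

    # Jobs define the poll interval and the tags to read. PLC4X OPC UA addresses
    # use the node id syntax ns=<namespace>;s=<identifier>; the values below are
    # placeholders for whatever node ids Kepware publishes for your tags.
    jobs=temperatures
    jobs.temperatures.interval=1000
    jobs.temperatures.fields=temperature
    jobs.temperatures.fields.temperature=ns=2;s=Channel1.Device1.master_plc.temperature

With this setup the PLC slot and the ControlLogix tag/array addressing are configured in Kepware, not in the PLC4X properties file; PLC4X only sees the OPC UA node ids that Kepware publishes.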
However, depending on how adventurous you are, there is a logix_develop branch that should work. You should be able to merge it and rebuild the Kafka connector. Let me know if you want to try that and I can send some more information.

Ben

On Wed, Nov 16, 2022 at 9:34 AM Giljohann, Dietmar <[email protected]> wrote:

> Dear PLC4X community,
>
> We started a Kafka pilot with the PLC4X adapter to communicate with Rockwell PLCs of type 1756-EN2T. As the communication does not yet work, I would be interested whether Kafka *source.properties and *sink.properties template files already exist for this communication case with PLCs.
>
> With such properties files I would like to answer the following questions:
>
> 1. Is there any other software (like Kepware) needed for the communication between Kafka and the PLC?
> 2. How can I define in the PLC4X properties file the PLC slot in the opcua configuration?
> 3. How can I define in the PLC4X properties file the PLC tags with a specified address like master_plc.temperature? Would it be "master_plc/temperature", and is their type INTEGER?
> 4. How can I define in the PLC4X properties file PLC tags if they are arrays of defined length (e.g. PLC_MASTER.STS_DATA[0].LOT3.LOTNUM.DATA/25, type string)?
>
> Thank you for your help! For reference, below are two error logs received after a JSON file was sent from Kafka to the PLC:
>
> (org.apache.plc4x.java.spi.Plc4xNettyWrapper:134) > > [2022-11-16 15:10:56,244] DEBUG [mhs-linetakeway-source|task-0] write > field message > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:33) > > [2022-11-16 15:10:56,244] DEBUG [mhs-linetakeway-source|task-0] write > field messageType > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterDiscriminator:33) > > [2022-11-16 15:10:56,245] DEBUG [mhs-linetakeway-source|task-0] write > field chunk (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:33) > > [2022-11-16 15:10:56,245] DEBUG [mhs-linetakeway-source|task-0] write > field messageSize > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterImplicit:33) > > [2022-11-16 15:10:56,245] DEBUG [mhs-linetakeway-source|task-0] write > field version > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:33) > > [2022-11-16 15:10:56,245] DEBUG [mhs-linetakeway-source|task-0] write > field receiveBufferSize > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:33) > > [2022-11-16 15:10:56,245] DEBUG [mhs-linetakeway-source|task-0] write > field sendBufferSize > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:33) > > [2022-11-16 15:10:56,245] DEBUG [mhs-linetakeway-source|task-0] write > field maxMessageSize > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:33) > > [2022-11-16 15:10:56,245] DEBUG [mhs-linetakeway-source|task-0] write > field maxChunkCount > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:33) > > [2022-11-16 15:10:56,245] DEBUG [mhs-linetakeway-source|task-0] write > field endpoint > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:33) > > [2022-11-16 15:10:56,245] DEBUG [mhs-linetakeway-source|task-0] write > field sLength > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterImplicit:33) > > [2022-11-16 15:10:56,245] DEBUG [mhs-linetakeway-source|task-0] write > field stringValue > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:33) > > [2022-11-16 15:10:56,245] DEBUG [mhs-linetakeway-source|task-0] write > field message > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:61) > > [2022-11-16 
15:10:56,246] DEBUG [mhs-linetakeway-source|task-0] write > field messageType > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterDiscriminator:61) > > [2022-11-16 15:10:56,246] DEBUG [mhs-linetakeway-source|task-0] write > field chunk (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:61) > > [2022-11-16 15:10:56,246] DEBUG [mhs-linetakeway-source|task-0] write > field messageSize > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterImplicit:61) > > [2022-11-16 15:10:56,246] DEBUG [mhs-linetakeway-source|task-0] write > field version > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:61) > > [2022-11-16 15:10:56,246] DEBUG [mhs-linetakeway-source|task-0] write > field receiveBufferSize > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:61) > > [2022-11-16 15:10:56,246] DEBUG [mhs-linetakeway-source|task-0] write > field sendBufferSize > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:61) > > [2022-11-16 15:10:56,246] DEBUG [mhs-linetakeway-source|task-0] write > field maxMessageSize > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:61) > > [2022-11-16 15:10:56,246] DEBUG [mhs-linetakeway-source|task-0] write > field maxChunkCount > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:61) > > [2022-11-16 15:10:56,246] DEBUG [mhs-linetakeway-source|task-0] write > field endpoint > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:61) > > [2022-11-16 15:10:56,246] DEBUG [mhs-linetakeway-source|task-0] write > field sLength > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterImplicit:61) > > [2022-11-16 15:10:56,247] DEBUG [mhs-linetakeway-source|task-0] write > field stringValue > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:61) > > [2022-11-16 15:10:56,247] DEBUG [mhs-linetakeway-source|task-0] Sending > bytes to PLC for message > > ╔═OpcuaAPU/message/MessagePDU > ═════════════════════════════════════════════════════════════════════════════════════╗ > > ║╔═messageType╗ > > ║ > > ║║ HEL ║ > > ║ > > ║╚════════════╝ > > ║ > > ║╔═OpcuaHelloRequest > ═════════════════════════════════════════════════════════════════════════════════════════════╗║ > > ║║╔═chunk╗╔═messageSize═╗╔═version════╗╔═receiveBufferSize╗╔═ > sendBufferSize═╗╔═maxMessageSize═══╗╔═maxChunkCount╗║║ > > ║║║ F ║║0x0000003d 61║║0x00000000 0║║ 0x0000ffff 65535 ║║0x0000ffff > 65535║║0x00200000 2097152║║0x00000040 64 ║║║ > > > ║║╚══════╝╚═════════════╝╚════════════╝╚══════════════════╝╚════════════════╝╚══════════════════╝╚══════════════╝║║ > > ║║╔═endpoint/PascalString═══════════════════════════════════════╗ > ║║ > > ║║║╔═sLength═════╗╔═stringValue═════════════════╗╭┄stringLength╮║ > ║║ > > ║║║║0x0000001d 29║║opc.tcp://143.21.40.239:44818║┆ 0x1d 29 ┆║ > ║║ > > ║║║╚═════════════╝╚═════════════════════════════╝╰┄┄┄┄┄┄┄┄┄┄┄┄┄╯║ > ║║ > > ║║╚═════════════════════════════════════════════════════════════╝ > ║║ > > > ║╚═══════════════════════════════════════════════════════════════════════════════════════════════════════════════╝║ > > > ╚═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╝ > > as data > 48454c463d00000000000000ffff0000ffff000000002000400000001d0000006f70632e7463703a2f2f3134332e32312e34302e3233393a3434383138 > (org.apache.plc4x.java.spi.GeneratedDriverByteToMessageCodec:61) > > [2022-11-16 15:10:56,395] DEBUG [Consumer > clientId=consumer-connect-cluster-2, groupId=connect-cluster] Received > FETCH response from node 0 for request with header > RequestHeader(apiKey=FETCH, 
apiVersion=13, > clientId=consumer-connect-cluster-2, correlationId=30): > FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1186581989, > responses=[]) (org.apache.kafka.clients.NetworkClient:879) > > [2022-11-16 15:10:56,395] DEBUG [Consumer > clientId=consumer-connect-cluster-2, groupId=connect-cluster] Node 0 sent > an incremental fetch response with throttleTimeMs = 0 for session > 1186581989 with 0 response partition(s), 5 implied partition(s) > (org.apache.kafka.clients.FetchSessionHandler:584) > > [2022-11-16 15:10:56,395] DEBUG [Consumer > clientId=consumer-connect-cluster-2, groupId=connect-cluster] Added > READ_UNCOMMITTED fetch request for partition connect-statuses-3 at position > FetchPosition{offset=0, offsetEpoch=Optional.empty, > currentLeader=LeaderAndEpoch{leader=Optional[craboolg02.eu.pg.com:9092 > (id: 0 rack: null)], epoch=0}} to node craboolg02.eu.pg.com:9092 (id: 0 > rack: null) (org.apache.kafka.clients.consumer.internals.Fetcher:1232) > > [2022-11-16 15:10:56,395] DEBUG [Consumer > clientId=consumer-connect-cluster-2, groupId=connect-cluster] Added > READ_UNCOMMITTED fetch request for partition connect-statuses-1 at position > FetchPosition{offset=0, offsetEpoch=Optional.empty, > currentLeader=LeaderAndEpoch{leader=Optional[craboolg02.eu.pg.com:9092 > (id: 0 rack: null)], epoch=0}} to node craboolg02.eu.pg.com:9092 (id: 0 > rack: null) (org.apache.kafka.clients.consumer.internals.Fetcher:1232) > > [2022-11-16 15:10:56,396] DEBUG [Consumer > clientId=consumer-connect-cluster-2, groupId=connect-cluster] Added > READ_UNCOMMITTED fetch request for partition connect-statuses-2 at position > FetchPosition{offset=0, offsetEpoch=Optional.empty, > currentLeader=LeaderAndEpoch{leader=Optional[craboolg02.eu.pg.com:9092 > (id: 0 rack: null)], epoch=0}} to node craboolg02.eu.pg.com:9092 (id: 0 > rack: null) (org.apache.kafka.clients.consumer.internals.Fetcher:1232) > > [2022-11-16 15:10:56,396] DEBUG [Consumer > clientId=consumer-connect-cluster-2, groupId=connect-cluster] Added > READ_UNCOMMITTED fetch request for partition connect-statuses-0 at position > FetchPosition{offset=59, offsetEpoch=Optional[0], > currentLeader=LeaderAndEpoch{leader=Optional[craboolg02.eu.pg.com:9092 > (id: 0 rack: null)], epoch=0}} to node craboolg02.eu.pg.com:9092 (id: 0 > rack: null) (org.apache.kafka.clients.consumer.internals.Fetcher:1232) > > [2022-11-16 15:10:56,396] DEBUG [Consumer > clientId=consumer-connect-cluster-2, groupId=connect-cluster] Added > READ_UNCOMMITTED fetch request for partition connect-statuses-4 at position > FetchPosition{offset=64, offsetEpoch=Optional[0], > currentLeader=LeaderAndEpoch{leader=Optional[craboolg02.eu.pg.com:9092 > (id: 0 rack: null)], epoch=0}} to node craboolg02.eu.pg.com:9092 (id: 0 > rack: null) (org.apache.kafka.clients.consumer.internals.Fetcher:1232) > > [2022-11-16 15:10:56,396] DEBUG [Consumer > clientId=consumer-connect-cluster-2, groupId=connect-cluster] Built > incremental fetch (sessionId=1186581989, epoch=25) for node 0. 
Added 0 > partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 > partition(s) out of 5 partition(s) > (org.apache.kafka.clients.FetchSessionHandler:351) > > [2022-11-16 15:10:56,396] DEBUG [Consumer > clientId=consumer-connect-cluster-2, groupId=connect-cluster] Sending > READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), > toReplace=(), implied=(connect-statuses-0, connect-statuses-3, > connect-statuses-4, connect-statuses-1, connect-statuses-2), > canUseTopicIds=True) to broker craboolg02.eu.pg.com:9092 (id: 0 rack: > null) (org.apache.kafka.clients.consumer.internals.Fetcher:272) > > [2022-11-16 15:10:56,396] DEBUG [Consumer > clientId=consumer-connect-cluster-2, groupId=connect-cluster] Sending FETCH > request with header RequestHeader(apiKey=FETCH, apiVersion=13, > clientId=consumer-connect-cluster-2, correlationId=31) and timeout 30000 to > node 0: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, > minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1186581989, > sessionEpoch=25, topics=[], forgottenTopicsData=[], rackId='') > (org.apache.kafka.clients.NetworkClient:521) > > [2022-11-16 15:10:56,419] DEBUG [mhs-linetakeway-source|task-0] > WorkerSourceTask{id=mhs-linetakeway-source-0} Committing offsets > (org.apache.kafka.connect.runtime.WorkerSourceTask:198) > > [2022-11-16 15:10:56,419] DEBUG [mhs-linetakeway-source|task-0] > WorkerSourceTask{id=mhs-linetakeway-source-0} Either no records were > produced by the task since the last offset commit, or every record has been > filtered out by a transformation or dropped due to transformation or > conversion errors. (org.apache.kafka.connect.runtime.WorkerSourceTask:210) > > [2022-11-16 15:10:56,420] DEBUG [mhs-linetakeway-source|task-0] > WorkerSourceTask{id=mhs-linetakeway-source-0} Finished offset commitOffsets > successfully in 1 ms (org.apache.kafka.connect.runtime.WorkerSourceTask:246) > > [2022-11-16 15:10:56,421] INFO [mhs-linetakeway-source|task-0] Stopping > scraper... > (org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperImpl:265) > > [2022-11-16 15:10:56,421] DEBUG [mhs-linetakeway-source|task-0] Stopping > task > TriggeredScraperTask{driverManager=org.apache.plc4x.java.utils.connectionpool2.PooledDriverManager@2f213f02, > jobName='data-acquisition', connectionAlias='MHSLineTakeway', > connectionString='opcua:tcp://143.21.40.239:44818', > requestTimeoutMs=2000, > executorService=java.util.concurrent.ThreadPoolExecutor@24e13670[Running, > pool size = 5, active threads = 1, queued tasks = 0, completed tasks = 10], > resultHandler=org.apache.plc4x.kafka.Plc4xSourceTask$$Lambda$697/2096727146@7874926, > > triggerHandler=org.apache.plc4x.java.scraper.triggeredscraper.triggerhandler.TriggerHandlerImpl@28323655}... 
> (org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperImpl:267) > > [2022-11-16 15:10:56,422] WARN [mhs-linetakeway-source|task-0] Exception > during scraping of Job data-acquisition, Connection-Alias MHSLineTakeway: > Error-message: null - for stack-trace change logging to DEBUG > (org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperTask:148) > > [2022-11-16 15:10:56,423] DEBUG [mhs-linetakeway-source|task-0] Detailed > exception occurred at scraping > (org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperTask:206) > > java.lang.InterruptedException > > at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:347) > > at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928) > > at > org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperImpl.getPlcConnection(TriggeredScraperImpl.java:316) > > at > org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperTask.run(TriggeredScraperTask.java:112) > > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > > at > java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > > at java.lang.Thread.run(Thread.java:748) > > [2022-11-16 15:10:56,421] INFO [mhs-linetakeway-source|task-0] [Producer > clientId=connector-producer-mhs-linetakeway-source-0] Closing the Kafka > producer with timeoutMillis = 30000 ms. > (org.apache.kafka.clients.producer.KafkaProducer:1297) > > [2022-11-16 15:10:56,423] DEBUG [mhs-linetakeway-source|task-0] [Producer > clientId=connector-producer-mhs-linetakeway-source-0] Beginning shutdown of > Kafka producer I/O thread, sending remaining records. > (org.apache.kafka.clients.producer.internals.Sender:249) > > [2022-11-16 15:10:56,424] DEBUG [mhs-linetakeway-source|task-0] [Producer > clientId=connector-producer-mhs-linetakeway-source-0] Shutdown of Kafka > producer I/O thread has completed. 
> (org.apache.kafka.clients.producer.internals.Sender:291) > > [2022-11-16 15:10:56,435] INFO [mhs-linetakeway-source|task-0] Metrics > scheduler closed (org.apache.kafka.common.metrics.Metrics:693) > > [2022-11-16 15:10:56,436] INFO [mhs-linetakeway-source|task-0] Closing > reporter org.apache.kafka.common.metrics.JmxReporter > (org.apache.kafka.common.metrics.Metrics:697) > > [2022-11-16 15:10:56,436] INFO [mhs-linetakeway-source|task-0] Metrics > reporters closed (org.apache.kafka.common.metrics.Metrics:703) > > [2022-11-16 15:10:56,436] INFO [mhs-linetakeway-source|task-0] App info > kafka.producer for connector-producer-mhs-linetakeway-source-0 unregistered > (org.apache.kafka.common.utils.AppInfoParser:83) > > [2022-11-16 15:10:56,436] DEBUG [mhs-linetakeway-source|task-0] [Producer > clientId=connector-producer-mhs-linetakeway-source-0] Kafka producer has > been closed (org.apache.kafka.clients.producer.KafkaProducer:1350) > > [2022-11-16 15:10:56,438] DEBUG [Producer clientId=producer-2] Sending > PRODUCE request with header RequestHeader(apiKey=PRODUCE, apiVersion=9, > clientId=producer-2, correlationId=7) and timeout 30000 to node 0: > {acks=-1,timeout=30000,partitionSizes=[connect-statuses-0=190]} > (org.apache.kafka.clients.NetworkClient:521) > > [2022-11-16 15:10:56,438] DEBUG [mhs-linetakeway-source|task-0] Graceful > stop of task mhs-linetakeway-source-0 succeeded. > (org.apache.kafka.connect.runtime.Worker:1060) > > [2022-11-16 15:10:56,441] DEBUG [Worker clientId=connect-1, > groupId=connect-cluster] Heartbeat thread has closed > (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:1537) > > [2022-11-16 15:10:56,441] INFO [Worker clientId=connect-1, > groupId=connect-cluster] Member > connect-1-0268689d-4a69-4e5a-97d8-378b53127faf sending LeaveGroup request > to coordinator craboolg02.eu.pg.com:9092 (id: 2147483647 rack: null) due > to the consumer is being closed > (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:1127) > > [2022-11-16 15:10:56,442] DEBUG [Worker clientId=connect-1, > groupId=connect-cluster] Sending LEAVE_GROUP request with header > RequestHeader(apiKey=LEAVE_GROUP, apiVersion=5, clientId=connect-1, > correlationId=10) and timeout 40000 to node 2147483647: > LeaveGroupRequestData(groupId='connect-cluster', memberId='', > members=[MemberIdentity(memberId='connect-1-0268689d-4a69-4e5a-97d8-378b53127faf', > groupInstanceId=null, reason='the consumer is being closed')]) > (org.apache.kafka.clients.NetworkClient:521) > > [2022-11-16 15:10:56,442] DEBUG [Consumer > clientId=consumer-connect-cluster-2, groupId=connect-cluster] Received > FETCH response from node 0 for request with header > RequestHeader(apiKey=FETCH, apiVersion=13, > clientId=consumer-connect-cluster-2, correlationId=31): > FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1186581989, > responses=[FetchableTopicResponse(topic='', topicId=bw_OkdvxRlKKuz85rKLDZw, > partitions=[PartitionData(partitionIndex=0, errorCode=0, highWatermark=60, > lastStableOffset=60, logStartOffset=0, > divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), > currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), > snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, > preferredReadReplica=-1, records=MemoryRecords(size=190, > buffer=java.nio.HeapByteBuffer[pos=0 lim=190 cap=193]))])]) > (org.apache.kafka.clients.NetworkClient:879) > > [2022-11-16 15:10:56,442] INFO [Worker clientId=connect-1, > groupId=connect-cluster] Resetting generation and member id due 
to: > consumer pro-actively leaving the group > (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:1019) > > [2022-11-16 15:10:56,442] DEBUG [Worker clientId=connect-1, > groupId=connect-cluster] Request joining group due to: consumer > pro-actively leaving the group > (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:106) > > > > And: > > > > > > [2022-11-16 15:10:56,199] DEBUG [Consumer > clientId=consumer-connect-cluster-3, groupId=connect-cluster] Sending > READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), > toReplace=(), implied=(connect-configs-0), canUseTopicIds=True) to broker > craboolg02.eu.pg.com:9092 (id: 0 rack: null) > (org.apache.kafka.clients.consumer.internals.Fetcher:272) > > [2022-11-16 15:10:56,199] DEBUG [Consumer > clientId=consumer-connect-cluster-3, groupId=connect-cluster] Sending FETCH > request with header RequestHeader(apiKey=FETCH, apiVersion=13, > clientId=consumer-connect-cluster-3, correlationId=29) and timeout 30000 to > node 0: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, > minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1245792197, > sessionEpoch=23, topics=[], forgottenTopicsData=[], rackId='') > (org.apache.kafka.clients.NetworkClient:521) > > [2022-11-16 15:10:56,214] DEBUG [mhs-linetakeway-source|task-0] Connection > was detected as broken and is invalidated in Cached Manager > (org.apache.plc4x.java.utils.connectionpool2.CachedDriverManager:127) > > [2022-11-16 15:10:56,214] WARN [mhs-linetakeway-source|task-0] Broken > Connection was returned, although it is not borrowed, currently. > (org.apache.plc4x.java.utils.connectionpool2.CachedDriverManager:132) > > [2022-11-16 15:10:56,214] DEBUG [mhs-linetakeway-source|task-0] Unable to > Close 'broken' Connection > (org.apache.plc4x.java.utils.connectionpool2.CachedDriverManager:138) > > java.lang.NullPointerException > > at > org.apache.plc4x.java.utils.connectionpool2.CachedDriverManager.handleBrokenConnection(CachedDriverManager.java:136) > > at > org.apache.plc4x.java.utils.connectionpool2.CachedDriverManager.getConnection(CachedDriverManager.java:181) > > at > org.apache.plc4x.java.utils.connectionpool2.PooledDriverManager.getConnection(PooledDriverManager.java:71) > > at > org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperImpl.lambda$getPlcConnection$1(TriggeredScraperImpl.java:301) > > at > java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) > > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > > at java.lang.Thread.run(Thread.java:748) > > [2022-11-16 15:10:56,214] WARN [mhs-linetakeway-source|task-0] Unable to > instantiate connection to opcua:tcp://143.21.40.239:44818 > (org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperImpl:303) > > org.apache.plc4x.java.api.exceptions.PlcConnectionException: No Connection > Available, timed out while waiting in queue. 
> > at > org.apache.plc4x.java.utils.connectionpool2.CachedDriverManager.getConnection(CachedDriverManager.java:182) > > at > org.apache.plc4x.java.utils.connectionpool2.PooledDriverManager.getConnection(PooledDriverManager.java:71) > > at > org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperImpl.lambda$getPlcConnection$1(TriggeredScraperImpl.java:301) > > at > java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) > > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > > at java.lang.Thread.run(Thread.java:748) > > Caused by: java.util.concurrent.TimeoutException > > at > java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784) > > at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928) > > at > org.apache.plc4x.java.utils.connectionpool2.CachedDriverManager.getConnection(CachedDriverManager.java:179) > > ... 6 more > > [2022-11-16 15:10:56,215] WARN [mhs-linetakeway-source|task-0] Exception > during scraping of Job data-acquisition, Connection-Alias MHSLineTakeway: > Error-message: {} - for stack-trace change logging to DEBUG > (org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperTask:148) > > org.apache.plc4x.java.api.exceptions.PlcRuntimeException: > org.apache.plc4x.java.api.exceptions.PlcConnectionException: No Connection > Available, timed out while waiting in queue. > > at > org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperImpl.lambda$getPlcConnection$1(TriggeredScraperImpl.java:304) > > at > java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) > > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > > at java.lang.Thread.run(Thread.java:748) > > Caused by: org.apache.plc4x.java.api.exceptions.PlcConnectionException: No > Connection Available, timed out while waiting in queue. > > at > org.apache.plc4x.java.utils.connectionpool2.CachedDriverManager.getConnection(CachedDriverManager.java:182) > > at > org.apache.plc4x.java.utils.connectionpool2.PooledDriverManager.getConnection(PooledDriverManager.java:71) > > at > org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperImpl.lambda$getPlcConnection$1(TriggeredScraperImpl.java:301) > > ... 4 more > > Caused by: java.util.concurrent.TimeoutException > > at > java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784) > > at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928) > > at > org.apache.plc4x.java.utils.connectionpool2.CachedDriverManager.getConnection(CachedDriverManager.java:179) > > ... 6 more > > [2022-11-16 15:10:56,215] DEBUG [mhs-linetakeway-source|task-0] Detailed > exception occurred at scraping > (org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperTask:206) > > java.util.concurrent.ExecutionException: > org.apache.plc4x.java.api.exceptions.PlcRuntimeException: > org.apache.plc4x.java.api.exceptions.PlcConnectionException: No Connection > Available, timed out while waiting in queue. 
> > at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) > > at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928) > > at > org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperImpl.getPlcConnection(TriggeredScraperImpl.java:316) > > at > org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperTask.run(TriggeredScraperTask.java:112) > > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > > at > java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > > at java.lang.Thread.run(Thread.java:748) > > Caused by: org.apache.plc4x.java.api.exceptions.PlcRuntimeException: > org.apache.plc4x.java.api.exceptions.PlcConnectionException: No Connection > Available, timed out while waiting in queue. > > at > org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperImpl.lambda$getPlcConnection$1(TriggeredScraperImpl.java:304) > > at > java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) > > ... 3 more > > Caused by: org.apache.plc4x.java.api.exceptions.PlcConnectionException: No > Connection Available, timed out while waiting in queue. > > at > org.apache.plc4x.java.utils.connectionpool2.CachedDriverManager.getConnection(CachedDriverManager.java:182) > > at > org.apache.plc4x.java.utils.connectionpool2.PooledDriverManager.getConnection(PooledDriverManager.java:71) > > at > org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperImpl.lambda$getPlcConnection$1(TriggeredScraperImpl.java:301) > > ... 4 more > > Caused by: java.util.concurrent.TimeoutException > > at > java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784) > > at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928) > > at > org.apache.plc4x.java.utils.connectionpool2.CachedDriverManager.getConnection(CachedDriverManager.java:179) > > ... 6 more > > [2022-11-16 15:10:56,215] DEBUG [mhs-linetakeway-source|task-0] Trigger > for job data-acquisition and device MHSLineTakeway is met ... scraping > desired data > (org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperTask:97) > > [2022-11-16 15:10:56,215] DEBUG [mhs-linetakeway-source|task-0] Connection > was requested but no connection is active, trying to establish a Connection > (org.apache.plc4x.java.utils.connectionpool2.CachedDriverManager:210) > > [2022-11-16 15:10:56,216] DEBUG [mhs-linetakeway-source|task-0] Starting > to establish Connection > (org.apache.plc4x.java.utils.connectionpool2.CachedDriverManager:217) > > [2022-11-16 15:10:56,217] INFO [mhs-linetakeway-source|task-0] Configuring > Bootstrap with Configuration{} > (org.apache.plc4x.java.transport.tcp.TcpChannelFactory:60) > > [2022-11-16 15:10:56,221] DEBUG [mhs-linetakeway-source|task-0] User Event > triggered org.apache.plc4x.java.spi.events.ConnectEvent@33118762 > (org.apache.plc4x.java.spi.Plc4xNettyWrapper:201) > > [2022-11-16 15:10:56,221] DEBUG [mhs-linetakeway-source|task-0] Opcua > Driver running in ACTIVE mode. 
> (org.apache.plc4x.java.opcua.protocol.OpcuaProtocolLogic:107) > > [2022-11-16 15:10:56,242] DEBUG [mhs-linetakeway-source|task-0] Opcua > Driver running in ACTIVE mode. > (org.apache.plc4x.java.opcua.context.SecureChannel:243) > > [2022-11-16 15:10:56,242] INFO [mhs-linetakeway-source|task-0] Active > transaction Number 0 (org.apache.plc4x.java.opcua.context.SecureChannel:40) > > [2022-11-16 15:10:56,242] DEBUG [mhs-linetakeway-source|task-0] write > field message > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:134) > > [2022-11-16 15:10:56,242] DEBUG [mhs-linetakeway-source|task-0] write > field messageType > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterDiscriminator:134) > > [2022-11-16 15:10:56,242] DEBUG [mhs-linetakeway-source|task-0] write > field chunk (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:134) > > [2022-11-16 15:10:56,242] DEBUG [mhs-linetakeway-source|task-0] write > field messageSize > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterImplicit:134) > > [2022-11-16 15:10:56,242] DEBUG [mhs-linetakeway-source|task-0] write > field version > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:134) > > [2022-11-16 15:10:56,242] DEBUG [mhs-linetakeway-source|task-0] write > field receiveBufferSize > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:134) > > [2022-11-16 15:10:56,243] DEBUG [mhs-linetakeway-source|task-0] write > field sendBufferSize > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:134) > > [2022-11-16 15:10:56,243] DEBUG [mhs-linetakeway-source|task-0] write > field maxMessageSize > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:134) > > [2022-11-16 15:10:56,243] DEBUG [mhs-linetakeway-source|task-0] write > field maxChunkCount > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:134) > > [2022-11-16 15:10:56,243] DEBUG [mhs-linetakeway-source|task-0] write > field endpoint > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:134) > > [2022-11-16 15:10:56,243] DEBUG [mhs-linetakeway-source|task-0] write > field sLength > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterImplicit:134) > > [2022-11-16 15:10:56,243] DEBUG [mhs-linetakeway-source|task-0] write > field stringValue > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:134) > > [2022-11-16 15:10:56,244] DEBUG [mhs-linetakeway-source|task-0] Forwarding > request to plc > > ╔═OpcuaAPU/message/MessagePDU > ═════════════════════════════════════════════════════════════════════════════════════╗ > > ║╔═messageType╗ > > ║ > > ║║ HEL ║ > > ║ > > ║╚════════════╝ > > ║ > > ║╔═OpcuaHelloRequest > ═════════════════════════════════════════════════════════════════════════════════════════════╗║ > > ║║╔═chunk╗╔═messageSize═╗╔═version════╗╔═receiveBufferSize╗╔═ > sendBufferSize═╗╔═maxMessageSize═══╗╔═maxChunkCount╗║║ > > ║║║ F ║║0x0000003d 61║║0x00000000 0║║ 0x0000ffff 65535 ║║0x0000ffff > 65535║║0x00200000 2097152║║0x00000040 64 ║║║ > > > ║║╚══════╝╚═════════════╝╚════════════╝╚══════════════════╝╚════════════════╝╚══════════════════╝╚══════════════╝║║ > > ║║╔═endpoint/PascalString═══════════════════════════════════════╗ > ║║ > > ║║║╔═sLength═════╗╔═stringValue═════════════════╗╭┄stringLength╮║ > ║║ > > ║║║║0x0000001d 29║║opc.tcp://143.21.40.239:44818║┆ 0x1d 29 ┆║ > ║║ > > ║║║╚═════════════╝╚═════════════════════════════╝╰┄┄┄┄┄┄┄┄┄┄┄┄┄╯║ > ║║ > > ║║╚═════════════════════════════════════════════════════════════╝ > ║║ > > > 
║╚═══════════════════════════════════════════════════════════════════════════════════════════════════════════════╝║ > > > ╚═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╝ > > (org.apache.plc4x.java.spi.Plc4xNettyWrapper:134) > > [2022-11-16 15:10:56,244] DEBUG [mhs-linetakeway-source|task-0] write > field message > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:33) > > [2022-11-16 15:10:56,244] DEBUG [mhs-linetakeway-source|task-0] write > field messageType > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterDiscriminator:33) > > [2022-11-16 15:10:56,245] DEBUG [mhs-linetakeway-source|task-0] write > field chunk (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:33) > > [2022-11-16 15:10:56,245] DEBUG [mhs-linetakeway-source|task-0] write > field messageSize > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterImplicit:33) > > [2022-11-16 15:10:56,245] DEBUG [mhs-linetakeway-source|task-0] write > field version > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:33) > > [2022-11-16 15:10:56,245] DEBUG [mhs-linetakeway-source|task-0] write > field receiveBufferSize > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:33) > > [2022-11-16 15:10:56,245] DEBUG [mhs-linetakeway-source|task-0] write > field sendBufferSize > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:33) > > [2022-11-16 15:10:56,245] DEBUG [mhs-linetakeway-source|task-0] write > field maxMessageSize > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:33) > > [2022-11-16 15:10:56,245] DEBUG [mhs-linetakeway-source|task-0] write > field maxChunkCount > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:33) > > [2022-11-16 15:10:56,245] DEBUG [mhs-linetakeway-source|task-0] write > field endpoint > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:33) > > [2022-11-16 15:10:56,245] DEBUG [mhs-linetakeway-source|task-0] write > field sLength > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterImplicit:33) > > [2022-11-16 15:10:56,245] DEBUG [mhs-linetakeway-source|task-0] write > field stringValue > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:33) > > [2022-11-16 15:10:56,245] DEBUG [mhs-linetakeway-source|task-0] write > field message > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:61) > > [2022-11-16 15:10:56,246] DEBUG [mhs-linetakeway-source|task-0] write > field messageType > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterDiscriminator:61) > > [2022-11-16 15:10:56,246] DEBUG [mhs-linetakeway-source|task-0] write > field chunk (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:61) > > [2022-11-16 15:10:56,246] DEBUG [mhs-linetakeway-source|task-0] write > field messageSize > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterImplicit:61) > > [2022-11-16 15:10:56,246] DEBUG [mhs-linetakeway-source|task-0] write > field version > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:61) > > [2022-11-16 15:10:56,246] DEBUG [mhs-linetakeway-source|task-0] write > field receiveBufferSize > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:61) > > [2022-11-16 15:10:56,246] DEBUG [mhs-linetakeway-source|task-0] write > field sendBufferSize > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:61) > > [2022-11-16 15:10:56,246] DEBUG [mhs-linetakeway-source|task-0] write > field maxMessageSize > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:61) > > [2022-11-16 15:10:56,246] DEBUG 
[mhs-linetakeway-source|task-0] write > field maxChunkCount > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:61) > > [2022-11-16 15:10:56,246] DEBUG [mhs-linetakeway-source|task-0] write > field endpoint > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:61) > > [2022-11-16 15:10:56,246] DEBUG [mhs-linetakeway-source|task-0] write > field sLength > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterImplicit:61) > > [2022-11-16 15:10:56,247] DEBUG [mhs-linetakeway-source|task-0] write > field stringValue > (org.apache.plc4x.java.spi.codegen.fields.FieldWriterSimple:61) > > [2022-11-16 15:10:56,247] DEBUG [mhs-linetakeway-source|task-0] Sending > bytes to PLC for message > > ╔═OpcuaAPU/message/MessagePDU > ═════════════════════════════════════════════════════════════════════════════════════╗ > > ║╔═messageType╗ > ║ > > ║║ HEL ║ > > ║ > > ║╚════════════╝ > > ║ > > ║╔═OpcuaHelloRequest > ═════════════════════════════════════════════════════════════════════════════════════════════╗║ > > ║║╔═chunk╗╔═messageSize═╗╔═version════╗╔═receiveBufferSize╗╔═ > sendBufferSize═╗╔═maxMessageSize═══╗╔═maxChunkCount╗║║ > > ║║║ F ║║0x0000003d 61║║0x00000000 0║║ 0x0000ffff 65535 ║║0x0000ffff > 65535║║0x00200000 2097152║║0x00000040 64 ║║║ > > > ║║╚══════╝╚═════════════╝╚════════════╝╚══════════════════╝╚════════════════╝╚══════════════════╝╚══════════════╝║║ > > ║║╔═endpoint/PascalString═══════════════════════════════════════╗ > ║║ > > ║║║╔═sLength═════╗╔═stringValue═════════════════╗╭┄stringLength╮║ > ║║ > > ║║║║0x0000001d 29║║opc.tcp://143.21.40.239:44818║┆ 0x1d 29 ┆║ > ║║ > > ║║║╚═════════════╝╚═════════════════════════════╝╰┄┄┄┄┄┄┄┄┄┄┄┄┄╯║ > ║║ > > ║║╚═════════════════════════════════════════════════════════════╝ > ║║ > > > ║╚═══════════════════════════════════════════════════════════════════════════════════════════════════════════════╝║ > > > ╚═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╝ > > as data > 48454c463d00000000000000ffff0000ffff000000002000400000001d0000006f70632e7463703a2f2f3134332e32312e34302e3233393a3434383138 > (org.apache.plc4x.java.spi.GeneratedDriverByteToMessageCodec:61) > > [2022-11-16 15:10:56,395] DEBUG [Consumer > clientId=consumer-connect-cluster-2, groupId=connect-cluster] Received > FETCH response from node 0 for request with header > RequestHeader(apiKey=FETCH, apiVersion=13, > clientId=consumer-connect-cluster-2, correlationId=30): > FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1186581989, > responses=[]) (org.apache.kafka.clients.NetworkClient:879) > > [2022-11-16 15:10:56,395] DEBUG [Consumer > clientId=consumer-connect-cluster-2, groupId=connect-cluster] Node 0 sent > an incremental fetch response with throttleTimeMs = 0 for session > 1186581989 with 0 response partition(s), 5 implied partition(s) > (org.apache.kafka.clients.FetchSessionHandler:584) > > [2022-11-16 15:10:56,395] DEBUG [Consumer > clientId=consumer-connect-cluster-2, groupId=connect-cluster] Added > READ_UNCOMMITTED fetch request for partition connect-statuses-3 at position > FetchPosition{offset=0, offsetEpoch=Optional.empty, > currentLeader=LeaderAndEpoch{leader=Optional[craboolg02.eu.pg.com:9092 > (id: 0 rack: null)], epoch=0}} to node craboolg02.eu.pg.com:9092 (id: 0 > rack: null) (org.apache.kafka.clients.consumer.internals.Fetcher:1232) > > [2022-11-16 15:10:56,395] DEBUG [Consumer > clientId=consumer-connect-cluster-2, groupId=connect-cluster] Added > READ_UNCOMMITTED fetch request for partition 
connect-statuses-1 at position > FetchPosition{offset=0, offsetEpoch=Optional.empty, > currentLeader=LeaderAndEpoch{leader=Optional[craboolg02.eu.pg.com:9092 > (id: 0 rack: null)], epoch=0}} to node craboolg02.eu.pg.com:9092 (id: 0 > rack: null) (org.apache.kafka.clients.consumer.internals.Fetcher:1232) > > [2022-11-16 15:10:56,396] DEBUG [Consumer > clientId=consumer-connect-cluster-2, groupId=connect-cluster] Added > READ_UNCOMMITTED fetch request for partition connect-statuses-2 at position > FetchPosition{offset=0, offsetEpoch=Optional.empty, > currentLeader=LeaderAndEpoch{leader=Optional[craboolg02.eu.pg.com:9092 > (id: 0 rack: null)], epoch=0}} to node craboolg02.eu.pg.com:9092 (id: 0 > rack: null) (org.apache.kafka.clients.consumer.internals.Fetcher:1232) > > [2022-11-16 15:10:56,396] DEBUG [Consumer > clientId=consumer-connect-cluster-2, groupId=connect-cluster] Added > READ_UNCOMMITTED fetch request for partition connect-statuses-0 at position > FetchPosition{offset=59, offsetEpoch=Optional[0], > currentLeader=LeaderAndEpoch{leader=Optional[craboolg02.eu.pg.com:9092 > (id: 0 rack: null)], epoch=0}} to node craboolg02.eu.pg.com:9092 (id: 0 > rack: null) (org.apache.kafka.clients.consumer.internals.Fetcher:1232) > > [2022-11-16 15:10:56,396] DEBUG [Consumer > clientId=consumer-connect-cluster-2, groupId=connect-cluster] Added > READ_UNCOMMITTED fetch request for partition connect-statuses-4 at position > FetchPosition{offset=64, offsetEpoch=Optional[0], > currentLeader=LeaderAndEpoch{leader=Optional[craboolg02.eu.pg.com:9092 > (id: 0 rack: null)], epoch=0}} to node craboolg02.eu.pg.com:9092 (id: 0 > rack: null) (org.apache.kafka.clients.consumer.internals.Fetcher:1232) > > [2022-11-16 15:10:56,396] DEBUG [Consumer > clientId=consumer-connect-cluster-2, groupId=connect-cluster] Built > incremental fetch (sessionId=1186581989, epoch=25) for node 0. Added 0 > partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 > partition(s) out of 5 partition(s) > (org.apache.kafka.clients.FetchSessionHandler:351) > > [2022-11-16 15:10:56,396] DEBUG [Consumer > clientId=consumer-connect-cluster-2, groupId=connect-cluster] Sending > READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), > toReplace=(), implied=(connect-statuses-0, connect-statuses-3, > connect-statuses-4, connect-statuses-1, connect-statuses-2), > canUseTopicIds=True) to broker craboolg02.eu.pg.com:9092 (id: 0 rack: > null) (org.apache.kafka.clients.consumer.internals.Fetcher:272) > > [2022-11-16 15:10:56,396] DEBUG [Consumer > clientId=consumer-connect-cluster-2, groupId=connect-cluster] Sending FETCH > request with header RequestHeader(apiKey=FETCH, apiVersion=13, > clientId=consumer-connect-cluster-2, correlationId=31) and timeout 30000 to > node 0: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, > minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1186581989, > sessionEpoch=25, topics=[], forgottenTopicsData=[], rackId='') > (org.apache.kafka.clients.NetworkClient:521) > > [2022-11-16 15:10:56,419] DEBUG [mhs-linetakeway-source|task-0] > WorkerSourceTask{id=mhs-linetakeway-source-0} Committing offsets > (org.apache.kafka.connect.runtime.WorkerSourceTask:198) > > [2022-11-16 15:10:56,419] DEBUG [mhs-linetakeway-source|task-0] > WorkerSourceTask{id=mhs-linetakeway-source-0} Either no records were > produced by the task since the last offset commit, or every record has been > filtered out by a transformation or dropped due to transformation or > conversion errors. 
(org.apache.kafka.connect.runtime.WorkerSourceTask:210) > > [2022-11-16 15:10:56,420] DEBUG [mhs-linetakeway-source|task-0] > WorkerSourceTask{id=mhs-linetakeway-source-0} Finished offset commitOffsets > successfully in 1 ms (org.apache.kafka.connect.runtime.WorkerSourceTask:246) > > [2022-11-16 15:10:56,421] INFO [mhs-linetakeway-source|task-0] Stopping > scraper... > (org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperImpl:265) > > [2022-11-16 15:10:56,421] DEBUG [mhs-linetakeway-source|task-0] Stopping > task > TriggeredScraperTask{driverManager=org.apache.plc4x.java.utils.connectionpool2.PooledDriverManager@2f213f02, > jobName='data-acquisition', connectionAlias='MHSLineTakeway', > connectionString='opcua:tcp://143.21.40.239:44818', > requestTimeoutMs=2000, > executorService=java.util.concurrent.ThreadPoolExecutor@24e13670[Running, > pool size = 5, active threads = 1, queued tasks = 0, completed tasks = 10], > resultHandler=org.apache.plc4x.kafka.Plc4xSourceTask$$Lambda$697/2096727146@7874926, > > triggerHandler=org.apache.plc4x.java.scraper.triggeredscraper.triggerhandler.TriggerHandlerImpl@28323655}... > (org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperImpl:267) > > [2022-11-16 15:10:56,422] WARN [mhs-linetakeway-source|task-0] Exception > during scraping of Job data-acquisition, Connection-Alias MHSLineTakeway: > Error-message: null - for stack-trace change logging to DEBUG > (org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperTask:148) > > [2022-11-16 15:10:56,423] DEBUG [mhs-linetakeway-source|task-0] Detailed > exception occurred at scraping > (org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperTask:206) > > java.lang.InterruptedException > > at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:347) > > at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928) > > at > org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperImpl.getPlcConnection(TriggeredScraperImpl.java:316) > > at > org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperTask.run(TriggeredScraperTask.java:112) > > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > > at > java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > > at java.lang.Thread.run(Thread.java:748) > > [2022-11-16 15:10:56,421] INFO [mhs-linetakeway-source|task-0] [Producer > clientId=connector-producer-mhs-linetakeway-source-0] Closing the Kafka > producer with timeoutMillis = 30000 ms. > (org.apache.kafka.clients.producer.KafkaProducer:1297) > > [2022-11-16 15:10:56,423] DEBUG [mhs-linetakeway-source|task-0] [Producer > clientId=connector-producer-mhs-linetakeway-source-0] Beginning shutdown of > Kafka producer I/O thread, sending remaining records. > (org.apache.kafka.clients.producer.internals.Sender:249) > > [2022-11-16 15:10:56,424] DEBUG [mhs-linetakeway-source|task-0] [Producer > clientId=connector-producer-mhs-linetakeway-source-0] Shutdown of Kafka > producer I/O thread has completed. 
> (org.apache.kafka.clients.producer.internals.Sender:291) > > [2022-11-16 15:10:56,435] INFO [mhs-linetakeway-source|task-0] Metrics > scheduler closed (org.apache.kafka.common.metrics.Metrics:693) > > [2022-11-16 15:10:56,436] INFO [mhs-linetakeway-source|task-0] Closing > reporter org.apache.kafka.common.metrics.JmxReporter > (org.apache.kafka.common.metrics.Metrics:697) > > [2022-11-16 15:10:56,436] INFO [mhs-linetakeway-source|task-0] Metrics > reporters closed (org.apache.kafka.common.metrics.Metrics:703) > > [2022-11-16 15:10:56,436] INFO [mhs-linetakeway-source|task-0] App info > kafka.producer for connector-producer-mhs-linetakeway-source-0 unregistered > (org.apache.kafka.common.utils.AppInfoParser:83) > > [2022-11-16 15:10:56,436] DEBUG [mhs-linetakeway-source|task-0] [Producer > clientId=connector-producer-mhs-linetakeway-source-0] Kafka producer has > been closed (org.apache.kafka.clients.producer.KafkaProducer:1350) > > [2022-11-16 15:10:56,438] DEBUG [Producer clientId=producer-2] Sending > PRODUCE request with header RequestHeader(apiKey=PRODUCE, apiVersion=9, > clientId=producer-2, correlationId=7) and timeout 30000 to node 0: > {acks=-1,timeout=30000,partitionSizes=[connect-statuses-0=190]} > (org.apache.kafka.clients.NetworkClient:521) > > [2022-11-16 15:10:56,438] DEBUG [mhs-linetakeway-source|task-0] Graceful > stop of task mhs-linetakeway-source-0 succeeded. > (org.apache.kafka.connect.runtime.Worker:1060) > > [2022-11-16 15:10:56,441] DEBUG [Worker clientId=connect-1, > groupId=connect-cluster] Heartbeat thread has closed > (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:1537) > > [2022-11-16 15:10:56,441] INFO [Worker clientId=connect-1, > groupId=connect-cluster] Member > connect-1-0268689d-4a69-4e5a-97d8-378b53127faf sending LeaveGroup request > to coordinator craboolg02.eu.pg.com:9092 (id: 2147483647 rack: null) due > to the consumer is being closed > (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:1127) > > [2022-11-16 15:10:56,442] DEBUG [Worker clientId=connect-1, > groupId=connect-cluster] Sending LEAVE_GROUP request with header > RequestHeader(apiKey=LEAVE_GROUP, apiVersion=5, clientId=connect-1, > correlationId=10) and timeout 40000 to node 2147483647: > LeaveGroupRequestData(groupId='connect-cluster', memberId='', > members=[MemberIdentity(memberId='connect-1-0268689d-4a69-4e5a-97d8-378b53127faf', > groupInstanceId=null, reason='the consumer is being closed')]) > (org.apache.kafka.clients.NetworkClient:521) > > > > > > > > Regards / Mit freundlichen Grüßen, > > Dietmar Giljohann > > Phone: +49 6196 89 5602 > > Mobile: +49 172 6748980 > > Email: [email protected] > > Internet : www.pg.com > > > > _________________________________________________________ > > Procter and Gamble Service GmbH > > Sulzbacher Str. 40, 65824 Schwalbach am Taunus > > Aufsichtsratsvorsitzender: Heinz-Joachim Schultner Geschäftsführer: Janis > Bauer, Gabriele Hässig, Verena Neubauer, Stefan Schamberg, Ingo > Schimmelpfennig, Vijay Sitlani, Astrid Teckentrup, Matthias Weber Sitz: > Schwalbach am Taunus, Amtsgericht: Königstein im Taunus HRB 6593 Ust.ID: DE > 293 480 388 > > >
