Hi Kasun,

We can now send more messages to a single Kafka server. I have shared the
comparison results[1] for the JMS producer vs. the Kafka producer transport in
the ESB.

[1] -
https://docs.google.com/a/wso2.com/spreadsheets/d/1LrzxDh98V8PBfXctQ6v6MWZitCKg9Bm41ju5gHZTNWE/edit#gid=0


Thanks,
Kathees

On Thu, Oct 16, 2014 at 12:04 PM, Kathees Rajendram <[email protected]>
wrote:

> Hi Kasun,
>
> I have tested the producer side of the Kafka ESB connector and shared the
> document[1].
> The Kafka inbound average throughput is around 1 message/s. The current Kafka
> inbound implementation has a limitation: the polling interval cannot be
> reduced below 1000 ms.
>
> [1] -
> https://docs.google.com/a/wso2.com/spreadsheets/d/1LrzxDh98V8PBfXctQ6v6MWZitCKg9Bm41ju5gHZTNWE/edit?usp=sharing
>
> Thanks,
> Kathees
>
> On Thu, Oct 16, 2014 at 10:02 AM, Kasun Indrasiri <[email protected]> wrote:
>
>> Hi Kathees,
>>
>> Can you please share the performance stats for the consumer and producer
>> sides of the Kafka inbound endpoint and connector?
>>
>> On Mon, Oct 6, 2014 at 10:29 AM, Kathees Rajendram <[email protected]>
>> wrote:
>>
>>> Hi Dushan,
>>>
>>> Yes, I set it. The cause of this issue was that the producer connection was
>>> not being closed properly. With that fixed, the open-file limit is now
>>> sufficient.
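
The fix described above — releasing the producer's sockets when it is no longer
needed — can be sketched with the 0.8-era kafka.javaapi producer that appears in
the stack trace below. The broker address and topic "test1" are taken from the
logs; this is a minimal illustration, not the connector's actual code. In the
connector itself, reusing one long-lived producer instead of creating one per
message avoids the descriptor churn entirely.

```java
import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class KafkaSendOnce {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092"); // assumed broker address
        props.put("serializer.class", "kafka.serializer.StringEncoder");

        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
        try {
            producer.send(new KeyedMessage<String, String>("test1", "hello"));
        } finally {
            // close() releases the underlying sockets; a producer that is
            // created per message and never closed leaks file descriptors
            // until the process hits its open-file limit.
            producer.close();
        }
    }
}
```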
>>>
>>> Thanks,
>>> Kathees
>>>
>>> On Sun, Oct 5, 2014 at 7:43 PM, Dushan Abeyruwan <[email protected]>
>>> wrote:
>>>
>>>> ulimit?
>>>>
>>>> Have you set it in your OS?
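
One quick way to answer the ulimit question from inside the process itself is
the HotSpot-specific UnixOperatingSystemMXBean, which exposes the
file-descriptor counters the JVM actually sees (it is only available on
Unix-like JVMs, so the sketch falls back to -1 elsewhere):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdLimitCheck {
    // Max file descriptors the JVM process may hold, or -1 if not a Unix JVM
    public static long maxFds() {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            return ((UnixOperatingSystemMXBean) os).getMaxFileDescriptorCount();
        }
        return -1;
    }

    // File descriptors currently open in this process, or -1 if not a Unix JVM
    public static long openFds() {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            return ((UnixOperatingSystemMXBean) os).getOpenFileDescriptorCount();
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println("open fds: " + openFds());
        System.out.println("max fds:  " + maxFds());
    }
}
```

Note that a limit raised in /etc/security/limits.conf only applies to sessions
started after the change, so the ESB must be restarted from a fresh login for
the new limit to take effect.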
>>>>
>>>> On Sat, Oct 4, 2014 at 8:48 PM, Kathees Rajendram <[email protected]>
>>>> wrote:
>>>>
>>>>>
>>>>> Hi,
>>>>>
>>>>> I tried to send 5,000 messages to the Kafka broker using JMeter (10
>>>>> threads, 500 messages per thread, each message 105 bytes). After about
>>>>> 2,000 messages I get the exception below, which reports a "Too many open
>>>>> files" error. So I increased the open-files limit on my machine, following
>>>>>
>>>>>
>>>>> http://askubuntu.com/questions/162229/how-do-i-increase-the-open-files-limit-for-a-non-root-user
>>>>>
>>>>> and increased the buffer size (socket.request.max.bytes) in the
>>>>> server.properties file, but I am still getting the same exception.
>>>>>
>>>>> What is the cause of this issue?
>>>>>
>>>>>
>>>>> [2014-10-03 12:31:07,051] ERROR - Utils$ fetching topic metadata for
>>>>> topics [Set(test1)] from broker
>>>>> [ArrayBuffer(id:0,host:localhost,port:9092)] failed
>>>>> kafka.common.KafkaException: fetching topic metadata for topics
>>>>> [Set(test1)] from broker [ArrayBuffer(id:0,host:localhost,port:9092)] 
>>>>> failed
>>>>>     at
>>>>> kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:67)
>>>>>     at
>>>>> kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
>>>>>     at
>>>>> kafka.producer.async.DefaultEventHandler$$anonfun$handle$2.apply$mcV$sp(DefaultEventHandler.scala:78)
>>>>>     at kafka.utils.Utils$.swallow(Utils.scala:167)
>>>>>     at kafka.utils.Logging$class.swallowError(Logging.scala:106)
>>>>>     at kafka.utils.Utils$.swallowError(Utils.scala:46)
>>>>>     at
>>>>> kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:78)
>>>>>     at kafka.producer.Producer.send(Producer.scala:76)
>>>>>     at kafka.javaapi.producer.Producer.send(Producer.scala:33)
>>>>>     at
>>>>> org.wso2.carbon.connector.KafkaProduce.send(KafkaProduce.java:71)
>>>>>     at
>>>>> org.wso2.carbon.connector.KafkaProduce.connect(KafkaProduce.java:28)
>>>>>     at
>>>>> org.wso2.carbon.connector.core.AbstractConnector.mediate(AbstractConnector.java:32)
>>>>>     at
>>>>> org.apache.synapse.mediators.ext.ClassMediator.mediate(ClassMediator.java:78)
>>>>>     at
>>>>> org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:77)
>>>>>     at
>>>>> org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:47)
>>>>>     at
>>>>> org.apache.synapse.mediators.template.TemplateMediator.mediate(TemplateMediator.java:77)
>>>>>     at
>>>>> org.apache.synapse.mediators.template.InvokeMediator.mediate(InvokeMediator.java:129)
>>>>>     at
>>>>> org.apache.synapse.mediators.template.InvokeMediator.mediate(InvokeMediator.java:78)
>>>>>     at
>>>>> org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:77)
>>>>>     at
>>>>> org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:47)
>>>>>     at
>>>>> org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:131)
>>>>>     at
>>>>> org.apache.synapse.core.axis2.ProxyServiceMessageReceiver.receive(ProxyServiceMessageReceiver.java:166)
>>>>>     at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
>>>>>     at
>>>>> org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:344)
>>>>>     at
>>>>> org.apache.synapse.transport.passthru.ServerWorker.processEntityEnclosingRequest(ServerWorker.java:385)
>>>>>     at
>>>>> org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:183)
>>>>>     at
>>>>> org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
>>>>>     at
>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>>     at
>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>>     at java.lang.Thread.run(Thread.java:745)
>>>>> Caused by: java.net.SocketException: Too many open files
>>>>>     at sun.nio.ch.Net.socket0(Native Method)
>>>>>     at sun.nio.ch.Net.socket(Net.java:423)
>>>>>     at sun.nio.ch.Net.socket(Net.java:416)
>>>>>     at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:104)
>>>>>     at
>>>>> sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:60)
>>>>>     at java.nio.channels.SocketChannel.open(SocketChannel.java:142)
>>>>>     at kafka.network.BlockingChannel.connect(BlockingChannel.scala:48)
>>>>>     at kafka.producer.SyncProducer.connect(SyncProducer.scala:141)
>>>>>     at
>>>>> kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:156)
>>>>>     at
>>>>> kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
>>>>>     at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
>>>>>     at
>>>>> kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:53)
>>>>>     ... 29 more
>>>>>
>>>>>
>>>>>
>>>>> Thanks,
>>>>> Kathees
>>>>>
>>>>>
>>>>> --
>>>>> Kathees
>>>>> Software Engineer,
>>>>> email: [email protected]
>>>>> mobile: +94772596173
>>>>>
>>>>> _______________________________________________
>>>>> Dev mailing list
>>>>> [email protected]
>>>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Dushan Abeyruwan | Associate Tech Lead
>>>> Integration Technologies Team
>>>> PMC Member Apache Synpase
>>>> WSO2 Inc. http://wso2.com/
>>>> Blog:http://dushansview.blogspot.com/
>>>> Mobile:(0094)713942042
>>>>
>>>>
>>>
>>>
>>> --
>>> Kathees
>>> Software Engineer,
>>> email: [email protected]
>>> mobile: +94772596173
>>>
>>> _______________________________________________
>>> Dev mailing list
>>> [email protected]
>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>
>>>
>>
>>
>> --
>> Kasun Indrasiri
>> Software Architect
>> WSO2, Inc.; http://wso2.com
>> lean.enterprise.middleware
>>
>> cell: +94 77 556 5206
>> Blog : http://kasunpanorama.blogspot.com/
>>
>
>
>
> --
> Kathees
> Software Engineer,
> email: [email protected]
> mobile: +94772596173
>



-- 
Kathees
Software Engineer,
email: [email protected]
mobile: +94772596173
_______________________________________________
Dev mailing list
[email protected]
http://wso2.org/cgi-bin/mailman/listinfo/dev
