Re: Java broker OOM due to DirectMemory

2017-05-24 Thread Ramayan Tiwari
Memory requirements for use cases such as yours should be much more reasonable. I know you currently have a dependency on the old JMX management interface. I'd suggest you look at eliminating the dependency soon, so you are free to upgrade when the time ...

Re: Java broker OOM due to DirectMemory

2017-05-16 Thread Ramayan Tiwari
... flight this week investigating alternative approaches which I am hoping will conclude by the end of week. I should be able to update you then. Thanks, Keith. On 12 May 2017 at 20:58, Ramayan Tiwari <ramayan.tiw...@gmail.com> wrote: Hi Alex, ...

Re: Java broker OOM due to DirectMemory

2017-05-12 Thread Ramayan Tiwari
... introducing lower and upper thresholds for 'flow to disk'. It seems like a good idea and we will try to implement it early this week, on trunk first. Kind Regards, Alex. On 5 May 2017 at 23:49, Ramayan Tiwari <ramayan.tiw...@gmail.com> wrote: Hi ...
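The 'flow to disk' thresholds mentioned above are driven by broker context variables. A minimal sketch of overriding one at startup; the variable name `broker.flowToDiskThreshold`, the value, and the use of the start script's `QPID_OPTS` hook are assumptions to verify against the documentation for your Broker-J version:

```
# Sketch only: override the flow-to-disk threshold before starting the broker.
# Variable name and value are assumptions, not taken from this thread.
export QPID_OPTS="-Dbroker.flowToDiskThreshold=4294967296"   # 4 GiB, illustrative
./qpid-server
```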

Re: Java broker OOM due to DirectMemory

2017-05-05 Thread Ramayan Tiwari
... would prevent the broker from going OOM even if the compaction strategy outlined above should fail for some reason (e.g., the compaction task cannot keep up with the arrival of new messages). Currently, there are patches for the above points but ...

Re: Java broker OOM due to DirectMemory

2017-04-28 Thread Ramayan Tiwari
... attached a patch to this mail that lowers that restriction to the limit imposed by AMQP (4096 bytes). Obviously, you should not use this when using TLS. I hope this reduces the problems you are currently facing until we can complete the proper fix. Kind regards ...

Re: Java broker OOM due to DirectMemory

2017-04-21 Thread Ramayan Tiwari
... We intend to be working on these early next week and will be aiming for a fix that is back-portable to 6.0. Apologies that you have run into this defect, and thanks for reporting. Thanks, Keith. On 21 April 2017 at 10:21, Ramayan Tiwari ...

Re: Java broker OOM due to DirectMemory

2017-04-21 Thread Ramayan Tiwari
... that to see if we can get some clue. We wanted to share this new information, which might help in reasoning about the memory issue. - Ramayan. On Thu, Apr 20, 2017 at 11:20 AM, Ramayan Tiwari <ramayan.tiw...@gmail.com> wrote: Hi Keith, Thanks so much for your response ...

Re: Java broker OOM due to DirectMemory

2017-04-20 Thread Ramayan Tiwari
... this using some perf tests to enqueue with the same pattern; will update with the findings. Thanks, Ramayan. On Wed, Apr 19, 2017 at 6:52 PM, Ramayan Tiwari <ramayan.tiw...@gmail.com> wrote: Another issue that we noticed is that when the broker goes OOM due to direct memory, it doesn't create a heap dump ...

Re: Java broker OOM due to DirectMemory

2017-04-19 Thread Ramayan Tiwari
... been able to find a way to get a heap dump for DM OOM? - Ramayan. On Wed, Apr 19, 2017 at 11:21 AM, Ramayan Tiwari <ramayan.tiw...@gmail.com> wrote: Alex, Below are the flow to disk logs from the broker having 3 million+ messages at this time. We only have one virtual host. Tim ...
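On the heap-dump question above: a direct-buffer OutOfMemoryError is raised from library code rather than by the VM's own allocator, so `-XX:+HeapDumpOnOutOfMemoryError` may not fire for it. One workaround (a sketch; the pid and output path are placeholders) is to take a dump on demand with the JDK's `jcmd` while direct-memory use is climbing:

```
# Manually dump the broker JVM's heap (pid and file path are placeholders).
# jcmd ships with the JDK; GC.heap_dump writes an HPROF file.
jcmd <broker-pid> GC.heap_dump /tmp/broker-heap.hprof
```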

Re: Java broker OOM due to DirectMemory

2017-04-19 Thread Ramayan Tiwari
... memory use {0,number,#}KB within threshold {1,number,#.##}KB. Kind Regards, Alex. On 19 April 2017 at 17:10, Ramayan Tiwari <ramayan.tiw...@gmail.com> wrote: Hi Alex, Thanks for your response, here are the details: ...

Re: Java broker OOM due to DirectMemory

2017-04-19 Thread Ramayan Tiwari
... content and receiving/sending data. Each plain connection utilizes 512K of direct memory. Each SSL connection uses 1M of direct memory. Your memory settings look OK to me. Kind Regards, Alex. On 18 April 2017 at 23:39, Ramayan Tiwari <ramayan.tiw...@gmail.com ...

Java broker OOM due to DirectMemory

2017-04-18 Thread Ramayan Tiwari
Hi All, We are using Java broker 6.0.5, with a patch to use the MultiQueueConsumer feature. We just finished deploying to production and saw a couple of instances of broker OOM due to running out of the DirectMemory buffer (exceptions at the end of this email). Here is our setup: 1. Max heap 12g, max direct ...
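For readers reconstructing the setup above: a JVM's direct-memory ceiling is set with `-XX:MaxDirectMemorySize`. A minimal sketch of such settings, assuming the `QPID_JAVA_MEM` variable honoured by the qpid-server start script; only the 12g heap figure comes from the mail, and the direct-memory value is illustrative:

```
# Illustrative memory settings for a Qpid Java broker start script.
# Only -Xmx12g is taken from the mail; the rest are assumed examples.
export QPID_JAVA_MEM="-Xmx12g -XX:MaxDirectMemorySize=8g"
```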

Re: Logging ThreadId in Java broker logs

2017-04-03 Thread Ramayan Tiwari
... keeping threads sharing the same name; on trunk this should no longer be the case. If you encounter other thread pools with this behaviour, please flag it up so we can make sure it has been fixed on trunk. Kind regards, Lorenz ...

Logging ThreadId in Java broker logs

2017-03-31 Thread Ramayan Tiwari
Hi All, After looking at logback's PatternLayout, I don't think it's possible to log the thread id by simply supplying a pattern for it. Has anyone looked into ways to achieve this? I would like to have thread ids in the log lines as well, since it appears to me that the same thread name gets assigned for ...
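Indeed, logback's PatternLayout offers `%thread` (the thread *name*) but no stock conversion word for the numeric id, so a pattern alone is not enough. A common workaround is a custom `ClassicConverter` whose `convert()` returns `String.valueOf(Thread.currentThread().getId())`, registered via `<conversionRule>`. The class name below is hypothetical, a sketch rather than anything shipped with logback:

```xml
<!-- Sketch of a logback.xml registering a hypothetical thread-id converter.
     com.example.ThreadIdConverter is an assumed custom class, not part of
     logback; it would extend ch.qos.logback.classic.pattern.ClassicConverter. -->
<configuration>
  <conversionRule conversionWord="tid"
                  converterClass="com.example.ThreadIdConverter"/>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <!-- %tid resolves through the custom converter registered above -->
      <pattern>%d %-5level [%thread tid=%tid] %logger - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO"><appender-ref ref="STDOUT"/></root>
</configuration>
```

One caveat: with AsyncAppender the converter runs on the appender's worker thread, so an id read there would be the worker's, not the caller's; capture the id eagerly (e.g., into the MDC) if async logging is in play.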

Recommended GC algorithm for Java broker

2017-02-09 Thread Ramayan Tiwari
Hi All, Has anyone done any perf testing around using different GC algorithms with the Java broker, or is there any recommendation on that? Thanks, Ramayan

Re: Qpid broker 6.0.4 performance issues

2017-01-04 Thread Ramayan Tiwari
... Heap over DM, but I am reluctant to make an explicit recommendation. Kind regards, Lorenz. P.S.: I am going on a 2-day vacation later today, but feel free to continue this conversation with others on this list. [1] https://qpid.apache.org/releases/qpi ...

Re: Qpid broker 6.0.4 performance issues

2016-12-20 Thread Ramayan Tiwari
... receive 10/190 * 7.5 GB = 395 MB, while the large queue receives 100/190 * 7.5 GB = 3950 MB. In total we allocated 10 * 250 MB + 9 * 395 MB + 1 * 3950 MB, totaling 10 GB (within bounds of rounding errors). On 19/12/16 20:48, Ramayan Tiwari wrote: ...
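The arithmetic in the allocation example above can be replayed directly (a sketch reconstructing the quoted numbers; the nine-queues-weighted-10 plus one-queue-weighted-100 model and the 250 MB per-queue base are as described in the mail, and 1 GB is treated as 1000 MB to match its figures):

```python
# Reconstruction of the proportional direct-memory split quoted above:
# a 7.5 GB pool divided among 10 queues by weight (nine weighted 10,
# one weighted 100), on top of a 250 MB base allocation per queue.
pool_mb = 7.5 * 1000                    # 7.5 GB shared pool (1 GB = 1000 MB here)
weights = [10] * 9 + [100]              # nine small queues, one large queue
total_weight = sum(weights)             # 190

shares = [pool_mb * w / total_weight for w in weights]
print(round(shares[0]))                 # small-queue share: 395 (MB)
print(round(shares[-1]))                # large-queue share: 3947 (MB)

base_mb = 250                           # per-queue base allocation from the mail
total = len(weights) * base_mb + sum(shares)
print(round(total))                     # 10000 (MB), i.e. the quoted ~10 GB
```

The small rounding drift (3947 vs the quoted 3950) is exactly the "within bounds of rounding errors" the mail mentions.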

Re: Qpid broker 6.0.4 performance issues

2016-12-19 Thread Ramayan Tiwari
... Regards, Keith. [1] http://semver.org. On 27 October 2016 at 23:19, Ramayan Tiwari <ramayan.tiw...@gmail.com> wrote: Hi Rob, I have the trunk code which I am testing with; I haven't finished the test runs yet. I was h ...

Re: Qpid broker 6.0.4 performance issues

2016-10-27 Thread Ramayan Tiwari
... did you verify that the change works for you? You said you were going to test with the trunk code... I'll discuss with the other developers tomorrow about whether we can put this change into 6.0.5. Cheers, Rob. On 27 October 2016 at 20:30, Ramayan ...

Re: Qpid broker 6.0.4 performance issues

2016-10-27 Thread Ramayan Tiwari
... Would it be possible to include test cases involving many queues and listeners (in the order of thousands of queues) for future Qpid releases, as part of standard perf testing of the broker? Thanks, Helen. On Tue, Oct 18, 2016 at 10:40 AM, Ramayan Tiwari <ramay ...

Re: Qpid broker 6.0.4 performance issues

2016-10-18 Thread Ramayan Tiwari
... On 17 October 2016 at 21:24, Ramayan Tiwari <ramayan.tiw...@gmail.com> wrote: Hi Rob, We are certainly interested in testing the "multi queue consumers" behavior with your patch in ...

Re: Qpid broker 6.0.4 performance issues

2016-10-17 Thread Ramayan Tiwari
... issue you had with this functionality before, I believe). Using this model you'd only need a small number (one?) of consumers per session. The patch I have is to add this "pull" mode for these consumers (essentially this is a preview of how all consumers will work in the future ...

Re: Qpid broker 6.0.4 performance issues

2016-10-15 Thread Ramayan Tiwari
... shall spend the rest of my afternoon pondering this... - Rob. On 15 October 2016 at 17:14, Ramayan Tiwari <ramayan.tiw...@gmail.com> wrote: Hi Rob, Thanks so much for your response. We use transacted sessions with non- ...

Re: Qpid broker 6.0.4 performance issues

2016-10-15 Thread Ramayan Tiwari
... a little more information on the usage pattern: are you using transactions, auto-ack or client ack? What prefetch size are you using? How large are your messages? Thanks, Rob. On 14 October 2016 at 23:46, Ramayan Tiwari <ramayan.tiw...@gmail.com> wrote: ...

Qpid broker 6.0.4 performance issues

2016-10-14 Thread Ramayan Tiwari
Hi All, We have been validating the new Qpid broker (version 6.0.4), have compared it against broker version 0.32, and are seeing major regressions. Following is a summary of our test setup and results: 1. Test Setup: a) Qpid broker runs on a dedicated host (12 cores, 32 GB RAM). b) ...

Re: org.apache.qpid.server.store.StoreException

2016-10-06 Thread Ramayan Tiwari
... change... I think you may have already patched your 0.32 broker anyway, in which case you should be able to add the patch I put on the JIRA. On 6 October 2016 at 23:33, Ramayan Tiwari <ramayan.tiw...@gmail.com> wrote: Hi Rob, Thanks so much ...

Re: org.apache.qpid.server.store.StoreException

2016-10-06 Thread Ramayan Tiwari
... to do a patch release for 0.32, but we will likely be putting out a new 6.0.x release soon, and soon after a 6.1 release. Would you be able to upgrade to one of these, or would you prefer me to send you a patch file that you could apply to the 0.32 source to test ...

org.apache.qpid.server.store.StoreException

2016-10-06 Thread Ramayan Tiwari
Hi, We ran into this StoreException in our production environment multiple times on different brokers, which caused broker shutdowns. We are running the 0.32 Java broker with the 0.16 client. I see that this was reported and fixed here: https://issues.apache.org/jira/browse/QPID-4012. This is still ...

Re: Removal of JMX management channel

2016-08-23 Thread Ramayan Tiwari
... after a deprecation period. I don't know exactly when JMX was first deprecated, but in 6.0.x we stepped up the deprecation level by removing the documentation and the JMX ports from the default configuration. However, the actual code was not removed until ...

Re: Flow to disk behavior with In Memory messages

2016-08-23 Thread Ramayan Tiwari
... will run out of memory. Memory VirtualHost(Node)s are more for testing purposes than anything else. Kind Regards, Lorenz. On 23/08/16 03:02, Ramayan Tiwari wrote: Hi all, As I understand, flow to disk tries to protect Direct ...

Re: Removal of JMX management channel

2016-08-22 Thread Ramayan Tiwari
Found the JIRA for removing JMX: https://issues.apache.org/jira/browse/QPID-6915. However, I couldn't find the reason why JMX is being removed. Could you point out the reasons why JMX is no longer supported? Thanks, Ramayan. On Mon, Aug 22, 2016 at 5:05 PM, Ramayan Tiwari <ramayan.tiw...@gmail.com> ...

Flow to disk behavior with In Memory messages

2016-08-22 Thread Ramayan Tiwari
Hi all, As I understand, flow to disk tries to protect Direct Memory by sending new messages to disk when using some form of persistence. What is the behavior when the virtual host node type is "Memory"? I was looking at the implementation; StoredMemoryMessage doesn't seem to do anything in the case of ...

Removal of JMX management channel

2016-08-22 Thread Ramayan Tiwari
Hi all, We (at Salesforce) are currently using the Qpid Java 0.32 broker and are in the process of moving to 6.0.4. We rely heavily on JMX to perform various kinds of broker monitoring and management. There is no mention of JMX management in the documentation for broker 6.0.4 [1]. The documentation of ...

Problems with setting up Qpid Java in Eclipse

2016-05-10 Thread Ramayan Tiwari
Hi All, I have been trying to set up my dev environment for Qpid Java [1] using Eclipse, and I have not been successful yet. Following are the approaches I have tried so far: 1. Using Maven pom.xml to import projects: a) After getting the source, I do File -> Import -> Import Maven Project. b) Select ...