Memory requirements for use cases such as yours
> > should be much more reasonable.
> >
> > I know you currently have a dependency on the old JMX management
> > interface. I'd suggest you look at eliminating the dependency soon,
> > so you are free to upgrade when the time
> flight this week investigating alternative approaches which I am
> hoping will conclude by the end of week. I should be able to update
> you then.
>
> Thanks Keith
>
> On 12 May 2017 at 20:58, Ramayan Tiwari <ramayan.tiw...@gmail.com> wrote:
> > Hi Alex,
> >
> >
roducing lower and upper thresholds for 'flow to disk'. It
> seems like a good idea and we will try to implement it early this week on
> trunk first.
>
> Kind Regards,
> Alex
>
>
> On 5 May 2017 at 23:49, Ramayan Tiwari <ramayan.tiw...@gmail.com> wrote:
>
> > Hi
ould prevent the broker from going OOM even if the
> > compaction strategy outlined above
> > should fail for some reason (e.g., the compaction task cannot keep up
> > with the arrival of new messages).
> >
> > Currently, there are patches for the above points but
tached a patch to this mail that lowers that restriction to the limit
> imposed by AMQP (4096 Bytes).
> Obviously, you should not use this when using TLS.
>
>
> I hope this reduces the problems you are currently facing until we can
> complete the proper fix.
>
> Kind regards
.
> We intend to be working on these early next week and will be aiming
> for a fix that is back-portable to 6.0.
>
> Apologies that you have run into this defect and thanks for reporting.
>
> Thanks, Keith
>
>
>
>
>
>
>
> On 21 April 2017 at 10:21, Ramayan Tiwa
that to see if we can get some clue. We
wanted to share this new information which might help in reasoning about
the memory issue.
- Ramayan
On Thu, Apr 20, 2017 at 11:20 AM, Ramayan Tiwari <ramayan.tiw...@gmail.com>
wrote:
> Hi Keith,
>
> Thanks so much for your respo
this using some perf tests to enqueue with
same pattern, will update with the findings.
Thanks
Ramayan
On Wed, Apr 19, 2017 at 6:52 PM, Ramayan Tiwari <ramayan.tiw...@gmail.com>
wrote:
> Another issue that we noticed is that when the broker goes OOM due to direct
> memory, it doesn't create a heap dum
n able to find a way to get to heap dump for DM OOM?
- Ramayan
On Wed, Apr 19, 2017 at 11:21 AM, Ramayan Tiwari <ramayan.tiw...@gmail.com>
wrote:
> Alex,
>
> Below are the flow to disk logs from the broker, which has 3 million+ messages at
> this time. We only have one virtual host. Tim
ry use
> {0,number,#}KB within threshold {1,number,#.##}KB
>
> Kind Regards,
> Alex
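An aside on the log line quoted just above: the `{0,number,#}` and `{1,number,#.##}` placeholders are `java.text.MessageFormat` number patterns, so the first argument prints as a plain integer and the second with at most two decimal places. A minimal sketch of how such a line expands (the surrounding template text here is illustrative, not the broker's exact wording):

```java
import java.text.MessageFormat;
import java.util.Locale;

public class FlowToDiskLogDemo {
    // Expands a MessageFormat template of the same shape as the quoted
    // broker log line; Locale.US pins the decimal separator to '.'.
    static String render(long usedKb, double thresholdKb) {
        MessageFormat fmt = new MessageFormat(
                "Message memory use {0,number,#}KB within threshold {1,number,#.##}KB",
                Locale.US);
        return fmt.format(new Object[] { usedKb, thresholdKb });
    }

    public static void main(String[] args) {
        System.out.println(render(1536L, 12582.91));
    }
}
```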
>
>
> On 19 April 2017 at 17:10, Ramayan Tiwari <ramayan.tiw...@gmail.com>
> wrote:
>
> > Hi Alex,
> >
> > Thanks for your response, here are the details:
> >
>
ntent and
> receiving/sending data. Each plain connection utilizes 512K of direct
> memory. Each SSL connection uses 1M of direct memory. Your memory settings
> look Ok to me.
>
> Kind Regards,
> Alex
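Those per-connection figures make it easy to sanity-check a broker's direct-memory floor. A quick sketch using the numbers from the reply above (512K per plain connection, 1M per SSL connection); the connection counts are made up for illustration, and direct memory held for message content comes on top of this:

```java
public class DirectMemoryEstimate {
    static final long PLAIN_CONN_KB = 512;  // per plain AMQP connection (figure quoted above)
    static final long SSL_CONN_KB = 1024;   // per SSL/TLS connection (figure quoted above)

    // Lower bound on direct memory consumed by connection buffers alone.
    static long connectionBufferKb(int plainConnections, int sslConnections) {
        return plainConnections * PLAIN_CONN_KB + sslConnections * SSL_CONN_KB;
    }

    public static void main(String[] args) {
        // Hypothetical fleet: 500 plain + 100 TLS connections.
        long kb = connectionBufferKb(500, 100);
        System.out.println(kb + " KB = " + kb / 1024 + " MB");
    }
}
```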
>
>
> On 18 April 2017 at 23:39, Ramayan Tiwari <ramayan.tiw...@gmail.co
Hi All,
We are using Java broker 6.0.5, with a patch to use the MultiQueueConsumer
feature. We just finished deploying to production and saw a couple of
instances of broker OOM due to running out of DirectMemory buffers
(exceptions at the end of this email).
Here is our setup:
1. Max heap 12g, max direct
eping threads sharing the same name, on trunk this should
>> no longer be the case.
>> If you encounter other thread pools with this behaviour please flag it up
>> so we can make sure it has been fixed on trunk.
>>
>> Kind regards,
>> Lorenz
>>
>> [
Hi All,
After looking at logback's PatternLayout, I don't think it's possible to log
the thread id by simply supplying a pattern for it. Has anyone looked into ways
to achieve it?
I would like to have thread ids in the log lines as well, since it appears
to me that the same thread name gets assigned for
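For what it's worth: logback's built-in `%thread` conversion word prints only the thread name; getting the numeric id would need a custom `ClassicConverter` registered via `<conversionRule>` in `logback.xml` (and with an `AsyncAppender` the converter runs on the appender's thread, so the id would have to be captured when the event is created). The stdlib-only sketch below just shows why the id disambiguates where a reused name cannot:

```java
public class ThreadIdDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            Thread t = Thread.currentThread();
            System.out.println(t.getName() + " id=" + t.getId());
        };
        // Two threads deliberately given the same name, as pooled broker
        // threads might be: the name is ambiguous, the id is not.
        Thread a = new Thread(task, "IoReceiver");
        Thread b = new Thread(task, "IoReceiver");
        a.start(); a.join();
        b.start(); b.join();
    }
}
```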
Hi All,
Has anyone done any perf testing around using different GC algorithms with
the Java broker or is there any recommendation on that?
Thanks
Ramayan
ur Heap over DM but I am reluctant to make
> an explicit recommendation.
>
> Kind regards,
> Lorenz
>
> P.S.: I am going on a 2 day vacation later today but feel free to
> continue this conversation with others on this list.
>
> [1] https://qpid.apache.org/releases/qpi
eive 10/190 * 7.5 GB = 395 MB
>while the large Queue receives 100/190 * 7.5 GB = 3950 MB.
>
> * In total we allocated 10 * 250 MB + 9 * 395 MB + 1 * 3950 MB
>totaling 10 GB (within bounds of rounding errors).
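The allocation above checks out arithmetically (taking 1 GB = 1000 MB, as the rounded figures suggest). A quick verification sketch; the weights 10 and 100 and the fixed 250 MB per queue come from the elided context of this thread:

```java
public class QueueShareCheck {
    static final double POOL_MB = 7500;  // the 7.5 GB shared proportionally

    static double smallShareMb() { return 10.0 / 190 * POOL_MB; }   // ~395 MB
    static double largeShareMb() { return 100.0 / 190 * POOL_MB; }  // ~3950 MB

    // 10 fixed allocations of 250 MB + 9 small shares + 1 large share.
    static double totalMb() { return 10 * 250 + 9 * smallShareMb() + largeShareMb(); }

    public static void main(String[] args) {
        System.out.printf("small=%.0f MB, large=%.0f MB, total=%.0f MB%n",
                smallShareMb(), largeShareMb(), totalMb());
    }
}
```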
>
>
>
> On 19/12/16 20:48, Ramayan Tiwari wro
gards, Keith.
>
>
> [1] http://semver.org
>
>
> On 27 October 2016 at 23:19, Ramayan Tiwari <ramayan.tiw...@gmail.com>
> wrote:
> > Hi Rob,
> >
> > I have the trunk code which I am testing with; I haven't finished the
> test
> > runs yet. I was h
did you verify that the change works for you? You said you were going to
> test with the trunk code...
>
> I'll discuss with the other developers tomorrow about whether we can put
> this change into 6.0.5.
>
> Cheers,
> Rob
>
> On 27 October 2016 at 20:30, Ramayan
uld it be possible to include
> test cases involving many queues and listeners (on the order of thousands
> of queues) for future Qpid releases, as part of standard perf testing of
> the broker?
>
> Thanks,
> Helen
>
> On Tue, Oct 18, 2016 at 10:40 AM, Ramayan Tiwari <ramay
t; >
> > On 17 October 2016 at 21:24, Ramayan Tiwari <ramayan.tiw...@gmail.com>
> > wrote:
> >
> >> Hi Rob,
> >>
> >> We are certainly interested in testing the "multi queue consumers"
> >> behavior
> >> with your patch in
ssue you had with
> this functionality before, I believe). Using this model you'd only need a
> small number (one?) of consumers per session. The patch I have is to add
> this "pull" mode for these consumers (essentially this is a preview of how
> all consumers will work in the future
hall
> spend the rest of my afternoon pondering this...
>
> - Rob
>
> On 15 October 2016 at 17:14, Ramayan Tiwari <ramayan.tiw...@gmail.com>
> wrote:
>
> > Hi Rob,
> >
> > Thanks so much for your response. We use transacted sessions with
> > non-
gt; little more information on the usage pattern - are you using transactions,
> auto-ack or client ack? What prefetch size are you using? How large are
> your messages?
>
> Thanks,
> Rob
>
> On 14 October 2016 at 23:46, Ramayan Tiwari <ramayan.tiw...@gmail.com>
> wro
Hi All,
We have been validating the new Qpid broker (version 6.0.4), comparing it
against broker version 0.32, and are seeing major regressions.
Following is a summary of our test setup and results:
*1. Test Setup *
*a). *Qpid broker runs on a dedicated host (12 cores, 32 GB RAM).
*b).*
hange... I think you may have already patched your 0.32
> broker anyway, in which case you should be able to add the patch I put
> on the JIRA.
>
> On 6 October 2016 at 23:33, Ramayan Tiwari <ramayan.tiw...@gmail.com>
> wrote:
> > Hi Rob,
> >
> > Thanks so muc
to do a patch release for 0.32, but we will likely be
> > putting out a new 6.0.x release soon, and soon after a 6.1 release.
> > Would you be able to upgrade to one of these, or would you prefer me
> > to send you a patch file that you could apply to the 0.32 source to
> > tes
Hi,
We ran into this StoreException in our production environment multiple
times on different brokers, causing broker shutdowns. We are running the
0.32 Java broker with the 0.16 client. I see that this was reported and fixed
here:
https://issues.apache.org/jira/browse/QPID-4012
This is still
gt; after a deprecation period. I don't know exactly when JMX was
> > first deprecated but in 6.0.x we stepped up the deprecation level
> > by removing the documentation and the JMX ports from the default
> > configuration. However, the actual code was not removed until
> &
ll run out of memory.
>
> Memory VirtualHost(Node)s are more for testing purposes than
> anything else.
>
>
> Kind Regards,
> Lorenz
>
>
>
> On 23/08/16 03:02, Ramayan Tiwari wrote:
>
>> Hi all,
>>
>> As I understand, flow to disk tries to protect D
Found the JIRA for removing JMX
https://issues.apache.org/jira/browse/QPID-6915
However, I couldn't find the reason why JMX is being removed. Could you
point out why JMX is no longer supported?
Thanks
Ramayan
On Mon, Aug 22, 2016 at 5:05 PM, Ramayan Tiwari <ramayan.tiw...@gmail.
Hi all,
As I understand it, flow to disk tries to protect direct memory by sending new
messages to disk when using some form of persistent store.
What is the behavior when the virtual host node type is "Memory"? I was looking
at the implementation; StoredMemoryMessage doesn't seem to do anything in
case of
Hi all,
We (at Salesforce) are currently using the Qpid Java 0.32 broker and are in the
process of moving to 6.0.4. We rely heavily on JMX to perform various kinds
of broker monitoring and management.
There is no mention of JMX Management in the documentation of broker 6.0.4
[1]. The documentation of
Hi All,
I have been trying to set up my dev environment for Qpid Java [1] using
Eclipse and have not been successful yet. Following are the approaches I
tried so far:
*1. Using Maven pom.xml to import projects*
a) After getting the source, I do File -> Import -> Import Maven Project
b) Select