Re: QPID C++ Broker MALLOC_ARENA_MAX

2017-03-22 Thread Ted Ross

This reply was apparently lost in the email outage.  Resending...


-------- Forwarded Message --------
Subject: Re: QPID C++ Broker MALLOC_ARENA_MAX
Date: Tue, 21 Mar 2017 08:28:35 -0400
From: Ted Ross 
To: users@qpid.apache.org

Hi Clive,

We've seen this before and, as I recall, it was specific to RHEL 6 (CentOS 6).

Here's a post from Kim van der Riet from 2011 that summarizes the issue 
and a solution:


http://qpid.2158936.n2.nabble.com/qpidd-using-approx-10x-memory-tp6730073p6775634.html

-Ted

On 03/20/2017 05:55 PM, CLIVE wrote:

Hi,

Been a while since I last posted anything to the QPID newsgroup, mainly
due to the excellent reliability of the QPID C++ broker. Keep up the
good work.

But I am seeing a strange issue at a client's site that I thought I
would share with the community.

A client is running a QPID C++ Broker (version 0.32) on a CentOS 6.7
virtualized platform (8 CPUs, 32 cores, and 64G RAM) and is experiencing
memory exhaustion problems. Over the course of 5-30 days the broker's
resident memory steadily climbs until it exhausts the available memory
and the broker is killed by the kernel's OOM killer. The memory pattern
looks like some form of memory leak, but I've never seen this kind of
behavior before from a QPID C++ broker, and looking on JIRA there don't
seem to be any known memory leak issues.

The broker is running 10 threads, currently supporting 134 long-lived
connections from a range of Java JMS (Apache Camel), C++, and Python
clients, with 25 user-defined exchanges and about 100 durable ring
queues. All messages are transient. About 20GBytes of data is pushed
through the broker each day, with messages ranging from small 1K
messages to messages of around 100K.

As the broker memory consumption climbs, a 'qpid-stat -g' gives a
steady-state queue depth of about 125,000 messages totaling 660M-1GBytes
of memory. So it's not a queue depth issue.

Interestingly, when I run 'pmap -x <pid>' I see lots and lots of 64MB
allocations (about 400), with 300 additional allocations of just under
64MB.

Some searching on the web has turned up a potential candidate for the
memory consumption issue, associated with the design change made to the
glibc malloc implementation in glibc 2.10+, which introduced memory
arenas to reduce memory contention in multi-threaded processes. The
malloc implementation uses some basic math to work out an upper bound
on the arena memory for a process: number of cores * sizeof(long) *
64MB. So for our 32-core, 64-bit system that gives 32 * 8 * 64MB = 16G.

Apparently other products have had similar memory issues when they
moved from RHEL 5 to RHEL 6 (CentOS 6), as the newer OS uses glibc
2.12. The MALLOC_ARENA_MAX environment variable seems to be a way of
reducing the memory allocated to the process, with a suggested value of 4.
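
For anyone wanting to experiment, below is a minimal C++ sketch
(assumptions: glibc 2.10+ where mallopt() accepts M_ARENA_MAX; check
your malloc.h) of capping the arena count from inside a process. For a
binary you don't control, such as qpidd, setting MALLOC_ARENA_MAX=4 in
the broker's environment before it starts should have the same effect.

#include <malloc.h>   // glibc-specific: mallopt(), M_ARENA_MAX
#include <cstdio>

int main() {
    // Cap glibc malloc at 4 arenas instead of the default upper bound
    // of (number of cores * sizeof(long)) arenas of up to 64MB each.
    // This must run before threads start allocating; mallopt() returns
    // 1 on success.
    if (mallopt(M_ARENA_MAX, 4) != 1)
        std::fprintf(stderr, "mallopt(M_ARENA_MAX) not supported?\n");

    // ... start worker threads here; they now share at most 4 arenas,
    // trading a little allocator contention for a much smaller
    // address-space footprint ...
    return 0;
}

Either way, the cap has to be in place before the process goes
multi-threaded, which for an unmodified qpidd means the environment
variable route.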

Just wondered if anyone else in the community had experienced a similar
kind of broker memory issue, and what advice, if any, could be offered
to localize the problem and stop the broker chewing through 64G of RAM.

Any help/advice gratefully appreciated.

Clive




-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
For additional commands, e-mail: users-h...@qpid.apache.org






Re: Qpid Dispatch Release Plans

2017-03-22 Thread Ted Ross

Those both look doable.  I've flagged them for inclusion.

-Ted

On 03/22/2017 04:36 PM, Adel Boutros wrote:

Hello Ted,

What about:
* DISPATCH-627
* DISPATCH-624

+1 for release model.

Regards,
Adel


From: Ted Ross 
Sent: Wednesday, March 22, 2017 2:43:42 PM
To: users@qpid.apache.org
Subject: Qpid Dispatch Release Plans

A couple of topics for discussion:

Qpid Dispatch Router 0.8.0 release
==

As of now, there are 116 resolved issues queued up for 0.8.0.  I'd like
to wrap up this release in the next week.  I plan to defer all of the
pending features to the next release.

If anyone has any bug fixes they would like to see included in this
release that are not already set to "Fix Version: 0.8.0", please put
them on the list.


Release Numbering
=

I'd like to propose that the next release (after 0.8.0) be called 1.0.0.

The x.y.z release numbering would follow the normal expectations:

  - All releases with the same 'x' shall be interoperable and we commit
to not breaking anyone's configurations as the project evolves
through a single major (x) version.
  - Minor (y) releases shall be on a semi-regular cadence and shall
contain feature additions as well as rolled-up bug fixes.
  - z releases shall be for bug fixes only and will be used as-needed.


-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
For additional commands, e-mail: users-h...@qpid.apache.org







Re: Qpid Dispatch Release Plans

2017-03-22 Thread Adel Boutros
Hello Ted,

What about:
* DISPATCH-627
* DISPATCH-624

+1 for release model.

Regards,
Adel


From: Ted Ross 
Sent: Wednesday, March 22, 2017 2:43:42 PM
To: users@qpid.apache.org
Subject: Qpid Dispatch Release Plans

A couple of topics for discussion:

Qpid Dispatch Router 0.8.0 release
==

As of now, there are 116 resolved issues queued up for 0.8.0.  I'd like
to wrap up this release in the next week.  I plan to defer all of the
pending features to the next release.

If anyone has any bug fixes they would like to see included in this
release that are not already set to "Fix Version: 0.8.0", please put
them on the list.


Release Numbering
=

I'd like to propose that the next release (after 0.8.0) be called 1.0.0.

The x.y.z release numbering would follow the normal expectations:

  - All releases with the same 'x' shall be interoperable and we commit
to not breaking anyone's configurations as the project evolves
through a single major (x) version.
  - Minor (y) releases shall be on a semi-regular cadence and shall
contain feature additions as well as rolled-up bug fixes.
  - z releases shall be for bug fixes only and will be used as-needed.


-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
For additional commands, e-mail: users-h...@qpid.apache.org



Re: [qpid-proton-cpp] default_container does not implement thread safety at all?

2017-03-22 Thread Alan Conway
On Wed, 2017-03-15 at 22:32 +, Adel Boutros wrote:
> Hello,
> 
> As Alan said, we have, for example, "abused" schedule in C++. Today
> we run the container on another thread and we have a shared queue
> between the container thread and the other threads. We call schedule
> very frequently with a delay of 0. Whenever the container thread
> wakes, it checks the queue to send events, then calls schedule again.
> The access to the queue is protected by locks.
> 
> The downside is that the container thread burns a full CPU, but we
> can limit this by using a delay higher than 0 on schedule.
> 
> Regards,
> Adel
> 

And to be clear, this is an abuse to hack out of a short-term hole; we
will have a properly behaved solution soon.
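
For readers trying to picture the workaround, here is a library-agnostic
sketch of its shape (plain std::thread-era primitives, not actual proton
API calls, since the exact container/schedule signatures vary by
release):

#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <utility>

// Queue shared between the container thread and all producer threads.
static std::mutex g_lock;
static std::queue<std::function<void()>> g_pending;

// Any thread: hand a unit of work to the container thread.
void enqueue_work(std::function<void()> f) {
    std::lock_guard<std::mutex> g(g_lock);
    g_pending.push(std::move(f));
}

// Container thread: body of the handler that the repeated schedule(0)
// call keeps invoking. Drains the queue, runs each item on the
// container thread; the real code would then re-arm the timer (a
// delay > 0 trades wake-up latency for less CPU burn).
void on_timer() {
    std::queue<std::function<void()>> batch;
    {
        std::lock_guard<std::mutex> g(g_lock);
        std::swap(batch, g_pending);
    }
    while (!batch.empty()) {
        batch.front()();
        batch.pop();
    }
    // container.schedule(delay, on_timer);  // pseudo-call: re-arm
}

int main() {
    enqueue_work([] { std::puts("sent from the container thread"); });
    on_timer();  // in real code this is driven by the container
    return 0;
}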

> 
> From: Alan Conway 
> Sent: Wednesday, March 15, 2017 10:51:24 PM
> To: users@qpid.apache.org
> Subject: Re: [qpid-proton-cpp] default_container does not implement
> thread safety at all?
> 
> On Tue, 2017-03-14 at 01:23 +0200, Fj wrote:
> > Hello, I'm mightily confused.
> > 
> > There's
> > https://qpid.apache.org/releases/qpid-proton-0.17.0/proton/cpp/api/md_mt.html
> > that says in no uncertain terms that I can use event_loop.inject() to
> > execute a callback on the thread that runs the event loop. There's an
> > examples/cpp/mt/broker.cpp linked from the above that's supposed to be
> > a thread safe implementation of a thread pool running a bunch of
> > containers.
> > 
> > But after "grep inject" and carefully sifting through the results, it
> > seems that after some indirection it all ends up calling
> > 
> > // FIXME aconway 2016-06-07: this is not thread safe. It is sufficient
> > // for using default_container::schedule() inside a handler but not
> > // for inject() from another thread.
> > bool event_loop::impl::inject(void_function0& f) {
> >     try { f(); } catch(...) {}
> >     return true;
> > }
> > 
> > What the hell? Blanket exception suppression is just the icing on the
> > cake; nothing here even resembles doing the actual hard thing: telling
> > the reactor to inject a new custom event in a threadsafe manner.
> 
> Guilty as charged, m'lud. This was work in progress when we started to
> redo the guts in a more profound way. The Real Thing is coming: it is
> available in C now as pn_proactor, and the C++ binding is being
> refactored on top of it.
> 
> > Am I missing something obvious, or is the documentation there, and in
> > other places, and the example, chemically pure wishful thinking, like,
> > it would be pretty nice if we supported multithreading, and it would
> > work in such and such ways then, but unfortunately we don't, as a
> > matter of fact?
> 
> The state of the release is probably not clearly documented: the C++ MT
> interfaces *allow* you to build an MT app that replaces the
> proton::container (as demoed in the mt_broker) BUT the
> proton::container itself is still not multithreaded. It has a
> compatible API because it will be in future.
> 
> > If so, maybe you gals and guys should fix the documentation, and not
> > just with "experimental" but with "DOESN'T WORK AT ALL" prominent on
> > the page?
> 
> I agree the doc is not clear; next release it should work as expected
> and not be as obscurely caveated.
> 
> > On more constructive notes:
> > 
> > 1) do people maybe use some custom containers, similar to the
> > epoll_container in the examples (which doesn't support schedule(),
> > though, I have to point out)? If so, I much desire to see one.
> 
> Yes, but the plan is to fix the non-custom container to be fully
> thread-safe so you don't have to unless you have special needs. You can
> also implement directly in C if you have very special needs.
> 
> > 2) why do you attempt to bolt on your event_loop thing to Connection
> > and below when the only thing that has the run() method is Container,
> > and therefore that's the scope that any worker thread would have?
> > It's the Container that should have the inject() method. Underlying C
> > API notwithstanding.
> 
> Nope: the threading model is serialized-per-connection, so functions
> that operate only on connection state and are triggered from an event
> handler don't need to be locked. Only interactions that cross
> connections (e.g. one connection queues something that a subscriber on
> another connection needs to wake up for) need sync. This is
> demonstrated with the mt_broker stuff. That's why injecting is
> per-connection.
> 
> > 3) What's the best way forward if I really have to have a threadsafe
> > container that I can inject some event into threadsafely:
> 
> The example demonstrates how you can do it, but is incomplete as you
> point out. Soon it will be built in.
> 
> >   3a) does the C API support thread safety? Like, the whole problem
> > is touching the reactor's job queue or whatever it has with taking an
> > appropriate lock, and it must 
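
To make Alan's serialized-per-connection point above concrete, here is
a generic sketch (not proton API; all names are illustrative) of why
only cross-connection interactions need a lock:

#include <functional>
#include <mutex>
#include <queue>
#include <string>
#include <vector>

// Each connection's events are dispatched by one thread at a time, so
// per-connection state needs no lock from its own handlers.
struct Connection {
    std::vector<std::string> local_state;   // touched only by handlers

    // The one synchronized point: another connection's thread may
    // inject work to be run on *this* connection's event loop.
    std::mutex inject_lock;
    std::queue<std::function<void()>> injected;

    void inject(std::function<void()> f) {   // callable from any thread
        std::lock_guard<std::mutex> g(inject_lock);
        injected.push(std::move(f));
        // real code would also wake this connection's event loop
    }

    void on_message(const std::string& m) {  // this connection's thread
        local_state.push_back(m);            // no lock needed
    }

    void drain_injected() {                  // this connection's thread
        std::lock_guard<std::mutex> g(inject_lock);
        while (!injected.empty()) { injected.front()(); injected.pop(); }
    }
};

// Cross-connection hand-off, e.g. a publisher waking a subscriber on
// another connection: the only place synchronization happens.
void forward(Connection& to, const std::string& m) {
    to.inject([&to, m] { to.on_message(m); });
}

int main() {
    Connection a, b;
    (void)a;              // a's handler would call forward(b, ...)
    forward(b, "hello");
    b.drain_injected();   // runs on b's event-loop thread in real code
    return 0;
}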

Re: [DISCUSS] Drop 0-10 based JCA/RA artefacts from Qpid JMS 0-x Client.

2017-03-22 Thread Keith W
Resending... the previous mail doesn't seem to have made it to the
users@q.a.o list.

On 22 March 2017 at 08:52, Keith W  wrote:
> Two JIRAs were raised for this work:
>
> QPID-7716 - Remove AMQP 0-10 based JCA/RA components from Qpid JMS 0-x Client
> QPID-7717 - Remove tests and dependencies associated with the AMQP
> 0-10 JCA/RA components from Java Broker system test suite.
>
> This work is now completed.
>
> On 20 March 2017 at 11:26, Justin Ross  wrote:
>> +1.  Thanks for raising this, Keith.
>>
>> On Sun, Mar 19, 2017 at 3:54 PM, Keith W  wrote:
>>>
>>> Hi all,
>>>
>>> I'd like to propose that we drop the AMQP 0-10 based JCA (J2EE
>>> Connector Architecture) and RA (Resource Adaptor) artefacts from the
>>> next major Qpid JMS 0-x Client release (6.3.0).
>>>
>>> The artefacts in question have the following Maven coordinates:
>>>
>>> https://mvnrepository.com/artifact/org.apache.qpid/qpid-jca
>>> https://mvnrepository.com/artifact/org.apache.qpid/qpid-ra
>>>
>>> I think these artefacts are already dead.  We stopped linking to the
>>> JCA and RA binary artefacts from the Qpid components pages some time
>>> ago. There have been no feature or defect fix commits to these modules
>>> since 2013 and there is no JIRA or mailing list activity.
>>>
>>> These artefacts will continue to be released from the *existing*
>>> combined Qpid Broker for Java/Qpid JMS 0-x Client 6.1.x and 6.0.x
>>> release lines until defect fixes on these lines are ceased.
>>>
>>> Comments?
>>>
>>> Keith.
>>>
>>

-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
For additional commands, e-mail: users-h...@qpid.apache.org



Qpid Dispatch Release Plans

2017-03-22 Thread Ted Ross

A couple of topics for discussion:

Qpid Dispatch Router 0.8.0 release
==

As of now, there are 116 resolved issues queued up for 0.8.0.  I'd like 
to wrap up this release in the next week.  I plan to defer all of the 
pending features to the next release.


If anyone has any bug fixes they would like to see included in this 
release that are not already set to "Fix Version: 0.8.0", please put 
them on the list.



Release Numbering
=

I'd like to propose that the next release (after 0.8.0) be called 1.0.0.

The x.y.z release numbering would follow the normal expectations:

 - All releases with the same 'x' shall be interoperable and we commit
   to not breaking anyone's configurations as the project evolves
   through a single major (x) version.
 - Minor (y) releases shall be on a semi-regular cadence and shall
   contain feature additions as well as rolled-up bug fixes.
 - z releases shall be for bug fixes only and will be used as-needed.


-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
For additional commands, e-mail: users-h...@qpid.apache.org



Re: [qpid java broker 6.1.x] enqueue/dequeue over HTTP?

2017-03-22 Thread Rob Godfrey
Looks like the mailing lists have been having some issues over the last
36 hours or so... so re-sending this response, which seems to have
disappeared into the ether...

On 21 March 2017 at 09:04, Rob Godfrey  wrote:

> Hi Dan,
>
> You are correct - the existing features were essentially aimed at
> management/support operations (deleting messages from a queue), with
> the enqueueing feature being added to support a particular use case
> (brokers sitting inside a firewall which only lets through HTTP, and
> does not allow WebSocket connections, needing low-volume message
> ingress from outside that firewall).
>
> In terms of a more general messaging API over REST, the questions I was
> debating with myself were to what extent the API should be "AMQP"-like
> (with the notion that we might want to try to establish some sort of
> standard way of doing AMQP over REST... e.g. first you PUT a "consumer"
> object, then you interact with that) and, somewhat related, what format the
> messages should be in (the existing API allows for messages to be sent in a
> JSON format rather than the binary AMQP message).  What are your thoughts
> on this?
>
> -- Rob
>
> On 21 March 2017 at 06:14, Dan Langford  wrote:
>
>> I am going through the HTTP API documentation and I just want to
>> confirm what I am seeing. Is there a portion of the API to enqueue
>> and dequeue messages via HTTP? I was hoping for some REST API like
>> Google's TaskQueues, which includes "lease", "delete", and "insert"
>> operations. Or maybe something like Amazon SQS, with actions
>> "ReceiveMessage", "DeleteMessage", and "SendMessage". Or Microsoft
>> ServiceBus, or ActiveMQ, or HornetQ.
>>
>> In the current QPID API I can see the ability to get the contents of
>> a message and to delete one, but these seem very administrative and
>> not intended for general messaging. Am I missing something, or is
>> this just a feature that does not exist? If it's truly needed I can
>> throw together a shim, unless any of you know of an existing one.
>>
>> Thanks so much
>>
>
>


Test email...

2017-03-22 Thread Ted Ross

Please disregard.

-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
For additional commands, e-mail: users-h...@qpid.apache.org



[ANNOUNCE] Apache Qpid for Java 6.1.2 released

2017-03-22 Thread Oleksandr Rudyy
The Apache Qpid (http://qpid.apache.org) community is pleased to announce
the immediate availability of Apache Qpid for Java 6.1.2.

This release incorporates a number of defect fixes and enhancements in
Qpid Broker for Java.

The release is available now from our website:
http://qpid.apache.org/download.html

Binaries are also available via Maven Central:
http://qpid.apache.org/maven.html

Release notes can be found at:
http://qpid.apache.org/releases/qpid-java-6.1.2/release-notes.html

The Qpid Java 6.1.2 release page can be found here:
http://qpid.apache.org/releases/qpid-java-6.1.2/index.html


Thanks to all involved,
Alex

-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
For additional commands, e-mail: users-h...@qpid.apache.org