[RESULT][VOTE] Release Qpid Broker-J 8.0.4

2021-02-17 Thread Alex Rudyy
There were 3 binding +1 votes and 1 non-binding +1 community vote. The vote
has passed.

The voting thread can be found here:
https://lists.apache.org/thread.html/r5d3d3c1671389f1130c29110b9667c0dc170231941ea7ab59f88913f%40%3Cusers.qpid.apache.org%3E

I will add the archives to the dist release repo and release the maven
staging repo shortly. The website will be updated once the artefacts
have had time to sync to the mirrors and maven central.

Kind Regards,
Alex



[RESULT][VOTE] Release Qpid Broker-J 7.1.12

2021-02-17 Thread Oleksandr Rudyy
There were 3 binding +1 votes and 1 non-binding +1 community vote. The vote
has passed.

The voting thread can be found here:
https://lists.apache.org/thread.html/r48b6fbec6380705ed880d9851cb9cffb1aa4a2c5018f8402adb7%40%3Cusers.qpid.apache.org%3E

I will add the archives to the dist release repo and release the maven
staging repo shortly. The website will be updated once the artefacts
have had time to sync to the mirrors and maven central.

Kind Regards,
Alex



Re: [dispatch-router] 1.16.0 release schedule

2021-02-17 Thread Robbie Gemmell
On Wed, 17 Feb 2021 at 16:15, Ken Giusti  wrote:
>
> On Tue, Feb 16, 2021 at 8:52 AM Robbie Gemmell 
> wrote:
>
> > On Mon, 15 Feb 2021 at 19:41, Ken Giusti  wrote:
> > >
> > > Folks,
> > >
> > > Now that Qpid Dispatch Router 1.15.0 has been released it's time to move
> > on
> > > to 1.16 development.
> > >
> > > I'd like to take this opportunity to propose that the project adopt a
> > > time-based release cycle for minor releases, starting with 1.16.0.
> > >
> > > Previous releases have been driven primarily by feature completion rather
> > > than by a pre-established public schedule.  While this approach is great
> > > for developers it doesn't help the rest of the community plan for testing
> > > and deployment of the new release.
> > >
> > > As it stands now the only formal notification provided to the community
> > is
> > > when a release candidate has been cut and the vote is announced.
> > >
> >
> > Nothing requires that of course, it's just what's been happening.
> > Heads up and progress mails could easily have been sent, and could be
> > used to provide similar notice whether working on a specific time
> > schedule or to specific feature completions going forward.
> >
> > Arguably even with time scheduled releases the desired effect
> > discussed still won't fully be realised without such mails.
> >
> >
> A very important point indeed!  I think having a schedule will "force our
> hands" by requiring more emails - reminders of upcoming milestones, to be
> specific. And a public schedule makes it fairly difficult to "go dark":
> having a milestone come and go without some sort of announcement - even if
> it's to acknowledge a slip - isn't something the community should let us
> get away with.
>

I don't think it makes too much difference overall; the mails
can/should be similar in both cases, and can/do get missed in both
cases.

>
> > > Going forward I'd like to propose a quarterly (13 week) minor release
> > > schedule which includes a feature freeze milestone and stabilization
> > > period.  The proposed 13 week release timetable would consist of the
> > > following:
> > >
> > > Development phase:  10 weeks.
> > >
> > > In this phase the master branch is open for all development - features,
> > bug
> > > fixes, test development, etc.
> > >
> > > Feature freeze and start of stabilization phase: 2 weeks.
> > >
> > > After the 10 week development phase a tag is dropped at the head of
> > > master.  This tag is the root of the 1.N.x release branch.  Development
> > for
> > > the next minor release continues on master.
> > >
> >
> > I think such a tag would need to be named to make clear what it
> > represents and that it should typically not be used, as beyond the very
> > point it is created it mainly seems useful only as a delimiter of sorts
> > for master.
> >
> > If the idea is for folks to test upcoming bits before their release,
> > it would seem they should essentially always be using the head of the
> > branch for any pre-release testing as anything else is likely already
> > stale.
> >
> >
> To be honest, my personal preference would be to branch unconditionally.  I
> didn't suggest it simply because I'm under the impression that there'd be
> strong resistance to branching.  My impression is probably wrong, so I'd
> like to propose that a branch is mandatory, even if it only contains a
> change to the version.

Ganesh already brought up my dislike of the old branching approach,
but it's a targeted dislike, as I've said. I don't dislike branches in
general, just how they were being used.

I do dislike branches being created, versions then changed on the
branch immediately and tagged, release votes completed very quickly
after, and master not having any tags at all (final release or
otherwise) but also never having any meaningful divergence from the
release branches that did, which is often what happened before. Then,
to top it off, those branches were typically never used again. The
version changes and tags could easily have been on master rather than
floating alone on a branch, and e.g. ignored by git unless you
explicitly fetched them or were using the branch. Essentially a useless
branch, traded off against the annoyance of having no tags on master.

If instead there is actually expected to be a noticeable period of
overlapping work, with real divergence planned from master before the
release (even among bug fixes; not every bug is a blocker...we can do
further releases for them, plus others will come up later), then OK,
the branch has good reason to exist. The tradeoff is still the lack of
tags on master, but it at least actually buys something.

If there are also actually going to be bug fix releases made from the
branch after that, then even better. I'd be very much in favour of
those, since not everything is a blocker, we can/should do further
releases, and I really don't believe we only find important bugs just
during the couple of days or couple of weeks before releases that are
spaced much further apart than

Re: [dispatch-router] 1.16.0 release schedule

2021-02-17 Thread Ken Giusti
On Tue, Feb 16, 2021 at 8:52 AM Robbie Gemmell 
wrote:

> On Mon, 15 Feb 2021 at 19:41, Ken Giusti  wrote:
> >
> > Folks,
> >
> > Now that Qpid Dispatch Router 1.15.0 has been released it's time to move
> on
> > to 1.16 development.
> >
> > I'd like to take this opportunity to propose that the project adopt a
> > time-based release cycle for minor releases, starting with 1.16.0.
> >
> > Previous releases have been driven primarily by feature completion rather
> > than by a pre-established public schedule.  While this approach is great
> > for developers it doesn't help the rest of the community plan for testing
> > and deployment of the new release.
> >
> > As it stands now the only formal notification provided to the community
> is
> > when a release candidate has been cut and the vote is announced.
> >
>
> Nothing requires that of course, it's just what's been happening.
> Heads up and progress mails could easily have been sent, and could be
> used to provide similar notice whether working on a specific time
> schedule or to specific feature completions going forward.
>
> Arguably even with time scheduled releases the desired effect
> discussed still won't fully be realised without such mails.
>
>
A very important point indeed!  I think having a schedule will "force our
hands" by requiring more emails - reminders of upcoming milestones, to be
specific. And a public schedule makes it fairly difficult to "go dark":
having a milestone come and go without some sort of announcement - even if
it's to acknowledge a slip - isn't something the community should let us get
away with.


> > Going forward I'd like to propose a quarterly (13 week) minor release
> > schedule which includes a feature freeze milestone and stabilization
> > period.  The proposed 13 week release timetable would consist of the
> > following:
> >
> > Development phase:  10 weeks.
> >
> > In this phase the master branch is open for all development - features,
> bug
> > fixes, test development, etc.
> >
> > Feature freeze and start of stabilization phase: 2 weeks.
> >
> > After the 10 week development phase a tag is dropped at the head of
> > master.  This tag is the root of the 1.N.x release branch.  Development
> for
> > the next minor release continues on master.
> >
>
> I think such a tag would need to be named to make clear what it
> represents and that it should typically not be used, as beyond the very
> point it is created it mainly seems useful only as a delimiter of sorts
> for master.
>
> If the idea is for folks to test upcoming bits before their release,
> it would seem they should essentially always be using the head of the
> branch for any pre-release testing as anything else is likely already
> stale.
>
>
To be honest, my personal preference would be to branch unconditionally.  I
didn't suggest it simply because I'm under the impression that there'd be
strong resistance to branching.  My impression is probably wrong, so I'd
like to propose that a branch is mandatory, even if it only contains a
change to the version.

One justification for an unconditional branch is exactly the point you
raise: it makes it much easier to automate CI to simply use the HEAD of the
branch.  And that's really the whole point of a stability phase, isn't it -
to ease and encourage the community to start testing early.
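
As a concrete sketch of the freeze mechanics being discussed (the tag
and branch names here are illustrative only, not a settled convention):

    git tag 1.16.0-freeze master          # delimiter tag at the head of master
    git branch 1.16.x 1.16.0-freeze       # root of the 1.N.x release branch
    git push origin 1.16.0-freeze 1.16.x  # master stays open for the next minor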



> > The goal of this phase is to allow time for the community to start
> testing
> > the upcoming release and report issues. It gives developers time to land
> > bug fixes without the pressure of holding up the release vote.
> >
> > To keep the release branch as stable as possible only bug fixes are
> > allowed. Fixes destined for the release branch should be made on master
> if
> > applicable and cherry picked onto the release branch once they've proved
> > stable.
> >
> > Any features not completed when the freeze occurs will be rescheduled  to
> > the next minor release.  This may require removing or disabling
> incomplete
> > features on the release branch - specifics will be determined on a case
> by
> > case basis.
> >
> > Release Phase: 1 week.
> >
> > At the end of the stabilization phase a release candidate is made from
> the
> > tip of the release branch and the vote is called.  Failure to pass the
> vote
> > may cause this phase to slip, but hopefully the stabilization phase will
> > make this unlikely.
> >
> > Once the release is done, fix releases (e.g. 1.16.1, 1.16.2) are made
> from
> > the release branch as needed until the next minor release becomes
> available.
> >
> > Thoughts, opinions?
> >
>
> To rephrase, this essentially seems to be a 10-week feature release
> frequency proposal, with 3 further weeks where two streams overlap
> their different stages, giving roughly 5 feature releases a year. That
> is much the same overall as the rate of recent years, just with the
> aim of a fixed 10-week cadence vs a more variable spread of 1 to
> 4 months.
>
> Trying to ensure releases actually occur by having an upper

Re: Dispatch Router: Wow. Large message test with different buffer sizes

2021-02-17 Thread Robbie Gemmell
On Wed, 17 Feb 2021 at 13:53, Michael Goulish  wrote:
>
> Robbie -- thanks for questions!
>
>
> *The senders are noted as having a 10ms delay between sends, how exactly
> is that achieved?*
>
> My client (both sender and receiver are same program, different flags) is
> implemented in C using the Proactor interface.  When I run the sender
> 'throttled' here is what happens:
>
>   * When the sender gets a FLOW event, it calls pn_proactor_set_timeout()
> to set a timeout of N milliseconds, where N is the integer argument to the
> command line 'throttle' flag.
>
>   * N milliseconds later, the sender gets the PN_PROACTOR_TIMEOUT event.
> Then I 'wake' the connection.
>
>   * When the sender gets the WAKE event  -- if it has not already sent all
> its messages -- it sends one message -- and sets the timer again to the
> same value.
>
> So, if I set a value of 10 msec for the throttle, the sender will send just
> a little less than 100 messages per second.  A little less because it takes
> a little bit of time (very little) to actually send one message.
>
>

Ok, mainly I was just looking to tease out that it wasn't e.g. pausing
the reactor thread and effectively batching up sends into a single
later IO.

>
> *Do the receivers receive flat out? *
>
> Yes, there is no form of throttling on the receivers.
>
>
> *Is that 1000 credit window from the receiver to router, or from the router
> to the sender, or both?*
>
> Credit is granted by the receiver and used by the sender. When the sender
> is *not* throttled, it just sends messages as fast as ever it can, until
> credit is exhausted.
>
> However I do *not* think that the router is able to simply pass on the
> number that it got from the receiver all the way back to the sender. I
> think the credit number that the sender gets is probably determined only by
> the configured  'capacity' of the router listener it is talking to.
>

Right, this is why I asked. Unless you are using link-routing,
what the client receiver grants has no bearing on what the client
sender gets from the router in between them.

So your receiver is able to run flat out, issuing 1000 credits, while
the sender is throttled to 100 msg/sec and only gets whatever credit
the router gives it (250 was the last default I think I recall?).
Seems very tilted toward the receivers being faster, and it seems like
they should always be able to keep up if the router does.

>
> *was there any discernible difference in the 512b test alone at the point
> the receive throughput looks to reduce?*
>
> I didn't have the eyes to see anything changing at that time. I didn't know
> there was that weird inflection point until I graphed the data -- I assumed
> it was just gradually slowing down.
>
> I fully expect we are going to decide to standardize on a larger buffer
> size -- probably 2K -- depending on tests that I am about to do on AMQP.
> Once I do the AMQP tests to support that decision I hope to pursue that
> interesting little inflection point fiercely.
>

I think it's worth pursuing.

The test doesn't seem like an especially overtaxing scenario to me,
and indeed everything apparently handles it fine for >200sec, with the
graphed throughput suggesting there should be no real backlog, until
things suddenly changed and throughput dropped. It's not obvious to me
why the buffer size should make such a distinct (or really, any)
difference in that case, and might its doing so suggest something
interesting? I could understand it reducing CPU usage due to doing
less work, but not so much the rest unless it's maxed out already. If
not, perhaps increasing the size is just covering something up, e.g.
somehow delaying the same dropoff until a later unknown point, or
requiring some more intense load level to get into it.

Ted is right that using session flow control in addition would be
useful, but as each sender in this test maxes out at 20MB/s based on
their delayed sends, and the receivers should easily outstrip them,
I'm not sure I would expect it to make a difference in this test
unless something else is already awry.
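
For reference, with the proton C API that would be along the lines of
this sketch on the receiving side (the capacity value is purely
illustrative):

    /* Cap how many incoming bytes proton will buffer for the session,
     * so the session window closes and back-pressures the sender. */
    pn_session_t *session = pn_session(connection);
    pn_session_set_incoming_capacity(session, 1024 * 1024);  /* ~1MB */
    pn_session_open(session);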



>
>
>
> On Mon, Feb 15, 2021 at 12:21 PM Robbie Gemmell 
> wrote:
>
> > On Sat, 13 Feb 2021 at 16:40, Ted Ross  wrote:
> > >
> > > On Fri, Feb 12, 2021 at 1:47 PM Michael Goulish 
> > wrote:
> > >
> > > > Well, *this* certainly made a difference!
> > > > I tried this test:
> > > >
> > > > *message size:*  20 bytes
> > > > *client-pairs:*  10
> > > > *sender pause between messages:* 10 msec
> > > > *messages per sender:*   10,000
> > > > *credit window:* 1000
> > > >
> > > >
> > > >
> > > >
> > > >   *Results:*
> > > >
> > > >                   router buffer size
> > > >                512 bytes       4K bytes
> > > >   --------------------------------------
> > > >    CPU         517%            102%
> > > >    Mem         711 MB          59 MB
> > > >    Latency     26.9 *seconds*  2.486 *msec*

Re: qpid-jms-client-56.0 - Prod issue - consumer stopped receiving message without any connection failure and detached receiver error

2021-02-17 Thread Robbie Gemmell
If you have exceptions then please provide the stacktraces; they may
contain helpful context which the messages themselves don't
(particularly when the exception message appears to be that the peer
did not specify an error message).

There haven't been many significant changes since 0.45.0 that I would
expect could alter behaviour in this space. Most or all of those were
made to address bugs or change behaviours identified from your reports,
both with 0.45.0 and subsequent releases.

It's difficult to reason much at all from the information available, however...

There are a variety of ways I might not expect to receive an exception
in the listener upon detaching, such as if the error was already
reported (something did get reported, though it's unclear what from the
minimal detail... posting the rest may or may not make it clearer), or
the resource was previously / is currently being locally closed, or the
parent session / connection was closed (which naturally implicitly
closes their children, but doesn't provoke a storm of related
exceptions), and I'm sure there are others.

You may also have blocked the thread again somehow, as has happened
before in your use case, perhaps stopping further issues being
communicated.

You also previously reported unexpected 'disabling subscription'
behaviour as a client bug, whereby 0.56.0 reported the very exceptions
you are now indicating it didn't, when really it was the server that
just didn't do what you expected - perhaps that is true here as well?

We do have a test that shows the client fires the exception listener
when a consumer with a listener is remotely closed. It's certainly
possible your somewhat more elaborate use case is getting into a corner
case, but I don't immediately see one, and again not much has changed
in those areas.

It's of course worth adding the obvious, that while a consumer not
receiving messages could indeed indicate an issue with it, it doesn't
necessarily. It may just not have been given any.


Robbie

On Wed, 17 Feb 2021 at 08:52, akabhishek1 wrote:
>
> Hi Team,
>
> We recently upgraded qpid-jms-client to 0.56.0. We are using qpid-jms-client
> to receive messages from ServiceBus. We need your urgent support on this
> issue, because we are facing it in our PROD environment.
>
> Issue - Some consumers are not receiving messages from the queue, without
> any connection failure or detached receiver error.
>
> JMS Infrastructure -
> JMS connection - 1
> Exception listener - 1
> 7 destinations (queue/topic) - every destination has 4 consumers
> 28 consumers - every consumer has a separate JMS session and MessageConsumer
>
> Scenario -
> 1. Received only the below error in the "onException" block -
>
> Connection exception from 'ServiceBus_Connector', isConnectionActive 'true',
> Reason 'Unknown error from remote peer' - 'class javax.jms.JMSException',
> cause 'org.apache.qpid.jms.provider.ProviderException: Unknown error from
> remote peer'
>
> 2. As the connection is active, we didn't re-establish the connection. We
> checked the connection status with the isConnectionActive method below.
> 3. Checked the receiver status with the isConsumerActive method below and
> found that only the receiver was closed, so we re-established the receiver
> and this consumer is consuming messages perfectly.
>
> Impact - 1. One consumer is receiving messages perfectly
>  2. Six consumers do not throw any connection or detached errors
>  3. These six consumers stopped receiving messages from the queue
>
> Could you please urgently suggest your opinion on this issue?
>
>
> public boolean isConnectionActive(Connection connection) {
>     boolean connectionStatus = false;
>     try {
>         // getClientID() throws once the connection is no longer usable
>         String clientID = connection.getClientID();
>         connectionStatus = true;
>     } catch (Exception e) {
>         connectionStatus = false;
>     }
>     return connectionStatus;
> }
>
> public boolean isConsumerActive(MessageConsumer consumer) {
>     boolean isConsumerActive = false;
>     try {
>         // getMessageListener() throws if the consumer has been closed
>         consumer.getMessageListener();
>         isConsumerActive = true;
>     } catch (Exception e) {
>         isConsumerActive = false;
>     }
>     return isConsumerActive;
> }
>
>
> Regards,
> Abhishek Kumar
>
>
>




Re: Dispatch Router: Wow. Large message test with different buffer sizes

2021-02-17 Thread Michael Goulish
Robbie -- thanks for questions!


*The senders are noted as having a 10ms delay between sends, how exactly
is that achieved?*

My client (both sender and receiver are same program, different flags) is
implemented in C using the Proactor interface.  When I run the sender
'throttled' here is what happens:

  * When the sender gets a FLOW event, it calls pn_proactor_set_timeout()
to set a timeout of N milliseconds, where N is the integer argument to the
command line 'throttle' flag.

  * N milliseconds later, the sender gets the PN_PROACTOR_TIMEOUT event.
Then I 'wake' the connection.

  * When the sender gets the WAKE event  -- if it has not already sent all
its messages -- it sends one message -- and sets the timer again to the
same value.

So, if I set a value of 10 msec for the throttle, the sender will send just
a little less than 100 messages per second.  A little less because it takes
a little bit of time (very little) to actually send one message.
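
In outline, the event handling looks something like this sketch (the
app struct and send_one_message are illustrative names, not the actual
client code):

    #include <proton/event.h>
    #include <proton/link.h>
    #include <proton/proactor.h>

    typedef struct {
        pn_proactor_t   *proactor;
        pn_connection_t *connection;
        pn_link_t       *sender;
        int              throttle_ms;  /* N, from the 'throttle' flag */
        int              remaining;    /* messages still to send */
    } app_t;

    void send_one_message(app_t *app);  /* encode + send a single message */

    static void on_event(app_t *app, pn_event_t *event) {
        switch (pn_event_type(event)) {
        case PN_LINK_FLOW:
            /* Credit arrived: arm the timer instead of sending now. */
            pn_proactor_set_timeout(app->proactor, app->throttle_ms);
            break;
        case PN_PROACTOR_TIMEOUT:
            /* N msec later: wake the connection. */
            pn_connection_wake(app->connection);
            break;
        case PN_CONNECTION_WAKE:
            /* Send one message, then set the timer again. */
            if (app->remaining > 0 && pn_link_credit(app->sender) > 0) {
                send_one_message(app);
                app->remaining--;
                pn_proactor_set_timeout(app->proactor, app->throttle_ms);
            }
            break;
        default:
            break;
        }
    }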



*Do the receivers receive flat out? *

Yes, there is no form of throttling on the receivers.


*Is that 1000 credit window from the receiver to router, or from the router
to the sender, or both?*

Credit is granted by the receiver and used by the sender. When the sender
is *not* throttled, it just sends messages as fast as ever it can, until
credit is exhausted.

However I do *not* think that the router is able to simply pass on the
number that it got from the receiver all the way back to the sender. I
think the credit number that the sender gets is probably determined only by
the configured  'capacity' of the router listener it is talking to.
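
If so, that would be the linkCapacity setting on the listener in
qdrouterd.conf - something like the sketch below (values illustrative;
250 has been mentioned in this thread as a recent default):

    listener {
        host: 0.0.0.0
        port: amqp
        role: normal
        # credit window the router grants to attached senders
        linkCapacity: 250
    }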


*was there any discernible difference in the 512b test alone at the point
the receive throughput looks to reduce?*

I didn't have the eyes to see anything changing at that time. I didn't know
there was that weird inflection point until I graphed the data -- I assumed
it was just gradually slowing down.

I fully expect we are going to decide to standardize on a larger buffer
size -- probably 2K -- depending on tests that I am about to do on AMQP.
Once I do the AMQP tests to support that decision I hope to pursue that
interesting little inflection point fiercely.




On Mon, Feb 15, 2021 at 12:21 PM Robbie Gemmell 
wrote:

> On Sat, 13 Feb 2021 at 16:40, Ted Ross  wrote:
> >
> > On Fri, Feb 12, 2021 at 1:47 PM Michael Goulish 
> wrote:
> >
> > > Well, *this* certainly made a difference!
> > > I tried this test:
> > >
> > > *message size:*  20 bytes
> > > *client-pairs:*  10
> > > *sender pause between messages:* 10 msec
> > > *messages per sender:*   10,000
> > > *credit window:* 1000
> > >
> > >
> > >
> > >
> > >   *Results:*
> > >
> > >                   router buffer size
> > >                512 bytes       4K bytes
> > >   --------------------------------------
> > >    CPU         517%            102%
> > >    Mem         711 MB          59 MB
> > >    Latency     26.9 *seconds*  2.486 *msec*
> > >
> > >
> > > So with the large messages and our normal buffer size of 1/2 K, the
> router
> > > just got overwhelmed. What I recorded was average memory usage, but
> looking
> > > at the time sequence I see that its memory kept increasing steadily
> until
> > > the end of the test.
> > >
> >
> > With the large messages, the credit window is not sufficient to protect
> the
> > memory of the router.  I think this test needs to use a limited session
> > window as well.  This will put back-pressure on the senders much earlier
> in
> > the test.  With 200Kbyte messages x 1000 credits x 10 senders, there's a
> > theoretical maximum of 2Gig of proton buffer memory that can be consumed
> > before the router core ever moves any data.  It's interesting that in the
> > 4K-buffer case, the router core keeps up with the flow and in the
> 512-byte
> > case, it does not.
>
> The senders are noted as having a 10ms delay between sends, how
> exactly is that achieved? Do the receivers receive flat out? Is that
> 1000 credit window from the receiver to router, or from the router to
> the sender, or both?
>
> If the senders are slow compared to the receivers, and only 200MB/sec
> max is actually hitting the router as a result of the governed sends,
> I'm somewhat surprised the router would ever seem to accumulate as
> much data (noted as an average; any idea what was the peak?) in such a
> test unless something odd/interesting starts happening to it at the
> smaller buffer size after some time. From the other mail it seems it
> all plays nicely for > 200 seconds and only then starts to behave
> differently, since delivery speed over time appears as expected from
> the governed sends meaning there should be no accumulation, and then
> it noticeably reduces meaning there must be some if the sends
> maintained their rate. There is a clear disparity in the CPU result
> between the two tests; was there any discernible difference in the
> 512b test alone 

Re: qpid-jms-client-56.0 - Prod issue - consumer stopped receiving message without any connection failure and detached receiver error

2021-02-17 Thread akabhishek1
Hi Team,

After observing this issue for 12 hours, we performed one activity:

- We disabled the one subscription which should throw a detached error
on the "onException" block, but we haven't received any error.

It means the link got detached from the broker without informing the
connection exception listener.

We haven't observed this type of issue with 0.45.0 in more than a year.

Could you please take a look at this issue? This looks like a defect.

Regards,
Abhishek Kumar







Re: [VOTE] Release Qpid Broker-J 7.1.12

2021-02-17 Thread Dedeepya Tunga
+1
I have verified the below:
- verified the signature and checksum files
- started the broker using the binary archive, created a queue through
the web management console
- posted a few messages using the new JMS client
- verified the REST API

Regards,
Dedeepya T
On Tue, 16 Feb 2021 at 7:46, Oleksandr Rudyy wrote:

+1

* Verified signatures and checksums
* Built successfully from source bundle
* Ran successfully unit and integration tests using default profile and JDK8
* Started broker
* Created queue via web management console
* Published and received test messages using JMS client 0.56.0
* Verified that broker log timestamp is reported in new format

On Mon, 15 Feb 2021 at 00:02, Oleksandr Rudyy  wrote:
>
> Hi folks,
>
> I built release artefacts for Qpid Broker-J version 7.1.12 RC1.
> Please, give them a test out and vote accordingly.
>
> The source and binary archives can be found at:
> https://dist.apache.org/repos/dist/dev/qpid/broker-j/7.1.12-rc1/
>
> The maven artifacts are also staged at:
> https://repository.apache.org/content/repositories/orgapacheqpid-1214
>
> The new version brings a number of improvements and bug fixes.
> You can find the full list of JIRAs included into the release here:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310520&version=12349619
>
> Kind Regards,
> Alex
>
> P.S. For testing of maven broker staging repo artefacts, please add the
> staging repo to your project pom as below:
>
> <repositories>
>   <repository>
>     <id>staging</id>
>     <url>https://repository.apache.org/content/repositories/orgapacheqpid-1214</url>
>   </repository>
> </repositories>
>



Re: [VOTE] Release Qpid Broker-J 8.0.4

2021-02-17 Thread Dedeepya Tunga
+1
I have verified the below:
- verified the signature and checksum files
- started the broker using the binary archive, created a queue through
the web management console
- posted a few messages using the new JMS client
- verified the REST API

Regards,
Dedeepya T

On Tue, 16 Feb 2021 at 7:45, Oleksandr Rudyy wrote:

+1

* Verified signatures and checksums
* Built successfully from source bundle
* Ran successfully unit and integration tests using default profile and JDK8
* Started broker
* Created queue via web management console
* Published and received test messages using JMS client 0.56.0
* Verified that broker log timestamp is reported in new format

On Mon, 15 Feb 2021 at 00:36, Oleksandr Rudyy  wrote:
>
> Hi all,
>
> I built release artefacts for Qpid Broker-J version 8.0.4 RC1.
> Please, give them a test out and vote accordingly.
>
> The source and binary archives can be found at:
> https://dist.apache.org/repos/dist/dev/qpid/broker-j/8.0.4-rc1/
>
> The maven artifacts are also staged at:
> https://repository.apache.org/content/repositories/orgapacheqpid-1215
>
> The new version brings a number of improvements and bug fixes.
> You can find the full list of JIRAs included into the release here:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310520&version=12349598
>
> Kind Regards,
> Alex
>
> P.S. For testing of maven broker staging repo artefacts, please add the
> staging repo to your project pom as below:
>
> <repositories>
>   <repository>
>     <id>staging</id>
>     <url>https://repository.apache.org/content/repositories/orgapacheqpid-1215</url>
>   </repository>
> </repositories>
>



qpid-jms-client-56.0 - Prod issue - consumer stopped receiving message without any connection failure and detached receiver error

2021-02-17 Thread akabhishek1
Hi Team,

We recently upgraded qpid-jms-client to 0.56.0. We are using qpid-jms-client
to receive messages from ServiceBus. We need your urgent support on this
issue, because we are facing it in our PROD environment.

Issue - Some consumers are not receiving messages from the queue, without
any connection failure or detached receiver error.

JMS Infrastructure -
JMS connection - 1
Exception listener - 1
7 destinations (queue/topic) - every destination has 4 consumers
28 consumers - every consumer has a separate JMS session and MessageConsumer

Scenario -
1. Received only the below error in the "onException" block -

Connection exception from 'ServiceBus_Connector', isConnectionActive 'true',
Reason 'Unknown error from remote peer' - 'class javax.jms.JMSException',
cause 'org.apache.qpid.jms.provider.ProviderException: Unknown error from
remote peer'

2. As the connection is active, we didn't re-establish the connection. We
checked the connection status with the isConnectionActive method below.
3. Checked the receiver status with the isConsumerActive method below and
found that only the receiver was closed, so we re-established the receiver
and this consumer is consuming messages perfectly.

Impact - 1. One consumer is receiving messages perfectly
 2. Six consumers do not throw any connection or detached errors
 3. These six consumers stopped receiving messages from the queue

Could you please urgently suggest your opinion on this issue?


public boolean isConnectionActive(Connection connection) {
    boolean connectionStatus = false;
    try {
        // getClientID() throws once the connection is no longer usable
        String clientID = connection.getClientID();
        connectionStatus = true;
    } catch (Exception e) {
        connectionStatus = false;
    }
    return connectionStatus;
}

public boolean isConsumerActive(MessageConsumer consumer) {
    boolean isConsumerActive = false;
    try {
        // getMessageListener() throws if the consumer has been closed
        consumer.getMessageListener();
        isConsumerActive = true;
    } catch (Exception e) {
        isConsumerActive = false;
    }
    return isConsumerActive;
}


Regards,
Abhishek Kumar



