RE: Proton-j Reactor - Receiver

2016-04-05 Thread Garlapati Sreeram Kumar
Hello Robbie,

We are using proton-j client with SSL and many of our customers are hitting 
this issue.
Here are my findings after debugging through this issue:

-  When incoming bytes arrive on the SocketChannel, the proton-j client 
is signaled by NIO and, as a result, unwinds the transport stack: each 
TransportInput implementation performs its task on the read bytes and hands 
off to the next layer in the stack (transport to ssl, ssl to frameparser, 
etc.).

-  While unwinding that stack, SimpleSSLTransportWrapper.unwrapInput 
reads (16k bytes) from _inputBuffer and writes the decoded bytes to 
_decodedInputBuffer, an intermediate buffer.

-  It then flushes bytes from the intermediate buffer to the next layer and 
invokes _underlyingInput.process() to signal that there are bytes in its 
input buffer.

-  If the underlyingInput (let's say the FrameParser) has a small buffer, 
say 4k, then decodedInputBuffer is left holding 12k bytes, and over time 
this accrues.

The fix here is to keep flushing decodedInputBuffer to the next transport in 
the network stack and calling _underlyingInput.process() until 
decodedInputBuffer is empty. Here's the pull request: 
https://github.com/apache/qpid-proton/pull/73
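For illustration, the flush-until-empty loop could look roughly like the
sketch below. This is a minimal stand-alone sketch, not the actual proton-j
source: the NextLayer interface and the pour helper are stand-ins for
proton-j's TransportInput tail()/process() plumbing.

    import java.nio.ByteBuffer;

    // Stand-in for the next transport layer: tail() exposes the buffer it
    // reads from, process() tells it new bytes are available there.
    interface NextLayer {
        ByteBuffer tail();
        void process();
    }

    final class DecodedInputFlusher {
        // Moves as many bytes as fit from src into dst; returns the count.
        static int pour(ByteBuffer src, ByteBuffer dst) {
            int count = Math.min(src.remaining(), dst.remaining());
            for (int i = 0; i < count; i++) {
                dst.put(src.get());
            }
            return count;
        }

        // Keep handing decoded bytes to the next layer until none remain,
        // rather than flushing once and leaving a remainder behind.
        static void flush(ByteBuffer decodedInputBuffer, NextLayer underlyingInput) {
            decodedInputBuffer.flip();
            while (decodedInputBuffer.hasRemaining()) {
                int moved = pour(decodedInputBuffer, underlyingInput.tail());
                if (moved == 0) {
                    break; // the next layer's buffer is full and not draining
                }
                underlyingInput.process(); // let it consume what it just got
            }
            decodedInputBuffer.compact();
        }
    }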

Please let me know if we need to do more to fix this issue comprehensively.

Thx!
Sree

From: Robbie Gemmell
Sent: Thursday, March 31, 2016 9:19 AM
To: proton@qpid.apache.org
Subject: Re: Proton-j Reactor - Receiver

On 31 March 2016 at 04:32, Garlapati Sreeram Kumar  wrote:
> Hello All!
>
> I am using the Proton-J reactor API (version 0.12.0) for receiving AMQP 
> messages (from Microsoft Azure Event Hubs): 
> https://github.com/Azure/azure-event-hubs/blob/master/java/azure-eventhubs/src/main/java/com/microsoft/azure/servicebus/amqp/ReceiveLinkHandler.java#L124
>
> I am using the onDelivery(Event) callback to receive messages. I really 
> appreciate your help with this issue/behavior:
>
> ISSUE: I noticed that the last few messages on the queue are not being 
> issued to the onDelivery(Event) callback by the Reactor.
> - Then, I went ahead and enabled proton frame tracing (PN_TRACE_FRM=1) and 
> discovered that the Transfer frames corresponding to those messages were not 
> even delivered to the client. Then, I looked at our service's proton frames 
> and can clearly see that they are being delivered by the service, and other 
> AMQP clients (for ex: the .NET client) can see the Transfer frames.
> - Is this a known behavior?
> Does the Reactor code path disable Nagle on the underlying socket – could 
> this be related? Or is there any other configuration that we should be 
> setting to see all Transfer frames received on the socket?
>
> Please advise.
>
> Thanks a lot in Advance!
> Sree
>

I'm not aware of anyone else reporting anything like that. I don't see
anything in the code suggesting the reactor sets TCP_NODELAY true on
the socket, but I wouldn't think that should matter here.

The frame trace logging is done after the bytes are given to the
Transport and are processed into frames, so a lack of logging could
suggest various things: the bytes didn't actually get there, they
weren't processed, something went wrong before they did/were, something
went wrong decoding them, etc. It's hard to say much more without more
info.

Robbie
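
(For readers unfamiliar with the reactor API: a delivery handler of the kind
Sree describes might look like the minimal sketch below, assuming proton-j
0.12 APIs; the class name and the settle-immediately behaviour are
illustrative, not the Event Hubs client's actual code.)

    import org.apache.qpid.proton.engine.BaseHandler;
    import org.apache.qpid.proton.engine.Delivery;
    import org.apache.qpid.proton.engine.Event;
    import org.apache.qpid.proton.engine.Receiver;

    public class SimpleReceiveHandler extends BaseHandler {
        @Override
        public void onDelivery(Event event) {
            Delivery delivery = event.getDelivery();
            Receiver receiver = (Receiver) delivery.getLink();
            // Only consume deliveries that have fully arrived.
            if (delivery.isReadable() && !delivery.isPartial()) {
                byte[] encoded = new byte[delivery.pending()];
                receiver.recv(encoded, 0, encoded.length);
                receiver.advance();
                delivery.settle();
                // Decode the AMQP message from 'encoded' here.
            }
        }
    }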


[GitHub] qpid-proton pull request: proton-j: SSL decode buffer implementati...

2016-04-05 Thread SreeramGarlapati
GitHub user SreeramGarlapati opened a pull request:

https://github.com/apache/qpid-proton/pull/73

proton-j: SSL decode buffer implementation doesn't flush all bytes to the 
next transport buffer (_underlyingInput)

Actual issue/scenario hit by Microsoft Azure EventHubs:
We have a pattern where customers send messages in a burst to our queue, 
stop sending, and then wait for all of them to be received. Because of this 
issue, many messages were stuck in the SSL decode buffer and were only picked 
up when new messages arrived.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SreeramGarlapati/qpid-proton sg.recvstuck

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/qpid-proton/pull/73.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #73


commit 2ceb434e1720917e788f55f8439095cb1be0fb78
Author: Sreeram Garlapati 
Date:   2016-04-05T04:34:01Z

JavaClient: Fix stuck issue on Receiver
Underlying Proton-J Issue: Messages were stuck in the SSL Decode Buffer and 
were only picked up when new messages arrived.






Re: Amqp-ConnProp: user-agent

2016-04-05 Thread Robbie Gemmell
Each component simply sets the "product" and "version" entries in its
connection properties map, which uses AMQP symbol keys, e.g.
Symbol.valueOf("product") in the case of proton-j.

For example, the JMS client does it here:
https://github.com/apache/qpid-jms/blob/0.8.0/qpid-jms-client/src/main/java/org/apache/qpid/jms/provider/amqp/builders/AmqpConnectionBuilder.java#L106
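
In proton-j terms that amounts to something like the following sketch (the
product/version values here are placeholders):

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.qpid.proton.amqp.Symbol;
    import org.apache.qpid.proton.engine.Connection;

    final class ClientDetails {
        // Advertise product/version in the connection properties map,
        // keyed by AMQP symbols, before the connection is opened.
        static void apply(Connection connection) {
            Map<Symbol, Object> props = new HashMap<Symbol, Object>();
            props.put(Symbol.valueOf("product"), "my-client");
            props.put(Symbol.valueOf("version"), "1.0.0");
            connection.setProperties(props);
        }
    }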

Robbie

On 5 April 2016 at 18:22, Garlapati Sreeram Kumar  wrote:
> Can you please point me to those Symbols? I would love to reuse those 
> rather than re-invent them …
>
> Thx!
> Sree
>
> From: Robbie Gemmell
> Sent: Tuesday, April 5, 2016 10:14 AM
> To: proton@qpid.apache.org
> Subject: Re: Amqp-ConnProp: user-agent
>
> Many of the Qpid components advertise "product" and "version"
> connection properties containing strings with the appropriate details.
> Those [Symbol] keys were chosen because they happened to be names
> specified in the older AMQP 0-x specs, and so matched what some of the
> components were already using.
>
> Robbie
>
> On 5 April 2016 at 18:02, Garlapati Sreeram Kumar  wrote:
>> Hello Folks!
>>
>> In our (Microsoft Azure EventHubs) java library, we are looking for a way 
>> to identify which AmqpClient the call originated from.
>> We want to use this information to:
>> 1) Identify the client stack while troubleshooting service issues - for 
>> ex: a bug in an implementation of the AmqpProtocol stack
>> 2) Collect telemetry on how many different clients are talking to our 
>> service (which will open up data insights in the future, for ex: investing 
>> our efforts in the right client ecosystem, etc.)
>>
>> To achieve this, we are planning to introduce a property (on AmqpConnection 
>> - [AMQP-CONNPROP]) to identify the UserAgent.
>> - We chose “Connection properties” as this would be one-time per connection 
>> and will not add extra overhead per message.
>>
>> I believe this would be a common requirement for all AMQP implementations. 
>> Are you folks aware of any proposal to add a standard property to the AMQP 
>> protocol specification, or is there already one that you use for this 
>> purpose in any of the clients, which we could re-use and collectively 
>> propose to the protocol spec?
>>
>> Thanks a lot for the wonderful collaboration!
>> Sree


Re: Amqp-ConnProp: user-agent

2016-04-05 Thread Justin Ross
Here's something related.  It doesn't amount to a recommendation yet, but
it shows you what some of the brokers are doing.

https://cwiki.apache.org/confluence/display/qpid/AMQP+connection+properties

Previous discussion in this area:

http://qpid.2158936.n2.nabble.com/AMQP-1-0-connection-property-names-td7608091.html

http://qpid.2158936.n2.nabble.com/AMQP-1-0-connection-property-names-td7608236.html

Justin

On Tue, Apr 5, 2016 at 10:22 AM, Garlapati Sreeram Kumar wrote:

> Can you please point me to those Symbols? I would love to reuse those
> rather than re-invent them …
>
> Thx!
> Sree
>
> From: Robbie Gemmell
> Sent: Tuesday, April 5, 2016 10:14 AM
> To: proton@qpid.apache.org
> Subject: Re: Amqp-ConnProp: user-agent
>
> Many of the Qpid components advertise "product" and "version"
> connection properties containing strings with the appropriate details.
> Those [Symbol] keys were chosen because they happened to be names
> specified in the older AMQP 0-x specs, and so matched what some of the
> components were already using.
>
> Robbie
>
> On 5 April 2016 at 18:02, Garlapati Sreeram Kumar wrote:
> > Hello Folks!
> >
> > In our (Microsoft Azure EventHubs) java library, we are looking for a
> > way to identify which AmqpClient the call originated from.
> > We want to use this information to:
> > 1) Identify the client stack while troubleshooting service issues - for
> > ex: a bug in an implementation of the AmqpProtocol stack
> > 2) Collect telemetry on how many different clients are talking to our
> > service (which will open up data insights in the future, for ex:
> > investing our efforts in the right client ecosystem, etc.)
> >
> > To achieve this, we are planning to introduce a property (on
> > AmqpConnection - [AMQP-CONNPROP]) to identify the UserAgent.
> > - We chose “Connection properties” as this would be one-time per
> > connection and will not add extra overhead per message.
> >
> > I believe this would be a common requirement for all AMQP
> > implementations. Are you folks aware of any proposal to add a standard
> > property to the AMQP protocol specification, or is there already one
> > that you use for this purpose in any of the clients, which we could
> > re-use and collectively propose to the protocol spec?
> >
> > Thanks a lot for the wonderful collaboration!
> > Sree
>


RE: Amqp-ConnProp: user-agent

2016-04-05 Thread Garlapati Sreeram Kumar
Can you please point me to those Symbols? I would love to reuse those rather 
than re-invent them …

Thx!
Sree

From: Robbie Gemmell
Sent: Tuesday, April 5, 2016 10:14 AM
To: proton@qpid.apache.org
Subject: Re: Amqp-ConnProp: user-agent

Many of the Qpid components advertise "product" and "version"
connection properties containing strings with the appropriate details.
Those [Symbol] keys were chosen because they happened to be names
specified in the older AMQP 0-x specs, and so matched what some of the
components were already using.

Robbie

On 5 April 2016 at 18:02, Garlapati Sreeram Kumar  wrote:
> Hello Folks!
>
> In our (Microsoft Azure EventHubs) java library, we are looking for a way 
> to identify which AmqpClient the call originated from.
> We want to use this information to:
> 1) Identify the client stack while troubleshooting service issues - for ex: 
> a bug in an implementation of the AmqpProtocol stack
> 2) Collect telemetry on how many different clients are talking to our service 
> (which will open up data insights in the future, for ex: investing our 
> efforts in the right client ecosystem, etc.)
>
> To achieve this, we are planning to introduce a property (on AmqpConnection 
> - [AMQP-CONNPROP]) to identify the UserAgent.
> - We chose “Connection properties” as this would be one-time per connection 
> and will not add extra overhead per message.
>
> I believe this would be a common requirement for all AMQP implementations. 
> Are you folks aware of any proposal to add a standard property to the AMQP 
> protocol specification, or is there already one that you use for this 
> purpose in any of the clients, which we could re-use and collectively 
> propose to the protocol spec?
>
> Thanks a lot for the wonderful collaboration!
> Sree


Re: Amqp-ConnProp: user-agent

2016-04-05 Thread Robbie Gemmell
Many of the Qpid components advertise "product" and "version"
connection properties containing strings with the appropriate details.
Those [Symbol] keys were chosen because they happened to be names
specified in the older AMQP 0-x specs, and so matched what some of the
components were already using.

Robbie

On 5 April 2016 at 18:02, Garlapati Sreeram Kumar  wrote:
> Hello Folks!
>
> In our (Microsoft Azure EventHubs) java library, we are looking for a way 
> to identify which AmqpClient the call originated from.
> We want to use this information to:
> 1) Identify the client stack while troubleshooting service issues - for ex: 
> a bug in an implementation of the AmqpProtocol stack
> 2) Collect telemetry on how many different clients are talking to our service 
> (which will open up data insights in the future, for ex: investing our 
> efforts in the right client ecosystem, etc.)
>
> To achieve this, we are planning to introduce a property (on AmqpConnection 
> - [AMQP-CONNPROP]) to identify the UserAgent.
> - We chose “Connection properties” as this would be one-time per connection 
> and will not add extra overhead per message.
>
> I believe this would be a common requirement for all AMQP implementations. 
> Are you folks aware of any proposal to add a standard property to the AMQP 
> protocol specification, or is there already one that you use for this 
> purpose in any of the clients, which we could re-use and collectively 
> propose to the protocol spec?
>
> Thanks a lot for the wonderful collaboration!
> Sree


Amqp-ConnProp: user-agent

2016-04-05 Thread Garlapati Sreeram Kumar
Hello Folks!

In our (Microsoft Azure EventHubs) java library, we are looking for a way to 
identify which AmqpClient the call originated from.
We want to use this information to:
1) Identify the client stack while troubleshooting service issues - for ex: a 
bug in an implementation of the AmqpProtocol stack
2) Collect telemetry on how many different clients are talking to our service 
(which will open up data insights in the future, for ex: investing our 
efforts in the right client ecosystem, etc.)

To achieve this, we are planning to introduce a property (on AmqpConnection - 
[AMQP-CONNPROP]) to identify the UserAgent.
- We chose “Connection properties” as this would be one-time per connection 
and will not add extra overhead per message.

I believe this would be a common requirement for all AMQP implementations. Are 
you folks aware of any proposal to add a standard property to the AMQP 
protocol specification, or is there already one that you use for this purpose 
in any of the clients, which we could re-use and collectively propose to the 
protocol spec?

Thanks a lot for the wonderful collaboration!
Sree


Re: Deadlock in pn_messenger_send when using more than 2 publishers?

2016-04-05 Thread Frank Quinn
Fyi, just raised https://issues.apache.org/jira/browse/PROTON-1169 for this.

On Sat, Apr 2, 2016 at 5:14 PM, Frank Quinn  wrote:

> Hi Folks,
>
> I think I have found an issue with qpid proton when running in
> point-to-point mode. If a single recv thread is running and 3 concurrent
> messenger links are set up with it, it seems to cause a deadlock in the
> third pn_messenger_send. All subsequent attempts to send will also hang
> (e.g. the proton example send.c application).
>
> We found this behaviour in our own code for OpenMAMA, but I think we have
> a valid recreation in native qpid proton code here too - see
> https://github.com/OpenMAMA/OpenMAMA/files/200901/om-issue-153.zip
> (attached as part of where it was discovered -
> https://github.com/OpenMAMA/OpenMAMA/issues/153).
>
> If you compile and run that code against the latest yum versions of qpid
> proton for Fedora 23, you'll get:
>
> Creating the messengers
> Starting the messengers
> Starting listener thread
> pthread_create successful
> Creating message for sending
> Setting the subject for the message
> Setting the address and sending message to subscriber
> Sending from first publisher
> Sent from first publisher
> Recv got something
> Received message with subject 'First Publisher'
> Sending from second publisher
> Sent from second publisher
> Recv got something
> Received message with subject 'Second Publisher'
> Sending from third publisher
>
>
> Then it hangs - you never get "Sent from third publisher".
>
> The application sets up one messenger to run on its own recv thread, then
> on the main thread, it fires up 3 distinct messengers and attempts to send
> a single message from each messenger, and the third one hangs.
>
> Can you let me know what you think? Should I raise an issue or is this
> something we can configure / work around our way out of?
>
> Cheers,
> Frank
>


[jira] [Created] (PROTON-1169) Deadlock in pn_messenger_send when using more than 2 publishers?

2016-04-05 Thread Frank Quinn (JIRA)
Frank Quinn created PROTON-1169:
---

 Summary: Deadlock in pn_messenger_send when using more than 2 
publishers?
 Key: PROTON-1169
 URL: https://issues.apache.org/jira/browse/PROTON-1169
 Project: Qpid Proton
  Issue Type: Bug
  Components: proton-c
Affects Versions: 0.12.0
 Environment: Fedora 23, 64-bit
Reporter: Frank Quinn


As per 
http://qpid.2158936.n2.nabble.com/Deadlock-in-pn-messenger-send-when-using-more-than-2-publishers-td7641239.html, 
I think I have found an issue with qpid proton when running in point-to-point 
mode. If a single recv thread is running and 3 concurrent messenger links are 
set up with it, it seems to cause a deadlock in the third pn_messenger_send. 
All subsequent attempts to send will also hang (e.g. the proton example send.c 
application).

We found this behaviour in our own code for OpenMAMA, but I think we have a 
valid recreation in native qpid proton code here too - see 
https://github.com/OpenMAMA/OpenMAMA/files/200901/om-issue-153.zip (attached as 
part of where it was discovered - 
https://github.com/OpenMAMA/OpenMAMA/issues/153).

If you compile and run that code against the latest yum versions of qpid 
proton for Fedora 23, you'll get:

{noformat}
Creating the messengers
Starting the messengers
Starting listener thread
pthread_create successful
Creating message for sending
Setting the subject for the message
Setting the address and sending message to subscriber
Sending from first publisher
Sent from first publisher
Recv got something
Received message with subject 'First Publisher'
Sending from second publisher
Sent from second publisher
Recv got something
Received message with subject 'Second Publisher'
Sending from third publisher
{noformat}


Then it hangs - you never get "Sent from third publisher".

The application sets up one messenger to run on its own recv thread, then on 
the main thread, it fires up 3 distinct messengers and attempts to send a 
single message from each messenger, and the third one hangs.





[jira] [Closed] (PROTON-423) Deadlock in messenger recv / send when send called from different threads (proton-c 0.5)

2016-04-05 Thread Frank Quinn (JIRA)

 [ 
https://issues.apache.org/jira/browse/PROTON-423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frank Quinn closed PROTON-423.
--
Resolution: Fixed

This was resolved long ago.

> Deadlock in messenger recv / send when send called from different threads 
> (proton-c 0.5)
> 
>
> Key: PROTON-423
> URL: https://issues.apache.org/jira/browse/PROTON-423
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-c
>Affects Versions: 0.5
> Environment: Linux x86 / pthread
>Reporter: Frank Quinn
>  Labels: messenger
> Attachments: qpid_messenger_deadlock.c
>
>
> Hi Folks,
> If I set a receive block to 1 message and in turn try to send to a 
> destination from multiple messengers on different threads, it seems to cause 
> a deadlock in send / recv. I have attached a small example which recreates 
> this:
> This is the output we expect this application to produce:
> Starting qpidListenerThread...
> Waiting on data to come into pn_messenger_recv...
> Data received by pn_messenger_recv...
> Message received with subject: 'MESSAGE FROM MAIN THREAD'
> Moving back to pn_messenger_recv
> Waiting on data to come into pn_messenger_recv...
> Starting qpidSenderThread...
> Finished with qpidSenderThread...
> Data received by pn_messenger_recv...
> Message received with subject: 'MESSAGE FROM PTHREAD'
> Moving back to pn_messenger_recv
> Waiting on data to come into pn_messenger_recv...
> This is what actually gets produced (note the second message is never 
> received):
> Starting qpidListenerThread...
> Waiting on data to come into pn_messenger_recv...
> Data received by pn_messenger_recv...
> Message received with subject: 'MESSAGE FROM MAIN THREAD'
> Moving back to pn_messenger_recv
> Waiting on data to come into pn_messenger_recv...
> Starting qpidSenderThread...
> Which deadlocks with the following backtrace:
> (gdb) thread apply all bt
> Thread 3 (Thread 0xb77c9b70 (LWP 9431)):
> #0  0x00cc8424 in __kernel_vsyscall ()
> #1  0x0021cca6 in poll () from /lib/libc.so.6
> #2  0x00c0f9fa in pn_driver_wait_2 ()
>from /home/fquinn/lib/qpid-proton-0.5/lib/libqpid-proton.so.2
> #3  0x00c0fd9f in pn_driver_wait ()
>from /home/fquinn/lib/qpid-proton-0.5/lib/libqpid-proton.so.2
> #4  0x00c0a4d1 in pn_messenger_tsync ()
>from /home/fquinn/lib/qpid-proton-0.5/lib/libqpid-proton.so.2
> #5  0x00c0a7bc in pn_messenger_sync ()
>from /home/fquinn/lib/qpid-proton-0.5/lib/libqpid-proton.so.2
> #6  0x00c0c27a in pn_messenger_recv ()
>from /home/fquinn/lib/qpid-proton-0.5/lib/libqpid-proton.so.2
> #7  0x08048953 in qpidListenerThread ()
> #8  0x00355a49 in start_thread () from /lib/libpthread.so.0
> #9  0x00227aee in clone () from /lib/libc.so.6
> Thread 2 (Thread 0xb6dc8b70 (LWP 9432)):
> #0  0x00cc8424 in __kernel_vsyscall ()
> #1  0x0021cca6 in poll () from /lib/libc.so.6
> #2  0x00c0f9fa in pn_driver_wait_2 ()
>from /home/fquinn/lib/qpid-proton-0.5/lib/libqpid-proton.so.2
> #3  0x00c0fd9f in pn_driver_wait ()
>from /home/fquinn/lib/qpid-proton-0.5/lib/libqpid-proton.so.2
> #4  0x00c0a4d1 in pn_messenger_tsync ()
>from /home/fquinn/lib/qpid-proton-0.5/lib/libqpid-proton.so.2
> #5  0x00c0a7bc in pn_messenger_sync ()
>from /home/fquinn/lib/qpid-proton-0.5/lib/libqpid-proton.so.2
> #6  0x00c0c1d5 in pn_messenger_send ()
>from /home/fquinn/lib/qpid-proton-0.5/lib/libqpid-proton.so.2
> #7  0x08048a5d in qpidSenderThread ()
> #8  0x00355a49 in start_thread () from /lib/libpthread.so.0
> #9  0x00227aee in clone () from /lib/libc.so.6
> Thread 1 (Thread 0xb77ca990 (LWP 9430)):
> #0  0x00cc8424 in __kernel_vsyscall ()
> #1  0x0035610d in pthread_join () from /lib/libpthread.so.0
> #2  0x08048bc9 in main ()
> Note that we know that this can be avoided by using the same messenger across 
> different threads for publishing or by setting a larger receive window, but 
> we expected this to work regardless and our existing implementation depends 
> on it.
> Cheers,
> Frank





[jira] [Closed] (PROTON-346) Deadlock experienced in pn_messenger_stop

2016-04-05 Thread Frank Quinn (JIRA)

 [ 
https://issues.apache.org/jira/browse/PROTON-346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frank Quinn closed PROTON-346.
--
   Resolution: Fixed
Fix Version/s: 0.5

Again, fixed long ago.

> Deadlock experienced in pn_messenger_stop
> -
>
> Key: PROTON-346
> URL: https://issues.apache.org/jira/browse/PROTON-346
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-c
>Affects Versions: 0.4
>Reporter: Frank Quinn
>Priority: Critical
>  Labels: messenger
> Fix For: 0.5
>
> Attachments: qpid_deadlock_repro.c
>
>
> Hi Folks,
> See attached code: I'm encountering a deadlock when I try to stop messengers. 
> The general workflow is:
> 1. Create pub and sub Messengers
> 2. Start the Messengers
> 3. Thread sub off onto its own thread as recv is a blocking call
> 4. Publish a round trip from the pub messenger to the sub messenger with a 
> destroy subject (recv is uninterruptible at the moment, so this is our only 
> way to interrupt it)
> 5. Stop the messengers
> When I try and stop the messengers, the application deadlocks with the 
> following backtrace (there is only one thread running at this point as the 
> subscribe thread has since exited):
> Thread 1 (Thread 0x7f38181a4840 (LWP 6688)):
> #0  0x003518ce99ad in poll () at ../sysdeps/unix/syscall-template.S:81
> #1  0x00309c226a1c in poll (__timeout=<optimized out>, __nfds=<optimized out>, __fds=<optimized out>)
> at /usr/include/bits/poll2.h:46
> #2  pn_driver_wait_2 (d=d@entry=0x1a81140, timeout=<optimized out>, 
> timeout@entry=-1)
> at /usr/src/debug/qpid-proton-0.4/proton-c/src/posix/driver.c:752
> #3  0x00309c226c42 in pn_driver_wait (d=0x1a81140, 
> timeout=timeout@entry=-1)
> at /usr/src/debug/qpid-proton-0.4/proton-c/src/posix/driver.c:807
> #4  0x00309c2242d3 in pn_messenger_tsync (messenger=0x1a81050, 
> predicate=0x309c222d80, timeout=<optimized out>)
> at /usr/src/debug/qpid-proton-0.4/proton-c/src/messenger.c:623
> #5  0x00400ffb in main () at qpid_deadlock_repro.c:123
> I also tried adding the entire subscriber messenger workflow to the newly 
> created thread but the same behaviour persists (I'll be attaching the code to 
> recreate this shortly).





[jira] [Updated] (PROTON-1168) 2-way Authentication via Certificates Fails in Proton-J

2016-04-05 Thread Robbie Gemmell (JIRA)

 [ 
https://issues.apache.org/jira/browse/PROTON-1168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robbie Gemmell updated PROTON-1168:
---
Attachment: PROTON-1168_reactor_ssl.patch

I posted the following reply on the users list thread\[1\] yesterday; I'm 
copying it here for ease of reference and also attaching the patch contents 
from the referenced pastebin in case anyone has problems with the link.

{quote}
Hi Jack,

This isn't something I had tried before, but I was able to establish a
connection using the master/0.13.0-SNAPSHOT proton-j reactor and send
messages to a 6.0.x/6.0.2-SNAPSHOT Qpid Java broker that was
configured to require SSL client certs and use the EXTERNAL SASL
mechanism (I didn't have a Dispatch set up appropriately and that was
easier for me, plus the issue described seemed to be client-side).

I had to make the following changes to the existing Send example to
add a required dependency, actually set where the sender is attaching,
change the sasl mech, and configure use of ssl plus provide the
cert/trust details:

http://pastebin.com/TR5azYFR

I notice that the C code you attached to the JIRA (PROTON-1168 for
interested folks) is actually using Messenger with proton-c, and not
the Reactor as mentioned here for proton-j. I'm not sure if your Java
code is actually doing the same since you didn't include it, but that
isn't something I have tried either in any case. I do seem to recall
previous discussion around proton-c Messenger that it isn't actually
possible to set the particular sasl mechanism with Messenger (though
that would presumably be a separate issue from the one the Dispatch
logs suggest occurred, of not sending a cert as requested/required).

Robbie
{quote}

\[1\] 
http://mail-archives.apache.org/mod_mbox/qpid-users/201604.mbox/%3CCAFitrpTn2smMXbCVNQCLxo5B6S%3D5KUzmbQwozti1%2BQb4ezRS8Q%40mail.gmail.com%3E
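
For anyone who cannot reach the pastebin, the ssl/sasl portion of those
changes might look roughly like this sketch (assuming the proton-j reactor
APIs; file paths, password and the handler name are placeholders):

{noformat}
import org.apache.qpid.proton.Proton;
import org.apache.qpid.proton.engine.BaseHandler;
import org.apache.qpid.proton.engine.Event;
import org.apache.qpid.proton.engine.Sasl;
import org.apache.qpid.proton.engine.SslDomain;
import org.apache.qpid.proton.engine.Transport;

// Sketch: configure EXTERNAL SASL and client-certificate SSL when the
// reactor binds the transport. Paths and password are placeholders.
public class ClientSslHandler extends BaseHandler {
    @Override
    public void onConnectionBound(Event event) {
        Transport transport = event.getTransport();

        Sasl sasl = transport.sasl();
        sasl.setMechanisms("EXTERNAL");

        SslDomain sslDomain = Proton.sslDomain();
        sslDomain.init(SslDomain.Mode.CLIENT);
        sslDomain.setCredentials("client_cert.pem", "client_key.pem", "password");
        sslDomain.setTrustedCaDb("ca_cert.pem");
        sslDomain.setPeerAuthentication(SslDomain.VerifyMode.VERIFY_PEER);
        transport.ssl(sslDomain);
    }
}
{noformat}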

> 2-way Authentication via Certificates Fails in Proton-J
> ---
>
> Key: PROTON-1168
> URL: https://issues.apache.org/jira/browse/PROTON-1168
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-j
>Affects Versions: 0.12.0
> Environment: Ubuntu 15.10 & RHEL 7
> Qpid Dispatch 0.5 & 0.6
> Proton-C 0.12 and Proton-J 0.12
>Reporter: Jack Gibson
>Priority: Critical
> Attachments: PROTON-1168_reactor_ssl.patch, 
> my_qdrouterd_B_standalone.conf, recv_with_ssl.c, send_with_ssl.c
>
>
> Using qpid dispatch, we are unable to enable 2-way SSL with proton-j but are 
> able to with proton-c.
> To reproduce, use the attached config to enable 2-way SSL with the 
> “authenticatePeer” flag set to TRUE.
> Restart the qdrouterd instance to pick up the config changes.
> Make the client send a message based on the AMQP-CLIENT library (which uses 
> Proton-J).
> Client Error Message: from the log file
> AMQP framing error
> EventImpl{type=TRANSPORT_ERROR, context=TransportImpl 
> [_connectionEndpoint=org.apache.qpid.proton.engine.impl.ConnectionImpl@6ef351a0,
>  org.apache.qpid.proton.engine.impl.TransportImpl@44c213d9]}
> Server Error Message: from the log file
> =64, totalFreeToHeap=0, transferBatchSize=64, 
> type=org.apache.qpid.dispatch.allocator, typeName=qd_timer_t, typeSize=56)
> Wed Mar 30 12:00:47 2016 AGENT (info) Activating management agent on 
> $management
> Wed Mar 30 12:00:47 2016 ROUTER (info) In-Process Address Registered: 
> $management
> Wed Mar 30 12:00:47 2016 ROUTER (info) In-Process Address Registered: 
> $management
> Wed Mar 30 12:00:47 2016 AGENT (debug) Add entity: 
> FixedAddressEntity(bias=closest, fanout=single, identity=fixedAddress/0, 
> name=fixedAddress/0, prefix=/, type=org.apache.qpid.dispatch.fixedAddress)
> Wed Mar 30 12:00:47 2016 ROUTER (info) Configured Address: prefix=/ phase=0 
> fanout=QD_SCHEMA_FIXEDADDRESS_FANOUT_SINGLE 
> bias=QD_SCHEMA_FIXEDADDRESS_BIAS_CLOSEST
> Wed Mar 30 12:00:47 2016 AGENT (debug) Add entity: 
> ListenerEntity(addr=0.0.0.0, authenticatePeer=True, 
> certDb=/home/vsharda/protected/pprootca_cert.pem, 
> certFile=/home/vsharda/protected/generic_cert.pem, 
> identity=listener/0.0.0.0:20009, idleTimeoutSeconds=16, 
> keyFile=/home/vsharda/protected/generic_key.pem, maxFrameSize=65536, 
> name=listener/0.0.0.0:20009, password=pn2.GmdXmkKv.X7fPq.oYDFj8Cs, 
> port=20009, requireEncryption=True, requireSsl=True, role=normal, 
> saslMechanisms=EXTERNAL, stripAnnotations=both, 
> type=org.apache.qpid.dispatch.listener)
> Wed Mar 30 12:00:47 2016 CONN_MGR (info) Configured Listener: 0.0.0.0:20009 
> proto=any role=normal
> Wed Mar 30 12:00:47 2016 SERVER (trace) Listening on 0.0.0.0:20009
> Wed Mar 30 12:00:47 2016 AGENT (debug) Add entity: 
> ConsoleEntity(identity=console/0, name=console/0, 
> type=org.apache.qpid.dispatch.console, wsport=5673)
> Wed Mar 30 12:00:47 2016 SERVER (info) Operational, 4 Threads Running
> Wed