[jira] [Created] (PROTON-1174) Proton C encodes Open.channel-max 0 as a null and not as a uint.

2016-04-11 Thread Chuck Rolke (JIRA)
Chuck Rolke created PROTON-1174:
---

 Summary: Proton C encodes Open.channel-max 0 as a null and not as 
a uint.
 Key: PROTON-1174
 URL: https://issues.apache.org/jira/browse/PROTON-1174
 Project: Qpid Proton
  Issue Type: Bug
  Components: proton-c
Affects Versions: 0.12.0
Reporter: Chuck Rolke


A connection Open frame is created with the intent of limiting the peer to 
exactly one channel.
{noformat}
  pn_transport_set_channel_max(pn_trans, 0);
{noformat}

Proton encodes the frame with a null (0x40) going over the wire in the 
channel-max slot.

A receiving application treats the null as 'use the default' and falls back to 
the spec default value of 65535. If the receiving application had received a 
0x43 (type:encoding uint:uint0) it would have used the intended channel-max of 0.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


RE: Proton-j Reactor - Receiver

2016-04-11 Thread Garlapati Sreeram Kumar
Awesome.

To make it easy, I added you as a collaborator to my fork of Proton; here's the 
branch from which I submitted the PR: 
https://github.com/SreeramGarlapati/qpid-proton/tree/sg.recvstuck

Thx!
Sree


Re: Proton-j Reactor - Receiver

2016-04-11 Thread Robbie Gemmell
Ah, excellent. I had actually started testing this myself a little
earlier, so I'll take a look and see what's what before continuing
tomorrow. Having taken a closer initial look at things, I think the
change itself may need to be augmented to account for some other
conditions too; I need to investigate further to be sure.

Robbie

On 11 April 2016 at 17:37, Garlapati Sreeram Kumar  wrote:
> Thanks a lot for the Response Robbie!
> Per your suggestion, added the CIT to the Pull Request (& yes, as you already 
> said – this issue is being tracked via JIRA - PROTON-1171).
>
> Thanks a lot for the Wonderful Collaboration!
> Sree
>
> From: Robbie Gemmell
> Sent: Thursday, April 7, 2016 3:52 AM
> To: proton@qpid.apache.org
> Cc: SeongJoon Kwak (SJ); 
> hm...@microsoft.com
> Subject: Re: Proton-j Reactor - Receiver
>
> Hi Sree,
>
> Thanks for the analysis and PR, I'll try to take a proper look soon.
> It's not an area of the code I'm familiar with so I'll need to have a
> bit of a dig myself to see if the change seems ok. I'd note that any
> not-insignificant bug fix such as this should probably have a test
> with it (and a JIRA, though I see you have since created one of those)
> :)
>
> Robbie
>
> On 6 April 2016 at 01:23, Garlapati Sreeram Kumar  wrote:
>> Hello Robbie,
>>
>> We are using proton-j client with SSL and many of our customers are hitting 
>> this issue.
>> Here are my findings after debugging through this issue:
>>
>> -  When incoming bytes arrive on the SocketChannel – proton-j client 
>> gets signaled by nio & as a result it unwinds the transport stack – as a 
>> result all the TransportInput implementations performs its task on the Read 
>> Bytes and hands off to the Next Layer in the stack (transport to ssl, ssl to 
>> frameparser etc).
>>
>> -  While unwinding that stack, SimpleSSLTransportWrapper.unwrapInput 
>> reads(16k bytes) from _inputBuffer and the result - decoded bytes are 
>> written to _decodedInputBuffer – as an intermediate buffer.
>>
>> -  It then flushes bytes from intermediate buffer to the next layer 
>> & invokes an _underlyingInput.Process() – to signal it that it has bytes in 
>> its input buffer.
>>
>> -  If the underlyingInput (lets say FrameParser) buffer size is 
>> small – lets say 4k – then decodedInputBuffer will be left with 12k bytes & 
>> Over time this accrues.
>>
>> The fix here is to flush decodedInputBuffer to the Next transport in the 
>> Network Stack & call _underlyingInput.Process() - until decodedInputBuffer 
>> is empty. Here’s the pull request - 
>> https://github.com/apache/qpid-proton/pull/73
>>
>> Pl. let me know if we need to do more to fix this issue comprehensively.
>>
>> Thx!
>> Sree
>>
>> From: Robbie Gemmell
>> Sent: Thursday, March 31, 2016 9:19 AM
>> To: proton@qpid.apache.org
>> Subject: Re: Proton-j Reactor - Receiver
>>
>> On 31 March 2016 at 04:32, Garlapati Sreeram Kumar  wrote:
>>> Hello All!
>>>
>>> I am using Proton-J reactor API (Version 0.12.0) for receiving AMQP 
>>> Messages (from Microsoft Azure Event Hubs): 
>>> https://github.com/Azure/azure-event-hubs/blob/master/java/azure-eventhubs/src/main/java/com/microsoft/azure/servicebus/amqp/ReceiveLinkHandler.java#L124
>>>
>>> Am using the onDelivery(Event) callback to receive messages. I really 
>>> appreciate your help with this issue/behavior:
>>>
>>> ISSUE: I noticed that the last few messages on the Queue are not being 
>>> issued to onDelivery(Event) callback by the Reactor
>>> - Then, I went ahead and enabled proton Frame tracing (PN_TRACE_FRM=1) and 
>>> discovered that the Transfer frames corresponding to those messages were 
>>> not even delivered to Client. Then, I looked at our Service Proton Frames 
>>> and can clearly see that they are being delivered by the Service. And other 
>>> AMQP clients (for ex: .net client can see the Transfer frames)
>>> - Is this a known behavior?
>>> Does Reactor code path disable Nagle on underlying socket – could this be 
>>> related? or is there any other Configuration that we should be setting to 
>>> see all Transfer frames received on the Socket?
>>>
>>> Please advice.
>>>
>>> Thanks a lot in Advance!
>>> Sree
>>>
>>> Sent from Mail for Windows 10
>>>
>>
>> I'm not aware of anyone else reporting anything like that. I don't see
>> anything in the code suggesting the reactor sets TCP_NODELAY trueon
>> the socket, but I wouldn't think that should matter here.
>>
>> The frame trace logging is done after the bytes are given to the
>> Transport and are processed into frames, so a lack of logging could
>> suggest various things such as they didnt actually get there, they
>> werent processed, something went wrong before they did/were, something
>> went wrong decoding them, etc. Its 

RE: Proton-j Reactor - Receiver

2016-04-11 Thread Garlapati Sreeram Kumar
Thanks a lot for the response, Robbie!
Per your suggestion, I added the CIT to the pull request (and yes, as you 
already said, this issue is being tracked via JIRA as PROTON-1171).

Thanks a lot for the Wonderful Collaboration!
Sree

From: Robbie Gemmell
Sent: Thursday, April 7, 2016 3:52 AM
To: proton@qpid.apache.org
Cc: SeongJoon Kwak (SJ); 
hm...@microsoft.com
Subject: Re: Proton-j Reactor - Receiver

Hi Sree,

Thanks for the analysis and PR, I'll try to take a proper look soon.
It's not an area of the code I'm familiar with so I'll need to have a
bit of a dig myself to see if the change seems ok. I'd note that any
not-insignificant bug fix such as this should probably have a test
with it (and a JIRA, though I see you have since created one of those)
:)

Robbie

On 6 April 2016 at 01:23, Garlapati Sreeram Kumar  wrote:
> Hello Robbie,
>
> We are using proton-j client with SSL and many of our customers are hitting 
> this issue.
> Here are my findings after debugging through this issue:
>
> -  When incoming bytes arrive on the SocketChannel, the proton-j client 
> is signaled by nio and unwinds the transport stack: each TransportInput 
> implementation performs its task on the read bytes and hands off to the next 
> layer in the stack (transport to ssl, ssl to frameparser, etc.).
>
> -  While unwinding that stack, SimpleSSLTransportWrapper.unwrapInput 
> reads 16k bytes from _inputBuffer and writes the decoded bytes to 
> _decodedInputBuffer, an intermediate buffer.
>
> -  It then flushes bytes from the intermediate buffer to the next layer 
> and invokes _underlyingInput.process() to signal that there are bytes in its 
> input buffer.
>
> -  If the underlyingInput's (let's say the FrameParser's) buffer size is 
> small, say 4k, then _decodedInputBuffer will be left holding 12k bytes, and 
> over time this accrues.
>
> The fix here is to flush _decodedInputBuffer to the next transport in the 
> network stack and call _underlyingInput.process() until _decodedInputBuffer 
> is empty. Here's the pull request: https://github.com/apache/qpid-proton/pull/73
>
> Please let me know if we need to do more to fix this issue comprehensively.
>
> Thx!
> Sree
>
> From: Robbie Gemmell
> Sent: Thursday, March 31, 2016 9:19 AM
> To: proton@qpid.apache.org
> Subject: Re: Proton-j Reactor - Receiver
>
> On 31 March 2016 at 04:32, Garlapati Sreeram Kumar  wrote:
>> Hello All!
>>
>> I am using the Proton-J reactor API (version 0.12.0) for receiving AMQP 
>> messages (from Microsoft Azure Event Hubs): 
>> https://github.com/Azure/azure-event-hubs/blob/master/java/azure-eventhubs/src/main/java/com/microsoft/azure/servicebus/amqp/ReceiveLinkHandler.java#L124
>>
>> I am using the onDelivery(Event) callback to receive messages. I would 
>> really appreciate your help with this issue/behavior:
>>
>> ISSUE: I noticed that the last few messages on the queue are not being 
>> issued to the onDelivery(Event) callback by the Reactor.
>> - I then enabled proton frame tracing (PN_TRACE_FRM=1) and discovered that 
>> the Transfer frames corresponding to those messages were not even delivered 
>> to the client. I then looked at our service's proton frames and can clearly 
>> see that they are being sent by the service, and other AMQP clients (e.g. 
>> the .NET client) can see the Transfer frames.
>> - Is this a known behavior? Does the Reactor code path disable Nagle on the 
>> underlying socket; could this be related? Or is there any other 
>> configuration we should be setting to see all Transfer frames received on 
>> the socket?
>>
>> Please advise.
>>
>> Thanks a lot in Advance!
>> Sree
>>
>> Sent from Mail for Windows 10
>>
>
> I'm not aware of anyone else reporting anything like that. I don't see
> anything in the code suggesting the reactor sets TCP_NODELAY true on
> the socket, but I wouldn't think that should matter here.
>
> The frame trace logging is done after the bytes are given to the
> Transport and are processed into frames, so a lack of logging could
> suggest various things: the bytes didn't actually get there, they
> weren't processed, something went wrong before they did/were, something
> went wrong decoding them, etc. It's hard to say much more without more
> info.
>
> Robbie


[GitHub] qpid-proton pull request: PROTON-1171: SimpleSSLTransportWrapper d...

2016-04-11 Thread SreeramGarlapati
Github user SreeramGarlapati commented on a diff in the pull request:

https://github.com/apache/qpid-proton/pull/73#discussion_r59236240
  
--- Diff: 
proton-j/src/test/java/org/apache/qpid/proton/engine/impl/ssl/SimpleSslTransportWrapperTest.java
 ---
@@ -117,16 +117,6 @@ public void testInputIncompletePacketInThreeParts()
 }
 
 @Test
-public void 
testUnderlyingInputUsingSmallBuffer_receivesAllDecodedInput() throws Exception
-{
-_underlyingInput.setInputBufferSize(1);
--- End diff --

I deleted this test from this file and moved it to a brand-new file for two 
reasons:
1) This function has a bug: setInputBufferSize sets only the integer value; it 
doesn't change the actual buffer size.
2) Since changing the buffer size will change the behavior of 
SimpleSslTransportWrapper, this could impact other test cases in the same 
file. So I moved the size-related test to a separate file...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (PROTON-1173) Proton C core dump in face of channel-max protocol violation

2016-04-11 Thread Chuck Rolke (JIRA)

 [ 
https://issues.apache.org/jira/browse/PROTON-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuck Rolke resolved PROTON-1173.
-
Resolution: Fixed

Fixed at commit 9d88f82






[jira] [Commented] (PROTON-1173) Proton C core dump in face of channel-max protocol violation

2016-04-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15235328#comment-15235328
 ] 

ASF subversion and git services commented on PROTON-1173:
-

Commit 9d88f823c1951012888440501fdbe111f344518d in qpid-proton's branch 
refs/heads/master from [~chug]
[ https://git-wip-us.apache.org/repos/asf?p=qpid-proton.git;h=9d88f82 ]

PROTON-1173: Close channel with error on channel-max violation







[jira] [Created] (PROTON-1173) Proton C core dump in face of channel-max protocol violation

2016-04-11 Thread Chuck Rolke (JIRA)
Chuck Rolke created PROTON-1173:
---

 Summary: Proton C core dump in face of channel-max protocol 
violation
 Key: PROTON-1173
 URL: https://issues.apache.org/jira/browse/PROTON-1173
 Project: Qpid Proton
  Issue Type: Bug
  Components: proton-c
Affects Versions: 0.11.1
Reporter: Chuck Rolke


A rogue client creates a session on a channel higher than the channel-max 
exchanged at connection open.
{noformat}
Mon Apr 11 10:34:27 2016 SERVER (trace) [1]:pn_session: too many sessions: 1  
channel_max is 0 (/home/chug/git/qpid-dispatch/src/server.c:116)

Program received signal SIGSEGV, Segmentation fault.
0x7793b84a in pn_do_begin (transport=0x6a4bd0, frame_type=0 '\000', 
channel=1, args=0x7c1f60, payload=0x7fffd2c0)
at /home/chug/git/qpid-proton/proton-c/src/transport/transport.c:1205
1205  ssn->state.incoming_transfer_count = next;
Missing separate debuginfos, use: debuginfo-install nss-mdns-0.10-15.fc21.x86_64
(gdb) 
(gdb) list
1200// XXX: what if session is NULL?
1201ssn = (pn_session_t *) pn_hash_get(transport->local_channels, 
remote_channel);
1202  } else {
1203ssn = pn_session(transport->connection);
1204  }
1205  ssn->state.incoming_transfer_count = next;
1206  pni_map_remote_channel(ssn, channel);
1207  PN_SET_REMOTE(ssn->endpoint.state, PN_REMOTE_ACTIVE);
1208  pn_collector_put(transport->connection->collector, PN_OBJECT, ssn, 
PN_SESSION_REMOTE_OPEN);
1209  return 0;
(gdb) p ssn
$1 = (pn_session_t *) 0x0
(gdb) 
{noformat}

The session is null, and a SEGV is the result.


