[jira] [Commented] (DISPATCH-1316) race in remote_sasl can cause use after free

2019-04-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16812816#comment-16812816
 ] 

ASF GitHub Bot commented on DISPATCH-1316:
--

codecov-io commented on issue #485: DISPATCH-1316: atomic checking for 
deletability
URL: https://github.com/apache/qpid-dispatch/pull/485#issuecomment-481017785
 
 
   # 
[Codecov](https://codecov.io/gh/apache/qpid-dispatch/pull/485?src=pr&el=h1) 
Report
   > Merging 
[#485](https://codecov.io/gh/apache/qpid-dispatch/pull/485?src=pr&el=desc) into 
[master](https://codecov.io/gh/apache/qpid-dispatch/commit/c27d73b188fcf0a311b1c1d69e21cb9f4c58172d?src=pr&el=desc)
 will **increase** coverage by `0.01%`.
   > The diff coverage is `100%`.
   
   [![Impacted file tree 
graph](https://codecov.io/gh/apache/qpid-dispatch/pull/485/graphs/tree.svg?width=650&token=rk2Cgd27pP&height=150&src=pr)](https://codecov.io/gh/apache/qpid-dispatch/pull/485?src=pr&el=tree)
   
   ```diff
   @@            Coverage Diff             @@
   ##           master     #485      +/-   ##
   ==========================================
   + Coverage   86.92%   86.94%   +0.01%     
   ==========================================
     Files          85       85              
     Lines       19220    19228       +8     
   ==========================================
   + Hits        16707    16717      +10     
   + Misses       2513     2511       -2
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/qpid-dispatch/pull/485?src=pr&el=tree) | Coverage Δ | |
   |---|---|---|
   | [src/remote\_sasl.c](https://codecov.io/gh/apache/qpid-dispatch/pull/485/diff?src=pr&el=tree#diff-c3JjL3JlbW90ZV9zYXNsLmM=) | `84.23% <100%> (+0.35%)` | :arrow_up: |
   | [src/router\_core/connections.c](https://codecov.io/gh/apache/qpid-dispatch/pull/485/diff?src=pr&el=tree#diff-c3JjL3JvdXRlcl9jb3JlL2Nvbm5lY3Rpb25zLmM=) | `94.87% <0%> (+0.11%)` | :arrow_up: |
   | [src/router\_core/route\_tables.c](https://codecov.io/gh/apache/qpid-dispatch/pull/485/diff?src=pr&el=tree#diff-c3JjL3JvdXRlcl9jb3JlL3JvdXRlX3RhYmxlcy5j) | `76.92% <0%> (+0.24%)` | :arrow_up: |
   
   --
   
   [Continue to review full report at 
Codecov](https://codecov.io/gh/apache/qpid-dispatch/pull/485?src=pr&el=continue).
   > **Legend** - [Click here to learn 
more](https://docs.codecov.io/docs/codecov-delta)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by 
[Codecov](https://codecov.io/gh/apache/qpid-dispatch/pull/485?src=pr&el=footer).
 Last update 
[c27d73b...651bfcf](https://codecov.io/gh/apache/qpid-dispatch/pull/485?src=pr&el=lastupdated).
 Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
   
 



> race in remote_sasl can cause use after free
> 
>
> Key: DISPATCH-1316
> URL: https://issues.apache.org/jira/browse/DISPATCH-1316
> Project: Qpid Dispatch
>  Issue Type: Bug
>Affects Versions: 1.6.0
>Reporter: Gordon Sim
>Assignee: Gordon Sim
>Priority: Major
>







[GitHub] [qpid-dispatch] codecov-io commented on issue #485: DISPATCH-1316: atomic checking for deletability

2019-04-08 Thread GitBox
codecov-io commented on issue #485: DISPATCH-1316: atomic checking for 
deletability
URL: https://github.com/apache/qpid-dispatch/pull/485#issuecomment-481017785
 
 





[jira] [Commented] (DISPATCH-1316) race in remote_sasl can cause use after free

2019-04-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16812796#comment-16812796
 ] 

ASF GitHub Bot commented on DISPATCH-1316:
--

grs commented on pull request #485: DISPATCH-1316: atomic checking for 
deletability
URL: https://github.com/apache/qpid-dispatch/pull/485
 
 
   
 



> race in remote_sasl can cause use after free
> 
>
> Key: DISPATCH-1316
> URL: https://issues.apache.org/jira/browse/DISPATCH-1316
> Project: Qpid Dispatch
>  Issue Type: Bug
>Affects Versions: 1.6.0
>Reporter: Gordon Sim
>Assignee: Gordon Sim
>Priority: Major
>







[GitHub] [qpid-dispatch] grs opened a new pull request #485: DISPATCH-1316: atomic checking for deletability

2019-04-08 Thread GitBox
grs opened a new pull request #485: DISPATCH-1316: atomic checking for 
deletability
URL: https://github.com/apache/qpid-dispatch/pull/485
 
 
   





[jira] [Created] (DISPATCH-1316) race in remote_sasl can cause use after free

2019-04-08 Thread Gordon Sim (JIRA)
Gordon Sim created DISPATCH-1316:


 Summary: race in remote_sasl can cause use after free
 Key: DISPATCH-1316
 URL: https://issues.apache.org/jira/browse/DISPATCH-1316
 Project: Qpid Dispatch
  Issue Type: Bug
Affects Versions: 1.6.0
Reporter: Gordon Sim
Assignee: Gordon Sim









[jira] [Commented] (DISPATCH-1309) Various crashes in 1.6 release

2019-04-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16812753#comment-16812753
 ] 

ASF subversion and git services commented on DISPATCH-1309:
---

Commit c27d73b188fcf0a311b1c1d69e21cb9f4c58172d in qpid-dispatch's branch 
refs/heads/master from Ted Ross
[ https://gitbox.apache.org/repos/asf?p=qpid-dispatch.git;h=c27d73b ]

DISPATCH-1309 - Move the decrement of the content ref-count below the 
buffer-trimming logic in qd_message_free.  This eliminates the case where 
message content can be freed by one thread while another is doing 
buffer-trimming on the same content.
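
The ordering matters for any reference-counted object shared across threads: all work on the content has to finish before the holder publishes its release, otherwise another thread can drop the count to zero and reclaim the content mid-operation. A minimal Java analogue of that ordering (illustrative only; the real fix is in the C function qd_message_free, and the names below are invented):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative analogue of the qd_message_free ordering fix; not the actual C code.
final class SharedContent {
    private final AtomicInteger refCount = new AtomicInteger(1);

    void acquire() {
        refCount.incrementAndGet();
    }

    void release() {
        // Finish all remaining work on the shared content first...
        trimBuffers();
        // ...and only then drop this holder's reference. Decrementing before
        // trimming is the race: another holder can hit zero and reclaim the
        // content while this thread is still trimming it.
        if (refCount.decrementAndGet() == 0) {
            reclaim();
        }
    }

    private void trimBuffers() { /* release unused buffers still attached to the content */ }

    private void reclaim()     { /* free the content once no holder references it */ }
}
```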


> Various crashes in 1.6 release
> --
>
> Key: DISPATCH-1309
> URL: https://issues.apache.org/jira/browse/DISPATCH-1309
> Project: Qpid Dispatch
>  Issue Type: Bug
>Affects Versions: 1.6.0
> Environment: System 'unused':(
> Fedora 5.0.3-200.fc29.x86_64,
> Python 2.7.15,
> Proton master @ eab1f.
> System 'taj':(
> Fedora 4.18.16-200.fc28.x86_64,
> Python 3.6.6,
> Proton master @ 68b38
>Reporter: Chuck Rolke
>Priority: Major
> Attachments: DISPATCH-1309-backtraces.txt, 
> DISPATCH-1309-gen_configs_linear.py
>
>
> qpid-dispatch master @ 51244, which is very close to the 1.6 release, has 
> various crashes.
> The test network is 12 routers spread over two systems. (Configuration 
> generator to be attached.) Four interior routers are in linear arrangement 
> with A and C on one system ('unused'), and B and D on the other system 
> ('taj'). Each system then attaches four edge routers, one to each interior 
> router.
> Running lightweight tests, like proton cpp simple_send and simple_recv to 
> ports on INTA and INTB interior routers leads to a crash on INTC. The crashes 
> typically look like reuse of structures after they have been freed (addresses 
> are 0x). Other crashes hint of general memory corruption 
> (crashes in malloc.c).
>  






[jira] [Commented] (DISPATCH-1315) attempt to wake a deleted http connection

2019-04-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16812699#comment-16812699
 ] 

ASF GitHub Bot commented on DISPATCH-1315:
--

grs commented on pull request #484: DISPATCH-1315: ensure that the state shared 
by threads other than the…
URL: https://github.com/apache/qpid-dispatch/pull/484
 
 
   … http thread cannot be prematurely deleted
 



> attempt to wake a deleted http connection
> -
>
> Key: DISPATCH-1315
> URL: https://issues.apache.org/jira/browse/DISPATCH-1315
> Project: Qpid Dispatch
>  Issue Type: Bug
>Affects Versions: 1.6.0
>Reporter: Gordon Sim
>Assignee: Gordon Sim
>Priority: Major
>
> The /healthz and /metrics implementation can result in attempts to wake up 
> deleted http connections. So far I have only reproduced very occasional 
> invalid reads with this but there could be other symptoms.






[GitHub] [qpid-dispatch] grs opened a new pull request #484: DISPATCH-1315: ensure that the state shared by threads other than the…

2019-04-08 Thread GitBox
grs opened a new pull request #484: DISPATCH-1315: ensure that the state shared 
by threads other than the…
URL: https://github.com/apache/qpid-dispatch/pull/484
 
 
   … http thread cannot be prematurely deleted





[jira] [Created] (DISPATCH-1315) attempt to wake a deleted http connection

2019-04-08 Thread Gordon Sim (JIRA)
Gordon Sim created DISPATCH-1315:


 Summary: attempt to wake a deleted http connection
 Key: DISPATCH-1315
 URL: https://issues.apache.org/jira/browse/DISPATCH-1315
 Project: Qpid Dispatch
  Issue Type: Bug
Affects Versions: 1.6.0
Reporter: Gordon Sim
Assignee: Gordon Sim


The /healthz and /metrics implementation can result in attempts to wake up 
deleted http connections. So far I have only reproduced very occasional invalid 
reads with this but there could be other symptoms.
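
The pull request title for this issue points at the fix direction: state that other threads use to wake an http connection must stay reachable and valid until those threads can no longer touch it. A hedged Java sketch of one way to get that property, using a registry of live connections consulted atomically on each wake (all names here are invented; this is not the dispatch C code):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical sketch: wake requests go through a registry of live connections,
// so a wake aimed at an already-deleted connection is a harmless no-op.
final class HttpWakeRegistry {
    private final ConcurrentMap<Long, Runnable> live = new ConcurrentHashMap<>();

    void register(long connId, Runnable wakeFn) {
        live.put(connId, wakeFn);
    }

    // Called just before the connection is deleted; remove() waits for any
    // in-flight computeIfPresent() on the same key to finish.
    void unregister(long connId) {
        live.remove(connId);
    }

    /** Returns false if the connection no longer exists. */
    boolean wake(long connId) {
        // computeIfPresent runs the callback atomically with respect to unregister(),
        // so the wake never touches state that has already been deleted.
        return live.computeIfPresent(connId, (id, wakeFn) -> { wakeFn.run(); return wakeFn; }) != null;
    }
}
```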






[GitHub] [qpid-dispatch] grs opened a new pull request #483: DISPATCH-1314: suppress spurious errors from log

2019-04-08 Thread GitBox
grs opened a new pull request #483: DISPATCH-1314: suppress spurious errors 
from log
URL: https://github.com/apache/qpid-dispatch/pull/483
 
 
   





[jira] [Commented] (DISPATCH-1314) spurious error in log when using healthz or metrics

2019-04-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16812484#comment-16812484
 ] 

ASF GitHub Bot commented on DISPATCH-1314:
--

grs commented on pull request #483: DISPATCH-1314: suppress spurious errors 
from log
URL: https://github.com/apache/qpid-dispatch/pull/483
 
 
   
 



> spurious error in log when using healthz or metrics
> ---
>
> Key: DISPATCH-1314
> URL: https://issues.apache.org/jira/browse/DISPATCH-1314
> Project: Qpid Dispatch
>  Issue Type: Bug
>Affects Versions: 1.6.0
>Reporter: Gordon Sim
>Assignee: Gordon Sim
>Priority: Major
>
> When using the /metrics or /healthz http requests using libwebsockets 2.4 or 
> earlier, an error is logged regarding the path (which is dynamically served). 
> This does not occur with libwebsockets 3.






[jira] [Created] (DISPATCH-1314) spurious error in log when using healthz or metrics

2019-04-08 Thread Gordon Sim (JIRA)
Gordon Sim created DISPATCH-1314:


 Summary: spurious error in log when using healthz or metrics
 Key: DISPATCH-1314
 URL: https://issues.apache.org/jira/browse/DISPATCH-1314
 Project: Qpid Dispatch
  Issue Type: Bug
Affects Versions: 1.6.0
Reporter: Gordon Sim
Assignee: Gordon Sim


When using the /metrics or /healthz http requests using libwebsockets 2.4 or 
earlier, an error is logged regarding the path (which is dynamically served). 
This does not occur with libwebsockets 3.






[jira] [Resolved] (PROTON-2027) Proactor connection wake after memory freed when using pn_proactor_disconnect().

2019-04-08 Thread Cliff Jansen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PROTON-2027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cliff Jansen resolved PROTON-2027.
--
   Resolution: Fixed
Fix Version/s: proton-c-0.28.0

> Proactor connection wake after memory freed when using 
> pn_proactor_disconnect().
> 
>
> Key: PROTON-2027
> URL: https://issues.apache.org/jira/browse/PROTON-2027
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-c
>Affects Versions: proton-c-0.27.0
>Reporter: Cliff Jansen
>Assignee: Cliff Jansen
>Priority: Major
> Fix For: proton-c-0.28.0
>
>
> The normal cleanup procedure for epoll and win_iocp proactors waits for all 
> async activity to complete before freeing memory.
> pn_proactor_disconnect can't actually force a close so it launches a separate 
> async activity piggy-backed on the internal wake mechanism of any connections 
> to be closed.
> If the disconnect is happening at the same time as a separate thread doing a 
> normal close, a new wake can result after concluding there are none left.
> The solution is to mark the connection as "already awake" before entering the 
> cleanup code so new wakes are no-ops.  The libuv proactor doesn't need this 
> as the disconnect function is managed within libuv and never competes with 
> the normal close operation.
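
The quoted description boils down to making wake-ups idempotent: a connection already marked awake, or already in cleanup, ignores further wake requests. A minimal Java sketch of that idea using an atomic flag (illustrative only; the actual fix lives in the C epoll and win_iocp proactors):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative analogue of the "mark as already awake" fix; not the proton C code.
final class WakeableConnection {
    private final AtomicBoolean wakePending = new AtomicBoolean(false);

    /** A pn_connection_wake-style request: only the first caller schedules work. */
    void requestWake() {
        if (wakePending.compareAndSet(false, true)) {
            schedule();   // enqueue exactly one wake for this connection
        }
        // otherwise a wake is already pending (or cleanup has started): no-op
    }

    /** Cleanup path: pre-mark the connection awake so late disconnect wakes do nothing. */
    void beginCleanup() {
        wakePending.set(true);
        // ... wait for outstanding activity to drain, then free the connection safely ...
    }

    private void schedule() { /* hand the connection to the proactor's ready queue */ }
}
```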






[jira] [Commented] (PROTON-2027) Proactor connection wake after memory freed when using pn_proactor_disconnect().

2019-04-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-2027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16812391#comment-16812391
 ] 

ASF subversion and git services commented on PROTON-2027:
-

Commit d4a6971af2c64b7d9e8aa31a1e807690d4f26a7e in qpid-proton's branch 
refs/heads/master from Clifford Jansen
[ https://gitbox.apache.org/repos/asf?p=qpid-proton.git;h=d4a6971 ]

PROTON-2027: Make disconnect work like other wake mechanisms (eg. 
pn_connection_wake) and check for closing status.   Count disconnects correctly 
for competing closes.


> Proactor connection wake after memory freed when using 
> pn_proactor_disconnect().
> 
>
> Key: PROTON-2027
> URL: https://issues.apache.org/jira/browse/PROTON-2027
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-c
>Affects Versions: proton-c-0.27.0
>Reporter: Cliff Jansen
>Assignee: Cliff Jansen
>Priority: Major
>
> The normal cleanup procedure for epoll and win_iocp proactors waits for all 
> async activity to complete before freeing memory.
> pn_proactor_disconnect can't actually force a close so it launches a separate 
> async activity piggy-backed on the internal wake mechanism of any connections 
> to be closed.
> If the disconnect is happening at the same time as a separate thread doing a 
> normal close, a new wake can result after concluding there are none left.
> The solution is to mark the connection as "already awake" before entering the 
> cleanup code so new wakes are no-ops.  The libuv proactor doesn't need this 
> as the disconnect function is managed within libuv and never competes with 
> the normal close operation.






[jira] [Commented] (PROTON-2027) Proactor connection wake after memory freed when using pn_proactor_disconnect().

2019-04-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-2027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16812392#comment-16812392
 ] 

ASF subversion and git services commented on PROTON-2027:
-

Commit f973528912b3b74e8d1e50317429e202e9f51ec4 in qpid-proton's branch 
refs/heads/master from Clifford Jansen
[ https://gitbox.apache.org/repos/asf?p=qpid-proton.git;h=f973528 ]

PROTON-2027: Test case with two closing connection contexts competing with a 
third context in pn_proactor_disconnect().


> Proactor connection wake after memory freed when using 
> pn_proactor_disconnect().
> 
>
> Key: PROTON-2027
> URL: https://issues.apache.org/jira/browse/PROTON-2027
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-c
>Affects Versions: proton-c-0.27.0
>Reporter: Cliff Jansen
>Assignee: Cliff Jansen
>Priority: Major
>
> The normal cleanup procedure for epoll and win_iocp proactors waits for all 
> async activity to complete before freeing memory.
> pn_proactor_disconnect can't actually force a close so it launches a separate 
> async activity piggy-backed on the internal wake mechanism of any connections 
> to be closed.
> If the disconnect is happening at the same time as a separate thread doing a 
> normal close, a new wake can result after concluding there are none left.
> The solution is to mark the connection as "already awake" before entering the 
> cleanup code so new wakes are no-ops.  The libuv proactor doesn't need this 
> as the disconnect function is managed within libuv and never competes with 
> the normal close operation.






[jira] [Updated] (PROTON-2027) Proactor connection wake after memory freed when using pn_proactor_disconnect().

2019-04-08 Thread Cliff Jansen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PROTON-2027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cliff Jansen updated PROTON-2027:
-
Affects Version/s: (was: proton-j-0.32.0)
   proton-c-0.27.0

> Proactor connection wake after memory freed when using 
> pn_proactor_disconnect().
> 
>
> Key: PROTON-2027
> URL: https://issues.apache.org/jira/browse/PROTON-2027
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-c
>Affects Versions: proton-c-0.27.0
>Reporter: Cliff Jansen
>Assignee: Cliff Jansen
>Priority: Major
>
> The normal cleanup procedure for epoll and win_iocp proactors waits for all 
> async activity to complete before freeing memory.
> pn_proactor_disconnect can't actually force a close so it launches a separate 
> async activity piggy-backed on the internal wake mechanism of any connections 
> to be closed.
> If the disconnect is happening at the same time as a separate thread doing a 
> normal close, a new wake can result after concluding there are none left.
> The solution is to mark the connection as "already awake" before entering the 
> cleanup code so new wakes are no-ops.  The libuv proactor doesn't need this 
> as the disconnect function is managed within libuv and never competes with 
> the normal close operation.






[jira] [Commented] (QPID-8297) [Broker-J][Oracle Message Store] QPID_MESSAGE_CONTENT reserved space keeps growing until it reaches the limit and crashes

2019-04-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16812330#comment-16812330
 ] 

ASF GitHub Bot commented on QPID-8297:
--

overmeulen commented on pull request #24: QPID-8297: [Broker-J][Oracle Message 
Store] QPID_MESSAGE_CONTENT reserved space keeps growing
URL: https://github.com/apache/qpid-broker-j/pull/24
 
 
   
 



> [Broker-J][Oracle Message Store] QPID_MESSAGE_CONTENT reserved space keeps 
> growing until it reaches the limit and crashes
> -
>
> Key: QPID-8297
> URL: https://issues.apache.org/jira/browse/QPID-8297
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.1.0
>Reporter: Olivier VERMEULEN
>Priority: Critical
>
> Under a continuous high load, the Oracle message store crashes because the 
> reserved space for the table QPID_MESSAGE_CONTENT keeps on growing (even if 
> the table itself is empty since we consume as fast as we produce).
> This only happens for "big" messages (over 4KB) and comes from the way Oracle 
> handles the LOB types (content column is declared as a BLOB). For LOB values 
> over 4KB Oracle reserves some space in the table to handle the value but, by 
> default, it won't actually release it (even if the value is removed) before a 
> few hours.
> So when we have a high load of big messages that lasts a few hours we end up 
> reaching the max size of the tablespace and the database crashes...






[GitHub] [qpid-broker-j] overmeulen opened a new pull request #24: QPID-8297: [Broker-J][Oracle Message Store] QPID_MESSAGE_CONTENT reserved space keeps growing

2019-04-08 Thread GitBox
overmeulen opened a new pull request #24: QPID-8297: [Broker-J][Oracle Message 
Store] QPID_MESSAGE_CONTENT reserved space keeps growing
URL: https://github.com/apache/qpid-broker-j/pull/24
 
 
   





[jira] [Commented] (QPID-8297) [Broker-J][Oracle Message Store] QPID_MESSAGE_CONTENT reserved space keeps growing until it reaches the limit and crashes

2019-04-08 Thread Olivier VERMEULEN (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16812279#comment-16812279
 ] 

Olivier VERMEULEN commented on QPID-8297:
-

Working on a fix...

> [Broker-J][Oracle Message Store] QPID_MESSAGE_CONTENT reserved space keeps 
> growing until it reaches the limit and crashes
> -
>
> Key: QPID-8297
> URL: https://issues.apache.org/jira/browse/QPID-8297
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.1.0
>Reporter: Olivier VERMEULEN
>Priority: Critical
>
> Under a continuous high load, the Oracle message store crashes because the 
> reserved space for the table QPID_MESSAGE_CONTENT keeps on growing (even if 
> the table itself is empty since we consume as fast as we produce).
> This only happens for "big" messages (over 4KB) and comes from the way Oracle 
> handles the LOB types (content column is declared as a BLOB). For LOB values 
> over 4KB Oracle reserves some space in the table to handle the value but, by 
> default, it won't actually release it (even if the value is removed) before a 
> few hours.
> So when we have a high load of big messages that lasts a few hours we end up 
> reaching the max size of the tablespace and the database crashes...






[jira] [Created] (QPID-8297) [Broker-J][Oracle Message Store] QPID_MESSAGE_CONTENT reserved space keeps growing until it reaches the limit and crashes

2019-04-08 Thread Olivier VERMEULEN (JIRA)
Olivier VERMEULEN created QPID-8297:
---

 Summary: [Broker-J][Oracle Message Store] QPID_MESSAGE_CONTENT 
reserved space keeps growing until it reaches the limit and crashes
 Key: QPID-8297
 URL: https://issues.apache.org/jira/browse/QPID-8297
 Project: Qpid
  Issue Type: Bug
  Components: Broker-J
Affects Versions: qpid-java-broker-7.1.0
Reporter: Olivier VERMEULEN


Under a continuous high load, the Oracle message store crashes because the 
reserved space for the table QPID_MESSAGE_CONTENT keeps on growing (even if the 
table itself is empty since we consume as fast as we produce).

This only happens for "big" messages (over 4KB) and comes from the way Oracle 
handles the LOB types (content column is declared as a BLOB). For LOB values 
over 4KB Oracle reserves some space in the table to handle the value but, by 
default, it won't actually release it (even if the value is removed) before a 
few hours.

So when we have a high load of big messages that lasts a few hours we end up 
reaching the max size of the tablespace and the database crashes...
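
The growth is visible from the database side rather than from the broker: the LOB segment backing the content column keeps its reserved extents even after the rows are deleted. A hedged JDBC snippet for watching that reserved space (purely diagnostic, not part of Broker-J; the connection details are placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Diagnostic sketch: report how many bytes Oracle keeps reserved for the LOB
// segment behind QPID_MESSAGE_CONTENT, even while the table itself is empty.
public final class LobSegmentSizeCheck {
    public static void main(String[] args) throws Exception {
        String sql = "SELECT s.segment_name, s.bytes"
                   + " FROM user_segments s JOIN user_lobs l ON l.segment_name = s.segment_name"
                   + " WHERE l.table_name = ?";
        // Placeholder connection details; adjust for the actual environment.
        try (Connection c = DriverManager.getConnection("jdbc:oracle:thin:@//dbhost:1521/QPIDDB", "qpid", "secret");
             PreparedStatement ps = c.prepareStatement(sql)) {
            ps.setString(1, "QPID_MESSAGE_CONTENT");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%s: %,d bytes reserved%n", rs.getString(1), rs.getLong(2));
                }
            }
        }
    }
}
```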






[jira] [Updated] (QPID-8296) [JMS AMQP 0-x] Release 6.3.4 version of Qpid JMS client for AMQP 0-x

2019-04-08 Thread Alex Rudyy (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPID-8296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy updated QPID-8296:
-
Summary: [JMS AMQP 0-x] Release 6.3.4 version of Qpid JMS client for AMQP 
0-x  (was: [JMS AMQP 0-x] Release 6.3.4 version of Qpid JMS client)

> [JMS AMQP 0-x] Release 6.3.4 version of Qpid JMS client for AMQP 0-x
> 
>
> Key: QPID-8296
> URL: https://issues.apache.org/jira/browse/QPID-8296
> Project: Qpid
>  Issue Type: Task
>  Components: JMS AMQP 0-x
>Reporter: Alex Rudyy
>Priority: Major
> Fix For: qpid-java-client-0-x-6.3.4
>
>
> Release Qpid JMS AMQP 0-x client following instructions 
> [here|https://cwiki.apache.org/confluence/display/qpid/Releasing+Qpid+JMS+AMQP+0-x]






[jira] [Created] (QPID-8296) [JMS AMQP 0-x] Release 6.3.4 version of Qpid JMS client

2019-04-08 Thread Alex Rudyy (JIRA)
Alex Rudyy created QPID-8296:


 Summary: [JMS AMQP 0-x] Release 6.3.4 version of Qpid JMS client
 Key: QPID-8296
 URL: https://issues.apache.org/jira/browse/QPID-8296
 Project: Qpid
  Issue Type: Task
  Components: JMS AMQP 0-x
Reporter: Alex Rudyy
 Fix For: qpid-java-client-0-x-6.3.4


Release Qpid JMS AMQP 0-x client following instructions 
[here|https://cwiki.apache.org/confluence/display/qpid/Releasing+Qpid+JMS+AMQP+0-x]






[jira] [Updated] (QPID-8282) [JMS AMQP 0-x] Java 11 build/runtime failure due to use of javax.xml.bind.DatatypeConverter

2019-04-08 Thread Alex Rudyy (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPID-8282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy updated QPID-8282:
-
Fix Version/s: qpid-java-client-0-x-6.3.4

> [JMS AMQP 0-x] Java 11 build/runtime failure due to use of 
> javax.xml.bind.DatatypeConverter
> ---
>
> Key: QPID-8282
> URL: https://issues.apache.org/jira/browse/QPID-8282
> Project: Qpid
>  Issue Type: Bug
>  Components: JMS AMQP 0-x
>Affects Versions: qpid-java-client-0-x-6.3.3
>Reporter: Robbie Gemmell
>Priority: Minor
> Fix For: qpid-java-client-0-x-6.3.4
>
>
> In setting up a new system, I happened to attempt to build the JMS AMQP 
> 0-x client using Java 11. It failed to compile due to the use of 
> javax.xml.bind.DatatypeConverter for Base64 handling in the SCRAM-SHA 
> mechanisms. This prevents building on Java 11, and would break use of 
> SCRAM-SHA while running on Java 11, assuming nothing else stopped things 
> working.
> Although the 0-x client is essentially in maintenance/legacy mode at this 
> point, a blocker as trivial as Base64 handling would be nice to resolve if 
> doing so made using Java 11 possible, if only to simplify migrations later.
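
Since Java 8, java.util.Base64 covers the same encode/decode needs without the JAXB module that Java 11 removed, so a substitution along the following lines would clear the blocker (a sketch of the general idea, not the actual client change):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch: replace javax.xml.bind.DatatypeConverter with java.util.Base64, which
// is part of the JDK on Java 8+ and still present on Java 11.
public final class Base64Compat {
    // was: DatatypeConverter.printBase64Binary(bytes)
    static String encode(byte[] bytes) {
        return Base64.getEncoder().encodeToString(bytes);
    }

    // was: DatatypeConverter.parseBase64Binary(text)
    static byte[] decode(String text) {
        return Base64.getDecoder().decode(text);
    }

    public static void main(String[] args) {
        String encoded = encode("scram-client-proof".getBytes(StandardCharsets.UTF_8));
        System.out.println(encoded + " -> " + new String(decode(encoded), StandardCharsets.UTF_8));
    }
}
```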






[jira] [Updated] (QPID-8285) Deadlock during receiveMessage when broker connection fails

2019-04-08 Thread Alex Rudyy (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPID-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy updated QPID-8285:
-
Fix Version/s: qpid-java-client-0-x-6.3.4

> Deadlock during receiveMessage when broker connection fails
> --
>
> Key: QPID-8285
> URL: https://issues.apache.org/jira/browse/QPID-8285
> Project: Qpid
>  Issue Type: Bug
>  Components: JMS AMQP 0-x
> Environment: * Java 1.8.0_20, 1.8.0_192, & 1.8.0_172
>  * Linux 4.9.0-4-amd64 & 3.10.0-327.13
>  * qpidd 1.39.0
>Reporter: Jonathan Beales
>Priority: Major
> Fix For: qpid-java-client-0-x-6.3.4
>
> Attachments: qpid_jms_deaklock.patch
>
>
> When a JMS MessageConsumer calls receiveMessage with a timeout and no message 
> is received during the timeout, 
> BasicMessageConsumer_0_10.getMessageFromQueue() calls the syncDispatchQueue() 
> method. Since it is not the dispatcher thread, the consumer waits on a 
> method-local CountDownLatch which should be decremented when the 
> AMQSession.Dispatcher thread calls dispatch().
> In the AMQSession.Dispatcher thread, the core loop stops pulling from the 
> queue to dispatch messages when the connection to the broker is lost 
> (isClosing() becomes true).
> In this scenario, the receiveMessage call waits forever because the 
> Dispatcher will never call dispatch. This also leaves the Dispatcher thread 
> in an infinite loop (using 100% CPU) waiting to be fully closed.
> This can be fixed by allowing the AMQSession.Dispatcher to always dispatch 
> remaining queue content so that a consumer is never left waiting forever (see 
> attached).
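
The attached patch is described as letting the dispatcher always drain what is already queued. A stripped-down Java illustration of that loop shape (generic and simplified, not the real AMQSession.Dispatcher): exiting purely on isClosing() strands a waiter, while draining before exit releases it.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Simplified illustration of the fix: the dispatcher keeps draining queued work
// (which releases any waiting consumer) even after closing has started.
final class Dispatcher implements Runnable {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private final AtomicBoolean closing = new AtomicBoolean();

    void enqueue(Runnable work) {
        queue.add(work);
    }

    void close() {
        closing.set(true);
    }

    @Override
    public void run() {
        // The reported bug: stopping as soon as isClosing() is true leaves a
        // consumer blocked on its CountDownLatch. Draining the queue first
        // guarantees every queued dispatch still runs.
        while (!closing.get() || !queue.isEmpty()) {
            try {
                Runnable work = queue.poll(100, TimeUnit.MILLISECONDS);
                if (work != null) {
                    work.run();   // e.g. delivers a message / counts down the waiter's latch
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
    }
}
```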






[jira] [Commented] (QPID-8294) [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 messages

2019-04-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-8294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16812238#comment-16812238
 ] 

ASF subversion and git services commented on QPID-8294:
---

Commit c3ae728743a0ec9ac6d4960c9c0a7d49e4f122a3 in qpid-broker-j's branch 
refs/heads/7.1.x from overmeulen
[ https://gitbox.apache.org/repos/asf?p=qpid-broker-j.git;h=c3ae728 ]

QPID-8294: [Broker-J][Oracle Message Store] Batch delete fails for more than 
1000 messages

This closes #23

(cherry picked from commit ba933e93bc86f73ec2cf97d4182069bbcc69d0d3)


> [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 
> messages
> ---
>
> Key: QPID-8294
> URL: https://issues.apache.org/jira/browse/QPID-8294
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.1.0
>Reporter: Olivier VERMEULEN
>Assignee: Alex Rudyy
>Priority: Critical
> Fix For: qpid-java-broker-8.0.0, qpid-java-broker-7.1.3
>
>
> When under high load, the Broker-J can end up having to delete more than 1000 
> messages in a single batch. But some databases (and Oracle in particular) put 
> a limit to the number of elements you can have in the IN clause. So we end up 
> with the following exception: ORA-01795: maximum number of expressions in a 
> list is 1000
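
The fix direction is to cap how many ids go into any single IN clause. A hedged JDBC sketch of that batching (table, column, and method names are illustrative, not copied from AbstractJDBCMessageStore); the review comments further down suggest making the cap configurable via a context variable such as qpid.jdbcstore.inClauseMaxSize:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;
import java.util.StringJoiner;

// Sketch: delete in chunks of at most 1000 ids so Oracle's ORA-01795 limit on
// IN-list expressions is never reached.
final class BatchedDelete {
    private static final int IN_CLAUSE_MAX_SIZE = 1000;

    static void deleteMessages(Connection connection, String tableName, List<Long> messageIds) throws SQLException {
        for (int from = 0; from < messageIds.size(); from += IN_CLAUSE_MAX_SIZE) {
            List<Long> chunk = messageIds.subList(from, Math.min(from + IN_CLAUSE_MAX_SIZE, messageIds.size()));
            StringJoiner placeholders = new StringJoiner(",", "(", ")");
            chunk.forEach(id -> placeholders.add("?"));
            String sql = "DELETE FROM " + tableName + " WHERE message_id IN " + placeholders;
            try (PreparedStatement stmt = connection.prepareStatement(sql)) {
                for (int i = 0; i < chunk.size(); i++) {
                    stmt.setLong(i + 1, chunk.get(i));
                }
                stmt.executeUpdate();
            }
        }
    }
}
```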






[jira] [Commented] (QPID-8294) [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 messages

2019-04-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-8294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16812239#comment-16812239
 ] 

ASF subversion and git services commented on QPID-8294:
---

Commit 391954f59e442872146351302007a823bd266574 in qpid-broker-j's branch 
refs/heads/7.1.x from Alex Rudyy
[ https://gitbox.apache.org/repos/asf?p=qpid-broker-j.git;h=391954f ]

QPID-8294: [Broker-J] Fix code formatting

(cherry picked from commit 920db7be6d8248b1662044000a7bf809605ca26c)


> [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 
> messages
> ---
>
> Key: QPID-8294
> URL: https://issues.apache.org/jira/browse/QPID-8294
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.1.0
>Reporter: Olivier VERMEULEN
>Assignee: Alex Rudyy
>Priority: Critical
> Fix For: qpid-java-broker-8.0.0, qpid-java-broker-7.1.3
>
>
> When under high load, the Broker-J can end up having to delete more than 1000 
> messages in a single batch. But some databases (and Oracle in particular) put 
> a limit to the number of elements you can have in the IN clause. So we end up 
> with the following exception: ORA-01795: maximum number of expressions in a 
> list is 1000






[jira] [Updated] (QPID-8294) [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 messages

2019-04-08 Thread Alex Rudyy (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPID-8294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy updated QPID-8294:
-
Fix Version/s: qpid-java-broker-8.0.0

> [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 
> messages
> ---
>
> Key: QPID-8294
> URL: https://issues.apache.org/jira/browse/QPID-8294
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.1.0
>Reporter: Olivier VERMEULEN
>Assignee: Alex Rudyy
>Priority: Critical
> Fix For: qpid-java-broker-8.0.0, qpid-java-broker-7.1.3
>
>
> When under high load, the Broker-J can end up having to delete more than 1000 
> messages in a single batch. But some databases (and Oracle in particular) put 
> a limit to the number of elements you can have in the IN clause. So we end up 
> with the following exception: ORA-01795: maximum number of expressions in a 
> list is 1000






[jira] [Resolved] (QPID-8294) [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 messages

2019-04-08 Thread Alex Rudyy (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPID-8294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy resolved QPID-8294.
--
   Resolution: Fixed
Fix Version/s: qpid-java-broker-7.1.3

> [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 
> messages
> ---
>
> Key: QPID-8294
> URL: https://issues.apache.org/jira/browse/QPID-8294
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.1.0
>Reporter: Olivier VERMEULEN
>Assignee: Alex Rudyy
>Priority: Critical
> Fix For: qpid-java-broker-7.1.3
>
>
> When under high load, the Broker-J can end up having to delete more than 1000 
> messages in a single batch. But some databases (and Oracle in particular) put 
> a limit to the number of elements you can have in the IN clause. So we end up 
> with the following exception: ORA-01795: maximum number of expressions in a 
> list is 1000






[jira] [Assigned] (QPID-8294) [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 messages

2019-04-08 Thread Alex Rudyy (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPID-8294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy reassigned QPID-8294:


Assignee: Alex Rudyy

> [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 
> messages
> ---
>
> Key: QPID-8294
> URL: https://issues.apache.org/jira/browse/QPID-8294
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.1.0
>Reporter: Olivier VERMEULEN
>Assignee: Alex Rudyy
>Priority: Critical
>
> When under high load, the Broker-J can end up having to delete more than 1000 
> messages in a single batch. But some databases (and Oracle in particular) put 
> a limit to the number of elements you can have in the IN clause. So we end up 
> with the following exception: ORA-01795: maximum number of expressions in a 
> list is 1000






[jira] [Updated] (QPID-8294) [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 messages

2019-04-08 Thread Alex Rudyy (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPID-8294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy updated QPID-8294:
-
Status: Reviewable  (was: In Progress)

> [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 
> messages
> ---
>
> Key: QPID-8294
> URL: https://issues.apache.org/jira/browse/QPID-8294
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.1.0
>Reporter: Olivier VERMEULEN
>Assignee: Alex Rudyy
>Priority: Critical
>
> When under high load, the Broker-J can end up having to delete more than 1000 
> messages in a single batch. But some databases (and Oracle in particular) put 
> a limit to the number of elements you can have in the IN clause. So we end up 
> with the following exception: ORA-01795: maximum number of expressions in a 
> list is 1000






[jira] [Commented] (QPID-8294) [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 messages

2019-04-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-8294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16812225#comment-16812225
 ] 

ASF GitHub Bot commented on QPID-8294:
--

asfgit commented on pull request #23: QPID-8294: [Broker-J][Oracle Message 
Store] Batch delete fails for more than 1000 messages
URL: https://github.com/apache/qpid-broker-j/pull/23
 
 
   
 



> [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 
> messages
> ---
>
> Key: QPID-8294
> URL: https://issues.apache.org/jira/browse/QPID-8294
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.1.0
>Reporter: Olivier VERMEULEN
>Priority: Critical
>
> When under high load, the Broker-J can end up having to delete more than 1000 
> messages in a single batch. But some databases (and Oracle in particular) put 
> a limit to the number of elements you can have in the IN clause. So we end up 
> with the following exception: ORA-01795: maximum number of expressions in a 
> list is 1000






[jira] [Commented] (QPID-8294) [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 messages

2019-04-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-8294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16812226#comment-16812226
 ] 

ASF subversion and git services commented on QPID-8294:
---

Commit 920db7be6d8248b1662044000a7bf809605ca26c in qpid-broker-j's branch 
refs/heads/master from Alex Rudyy
[ https://gitbox.apache.org/repos/asf?p=qpid-broker-j.git;h=920db7b ]

QPID-8294: [Broker-J] Fix code formatting


> [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 
> messages
> ---
>
> Key: QPID-8294
> URL: https://issues.apache.org/jira/browse/QPID-8294
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.1.0
>Reporter: Olivier VERMEULEN
>Priority: Critical
>
> When under high load, the Broker-J can end up having to delete more than 1000 
> messages in a single batch. But some databases (and Oracle in particular) put 
> a limit to the number of elements you can have in the IN clause. So we end up 
> with the following exception: ORA-01795: maximum number of expressions in a 
> list is 1000






[jira] [Commented] (QPID-8294) [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 messages

2019-04-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-8294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16812224#comment-16812224
 ] 

ASF subversion and git services commented on QPID-8294:
---

Commit ba933e93bc86f73ec2cf97d4182069bbcc69d0d3 in qpid-broker-j's branch 
refs/heads/master from overmeulen
[ https://gitbox.apache.org/repos/asf?p=qpid-broker-j.git;h=ba933e9 ]

QPID-8294: [Broker-J][Oracle Message Store] Batch delete fails for more than 
1000 messages

This closes #23


> [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 
> messages
> ---
>
> Key: QPID-8294
> URL: https://issues.apache.org/jira/browse/QPID-8294
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.1.0
>Reporter: Olivier VERMEULEN
>Priority: Critical
>
> When under high load, the Broker-J can end up having to delete more than 1000 
> messages in a single batch. But some databases (and Oracle in particular) put 
> a limit to the number of elements you can have in the IN clause. So we end up 
> with the following exception: ORA-01795: maximum number of expressions in a 
> list is 1000






[GitHub] [qpid-broker-j] asfgit closed pull request #23: QPID-8294: [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 messages

2019-04-08 Thread GitBox
asfgit closed pull request #23: QPID-8294: [Broker-J][Oracle Message Store] 
Batch delete fails for more than 1000 messages
URL: https://github.com/apache/qpid-broker-j/pull/23
 
 
   





[jira] [Commented] (QPID-8294) [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 messages

2019-04-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-8294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16812203#comment-16812203
 ] 

ASF GitHub Bot commented on QPID-8294:
--

overmeulen commented on pull request #23: QPID-8294: [Broker-J][Oracle Message 
Store] Batch delete fails for more than 1000 messages
URL: https://github.com/apache/qpid-broker-j/pull/23#discussion_r272924510
 
 

 ##
 File path: 
broker-plugins/jdbc-store/src/main/java/org/apache/qpid/server/store/jdbc/AbstractJDBCMessageStore.java
 ##
 @@ -82,6 +82,8 @@
 private static final String XID_TABLE_NAME_SUFFIX = "QPID_XIDS";
 private static final String XID_ACTIONS_TABLE_NAME_SUFFIX = 
"QPID_XID_ACTIONS";
 
+private static final int MAX_DELETE_BATCH_SIZE = 1000;
 
 Review comment:
   Done
 



> [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 
> messages
> ---
>
> Key: QPID-8294
> URL: https://issues.apache.org/jira/browse/QPID-8294
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.1.0
>Reporter: Olivier VERMEULEN
>Priority: Critical
>
> When under high load, the Broker-J can end up having to delete more than 1000 
> messages in a single batch. But some databases (and Oracle in particular) put 
> a limit to the number of elements you can have in the IN clause. So we end up 
> with the following exception: ORA-01795: maximum number of expressions in a 
> list is 1000






[GitHub] [qpid-broker-j] overmeulen commented on a change in pull request #23: QPID-8294: [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 messages

2019-04-08 Thread GitBox
overmeulen commented on a change in pull request #23: QPID-8294: 
[Broker-J][Oracle Message Store] Batch delete fails for more than 1000 messages
URL: https://github.com/apache/qpid-broker-j/pull/23#discussion_r272924510
 
 

 ##
 File path: 
broker-plugins/jdbc-store/src/main/java/org/apache/qpid/server/store/jdbc/AbstractJDBCMessageStore.java
 ##
 @@ -82,6 +82,8 @@
 private static final String XID_TABLE_NAME_SUFFIX = "QPID_XIDS";
 private static final String XID_ACTIONS_TABLE_NAME_SUFFIX = 
"QPID_XID_ACTIONS";
 
+private static final int MAX_DELETE_BATCH_SIZE = 1000;
 
 Review comment:
   Done





[jira] [Commented] (QPID-8294) [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 messages

2019-04-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-8294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16812197#comment-16812197
 ] 

ASF GitHub Bot commented on QPID-8294:
--

alex-rufous commented on pull request #23: QPID-8294: [Broker-J][Oracle Message 
Store] Batch delete fails for more than 1000 messages
URL: https://github.com/apache/qpid-broker-j/pull/23#discussion_r272921369
 
 

 ##
 File path: 
broker-plugins/jdbc-store/src/main/java/org/apache/qpid/server/store/jdbc/AbstractJDBCMessageStore.java
 ##
 @@ -82,6 +82,8 @@
 private static final String XID_TABLE_NAME_SUFFIX = "QPID_XIDS";
 private static final String XID_ACTIONS_TABLE_NAME_SUFFIX = 
"QPID_XID_ACTIONS";
 
+private static final int MAX_DELETE_BATCH_SIZE = 1000;
 
 Review comment:
   I agree that "qpid.jdbcstore.inClauseMaxSize" is much better name than 
"qpid.jdbcStoreMaxDeleteBatchSize". The variable name can be changed as well to 
inClauseMaxSize.
 



> [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 
> messages
> ---
>
> Key: QPID-8294
> URL: https://issues.apache.org/jira/browse/QPID-8294
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.1.0
>Reporter: Olivier VERMEULEN
>Priority: Critical
>
> When under high load, the Broker-J can end up having to delete more than 1000 
> messages in a single batch. But some databases (and Oracle in particular) put 
> a limit to the number of elements you can have in the IN clause. So we end up 
> with the following exception: ORA-01795: maximum number of expressions in a 
> list is 1000






[GitHub] [qpid-broker-j] alex-rufous commented on a change in pull request #23: QPID-8294: [Broker-J][Oracle Message Store] Batch delete fails for more than 1000 messages

2019-04-08 Thread GitBox
alex-rufous commented on a change in pull request #23: QPID-8294: 
[Broker-J][Oracle Message Store] Batch delete fails for more than 1000 messages
URL: https://github.com/apache/qpid-broker-j/pull/23#discussion_r272921369
 
 

 ##
 File path: 
broker-plugins/jdbc-store/src/main/java/org/apache/qpid/server/store/jdbc/AbstractJDBCMessageStore.java
 ##
 @@ -82,6 +82,8 @@
 private static final String XID_TABLE_NAME_SUFFIX = "QPID_XIDS";
 private static final String XID_ACTIONS_TABLE_NAME_SUFFIX = 
"QPID_XID_ACTIONS";
 
+private static final int MAX_DELETE_BATCH_SIZE = 1000;
 
 Review comment:
   I agree that "qpid.jdbcstore.inClauseMaxSize" is much better name than 
"qpid.jdbcStoreMaxDeleteBatchSize". The variable name can be changed as well to 
inClauseMaxSize.





[jira] [Moved] (QPIDJMS-452) Receiving a large message with a jms polling consumer with timeout of 1 sec can hang when the timeout is reached while message is in transfer.

2019-04-08 Thread Rob Godfrey (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rob Godfrey moved QPID-8295 to QPIDJMS-452:
---

Affects Version/s: (was: Future)
  Component/s: (was: Java Common)
 Workflow: classic default workflow  (was: QPid Workflow)
  Key: QPIDJMS-452  (was: QPID-8295)
  Project: Qpid JMS  (was: Qpid)

> Receiving a large message with a jms polling consumer with timeout of 1 sec 
> can hang when the timeout is reached while message is in transfer.
> --
>
> Key: QPIDJMS-452
> URL: https://issues.apache.org/jira/browse/QPIDJMS-452
> Project: Qpid JMS
>  Issue Type: Bug
> Environment: Windows
>Reporter: Bas
>Priority: Major
>  Labels: test
> Attachments: LargeJmsMessageConsumerTest.java
>
>
> Receiving a large message with a polling consumer with a timeout of 1 sec can 
> hang when the timeout is reached while the message is in transfer.
> The message is not delivered and further polling is discontinued.
>  
> A test case is included, but the timing may be difficult to reproduce when 
> running on a different machine. Try changing the sleep in the test case or 
> increasing the message size.


