[jira] [Updated] (ARTEMIS-1977) ASYNCIO can reduce sys-calls to retrieve I/O events

2018-07-11 Thread Francesco Nigro (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1977:
-
Summary: ASYNCIO can reduce sys-calls to retrieve I/O events  (was: ASYNCIO 
can reduce sys-calls to retrieve I/O completion events)

> ASYNCIO can reduce sys-calls to retrieve I/O events
> ---
>
> Key: ARTEMIS-1977
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1977
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.6.2
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
> Fix For: 2.7.0
>
>
> With libaio it is possible to retrieve the I/O completion events without 
> using io_getevents sys-calls, by reading the user-space ring buffer that the 
> kernel uses to store them.
> This is already beneficial for very fast disks, and necessary for further 
> improvements of the ASYNCIO journal to leverage (very) fast low-latency 
> disks by going completely lock-free.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (ARTEMIS-1977) ASYNCIO can reduce sys-calls to retrieve I/O completion events

2018-07-11 Thread Francesco Nigro (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16539754#comment-16539754
 ] 

Francesco Nigro edited comment on ARTEMIS-1977 at 7/11/18 9:04 AM:
---

Currently there are low-latency DBMSs that use this feature successfully: 
[https://groups.google.com/forum/#!msg/seastar-dev/DSXC5UcIsTg/LD13i1vmAAAJ;context-place=topic/seastar-dev/zdr01znzUes].

The next step is to expose this feature through JNI and allow an event-loop 
style of processing I/O events, possibly lock-free and with smart batching, 
to reduce the submit sys-calls without impacting latencies.
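The event-loop processing with smart batching mentioned above can be sketched as follows. This is a hypothetical illustration, not Artemis code: the class and method names are invented, and a lock-free queue stands in for the JNI-exposed user-space aio completion ring.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: completions are drained from a lock-free queue
// (standing in for the user-space aio ring) in bounded batches, so no
// io_getevents sys-call is needed per event.
public final class CompletionLoop {
    private final ConcurrentLinkedQueue<Integer> ring = new ConcurrentLinkedQueue<>();
    private final AtomicInteger processed = new AtomicInteger();

    // a producer (the kernel, in the real case) publishes a completion id
    public void offerCompletion(int id) { ring.offer(id); }

    /** Drains up to maxBatch completions without blocking; returns how many were handled. */
    public int pollBatch(int maxBatch) {
        int handled = 0;
        Integer id;
        while (handled < maxBatch && (id = ring.poll()) != null) {
            processed.incrementAndGet(); // handle the completion event here
            handled++;
        }
        return handled;
    }

    public int processedCount() { return processed.get(); }

    public static void main(String[] args) {
        CompletionLoop loop = new CompletionLoop();
        for (int i = 0; i < 10; i++) loop.offerCompletion(i);
        int first = loop.pollBatch(4);  // smart batching: bounded drain per loop iteration
        int second = loop.pollBatch(8);
        System.out.println(first + " " + second + " " + loop.processedCount());
    }
}
```

The bounded drain keeps each loop iteration short, which is what lets submissions be batched without hurting latency.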


was (Author: nigro@gmail.com):
Currently there are low-latency DBMS that support this feature with success: 
[PATCH seastar v2 2/6 linux-aio: try to perform io_getevents in 
userspace|[https://groups.google.com/forum/#!msg/seastar-dev/DSXC5UcIsTg/LD13i1vmAAAJ;context-place=topic/seastar-dev/zdr01znzUes]]

The next step is to expose through JNI this feature and allow an even loop 
style of processing of I/O events, possibly lock-free and with smart batching

to reduce the submit sys-calls without impacting on latencies.

> ASYNCIO can reduce sys-calls to retrieve I/O completion events
> --
>
> Key: ARTEMIS-1977
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1977
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.6.2
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
> Fix For: 2.7.0
>
>
> With libaio it is possible to retrieve the I/O completion events without 
> using io_getevents sys-calls, by reading the user-space ring buffer that the 
> kernel uses to store them.
> This is already beneficial for very fast disks, and necessary for further 
> improvements of the ASYNCIO journal to leverage (very) fast low-latency 
> disks by going completely lock-free.





[jira] [Comment Edited] (ARTEMIS-1977) ASYNCIO can reduce sys-calls to retrieve I/O completion events

2018-07-11 Thread Francesco Nigro (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16539754#comment-16539754
 ] 

Francesco Nigro edited comment on ARTEMIS-1977 at 7/11/18 9:03 AM:
---

Currently there are low-latency DBMSs that use this feature successfully: 
[PATCH seastar v2 2/6 linux-aio: try to perform io_getevents in 
userspace|https://groups.google.com/forum/#!msg/seastar-dev/DSXC5UcIsTg/LD13i1vmAAAJ;context-place=topic/seastar-dev/zdr01znzUes]

The next step is to expose this feature through JNI and allow an event-loop 
style of processing I/O events, possibly lock-free and with smart batching, 
to reduce the submit sys-calls without impacting latencies.


was (Author: nigro@gmail.com):
Currently there are low-latency DBMS that support this feature with success: 
[PATCH seastar v2 2/6 linux-aio: try to perform io_getevents in 
userspace|[https://groups.google.com/forum/#!msg/seastar-dev/DSXC5UcIsTg/LD13i1vmAAAJ;context-place=topic/seastar-dev/zdr01znzUes].]

The next step is to expose through JNI this feature and allow an even loop 
style of processing of I/O events, possibly lock-free and with smart batching

to reduce the submit sys-calls without impacting on latencies.

> ASYNCIO can reduce sys-calls to retrieve I/O completion events
> --
>
> Key: ARTEMIS-1977
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1977
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.6.2
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
> Fix For: 2.7.0
>
>
> With libaio it is possible to retrieve the I/O completion events without 
> using io_getevents sys-calls, by reading the user-space ring buffer that the 
> kernel uses to store them.
> This is already beneficial for very fast disks, and necessary for further 
> improvements of the ASYNCIO journal to leverage (very) fast low-latency 
> disks by going completely lock-free.





[jira] [Comment Edited] (ARTEMIS-1977) ASYNCIO can reduce sys-calls to retrieve I/O completion events

2018-07-11 Thread Francesco Nigro (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16539754#comment-16539754
 ] 

Francesco Nigro edited comment on ARTEMIS-1977 at 7/11/18 9:02 AM:
---

Currently there are low-latency DBMSs that use this feature successfully: 
[PATCH seastar v2 2/6 linux-aio: try to perform io_getevents in 
userspace|https://groups.google.com/forum/#!msg/seastar-dev/DSXC5UcIsTg/LD13i1vmAAAJ;context-place=topic/seastar-dev/zdr01znzUes]

The next step is to expose this feature through JNI and allow an event-loop 
style of processing I/O events, possibly lock-free and with smart batching, 
to reduce the submit sys-calls without impacting latencies.


was (Author: nigro@gmail.com):
Currently there are low-latency DBMS that support this feature with success: 
[[PATCH seastar v2 2/6] linux-aio: try to perform io_getevents in 
userspace|[https://groups.google.com/forum/#!msg/seastar-dev/DSXC5UcIsTg/LD13i1vmAAAJ;context-place=topic/seastar-dev/zdr01znzUes].]

The next step is to expose through JNI this feature and allow an even loop 
style of processing of I/O events, possibly lock-free and with smart batching

to reduce the submit sys-calls without impacting on latencies.

> ASYNCIO can reduce sys-calls to retrieve I/O completion events
> --
>
> Key: ARTEMIS-1977
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1977
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.6.2
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
> Fix For: 2.7.0
>
>
> With libaio it is possible to retrieve the I/O completion events without 
> using io_getevents sys-calls, by reading the user-space ring buffer that the 
> kernel uses to store them.
> This is already beneficial for very fast disks, and necessary for further 
> improvements of the ASYNCIO journal to leverage (very) fast low-latency 
> disks by going completely lock-free.





[jira] [Commented] (ARTEMIS-1977) ASYNCIO can reduce sys-calls to retrieve I/O completion events

2018-07-11 Thread Francesco Nigro (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16539754#comment-16539754
 ] 

Francesco Nigro commented on ARTEMIS-1977:
--

Currently there are low-latency DBMSs that use this feature successfully: 
[PATCH seastar v2 2/6 linux-aio: try to perform io_getevents in 
userspace|https://groups.google.com/forum/#!msg/seastar-dev/DSXC5UcIsTg/LD13i1vmAAAJ;context-place=topic/seastar-dev/zdr01znzUes]

The next step is to expose this feature through JNI and allow an event-loop 
style of processing I/O events, possibly lock-free and with smart batching, 
to reduce the submit sys-calls without impacting latencies.

> ASYNCIO can reduce sys-calls to retrieve I/O completion events
> --
>
> Key: ARTEMIS-1977
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1977
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.6.2
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
> Fix For: 2.7.0
>
>
> With libaio it is possible to retrieve the I/O completion events without 
> using io_getevents sys-calls, by reading the user-space ring buffer that the 
> kernel uses to store them.
> This is already beneficial for very fast disks, and necessary for further 
> improvements of the ASYNCIO journal to leverage (very) fast low-latency 
> disks by going completely lock-free.





[jira] [Created] (ARTEMIS-1977) ASYNCIO can reduce sys-calls to retrieve I/O completion events

2018-07-11 Thread Francesco Nigro (JIRA)
Francesco Nigro created ARTEMIS-1977:


 Summary: ASYNCIO can reduce sys-calls to retrieve I/O completion 
events
 Key: ARTEMIS-1977
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1977
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: Broker
Affects Versions: 2.6.2
Reporter: Francesco Nigro
Assignee: Francesco Nigro
 Fix For: 2.7.0


With libaio it is possible to retrieve the I/O completion events without 
using io_getevents sys-calls, by reading the user-space ring buffer that the 
kernel uses to store them.

This is already beneficial for very fast disks, and necessary for further 
improvements of the ASYNCIO journal to leverage (very) fast low-latency disks 
by going completely lock-free.





[jira] [Updated] (ARTEMIS-1945) InVMNodeManager shared state should be volatile

2018-06-20 Thread Francesco Nigro (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1945:
-
Affects Version/s: 2.6.2
Fix Version/s: 2.7.0
  Component/s: Broker

> InVMNodeManager shared state should be volatile
> ---
>
> Key: ARTEMIS-1945
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1945
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.6.2
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.7.0
>
>
> InVMNodeManager does not declare its shared state as volatile, risking a 
> deadlock while awaiting the live state (depending on how the JIT decides to 
> optimize the state-access code).
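The visibility hazard described above can be sketched as follows; this is a minimal hypothetical illustration with invented names, not the actual InVMNodeManager code.

```java
// Minimal sketch (names invented): a waiter spinning on a plain field may
// never observe the writer's update, because the JIT may hoist the read out
// of the loop. Declaring the shared state volatile guarantees visibility and
// forbids that optimization.
public final class NodeState {
    public enum State { NOT_STARTED, LIVE, FAILING_BACK }

    // Without volatile, awaitLiveState() below could spin forever.
    private volatile State state = State.NOT_STARTED;

    public State current() { return state; }

    public void setLive() { state = State.LIVE; }

    public void awaitLiveState() throws InterruptedException {
        while (state != State.LIVE) {
            Thread.sleep(2); // volatile read on each iteration: the update is visible
        }
    }

    public static void main(String[] args) throws Exception {
        NodeState node = new NodeState();
        Thread writer = new Thread(node::setLive);
        writer.start();
        node.awaitLiveState(); // terminates because state is volatile
        writer.join();
        System.out.println("live");
    }
}
```

With a plain (non-volatile) field, the same loop is allowed to compile into a read of a cached value, which is exactly the deadlock risk the issue describes.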





[jira] [Created] (ARTEMIS-1945) InVMNodeManager shared state should be volatile

2018-06-20 Thread Francesco Nigro (JIRA)
Francesco Nigro created ARTEMIS-1945:


 Summary: InVMNodeManager shared state should be volatile
 Key: ARTEMIS-1945
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1945
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Francesco Nigro
Assignee: Francesco Nigro


InVMNodeManager does not declare its shared state as volatile, risking a 
deadlock while awaiting the live state (depending on how the JIT decides to 
optimize the state-access code).





[jira] [Work started] (ARTEMIS-1876) InVMNodeManager shouldn't be used if no JDBC HA is configured

2018-05-22 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on ARTEMIS-1876 started by Francesco Nigro.

> InVMNodeManager shouldn't be used if no JDBC HA is configured
> -
>
> Key: ARTEMIS-1876
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1876
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.6.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.7.0
>
>
> When database persistence is used without the shared store option, Artemis 
> chooses to use InVMNodeManager, which does not provide the same behaviour as 
> FileLockNodeManager.
> It is causing several regressions with Oracle in the test suite.





[jira] [Updated] (ARTEMIS-1876) InVMNodeManager shouldn't be used if no JDBC HA is configured

2018-05-22 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1876:
-
Description: 
When database persistence is used without the shared store option, Artemis 
chooses to use InVMNodeManager, which does not provide the same behaviour as 
FileLockNodeManager.

It is causing several regressions with Oracle in the test suite.

  was:
When database persistence and no shared store option is being used, Artemis is 
choosing to use InVMNodeManager, that is not providing the same behaviour of 
FileLockNodeManager and is not production-ready.

It is causing several regressions with Oracle on the test suite.


> InVMNodeManager shouldn't be used if no JDBC HA is configured
> -
>
> Key: ARTEMIS-1876
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1876
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.6.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.7.0
>
>
> When database persistence is used without the shared store option, Artemis 
> chooses to use InVMNodeManager, which does not provide the same behaviour as 
> FileLockNodeManager.
> It is causing several regressions with Oracle in the test suite.





[jira] [Updated] (ARTEMIS-1876) InVMNodeManager shouldn't be used if no JDBC HA is configured

2018-05-22 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1876:
-
Summary: InVMNodeManager shouldn't be used if no JDBC HA is configured  
(was: ARTEMIS-1762 InVMNodeManager shouldn't be used if no JDBC HA is 
configured)

> InVMNodeManager shouldn't be used if no JDBC HA is configured
> -
>
> Key: ARTEMIS-1876
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1876
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.6.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.7.0
>
>
> When database persistence is used without the shared store option, Artemis 
> chooses to use InVMNodeManager, which does not provide the same behaviour as 
> FileLockNodeManager and is not production-ready.
> It is causing several regressions with Oracle in the test suite.





[jira] [Created] (ARTEMIS-1876) ARTEMIS-1762 InVMNodeManager shouldn't be used if no JDBC HA is configured

2018-05-22 Thread Francesco Nigro (JIRA)
Francesco Nigro created ARTEMIS-1876:


 Summary: ARTEMIS-1762 InVMNodeManager shouldn't be used if no JDBC 
HA is configured
 Key: ARTEMIS-1876
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1876
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.6.0
Reporter: Francesco Nigro
Assignee: Francesco Nigro
 Fix For: 2.7.0


When database persistence is used without the shared store option, Artemis 
chooses to use InVMNodeManager, which does not provide the same behaviour as 
FileLockNodeManager and is not production-ready.

It is causing several regressions with Oracle in the test suite.





[jira] [Commented] (ARTEMIS-1865) Shared Store Cluster Doesn't Work on CIFS

2018-05-17 Thread Francesco Nigro (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478691#comment-16478691
 ] 

Francesco Nigro commented on ARTEMIS-1865:
--

Given that it currently uses just one file with [5 different 
regions|https://github.com/apache/activemq-artemis/blob/master/artemis-server/src/main/java/org/apache/activemq/artemis/core/server/impl/FileLockNodeManager.java#L36]
 used for:
 * shared state ([LIVE, FAILINGBACK, STARTED, 
NOT_STARTED|https://github.com/apache/activemq-artemis/blob/master/artemis-server/src/main/java/org/apache/activemq/artemis/core/server/impl/FileLockNodeManager.java#L44])
 - 1 byte -
 * live lock - 1 byte -
 * backup lock - 1 byte -
 * shared nodeId (UUID) - 16 bytes -

I think it could be split into 3 files:
 * one with the shared state and nodeId (server.state)
 * one with the live lock (live.lock)
 * one with the backup lock (backup.lock)

That should prevent strange locking issues caused by non-obvious CIFS 
configuration.

The only problems I see are related to backward compatibility and 
cross-compatibility (i.e. a new version can't work with an old one, and so 
on).

I don't have any box to reproduce the CIFS behaviour at the moment, but it 
would be good to create 2 simple programs that try to acquire locks on 
different regions of the same remote file and validate that this is the real 
problem.

AFAIK NFS doesn't seem to have similar issues.
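The validation program suggested above could look roughly like this. Names, offsets, and the hold time are illustrative (only the one-byte regions mirror FileLockNodeManager's layout); the idea is to run two copies against the same remote file with different offsets.

```java
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

// Sketch of a region-lock probe: run two copies against the same remote
// file, each locking a different 1-byte region. On a correct filesystem both
// acquire their lock; on a broken CIFS mount the second tryLock returns null
// because the whole file got locked.
public final class RegionLockProbe {
    public static void main(String[] args) throws Exception {
        String path = args.length > 0 ? args[0] : "server.lock";
        long offset = args.length > 1 ? Long.parseLong(args[1]) : 1; // region to lock
        try (RandomAccessFile raf = new RandomAccessFile(path, "rw");
             FileChannel channel = raf.getChannel()) {
            FileLock lock = channel.tryLock(offset, 1, false); // lock a single byte
            if (lock == null) {
                System.out.println("region " + offset + ": NOT acquired (whole-file lock?)");
            } else {
                System.out.println("region " + offset + ": acquired");
                Thread.sleep(3_000); // hold the lock so the second copy can probe
                lock.release();
            }
        }
    }
}
```

Running `java RegionLockProbe /mnt/share/server.lock 1` and, while it holds the lock, `java RegionLockProbe /mnt/share/server.lock 2` should show both regions acquired if region locking works on the share.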

 

 

> Shared Store Cluster Doesn't Work on CIFS
> -
>
> Key: ARTEMIS-1865
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1865
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.5.0
> Environment: RHEL 6.2
>Reporter: Ilkka Virolainen
>Priority: Minor
> Attachments: broker_a.xml, broker_b.xml
>
>
> When Artemis is configured as a shared store master/slave pair with the 
> journal saved on a CIFS share, only the first instance is able to start. The 
> instance started later will fail to acquire a lock on the journal/server.lock 
> file and will start in an invalid state. A similar shared store master/slave 
> configuration works correctly with 5.14.5.





[jira] [Commented] (ARTEMIS-1865) Shared Store Cluster Doesn't Work on CIFS

2018-05-14 Thread Francesco Nigro (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474135#comment-16474135
 ] 

Francesco Nigro commented on ARTEMIS-1865:
--

How are the shared store pairs configured? There are specific CIFS 
configuration options (connectivity timeouts, etc.) that should help here, 
similarly to NFS, but I'm not an expert on either.

It looks to me like a CIFS problem with locking file regions: the master has 
acquired the lock on the live region of the server.lock file, and the CIFS 
implementation is locking the entire file instead of just that region, 
leaving the slave unable to acquire the lock on its backup region of the same 
file.

In any case, if you have a reproducer it would be simpler to search for a 
solution instead of guessing one :)

> Shared Store Cluster Doesn't Work on CIFS
> -
>
> Key: ARTEMIS-1865
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1865
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.5.0
> Environment: RHEL 6.2
>Reporter: Ilkka Virolainen
>Priority: Minor
>
> When Artemis is configured as a shared store master/slave pair with the 
> journal saved on a CIFS share, only the first instance is able to start. The 
> instance started later will fail to acquire a lock on the journal/server.lock 
> file and will start in an invalid state. A similar shared store master/slave 
> configuration works correctly with 5.14.5.





[jira] [Closed] (ARTEMIS-1852) PageCursorProvider is leaking cleanup tasks while stopping

2018-05-07 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro closed ARTEMIS-1852.


> PageCursorProvider is leaking cleanup tasks while stopping
> --
>
> Key: ARTEMIS-1852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>
> PageCursorProviderImpl does not handle pending cleanup tasks on stop, 
> leaving paging enabled because the remaining pages are never cleaned up.
> PagingStoreImpl is responsible for triggering the flush of pending tasks on 
> PageCursorProviderImpl before stopping it, and for trying to execute any 
> remaining tasks on the owned common executor before shutting it down.
> It fixes testTopicsWithNonDurableSubscription.





[jira] [Resolved] (ARTEMIS-1852) PageCursorProvider is leaking cleanup tasks while stopping

2018-05-07 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro resolved ARTEMIS-1852.
--
Resolution: Fixed

> PageCursorProvider is leaking cleanup tasks while stopping
> --
>
> Key: ARTEMIS-1852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>
> PageCursorProviderImpl does not handle pending cleanup tasks on stop, 
> leaving paging enabled because the remaining pages are never cleaned up.
> PagingStoreImpl is responsible for triggering the flush of pending tasks on 
> PageCursorProviderImpl before stopping it, and for trying to execute any 
> remaining tasks on the owned common executor before shutting it down.
> It fixes testTopicsWithNonDurableSubscription.





[jira] [Work started] (ARTEMIS-1852) PageCursorProvider is leaking cleanup tasks while stopping

2018-05-07 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on ARTEMIS-1852 started by Francesco Nigro.

> PageCursorProvider is leaking cleanup tasks while stopping
> --
>
> Key: ARTEMIS-1852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>
> PageCursorProviderImpl does not handle pending cleanup tasks on stop, 
> leaving paging enabled because the remaining pages are never cleaned up.
> PagingStoreImpl is responsible for triggering the flush of pending tasks on 
> PageCursorProviderImpl before stopping it, and for trying to execute any 
> remaining tasks on the owned common executor before shutting it down.
> It fixes testTopicsWithNonDurableSubscription.





[jira] [Updated] (ARTEMIS-1852) PageCursorProvider is leaking cleanup tasks while stopping

2018-05-07 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1852:
-
Description: 
PageCursorProviderImpl does not handle pending cleanup tasks on stop, 
leaving paging enabled because the remaining pages are never cleaned up.
PagingStoreImpl is responsible for triggering the flush of pending tasks on 
PageCursorProviderImpl before stopping it, and for trying to execute any 
remaining tasks on the owned common executor before shutting it down.
It fixes testTopicsWithNonDurableSubscription.

  was:
PageCursorProviderImpl::stop is not handling any scheduled cleanup tasks on 
stop, leaving paging enabled due to the remaining pages to be
cleared up.
It fixes testTopicsWithNonDurableSubscription.


> PageCursorProvider is leaking cleanup tasks while stopping
> --
>
> Key: ARTEMIS-1852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>
> PageCursorProviderImpl does not handle pending cleanup tasks on stop, 
> leaving paging enabled because the remaining pages are never cleaned up.
> PagingStoreImpl is responsible for triggering the flush of pending tasks on 
> PageCursorProviderImpl before stopping it, and for trying to execute any 
> remaining tasks on the owned common executor before shutting it down.
> It fixes testTopicsWithNonDurableSubscription.
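The stop sequence described above can be sketched as follows; the names are invented and this is not the actual PagingStoreImpl code, just an illustration of draining an owned executor's leftover tasks instead of dropping them.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: shut the owned executor down, and if tasks are still
// queued, run them on the caller's thread so no cleanup task leaks.
public final class CleanupStopper {

    /** Stops the executor, running leftover queued tasks; returns how many were drained. */
    public static int stopAndDrain(ExecutorService executor) throws InterruptedException {
        executor.shutdown(); // no new cleanup tasks accepted from here on
        if (executor.awaitTermination(100, TimeUnit.MILLISECONDS)) {
            return 0; // everything already ran
        }
        List<Runnable> pending = executor.shutdownNow(); // interrupt + collect leftovers
        pending.forEach(Runnable::run); // execute them instead of dropping them
        return pending.size();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        AtomicInteger cleaned = new AtomicInteger();
        // a long task keeps the executor busy so the next three stay queued
        executor.submit(() -> { try { Thread.sleep(2_000); } catch (InterruptedException ignored) { } });
        for (int i = 0; i < 3; i++) {
            executor.submit((Runnable) cleaned::incrementAndGet);
        }
        int drained = stopAndDrain(executor);
        System.out.println("drained=" + drained + " cleaned=" + cleaned.get());
    }
}
```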





[jira] [Created] (ARTEMIS-1852) PageCursorProvider is leaking cleanup tasks while stopping

2018-05-07 Thread Francesco Nigro (JIRA)
Francesco Nigro created ARTEMIS-1852:


 Summary: PageCursorProvider is leaking cleanup tasks while stopping
 Key: ARTEMIS-1852
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1852
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.5.0
Reporter: Francesco Nigro
Assignee: Francesco Nigro
 Fix For: 2.5.1


PageCursorProviderImpl::stop does not handle any cleanup tasks that are still 
scheduled when it is called.





[jira] [Updated] (ARTEMIS-1811) NIOSequentialFile should use RandomAccessFile with heap ByteBuffers

2018-05-02 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1811:
-
Description: 
JournalStorageManager::addBytesToLargeMessage and 
LargeServerMessageImpl::DecodingContext::decode are relying on the pooling of 
direct ByteBuffers performed internally by NIO.

Those buffers are pooled until certain size limit (ie 
jdk.nio.maxCachedBufferSize, as shown on 
[https://bugs.openjdk.java.net/browse/JDK-8147468]) otherwise are freed right 
after the write succeed.

If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
always pooled regardless of the size, leading to OOM issues on high load of 
variable sized writes due to the amount of direct memory allocated and not 
released/late released.

The proposed solutions are:
 # perform ad hoc direct ByteBuffer caching on the write path thanks to the 
read lock
 # replace the NIO SequentialFile usage and just use RandomAccessFile that 
provide the right API to append byte[] without creating additional native copies

  was:
JournalStorageManager::addBytesToLargeMessage and 
LargeServerMessageImpl::DecodingContext::decode are relying on the pooling of 
direct ByteBuffers performed internally by NIO.

Those buffers are pooled until certain size limit (ie 
jdk.nio.maxCachedBufferSize, as shown on 
[https://bugs.openjdk.java.net/browse/JDK-8147468]) otherwise are freed at the 
end of its usage.

If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
always pooled regardless of the size, leading to OOM issues on high load of 
variable sized writes due to the amount of direct memory allocated and not 
released/late released.

The proposed solutions are:
 # perform ad hoc direct ByteBuffer caching on the write path thanks to the 
read lock
 # replace the NIO SequentialFile usage and just use RandomAccessFile that 
provide the right API to append byte[] without creating additional native copies


> NIOSequentialFile should use RandomAccessFile with heap ByteBuffers
> ---
>
> Key: ARTEMIS-1811
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1811
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::decode rely on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> [https://bugs.openjdk.java.net/browse/JDK-8147468]); otherwise they are 
> freed right after the write succeeds.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers 
> are always pooled regardless of their size, leading to OOM issues under a 
> high load of variable-sized writes, due to the amount of direct memory 
> allocated and not released (or released late).
> The proposed solutions are:
>  # perform ad hoc direct ByteBuffer caching on the write path, thanks to 
> the read lock
>  # replace the NIO SequentialFile usage with RandomAccessFile, which 
> provides the right API to append byte[] without creating additional native 
> copies
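The second proposed solution could be sketched like this; the class and method names are hypothetical (this is not the actual NIOSequentialFile change), but it shows why RandomAccessFile sidesteps the direct-buffer cache.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch of option 2 (names invented): appending a heap byte[] via
// RandomAccessFile writes directly from the array, so NIO never allocates or
// caches a thread-local direct ByteBuffer to perform the copy.
public final class HeapAppender {
    private final RandomAccessFile file;

    public HeapAppender(File path) throws IOException {
        this.file = new RandomAccessFile(path, "rw");
    }

    /** Appends bytes at the end of the file without an intermediate direct buffer. */
    public synchronized void append(byte[] bytes) throws IOException {
        file.seek(file.length());
        file.write(bytes); // RandomAccessFile writes straight from the heap array
    }

    public long size() throws IOException { return file.length(); }

    public void close() throws IOException { file.close(); }
}
```

By contrast, FileChannel.write(ByteBuffer) with a heap buffer is what triggers NIO's internal copy into a cached direct buffer, which is the behaviour governed by jdk.nio.maxCachedBufferSize.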





[jira] [Updated] (ARTEMIS-1811) NIOSequentialFile should use RandomAccessFile with heap ByteBuffers

2018-04-30 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1811:
-
Description: 
JournalStorageManager::addBytesToLargeMessage and 
LargeServerMessageImpl::DecodingContext::decode are relying on the pooling of 
direct ByteBuffers performed internally by NIO.

Those buffers are pooled until certain size limit (ie 
jdk.nio.maxCachedBufferSize, as shown on 
[https://bugs.openjdk.java.net/browse/JDK-8147468]) otherwise are freed at the 
end of its usage.

If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
always pooled regardless of the size, leading to OOM issues on high load of 
variable sized writes due to the amount of direct memory allocated and not 
released.

The proposed solutions are:
 # perform ad hoc direct ByteBuffer caching on the write path thanks to the 
read lock
 # replace the NIO SequentialFile usage and just use RandomAccessFile that 
provide the right API to append byte[] without creating additional native copies

  was:
JournalStorageManager::addBytesToLargeMessage and 
LargeServerMessageImpl::DecodingContext::decode are relying on the pooling of 
direct ByteBuffers performed internally by NIO.

Those buffers are pooled until certain size limit (ie 
jdk.nio.maxCachedBufferSize, as shown on 
[https://bugs.openjdk.java.net/browse/JDK-8147468]) otherwise are cleaned at 
the end of its usage.

If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
always pooled regardless of the size, leading to OOM issues on high load of 
variable sized writes due to the amount of direct memory allocated and not 
released.

The proposed solutions are:
 # perform ad hoc direct ByteBuffer caching on the write path thanks to the 
read lock
 # replace the NIO SequentialFile usage and just use RandomAccessFile that 
provide the right API to append byte[] without creating additional native copies


> NIOSequentialFile should use RandomAccessFile with heap ByteBuffers
> ---
>
> Key: ARTEMIS-1811
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1811
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::decode rely on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> [https://bugs.openjdk.java.net/browse/JDK-8147468]); otherwise they are freed 
> at the end of their usage.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high 
> load of variable-sized writes due to the amount of direct memory allocated 
> and not released.
> The proposed solutions are:
>  # perform ad hoc direct ByteBuffer caching on the write path, thanks to the 
> read lock
>  # replace the NIO SequentialFile usage with RandomAccessFile, which provides 
> the right API to append a byte[] without creating additional native copies
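The RandomAccessFile approach described above can be sketched as follows. This is a hedged illustration, not the actual Artemis code; the `append` method and its signature are assumptions. The point is that RandomAccessFile.write(byte[]) hands the heap array straight to the OS write call, whereas FileChannel.write(ByteBuffer.wrap(chunk)) would first copy into a (possibly pooled) temporary direct buffer.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class HeapAppendSketch {
    // Appends a heap byte[] with RandomAccessFile: no temporary direct
    // ByteBuffer copy is created, unlike FileChannel.write(ByteBuffer.wrap(chunk)).
    static long append(File file, byte[] chunk) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file, "rw")) {
            raf.seek(raf.length()); // position at the current end of file
            raf.write(chunk);       // writes the byte[] without a native copy by NIO
            return raf.length();    // new file length after the append
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("large-message", ".tmp");
        f.deleteOnExit();
        append(f, new byte[]{1, 2, 3});
        System.out.println(append(f, new byte[]{4, 5})); // prints 5
    }
}
```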



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (ARTEMIS-1811) NIOSequentialFile should use RandomAccessFile with heap ByteBuffers

2018-04-30 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1811:
-
Description: 
JournalStorageManager::addBytesToLargeMessage and 
LargeServerMessageImpl::DecodingContext::decode rely on the pooling of 
direct ByteBuffers performed internally by NIO.

Those buffers are pooled up to a certain size limit (i.e. 
jdk.nio.maxCachedBufferSize, as shown in 
[https://bugs.openjdk.java.net/browse/JDK-8147468]); otherwise they are freed 
at the end of their usage.

If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
always pooled regardless of their size, leading to OOM issues under a high load 
of variable-sized writes due to the amount of direct memory allocated and not 
released, or released late.

The proposed solutions are:
 # perform ad hoc direct ByteBuffer caching on the write path, thanks to the 
read lock
 # replace the NIO SequentialFile usage with RandomAccessFile, which provides 
the right API to append a byte[] without creating additional native copies

  was:
JournalStorageManager::addBytesToLargeMessage and 
LargeServerMessageImpl::DecodingContext::decode rely on the pooling of 
direct ByteBuffers performed internally by NIO.

Those buffers are pooled up to a certain size limit (i.e. 
jdk.nio.maxCachedBufferSize, as shown in 
[https://bugs.openjdk.java.net/browse/JDK-8147468]); otherwise they are freed 
at the end of their usage.

If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
always pooled regardless of their size, leading to OOM issues under a high load 
of variable-sized writes due to the amount of direct memory allocated and not 
released.

The proposed solutions are:
 # perform ad hoc direct ByteBuffer caching on the write path, thanks to the 
read lock
 # replace the NIO SequentialFile usage with RandomAccessFile, which provides 
the right API to append a byte[] without creating additional native copies


> NIOSequentialFile should use RandomAccessFile with heap ByteBuffers
> ---
>
> Key: ARTEMIS-1811
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1811
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::decode rely on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> [https://bugs.openjdk.java.net/browse/JDK-8147468]); otherwise they are freed 
> at the end of their usage.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high 
> load of variable-sized writes due to the amount of direct memory allocated 
> and not released, or released late.
> The proposed solutions are:
>  # perform ad hoc direct ByteBuffer caching on the write path, thanks to the 
> read lock
>  # replace the NIO SequentialFile usage with RandomAccessFile, which provides 
> the right API to append a byte[] without creating additional native copies





[jira] [Closed] (ARTEMIS-1816) OpenWire should avoid ByteArrayOutputStream lazy allocation

2018-04-30 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro closed ARTEMIS-1816.

   Resolution: Fixed
Fix Version/s: 2.5.1

> OpenWire should avoid ByteArrayOutputStream lazy allocation
> ---
>
> Key: ARTEMIS-1816
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1816
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker, OpenWire
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
> Fix For: 2.5.1
>
>
> OpenWireMessageConverter::toAMQMessage on bytes messages lazily allocates a 
> write buffer with a default size of 1024 even when it won't be used to write 
> anything.
> To avoid a useless allocation, it would be better to reduce it to a 
> zero-length one.
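A minimal sketch of the proposed fix; the names are illustrative, not the actual OpenWireMessageConverter code. Starting the stream at zero capacity costs nothing when no bytes are ever written, and the backing array still grows on demand on the first real write.

```java
import java.io.ByteArrayOutputStream;

public class LazyWriteBufferSketch {
    // Hypothetical stand-in for the converter's write buffer: a zero-capacity
    // ByteArrayOutputStream wastes no memory when nothing is written, yet
    // grows automatically as soon as a write actually happens.
    static ByteArrayOutputStream createWriteBuffer() {
        return new ByteArrayOutputStream(0); // grows lazily on first write
    }

    public static void main(String[] args) {
        ByteArrayOutputStream out = createWriteBuffer();
        System.out.println(out.size()); // prints 0: no data written yet
        out.write(42);
        System.out.println(out.size()); // prints 1
    }
}
```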





[jira] [Closed] (ARTEMIS-1829) Remove deprecated plugin's messageExpired implementations

2018-04-30 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro closed ARTEMIS-1829.

Resolution: Fixed

> Remove deprecated plugin's messageExpired implementations
> -
>
> Key: ARTEMIS-1829
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1829
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
> Fix For: 2.5.1
>
>
> NotificationActiveMQServerPlugin and LoggingActiveMQServerPlugin implement 
> the deprecated version of ActiveMQServerPlugin::messageExpired, which is 
> called neither by the new version of the method nor by any other part of the 
> code.





[jira] [Closed] (ARTEMIS-1832) HAAutomaticBackupSharedStoreTest::basicDiscovery is dead

2018-04-30 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro closed ARTEMIS-1832.

Resolution: Fixed

> HAAutomaticBackupSharedStoreTest::basicDiscovery is dead
> 
>
> Key: ARTEMIS-1832
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1832
> Project: ActiveMQ Artemis
>  Issue Type: Test
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
> Fix For: 2.5.1
>
>
> HAAutomaticBackupSharedStoreTest::basicDiscovery creates queues on 
> non-existent nodes and does not set up any SessionFactory on the existing ones.





[jira] [Updated] (ARTEMIS-1811) NIOSequentialFile should use RandomAccessFile with heap ByteBuffers

2018-04-25 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1811:
-
Description: 
JournalStorageManager::addBytesToLargeMessage and 
LargeServerMessageImpl::DecodingContext::decode rely on the pooling of 
direct ByteBuffers performed internally by NIO.

Those buffers are pooled up to a certain size limit (i.e. 
jdk.nio.maxCachedBufferSize, as shown in 
[https://bugs.openjdk.java.net/browse/JDK-8147468]); otherwise they are cleaned 
at the end of their usage.

If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
always pooled regardless of their size, leading to OOM issues under a high load 
of variable-sized writes due to the amount of direct memory allocated and not 
released.

The proposed solutions are:
 # perform ad hoc direct ByteBuffer caching on the write path, thanks to the 
read lock
 # replace the NIO SequentialFile usage with RandomAccessFile, which provides 
the right API to append a byte[] without creating additional native copies

  was:
JournalStorageManager::addBytesToLargeMessage and 
LargeServerMessageImpl::DecodingContext::decode rely on the pooling of 
direct ByteBuffers performed internally by NIO.

Those buffers are pooled up to a certain size limit (i.e. 
jdk.nio.maxCachedBufferSize, as shown in 
[https://bugs.openjdk.java.net/browse/JDK-8147468]) and, when not pooled, are 
cleaned at the end of their usage.

That stresses the native memory allocator and can lead to poor performance and 
potential OOMs as well, depending on the written message chunk size.

The proposed solutions are:
 # perform ad hoc direct ByteBuffer caching on the write path, thanks to the 
read lock
 # replace the NIO SequentialFile usage with RandomAccessFile, which provides 
the right API to append a byte[] without creating additional native copies


> NIOSequentialFile should use RandomAccessFile with heap ByteBuffers
> ---
>
> Key: ARTEMIS-1811
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1811
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::decode rely on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> [https://bugs.openjdk.java.net/browse/JDK-8147468]); otherwise they are 
> cleaned at the end of their usage.
> If the property jdk.nio.maxCachedBufferSize isn't set, the direct buffers are 
> always pooled regardless of their size, leading to OOM issues under a high 
> load of variable-sized writes due to the amount of direct memory allocated 
> and not released.
> The proposed solutions are:
>  # perform ad hoc direct ByteBuffer caching on the write path, thanks to the 
> read lock
>  # replace the NIO SequentialFile usage with RandomAccessFile, which provides 
> the right API to append a byte[] without creating additional native copies





[jira] [Updated] (ARTEMIS-1832) HAAutomaticBackupSharedStoreTest::basicDiscovery misconfigured

2018-04-25 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1832:
-
Summary: HAAutomaticBackupSharedStoreTest::basicDiscovery misconfigured  
(was: HAAutomaticBackupSharedStoreTest::basicDiscovery is not properly 
configured)

> HAAutomaticBackupSharedStoreTest::basicDiscovery misconfigured
> --
>
> Key: ARTEMIS-1832
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1832
> Project: ActiveMQ Artemis
>  Issue Type: Test
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
> Fix For: 2.5.1
>
>
> HAAutomaticBackupSharedStoreTest::basicDiscovery creates queues on 
> non-existent nodes and does not set up any SessionFactory on the existing ones.





[jira] [Created] (ARTEMIS-1832) HAAutomaticBackupSharedStoreTest::basicDiscovery is not properly configured

2018-04-25 Thread Francesco Nigro (JIRA)
Francesco Nigro created ARTEMIS-1832:


 Summary: HAAutomaticBackupSharedStoreTest::basicDiscovery is not 
properly configured
 Key: ARTEMIS-1832
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1832
 Project: ActiveMQ Artemis
  Issue Type: Test
  Components: Broker
Affects Versions: 2.5.0
Reporter: Francesco Nigro
Assignee: Francesco Nigro
 Fix For: 2.5.1


HAAutomaticBackupSharedStoreTest::basicDiscovery creates queues on non-existent 
nodes and does not set up any SessionFactory on the existing ones.





[jira] [Work stopped] (ARTEMIS-1829) Remove deprecated plugin's messageExpired implementations

2018-04-25 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on ARTEMIS-1829 stopped by Francesco Nigro.

> Remove deprecated plugin's messageExpired implementations
> -
>
> Key: ARTEMIS-1829
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1829
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
> Fix For: 2.5.1
>
>
> NotificationActiveMQServerPlugin and LoggingActiveMQServerPlugin implement 
> the deprecated version of ActiveMQServerPlugin::messageExpired, which is 
> called neither by the new version of the method nor by any other part of the 
> code.





[jira] [Updated] (ARTEMIS-1829) Remove deprecated plugin's messageExpired implementations

2018-04-25 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1829:
-
Summary: Remove deprecated plugin's messageExpired implementations  (was: 
ActiveMQServerPlugin::messageExpired deprecated implementations should be 
updated to the new version)

> Remove deprecated plugin's messageExpired implementations
> -
>
> Key: ARTEMIS-1829
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1829
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
> Fix For: 2.5.1
>
>
> NotificationActiveMQServerPlugin and LoggingActiveMQServerPlugin implement 
> the deprecated version of ActiveMQServerPlugin::messageExpired, which is 
> called neither by the new version of the method nor by any other part of the 
> code.





[jira] [Created] (ARTEMIS-1829) ActiveMQServerPlugin::messageExpired deprecated implementations should be updated to the new version

2018-04-25 Thread Francesco Nigro (JIRA)
Francesco Nigro created ARTEMIS-1829:


 Summary: ActiveMQServerPlugin::messageExpired deprecated 
implementations should be updated to the new version
 Key: ARTEMIS-1829
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1829
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Francesco Nigro
Assignee: Francesco Nigro


NotificationActiveMQServerPlugin and LoggingActiveMQServerPlugin implement the 
deprecated version of ActiveMQServerPlugin::messageExpired, which is called 
neither by the new version of the method nor by any other part of the code.





[jira] [Updated] (ARTEMIS-1811) NIOSequentialFile should use RandomAccessFile with heap ByteBuffers

2018-04-23 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1811:
-
Description: 
JournalStorageManager::addBytesToLargeMessage and 
LargeServerMessageImpl::DecodingContext::decode rely on the pooling of 
direct ByteBuffers performed internally by NIO.

Those buffers are pooled up to a certain size limit (i.e. 
jdk.nio.maxCachedBufferSize, as shown in 
[https://bugs.openjdk.java.net/browse/JDK-8147468]) and, when not pooled, are 
cleaned at the end of their usage.

That stresses the native memory allocator and can lead to poor performance and 
potential OOMs as well, depending on the written message chunk size.

The proposed solutions are:
 # perform ad hoc direct ByteBuffer caching on the write path, thanks to the 
read lock
 # replace the NIO SequentialFile usage with RandomAccessFile, which provides 
the right API to append a byte[] without creating additional native copies

  was:
JournalStorageManager::addBytesToLargeMessage relies on the pooling of 
direct ByteBuffers performed internally by NIO.

Those buffers are pooled up to a certain size limit (i.e. 
jdk.nio.maxCachedBufferSize, as shown in 
[https://bugs.openjdk.java.net/browse/JDK-8147468]) and, when not pooled, are 
cleaned at the end of their usage.

That stresses the native memory allocator and can lead to poor performance and 
potential OOMs as well, depending on the written message chunk size.

The proposed solutions are:
 # perform ad hoc direct ByteBuffer caching on the write path, thanks to the 
read lock
 # replace the NIO SequentialFile usage with RandomAccessFile, which provides 
the right API to append a byte[] without creating additional native copies


> NIOSequentialFile should use RandomAccessFile with heap ByteBuffers
> ---
>
> Key: ARTEMIS-1811
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1811
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> JournalStorageManager::addBytesToLargeMessage and 
> LargeServerMessageImpl::DecodingContext::decode rely on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> [https://bugs.openjdk.java.net/browse/JDK-8147468]) and, when not pooled, are 
> cleaned at the end of their usage.
> That stresses the native memory allocator and can lead to poor performance 
> and potential OOMs as well, depending on the written message chunk size.
> The proposed solutions are:
>  # perform ad hoc direct ByteBuffer caching on the write path, thanks to the 
> read lock
>  # replace the NIO SequentialFile usage with RandomAccessFile, which provides 
> the right API to append a byte[] without creating additional native copies





[jira] [Updated] (ARTEMIS-1811) NIOSequentialFile should use RandomAccessFile with heap ByteBuffers

2018-04-23 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1811:
-
Summary: NIOSequentialFile should use RandomAccessFile with heap 
ByteBuffers  (was: JournalStorageManager::addBytesToLargeMessage should not use 
heap buffers)

> NIOSequentialFile should use RandomAccessFile with heap ByteBuffers
> ---
>
> Key: ARTEMIS-1811
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1811
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> JournalStorageManager::addBytesToLargeMessage relies on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> [https://bugs.openjdk.java.net/browse/JDK-8147468]) and, when not pooled, are 
> cleaned at the end of their usage.
> That stresses the native memory allocator and can lead to poor performance 
> and potential OOMs as well, depending on the written message chunk size.
> The proposed solutions are:
>  # perform ad hoc direct ByteBuffer caching on the write path, thanks to the 
> read lock
>  # replace the NIO SequentialFile usage with RandomAccessFile, which provides 
> the right API to append a byte[] without creating additional native copies





[jira] [Updated] (ARTEMIS-1816) OpenWire should avoid ByteArrayOutputStream lazy allocation

2018-04-19 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1816:
-
Description: 
OpenWireMessageConverter::toAMQMessage on bytes messages lazily allocates a 
write buffer with a default size of 1024 even when it won't be used to write 
anything.

To avoid a useless allocation, it would be better to reduce it to a zero-length 
one.

  was:
OpenWireMessageConverter::toAMQMessage on bytes messages lazily allocates a 
write buffer with a default size of 1024 even when it won't be used to write 
anything.

To avoid a useless allocation, it would be better to avoid it or at least 
reduce it to a zero-length one.


> OpenWire should avoid ByteArrayOutputStream lazy allocation
> ---
>
> Key: ARTEMIS-1816
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1816
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker, OpenWire
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
>
> OpenWireMessageConverter::toAMQMessage on bytes messages lazily allocates a 
> write buffer with a default size of 1024 even when it won't be used to write 
> anything.
> To avoid a useless allocation, it would be better to reduce it to a 
> zero-length one.





[jira] [Updated] (ARTEMIS-1816) OpenWire should avoid ByteArrayOutputStream lazy allocation

2018-04-19 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1816:
-
Priority: Minor  (was: Major)

> OpenWire should avoid ByteArrayOutputStream lazy allocation
> ---
>
> Key: ARTEMIS-1816
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1816
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker, OpenWire
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
>
> OpenWireMessageConverter::toAMQMessage on bytes messages lazily allocates a 
> write buffer with a default size of 1024 even when it won't be used to write 
> anything.
> To avoid a useless allocation, it would be better to avoid it or at least 
> reduce it to a zero-length one.





[jira] [Created] (ARTEMIS-1816) OpenWire should avoid ByteArrayOutputStream lazy allocation

2018-04-19 Thread Francesco Nigro (JIRA)
Francesco Nigro created ARTEMIS-1816:


 Summary: OpenWire should avoid ByteArrayOutputStream lazy 
allocation
 Key: ARTEMIS-1816
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1816
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: Broker, OpenWire
Affects Versions: 2.5.0
Reporter: Francesco Nigro
Assignee: Francesco Nigro


OpenWireMessageConverter::toAMQMessage on bytes messages lazily allocates a 
write buffer with a default size of 1024 even when it won't be used to write 
anything.

To avoid a useless allocation, it would be better to avoid it or at least 
reduce it to a zero-length one.





[jira] [Closed] (ARTEMIS-1784) JDBC NodeManager should just use DBMS clock

2018-04-18 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro closed ARTEMIS-1784.

Resolution: Fixed

> JDBC NodeManager should just use DBMS clock
> ---
>
> Key: ARTEMIS-1784
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1784
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>
> JdbcNodeManager relies on correct synchronization between the broker 
> wall-clock time and the DBMS clock: this affects the requirements that must 
> be met in order to have it working properly.
> Using just the DBMS clock would simplify the requirements/configuration while 
> improving the resiliency of JDBC HA.





[jira] [Updated] (ARTEMIS-1784) JDBC NodeManager should just use DBMS clock

2018-04-18 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1784:
-
Description: 
JdbcNodeManager relies on correct synchronization between the broker wall-clock 
time and the DBMS clock: this affects the requirements that must be met in 
order to have it working properly.

Using just the DBMS clock would simplify the requirements/configuration while 
improving the resiliency of JDBC HA.

  was:
It avoids using the system clock to perform the lock logic by using the DBMS 
time instead.
It also contains several improvements to the JDBC error handling and improved 
observability thanks to debug logs.


> JDBC NodeManager should just use DBMS clock
> ---
>
> Key: ARTEMIS-1784
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1784
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>
> JdbcNodeManager relies on correct synchronization between the broker 
> wall-clock time and the DBMS clock: this affects the requirements that must 
> be met in order to have it working properly.
> Using just the DBMS clock would simplify the requirements/configuration while 
> improving the resiliency of JDBC HA.
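The single-clock idea can be sketched with plain lease arithmetic. This is a hedged illustration, not the JdbcNodeManager implementation: both timestamps are assumed to originate from the DBMS (for example via a `SELECT CURRENT_TIMESTAMP` round-trip, whose exact dialect-specific form is also an assumption), so broker wall-clock drift cannot skew the comparison.

```java
public class DbmsClockLeaseSketch {
    // Both arguments are assumed to come from the DBMS clock, never from
    // System.currentTimeMillis(), so the expiration check compares values
    // taken from one clock only.
    static boolean isLeaseExpired(long dbmsNowMillis,
                                  long leaseAcquiredAtMillis,
                                  long leaseDurationMillis) {
        return dbmsNowMillis - leaseAcquiredAtMillis >= leaseDurationMillis;
    }

    public static void main(String[] args) {
        long acquiredAt = 1_000_000L; // illustrative DBMS timestamp
        long leaseMillis = 2_000L;
        System.out.println(isLeaseExpired(acquiredAt + 1_999, acquiredAt, leaseMillis)); // prints false
        System.out.println(isLeaseExpired(acquiredAt + 2_000, acquiredAt, leaseMillis)); // prints true
    }
}
```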





[jira] [Updated] (ARTEMIS-1784) JDBC NodeManager should just use DBMS clock

2018-04-18 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1784:
-
Description: 
It avoids using the system clock to perform the lock logic by using the DBMS 
time instead.
It also contains several improvements to the JDBC error handling and improved 
observability thanks to debug logs.

> JDBC NodeManager should just use DBMS clock
> ---
>
> Key: ARTEMIS-1784
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1784
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>
> It avoids using the system clock to perform the lock logic by using the 
> DBMS time instead.
> It also contains several improvements to the JDBC error handling and 
> improved observability thanks to debug logs.





[jira] [Closed] (ARTEMIS-1813) DB2 should avoid Blob to append data

2018-04-18 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro closed ARTEMIS-1813.

Resolution: Fixed

> DB2 should avoid Blob to append data
> 
>
> Key: ARTEMIS-1813
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1813
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>






[jira] [Assigned] (ARTEMIS-1813) DB2 should avoid Blob to append data

2018-04-17 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro reassigned ARTEMIS-1813:


Assignee: Francesco Nigro

> DB2 should avoid Blob to append data
> 
>
> Key: ARTEMIS-1813
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1813
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>






[jira] [Resolved] (ARTEMIS-1808) LargeServerMessageImpl leaks direct ByteBuffer

2018-04-17 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro resolved ARTEMIS-1808.
--
Resolution: Fixed

> LargeServerMessageImpl leaks direct ByteBuffer
> --
>
> Key: ARTEMIS-1808
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1808
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>






[jira] [Resolved] (ARTEMIS-1810) JDBCSequentialFileFactoryDriver should check <=0 read length

2018-04-17 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro resolved ARTEMIS-1810.
--
Resolution: Fixed

> JDBCSequentialFileFactoryDriver should check <=0 read length
> 
>
> Key: ARTEMIS-1810
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1810
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>
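The guard the issue title calls for can be sketched like this; the names are illustrative, not the real JDBCSequentialFileFactoryDriver API, and `fileContents` stands in for the JDBC-backed file data. A non-positive requested length short-circuits to zero bytes read instead of touching the database.

```java
public class ReadGuardSketch {
    // Illustrative read with the <= 0 length guard applied up front.
    static int read(byte[] fileContents, int position, byte[] dest, int length) {
        if (length <= 0) {
            return 0; // guard: nothing requested, skip any database access
        }
        int available = fileContents.length - position;
        int toCopy = Math.min(length, Math.min(available, dest.length));
        System.arraycopy(fileContents, position, dest, 0, toCopy);
        return toCopy; // number of bytes actually read
    }

    public static void main(String[] args) {
        byte[] data = {10, 20, 30, 40};
        byte[] dest = new byte[2];
        System.out.println(read(data, 0, dest, 0)); // prints 0 (guarded)
        System.out.println(read(data, 1, dest, 2)); // prints 2
    }
}
```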






[jira] [Resolved] (ARTEMIS-1788) JDBC HA should use JDBC Network Timeout

2018-04-17 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro resolved ARTEMIS-1788.
--
Resolution: Fixed

> JDBC HA should use JDBC Network Timeout
> ---
>
> Key: ARTEMIS-1788
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1788
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>
> The JDBC Node Manager doesn't use the JDBC Network Timeout that is used by 
> the rest of the Journal.





[jira] [Updated] (ARTEMIS-1813) DB2 should avoid Blob to append data

2018-04-17 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1813:
-
Summary: DB2 should avoid Blob to append data  (was: DB2 BLOB size should 
be set to match its SQL definition)

> DB2 should avoid Blob to append data
> 
>
> Key: ARTEMIS-1813
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1813
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Priority: Major
>






[jira] [Created] (ARTEMIS-1813) DB2 BLOB size should be set to match its SQL definition

2018-04-16 Thread Francesco Nigro (JIRA)
Francesco Nigro created ARTEMIS-1813:


 Summary: DB2 BLOB size should be set to match its SQL definition
 Key: ARTEMIS-1813
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1813
 Project: ActiveMQ Artemis
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Francesco Nigro








[jira] [Updated] (ARTEMIS-1811) JournalStorageManager::addBytesToLargeMessage should not use heap buffers

2018-04-16 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1811:
-
Description: 
JournalStorageManager::addBytesToLargeMessage relies on the pooling of direct 
ByteBuffers performed internally by NIO.

Those buffers are pooled only up to a certain size limit (i.e. 
jdk.nio.maxCachedBufferSize, as shown in 
[https://bugs.openjdk.java.net/browse/JDK-8147468]) and, when not pooled, are 
cleaned up at the end of each use.

That stresses the native memory allocator and can lead to poor performance and 
potential OOMs, depending on the written message chunk size.

The proposed solutions are:
 # perform ad hoc direct ByteBuffer caching on the write path, thanks to the 
read lock
 # replace the NIO SequentialFile usage with RandomAccessFile, which provides 
the right API to append byte[] without creating additional native copies
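Proposed solution 2 can be sketched as follows. This is a minimal, hypothetical 
illustration (the class and method names are invented, not Artemis code) of why 
RandomAccessFile fits: its write(byte[]) accepts the heap array directly, so an 
append does not detour through NIO's cached temporary direct buffers.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of solution 2: RandomAccessFile accepts a heap byte[]
// directly, so appending does not go through NIO's per-thread temporary
// direct-buffer cache (bounded by jdk.nio.maxCachedBufferSize).
public final class AppendSketch {

    // Append a chunk at the current end of the file.
    static void appendChunk(RandomAccessFile file, byte[] chunk) throws IOException {
        file.seek(file.length()); // position at the end of the file
        file.write(chunk);        // no intermediate direct ByteBuffer is allocated by NIO
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("large-message", ".msg");
        try (RandomAccessFile file = new RandomAccessFile(tmp.toFile(), "rw")) {
            appendChunk(file, new byte[128]);
            appendChunk(file, new byte[64]);
            System.out.println(file.length()); // 192
        } finally {
            Files.delete(tmp);
        }
    }
}
```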

  was:
JournalStorageManager::addBytesToLargeMessage relies on the pooling of direct 
ByteBuffers performed internally by NIO.

Those buffers are pooled only up to a certain size limit (i.e. 
jdk.nio.maxCachedBufferSize, as shown in 
[https://bugs.openjdk.java.net/browse/JDK-8147468]) and, when not pooled, are 
cleaned up at the end of each use.

That stresses the native memory allocator and can lead to poor performance and 
potential OOMs, depending on the written message chunk size.

The proposed solutions are:
 # perform ad hoc direct ByteBuffer caching on the write path, thanks to the 
read lock
 # replace the NIO SequentialFile usage with RandomAccessFile, which provides 
the right API to append byte[] without creating additional native copies


> JournalStorageManager::addBytesToLargeMessage should not use heap buffers
> -
>
> Key: ARTEMIS-1811
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1811
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> JournalStorageManager::addBytesToLargeMessage relies on the pooling of 
> direct ByteBuffers performed internally by NIO.
> Those buffers are pooled only up to a certain size limit (i.e. 
> jdk.nio.maxCachedBufferSize, as shown in 
> [https://bugs.openjdk.java.net/browse/JDK-8147468]) and, when not pooled, 
> are cleaned up at the end of each use.
> That stresses the native memory allocator and can lead to poor performance 
> and potential OOMs, depending on the written message chunk size.
> The proposed solutions are:
>  # perform ad hoc direct ByteBuffer caching on the write path, thanks to the 
> read lock
>  # replace the NIO SequentialFile usage with RandomAccessFile, which 
> provides the right API to append byte[] without creating additional native 
> copies



--


[jira] [Created] (ARTEMIS-1811) JournalStorageManager::addBytesToLargeMessage should not use heap buffers

2018-04-16 Thread Francesco Nigro (JIRA)
Francesco Nigro created ARTEMIS-1811:


 Summary: JournalStorageManager::addBytesToLargeMessage should not 
use heap buffers
 Key: ARTEMIS-1811
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1811
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: Broker
Affects Versions: 2.5.0
Reporter: Francesco Nigro
Assignee: Francesco Nigro


JournalStorageManager::addBytesToLargeMessage relies on the pooling of direct 
ByteBuffers performed internally by NIO.

Those buffers are pooled only up to a certain size limit (i.e. 
jdk.nio.maxCachedBufferSize, see 
[https://bugs.openjdk.java.net/browse/JDK-8147468]) and, when not pooled, are 
cleaned up at the end of each use.

That stresses the native memory allocator and can lead to poor performance and 
potential OOMs as well.

The proposed solutions are:
 # perform ad hoc direct ByteBuffer caching on the write path, thanks to the 
read lock
 # replace the NIO SequentialFile usage with RandomAccessFile, which provides 
the right API to append byte[] without creating additional native copies



--


[jira] [Updated] (ARTEMIS-1810) JDBCSequentialFileFactoryDriver should check <=0 read length

2018-04-16 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1810:
-
Summary: JDBCSequentialFileFactoryDriver should check <=0 read length  
(was: JDBCSequentialFileFactoryDriver should check <= read length)

> JDBCSequentialFileFactoryDriver should check <=0 read length
> 
>
> Key: ARTEMIS-1810
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1810
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>
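The guard the summary describes, rejecting a read length of zero or less 
before any bytes are read, might look like the following sketch. The class, 
method, and message are hypothetical, invented here for illustration only.

```java
// Hypothetical sketch: validate the requested read length up front so zero
// and negative lengths fail fast instead of producing a bogus read.
public final class ReadLengthCheck {

    static int checkedRead(byte[] destination, int length) {
        if (length <= 0) { // the "<= 0" check the issue title refers to
            throw new IllegalArgumentException("read length must be > 0, got: " + length);
        }
        // ... the actual JDBC read would happen here; this sketch just
        // reports how many bytes would fit.
        return Math.min(length, destination.length);
    }

    public static void main(String[] args) {
        System.out.println(checkedRead(new byte[16], 8)); // 8
        try {
            checkedRead(new byte[16], 0);
        } catch (IllegalArgumentException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```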




--


[jira] [Updated] (ARTEMIS-1810) JDBCSequentialFileFactoryDriver should check <= read length

2018-04-16 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1810:
-
Summary: JDBCSequentialFileFactoryDriver should check <= read length  (was: 
JDBCSequentialFileFactoryDriver should check zero and negative read length)

> JDBCSequentialFileFactoryDriver should check <= read length
> ---
>
> Key: ARTEMIS-1810
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1810
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>




--


[jira] [Updated] (ARTEMIS-1810) JDBCSequentialFileFactoryDriver should check zero and negative read length

2018-04-16 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1810:
-
Summary: JDBCSequentialFileFactoryDriver should check zero and negative 
read length  (was: JDBCSequentialFileFactoryDriver should check negative read 
length)

> JDBCSequentialFileFactoryDriver should check zero and negative read length
> --
>
> Key: ARTEMIS-1810
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1810
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>




--


[jira] [Created] (ARTEMIS-1810) JDBCSequentialFileFactoryDriver should check negative read length

2018-04-16 Thread Francesco Nigro (JIRA)
Francesco Nigro created ARTEMIS-1810:


 Summary: JDBCSequentialFileFactoryDriver should check negative 
read length
 Key: ARTEMIS-1810
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1810
 Project: ActiveMQ Artemis
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Francesco Nigro
Assignee: Francesco Nigro
 Fix For: 2.5.1






--


[jira] [Created] (ARTEMIS-1808) LargeServerMessageImpl leaks direct ByteBuffer

2018-04-15 Thread Francesco Nigro (JIRA)
Francesco Nigro created ARTEMIS-1808:


 Summary: LargeServerMessageImpl leaks direct ByteBuffer
 Key: ARTEMIS-1808
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1808
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Reporter: Francesco Nigro
Assignee: Francesco Nigro
 Fix For: 2.5.1






--


[jira] [Commented] (ARTEMIS-1807) File-based Large Message encoding should use read-only mmap

2018-04-15 Thread Francesco Nigro (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438726#comment-16438726
 ] 

Francesco Nigro commented on ARTEMIS-1807:
--

The implemented solution is the least costly in terms of the number of changes 
required: the best solution would be to use the mapped file slices directly in 
the Packets to be sent, or several FileRegions, and perform a zero-copy 
transfer into the Netty channel.
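A minimal sketch of the read-only memory mapping the comment refers to, built 
on a plain FileChannel; the class and file names are illustrative assumptions, 
not the Artemis implementation.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical sketch: map the large-message file read-only, then hand out
// views over regions of the mapping instead of copying bytes with read().
public final class MmapSketch {

    static MappedByteBuffer mapReadOnly(FileChannel channel) throws IOException {
        return channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("large-message", ".msg");
        Files.write(file, new byte[4096]);
        try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
            MappedByteBuffer mapped = mapReadOnly(channel);
            // Each "packet" is a zero-copy view over a region of the mapping.
            ByteBuffer firstPacket = mapped.duplicate();
            firstPacket.limit(1024);
            System.out.println(firstPacket.remaining()); // 1024
        } finally {
            Files.delete(file);
        }
    }
}
```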

> File-based Large Message encoding should use read-only mmap
> ---
>
> Key: ARTEMIS-1807
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1807
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> File-based LargeServerMessageImpl should use read-only memory mapping while 
> reading the file, in order to:
>  * reduce the number of copies 
>  * reduce the context switches



--


[jira] [Created] (ARTEMIS-1807) File-based Large Message encoding should use read-only mmap

2018-04-15 Thread Francesco Nigro (JIRA)
Francesco Nigro created ARTEMIS-1807:


 Summary: File-based Large Message encoding should use read-only 
mmap
 Key: ARTEMIS-1807
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1807
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: Broker
Affects Versions: 2.5.0
Reporter: Francesco Nigro
Assignee: Francesco Nigro


File-based LargeServerMessageImpl should use read-only memory mapping while 
reading the file, in order to:
 * reduce the number of copies 
 * reduce the context switches



--


[jira] [Resolved] (ARTEMIS-1806) JDBC Connection leaks

2018-04-15 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro resolved ARTEMIS-1806.
--
Resolution: Fixed

> JDBC Connection leaks
> -
>
> Key: ARTEMIS-1806
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1806
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>
> The JDBC Connection leaks on: 
>  * JDBCFileUtils::getDBFileDriver(DataSource, SQLProvider)
>  * SharedStoreBackupActivation.FailbackChecker::run on a failed 
> awaitLiveStatus



--


[jira] [Work started] (ARTEMIS-1806) JDBC Connection leaks

2018-04-15 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on ARTEMIS-1806 started by Francesco Nigro.

> JDBC Connection leaks
> -
>
> Key: ARTEMIS-1806
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1806
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>
> The JDBC Connection leaks on: 
>  * JDBCFileUtils::getDBFileDriver(DataSource, SQLProvider)
>  * SharedStoreBackupActivation.FailbackChecker::run on a failed 
> awaitLiveStatus



--


[jira] [Updated] (ARTEMIS-1806) JDBC Connection leaks

2018-04-15 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1806:
-
Affects Version/s: 2.5.0
Fix Version/s: 2.5.1
  Component/s: Broker

> JDBC Connection leaks
> -
>
> Key: ARTEMIS-1806
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1806
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>
> The JDBC Connection leaks on: 
>  * JDBCFileUtils::getDBFileDriver(DataSource, SQLProvider)
>  * SharedStoreBackupActivation.FailbackChecker::run on a failed 
> awaitLiveStatus



--


[jira] [Closed] (ARTEMIS-1774) Node Manager Store table name should be configurable

2018-04-15 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro closed ARTEMIS-1774.

Resolution: Fixed

> Node Manager Store table name should be configurable
> 
>
> Key: ARTEMIS-1774
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1774
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-1806) JDBC Connection leaks

2018-04-14 Thread Francesco Nigro (JIRA)
Francesco Nigro created ARTEMIS-1806:


 Summary: JDBC Connection leaks
 Key: ARTEMIS-1806
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1806
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Francesco Nigro
Assignee: Francesco Nigro


The JDBC Connection leaks on: 
 * JDBCFileUtils::getDBFileDriver(DataSource, SQLProvider)
 * SharedStoreBackupActivation.FailbackChecker::run on a failed awaitLiveStatus
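The usual fix for this class of leak is to scope the JDBC Connection in try-with-resources, so close() runs on every exit path. A minimal, self-contained sketch (all names here are hypothetical stand-ins, not the actual Artemis code) showing that the resource is closed even when the using code throws:

```java
public class LeakFreeResource {
    static final StringBuilder log = new StringBuilder();

    // Stand-in for a JDBC Connection: close() must run exactly once,
    // no matter how the using code exits.
    static class FakeConnection implements AutoCloseable {
        @Override
        public void close() {
            log.append("closed;");
        }
    }

    // try-with-resources closes the resource even when the body throws:
    // the fix pattern for leaks where an exception path skipped
    // Connection.close().
    static String useAndFail() {
        try (FakeConnection connection = new FakeConnection()) {
            throw new IllegalStateException("simulated failure");
        } catch (IllegalStateException expected) {
            return log.toString();
        }
    }

    public static void main(String[] args) {
        System.out.println(useAndFail()); // prints "closed;"
    }
}
```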



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (ARTEMIS-1772) Reduce memory footprint and allocations of QueueImpl

2018-04-06 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro closed ARTEMIS-1772.

Resolution: Fixed

> Reduce memory footprint and allocations of QueueImpl
> 
>
> Key: ARTEMIS-1772
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1772
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> Each message referenced by QueueImpl uses Atomic* and boxed types that 
> could be turned into primitive types, reducing the number of allocations and 
> increasing the precision of the memory footprint estimation.





[jira] [Reopened] (ARTEMIS-1774) Node Manager Store table name should be configurable

2018-04-06 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro reopened ARTEMIS-1774:
--

It is not finished yet.

> Node Manager Store table name should be configurable
> 
>
> Key: ARTEMIS-1774
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1774
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>






[jira] [Created] (ARTEMIS-1788) JDBC HA should use JDBC Network Timeout

2018-04-06 Thread Francesco Nigro (JIRA)
Francesco Nigro created ARTEMIS-1788:


 Summary: JDBC HA should use JDBC Network Timeout
 Key: ARTEMIS-1788
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1788
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: Broker
Affects Versions: 2.5.0
Reporter: Francesco Nigro
Assignee: Francesco Nigro
 Fix For: 2.5.1


The JDBC Node Manager doesn't use the JDBC Network Timeout used by the rest of 
the Journal.
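For reference, JDBC 4.1 exposes the timeout in question through Connection.setNetworkTimeout(Executor, int). A hedged sketch of applying one timeout consistently (the class name is hypothetical, and the main method uses a dynamic-proxy Connection as a stand-in since no real database is involved):

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.concurrent.Executor;

public class NetworkTimeoutConfig {
    /** Applies the journal's JDBC network timeout to a connection (JDBC 4.1 API). */
    static void applyNetworkTimeout(Connection connection, Executor executor,
                                    int timeoutMillis) throws SQLException {
        // Any call blocked on the database longer than timeoutMillis fails
        // with a SQLException instead of hanging the broker forever.
        connection.setNetworkTimeout(executor, timeoutMillis);
    }

    public static void main(String[] args) throws SQLException {
        // Stand-in Connection that records the timeout it receives.
        int[] recorded = new int[1];
        Connection fake = (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[]{Connection.class},
                (proxy, method, methodArgs) -> {
                    if (method.getName().equals("setNetworkTimeout")) {
                        recorded[0] = (Integer) methodArgs[1];
                    }
                    return null;
                });
        applyNetworkTimeout(fake, Runnable::run, 20_000);
        System.out.println(recorded[0]); // prints 20000
    }
}
```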





[jira] [Work started] (ARTEMIS-1784) JDBC NodeManager should just use DBMS clock

2018-04-04 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on ARTEMIS-1784 started by Francesco Nigro.

> JDBC NodeManager should just use DBMS clock
> ---
>
> Key: ARTEMIS-1784
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1784
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>






[jira] [Updated] (ARTEMIS-1784) JDBC NodeManager should just use DBMS clock

2018-04-04 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1784:
-
Affects Version/s: 2.5.0
Fix Version/s: 2.5.1
  Component/s: Broker

> JDBC NodeManager should just use DBMS clock
> ---
>
> Key: ARTEMIS-1784
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1784
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>






[jira] [Assigned] (ARTEMIS-1784) JDBC NodeManager should just use DBMS clock

2018-04-04 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro reassigned ARTEMIS-1784:


Assignee: Francesco Nigro

> JDBC NodeManager should just use DBMS clock
> ---
>
> Key: ARTEMIS-1784
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1784
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>






[jira] [Created] (ARTEMIS-1784) JDBC NodeManager should just use DBMS clock

2018-04-04 Thread Francesco Nigro (JIRA)
Francesco Nigro created ARTEMIS-1784:


 Summary: JDBC NodeManager should just use DBMS clock
 Key: ARTEMIS-1784
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1784
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: Francesco Nigro








[jira] [Assigned] (ARTEMIS-1653) Allow database tables to be created externally

2018-04-03 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro reassigned ARTEMIS-1653:


Assignee: Francesco Nigro

> Allow database tables to be created externally
> --
>
> Key: ARTEMIS-1653
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1653
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.4.0
>Reporter: Niels Lippke
>Assignee: Francesco Nigro
>Priority: Major
>
> In some environments (e.g. production) applications are not allowed to 
> create their own schema. It's common practice to pass DDL statements to 
> DBAs prior to rollout instead.
> Currently the broker does not support this scenario: if the required tables 
> already exist, the broker fails to start.
> A better approach is for the broker to detect empty tables and initialize 
> them in the very same way it does when the tables don't exist.
> See also discussion in 
> [forum|http://activemq.2283324.n4.nabble.com/ARTEMIS-Server-doesn-t-start-if-JDBC-store-is-used-and-table-NODE-MANAGER-STORE-is-empty-td4735779.html].
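The detection described above can be sketched with plain JDBC metadata calls. This is only an illustration under assumed names (TableInitCheck and needsInitialization are hypothetical, not Artemis code):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class TableInitCheck {
    /**
     * Treats a pre-created but empty table the same as a missing one, so a
     * DBA-provisioned schema does not stop the broker from initializing it.
     */
    static boolean needsInitialization(Connection connection, String tableName)
            throws SQLException {
        // Does the table exist at all?
        try (ResultSet tables = connection.getMetaData()
                .getTables(null, null, tableName, new String[]{"TABLE"})) {
            if (!tables.next()) {
                return true; // missing: create and seed it
            }
        }
        // It exists: is it empty? (tableName comes from trusted broker
        // configuration, hence the direct concatenation.)
        try (PreparedStatement count =
                     connection.prepareStatement("SELECT COUNT(*) FROM " + tableName);
             ResultSet rows = count.executeQuery()) {
            rows.next();
            return rows.getLong(1) == 0; // empty: seed it as if just created
        }
    }
}
```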





[jira] [Updated] (ARTEMIS-1772) Reduce memory footprint and allocations of QueueImpl

2018-04-03 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1772:
-
Description: Each message referenced by QueueImpl uses Atomic* and 
boxed types that could be turned into primitive types, reducing the number of 
allocations and increasing the precision of the memory footprint estimation.
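The change hinted at above can be illustrated with a field updater: one shared static updater replaces a per-reference AtomicLong allocation while keeping atomic semantics. Class and field names here are illustrative, not the actual QueueImpl members:

```java
import java.util.concurrent.atomic.AtomicLongFieldUpdater;

public class MessageReferenceCounter {
    // Primitive volatile field: no extra object per message reference,
    // and its 8 bytes are trivial to account for in footprint estimates.
    private volatile long deliveryCount;

    // One shared updater for the whole class instead of one AtomicLong
    // allocated per reference.
    private static final AtomicLongFieldUpdater<MessageReferenceCounter> UPDATER =
            AtomicLongFieldUpdater.newUpdater(MessageReferenceCounter.class, "deliveryCount");

    long incrementDeliveryCount() {
        return UPDATER.incrementAndGet(this);
    }

    public static void main(String[] args) {
        MessageReferenceCounter ref = new MessageReferenceCounter();
        ref.incrementDeliveryCount();
        long count = ref.incrementDeliveryCount();
        System.out.println(count); // prints 2
    }
}
```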

> Reduce memory footprint and allocations of QueueImpl
> 
>
> Key: ARTEMIS-1772
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1772
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>Reporter: Francesco Nigro
>Priority: Major
>
> Each message referenced by QueueImpl uses Atomic* and boxed types that 
> could be turned into primitive types, reducing the number of allocations and 
> increasing the precision of the memory footprint estimation.





[jira] [Assigned] (ARTEMIS-1772) Reduce memory footprint and allocations of QueueImpl

2018-04-03 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro reassigned ARTEMIS-1772:


Assignee: Francesco Nigro

> Reduce memory footprint and allocations of QueueImpl
> 
>
> Key: ARTEMIS-1772
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1772
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> Each message referenced by QueueImpl uses Atomic* and boxed types that 
> could be turned into primitive types, reducing the number of allocations and 
> increasing the precision of the memory footprint estimation.





[jira] [Closed] (ARTEMIS-1774) Node Manager Store table name should be configurable

2018-03-29 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro closed ARTEMIS-1774.

Resolution: Fixed

> Node Manager Store table name should be configurable
> 
>
> Key: ARTEMIS-1774
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1774
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>






[jira] [Closed] (ARTEMIS-1771) Porting of JDBC NodeManager into 1.5.5

2018-03-29 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro closed ARTEMIS-1771.

Resolution: Fixed

> Porting of JDBC NodeManager into 1.5.5
> --
>
> Key: ARTEMIS-1771
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1771
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 1.5.5
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> It includes most of the changes related to the JDBC support, including:
> - JDBC node manager to support Shared Store HA + test (+ the most recent 
> configuration improvements/features and fixes)
> - specific DBMS fixes (Oracle, DB2)
> - SQL query definition through property file





[jira] [Created] (ARTEMIS-1774) Node Manager Store table name should be configurable

2018-03-28 Thread Francesco Nigro (JIRA)
Francesco Nigro created ARTEMIS-1774:


 Summary: Node Manager Store table name should be configurable
 Key: ARTEMIS-1774
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1774
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: Broker
Affects Versions: 2.5.0
Reporter: Francesco Nigro
Assignee: Francesco Nigro
 Fix For: 2.5.1








[jira] [Updated] (ARTEMIS-1771) Porting of JDBC NodeManager into 1.5.5

2018-03-28 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1771:
-
Description: 
It includes most of the changes related to the JDBC support, including:
- JDBC node manager to support Shared Store HA + test (+ the most recent 
configuration improvements/features and fixes)
- specific DBMS fixes (Oracle, DB2)
- SQL query definition through property file

> Porting of JDBC NodeManager into 1.5.5
> --
>
> Key: ARTEMIS-1771
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1771
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 1.5.5
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> It includes most of the changes related to the JDBC support, including:
> - JDBC node manager to support Shared Store HA + test (+ the most recent 
> configuration improvements/features and fixes)
> - specific DBMS fixes (Oracle, DB2)
> - SQL query definition through property file





[jira] [Closed] (ARTEMIS-1757) Improve DB2 compatibility

2018-03-28 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro closed ARTEMIS-1757.

Resolution: Fixed

> Improve DB2 compatibility
> -
>
> Key: ARTEMIS-1757
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1757
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>






[jira] [Created] (ARTEMIS-1772) Reduce memory footprint and allocations of QueueImpl

2018-03-28 Thread Francesco Nigro (JIRA)
Francesco Nigro created ARTEMIS-1772:


 Summary: Reduce memory footprint and allocations of QueueImpl
 Key: ARTEMIS-1772
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1772
 Project: ActiveMQ Artemis
  Issue Type: New Feature
Reporter: Francesco Nigro








[jira] [Updated] (ARTEMIS-1771) Porting of JDBC NodeManager into 1.5.5

2018-03-27 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1771:
-
Affects Version/s: 1.5.5
  Component/s: Broker
  Summary: Porting of JDBC NodeManager into 1.5.5  (was: Porting of 
JDBC NodeManager into 1.x)

> Porting of JDBC NodeManager into 1.5.5
> --
>
> Key: ARTEMIS-1771
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1771
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 1.5.5
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>






[jira] [Created] (ARTEMIS-1771) Porting of JDBC NodeManager into 1.x

2018-03-27 Thread Francesco Nigro (JIRA)
Francesco Nigro created ARTEMIS-1771:


 Summary: Porting of JDBC NodeManager into 1.x
 Key: ARTEMIS-1771
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1771
 Project: ActiveMQ Artemis
  Issue Type: New Feature
Reporter: Francesco Nigro
Assignee: Francesco Nigro








[jira] [Closed] (ARTEMIS-1762) JdbcNodeManager shouldn't be used if no HA is configured

2018-03-27 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro closed ARTEMIS-1762.

Resolution: Fixed

> JdbcNodeManager shouldn't be used if no HA is configured
> 
>
> Key: ARTEMIS-1762
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1762
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>
> The JDBC-based journal uses a JdbcNodeManager even when no HA is configured: 
> it would be better not to create one, to avoid unneeded lock table creation.





[jira] [Closed] (ARTEMIS-1760) JDBC HA should have configurable tolerance of DB time misalignment

2018-03-27 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro closed ARTEMIS-1760.

Resolution: Fixed

> JDBC HA should have configurable tolerance of DB time misalignment
> --
>
> Key: ARTEMIS-1760
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1760
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>






[jira] [Closed] (ARTEMIS-1767) JDBC Lock Acquisition Timeout should behave like the file based version

2018-03-26 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro closed ARTEMIS-1767.

Resolution: Fixed

> JDBC Lock Acquisition Timeout should behave like the file based version
> ---
>
> Key: ARTEMIS-1767
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1767
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>
> The journal lock timeout (file-based) is infinite by default, not 
> customizable via the user's configuration, and exposed programmatically just 
> for testing purposes: the JDBC version needs to behave the same way, to avoid 
> being misconfigured in a way that breaks JDBC HA reliability.





[jira] [Created] (ARTEMIS-1767) JDBC Lock Acquisition Timeout should behave like the file based version

2018-03-22 Thread Francesco Nigro (JIRA)
Francesco Nigro created ARTEMIS-1767:


 Summary: JDBC Lock Acquisition Timeout should behave like the file 
based version
 Key: ARTEMIS-1767
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1767
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: Broker
Affects Versions: 2.5.0
Reporter: Francesco Nigro
Assignee: Francesco Nigro
 Fix For: 2.5.1


The journal lock timeout (file-based) is infinite by default, not customizable 
via the user's configuration, and exposed programmatically just for testing 
purposes: the JDBC version needs to behave the same way, to avoid being 
misconfigured in a way that breaks JDBC HA reliability.
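The intended semantics can be sketched as follows: a negative timeout (the file-based journal's default) means "wait forever", while any other value bounds the wait. This is a hypothetical illustration using a plain ReentrantLock, not the actual Artemis lock implementation:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LockAcquisition {
    /**
     * -1 (the default) waits forever, matching the file-based journal, so a
     * misconfigured JDBC lock cannot silently give up and break HA.
     */
    static boolean tryAcquire(ReentrantLock lock, long timeoutMillis)
            throws InterruptedException {
        if (timeoutMillis < 0) {
            lock.lockInterruptibly(); // infinite wait
            return true;
        }
        return lock.tryLock(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        System.out.println(tryAcquire(lock, -1)); // true: uncontended, infinite wait
        System.out.println(tryAcquire(lock, 10)); // true: reentrant re-acquire
    }
}
```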





[jira] [Updated] (ARTEMIS-1762) Jdbc NodeManagers shouldn't be used if HA is not configured

2018-03-22 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1762:
-
Affects Version/s: 2.5.0
Fix Version/s: 2.5.1
  Component/s: Broker

> Jdbc NodeManagers shouldn't be used if HA is not configured
> ---
>
> Key: ARTEMIS-1762
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1762
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>
> The JDBC-based journal uses a JdbcNodeManager even when no HA is configured: 
> it would be better not to create one, to avoid unneeded lock table creation.





[jira] [Updated] (ARTEMIS-1762) JdbcNodeManager shouldn't be used if no HA is configured

2018-03-22 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1762:
-
Summary: JdbcNodeManager shouldn't be used if no HA is configured  (was: 
JdbcNodeManager shouldn't be used if HA is not configured)

> JdbcNodeManager shouldn't be used if no HA is configured
> 
>
> Key: ARTEMIS-1762
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1762
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>
> The JDBC-based journal uses a JdbcNodeManager even when no HA is configured: 
> it would be better not to create one, to avoid unneeded lock table creation.





[jira] [Updated] (ARTEMIS-1762) JdbcNodeManager shouldn't be used if HA is not configured

2018-03-22 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1762:
-
Summary: JdbcNodeManager shouldn't be used if HA is not configured  (was: 
Jdbc NodeManagers shouldn't be used if HA is not configured)

> JdbcNodeManager shouldn't be used if HA is not configured
> -
>
> Key: ARTEMIS-1762
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1762
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>
> The JDBC-based journal uses a JdbcNodeManager even when no HA is configured: 
> it would be better not to create one, to avoid unneeded lock table creation.





[jira] [Updated] (ARTEMIS-1762) Jdbc NodeManagers shouldn't be used if HA is not configured

2018-03-22 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1762:
-
Description: JDBC based journal is using a JdbcNodeManager when no HA is 
configured: should be better to not have it to avoid unneeded lock table 
creations.  (was: JDBC based journal is using a JdbcNodeManager when no HA is 
configured: should be better to not have it to avoid unneeded lock table 
creations

.)

> Jdbc NodeManagers shouldn't be used if HA is not configured
> ---
>
> Key: ARTEMIS-1762
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1762
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>
> The JDBC-based journal uses a JdbcNodeManager even when no HA is configured: 
> it would be better not to create one, to avoid unneeded lock table creation.





[jira] [Updated] (ARTEMIS-1762) Jdbc NodeManagers shouldn't be used if HA is not configured

2018-03-22 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1762:
-
Description: 
JDBC based journal is using a JdbcNodeManager when no HA is configured: should 
be better to not have it to avoid unneeded lock table creations

.

  was:File and JDBC based journal are both creating NodeManager instances even 
when HA is not configured: should be better to not have any of them to avoid 
unneeded file lock/table creations.


> Jdbc NodeManagers shouldn't be used if HA is not configured
> ---
>
> Key: ARTEMIS-1762
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1762
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> The JDBC-based journal uses a JdbcNodeManager even when no HA is configured: 
> it would be better not to create one, to avoid unneeded lock table creation.





[jira] [Updated] (ARTEMIS-1762) Jdbc NodeManagers shouldn't be used if HA is not configured

2018-03-22 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1762:
-
Summary: Jdbc NodeManagers shouldn't be used if HA is not configured  (was: 
File/Jdbc NodeManagers shouldn't be used if HA is not configured)

> Jdbc NodeManagers shouldn't be used if HA is not configured
> ---
>
> Key: ARTEMIS-1762
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1762
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> The file- and JDBC-based journals both create NodeManager instances even when 
> HA is not configured: it would be better not to create any of them, to avoid 
> unneeded file lock/table creation.





[jira] [Updated] (ARTEMIS-1762) File/Jdbc NodeManagers shouldn't be used if HA is not configured

2018-03-22 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1762:
-
Summary: File/Jdbc NodeManagers shouldn't be used if HA is not configured  
(was: NodeManager shouldn't exists if HA is not configured)

> File/Jdbc NodeManagers shouldn't be used if HA is not configured
> 
>
> Key: ARTEMIS-1762
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1762
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> The file- and JDBC-based journals both create NodeManager instances even when 
> HA is not configured: it would be better not to create any of them, to avoid 
> unneeded file lock/table creation.





[jira] [Created] (ARTEMIS-1762) NodeManager shouldn't exist if HA is not configured

2018-03-22 Thread Francesco Nigro (JIRA)
Francesco Nigro created ARTEMIS-1762:


 Summary: NodeManager shouldn't exist if HA is not configured
 Key: ARTEMIS-1762
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1762
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Francesco Nigro
Assignee: Francesco Nigro


The file- and JDBC-based journals both create NodeManager instances even when 
HA is not configured: it would be better not to create any of them, to avoid 
unneeded file lock/table creation.





[jira] [Updated] (ARTEMIS-1760) JDBC HA should have configurable tolerance of DB time misalignment

2018-03-21 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1760:
-
Affects Version/s: 2.5.0
Fix Version/s: 2.5.1
  Component/s: Broker

> JDBC HA should have configurable tolerance of DB time misalignment
> --
>
> Key: ARTEMIS-1760
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1760
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.5.1
>
>






[jira] [Created] (ARTEMIS-1760) JDBC HA should have configurable tolerance of DB time misalignment

2018-03-21 Thread Francesco Nigro (JIRA)
Francesco Nigro created ARTEMIS-1760:


 Summary: JDBC HA should have configurable tolerance of DB time 
misalignment
 Key: ARTEMIS-1760
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1760
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: Francesco Nigro
Assignee: Francesco Nigro








[jira] [Created] (ARTEMIS-1757) Improve DB2 compatibility

2018-03-21 Thread Francesco Nigro (JIRA)
Francesco Nigro created ARTEMIS-1757:


 Summary: Improve DB2 compatibility
 Key: ARTEMIS-1757
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1757
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.5.0
Reporter: Francesco Nigro
Assignee: Francesco Nigro
 Fix For: 2.5.1







