[GitHub] qpid-dispatch pull request #255: DISPATCH-333: Create new router policies do...

2018-02-20 Thread ChugR
Github user ChugR commented on a diff in the pull request:

https://github.com/apache/qpid-dispatch/pull/255#discussion_r169471170
  
--- Diff: doc/new-book/configuration-security.adoc ---
@@ -412,3 +414,385 @@ listener {
 
 For more information about these attributes, see 
xref:adding_sasl_authentication_to_incoming_connection[].
 --
+
+== Authorizing Access to Messaging Resources
+
+You can restrict the number of user connections and control access to AMQP messaging resources by configuring _policies_.
+
+=== Types of Policies
+
+You can configure two different types of policies: _global policies_ and 
_vhost policies_.
+
+Global policies::
+Settings for the router. A global policy defines the maximum number of 
incoming user connections for the router (across all vhost policies), and 
defines how the router should use vhost policies.
+
+Vhost policies::
+Connection and AMQP resource limits for a messaging endpoint (called an 
AMQP virtual host, or _vhost_). A vhost policy defines what a client can access 
on a messaging endpoint over a particular connection.
++
+[NOTE]
+====
+A vhost is typically the name of the host to which the client connection
+is directed. For example, if a client application opens a connection to the
+`amqp://mybroker.example.com:5672/queue01` URL, the vhost would be
+`mybroker.example.com`.
+====
+
+The resource limits defined in global and vhost policies are applied to 
user connections only. The limits do not affect inter-router connections or 
router connections that are outbound to waypoints.
+
+=== How {RouterName} Applies Policies
+
+When a client connects to a router, the router determines whether to 
permit the connection based on the global and vhost policies, and the following 
properties of the connection:
+
+* The host to which the connection is directed (the vhost)
+* The connection's authenticated user name
+* The host from which the client is connecting (the remote host)
+
+If the connection is permitted, then the router applies a vhost policy 
that matches the vhost to which the connection is directed. The vhost policy 
limits are enforced for the lifetime of the connection.
+
+=== Configuring Global Policies
+
+You can set the incoming connection limit for the router and define how it 
should use vhost policies by configuring a global policy.
+
+.Procedure
+
+* In the router configuration file, add a `policy` section.
++
+--
+[options="nowrap",subs="+quotes"]
+----
+policy = {
+    maxConnections: _NUMBER_OF_CONNECTIONS_
+    enableVhostPolicy: true | false
+    policyDir: _PATH_
+    defaultVhost: _VHOST_NAME_
+}
+----
+
+`maxConnections`::
+The maximum number of concurrent client connections allowed for this 
router. This limit is always enforced, even if no other policy settings have 
been defined. The limit is applied to all incoming connections regardless of 
remote host, authenticated user, or targeted vhost. The default value is 
`65535`.
+
+`enableVhostPolicy`::
+Enables the router to enforce the connection denials and resource limits 
defined in the configured vhost policies. The default is `false`, which means 
that the router will not enforce any vhost policies.
++
+[NOTE]
+====
+Setting `enableVhostPolicy` to `false` improves the router's performance.
+====
+
+`policyDir`:: 
+The absolute path to a directory that holds vhost policy definition files 
in JSON format (`*.json`). The router processes all of the vhost policies in 
each JSON file that is in this directory. For more information, see 
xref:configuring-vhost-policies-json[].
+
+`defaultVhost`:: 
+The name of the default vhost policy, which is applied to any connection 
for which a vhost policy has not been configured. The default is `$default`. If 
`defaultVhost` is not defined, then default vhost processing is disabled.
+--
+
+=== Configuring Vhost Policies
+
+You configure vhost policies to define the connection limits and AMQP 
resource limits for a messaging endpoint.
+
+A vhost policy consists of the following:
+
+* Connection limits
++
+These limits control the number of users that can be connected to the 
vhost simultaneously.
+
+* User groups
++
+A user group defines the messaging resources that the group members are 
permitted to access. Each user group defines the following:
+
+** A set of users that can connect to the vhost (the group members)
+** The remote hosts from which the group members may connect to the router 
network
+** The AMQP resources that the group members are permitted to access on 
the vhost
+
+You can configure vhost policies directly in the router configuration 
file, or create them as JSON 
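
For orientation, here is a hedged example of the global `policy` section
described in the quoted excerpt, with purely illustrative values (the
connection limit, directory path, and vhost name below are not from the PR):

    policy = {
        maxConnections: 10000
        enableVhostPolicy: true
        policyDir: /etc/qpid-dispatch/policies
        defaultVhost: $default
    }

With these settings the router would cap concurrent client connections at
10000, enforce any vhost policies found in the JSON files under
/etc/qpid-dispatch/policies, and fall back to the `$default` vhost policy for
connections whose vhost has no policy of its own.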

[jira] [Comment Edited] (PROTON-1734) [cpp] container.stop() doesn't work when called from non-proactor thread.

2018-02-20 Thread Alan Conway (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370416#comment-16370416
 ] 

Alan Conway edited comment on PROTON-1734 at 2/20/18 6:30 PM:
--

I think this is a proactor behavior problem.

In stop(), C++ uses pn_proactor_interrupt() to close all open 
connections/listeners etc. and then waits for a PN_PROACTOR_INACTIVE event 
before closing down. This is the intended use of the proactor.

The problem is that the proactor currently only dispatches the 
PN_PROACTOR_INACTIVE event on the *transition* from active to inactive - i.e. 
when the last connection, listener or timeout finishes. Therefore if you call 
pn_proactor_disconnect() when the proactor is already inactive, you will never 
get a PN_PROACTOR_INACTIVE event.

I think a valid fix is to say that the proactor should *always* dispatch at 
least one PN_PROACTOR_INACTIVE event after every call to 
pn_proactor_disconnect(). If there are things to disconnect, then the INACTIVE 
is dispatched as it currently is when the last activity is finished. If not it 
is dispatched immediately. With that change to the proactor I think the C++ 
would work as intended.
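
A minimal sketch of the stop() pattern described above, written against the
public C proactor API (illustrative only, not the actual binding code; it
assumes a pn_proactor_t *p that another thread is running):

{code}
#include <proton/event.h>
#include <proton/proactor.h>

/* Disconnect everything, then drain events until PN_PROACTOR_INACTIVE.
 * With the current behavior, if p was already inactive the event is never
 * dispatched and this loop blocks forever - the problem described above. */
void stop_and_wait(pn_proactor_t *p) {
  pn_proactor_disconnect(p, NULL);      /* close all connections/listeners */
  int done = 0;
  while (!done) {
    pn_event_batch_t *batch = pn_proactor_wait(p);
    pn_event_t *e;
    while ((e = pn_event_batch_next(batch)) != NULL) {
      if (pn_event_type(e) == PN_PROACTOR_INACTIVE)
        done = 1;                       /* last activity has finished */
    }
    pn_proactor_done(p, batch);
  }
}
{code}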

 


was (Author: aconway):
I think this is a proactor behavior problem.

In stop(), C++ uses pn_proactor_interrupt() to close all open 
connections/listeners etc. and then waits for a PN_PROACTOR_INACTIVE event 
before closing down. This is the intended use of the proactor.

The problem is that the proactor currently only dispatches the 
PN_PROACTOR_INACTIVE event on the *transition* from active to inactive - i.e. 
when the last connection, listener or timeout finishes. Therefore if you call 
pn_proactor_disconnect() when the proactor is already inactive, you will never 
get a PN_PROACTOR_INACTIVE event.

I think a valid fix is to say that the proactor should *always* dispatch at 
least one PN_PROACTOR_INACTIVE event after every call to 
pn_proactor_disconnect(). If there are things to disconnect, then the INACTIVE 
is dispatched as it currently is when the last activity is finished. If not it 
is dispatched immediately. 

 

> [cpp] container.stop() doesn't work when called from non-proactor thread.
> -
>
> Key: PROTON-1734
> URL: https://issues.apache.org/jira/browse/PROTON-1734
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: cpp-binding
>Affects Versions: proton-c-0.19.0
>Reporter: Alan Conway
>Assignee: Andrew Stitcher
>Priority: Major
> Fix For: proton-c-0.21.0
>
>
> Using the below code
> {code}
> #include <proton/container.hpp>
> #include <chrono>
> #include <iostream>
> #include <thread>
> int main( int, char** )
> {
>   try
>   {
> proton::container c;
> c.auto_stop( false );
> auto containerThread = std::thread([&]() { std::cout << "CONTAINER IS 
> RUNNING" << std::endl; 
>   
> c.run(); std::cout << "CONTAINER IS DONE" << std::endl; });
> std::this_thread::sleep_for( std::chrono::seconds( 2 ));
> std::cout << "STOPPING CONTAINER" << std::endl;
> c.stop();
> std::cout << "WAITING FOR CONTAINER" << std::endl;
> containerThread.join();
> return 0;
>   }
>   catch( std::exception& e )
>   {
> std::cerr << e.what() << std::endl;
>   }
>   return 1;
> }
> {code}
> via
> {code}
> [rkieley@i7t450s build]$ g++ -g -Wall -Wextra -Wpointer-arith -Wconversion 
> -Wformat -Wformat-security -Wformat-y2k -Wsign-promo -Wcast-qual -g3 -ggdb3 
> -Wunused-variable -fno-eliminate-unused-debug-types -O3 -DNDEBUG -fPIC 
> -DPN_CPP_HAS_LAMBDAS=0  -std=gnu++11  ../attachments/test.cpp 
> -lqpid-proton-cpp -lqpid-proton-core -lqpid-proton-proactor -lrt -lpthread -o 
> test
> {code}
> With both PROACTOR epoll and libuv I see the following when run:
> {quote}
> [New Thread 0x73c95700 (LWP 20312)]
> CONTAINER IS RUNNING
> STOPPING CONTAINER
> WAITING FOR CONTAINER
> ^C
> Thread 1 "test" received signal SIGINT, Interrupt.
> {quote}
> When I use CTRL-C to stop waiting after running via gdb and waiting 2 minutes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (PROTON-1734) [cpp] container.stop() doesn't work when called from non-proactor thread.

2018-02-20 Thread Alan Conway (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370416#comment-16370416
 ] 

Alan Conway commented on PROTON-1734:
-

I think this is a proactor behavior problem.

In stop(), C++ uses pn_proactor_interrupt() to close all open 
connections/listeners etc. and then waits for a PN_PROACTOR_INACTIVE event 
before closing down. This is the intended use of the proactor.

The problem is that the proactor currently only dispatches the 
PN_PROACTOR_INACTIVE event on the *transition* from active to inactive - i.e. 
when the last connection, listener or timeout finishes. Therefore if you call 
pn_proactor_disconnect() when the proactor is already inactive, you will never 
get a PN_PROACTOR_LISTEN event.

I think a valid fix is to say that the proactor should *always* dispatch at 
least one PN_PROACTOR_INACTIVE event after every call to 
pn_proactor_disconnect(). If there are things to disconnect, then the INACTIVE 
is dispatched as it currently is when the last activity is finished. If not it 
is dispatched immediately. 

 

> [cpp] container.stop() doesn't work when called from non-proactor thread.
> -
>
> Key: PROTON-1734
> URL: https://issues.apache.org/jira/browse/PROTON-1734
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: cpp-binding
>Affects Versions: proton-c-0.19.0
>Reporter: Alan Conway
>Assignee: Andrew Stitcher
>Priority: Major
> Fix For: proton-c-0.21.0
>
>
> Using the below code
> {code}
> #include <proton/container.hpp>
> #include <chrono>
> #include <iostream>
> #include <thread>
> int main( int, char** )
> {
>   try
>   {
> proton::container c;
> c.auto_stop( false );
> auto containerThread = std::thread([&]() { std::cout << "CONTAINER IS 
> RUNNING" << std::endl; 
>   
> c.run(); std::cout << "CONTAINER IS DONE" << std::endl; });
> std::this_thread::sleep_for( std::chrono::seconds( 2 ));
> std::cout << "STOPPING CONTAINER" << std::endl;
> c.stop();
> std::cout << "WAITING FOR CONTAINER" << std::endl;
> containerThread.join();
> return 0;
>   }
>   catch( std::exception& e )
>   {
> std::cerr << e.what() << std::endl;
>   }
>   return 1;
> }
> {code}
> via
> {code}
> [rkieley@i7t450s build]$ g++ -g -Wall -Wextra -Wpointer-arith -Wconversion 
> -Wformat -Wformat-security -Wformat-y2k -Wsign-promo -Wcast-qual -g3 -ggdb3 
> -Wunused-variable -fno-eliminate-unused-debug-types -O3 -DNDEBUG -fPIC 
> -DPN_CPP_HAS_LAMBDAS=0  -std=gnu++11  ../attachments/test.cpp 
> -lqpid-proton-cpp -lqpid-proton-core -lqpid-proton-proactor -lrt -lpthread -o 
> test
> {code}
> With both PROACTOR epoll and libuv I see the following when run:
> {quote}
> [New Thread 0x73c95700 (LWP 20312)]
> CONTAINER IS RUNNING
> STOPPING CONTAINER
> WAITING FOR CONTAINER
> ^C
> Thread 1 "test" received signal SIGINT, Interrupt.
> {quote}
> When I use CTRL-C to stop waiting after running via gdb and waiting 2 minutes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Comment Edited] (PROTON-1734) [cpp] container.stop() doesn't work when called from non-proactor thread.

2018-02-20 Thread Alan Conway (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370416#comment-16370416
 ] 

Alan Conway edited comment on PROTON-1734 at 2/20/18 6:29 PM:
--

I think this is a proactor behavior problem.

In stop(), C++ uses pn_proactor_interrupt() to close all open 
connections/listeners etc. and then waits for a PN_PROACTOR_INACTIVE event 
before closing down. This is the intended use of the proactor.

The problem is that the proactor currently only dispatches the 
PN_PROACTOR_INACTIVE event on the *transition* from active to inactive - i.e. 
when the last connection, listener or timeout finishes. Therefore if you call 
pn_proactor_disconnect() when the proactor is already inactive, you will never 
get a PN_PROACTOR_INACTIVE event.

I think a valid fix is to say that the proactor should *always* dispatch at 
least one PN_PROACTOR_INACTIVE event after every call to 
pn_proactor_disconnect(). If there are things to disconnect, then the INACTIVE 
is dispatched as it currently is when the last activity is finished. If not it 
is dispatched immediately. 

 


was (Author: aconway):
I think this is a proactor behavior problem.

In stop(), C++ uses pn_proactor_interrupt() to close all open 
connections/listeners etc. and then waits for a PN_PROACTOR_INACTIVE event 
before closing down. This is the intended use of the proactor.

The problem is that the proactor currently only dispatches the 
PN_PROACTOR_INACTIVE event on the *transition* from active to inactive - i.e. 
when the last connection, listener or timeout finishes. Therefore if you call 
pn_proactor_disconnect() when the proactor is already inactive, you will never 
get a PN_PROACTOR_LISTEN event.

I think a valid fix is to say that the proactor should *always* dispatch at 
least one PN_PROACTOR_INACTIVE event after every call to 
pn_proactor_disconnect(). If there are things to disconnect, then the INACTIVE 
is dispatched as it currently is when the last activity is finished. If not it 
is dispatched immediately. 

 

> [cpp] container.stop() doesn't work when called from non-proactor thread.
> -
>
> Key: PROTON-1734
> URL: https://issues.apache.org/jira/browse/PROTON-1734
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: cpp-binding
>Affects Versions: proton-c-0.19.0
>Reporter: Alan Conway
>Assignee: Andrew Stitcher
>Priority: Major
> Fix For: proton-c-0.21.0
>
>
> Using the below code
> {code}
> #include <proton/container.hpp>
> #include <chrono>
> #include <iostream>
> #include <thread>
> int main( int, char** )
> {
>   try
>   {
> proton::container c;
> c.auto_stop( false );
> auto containerThread = std::thread([&]() { std::cout << "CONTAINER IS 
> RUNNING" << std::endl; 
>   
> c.run(); std::cout << "CONTAINER IS DONE" << std::endl; });
> std::this_thread::sleep_for( std::chrono::seconds( 2 ));
> std::cout << "STOPPING CONTAINER" << std::endl;
> c.stop();
> std::cout << "WAITING FOR CONTAINER" << std::endl;
> containerThread.join();
> return 0;
>   }
>   catch( std::exception& e )
>   {
> std::cerr << e.what() << std::endl;
>   }
>   return 1;
> }
> {code}
> via
> {code}
> [rkieley@i7t450s build]$ g++ -g -Wall -Wextra -Wpointer-arith -Wconversion 
> -Wformat -Wformat-security -Wformat-y2k -Wsign-promo -Wcast-qual -g3 -ggdb3 
> -Wunused-variable -fno-eliminate-unused-debug-types -O3 -DNDEBUG -fPIC 
> -DPN_CPP_HAS_LAMBDAS=0  -std=gnu++11  ../attachments/test.cpp 
> -lqpid-proton-cpp -lqpid-proton-core -lqpid-proton-proactor -lrt -lpthread -o 
> test
> {code}
> With both PROACTOR epoll and libuv I see the following when run:
> {quote}
> [New Thread 0x73c95700 (LWP 20312)]
> CONTAINER IS RUNNING
> STOPPING CONTAINER
> WAITING FOR CONTAINER
> ^C
> Thread 1 "test" received signal SIGINT, Interrupt.
> {quote}
> When I use CTRL-C to stop waiting after running via gdb and waiting 2 minutes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (PROTON-1766) [cpp] seg fault in reconnect test

2018-02-20 Thread Alan Conway (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-1766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370354#comment-16370354
 ] 

Alan Conway commented on PROTON-1766:
-

NOTE: the issue can be reproduced using examples/cpp/broker.cpp with the 
following change:
{code:java}
modified examples/cpp/broker.cpp
@@ -261,6 +261,7 @@ public:
DOUT(std::cerr << "Receiver: " << this << " bound to Queue: " << q << "(" << qn 
<< ")\n";);
queue_ = q;
receiver_.open(proton::receiver_options()
+ .credit_window(10) // FIXME aconway 2018-02-16: test
.source((proton::source_options().address(qn)))
.handler(*this));
std::cout << "receiving to " << qn << std::endl;
{code}
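
For reference, a hedged sketch of the same credit-window setting in isolation
(the names conn, handler, and the address "q1" are made up for illustration):

{code}
#include <proton/connection.hpp>
#include <proton/messaging_handler.hpp>
#include <proton/receiver.hpp>
#include <proton/receiver_options.hpp>

// Open a receiver whose credit is replenished in batches of 10,
// mirroring the credit_window(10) line added in the diff above.
proton::receiver open_receiver(proton::connection& conn,
                               proton::messaging_handler& handler) {
  return conn.open_receiver(
      "q1",
      proton::receiver_options()
          .credit_window(10)
          .handler(handler));
}
{code}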

> [cpp] seg fault in reconnect test
> -
>
> Key: PROTON-1766
> URL: https://issues.apache.org/jira/browse/PROTON-1766
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: cpp-binding, proton-c
>Affects Versions: proton-c-0.20.0
>Reporter: Alan Conway
>Assignee: Alan Conway
>Priority: Major
> Fix For: proton-c-0.21.0
>
>
> See [https://issues.jboss.org/browse/ENTMQCL-600] for details and reproducer 
> code, summary:
>  
> Using the to be attached reproducer and broker configuration:
> Running amqsender
> ./amqsender <master url> <slave url> <user> <password> <queue> <frequency in 
> microseconds>
> e.g.
> ./amqsender testbox111:5672 testbox111:5673 anon anon Q1 1
> You can reproduce the coredump with just one broker
> 1. keep slave down
> 2. start master broker
> 3. run amqsender with a very low frequency
> 4. kill master broker
> This should reproduce the coredump.
> The reproducer has events implemented for on_transport_close yet we see:
> {code}
> .
> .
> .
> [0x7fffec0169b0]:(PN_TRANSPORT, pn_transport<0x7fffec0169b0>)
> [0x7fffec0169b0]:(PN_TRANSPORT, pn_transport<0x7fffec0169b0>)
> [0x7fffec0169b0]:(PN_CONNECTION_WAKE, pn_connection<0x7fffec000b90>)
> AMQSender::on_connection_wake pn_connection<0x7fffec000b90>
> [0x7fffec0169b0]:(PN_TRANSPORT_TAIL_CLOSED, pn_transport<0x7fffec0169b0>)
> [0x7fffec0169b0]:(PN_TRANSPORT_ERROR, pn_transport<0x7fffec0169b0>)
> [0x7fffec0169b0]:(PN_TRANSPORT_HEAD_CLOSED, pn_transport<0x7fffec0169b0>)
> [0x7fffec0169b0]:(PN_TRANSPORT_CLOSED, pn_transport<0x7fffec0169b0>)
> [0x7fffec0169b0]:(PN_CONNECTION_INIT, pn_connection<0x7fffec000b90>)
> Thread 1 "amqsender" received signal SIGSEGV, Segmentation fault.
> 0x772bcdd0 in pthread_mutex_lock () from /lib64/libpthread.so.0
> Missing separate debuginfos, use: dnf debuginfo-install 
> cyrus-sasl-gssapi-2.1.26-26.2.fc24.x86_64 
> cyrus-sasl-lib-2.1.26-26.2.fc24.x86_64 cyrus-sasl-md5-2.1.26-26.2.fc24.x86_64 
> cyrus-sasl-plain-2.1.26-26.2.fc24.x86_64 
> cyrus-sasl-scram-2.1.26-26.2.fc24.x86_64 keyutils-libs-1.5.9-8.fc24.x86_64 
> krb5-libs-1.14.4-7.fc25.x86_64 libcom_err-1.43.3-1.fc25.x86_64 
> libcrypt-nss-2.24-4.fc25.x86_64 libdb-5.3.28-16.fc25.x86_64 
> libgcc-6.3.1-1.fc25.x86_64 libselinux-2.5-13.fc25.x86_64 
> libstdc++-6.3.1-1.fc25.x86_64 nss-softokn-freebl-3.28.3-1.1.fc25.x86_64 
> openssl-libs-1.0.2k-1.fc25.x86_64 pcre-8.40-5.fc25.x86_64 
> zlib-1.2.8-10.fc24.x86_64
> (gdb) bt
> #0  0x772bcdd0 in pthread_mutex_lock () from /lib64/libpthread.so.0
> #1  0x776dc4fa in lock (m=0x1a0) at 
> /home/rkieley/LocalProjects/src/rh/rh-qpid-proton/proton-c/src/proactor/epoll.c:112
> #2  0x776dcc09 in wake (ctx=0x7fffec2b8ac0) at 
> /home/rkieley/LocalProjects/src/rh/rh-qpid-proton/proton-c/src/proactor/epoll.c:436
> #3  0x776def0e in pn_connection_wake (c=0x7fffec000b90) at 
> /home/rkieley/LocalProjects/src/rh/rh-qpid-proton/proton-c/src/proactor/epoll.c:1302
> #4  0x77b81b82 in proton::container::impl::connection_work_queue::add 
> (this=0x7fffec001d30, f=...) at 
> /home/rkieley/LocalProjects/src/rh/rh-qpid-proton/proton-c/bindings/cpp/src/proactor_container_impl.cpp:118
> #5  0x77bacde5 in proton::work_queue::add (this=0x7fffec001cd8, 
> f=...) at 
> /home/rkieley/LocalProjects/src/rh/rh-qpid-proton/proton-c/bindings/cpp/src/work_queue.cpp:43
> #6  0x0040536f in AMQSender::send (this=this@entry=0x7fffd960, 
> strMsg="7578") at ../attachments/AMQSender.cpp:42
> #7  0x0040328f in main (argc=<optimized out>, argv=0x7fffdbd8) at 
> ../attachments/amqsend.cpp:20
> (gdb) frame 2
> #2  0x776dcc09 in wake (ctx=0x7fffec2b8ac0) at 
> /home/rkieley/LocalProjects/src/rh/rh-qpid-proton/proton-c/src/proactor/epoll.c:436
> 436   lock(&p->eventfd_mutex);
> (gdb) print p
> $3 = (pn_proactor_t *) 0x0
> (gdb)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Assigned] (DISPATCH-927) detach not echoed back on multi-hop link route

2018-02-20 Thread Ganesh Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/DISPATCH-927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesh Murthy reassigned DISPATCH-927:
--

Assignee: Ganesh Murthy

> detach not echoed back on multi-hop link route
> --
>
> Key: DISPATCH-927
> URL: https://issues.apache.org/jira/browse/DISPATCH-927
> Project: Qpid Dispatch
>  Issue Type: Bug
>Reporter: Gordon Sim
>Assignee: Ganesh Murthy
>Priority: Major
> Attachments: broker.xml, simple-topic-a.conf, simple-topic-b.conf, 
> simple_recv_modified.py
>
>
> In a two router network, router-a and router-b, a link route is defined in 
> both directions on both routers. There is also an associated connector to a 
> broker on router-b. The address is configured to be a topic on the broker.
> If two receivers attach on this address to router-a, and then detach at the 
> same time having received the defined number of messages, frequently (but not 
> always), one of the receivers will not get a detach echoed back to it.
> On inspection of protocol traces, it appears that router-b, though it gets 
> the detach echoed back from the broker, does not forward this back to the 
> client (via router-a).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] qpid-broker-j issue #5: [Broker-J] broker-core unit tests are failing becaus...

2018-02-20 Thread rgodfrey
Github user rgodfrey commented on the issue:

https://github.com/apache/qpid-broker-j/pull/5
  
So I guess the question here is: was the intention to support locale-specific 
formats for these outputs (in which case the tests are wrong and need 
fixing - though I wouldn't take the Locale from BrokerMessages, which is 
actually generated code), or was the intention to ignore locale and always use 
the same format - in which case the code emitting this output needs changing?  
This should be raised as a JIRA. 


---

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (PROTON-1734) [cpp] container.stop() doesn't work when called from non-proactor thread.

2018-02-20 Thread Andrew Stitcher (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370175#comment-16370175
 ] 

Andrew Stitcher commented on PROTON-1734:
-

Possibly this could be worked around at the C++ level by interrupting the 
proactor, but it appears there would still be a C level API issue then.
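
A hedged sketch of that workaround idea, using the C API (illustrative only):

{code}
#include <proton/proactor.h>

/* Wake one thread blocked in pn_proactor_wait(): its next event batch will
 * contain PN_PROACTOR_INTERRUPT, which the run loop can treat as a hint to
 * re-check whether it should shut down. */
void nudge_proactor(pn_proactor_t *p) {
  pn_proactor_interrupt(p);
}
{code}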

> [cpp] container.stop() doesn't work when called from non-proactor thread.
> -
>
> Key: PROTON-1734
> URL: https://issues.apache.org/jira/browse/PROTON-1734
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: cpp-binding
>Affects Versions: proton-c-0.19.0
>Reporter: Alan Conway
>Assignee: Andrew Stitcher
>Priority: Major
> Fix For: proton-c-0.21.0
>
>
> Using the below code
> {code}
> #include <proton/container.hpp>
> #include <chrono>
> #include <iostream>
> #include <thread>
> int main( int, char** )
> {
>   try
>   {
> proton::container c;
> c.auto_stop( false );
> auto containerThread = std::thread([&]() { std::cout << "CONTAINER IS 
> RUNNING" << std::endl; 
>   
> c.run(); std::cout << "CONTAINER IS DONE" << std::endl; });
> std::this_thread::sleep_for( std::chrono::seconds( 2 ));
> std::cout << "STOPPING CONTAINER" << std::endl;
> c.stop();
> std::cout << "WAITING FOR CONTAINER" << std::endl;
> containerThread.join();
> return 0;
>   }
>   catch( std::exception& e )
>   {
> std::cerr << e.what() << std::endl;
>   }
>   return 1;
> }
> {code}
> via
> {code}
> [rkieley@i7t450s build]$ g++ -g -Wall -Wextra -Wpointer-arith -Wconversion 
> -Wformat -Wformat-security -Wformat-y2k -Wsign-promo -Wcast-qual -g3 -ggdb3 
> -Wunused-variable -fno-eliminate-unused-debug-types -O3 -DNDEBUG -fPIC 
> -DPN_CPP_HAS_LAMBDAS=0  -std=gnu++11  ../attachments/test.cpp 
> -lqpid-proton-cpp -lqpid-proton-core -lqpid-proton-proactor -lrt -lpthread -o 
> test
> {code}
> With both PROACTOR epoll and libuv I see the following when run:
> {quote}
> [New Thread 0x73c95700 (LWP 20312)]
> CONTAINER IS RUNNING
> STOPPING CONTAINER
> WAITING FOR CONTAINER
> ^C
> Thread 1 "test" received signal SIGINT, Interrupt.
> {quote}
> When I use CTRL-C to stop waiting after running via gdb and waiting 2 minutes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (PROTON-1734) [cpp] container.stop() doesn't work when called from non-proactor thread.

2018-02-20 Thread Andrew Stitcher (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370173#comment-16370173
 ] 

Andrew Stitcher commented on PROTON-1734:
-

This issue appears to happen specifically when you stop the container before 
initiating any connections - I don't think (or at least I'm not sure) that it 
is actually related to the thread used to call container::stop().

It seems that in the underlying pn_proactor_disconnect() code if there is 
nothing to stop then the proactor doesn't interrupt existing running 
pn_proactor_wait() calls.

> [cpp] container.stop() doesn't work when called from non-proactor thread.
> -
>
> Key: PROTON-1734
> URL: https://issues.apache.org/jira/browse/PROTON-1734
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: cpp-binding
>Affects Versions: proton-c-0.19.0
>Reporter: Alan Conway
>Assignee: Andrew Stitcher
>Priority: Major
> Fix For: proton-c-0.21.0
>
>
> Using the below code
> {code}
> #include <proton/container.hpp>
> #include <chrono>
> #include <iostream>
> #include <thread>
> int main( int, char** )
> {
>   try
>   {
> proton::container c;
> c.auto_stop( false );
> auto containerThread = std::thread([&]() { std::cout << "CONTAINER IS 
> RUNNING" << std::endl; 
>   
> c.run(); std::cout << "CONTAINER IS DONE" << std::endl; });
> std::this_thread::sleep_for( std::chrono::seconds( 2 ));
> std::cout << "STOPPING CONTAINER" << std::endl;
> c.stop();
> std::cout << "WAITING FOR CONTAINER" << std::endl;
> containerThread.join();
> return 0;
>   }
>   catch( std::exception& e )
>   {
> std::cerr << e.what() << std::endl;
>   }
>   return 1;
> }
> {code}
> via
> {code}
> [rkieley@i7t450s build]$ g++ -g -Wall -Wextra -Wpointer-arith -Wconversion 
> -Wformat -Wformat-security -Wformat-y2k -Wsign-promo -Wcast-qual -g3 -ggdb3 
> -Wunused-variable -fno-eliminate-unused-debug-types -O3 -DNDEBUG -fPIC 
> -DPN_CPP_HAS_LAMBDAS=0  -std=gnu++11  ../attachments/test.cpp 
> -lqpid-proton-cpp -lqpid-proton-core -lqpid-proton-proactor -lrt -lpthread -o 
> test
> {code}
> With both PROACTOR epoll and libuv I see the following when run:
> {quote}
> [New Thread 0x73c95700 (LWP 20312)]
> CONTAINER IS RUNNING
> STOPPING CONTAINER
> WAITING FOR CONTAINER
> ^C
> Thread 1 "test" received signal SIGINT, Interrupt.
> {quote}
> When I use CTRL-C to stop waiting after running via gdb and waiting 2 minutes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] qpid-broker-j pull request #5: [Broker-J] broker-core unit tests are failing...

2018-02-20 Thread overmeulen
GitHub user overmeulen opened a pull request:

https://github.com/apache/qpid-broker-j/pull/5

[Broker-J] broker-core unit tests are failing because of wrong locale



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/overmeulen/qpid-broker-j 
bugfix/wrong-locale-in-unit-tests

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/qpid-broker-j/pull/5.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5


commit f5e61fcaab77e8bc32fbc23b7069ffde2b53f199
Author: overmeulen 
Date:   2018-02-19T16:47:52Z

[Broker-J] broker-core unit tests are failing because of wrong locale




---

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



Save the date: ApacheCon North America, September 24-27 in Montréal

2018-02-20 Thread Rich Bowen

Dear Apache Enthusiast,

(You’re receiving this message because you’re subscribed to a user@ or 
dev@ list of one or more Apache Software Foundation projects.)


We’re pleased to announce the upcoming ApacheCon [1] in Montréal, 
September 24-27. This event is all about you — the Apache project community.


We’ll have four tracks of technical content this time, as well as lots 
of opportunities to connect with your project community, hack on the 
code, and learn about other related (and unrelated!) projects across the 
foundation.


The Call For Papers (CFP) [2] and registration are now open. Register 
early to take advantage of the early bird prices and secure your place 
at the event hotel.


Important dates
March 30: CFP closes
April 20: CFP notifications sent
	August 24: Hotel room block closes (please do not wait until the last 
minute)


Follow @ApacheCon on Twitter to be the first to hear announcements about 
keynotes, the schedule, evening events, and everything you can expect to 
see at the event.


See you in Montréal!

Sincerely, Rich Bowen, V.P. Events,
on behalf of the entire ApacheCon team

[1] http://www.apachecon.com/acna18
[2] https://cfp.apachecon.com/conference.html?apachecon-north-america-2018

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (PROTON-1766) [cpp] seg fault in reconnect test

2018-02-20 Thread Alan Conway (JIRA)
Alan Conway created PROTON-1766:
---

 Summary: [cpp] seg fault in reconnect test
 Key: PROTON-1766
 URL: https://issues.apache.org/jira/browse/PROTON-1766
 Project: Qpid Proton
  Issue Type: Improvement
  Components: cpp-binding, proton-c
Affects Versions: proton-c-0.20.0
Reporter: Alan Conway
Assignee: Alan Conway
 Fix For: proton-c-0.21.0


See [https://issues.jboss.org/browse/ENTMQCL-600] for details and reproducer 
code, summary:

 
Using the to be attached reproducer and broker configuration:

Running amqsender

./amqsender <master url> <slave url> <user> <password> <queue> <frequency in microseconds>

e.g.

./amqsender testbox111:5672 testbox111:5673 anon anon Q1 1

You can reproduce the coredump with just one broker

1. keep slave down
2. start master broker
3. run amqsender with a very low frequency
4. kill master broker

This should reproduce the coredump.

The reproducer has events implemented for on_transport_close yet we see:
{code}
.
.
.
[0x7fffec0169b0]:(PN_TRANSPORT, pn_transport<0x7fffec0169b0>)
[0x7fffec0169b0]:(PN_TRANSPORT, pn_transport<0x7fffec0169b0>)
[0x7fffec0169b0]:(PN_CONNECTION_WAKE, pn_connection<0x7fffec000b90>)
AMQSender::on_connection_wake pn_connection<0x7fffec000b90>
[0x7fffec0169b0]:(PN_TRANSPORT_TAIL_CLOSED, pn_transport<0x7fffec0169b0>)
[0x7fffec0169b0]:(PN_TRANSPORT_ERROR, pn_transport<0x7fffec0169b0>)
[0x7fffec0169b0]:(PN_TRANSPORT_HEAD_CLOSED, pn_transport<0x7fffec0169b0>)
[0x7fffec0169b0]:(PN_TRANSPORT_CLOSED, pn_transport<0x7fffec0169b0>)
[0x7fffec0169b0]:(PN_CONNECTION_INIT, pn_connection<0x7fffec000b90>)

Thread 1 "amqsender" received signal SIGSEGV, Segmentation fault.
0x772bcdd0 in pthread_mutex_lock () from /lib64/libpthread.so.0
Missing separate debuginfos, use: dnf debuginfo-install 
cyrus-sasl-gssapi-2.1.26-26.2.fc24.x86_64 
cyrus-sasl-lib-2.1.26-26.2.fc24.x86_64 cyrus-sasl-md5-2.1.26-26.2.fc24.x86_64 
cyrus-sasl-plain-2.1.26-26.2.fc24.x86_64 
cyrus-sasl-scram-2.1.26-26.2.fc24.x86_64 keyutils-libs-1.5.9-8.fc24.x86_64 
krb5-libs-1.14.4-7.fc25.x86_64 libcom_err-1.43.3-1.fc25.x86_64 
libcrypt-nss-2.24-4.fc25.x86_64 libdb-5.3.28-16.fc25.x86_64 
libgcc-6.3.1-1.fc25.x86_64 libselinux-2.5-13.fc25.x86_64 
libstdc++-6.3.1-1.fc25.x86_64 nss-softokn-freebl-3.28.3-1.1.fc25.x86_64 
openssl-libs-1.0.2k-1.fc25.x86_64 pcre-8.40-5.fc25.x86_64 
zlib-1.2.8-10.fc24.x86_64
(gdb) bt
#0  0x772bcdd0 in pthread_mutex_lock () from /lib64/libpthread.so.0
#1  0x776dc4fa in lock (m=0x1a0) at 
/home/rkieley/LocalProjects/src/rh/rh-qpid-proton/proton-c/src/proactor/epoll.c:112
#2  0x776dcc09 in wake (ctx=0x7fffec2b8ac0) at 
/home/rkieley/LocalProjects/src/rh/rh-qpid-proton/proton-c/src/proactor/epoll.c:436
#3  0x776def0e in pn_connection_wake (c=0x7fffec000b90) at 
/home/rkieley/LocalProjects/src/rh/rh-qpid-proton/proton-c/src/proactor/epoll.c:1302
#4  0x77b81b82 in proton::container::impl::connection_work_queue::add 
(this=0x7fffec001d30, f=...) at 
/home/rkieley/LocalProjects/src/rh/rh-qpid-proton/proton-c/bindings/cpp/src/proactor_container_impl.cpp:118
#5  0x77bacde5 in proton::work_queue::add (this=0x7fffec001cd8, f=...) 
at 
/home/rkieley/LocalProjects/src/rh/rh-qpid-proton/proton-c/bindings/cpp/src/work_queue.cpp:43
#6  0x0040536f in AMQSender::send (this=this@entry=0x7fffd960, 
strMsg="7578") at ../attachments/AMQSender.cpp:42
#7  0x0040328f in main (argc=<optimized out>, argv=0x7fffdbd8) at 
../attachments/amqsend.cpp:20
(gdb) frame 2
#2  0x776dcc09 in wake (ctx=0x7fffec2b8ac0) at 
/home/rkieley/LocalProjects/src/rh/rh-qpid-proton/proton-c/src/proactor/epoll.c:436
436   lock(&p->eventfd_mutex);
(gdb) print p
$3 = (pn_proactor_t *) 0x0
(gdb)

{code}





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPID-8104) [Broker-J] [Query] [WMC] Ordering connections tables by port column fails with '422 - The orderBy expression at position '0' is unsupported'

2018-02-20 Thread Keith Wall (JIRA)
Keith Wall created QPID-8104:


 Summary: [Broker-J] [Query] [WMC] Ordering connections tables by 
port column fails with '422 - The orderBy expression at position '0' is 
unsupported'
 Key: QPID-8104
 URL: https://issues.apache.org/jira/browse/QPID-8104
 Project: Qpid
  Issue Type: Bug
  Components: Broker-J
Reporter: Keith Wall


Ordering the connections table on the virtualhost tab by port ends with error 
{{422 - The orderBy expression at position '0' is unsupported}}.

The select clause in question uses an alias {{port.name AS port}}.  In general 
the ability to express the order by in terms of column aliases is supported; 
however, the issue here seems to be that {{port}} is ambiguous and could refer 
to the {{Connection#port}} attribute or the alias.

The user of the query API can work around this by using a column number, or by 
specifying the column's expression rather than the alias.
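
For illustration, hypothetical query fragments showing the workaround (the
FROM clause is elided):

{noformat}
SELECT port.name AS port FROM ... ORDER BY port        <- ambiguous, fails with 422
SELECT port.name AS port FROM ... ORDER BY 1           <- column number, works
SELECT port.name AS port FROM ... ORDER BY port.name   <- full expression, works
{noformat}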

I have not looked into this further.





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPID-8100) [Broker-J] [AMQP 0-10] SESSION_BUSY sent on wrong channel leading to hung Messaging API based application

2018-02-20 Thread Alex Rudyy (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-8100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy resolved QPID-8100.
--
   Resolution: Fixed
Fix Version/s: qpid-java-broker-7.1.0
   qpid-java-broker-7.0.2

Resolving JIRA after reviewing and merging changes into 7.0.x branch

> [Broker-J] [AMQP 0-10] SESSION_BUSY sent on wrong channel leading to hung 
> Messaging API based application
> -
>
> Key: QPID-8100
> URL: https://issues.apache.org/jira/browse/QPID-8100
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: 0.32, qpid-java-broker-7.0.0, qpid-java-broker-7.0.1
> Environment: * Qpid Broker-J 0.32 derivative
> * Qpid Cpp Client using messaging API.
>Reporter: Keith Wall
>Priority: Major
> Fix For: qpid-java-broker-7.0.2, qpid-java-broker-7.1.0
>
>
> If, during session attachment, the Broker detects that the 0-10 session is 
> already in use by the same principal, the Broker is required to detach the 
> session by sending a {{session.detach}} on the same channel.  Currently owing 
> to a defect, the Broker sends this detach on channel 0, regardless of the 
> channel used by the peer.
> This defect was a contributory factor in a larger problem.  It prevented an 
> application from recovering automatically. In that case, a Qpid CPP 
> Messaging API client, recovering from a missing heartbeat, entered a hung state 
> whilst attaching the existing session.  The client library discarded the 
> {{session.detach}} on the unexpected channel, so it continued to await the 
> {{session.attached}}, which never came.  
> {noformat}
> /// original session attach
> 2018-02-15 13:17:50 [Network] trace SENT 
> [[10.211.55.3:60054-10.241.132.41:5672]]: Frame[BEbe; channel=1; 
> {SessionAttachBody: name=e2baafab-5e5f-4daf-8276-33ccaa9f940a; }]
> 2018-02-15 13:17:50 [Network] trace RECV 
> [[10.211.55.3:60054-10.241.132.41:5672]]: Frame[BEbe; channel=1; 
> {SessionAttachedBody: name=e2baafab-5e5f-4daf-8276-33ccaa9f940a; }]
> 2018-02-15 13:17:50 [Network] trace SENT 
> [[10.211.55.3:60054-10.241.132.41:5672]]: Frame[BEbe; channel=1; 
> {SessionRequestTimeoutBody: timeout=0; }]
> /// snip - later heartbeat timeout
> 2018-02-15 13:18:20 [Client] debug Traffic timeout
> /// snip - reconnecting again
> 2018-02-15 13:18:20 [System] info Connecting: 10.241.132.41:5672
> /// snip -reuse the same session id
> 2018-02-15 13:18:28 [Client] debug Known-brokers for connection:
> 2018-02-15 13:18:28 [Network] trace SENT 
> [[10.211.55.3:60056-10.241.132.41:5672]]: Frame[BEbe; channel=1; 
> {SessionAttachBody: name=e2baafab-5e5f-4daf-8276-33ccaa9f940a; }]
> 2018-02-15 13:18:28 [Network] trace RECV 
> [[10.211.55.3:60056-10.241.132.41:5672]]: Frame[BEbe; channel=0; 
> {SessionDetachedBody: name=e2baafab-5e5f-4daf-8276-33ccaa9f940a; code=1; }]
> 2018-02-15 13:18:28 [Client] info Connection 
> [10.211.55.3:60056-10.241.132.41:5672] dropping frame received on invalid 
> channel: Frame[BEbe; channel=0; {SessionDetachedBody: 
> name=e2baafab-5e5f-4daf-8276-33ccaa9f940a; code=1; }]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-8100) [Broker-J] [AMQP 0-10] SESSION_BUSY sent on wrong channel leading to hung Messaging API based application

2018-02-20 Thread Alex Rudyy (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-8100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370039#comment-16370039
 ] 

Alex Rudyy commented on QPID-8100:
--

The changes look reasonable to me. I am going to port them into 7.0.x branch

> [Broker-J] [AMQP 0-10] SESSION_BUSY sent on wrong channel leading to hung 
> Messaging API based application
> -
>
> Key: QPID-8100
> URL: https://issues.apache.org/jira/browse/QPID-8100
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: 0.32, qpid-java-broker-7.0.0, qpid-java-broker-7.0.1
> Environment: * Qpid Broker-J 0.32 derivative
> * Qpid Cpp Client using messaging API.
>Reporter: Keith Wall
>Priority: Major
>
> If, during session attachment, the Broker detects that the 0-10 session is 
> already in use by the same principal, the Broker is required to detach the 
> session by sending a {{session.detach}} on the same channel.  Currently owing 
> to a defect, the Broker sends this detach on channel 0, regardless of the 
> channel used by the peer.
> This defect was a contributory factor in a larger problem.  It prevented an 
> application from recovering automatically. In that case, a Qpid CPP 
> Messaging API client, recovering from a missing heartbeat, entered a hung state 
> whilst attaching the existing session.  The client library discarded the 
> {{session.detach}} on the unexpected channel, so it continued to await the 
> {{session.attached}}, which never came.  
> {noformat}
> /// original session attach
> 2018-02-15 13:17:50 [Network] trace SENT 
> [[10.211.55.3:60054-10.241.132.41:5672]]: Frame[BEbe; channel=1; 
> {SessionAttachBody: name=e2baafab-5e5f-4daf-8276-33ccaa9f940a; }]
> 2018-02-15 13:17:50 [Network] trace RECV 
> [[10.211.55.3:60054-10.241.132.41:5672]]: Frame[BEbe; channel=1; 
> {SessionAttachedBody: name=e2baafab-5e5f-4daf-8276-33ccaa9f940a; }]
> 2018-02-15 13:17:50 [Network] trace SENT 
> [[10.211.55.3:60054-10.241.132.41:5672]]: Frame[BEbe; channel=1; 
> {SessionRequestTimeoutBody: timeout=0; }]
> /// snip - later heartbeat timeout
> 2018-02-15 13:18:20 [Client] debug Traffic timeout
> /// snip - reconnecting again
> 2018-02-15 13:18:20 [System] info Connecting: 10.241.132.41:5672
> /// snip -reuse the same session id
> 2018-02-15 13:18:28 [Client] debug Known-brokers for connection:
> 2018-02-15 13:18:28 [Network] trace SENT 
> [[10.211.55.3:60056-10.241.132.41:5672]]: Frame[BEbe; channel=1; 
> {SessionAttachBody: name=e2baafab-5e5f-4daf-8276-33ccaa9f940a; }]
> 2018-02-15 13:18:28 [Network] trace RECV 
> [[10.211.55.3:60056-10.241.132.41:5672]]: Frame[BEbe; channel=0; 
> {SessionDetachedBody: name=e2baafab-5e5f-4daf-8276-33ccaa9f940a; code=1; }]
> 2018-02-15 13:18:28 [Client] info Connection 
> [10.211.55.3:60056-10.241.132.41:5672] dropping frame received on invalid 
> channel: Frame[BEbe; channel=0; {SessionDetachedBody: 
> name=e2baafab-5e5f-4daf-8276-33ccaa9f940a; code=1; }]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPID-8098) [Broker-J] [AMQP 0-10] Queue browsers erroneously increment the delivery count

2018-02-20 Thread Alex Rudyy (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-8098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy resolved QPID-8098.
--
Resolution: Fixed

Resolving the JIRA as changes are reviewed and merged into 7.0.x branch

> [Broker-J] [AMQP 0-10] Queue browsers erroneously increment the delivery count
> --
>
> Key: QPID-8098
> URL: https://issues.apache.org/jira/browse/QPID-8098
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: 0.30, 0.32, qpid-java-6.0, qpid-java-6.1, 
> qpid-java-broker-7.0.0
>Reporter: Keith Wall
>Assignee: Keith Wall
>Priority: Major
> Fix For: qpid-java-broker-7.0.2, qpid-java-broker-7.1.0
>
>
> On the AMQP 0-10 protocol path within the Broker, deliveries to queue 
> browsers erroneously increase the {{MessageInstance#deliveryCount}}.  This 
> should not happen.  The purpose of the delivery count is to count deliveries 
> to (destructive) consumers - not browsers.  The problem is restricted to AMQP 
> 0-10 implementation.  Neither AMQP 0-x nor AMQP 1.0 are affected by this 
> defect.
> The defect could mean that messages are spuriously routed to a DLQ (if 
> configured).  For this to happen, there would need to be additional 
> destructive consumers on the queue that cause the message to be 'released'.  
> Releasing occurs during transaction rollback and client disconnection (when 
> messages are prefetched).   The message would be prematurely routed to the 
> DLQ in this case.
> The defect is longstanding.  I tested as far back as 0.30.
> https://mail-archives.apache.org/mod_mbox/qpid-users/201802.mbox/%3c1518546115789-0.p...@n2.nabble.com%3e



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPID-8099) [Broker-J] [AMQP Management] Operation Queue#getMessageInfo response returned as serialised java object rather list of maps

2018-02-20 Thread Alex Rudyy (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-8099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy resolved QPID-8099.
--
   Resolution: Fixed
Fix Version/s: qpid-java-broker-7.0.2

Resolving the JIRA as changes are reviewed and merged into 7.0.x

> [Broker-J] [AMQP Management] Operation Queue#getMessageInfo response returned 
> as serialised java object rather list of maps
> ---
>
> Key: QPID-8099
> URL: https://issues.apache.org/jira/browse/QPID-8099
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.0.0, qpid-java-broker-7.0.1
>Reporter: Keith Wall
>Assignee: Keith Wall
>Priority: Minor
> Fix For: qpid-java-broker-7.0.2, qpid-java-broker-7.1.0
>
>
> Invoking {{Queue#getMessageInfo}} over AMQP management returns a response 
> message containing the serialised bytes of the MessageInfo object rather than 
> a list of maps.
> The problem is {{MessageInfo}} and {{LogRecord}} are missing the 
> {{ManagedAttributeValue}} interface, so 
> ManagedAttributeValueAbstractConverter is ignoring them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-8099) [Broker-J] [AMQP Management] Operation Queue#getMessageInfo response returned as serialised java object rather list of maps

2018-02-20 Thread Alex Rudyy (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-8099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370012#comment-16370012
 ] 

Alex Rudyy commented on QPID-8099:
--

The changes look good to me. I am going to port them into 7.0.x branch

> [Broker-J] [AMQP Management] Operation Queue#getMessageInfo response returned 
> as serialised java object rather list of maps
> ---
>
> Key: QPID-8099
> URL: https://issues.apache.org/jira/browse/QPID-8099
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.0.0, qpid-java-broker-7.0.1
>Reporter: Keith Wall
>Assignee: Keith Wall
>Priority: Minor
> Fix For: qpid-java-broker-7.1.0
>
>
> Invoking {{Queue#getMessageInfo}} over AMQP management returns a response 
> message containing the serialised bytes of the MessageInfo object rather than 
> a list of maps.
> The problem is {{MessageInfo}} and {{LogRecord}} are missing the 
> {{ManagedAttributeValue}} interface, so 
> ManagedAttributeValueAbstractConverter is ignoring them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-8098) [Broker-J] [AMQP 0-10] Queue browsers erroneously increment the delivery count

2018-02-20 Thread Alex Rudyy (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-8098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369974#comment-16369974
 ] 

Alex Rudyy commented on QPID-8098:
--

The changes look reasonable to me. I am going to port them into 7.0.x branch

> [Broker-J] [AMQP 0-10] Queue browsers erroneously increment the delivery count
> --
>
> Key: QPID-8098
> URL: https://issues.apache.org/jira/browse/QPID-8098
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: 0.30, 0.32, qpid-java-6.0, qpid-java-6.1, 
> qpid-java-broker-7.0.0
>Reporter: Keith Wall
>Assignee: Keith Wall
>Priority: Major
> Fix For: qpid-java-broker-7.0.2, qpid-java-broker-7.1.0
>
>
> On the AMQP 0-10 protocol path within the Broker, deliveries to queue 
> browsers erroneously increase the {{MessageInstance#deliveryCount}}.  This 
> should not happen.  The purpose of the delivery count is to count deliveries 
> to (destructive) consumers - not browsers.  The problem is restricted to AMQP 
> 0-10 implementation.  Neither AMQP 0-x nor AMQP 1.0 are affected by this 
> defect.
> The defect could mean that messages are spuriously routed to a DLQ (if 
> configured).  For this to happen, there would need to be additional 
> destructive consumers on the queue that cause the message to be 'released'.  
> Releasing occurs during transaction rollback and client disconnection (when 
> messages are prefetched).   The message would be prematurely routed to the 
> DLQ in this case.
> The defect is longstanding.  I tested as far back as 0.30.
> https://mail-archives.apache.org/mod_mbox/qpid-users/201802.mbox/%3c1518546115789-0.p...@n2.nabble.com%3e



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPID-8096) [Broker-J] PUTting a user preference to the BDB backed configuration store fails with NPE

2018-02-20 Thread Alex Rudyy (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy resolved QPID-8096.
--
Resolution: Fixed

Resolving the JIRA as changes have been reviewed.

> [Broker-J] PUTting a user preference to the BDB backed configuration store 
> fails with NPE
> -
>
> Key: QPID-8096
> URL: https://issues.apache.org/jira/browse/QPID-8096
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-6.1, qpid-java-broker-7.0.0, 
> qpid-java-broker-7.0.1
>Reporter: Keith Wall
>Assignee: Keith Wall
>Priority: Minor
> Fix For: qpid-java-broker-7.1.0
>
>
> When an authenticated user puts (HTTP PUT) a user preference to the store 
> while using a configuration store backed by BDB (or JDBC), the operation 
> fails with a stack trace like the following.  The store exception would cause 
> the Broker to fail.  A restart would restore normal service.
> The normal use-case for preferences is the persistence of queries within the 
> web-management console.  The console uses POST operations, so it is 
> unaffected by this defect.
> {noformat}
> Thread terminated due to uncaught 
> exceptionorg.apache.qpid.server.store.StoreException: Error on replacing of 
> preferences: null
>   at 
> org.apache.qpid.server.store.berkeleydb.StandardEnvironmentFacade.handleDatabaseException(StandardEnvironmentFacade.java:447)
>   at 
> org.apache.qpid.server.store.berkeleydb.AbstractBDBPreferenceStore.removeAndAdd(AbstractBDBPreferenceStore.java:211)
>   at 
> org.apache.qpid.server.store.berkeleydb.AbstractBDBPreferenceStore.replace(AbstractBDBPreferenceStore.java:160)
>   at 
> org.apache.qpid.server.model.preferences.UserPreferencesImpl.doReplaceByTypeAndName(UserPreferencesImpl.java:280)
>   at 
> org.apache.qpid.server.model.preferences.UserPreferencesImpl.access$400(UserPreferencesImpl.java:58)
>   at 
> org.apache.qpid.server.model.preferences.UserPreferencesImpl$6.doOperation(UserPreferencesImpl.java:249)
>   at 
> org.apache.qpid.server.model.preferences.UserPreferencesImpl$6.doOperation(UserPreferencesImpl.java:245)
>   at 
> org.apache.qpid.server.model.preferences.UserPreferencesImpl$PreferencesTask$1.run(UserPreferencesImpl.java:653)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.qpid.server.model.preferences.UserPreferencesImpl$PreferencesTask.execute(UserPreferencesImpl.java:648)
>   at 
> org.apache.qpid.server.configuration.updater.TaskExecutorImpl$TaskLoggingWrapper.execute(TaskExecutorImpl.java:248)
>   at 
> org.apache.qpid.server.configuration.updater.TaskExecutorImpl$CallableWrapper$1.run(TaskExecutorImpl.java:320)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.qpid.server.configuration.updater.TaskExecutorImpl$CallableWrapper.call(TaskExecutorImpl.java:313)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:111)
>   at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:58)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:75)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at 
> org.apache.qpid.server.bytebuffer.QpidByteBufferFactory.lambda$null$0(QpidByteBufferFactory.java:464)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.qpid.server.store.berkeleydb.tuple.UUIDTupleBinding.objectToEntry(UUIDTupleBinding.java:42)
>   at 
> org.apache.qpid.server.store.berkeleydb.tuple.UUIDTupleBinding.objectToEntry(UUIDTupleBinding.java:29)
>   at 
> com.sleepycat.bind.tuple.TupleBinding.objectToEntry(TupleBinding.java:81)
>   at 
> org.apache.qpid.server.store.berkeleydb.AbstractBDBPreferenceStore.removeAndAdd(AbstractBDBPreferenceStore.java:192)
>   ... 21 more
> {noformat}
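
For reference, a minimal sketch of the kind of request that triggers the defect
described above. The REST path, credentials, and preference payload are
assumptions for illustration; only the use of HTTP PUT (rather than POST)
matters here.

{code:java}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PutPreference {
    public static void main(String[] args) throws Exception {
        // Hypothetical management endpoint and preference payload.
        URL url = new URL("http://localhost:8080/api/latest/broker/userpreferences/X-query/myQuery");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("PUT");
        connection.setDoOutput(true);
        connection.setRequestProperty("Content-Type", "application/json");
        String credentials = Base64.getEncoder()
                .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));
        connection.setRequestProperty("Authorization", "Basic " + credentials);
        String body = "{\"value\": {\"scope\": \"queue\", \"category\": \"queue\"}}";
        try (OutputStream out = connection.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        // With a BDB-backed configuration store, the unfixed broker fails here
        // with the StoreException shown above instead of returning a 2xx response.
        System.out.println(connection.getResponseCode());
    }
}
{code}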



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-8097) NullPointerException in AMQP 1.0 plugin using OpenJDK

2018-02-20 Thread Artyom Safronov (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16369916#comment-16369916
 ] 

Artyom Safronov commented on QPID-8097:
---

Sorry for the belated response.

I cannot provide my code because it is commercial, but the code required to 
reproduce the issue is trivial.

First of all, the broker has to be started. In my case it is started embedded, 
but I think that does not matter and you can run it manually.

Then there must be code that tries to create a connection to the broker, as 
sketched below. That is it.
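
For illustration, a minimal sketch of such a connection attempt using
vertx-proton, which the attached stack trace shows on the client side; host and
port are placeholders.

{code:java}
import io.vertx.core.Vertx;
import io.vertx.proton.ProtonClient;
import io.vertx.proton.ProtonConnection;

public class ConnectRepro {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        ProtonClient client = ProtonClient.create(vertx);
        // A bare connection attempt is enough; the failure occurs before any
        // sessions or links are created.
        client.connect("localhost", 5672, result -> {
            if (result.succeeded()) {
                ProtonConnection connection = result.result();
                connection.open();
            } else {
                // On the affected OpenJDK versions this reports the errors
                // quoted in the description below.
                result.cause().printStackTrace();
            }
        });
    }
}
{code}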

> NullPointerException in AMQP 1.0 plugin using OpenJDK
> -
>
> Key: QPID-8097
> URL: https://issues.apache.org/jira/browse/QPID-8097
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-6.1.5
>Reporter: Artyom Safronov
>Priority: Major
> Attachments: Qpid stacktrace.txt
>
>
> An unexpected NullPointerException occurs in the AMQP 1.0 plugin at the first 
> attempt to connect to the broker using *OpenJDK*.
> The problem was found while running Qpid in unit tests that verify code 
> communicating with the broker through AMQP. There is a test case that starts 
> Qpid before each test and stops it afterwards. That test case verifies a 
> module that establishes a connection with the broker. The broker refuses the 
> connection because of a NullPointerException (see the attached stacktrace). 
> Using *Oracle JDK*, all tests run fine.
> +The first strange+ thing: only the test that makes the first connection 
> attempt fails. The remaining tests run successfully even though Qpid is 
> restarted for each one, so the order of the tests is not important. The 
> project is built using Maven and all its plugins run within +a single 
> process+ (without forking).
> +The second strange+ thing is that the behavior of Qpid differs slightly 
> between two *OpenJDK* versions.
> With version *1.8.0_131*, the broker accepts the transport connection but 
> rejects the "OPEN" frame with the following error:
> {code:java}
> io.vertx.core.impl.NoStackTraceThrowable: Error{condition=amqp:not-found, 
> description='Unknown hostname in connection open: 'default'', info=null}
> {code}
> With version *1.8.0_161*, the broker rejects the transport connection 
> instantly with the following exception:
> {code:java}
> io.vertx.core.VertxException: Disconnected
>  at 
> io.vertx.proton.impl.ProtonClientImpl.lambda$null$0(ProtonClientImpl.java:80) 
> ~[vertx-proton-3.5.0.jar:?]
>  at 
> io.vertx.proton.impl.ProtonConnectionImpl.fireDisconnect(ProtonConnectionImpl.java:374)
>  ~[vertx-proton-3.5.0.jar:?]
>  at 
> io.vertx.proton.impl.ProtonTransport.handleSocketEnd(ProtonTransport.java:89) 
> ~[vertx-proton-3.5.0.jar:?]
>  at io.vertx.core.net.impl.NetSocketImpl.handleClosed(NetSocketImpl.java:345) 
> ~[vertx-core-3.5.0.jar:?]
>  at io.vertx.core.impl.ContextImpl.lambda$wrapTask$2(ContextImpl.java:342) 
> ~[vertx-core-3.5.0.jar:?]
>  at io.vertx.core.impl.ContextImpl.executeFromIO(ContextImpl.java:200) 
> [vertx-core-3.5.0.jar:?]
>  at 
> io.vertx.core.net.impl.VertxHandler.channelInactive(VertxHandler.java:134) 
> [vertx-core-3.5.0.jar:?]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
>  [netty-transport-4.1.15.Final.jar:4.1.15.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
>  [netty-transport-4.1.15.Final.jar:4.1.15.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
>  [netty-transport-4.1.15.Final.jar:4.1.15.Final]
>  at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1354)
>  [netty-transport-4.1.15.Final.jar:4.1.15.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
>  [netty-transport-4.1.15.Final.jar:4.1.15.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
>  [netty-transport-4.1.15.Final.jar:4.1.15.Final]
>  at 
> io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:917)
>  [netty-transport-4.1.15.Final.jar:4.1.15.Final]
>  at 
> io.netty.channel.AbstractChannel$AbstractUnsafe$7.run(AbstractChannel.java:763)
>  [netty-transport-4.1.15.Final.jar:4.1.15.Final]
>  at 
> io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
>  [netty-common-4.1.15.Final.jar:4.1.15.Final]
>  at 
> io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
>  [netty-common-4.1.15.Final.jar:4.1.15.Final]
>  at 

[jira] [Comment Edited] (QPID-8091) [Broker-J] [AMQP 1.0] Store transaction timeout feature

2018-02-20 Thread Alex Rudyy (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-8091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16369826#comment-16369826
 ] 

Alex Rudyy edited comment on QPID-8091 at 2/20/18 9:03 AM:
---

The changes were merged into the 7.0.x branch.
Here are the corresponding commits:
{noformat}
commit 8e874ce58909896cbc4985aaab2957eca6023b0c (HEAD -> 7.0.x, origin/7.0.x)
Author: Alex Rudyy 
Date:   Mon Feb 19 17:30:51 2018 +

QPID-8091: [Broker-J] Correct transaction timeout documentation

(cherry picked from commit 2303a76e2cb92ef0bcf282851c9cf1db4fd2eb00)

commit ba4344f67ea06e6f87367e00ce42cac56a70860e
Author: Keith Wall 
Date:   Fri Feb 9 16:04:27 2018 +

QPID-8091: [Broker-J] Move transaction timeout protocol test to separate 
packages - this features is Broker-J specific.

Also refactored the new test broker configuration mechanism so that the 
configuration of the whole broker can be adjusted,
rather than just the virtualhost.

(cherry picked from commit d57815f89427781bb3cf3d5f6c70b3b13a8604ff. Merge 
conflicts resolved manually)

commit a5a15fc3bcb277cb94a8ccc4536cb2eef70d27a2
Author: Keith Wall 
Date:   Fri Feb 9 13:56:37 2018 +

QPID-8091: [Broker-J] Update transaction timeout chapter in docbook

* Remove note regarding AMQP 1.0
* Generalise from 'producer transaction timeout' to 'transaction timeout'.  
The former was only true when using the Qpid JMS AMQP 0-x client
  (which delayed acking the messages until the application called commit).
* Update the operational log messages

(cherry picked from commit 63c315f07553dcdf32e2de1888f1cb9749e15d5c)

commit 2cd03738c4ff09f8306678e89ac061032012c26b
Author: Keith Wall 
Date:   Fri Feb 9 12:56:41 2018 +

QPID-8091: [Broker-J] Transaction timeout - move idle/open warning message 
from channel to connection

(cherry picked from commit 876bcb7af979dbf6a4c128d762d2fc507e1580f2)

commit 8ee099a987562e9a98074fbd6aa4698af6796e49
Author: Alex Rudyy 
Date:   Thu Feb 8 16:07:48 2018 +

QPID-8091: [Broker-J][AMQP 0-10] Close 0-10 connection on transaction 
timeout

(cherry picked from commit a9667120ed7a64264a50c80a2938a6c73c3f93f2. Merge 
conflict resolved manually)

commit f84a341c9f05ad53c1a86a19dd55509e45423a51
Author: Alex Rudyy 
Date:   Thu Feb 8 14:40:02 2018 +

QPID-8091: [Broker-J] Report connection close reason as part of operational 
log message

(cherry picked from commit 53cf0201a3d363f3e5f18ef758fd8a6fc3d22b4c)

commit 16bd3953103e7878133cea18291b2ce4b7f469e4
Author: Alex Rudyy 
Date:   Thu Feb 8 11:06:38 2018 +

QPID-8091: [Broker-J] Add missing annotation

(cherry picked from commit 42c182f0b994292d801d2393f033bb773415b92f)

commit f649224bff05bbd65d7256d932c136d99159b411
Author: Alex Rudyy 
Date:   Wed Feb 7 23:48:05 2018 +

QPID-8091: [Broker-J] Add protocol tests for transaction timeout feature

(cherry picked from commit c531ca0ac28e5fd457b4b114674867b3bd2ee093. Merge 
conflicts are resolved manually)

commit 5b8587a021c58c233b631d161a80e291dff441bf
Author: Alex Rudyy 
Date:   Wed Feb 7 23:47:17 2018 +

QPID-8091: [Broker-J] [AMQP 0-10] Invoke 0-10 session on close operations 
only once

(cherry picked from commit 46c49cf206d776af883610352381219a8431ffb4)

commit 24df0ac1f37d22a98d89303b95f44cd4d03be2d8
Author: Alex Rudyy 
Date:   Tue Feb 6 15:48:36 2018 +

QPID-8091: [Broker-J] [AMQP 1.0] Add store transaction timeout feature

(cherry picked from commit ffd5ad0d456532fb6c9b0ba4e28297c3452bf32c. Merge 
conflicts resolved manually.)

{noformat}


was (Author: alex.rufous):
The changes merged into 7.0.x branch

> [Broker-J] [AMQP 1.0] Store transaction timeout feature
> ---
>
> Key: QPID-8091
> URL: https://issues.apache.org/jira/browse/QPID-8091
> Project: Qpid
>  Issue Type: Improvement
>  Components: Broker-J
>Reporter: Keith Wall
>Assignee: Keith Wall
>Priority: Major
> Fix For: qpid-java-broker-7.0.2, qpid-java-broker-7.1.0
>
>
> Berkeley JE's design means that once a transaction has begun, its internal 
> cleaner is unable to clean beyond the point the transaction started in the 
> transaction log.  Other transaction work continues as normal, but the disk 
> space utilisation can't shrink until the long running transaction is 
> completed.  In an extreme case, disk space can be exhausted.
> This has consequences for long-running store transactions in Qpid.  For 0-x, 
> the transaction timeout feature allows the length of time a transaction 

[jira] [Updated] (QPID-8091) [Broker-J] [AMQP 1.0] Store transaction timeout feature

2018-02-20 Thread Alex Rudyy (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-8091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy updated QPID-8091:
-
Fix Version/s: qpid-java-broker-7.0.2

> [Broker-J] [AMQP 1.0] Store transaction timeout feature
> ---
>
> Key: QPID-8091
> URL: https://issues.apache.org/jira/browse/QPID-8091
> Project: Qpid
>  Issue Type: Improvement
>  Components: Broker-J
>Reporter: Keith Wall
>Assignee: Keith Wall
>Priority: Major
> Fix For: qpid-java-broker-7.0.2, qpid-java-broker-7.1.0
>
>
> Berkeley JE's design means that once a transaction has begun, its internal 
> cleaner is unable to clean beyond the point the transaction started in the 
> transaction log.  Other transaction work continues as normal, but the disk 
> space utilisation can't shrink until the long running transaction is 
> completed.  In an extreme case, disk space can be exhausted.
> This has consequences for long-running store transactions in Qpid.  For 0-x, 
> the transaction timeout feature allows the length of time a transaction 
> may be open or idle to be constrained, thus limiting the harmful effects of a 
> long-running store transaction.
> For robustness, that feature needs to be implemented for AMQP 1.0 too.  A 
> decision needs to be made about the correct course of action when a 
> long-running transaction exceeds the threshold: the transaction 
> coordinator link could be closed with an error, or the entire connection 
> closed.
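
As an illustration of the failure mode described above, a minimal sketch of a
long-running store transaction as seen from a JMS client (assuming the Qpid JMS
AMQP 1.0 client; the broker URL and queue name are placeholders). While the
session remains uncommitted, the underlying store transaction stays open and
the BDB cleaner cannot reclaim log files written after it began.

{code:java}
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.qpid.jms.JmsConnectionFactory;

public class LongRunningTransaction {
    public static void main(String[] args) throws Exception {
        // Hypothetical broker URL and queue name, for illustration only.
        JmsConnectionFactory factory = new JmsConnectionFactory("amqp://localhost:5672");
        try (Connection connection = factory.createConnection()) {
            connection.start();
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageProducer producer = session.createProducer(session.createQueue("queue01"));
            producer.send(session.createTextMessage("work"));
            // The store transaction is now open. Until commit() or rollback(),
            // disk space utilisation cannot shrink -- the situation the
            // transaction timeout feature is designed to bound.
            Thread.sleep(60_000);
            session.commit();
        }
    }
}
{code}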



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPID-8091) [Broker-J] [AMQP 1.0] Store transaction timeout feature

2018-02-20 Thread Alex Rudyy (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-8091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy resolved QPID-8091.
--
Resolution: Fixed

The changes were merged into the 7.0.x branch.

> [Broker-J] [AMQP 1.0] Store transaction timeout feature
> ---
>
> Key: QPID-8091
> URL: https://issues.apache.org/jira/browse/QPID-8091
> Project: Qpid
>  Issue Type: Improvement
>  Components: Broker-J
>Reporter: Keith Wall
>Assignee: Keith Wall
>Priority: Major
> Fix For: qpid-java-broker-7.0.2, qpid-java-broker-7.1.0
>
>
> Berkeley JE's design means that once a transaction has begun, its internal 
> cleaner is unable to clean beyond the point the transaction started in the 
> transaction log.  Other transaction work continues as normal, but the disk 
> space utilisation can't shrink until the long running transaction is 
> completed.  In an extreme case, disk space can be exhausted.
> This has consequences for long-running store transactions in Qpid.  For 0-x, 
> the transaction timeout feature allows the length of time a transaction 
> may be open or idle to be constrained, thus limiting the harmful effects of a 
> long-running store transaction.
> For robustness, that feature needs to be implemented for AMQP 1.0 too.  A 
> decision needs to be made about the correct course of action when a 
> long-running transaction exceeds the threshold: the transaction 
> coordinator link could be closed with an error, or the entire connection 
> closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org