Qpid Broker-J 9.0.0 SimpleLDAP and Broker AESKeyFile encryption

2023-02-14 Thread Bryan Dixon
I am working on installing Qpid Broker-J 9.0.0 on Windows.  I'm using this Java 
11 version:
IMPLEMENTOR="Eclipse Adoptium"
IMPLEMENTOR_VERSION="Temurin-11.0.18+10"
JAVA_VERSION="11.0.18"
JAVA_VERSION_DATE="2023-01-17"
FULL_VERSION="11.0.18+10"
JVM_VARIANT="Hotspot"

I want to use SimpleLDAP as an AuthenticationProvider against our 
ActiveDirectory environment.  I'm currently testing with just the HTTP(S) port 
to make sure I have it configured correctly.  Anyway, I appear to have 
everything working just fine, including referencing AD groups in the RuleBased 
ACLs.  However, I want to use Broker-J's AESKeyFile encryption to encrypt 
the LDAP search password.

I made a copy of my qpid work area that was working correctly with SimpleLDAP 
and deleted it to start the configuration over.  But this time I enabled the 
Broker AESKeyFile encryption up front before I configured SimpleLDAP.   I also 
restarted the broker after I enabled encryption and before I continued the 
configuration. I used the exact same configuration settings in this new work 
area and now authentication always fails.  The error message being logged is:

2023-02-14T20:58:22,270Z WARN [qtp453021524-123] (o.a.q.s.s.a.m.SimpleLDAPAuthenticationManagerImpl) - Retrieving LDAP name for user 'xx' resulted in error.
javax.naming.AuthenticationException: [LDAP: error code 49 - 80090308: LdapErr: DSID-0C090439, comment: AcceptSecurityContext error, data 52e, v4563 ]
    at java.naming/com.sun.jndi.ldap.LdapCtx.mapErrorCode(LdapCtx.java:3259)
    at java.naming/com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:3205)
    at java.naming/com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2991)
    at java.naming/com.sun.jndi.ldap.LdapCtx.connect(LdapCtx.java:2905)
    at java.naming/com.sun.jndi.ldap.LdapCtx.<init>(LdapCtx.java:348)
    at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxFromUrl(LdapCtxFactory.java:266)
    at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(LdapCtxFactory.java:226)
    at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(LdapCtxFactory.java:284)
    at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(LdapCtxFactory.java:185)
    at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(LdapCtxFactory.java:115)
    at java.naming/javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:730)
    at java.naming/javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:305)
    at java.naming/javax.naming.InitialContext.init(InitialContext.java:236)
    at java.naming/javax.naming.InitialContext.<init>(InitialContext.java:208)
    at java.naming/javax.naming.directory.InitialDirContext.<init>(InitialDirContext.java:101)
    at org.apache.qpid.server.security.auth.manager.SimpleLDAPAuthenticationManagerImpl.lambda$createInitialDirContext$2(SimpleLDAPAuthenticationManagerImpl.java:602)
    at java.base/java.security.AccessController.doPrivileged(Native Method)
    at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
    at org.apache.qpid.server.security.auth.manager.SimpleLDAPAuthenticationManagerImpl.invokeContextOperationAs(SimpleLDAPAuthenticationManagerImpl.java:796)
    at org.apache.qpid.server.security.auth.manager.SimpleLDAPAuthenticationManagerImpl.createInitialDirContext(SimpleLDAPAuthenticationManagerImpl.java:602)


I'm wondering if AESKeyFile encryption might not be working correctly with 
SimpleLDAP - perhaps the encrypted password (the password is encrypted in the 
config.json file) isn't being decrypted before LDAP is checked?
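For what it's worth, AD's "data 52e" sub-code in the error above specifically means the bind credentials were rejected, which is consistent with the broker sending a wrong (possibly still-encrypted) search password. One way to rule out the directory side is to bind with the same search DN and the plaintext password from a standalone JNDI program. This is only a sketch using the JDK's built-in LDAP provider; the URL, DN, and password below are placeholders to substitute with your own values:

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.InitialDirContext;

// Standalone sanity check: bind to AD with the same search DN and the
// plaintext password, outside the broker. URL/DN/password are placeholders.
public class LdapBindCheck {
    // Builds the JNDI environment for a simple bind.
    public static Hashtable<String, String> buildEnv(String url, String dn, String password) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url);
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, dn);
        env.put(Context.SECURITY_CREDENTIALS, password);
        return env;
    }

    public static void main(String[] args) {
        Hashtable<String, String> env = buildEnv(
                "ldap://ad.example.com:389",                  // placeholder host
                "CN=svc-qpid,OU=Services,DC=example,DC=com",  // placeholder search DN
                "secret");                                    // plaintext search password
        try {
            new InitialDirContext(env).close();
            System.out.println("bind OK");
        } catch (NamingException e) {
            System.out.println("bind FAILED: " + e.getMessage());
        }
    }
}
```

If this bind succeeds with the plaintext password while the broker still logs error 49, that points at the encrypted value in config.json not being decrypted (or having been encrypted from the wrong cleartext).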

Thanks
Bryan


Re: Broker-J 7.0.2 BDB_HA not replicating messages correctly

2020-01-27 Thread Bryan Dixon
Thanks Alex.  I removed that context variable and replication did indeed work
correctly.  What's odd is that we've been using that context variable for
quite a while and I hadn't noticed any issue with replication.  In fact, I
ran these same tests multiple times in our Dev and Prod environments and
they always worked successfully.  The only difference between this Test
environment and our Dev and Prod ones is that I did switch out 2 of the
virtual host nodes.  So maybe that will help you guys track down the bug.

Bryan



--
Sent from: http://qpid.2158936.n2.nabble.com/Apache-Qpid-users-f2158936.html

-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
For additional commands, e-mail: users-h...@qpid.apache.org



Re: Broker-J 7.0.2 BDB_HA not replicating messages correctly

2020-01-27 Thread Bryan Dixon
Alex, I always use the highest durability sync policy for Master and Replica:
SYNC.

Also, for the virtual host, I am using this custom context variable:
use_async_message_store_recovery=true

We use that because we used to have some queues with a large number of
messages.  I wanted to bring it up in case it matters, but we've been using
that setting for a while and have never had this issue.


In node2's log file, after I made node2 the master, you can see the
following messages that concern me:

2020-01-24 11:00:00,211 DEBUG [Feeder Output for dixon01] (c.s.j.r.i.n.Feeder) - dixon02 Feeder output thread for replica dixon01 started at VLSN 7,024 master at 7,023 (DTVLSN:7,022) VLSN delta=-1 socket=(dixon01(1))com.sleepycat.je.rep.utilint.net.SimpleDataChannel@34d03b0c
2020-01-24 11:00:00,243 INFO [StateChange-dixonbroker:dixon02] (q.m.h.role_changed) - [Broker] [grp(/dixonbroker)] HA-1010 : Role change reported: Node : 'dixon02' (spgmqtst2:5011) : from 'WAITING' to 'MASTER'
2020-01-24 11:00:00,243 INFO [Queue Recoverer : app_test (vh: dixonbroker)] (q.m.t.recovered) - [Broker] [vh(/dixonbroker)/ms(ProvidedBDBMessageStore)] TXN-1005 : Recovered 0 messages for queue app_test


See that last log message - Recovered 0 messages for queue app_test.   Isn't
that an issue?  I would expect to see 50 messages recovered.  Perhaps I'm
misunderstanding the log messages.

Thanks
Bryan







Broker-J 7.0.2 BDB_HA not replicating messages correctly

2020-01-24 Thread Bryan Dixon
We've been using Broker-J 7.0.2 BDB_HA successfully (on Windows servers) for
a couple of years now without issue.  While preparing to upgrade to Broker-J
7.1.6, I was performing some tests in our Test Broker-J cluster and noticed
that BDB_HA message replication isn't working properly; we've never had an
issue like this before.  By "not working properly" I mean: when I publish
messages to the master virtual host node and then force (via the admin
console) another virtual host node to become the master, the messages I just
added are not in the queue on the new master.  And if I publish messages on
the new master, those messages aren't in the queue on the next node I make
the master.

One difference in our Test environment from our Dev and Production
environments (which all use 3 nodes/servers) is that a couple of months ago
we replaced 2 nodes (2 VMs - swapped out Windows Server 2008 with Windows
Server 2016 on those 2 VMs).  For that process I just changed the Virtual
Host Node to add the 2 new VMs and then removed the old nodes (the old
Windows 2008 VMs).  Everything appeared to be fine after that change, but I
didn't run the tests I am running now, so perhaps that is what is causing
this issue.

To see whether this replication issue was specific to our existing BDB_HA
virtual host (the VH that existed prior to the Windows VM replacement
discussed previously), I created a brand new virtual host (dixonbroker) in
our Test Qpid environment and added a virtual host node on each of our VMs
in Test.  I am still getting the same behavior, where messages aren't
replicating correctly to each node when that node becomes master.  To help
with troubleshooting I used this new dixonbroker virtual host for the tests
I've documented below.

I've attached log files from all 3 nodes with some debugging enabled (I can
add more debugging and run the tests again if it helps).


Here are the steps of my test:
1) Node1 was master.  I created a new durable queue named app_test.  I have
a little Java app using qpid-jms-client 0.31 (I tried 0.44 and it still
produced the same bad results).  I use Spring's JmsTemplate to publish 50
messages.  This was done at 11:00 (to help see the flow in the log files).
2) I made Node2 the master and there were no messages in the app_test queue
(see the log message "2020-01-24 11:00:00,243 INFO [Queue Recoverer :
app_test (vh: dixonbroker)] (q.m.t.recovered) - [Broker]
[vh(/dixonbroker)/ms(ProvidedBDBMessageStore)] TXN-1005 : Recovered 0
messages for queue app_test" in the node2 log file).
3) I made Node3 the master.  The 50 messages DID exist in the app_test
queue.  I then added 50 more messages to app_test queue (so total of 100)
while Node3 was the master.  This was at 11:02.
4) I made Node1 the master.  It only had 50 messages in app_test queue -
should have had 100.
5) I made Node2 the master. It still didn't have any messages in app_test
queue - should have had 100.



qpid-node1.log
qpid-node2.log
qpid-node3.log






Process to upgrade Qpid Broker-J 7.0.2 to 7.1.6

2020-01-13 Thread Bryan Dixon
I want to upgrade/migrate our current Qpid Broker-J 7.0.2 environments
to 7.1.6 (or 7.1.7 when it is released).

Our current 7.0.2 environment is BDB HA (3 nodes) running on Windows Server
2012 R2 with Oracle JDK 1.8.  As part of the move to Broker-J 7.1.6 I want
to move to AdoptOpenJDK 11.

To perform the upgrade, do I just do the following:
1. stop all nodes of the 7.0.2 environment
2. start the 7.1.6 environment (one node at a time) pointing to the existing
qpid work environment that we were using with 7.0.2.

Also, I'd like a backup of the qpid work environment that I can use for a
restore in case we hit some unexpected issue after the upgrade.  Can I
simply copy the qpid work directory on each node while all nodes are down,
and then, if there are upgrade issues, restore that directory on each node
and use it with 7.0.2?
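The backup described above amounts to a cold copy taken while every node is stopped. A minimal recursive-copy sketch using only the JDK (the source and target paths are placeholders; run it once per node):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// Minimal cold-backup sketch: recursively copy the qpid work directory
// while all broker nodes are stopped. Paths below are placeholders.
public class QpidWorkBackup {
    // Walks the source tree depth-first, creating directories before
    // copying the files inside them.
    public static void copyTree(Path source, Path target) throws IOException {
        try (var paths = Files.walk(source)) {
            for (Path p : (Iterable<Path>) paths::iterator) {
                Path dest = target.resolve(source.relativize(p).toString());
                if (Files.isDirectory(p)) {
                    Files.createDirectories(dest);
                } else {
                    Files.copy(p, dest,
                            StandardCopyOption.REPLACE_EXISTING,
                            StandardCopyOption.COPY_ATTRIBUTES);
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Placeholder paths -- substitute the real work directory per node.
        copyTree(Paths.get("D:/qpidwork"), Paths.get("D:/qpidwork-backup-7.0.2"));
    }
}
```

Restoring is the same copy in reverse, again with all nodes down, before restarting 7.0.2.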

Thanks
Bryan






Re: Delayed Delivery doesn't appear to be working with Broker-J and Qpid JMS

2019-07-29 Thread Bryan Dixon
Thanks Alex.  I just installed Broker-J 7.1.4 and noticed the attribute you
mentioned is now exposed in the Admin UI, so I enabled it on the specific
queue I was testing and it worked correctly.
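For reference, the same queue attribute can also be set over Broker-J's HTTP management API rather than the Admin UI. This is only a hedged sketch using the JDK's HttpClient: the attribute name (holdOnPublishEnabled), the host, port, virtual-host names, and REST path are my assumptions based on Broker-J's documented REST layout, so verify them against your broker before relying on this:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hedged sketch: enable delayed delivery on a queue via the REST API.
// Attribute name and URL path are assumptions -- check your broker's docs.
public class EnableHoldOnPublish {
    // Builds a POST that sets holdOnPublishEnabled=true on the named queue.
    static HttpRequest buildRequest(String base, String vhostNode, String vhost, String queue) {
        return HttpRequest.newBuilder()
                .uri(URI.create(base + "/api/latest/queue/" + vhostNode + "/" + vhost + "/" + queue))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"holdOnPublishEnabled\": true}"))
                .build();
    }

    public static void main(String[] args) {
        // Placeholder host and names -- substitute your own.
        HttpRequest req = buildRequest("http://localhost:8080", "default", "default", "app_delay_queue");
        try {
            HttpResponse<String> resp = HttpClient.newHttpClient()
                    .send(req, HttpResponse.BodyHandlers.ofString());
            System.out.println("HTTP " + resp.statusCode());
        } catch (Exception e) {
            System.out.println("request failed: " + e.getMessage());
        }
    }
}
```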

We will eventually have a BDB_HA group set up for this new Broker-J
environment.  Are there any issues with HA regarding delayed-delivery
messages?  Will those messages get replicated to all nodes and have
guaranteed delivery (as long as we're using persistent messages)?

Bryan






Qpid Broker-J 7.0 vs 7.1

2019-07-29 Thread Bryan Dixon
We've been successfully using Qpid Broker-J 7.0.2 in production for well over
a year and we have a new project that will require a new Qpid Broker-J
install.  I have been reviewing the current release of Qpid Broker-J and
noticed there are now 7.0.x and 7.1.x releases.  I'm assuming we'll want to
use the latest 7.1.x release but I can't find any documentation about the
difference between 7.0.x and 7.1.x.   Is there any information about the
different releases?








Delayed Delivery doesn't appear to be working with Broker-J and Qpid JMS

2019-07-29 Thread Bryan Dixon
I'm using:
Broker-J  7.0.2  (using Oracle Corporation 1.8.0_162-b12)
Qpid JMS (AMQP 1.0) 0.44.0 (using Adopt OpenJDK jdk-11.0.3.7-openj9)
Spring JMS  5.1.8

I'm using the following JMS code to publish a message to a queue in Broker-J
(using the amq-direct exchange) with a delayed delivery and the message
never gets delivered to the destination queue:

javax.jms.Destination destination =
    jmsTemplate.getDestinationResolver().resolveDestinationName(
        session, "app_delay_queue", jmsTemplate.isPubSubDomain());

MessageProducer producer = session.createProducer(destination);

producer.setDeliveryDelay(workRequest.getDelaySeconds() * 1000);

Message message = jmsTemplate.getMessageConverter().toMessage(
    new TaskInfo(workRequest.getSomeWork(), workRequest), session);

producer.send(message);






Qpid JMS 0.33.0 OutOfMemoryError

2018-06-29 Thread Bryan Dixon
We have a Java Spring app that was using Qpid JMS 0.31.0 without any issues. 
I upgraded it to use Qpid JMS 0.33.0 and the app would occasionally receive
an OutOfMemoryError.  The stacktrace is:

2018-06-28 05:13:05.434 -- WARN -- [AmqpProvider :(26492):[amqp://spgmqtest1:5692]] io.netty.channel.AbstractChannel: Force-closing a channel whose registration task was not accepted by an event loop: [id: 0x4ecf4236]
java.lang.OutOfMemoryError: Failed to create a thread: retVal -1073741830, errno 22
    at java.lang.Thread.startImpl(Native Method) ~[?:1.8.0-internal]
    at java.lang.Thread.start(Thread.java:948) ~[?:1.8.0-internal]
    at io.netty.util.concurrent.ThreadPerTaskExecutor.execute(ThreadPerTaskExecutor.java:33) ~[netty-common-4.1.25.Final.jar:4.1.25.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor.doStartThread(SingleThreadEventExecutor.java:873) ~[netty-common-4.1.25.Final.jar:4.1.25.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor.startThread(SingleThreadEventExecutor.java:862) ~[netty-common-4.1.25.Final.jar:4.1.25.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:767) ~[netty-common-4.1.25.Final.jar:4.1.25.Final]
    at io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:479) [netty-transport-4.1.25.Final.jar:4.1.25.Final]
    at io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:80) [netty-transport-4.1.25.Final.jar:4.1.25.Final]
    at io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:74) [netty-transport-4.1.25.Final.jar:4.1.25.Final]
    at io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:86) [netty-transport-4.1.25.Final.jar:4.1.25.Final]
    at io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:333) [netty-transport-4.1.25.Final.jar:4.1.25.Final]
    at io.netty.bootstrap.Bootstrap.doResolveAndConnect(Bootstrap.java:163) [netty-transport-4.1.25.Final.jar:4.1.25.Final]
    at io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:145) [netty-transport-4.1.25.Final.jar:4.1.25.Final]
    at io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:126) [netty-transport-4.1.25.Final.jar:4.1.25.Final]
    at org.apache.qpid.jms.transports.netty.NettyTcpTransport.connect(NettyTcpTransport.java:181) [qpid-jms-client-0.33.0.jar:?]
    at org.apache.qpid.jms.provider.amqp.AmqpProvider$1.run(AmqpProvider.java:223) [qpid-jms-client-0.33.0.jar:?]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:522) [?:1.8.0-internal]
    at java.util.concurrent.FutureTask.run(FutureTask.java:277) [?:1.8.0-internal]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:191) [?:1.8.0-internal]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:1.8.0-internal]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1153) [?:1.8.0-internal]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:1.8.0-internal]
    at java.lang.Thread.run(Thread.java:785) [?:1.8.0-internal]
2018-06-28 05:13:05.496 -- WARN -- [nioEventLoopGroup-26493-2] io.netty.channel.AbstractChannel: Failed to mark a promise as failure because it's done already: DefaultChannelPromise@dfcd5fdd(failure: java.lang.OutOfMemoryError: Failed to create a thread: retVal -1073741830, errno 22)
java.nio.channels.ClosedChannelException: null
    at io.netty.channel.AbstractChannel$AbstractUnsafe.ensureOpen(...)(Unknown Source) ~[netty-transport-4.1.25.Final.jar:4.1.25.Final]


The spring jms config is:

[Spring XML configuration stripped by the mail archive]

Our broker URL is:
message.broker.url=failover:(amqp://spgmqtest1:5692,amqp://spgmqtest2:5692,amqp://spgmqtest3:5672)?failover.nested.amqp.vhost=testbroker=0=simon411-=simon411-







Broker-J BDB JE High Availability Time Sync issue

2018-04-16 Thread Bryan Dixon
We are using Broker-J 7.0.2 and just ran into a Berkeley HA Time Sync issue
that I'm wondering if anyone else has run into.  The stackTrace is at the
end of this post.   We are running on Windows Server 2012 R2 6.3 amd64 and
our JDK is Oracle Corporation 1.8.0_162-b12.  We have 3 servers as part of
our HA setup.

This error occurred in our production environment, which has been live for
just a couple of weeks.  We never ran into this in our Test or Dev
environments, which have been running for a few months.  When one of our
admins checked the clock times of all 3 servers they were completely in
sync.  Another admin stated that the server clock times are synced with NTP.
Unfortunately our log files rolled off, and I don't know exactly when this
error first occurred because the older log files are gone.

2018-04-16 04:10:57,039 ERROR [Group-Change-Learner:prodbroker:prodbroker2] (o.a.q.s.u.ServerScopedRuntimeException) - Exception on master check
com.sleepycat.je.EnvironmentFailureException: (JE 7.4.5) Environment must be closed, caused by: com.sleepycat.je.EnvironmentFailureException: Environment invalid because of previous exception: (JE 7.4.5) prodbroker2(2):D:\qpidwork\prodbroker2\config Clock delta: 8859 ms. between Feeder: prodbroker1 and this Replica exceeds max permissible delta: 2000 ms. HANDSHAKE_ERROR: Error during the handshake between two nodes. Some validity or compatibility check failed, preventing further communication between the nodes. Environment is invalid and must be closed. Originally thrown by HA thread: UNKNOWN prodbroker2(2) Originally thrown by HA thread: UNKNOWN prodbroker2(2)
    at com.sleepycat.je.EnvironmentFailureException.wrapSelf(EnvironmentFailureException.java:228)
    at com.sleepycat.je.dbi.EnvironmentImpl.checkIfInvalid(EnvironmentImpl.java:1766)
    at com.sleepycat.je.dbi.EnvironmentImpl.checkOpen(EnvironmentImpl.java:1775)
    at com.sleepycat.je.Environment.checkOpen(Environment.java:2473)
    at com.sleepycat.je.DbInternal.checkOpen(DbInternal.java:105)
    at com.sleepycat.je.rep.ReplicatedEnvironment.checkOpen(ReplicatedEnvironment.java:1052)
    at com.sleepycat.je.rep.ReplicatedEnvironment.getState(ReplicatedEnvironment.java:764)
    at org.apache.qpid.server.store.berkeleydb.replication.ReplicatedEnvironmentFacade$RemoteNodeStateLearner.executeDatabasePingerOnNodeChangesIfMaster(ReplicatedEnvironmentFacade.java:2276)
    at org.apache.qpid.server.store.berkeleydb.replication.ReplicatedEnvironmentFacade$RemoteNodeStateLearner.call(ReplicatedEnvironmentFacade.java:2042)
    at org.apache.qpid.server.store.berkeleydb.replication.ReplicatedEnvironmentFacade$RemoteNodeStateLearner.call(ReplicatedEnvironmentFacade.java:2012)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: com.sleepycat.je.EnvironmentFailureException: Environment invalid because of previous exception: (JE 7.4.5) prodbroker2(2):D:\qpidwork\prodbroker2\config Clock delta: 8859 ms. between Feeder: prodbroker1 and this Replica exceeds max permissible delta: 2000 ms. HANDSHAKE_ERROR: Error during the handshake between two nodes. Some validity or compatibility check failed, preventing further communication between the nodes. Environment is invalid and must be closed. Originally thrown by HA thread: UNKNOWN prodbroker2(2) Originally thrown by HA thread: UNKNOWN prodbroker2(2)
    at com.sleepycat.je.rep.stream.ReplicaFeederHandshake.checkClockSkew(ReplicaFeederHandshake.java:424)
    at com.sleepycat.je.rep.stream.ReplicaFeederHandshake.execute(ReplicaFeederHandshake.java:261)
    at com.sleepycat.je.rep.impl.node.Replica.initReplicaLoop(Replica.java:691)
    at com.sleepycat.je.rep.impl.node.Replica.runReplicaLoopInternal(Replica.java:474)
    at com.sleepycat.je.rep.impl.node.Replica.runReplicaLoop(Replica.java:409)
    at com.sleepycat.je.rep.impl.node.RepNode.run(RepNode.java:1873)






Re: JMSMessageID differences in JMS 0.30.0 and JMS AMQP 0-x 6.3.0 clients

2018-03-23 Thread Bryan Dixon
We are getting close to going live in production with Broker-J, and since I
need to stabilize everything, what I'm going to do for the time being is set
the JVM property qpid.amqp.version to 0-91.  I tested all our clients with
that and was able to get the desired results.  If you think there are any
issues with putting this in a production environment, please let me know.

Thanks
Bryan






Re: JMSMessageID differences in JMS 0.30.0 and JMS AMQP 0-x 6.3.0 clients

2018-03-22 Thread Bryan Dixon
I have an example attached.  The PublishMessage code needs to use the JMS
0.30.0 client.  The BrowseMessage code needs to use the JMS AMQP 0-x 6.3.0
client.  When you run BrowseMessage after publishing the message, you'll
see that it can't find the message by the JMSMessageID it is returning.

What's more interesting is that in BrowseMessage, if you uncomment this code:
System.setProperty("qpid.amqp.version", "0-8");

it works successfully, and the JMSMessageID is what I get when I get the
message ID from a JMS 0.30.0 client.


PublishMessage.java
BrowseMessage.java






JMSMessageID differences in JMS 0.30.0 and JMS AMQP 0-x 6.3.0 clients

2018-03-22 Thread Bryan Dixon
We are using Qpid JMS 0.30.0 to publish messages to a Broker-J 7.0.2 broker. 
We have some java apps that have to use the older JMS AMQP 0-x 6.3.0 client
and others that can use the newer JMS 0.30.0 client.  I just found that
calling JMS Message.getJMSMessageID() for the same message returns two
different message IDs depending on the client version.  That by itself I
guess is OK, though not what I would expect.  The bigger issue I just
noticed with the older JMS AMQP 0-x 6.3.0 client is that when I get the
JMSMessageID value, I can't use the returned value in a message selector to
retrieve (either consume or browse) that message - the message is not found.

Here is an example.  I published a message and the JMS 0.30.0 client
returns the following value from the Message.getJMSMessageID() call:
ID:eventapp-40500566-2ca4-439e-9a2d-4e00c88ea682:9:1:1-1

However, when using the JMS AMQP 0-x 6.3.0 client, the Message.getJMSMessageID()
call on the same exact message returns the following ID:
ID:fcae0a4b-9810-3214-b161-8572e5df8ea6

When I use the QueueBrowser class with the JMS AMQP 0-x 6.3.0 client to
browse the message using this message selector the message is not returned
by the browse() method:
JMSMessageID ='ID:fcae0a4b-9810-3214-b161-8572e5df8ea6'

It doesn't seem right that the 6.3.0 client can't retrieve the message by
the JMSMessageID it returned.

I need a way to retrieve a message by its ID for the 6.3.0 client.  This
does work successfully with the JMS 0.30.0 client.
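One observation that may help whoever picks this up: stripping the "ID:" prefix, the value the 0-x client returns parses as a version 3 (name-based, MD5-hashed) UUID, which suggests the 0-x client derives a synthetic ID rather than passing the broker's ID through unchanged. That reading is my assumption, not confirmed client behavior; the version check itself is plain JDK:

```java
import java.util.UUID;

// The 0.30.0 client returned "ID:eventapp-...:9:1:1-1" while the 0-x
// client returned "ID:fcae0a4b-9810-3214-b161-8572e5df8ea6" for the same
// message. The latter parses as a UUID whose version field is 3
// (name-based, MD5) -- i.e. it looks derived, not broker-assigned.
public class MessageIdInspect {
    // Returns the RFC 4122 version field of a JMSMessageID's UUID part.
    public static int uuidVersion(String jmsMessageId) {
        String raw = jmsMessageId.startsWith("ID:") ? jmsMessageId.substring(3) : jmsMessageId;
        return UUID.fromString(raw).version();
    }

    public static void main(String[] args) {
        System.out.println(uuidVersion("ID:fcae0a4b-9810-3214-b161-8572e5df8ea6")); // prints 3
    }
}
```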

Bryan










Re: Broker-J Queue Message persistent override

2018-03-22 Thread Bryan Dixon
Rob, I wouldn't know whether messages are being stored or not.  I'm assuming
they are not, since a broker restart causes the messages to be 'lost', but
that's just my assumption.

At the end of the day Keith is correct - the behavior is what I wanted:
the messages don't appear to be persistent (to me, persistent means stored).
It was just very confusing that the web admin UI displayed Persistent: true
for messages, and the JMS JMSDeliveryMode header also reported the message
as persistent, yet in the web admin UI I set Persist Messages? to NEVER.
That's why I was concerned there was an issue.

I'm also coming at this from a JMS perspective and not an AMQP one, so that
could be causing my confusion - terminology differences.

Bryan






Re: Broker-J Queue Message persistent override

2018-03-21 Thread Bryan Dixon
Alex, the queue attribute 'messageDurability' in your response isn't what I
had set to NEVER - it was the Persist Messages? attribute that I set to
NEVER.  I want to make sure I'm on the same page as you guys, so I'll
describe what I did and saw.

Firstly, my original post was referring to the attribute 'Message persistent
override' from this documentation:
https://qpid.apache.org/releases/qpid-broker-j-7.0.2/book/Java-Broker-Management-Managing-Queues.html.

There is a 'Durable' attribute documented on that page also, and my current
setting is enabled (checked on the admin UI), but that wasn't what I was
wanting to change/test.

Using the web admin UI I changed a queue's Persist Messages? value to NEVER
(see the attached screenshot).

I then ran this Java code using the Qpid JMS 0.30.0 jars:

ConnectionFactory connectionFactory = new JmsConnectionFactory(brokerUrl);
Connection connection = connectionFactory.createConnection(user, pwd);
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Destination destination = session.createQueue("app_attach_ehcacheReplicateQueue");
MessageProducer messageProducer = session.createProducer(destination);
TextMessage message = session.createTextMessage("Hello world! " + new Timestamp(System.currentTimeMillis()));
messageProducer.send(message);
 
It published the message to the queue, and when I view the message via the
web admin UI it shows Persistent: true (see the attached screenshot).
I expected Persistent: false based on my change to NEVER and what I
understood from the documentation.

Also, when I ran this Java code (using JMS 0.30.0 again), it returned a
value of 2, which indicates a persistent message:

MessageConsumer messageConsumer = session.createConsumer(destination);
TextMessage message = (TextMessage) messageConsumer.receive();
System.out.println(String.format("persist?: %s", message.getJMSDeliveryMode()));
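The 2 here is the JMS spec constant for DeliveryMode.PERSISTENT (NON_PERSISTENT is 1); a tiny stdlib-only helper makes the header value readable when logging:

```java
// JMS spec constants: DeliveryMode.NON_PERSISTENT == 1,
// DeliveryMode.PERSISTENT == 2. Stdlib-only helper to make
// getJMSDeliveryMode() output human-readable.
public class DeliveryModeName {
    public static String name(int mode) {
        switch (mode) {
            case 1:  return "NON_PERSISTENT";
            case 2:  return "PERSISTENT";
            default: return "UNKNOWN(" + mode + ")";
        }
    }

    public static void main(String[] args) {
        System.out.println(name(2)); // prints PERSISTENT
    }
}
```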

However, after reading Alex's response of 'Thus, on broker restart the
queue entries would be removed.', I thought I would see what happened to the
message after a broker restart.  When I restarted the broker the message was
gone.

So now I'm actually a little more confused than I originally was, but
perhaps I'm not fully understanding this.  Given that the message was marked
as persistent (even though I didn't want it to be) and the queue is marked
as durable (the checkbox is checked on the web admin UI), the message was
lost, which doesn't seem right?
 
The virtual host environment I have is a 3-VirtualHostNode setup for HA.  I
didn't test this on a single VirtualHostNode.

Thanks
Bryan


