Thanks for your response. 
I looked into the extra tests you mentioned. I created a simple
application that acts as the HornetQ client, using the code from the test.
Here is my Java code.
============================
package hornetqTest;

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

import org.hornetq.api.core.Message;
import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.core.client.ClientConsumer;
import org.hornetq.api.core.client.ClientMessage;
import org.hornetq.api.core.client.ClientProducer;
import org.hornetq.api.core.client.ClientSession;
import org.hornetq.api.core.client.ClientSessionFactory;
import org.hornetq.api.core.client.HornetQClient;
import org.hornetq.api.core.client.ServerLocator;
import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;

public class HornetqClientTest {

    private static ClientSession createHQClientSession() throws Exception {
        System.out.println("in createHQClientSession");

        // Netty connector parameters pointing at the broker's "netty" acceptor.
        Map<String, Object> map = new HashMap<>();
        map.put("host", "localhost");
        map.put("port", 5445);

        System.out.println("before create server locator");
        ServerLocator serverLocator = HornetQClient.createServerLocatorWithoutHA(
                new TransportConfiguration(NettyConnectorFactory.class.getName(), map));
        System.out.println("after create server locator");

        System.out.println("before create session factory");
        ClientSessionFactory sf = serverLocator.createSessionFactory();
        System.out.println("after create session factory");

        System.out.println("before create session");
        return sf.createSession();
    }

    public static void main(String[] args) {
        try {
            System.out.println("in main");
            ClientSession session = createHQClientSession();
            System.out.println("after create session");

            // Create a durable queue plus a producer and a consumer on it.
            String queueName = "test.hq.queue";
            session.createQueue(queueName, queueName, true);

            ClientProducer producer = session.createProducer(queueName);
            ClientConsumer consumer = session.createConsumer(queueName);
            ClientMessage message = session.createMessage(false);

            // Tag the message with a duplicate-detection ID, as in the test code.
            String messageId = UUID.randomUUID().toString();
            message.putStringProperty(Message.HDR_DUPLICATE_DETECTION_ID.toString(), messageId);

            // Send the same message (same duplicate-detection ID) three times
            // and try to receive after each send.
            session.start();
            producer.send(message);
            ClientMessage m = consumer.receive(1000);

            producer.send(message);
            m = consumer.receive(1000);

            producer.send(message);
            m = consumer.receive(1000);

            session.close();
        } catch (Exception e) {
            System.out.println(e.toString());
        }
    }
}
============================
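
As a sanity check, one thing I am thinking of trying next is the equivalent
connection with the Artemis core client instead of the HornetQ one, to separate
a protocol problem from a plain connectivity problem. A minimal sketch of what
I have in mind (assuming the artemis-core-client jar and its Netty dependency
are on the classpath; I have not run this yet):

============================
package hornetqTest;

import java.util.HashMap;
import java.util.Map;

import org.apache.activemq.artemis.api.core.TransportConfiguration;
import org.apache.activemq.artemis.api.core.client.ActiveMQClient;
import org.apache.activemq.artemis.api.core.client.ClientSession;
import org.apache.activemq.artemis.api.core.client.ClientSessionFactory;
import org.apache.activemq.artemis.api.core.client.ServerLocator;
import org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory;

public class ArtemisClientTest {

    public static void main(String[] args) throws Exception {
        // Same Netty connector parameters as the HornetQ client above.
        Map<String, Object> map = new HashMap<>();
        map.put("host", "localhost");
        map.put("port", 5445);

        ServerLocator locator = ActiveMQClient.createServerLocatorWithoutHA(
                new TransportConfiguration(NettyConnectorFactory.class.getName(), map));
        ClientSessionFactory sf = locator.createSessionFactory();
        ClientSession session = sf.createSession();
        System.out.println("artemis core session created");

        session.close();
        sf.close();
        locator.close();
    }
}
============================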

When I run the HornetQ client above against the Artemis server, I get the following output:

============================
in main
in createHQClientSession
before create server locator
after create server locator
before create session factory
13:44:08.412 [main] DEBUG i.n.u.i.l.InternalLoggerFactory - Using SLF4J as the default logging framework
13:44:08.424 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
13:44:08.425 [main] DEBUG i.n.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
13:44:08.429 [main] DEBUG i.n.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
13:44:08.429 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: true
13:44:08.582 [main] DEBUG i.n.util.internal.PlatformDependent - UID: 0
13:44:08.582 [main] DEBUG i.n.util.internal.PlatformDependent - Java version: 8
13:44:08.582 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noUnsafe: false
13:44:08.583 [main] DEBUG i.n.util.internal.PlatformDependent - sun.misc.Unsafe: available
13:44:08.583 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noJavassist: false
13:44:08.755 [main] DEBUG i.n.util.internal.PlatformDependent - Javassist: available
13:44:08.755 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
13:44:08.756 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.bitMode: 64 (sun.arch.data.model)
13:44:08.756 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false
13:44:08.802 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetectionLevel: simple
13:44:08.821 [main] DEBUG i.n.c.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 2
13:44:08.878 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.noKeySetOptimization: false
13:44:08.878 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.selectorAutoRebuildThreshold: 512
13:44:08.918 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 1
13:44:08.918 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 1
13:44:08.919 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192
13:44:08.919 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 11
13:44:08.919 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 16777216
13:44:08.919 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.tinyCacheSize: 512
13:44:08.919 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256
13:44:08.928 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64
13:44:08.928 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
13:44:08.928 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192
13:44:08.955 [main] DEBUG i.n.util.internal.ThreadLocalRandom - -Dio.netty.initialSeedUniquifier: 0x64184e6bd1e0675f (took 0 ms)
13:44:08.976 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: unpooled
13:44:08.977 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 65536
13:44:09.016 [main] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacity.default: 262144
HornetQConnectionTimedOutException[errorType=CONNECTION_TIMEDOUT message=HQ119013: Timed out waiting to receive cluster topology. Group:null]
============================

Note that this is the same error I received initially.
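
In case this is just a timing problem rather than a protocol problem, I also
plan to retry with the client-side timeouts raised on the locator before
creating the session factory. A rough sketch, with the caveat that I am only
assuming the topology wait is bounded by the locator's call timeout:

============================
// Same setup as in createHQClientSession(), with longer timeouts.
ServerLocator serverLocator = HornetQClient.createServerLocatorWithoutHA(
        new TransportConfiguration(NettyConnectorFactory.class.getName(), map));
serverLocator.setCallTimeout(120000);             // default is 30000 ms
serverLocator.setConnectionTTL(120000);
serverLocator.setClientFailureCheckPeriod(60000); // keep this below the TTL
ClientSessionFactory sf = serverLocator.createSessionFactory();
============================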

Here is the part of the server's broker.xml that I think is relevant to the
connection:

============================
      <connectors>
         <connector name="netty">tcp://localhost:5445</connector>
         <connector name="netty-throughput">tcp://localhost:5455</connector>
      </connectors>

      <acceptors>
         <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576</acceptor>

         <acceptor name="amqp">tcp://0.0.0.0:5672?protocols=AMQP</acceptor>

         <acceptor name="stomp">tcp://0.0.0.0:61613?protocols=STOMP</acceptor>

         <acceptor name="netty">tcp://0.0.0.0:5445</acceptor>

         <acceptor name="netty-throughput">tcp://0.0.0.0:5455</acceptor>

         <acceptor name="netty-ssl">tcp://0.0.0.0:5500?sslEnabled=true;keyStorePath=/var/idefender/idefender-message-server/config/idefender.keystore;keyStorePassword=idefender;trustStorePath=/var/idefender/config/idefender-truststore.jks;trustStorePassword=ve.3ranok;</acceptor>

         <acceptor name="mqtt">tcp://0.0.0.0:1883?protocols=MQTT</acceptor>
      </acceptors>
============================
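
One more variation I may try, based on this config: pointing the same HornetQ
client at the default "artemis" acceptor on 61616 instead of 5445, on the
assumption that an acceptor with no "protocols" restriction (like that one)
also accepts the HornetQ/CORE wire protocol:

============================
// Hypothetical change to createHQClientSession(): target the "artemis" acceptor.
Map<String, Object> map = new HashMap<>();
map.put("host", "localhost");
map.put("port", 61616);  // default acceptor port, instead of 5445
============================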

I am pretty new to both HornetQ and Artemis, so please bear with me. Thanks.





