Re: Agent not able to connect

2018-06-03 Thread Roman Guseinov
Hi Humphrey, 

Currently, the front-end is included in the back-end container as static files
(HTML, JS). This way we don't have to open two ports; port 80 is fully
sufficient. Port 3001 isn't used now.

Regarding documentation, we don't have any for the web agent Docker image
because it isn't public yet. I hope it will be published soon.

According to your logs, the agent is now able to connect to both the web
console and the Ignite node.

Best Regards,
Roman



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Agent not able to connect

2018-06-01 Thread Humphrey
Hi Roman, I do understand the front-end part; I think that is the UI we get
when we connect to the web console at http://webconsole and see the login
page.

What I don't understand is the back-end part: what does it do? I thought that
was where the agent is supposed to connect (port 3000 or 3001).

I didn't specify all environment variables; these are also missing from the
documentation:



But now I did specify the driver folder as well as the node uri.

The node is accessible from http://localhost:8080 because the agent is
deployed in the same Docker container as the server node. I do get 1 cluster
connected in the web console UI.

[2018-06-01 11:08:40,676][INFO ][main][AgentLauncher] Starting Apache Ignite
Web Console Agent...
[2018-06-01 11:08:41,277][WARN ][main][AgentLauncher] Failed to find agent
property file: default.properties

Agent configuration:

User's security tokens : 2lKB
URI to Ignite node REST server: http://localhost:8080
URI to Ignite Console server : http://demo-webconsole
Path to agent property file : default.properties
Path to JDBC drivers folder : ./jdbc-drivers

Demo mode : enabled

[2018-06-01 11:08:41,873][INFO ][main][AgentLauncher] Connecting to:
http://demo-webconsole
[2018-06-01 11:08:42,499][INFO ][EventThread][AgentLauncher] Connection
established.
[2018-06-01 11:08:42,890][INFO ][EventThread][AgentLauncher] Authentication
success.
[2018-06-01 11:08:43,140][WARN ][pool-1-thread-1][RestExecutor] Failed
connect to cluster [url=http://localhost:8080, parameters={attr=true,
mtr=false, cmd=top}]
[2018-06-01 11:08:43,141][WARN ][pool-1-thread-1][RestExecutor] Failed
connect to cluster. Please ensure that nodes have [ignite-rest-http] module
in classpath (was copied from libs/optional to libs folder).
[2018-06-01 11:08:49,647][INFO ][pool-1-thread-1][RestExecutor] Connected to
cluster [url=http://localhost:8080]
[2018-06-01 11:08:49,690][INFO ][pool-1-thread-1][ClusterListener]
*Connection successfully established to cluster with nodes: [F2708F6F]*

And the command from the agent docker container:
wget -qO- ${NODE_URI}/ignite?cmd=version  
*{"successStatus":0,"sessionToken":null,"error":null,"response":"2.5.0"}*

Humphrey






Re: Agent not able to connect

2018-05-31 Thread Roman Guseinov
Hi Humphrey,

1. I checked the details about the ports. It was required to expose port 3001
when the back-end and front-end were split into different Docker images;
3001 was used by the back-end. Currently, we don't need to expose it [1].

2. Did you specify all env variables (they shouldn't be empty)? I don't see
them in your logs.

3. Is the Ignite node accessible from inside the Docker container at
http://localhost:8080? I am not sure.

Could you try connecting to the web-agent container and checking that the
Ignite node REST URL is correct?

$ wget -qO- ${NODE_URI}/ignite?cmd=version

Best Regards,
Roman

[1] https://apacheignite-tools.readme.io/v2.5/docs/docker-deployment





Re: Agent not able to connect

2018-05-31 Thread Humphrey
Okay, I got it running. I think the agent is really using port 80 instead of
3000 or 3001.
Can you explain what ports 3000 and 3001 are for?

[2018-05-31 12:27:44,882][INFO ][main][AgentLauncher] Starting Apache Ignite
Web Console Agent...
[2018-05-31 12:27:45,498][WARN ][main][AgentLauncher] Failed to find agent
property file: default.properties

Agent configuration:
User's security tokens : fjp3
URI to Ignite node REST server: http://localhost:8080
URI to Ignite Console server : http://172.17.0.3
Path to agent property file : default.properties
Path to JDBC drivers folder : -n

Demo mode : enabled
[2018-05-31 12:27:46,184][INFO ][main][AgentLauncher] Connecting to:
http://172.17.0.3
[2018-05-31 12:27:46,804][INFO ][EventThread][AgentLauncher] Connection
established.
[2018-05-31 12:27:47,149][INFO ][EventThread][AgentLauncher] Authentication
success.
[2018-05-31 12:27:47,281][WARN ][pool-1-thread-1][RestExecutor] Failed
connect to cluster [url=http://localhost:8080, parameters={attr=true,
mtr=false, cmd=top}]
[2018-05-31 12:27:47,281][WARN ][pool-1-thread-1][RestExecutor] Failed
connect to cluster. Please ensure that nodes have [ignite-rest-http] module
in classpath (was copied from libs/optional to libs folder).
[2018-05-31 12:27:53,574][INFO ][pool-1-thread-1][RestExecutor] Connected to
cluster [url=http://localhost:8080]
[2018-05-31 12:27:53,647][INFO ][pool-1-thread-1][ClusterListener]
Connection successfully established to cluster with nodes: [D4B57CC9]





Re: Agent not able to connect

2018-05-31 Thread Roman Guseinov
Hi Humphrey,

1. Could you try to specify `URI to Ignite Console server` without a port
(just http://172.17.0.3)?

2. If you run the web agent inside a Docker container, you won't be able to
connect to a cluster node using http://localhost:8080.

3. If you use the Docker deployment for the web console, please make sure
that you exposed port 3001.

4. Also, it looks like you didn't specify all env variables (they shouldn't
be empty). Please take a look at the example:

  environment:
- DRIVER_FOLDER=./jdbc-drivers
- NODE_URI=http://192.168.1.107:8080
- SERVER_URI=http://192.168.1.107
- TOKENS=hpTBDM6DGuG1YPTQrGs5
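If you launch the agent with plain `docker run` rather than compose, the same
variables can be passed as -e flags. A sketch only, using the values above;
the image name "apacheignite/web-agent" is an assumption, substitute whatever
image you actually use:

```shell
# Sketch: build the docker run command with the same env variables as the
# compose snippet above. Image name is an assumption, not confirmed.
build_agent_cmd() {
  echo "docker run -d" \
    "-e DRIVER_FOLDER=./jdbc-drivers" \
    "-e NODE_URI=http://192.168.1.107:8080" \
    "-e SERVER_URI=http://192.168.1.107" \
    "-e TOKENS=hpTBDM6DGuG1YPTQrGs5" \
    "apacheignite/web-agent"
}
build_agent_cmd
```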

I hope it will help.

Best Regards,
Roman






Re: Agent not able to connect

2018-05-31 Thread Humphrey
Let me put the picture I have in my head here.

Pod 1 < ip = 12345 >:
- container 1: Ignite Server node
- container 2: Web Agent

Pod 2 < ip = 67890 >:
- container 1: Web Console Standalone

I assume that every node needs an Agent next to it to communicate with the
WebConsole.
When I scale up Pod 1, it will scale up my Server Node and my Agents, so I
will have more agents talking to the single deployed WebConsole.

The WebConsole uses frontend port 80, where I can log in and execute
queries.
The WebConsole has backend port 3000 open for the Agent to talk to.
The agent will have to be configured with webconsole 67890:3000 to
communicate with, and with 12345:8080 (or localhost:8080) for the server
node, as they will always be on the same machine (Pod).

But the WebConsole starts up listening only on 127.0.0.1:3000, so my agent
gets connection refused because it has a different IP address than the
webconsole.

Do we need 1 agent talking to the whole cluster, or 1 agent per Ignite Node
(Pod)?
If we only need one agent, then I will move the agent to the same Pod as the
WebConsole.

Humphrey





Re: Agent not able to connect

2018-05-31 Thread Petr Ivanov
Roman — can you advise?



> On 31 May 2018, at 13:32, Humphrey  wrote:
> 
> I've done the steps you noted but it seems to do the same as the other 2.4
> version.
> 
> web-console-standalone:
> 
> 09:55:33 0|index | All Ignite migrations finished successfully.
> 09:55:33 0|index | Running Ignite Modules migrations...
> 09:55:33 0|index | There are no Ignite Modules migrations to run.
> 09:55:33 0|index | Start listening on 127.0.0.1:3000
> 
> web-agent:
> 
> Agent configuration:
> User's security tokens: fjp3
> URI to Ignite node REST server: http://localhost:8080
> URI to Ignite Console server  : http://172.17.0.3:3000
> Path to agent property file   : default.properties
> Path to JDBC drivers folder   : -n
> Demo mode : enabled
> 
> [2018-05-31 10:26:57,704][INFO ][main][AgentLauncher] Connecting to:
> http://172.17.0.3:3000
> [2018-05-31 10:26:57,855][ERROR][EventThread][AgentLauncher] Failed to
> establish connection to server (connection refused).
> [2018-05-31 10:26:59,062][ERROR][EventThread][AgentLauncher] Failed to
> establish connection to server (connection refused).
> [2018-05-31 10:27:01,219][ERROR][EventThread][AgentLauncher] Failed to
> establish connection to server (connection refused).
> [2018-05-31 10:27:04,697][ERROR][EventThread][AgentLauncher] Failed to
> establish connection to server (connection refused).
> 
> The Pod running the web-console-standalone has IP 172.17.0.3.
> 
> 
> 



Re: Agent not able to connect

2018-05-31 Thread vbm
Hi Humphrey,

The default frontend port is 80 and the default backend port is 3000.
So for the web-agent to connect to the web-console, you need to provide
<host>:<backend port>.

I think you are using backend port 3000.
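In other words, the agent's console URI is assembled from the console host
and the backend port, not the frontend port. A tiny sketch; the host address
is taken from Humphrey's logs earlier in the thread:

```shell
# Sketch: the agent's SERVER_URI should point at the console's backend
# port (3000 by default), not the frontend port 80.
CONSOLE_HOST=172.17.0.3
BACKEND_PORT=3000
SERVER_URI="http://${CONSOLE_HOST}:${BACKEND_PORT}"
echo "$SERVER_URI"
```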


Regards,
Vishwas





Re: Agent not able to connect

2018-05-31 Thread Humphrey
I've done the steps you noted, but it seems to behave the same as the 2.4
version.

web-console-standalone:

09:55:33 0|index | All Ignite migrations finished successfully.
09:55:33 0|index | Running Ignite Modules migrations...
09:55:33 0|index | There are no Ignite Modules migrations to run.
09:55:33 0|index | Start listening on 127.0.0.1:3000

web-agent:

Agent configuration:
User's security tokens: fjp3
URI to Ignite node REST server: http://localhost:8080
URI to Ignite Console server  : http://172.17.0.3:3000
Path to agent property file   : default.properties
Path to JDBC drivers folder   : -n
Demo mode : enabled

[2018-05-31 10:26:57,704][INFO ][main][AgentLauncher] Connecting to:
http://172.17.0.3:3000
[2018-05-31 10:26:57,855][ERROR][EventThread][AgentLauncher] Failed to
establish connection to server (connection refused).
[2018-05-31 10:26:59,062][ERROR][EventThread][AgentLauncher] Failed to
establish connection to server (connection refused).
[2018-05-31 10:27:01,219][ERROR][EventThread][AgentLauncher] Failed to
establish connection to server (connection refused).
[2018-05-31 10:27:04,697][ERROR][EventThread][AgentLauncher] Failed to
establish connection to server (connection refused).

The Pod running the web-console-standalone has IP 172.17.0.3.





Re: Agent not able to connect

2018-05-31 Thread Petr Ivanov
This image is delivered only within the Apache Ignite Nightly Builds for
now.
To pull the image from the link I've provided, download the corresponding
tar.gz archive, gunzip it, and use 'docker load < web-agent-*-docker-image.tar'
to add it to the local registry.
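The steps above can be sketched as follows. The archive name here is only an
example; use the actual filename downloaded from the nightly-build artifacts
page (this is a dry run that prints the commands rather than executing them):

```shell
# Sketch of the download-then-load steps. ARCHIVE is a placeholder name.
ARCHIVE=web-agent-2.5.0-docker-image.tar.gz
TAR=${ARCHIVE%.gz}   # gunzip strips the .gz suffix, leaving the tar
echo "gunzip $ARCHIVE && docker load < $TAR"
```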



> On 31 May 2018, at 11:57, Humphrey  wrote:
> 
> How do I do a pull from that repository? Currently my docker is only looking
> at dockerhub.
> 
> 
> 



Re: Agent not able to connect

2018-05-31 Thread Humphrey
How do I pull from that repository? Currently my Docker is only looking at
Docker Hub.





Re: Agent not able to connect

2018-05-30 Thread Peter Ivanov
Humphrey,

AFAIK, Web Console Standalone is deliberately designed to include a
self-contained Web Agent.
Can you try the separate Web Console Docker image from the experimental
nightly build? [1]


[1]
https://ci.ignite.apache.org/viewLog.html?buildId=1320479&buildTypeId=Releases_NightlyRelease_ApacheIgniteNightlyRelease3AssembleDockerImageIgnite8526&tab=artifacts&guest=1


On Wed, 30 May 2018 at 19:52, Humphrey  wrote:

> Looks like it can't connect to the webconsole as the webconsole is only
> listening to localhost:
>
> Start listening on 127.0.0.1:3000
>
> How can we change this so that the agents (which have different IP) in the
> cluster are able to communicate with the webconsole?
>
>
>


Re: Agent not able to connect

2018-05-30 Thread Humphrey
Looks like it can't connect to the webconsole, as the webconsole is only
listening on localhost:

Start listening on 127.0.0.1:3000

How can we change this so that the agents (which have different IPs) in the
cluster are able to communicate with the webconsole?
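One way to see the problem: a process bound to 127.0.0.1 accepts loopback
clients only, so an agent in another container or Pod is always refused; the
console would need to bind 0.0.0.0 (all interfaces). A small sketch of that
check, as a plain string classification of the log line's listen address:

```shell
# Sketch: classify a listen address as reported in the console's
# "Start listening on ..." log line. Loopback addresses are unreachable
# from other containers; 0.0.0.0 would accept the agent's traffic.
is_reachable_from_other_hosts() {
  case "$1" in
    127.*|localhost:*) echo no ;;   # loopback only
    *) echo yes ;;                  # bound to a routable interface
  esac
}
is_reachable_from_other_hosts 127.0.0.1:3000
is_reachable_from_other_hosts 0.0.0.0:3000
```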





Re: Agent not able to connect

2018-05-30 Thread Humphrey
The web agent I used was from apacheignite/web-console-standalone on Docker
Hub.
I am able to log in, and from the profile page I can see the token the agent
is using.





Agent not able to connect

2018-05-30 Thread Humphrey
Hi,

I'm trying to deploy Ignite in minikube.
I've started the WebConsole in a separate container (and a separate Pod).
When I start my other Pod, which contains my ServerNode and WebAgent, I get
the error below. How can I solve that?

It looks like it is able to connect to the REST server on the same machine;
when I do
curl http://localhost:8080/ignite?cmd=version
I do get:
{"successStatus":0,"sessionToken":null,"response":"2.4.0","error":null}

However, it seems it cannot connect to the webconsole:
curl http://demo-webconsole:3001
curl: (7) Failed to connect to demo-webconsole port 3001: Connection refused

Why is this connection being refused? I'm using a valid token.
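To narrow this down, one option is to probe each candidate console port from
inside the agent's container and see which one actually accepts connections.
A sketch, assuming curl is available in the container; "demo-webconsole" is
the service name from the message above:

```shell
# Sketch: probe candidate web console ports (80 = frontend, 3000/3001 =
# historical backend ports) to see which one accepts TCP connections.
probe() {  # usage: probe <host> <port>
  if curl -s -o /dev/null --connect-timeout 2 "http://$1:$2"; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed"
  fi
}
probe demo-webconsole 80
probe demo-webconsole 3000
probe demo-webconsole 3001
```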




