Hi Tom,

You are right. The port seems to be closed. When I ran nmap to list open ports, 
I got the result below. At this point the engine is deployed, but port 8000 is 
still not open. How can I open it so that I can access the engine from outside 
the VM?

Nmap scan report for localhost (127.0.0.1)
Not shown: 996 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
5432/tcp open  postgresql
7070/tcp open  realserver
9200/tcp open  wap-wsp
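
A quick way to double-check whether anything inside the VM is listening on 8000 
(assuming ss or netstat is available on the guest) is:

$ ss -ltn | grep 8000        # or: netstat -ltn | grep 8000

If nothing is printed, the engine process is not listening at all; if a line 
shows 0.0.0.0:8000, the bind is fine and the problem is reaching the port from 
outside the VM.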

Regards,
Sravya
From: Tom Chan [mailto:[email protected]]
Sent: Saturday, November 5, 2016 12:25 AM
To: [email protected]
Subject: Re: Not able to access engine after successful deployment

It seems you need to set up your VM so that traffic arriving at a certain port 
on your host (8000, or another port of your choice) is forwarded to port 8000 
on the VM; then you're good to go.
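
For example, if the VM runs under VirtualBox with NAT networking (an assumption, 
since the hypervisor isn't mentioned), a forwarding rule can be added with 
VBoxManage; an SSH tunnel is an alternative that works for any VM you can SSH 
into. "MyVM" and user@<vm-ip> below are placeholders:

$ VBoxManage controlvm "MyVM" natpf1 "engine,tcp,,8000,,8000"
$ ssh -L 8000:localhost:8000 user@<vm-ip>

After either of these, http://localhost:8000 on the host should reach port 8000 
inside the VM.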

Tom

On Fri, Nov 4, 2016 at 11:48 AM, Guruju, Lakshmi Sravya 
<[email protected]<mailto:[email protected]>> wrote:
Hi,

It is configured to 127.0.0.1

Link encap:Local Loopback
inet addr:127.0.0.1  Mask:255.0.0.0

Regards,
Sravya

From: Gustavo Frederico 
[mailto:[email protected]<mailto:[email protected]>]
Sent: Friday, November 4, 2016 6:25 PM
To: [email protected]
Subject: Re: Not able to access engine after successful deployment

The 0.0.0.0 IP looks a bit odd. How is your loopback network interface 
configured? Check with ifconfig.
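
For example, either of the following shows how the loopback interface is 
configured (ip is the newer equivalent of ifconfig):

$ ifconfig lo
$ ip addr show lo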

On Fri, Nov 4, 2016 at 8:51 AM, Guruju, Lakshmi Sravya 
<[email protected]<mailto:[email protected]>> wrote:
Hi,

I have set up PredictionIO successfully on a Linux VM. The event server is 
running at http://localhost:7070.

I am using Elasticsearch as data storage, and it is up and running at 
http://localhost:9200.

But when I deploy the Recommendation engine, the deployment succeeds and yet I 
am not able to access the engine at localhost. You can see the trace below from 
the successful deployment.

[WARN] [WorkflowUtils$] Non-empty parameters supplied to 
org.template.recommendation.Preparator, but its constructor does not accept any 
arguments. Stubbing with empty parameters.
[WARN] [WorkflowUtils$] Non-empty parameters supplied to 
org.template.recommendation.Serving, but its constructor does not accept any 
arguments. Stubbing with empty parameters.
[INFO] [Remoting] Starting remoting
[INFO] [Remoting] Remoting started; listening on addresses 
:[akka.tcp://[email protected]:40932]
[WARN] [MetricsSystem] Using default name DAGScheduler for source because 
spark.app.id is not set.
[INFO] [Engine] Using persisted model
[INFO] [Engine] Custom-persisted model detected for algorithm 
org.template.recommendation.ALSAlgorithm
[WARN] [ALSModel] User factor does not have a partitioner. Prediction on 
individual records could be slow.
[WARN] [ALSModel] User factor is not cached. Prediction could be slow.
[WARN] [ALSModel] Product factor does not have a partitioner. Prediction on 
individual records could be slow.
[WARN] [ALSModel] Product factor is not cached. Prediction could be slow.
[INFO] [MasterActor] Undeploying any existing engine instance at 
http://0.0.0.0:8000
[WARN] [MasterActor] Nothing at http://0.0.0.0:8000
[INFO] [HttpListener] Bound to /0.0.0.0:8000
[INFO] [MasterActor] Engine is deployed and running. Engine API is live at 
http://0.0.0.0:8000.


$ curl -i -X GET "http://localhost:8000"
curl: (7) Failed to connect to localhost port 8000: Connection refused

I have also tried different ports, but nothing worked. Could someone please 
help me resolve this?
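
Once port 8000 is reachable, a PredictionIO engine normally answers queries at 
/queries.json, and the bind address and port can be set explicitly at deploy 
time. A minimal check, assuming the stock Recommendation template (the user id 
"1" is just a placeholder):

$ pio deploy --ip 0.0.0.0 --port 8000
$ curl -H "Content-Type: application/json" \
    -d '{ "user": "1", "num": 4 }' http://localhost:8000/queries.json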

Regards,
Sravya

