Instead of using a standalone cluster installed in another Docker container, I started the Spark cluster on the same container as the train & deploy server, and that works. I don't understand why it makes a difference; it's very strange that it only works when Spark runs locally in that container.
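
Concretely, "on the same container" means roughly the following (a sketch, assuming a Spark 2.x install under $SPARK_HOME inside the train & deploy container; hostnames and paths are whatever your image uses):

# start a standalone master and one worker inside the train & deploy container
$SPARK_HOME/sbin/start-master.sh
$SPARK_HOME/sbin/start-slave.sh spark://$(hostname):7077

# point pio at that local master, with the same flags as before
pio train -v engine.json -- --master spark://$(hostname):7077 --executor-memory 2G --driver-memory 2G --total-executor-cores 1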





At 2018-05-10 11:12:04, "王斌斌" <hellomsg_nore...@163.com> wrote:

https://stackoverflow.com/questions/50256449/deploy-predictionio-with-spark-standalone-cluster



I use the official Recommendation template as a test. I completed these steps successfully:

event server installed in a Docker container
event data, metadata, and everything else configured to be stored in MySQL (see the pio-env.sh sketch after this list)
train & deploy server set up in another Docker container
Spark standalone cluster set up
new app created
enough event data imported
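
For the MySQL step, pio-env.sh needs roughly the following (a sketch using PredictionIO's standard JDBC storage settings; the host, database name, and credentials are placeholders, and the MySQL JDBC driver jar has to be on PredictionIO's classpath):

PIO_STORAGE_REPOSITORIES_METADATA_NAME=pio_meta
PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=MYSQL
PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=pio_event
PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE=MYSQL
PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model
PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=MYSQL

PIO_STORAGE_SOURCES_MYSQL_TYPE=jdbc
PIO_STORAGE_SOURCES_MYSQL_URL=jdbc:mysql://mysqlhost:3306/pio
PIO_STORAGE_SOURCES_MYSQL_USERNAME=pio
PIO_STORAGE_SOURCES_MYSQL_PASSWORD=pio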

When I train and deploy as follows, it works as the docs describe:

pio train
pio deploy
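
To confirm that deployment, the engine can be queried with the sample request from the Recommendation quickstart (8000 is the default engine server port; the user id is only an example):

curl -H "Content-Type: application/json" -d '{ "user": "1", "num": 4 }' http://localhost:8000/queries.json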


But when I use the Spark standalone cluster and train and deploy as follows, training is OK (the new model is stored in MySQL), but the deploy does not succeed.

pio train -v engine.json -- --master spark://predictionspark:7077 --executor-memory 2G --driver-memory 2G --total-executor-cores 1
pio deploy -v engine.json --feedback --event-server-ip predictionevent --event-server-port 7070 --accesskey Th7k5gE5yEu9ZdTdM6KdAj0InDrLNJQ1U3qEBy7dbMnYgTxWx5ALNAa2hKjqaHSK -- --master spark://predictionspark:7077 --executor-memory 2G --driver-memory 2G --total-executor-cores 1
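
As a quick sanity check on the cluster itself (8080 is the default standalone master web UI port, and predictionspark is the master hostname used in the commands above), the master's web page should be reachable from the train & deploy container and should list the registered worker and the finished training application:

curl -s http://predictionspark:8080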


deploy ERROR log:

...
flb_flb_1 | 2018-05-09T09:56:20.410043835Z [INFO] [Engine] Using persisted model
flb_flb_1 | 2018-05-09T09:56:20.411705255Z [INFO] [Engine] Custom-persisted model detected for algorithm org.example.recommendation.ALSAlgorithm
flb_flb_1 | 2018-05-09T09:56:21.263570490Z [ERROR] [OneForOneStrategy] empty collection


I don't know why.