GitHub user zolo302 opened an issue:

    https://github.com/apache/incubator-predictionio/issues/298

    Fail to run "pio deploy" in cluster

    These days I have been trying to run PIO on a cluster. I ran into a problem 
when I ran "pio deploy -- --master yarn --deploy-mode client"; the error output 
is:
    
******************************************************************************
    [root@cdh-slave-3 classification]# pio deploy -- --master yarn 
--deploy-mode client
    /usr/lib/spark contains an empty RELEASE file. This is a known problem with 
certain vendors (e.g. Cloudera). Please make sure you are using at least 1.3.0.
    [INFO] [Runner$] Submission command: /usr/lib/spark/bin/spark-submit 
--master yarn --deploy-mode client --class io.prediction.workflow.CreateServer 
--jars 
file:/data/pio_tmpl/classification/target/scala-2.10/template-scala-parallel-classification_2.10-0.1-SNAPSHOT.jar,file:/data/pio_tmpl/classification/target/scala-2.10/template-scala-parallel-classification-assembly-0.1-SNAPSHOT-deps.jar
 --files 
file:/data/PredictionIO-0.9.5/conf/log4j.properties,file:/usr/lib/hbase/conf/hbase-site.xml
 --driver-class-path /data/PredictionIO-0.9.5/conf:/usr/lib/hbase/conf 
file:/data/PredictionIO-0.9.5/lib/pio-assembly-0.9.5.jar --engineInstanceId 
AVdCBkQ9MQIAc35ceaur --engine-variant 
file:/data/pio_tmpl/classification/engine.json --ip 0.0.0.0 --port 8000 
--event-server-ip 0.0.0.0 --event-server-port 7070 --json-extractor Both --env 
PIO_STORAGE_SOURCES_HBASE_TYPE=hbase,PIO_ENV_LOADED=1,PIO_STORAGE_REPOSITORIES_METADATA_NAME=pio_meta,PIO_FS_BASEDIR=/data/PredictionIO-0.9.5/.pio_store,PIO_STORAGE_SOURCES_HBASE_HOME=/usr/lib/hbase,PIO_HOME=/data/PredictionIO-0.9.5,PIO_FS_ENGINESDIR=/data/PredictionIO-0.9.5/.pio_store/engines,PIO_STORAGE_SOURCES_LOCALFS_PATH=/data/PredictionIO-0.9.5/.pio_store/models,PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch,PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=ELASTICSEARCH,PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=LOCALFS,PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=pio_event,PIO_STORAGE_SOURCES_ELASTICSEARCH_HOME=/data/PredictionIO-0.9.5/vendors/elasticsearch-1.4.4,PIO_FS_TMPDIR=/data/PredictionIO-0.9.5/.pio_store/tmp,PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model,PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE=HBASE,PIO_CONF_DIR=/data/PredictionIO-0.9.5/conf,PIO_STORAGE_SOURCES_LOCALFS_TYPE=localfs
    [INFO] [Slf4jLogger] Slf4jLogger started
    [WARN] [WorkflowUtils$] Non-empty parameters supplied to 
org.template.classification.Preparator, but its constructor does not accept any 
arguments. Stubbing with empty parameters.
    [WARN] [WorkflowUtils$] Non-empty parameters supplied to 
org.template.classification.Serving, but its constructor does not accept any 
arguments. Stubbing with empty parameters.
    [INFO] [Slf4jLogger] Slf4jLogger started
    [INFO] [Remoting] Starting remoting
    [INFO] [Remoting] Remoting started; listening on addresses 
:[akka.tcp://sparkDriverActorSystem@10.0.31.59:49118]
    [INFO] [Remoting] Remoting now listens on addresses: 
[akka.tcp://sparkDriverActorSystem@10.0.31.59:49118]
    [INFO] [Engine] Using persisted model
    [INFO] [Engine] Loaded model 
org.apache.spark.mllib.classification.NaiveBayesModel for algorithm 
org.template.classification.NaiveBayesAlgorithm
    [INFO] [MasterActor] Undeploying any existing engine instance at 
http://0.0.0.0:8000
    [WARN] [MasterActor] Nothing at http://0.0.0.0:8000
    Uncaught error from thread [pio-server-akka.actor.default-dispatcher-3] 
shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for 
ActorSystem[pio-server]
    java.lang.AbstractMethodError
            at akka.actor.ActorLogging$class.$init$(Actor.scala:335)
            at spray.can.HttpManager.<init>(HttpManager.scala:29)
            at spray.can.HttpExt$$anonfun$1.apply(Http.scala:153)
            at spray.can.HttpExt$$anonfun$1.apply(Http.scala:153)
            at akka.actor.TypedCreatorFunctionConsumer.produce(Props.scala:401)
            at akka.actor.Props.newActor(Props.scala:339)
            at akka.actor.ActorCell.newActor(ActorCell.scala:534)
            at akka.actor.ActorCell.create(ActorCell.scala:560)
            at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:425)
            at akka.actor.ActorCell.systemInvoke(ActorCell.scala:447)
            at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262)
            at akka.dispatch.Mailbox.run(Mailbox.scala:218)
            at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
            at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
            at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
            at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
            at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
    [ERROR] [ActorSystemImpl] Uncaught error from thread 
[pio-server-akka.actor.default-dispatcher-3] shutting down JVM since 
'akka.jvm-exit-on-fatal-error' is enabled
    
**************************************************************************************************
    
    My guess is that a compile-time dependency version differs from the version 
present at runtime, but I don't know which dependency is in conflict. Do you 
have any suggestions?

    My cluster is CDH 5.7.1, which includes Spark 1.6.0.
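
    [Editor's note, not part of the original report: a hedged diagnostic 
sketch.] The stack trace fails while mixing in akka.actor.ActorLogging from 
spray.can.HttpManager, which is consistent with the compile-vs-runtime guess 
above: the spray/Akka the engine template was built against may differ from 
the Akka that Spark/CDH puts on the classpath. One way to narrow it down is to 
print, from a JVM launched with the same --jars and --driver-class-path as the 
failing submission, which Akka build is actually loaded and which jars the 
classes in the trace come from. The object name below is made up for 
illustration.

        // Hypothetical diagnostic sketch; run it on the same classpath as the
        // failing `pio deploy` submission (e.g. via spark-submit or a REPL).
        object AkkaClasspathCheck {
          def main(args: Array[String]): Unit = {
            // akka.Version.current (in akka-actor) reports the Akka release actually loaded
            println(s"Akka version on classpath: ${akka.Version.current}")

            // Report which jar a class was loaded from (may be unknown for bootstrap classes)
            def origin(c: Class[_]): String =
              Option(c.getProtectionDomain.getCodeSource)
                .map(_.getLocation.toString)
                .getOrElse("<unknown>")

            // The two classes at the top of the stack trace
            println(s"akka.actor.ActorLogging loaded from: ${origin(classOf[akka.actor.ActorLogging])}")
            val httpManager = Class.forName("spray.can.HttpManager")
            println(s"spray.can.HttpManager loaded from: ${origin(httpManager)}")
          }
        }

    If the reported Akka version or jar location differs from what the template 
was compiled against, that would point at the conflicting dependency.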
