[GitHub] incubator-predictionio issue #298: Fail to run "pio deploy" in cluster
Github user dszeto commented on the issue: https://github.com/apache/incubator-predictionio/issues/298 Sorry about that. We did miss it. Let's continue your original thread on the mailing list if you have more questions. --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[GitHub] incubator-predictionio issue #298: Fail to run "pio deploy" in cluster
Github user zolo302 commented on the issue: https://github.com/apache/incubator-predictionio/issues/298 @dszeto thank you for your reply. Actually, I sent an e-mail to u...@predictionio.incubator.apache.org before, but got no reply, so I tried submitting my issue here. I will try your suggestion.
[GitHub] incubator-predictionio pull request #300: [PIO-35] Add integration tests for...
GitHub user chanlee514 opened a pull request: https://github.com/apache/incubator-predictionio/pull/300 [PIO-35] Add integration tests for official templates **Changes:** - Integration test fetches templates from GitHub instead of storing a local copy. - Install git in the Docker image. **Notes** - As can be seen in `tests.py`, 3 tests are run by default on Travis: *['BasicAppUsecases', 'EventserverTest', 'ScalaParallelRecommendationTest']*. One can change this using the `TEST_NAMES` param in `.travis.yml`. - Each scenario should be updated to use a standalone Spark cluster for more stable behavior (https://issues.apache.org/jira/browse/PIO-36). I've done some work on this and will update it as [PIO-36]. - I've excluded 'JavaParallelEcommercerecommendationTest' for now since the template fails to load the org.jblas dependency before test execution. You can merge this pull request into a Git repository by running: $ git pull https://github.com/chanlee514/incubator-predictionio develop Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-predictionio/pull/300.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #300 commit bfa6fdb0b27eb1366bba8d1a550647dbdd436642 Author: Chan Lee Date: 2016-09-22T00:27:04Z Add integration tests for all official templates - Fetch engine code from GitHub instead of storing local copy - Some structural refactoring commit f4671ef149f7cb9ee95aabaa3dd8d9c43ce65a1d Author: Chan Lee Date: 2016-09-22T00:41:08Z Add data for template integration testing
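The TEST_NAMES override described in the PR notes can be pictured with a small sketch. This is not the actual `tests.py` from the pull request, only an illustration of name-based test selection; the function name, defaults, and comma-separated format are assumptions.

```python
import os

# Illustrative sketch only -- not the PR's tests.py. It mirrors the idea of a
# TEST_NAMES parameter choosing which integration tests run on Travis.
DEFAULT_TESTS = ["BasicAppUsecases", "EventserverTest",
                 "ScalaParallelRecommendationTest"]

def select_tests(available, env=None):
    """Return the test names to run: a comma-separated TEST_NAMES value
    overrides the default set; names not in `available` are ignored."""
    env = os.environ if env is None else env
    requested = env.get("TEST_NAMES")
    wanted = requested.split(",") if requested else DEFAULT_TESTS
    return [t for t in available if t in wanted]

all_tests = DEFAULT_TESTS + ["JavaParallelEcommercerecommendationTest"]
print(select_tests(all_tests, env={}))  # the three defaults
print(select_tests(all_tests, env={"TEST_NAMES": "EventserverTest"}))
```

Keeping the default set small while allowing an environment override is a common pattern for CI matrices, since it keeps the default Travis run fast.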
[GitHub] incubator-predictionio issue #294: [PIO-26] BUG: Put license before XML decl...
Github user dszeto commented on the issue: https://github.com/apache/incubator-predictionio/pull/294 Thanks!
[GitHub] incubator-predictionio issue #297: Update document
Github user dszeto commented on the issue: https://github.com/apache/incubator-predictionio/pull/297 Thanks for your contribution! I don't have a problem with replacing angle brackets with square brackets. The only ask from me is to keep it consistent throughout the whole documentation. Thanks!
[GitHub] incubator-predictionio issue #296: Fix a typo: temrinal -> terminal
Github user dszeto commented on the issue: https://github.com/apache/incubator-predictionio/pull/296 Thanks!
Re: Remove engine registration
What do you think about using a general-purpose registry that can also be used to discover cluster machines or microservices? Something like consul.io or Docker Swarm with an ASF-compatible license? This would be a real step into the future, and since some work is needed anyway… I think Donald is right that much of this can be made optional, with a mind towards making a single-machine install easy and a cluster install almost as easy. On Sep 21, 2016, at 1:18 PM, Donald Szeto wrote: I second removing engine manifests and adding a separate registry for other metadata (such as where to push engine code, models, and misc. discovery). The current design is a result of realizing that producing predictions from the model requires custom code (a scoring function) as well. We have bundled training code and predicting (scoring) code together as an engine, different input parameters as different engine variants, and engine instances as an immutable list of metadata that points to an engine, engine variant, and trained models. We can definitely draw clearer boundaries and names. We should start a design doc somewhere. Any suggestions? I propose to start by making registration optional, then refactor the manifest and build a proper engine registry. Regards, Donald On Wed, Sep 21, 2016 at 12:29 PM, Marcin Ziemiński wrote: > I think that getting rid of the manifest.json and introducing a new kind > of resource-id for an engine to be registered is a good idea. > > Currently in the repository there are three important keys: > * engine id > * engine version - depends only on the path the engine was built at, to > distinguish copies > * engine instance id - because of the name it may be associated with the > engine itself, but in fact it is the identifier of trained models for an > engine. 
> When running deploy you either get the latest trained model for the > engine-id and engine-version, which strictly ties it to the location it was > compiled at, or you specify an engine instance id. I am not sure, but I think > that in the latter case you could get a model for a completely different > engine, which could potentially fail because of initialization with improper > parameters. > What is more, the engine object creation relies only on the full name of > the EngineFactory, so the actual engine that gets loaded is determined by > the current CLASSPATH. I guess that is probably the place that should > be modified if we want a multi-tenant architecture. > I have to admit that these things hadn't been completely clear to me > until I went through the code. > > We could introduce a new type of service for engine and model management. > I like the idea of the repository to push built engines under chosen ids. > We could also add some versioning of them if necessary. > I treat this approach purely as some kind of package management system. > > As Pat said, a similar approach would let us rely only on the repository > and thanks to that run pio commands regardless of the machine and location. > > Separating the engine part from the rest of PIO could potentially enable > us to come up with different architectures in the future and push us > towards a micro-services ecosystem. > > What do you think of separating models from engines in a more visible way? I > mean that engine variants in terms of algorithm parameters are more like > model variants. I just see an engine as code being a dependency for > application-related models/algorithms. So you would register an engine, as > code, once, and run training for some domain-specific data (app) and > algorithm parameters, which would result in a different identifier that > would be later used for deployment. 
> > Regards, > Marcin > > > > > > On Sun, Sep 18, 2016 at 20:02, Pat Ferrel wrote: > >> This sounds like a good case for Donald’s suggestion. >> >> What I was trying to add to the discussion is a way to make all commands >> rely on state in the megastore, rather than any file on any machine in a >> cluster, or on ordering of execution, or execution from a location in a >> directory structure. All commands would then be stateless. >> >> This enables real use cases like provisioning PIO machines and running >> `pio deploy ` to get a new PredictionServer. Provisioning can >> be container and discovery based rather cleanly. >> >> >> On Sep 17, 2016, at 5:26 PM, Mars Hall wrote: >> >> Hello folks, >> >> Great to hear about this possibility. I've been working on running >> PredictionIO on Heroku: https://www.heroku.com >> >> Heroku's 12-factor architecture (https://12factor.net) prefers "stateless >> builds" to ensure that compiled artifacts result in processes which may be >> cheaply restarted, replaced, and scaled via process count & size. I imagine >> this stateless property would be valuable for others as well. >> >> The fact that `pio build` inserts
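Marcin's three keys, and the mismatch risk he describes when deploying by instance id alone, can be sketched in a toy model. This is illustrative Python, not PredictionIO code; the class, field names, and sample data are invented for the illustration.

```python
from dataclasses import dataclass

# Toy illustration of the three keys discussed above; not PredictionIO code.
@dataclass
class EngineInstance:
    id: str               # "engine instance id": really identifies trained models
    engine_id: str
    engine_version: str   # in current PIO, effectively derived from the build path

INSTANCES = [
    EngineInstance("inst-1", "classification", "/home/a/engine"),
    EngineInstance("inst-2", "classification", "/home/a/engine"),
    EngineInstance("inst-3", "recommendation", "/home/b/engine"),
]

def resolve(engine_id=None, engine_version=None, instance_id=None):
    """Mimic deploy resolution: an explicit instance id wins; otherwise
    take the latest instance matching (engine id, engine version)."""
    if instance_id is not None:
        # Nothing here verifies the instance belongs to the engine you
        # intended -- the mismatch risk Marcin points out.
        return next(i for i in INSTANCES if i.id == instance_id)
    matches = [i for i in INSTANCES
               if i.engine_id == engine_id and i.engine_version == engine_version]
    return matches[-1] if matches else None

# Deploying by (engine id, version) ties you to the build path:
latest = resolve("classification", "/home/a/engine")
# Deploying by instance id alone could silently pick another engine's model:
other = resolve(instance_id="inst-3")
```

The sketch makes the coupling visible: because "engine version" encodes a filesystem path, moving the engine directory changes the key, while a bare instance id carries no guarantee about which engine's parameters it was trained with.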
Re: Remove engine registration
I think that getting rid of the manifest.json and introducing a new kind of resource-id for an engine to be registered is a good idea. Currently in the repository there are three important keys: * engine id * engine version - depends only on the path the engine was built at, to distinguish copies * engine instance id - because of the name it may be associated with the engine itself, but in fact it is the identifier of trained models for an engine. When running deploy you either get the latest trained model for the engine-id and engine-version, which strictly ties it to the location it was compiled at, or you specify an engine instance id. I am not sure, but I think that in the latter case you could get a model for a completely different engine, which could potentially fail because of initialization with improper parameters. What is more, the engine object creation relies only on the full name of the EngineFactory, so the actual engine that gets loaded is determined by the current CLASSPATH. I guess that is probably the place that should be modified if we want a multi-tenant architecture. I have to admit that these things hadn't been completely clear to me until I went through the code. We could introduce a new type of service for engine and model management. I like the idea of the repository to push built engines under chosen ids. We could also add some versioning of them if necessary. I treat this approach purely as some kind of package management system. As Pat said, a similar approach would let us rely only on the repository and thanks to that run pio commands regardless of the machine and location. Separating the engine part from the rest of PIO could potentially enable us to come up with different architectures in the future and push us towards a micro-services ecosystem. What do you think of separating models from engines in a more visible way? I mean that engine variants in terms of algorithm parameters are more like model variants. 
I just see an engine as code being a dependency for application-related models/algorithms. So you would register an engine, as code, once, and run training for some domain-specific data (app) and algorithm parameters, which would result in a different identifier that would be later used for deployment. Regards, Marcin On Sun, Sep 18, 2016 at 20:02, Pat Ferrel wrote: > This sounds like a good case for Donald’s suggestion. > > What I was trying to add to the discussion is a way to make all commands > rely on state in the megastore, rather than any file on any machine in a > cluster, or on ordering of execution, or execution from a location in a > directory structure. All commands would then be stateless. > > This enables real use cases like provisioning PIO machines and running > `pio deploy ` to get a new PredictionServer. Provisioning can > be container and discovery based rather cleanly. > > > On Sep 17, 2016, at 5:26 PM, Mars Hall wrote: > > Hello folks, > > Great to hear about this possibility. I've been working on running > PredictionIO on Heroku: https://www.heroku.com > > Heroku's 12-factor architecture (https://12factor.net) prefers "stateless > builds" to ensure that compiled artifacts result in processes which may be > cheaply restarted, replaced, and scaled via process count & size. I imagine > this stateless property would be valuable for others as well. > > The fact that `pio build` inserts stateful metadata into a database causes > ripples throughout the lifecycle of PIO engines on Heroku: > > * An engine cannot be built for production without the production database > available. When a production database contains PII (personally identifiable > information) which has security compliance requirements, the build system > may not be privileged to access that PII data. This also affects CI > (continuous integration/testing), where engines would need to be rebuilt in > production, defeating assurances CI is supposed to provide. 
> > * The build artifacts cannot be reliably reused. "Slugs" at Heroku are > intended to be stateless, so that you can roll back to a previous version > during the lifetime of an app. With `pio build` causing database > side-effects, there's a greater-than-zero probability of slug-to-metadata > inconsistencies eventually surfacing in a long-running system. > > > From my user perspective, a few changes to the CLI would fix it: > > 1. add a "skip registration" option, `pio build > --without-engine-registration` > 2. a new command `pio app register` that could be run separately in the > built engine (before training) > > Alas, I do not know PredictionIO internals, so I can only offer a > suggestion for how this might be solved. > > > Donald, one specific note, > > Regarding "No automatic version matching of PIO binary distribution and > artifacts version used in the engine template": > > The Heroku slug contains the PredictionIO binary distribution used to > build the engine, so
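Mars's proposed split, a registration-free build plus a separate registration step, can be caricatured in a few lines. This is purely a sketch of the proposal: the flag and command names quoted above are his suggestions, not an implemented PIO interface, and the functions below are invented for the illustration.

```python
# Toy sketch of a stateless build followed by explicit registration.
# Nothing here is real PIO code; it only illustrates the proposed split.

def build(engine_dir):
    """Like the proposed `pio build --without-engine-registration`:
    produce an artifact with no metadata-store side effects."""
    return f"{engine_dir}/target/engine-assembly.jar"

REGISTRY = {}

def register(name, artifact):
    """Like the proposed `pio app register`: record the artifact in a
    separate step, run in the environment that owns the metadata store."""
    REGISTRY[name] = artifact

artifact = build("/app/engine")   # build box: no database access needed
register("my-engine", artifact)   # production box: registration happens here
```

The point of the split is exactly the 12-factor property Mars describes: the build step becomes a pure function of the source tree, so CI can run it without production credentials, and registration happens only where the metadata store is reachable.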
[jira] [Commented] (PIO-26) Integrate Apache RAT for license checking
[ https://issues.apache.org/jira/browse/PIO-26?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15510751#comment-15510751 ] ASF GitHub Bot commented on PIO-26: --- Github user dszeto commented on the issue: https://github.com/apache/incubator-predictionio/pull/294 Thanks! > Integrate Apache RAT for license checking > - > > Key: PIO-26 > URL: https://issues.apache.org/jira/browse/PIO-26 > Project: PredictionIO > Issue Type: New Feature > Reporter: Chan > Fix For: 0.10.0 > > > http://creadur.apache.org/rat/ > Use this for Apache license checking -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[CANCEL][VOTE]: Apache PredictionIO (incubating) 0.10.0 Release (RC3)
On Wed, Sep 21, 2016 at 10:41 AM, Donald Szeto wrote: > Sorry guys. Calling this off until it's fixed. > > On Wed, Sep 21, 2016 at 10:39 AM, Suneel Marthi > wrote: >> -1 binding >> >> Got an exception >> >> [info] SHA-1: b826217743fdd03b6656eb5750b6a775df77cc92 >> >> [info] Packaging >> /Users/smarthi/Downloads/apache-predictionio-0.10.0-incubating-rc3/assembly/pio-assembly-0.10.0-incubating-rc3.jar ... >> >> java.util.zip.ZipException: duplicate entry: META-INF/LICENSE.txt >> >> at java.util.zip.ZipOutputStream.putNextEntry(ZipOutputStream.java:233) >> >> at java.util.jar.JarOutputStream.putNextEntry(JarOutputStream.java:109) >> >> at sbt.IO$.sbt$IO$$addFileEntry$1(IO.scala:445) >> >> at sbt.IO$$anonfun$sbt$IO$$writeZip$2.apply(IO.scala:454) >> >> at sbt.IO$$anonfun$sbt$IO$$writeZip$2.apply(IO.scala:454) >> >> at scala.collection.Iterator$class.foreach(Iterator.scala:727) >> >> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157) >> >> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) >> >> at scala.collection.AbstractIterable.foreach(Iterable.scala:54) >> >> at sbt.IO$.sbt$IO$$writeZip(IO.scala:454) >> >> at sbt.IO$$anonfun$archive$1.apply(IO.scala:410) >> >> at sbt.IO$$anonfun$archive$1.apply(IO.scala:408) >> >> at sbt.IO$$anonfun$withZipOutput$1.apply(IO.scala:498) >> >> at sbt.IO$$anonfun$withZipOutput$1.apply(IO.scala:485) >> >> at sbt.Using.apply(Using.scala:24) >> >> at sbt.IO$.withZipOutput(IO.scala:485) >> >> at sbt.IO$.archive(IO.scala:408) >> >> at sbt.IO$.jar(IO.scala:392) >> >> at sbt.Package$.makeJar(Package.scala:97) >> >> at sbtassembly.Assembly$.sbtassembly$Assembly$$makeJar$1(Assembly.scala:40) >> >> at sbtassembly.Assembly$$anonfun$5$$anonfun$apply$4.apply(Assembly.scala:79) >> >> at sbtassembly.Assembly$$anonfun$5$$anonfun$apply$4.apply(Assembly.scala:75) >> >> at sbt.Tracked$$anonfun$outputChanged$1.apply(Tracked.scala:57) >> >> at sbt.Tracked$$anonfun$outputChanged$1.apply(Tracked.scala:52) 
>> >> at sbtassembly.Assembly$.apply(Assembly.scala:83) >> >> at sbtassembly.Assembly$$anonfun$assemblyTask$1.apply(Assembly.scala:241) >> >> at sbtassembly.Assembly$$anonfun$assemblyTask$1.apply(Assembly.scala:238) >> >> at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47) >> >> at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:40) >> >> at sbt.std.Transform$$anon$4.work(System.scala:63) >> >> at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226) >> >> at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226) >> >> at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17) >> >> at sbt.Execute.work(Execute.scala:235) >> >> at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226) >> >> at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226) >> >> at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159) >> >> at sbt.CompletionService$$anon$2.call(CompletionService.scala:28) >> >> at java.util.concurrent.FutureTask.run(FutureTask.java:266) >> >> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) >> >> at java.util.concurrent.FutureTask.run(FutureTask.java:266) >> >> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) >> >> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) >> >> at java.lang.Thread.run(Thread.java:745) >> >> [error] (tools/*:assembly) java.util.zip.ZipException: duplicate entry: META-INF/LICENSE.txt >> >> [error] Total time: 25 s, completed Sep 21, 2016 1:35:51 PM >> >> On Wed, Sep 21, 2016 at 1:27 PM, Donald Szeto wrote: >> >> > This is the vote for 0.10.0 of Apache PredictionIO (incubating). >> > >> > The vote will run for at least 72 hours and will close on Sept 24th, >> 2016. >> > >> > RC3 adds on top of RC2 with proper licenses and notices embedded in the >> > Maven artifacts. 
It also changes the license of the documentation from >> > Creative Commons to APLv2. >> > >> > The release candidate artifacts can be downloaded here: >> > https://dist.apache.org/repos/dist/dev/incubator/predictionio/0.10.0-incubating-rc3/ >> > >> > Maven artifacts are built from the release candidate artifacts above, and >> > are provided as a convenience for testing with engine templates. The Maven >> > artifacts are provided at the Maven staging repo here: >> > https://repository.apache.org/content/repositories/orgapachepredictionio-1005/ >> > >> > All JIRAs completed for this release are tagged with 'FixVersion = 0.10.0'. >> > You can view them here: >> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12320420&version=12337844 >> > >> > The artifacts have been signed with Key : 8BF4ABEB >> > >> > Please vote accordingly: >> > >> > [ ] +1, accept RC as the official 0.10.0 release >> > [ ] -1, do not accept RC as the official 0.10.0
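The -1 above comes from sbt-assembly hitting two input jars that both ship META-INF/LICENSE.txt, which Java's ZipOutputStream refuses to write twice. As a toy illustration (Python rather than the actual sbt fix; the jar contents are invented), detecting the collision before writing the merged archive looks like:

```python
from collections import Counter

def duplicate_entries(entry_lists):
    """Return archive entry names contributed by more than one input jar;
    these are the names a merged jar cannot contain twice."""
    counts = Counter(name for entries in entry_lists for name in entries)
    return sorted(name for name, n in counts.items() if n > 1)

# Invented example: two jars that both carry a license file.
jar_a = ["META-INF/LICENSE.txt", "com/example/A.class"]
jar_b = ["META-INF/LICENSE.txt", "com/example/B.class"]
print(duplicate_entries([jar_a, jar_b]))  # ['META-INF/LICENSE.txt']
```

In sbt-assembly itself this is what merge strategies resolve, for example discarding or concatenating duplicate META-INF files so only one entry reaches the output jar.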
[VOTE]: Apache PredictionIO (incubating) 0.10.0 Release (RC3)
This is the vote for 0.10.0 of Apache PredictionIO (incubating). The vote will run for at least 72 hours and will close on Sept 24th, 2016. RC3 adds on top of RC2 with proper licenses and notices embedded in the Maven artifacts. It also changes the license of the documentation from Creative Commons to APLv2. The release candidate artifacts can be downloaded here: https://dist.apache.org/repos/dist/dev/incubator/predictionio/0.10.0-incubating-rc3/ Maven artifacts are built from the release candidate artifacts above, and are provided as a convenience for testing with engine templates. The Maven artifacts are provided at the Maven staging repo here: https://repository.apache.org/content/repositories/orgapachepredictionio-1005/ All JIRAs completed for this release are tagged with 'FixVersion = 0.10.0'. You can view them here: https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12320420&version=12337844 The artifacts have been signed with key 8BF4ABEB. Please vote accordingly: [ ] +1, accept RC as the official 0.10.0 release [ ] -1, do not accept RC as the official 0.10.0 release because...
[GitHub] incubator-predictionio issue #298: Fail to run "pio deploy" in cluster
GitHub user zolo302 opened an issue: https://github.com/apache/incubator-predictionio/issues/298 Fail to run "pio deploy" in cluster These days I have been trying to run pio in a cluster. I ran into a problem when running "pio deploy -- --master yarn --deploy-mode client"; the error information is: [root@cdh-slave-3 classification]# pio deploy -- --master yarn --deploy-mode client /usr/lib/spark contains an empty RELEASE file. This is a known problem with certain vendors (e.g. Cloudera). Please make sure you are using at least 1.3.0. [INFO] [Runner$] Submission command: /usr/lib/spark/bin/spark-submit --master yarn --deploy-mode client --class io.prediction.workflow.CreateServer --jars file:/data/pio_tmpl/classification/target/scala-2.10/template-scala-parallel-classification_2.10-0.1-SNAPSHOT.jar,file:/data/pio_tmpl/classification/target/scala-2.10/template-scala-parallel-classification-assembly-0.1-SNAPSHOT-deps.jar --files file:/data/PredictionIO-0.9.5/conf/log4j.properties,file:/usr/lib/hbase/conf/hbase-site.xml --driver-class-path /data/PredictionIO-0.9.5/conf:/usr/lib/hbase/conf file:/data/PredictionIO-0.9.5/lib/pio-assembly-0.9.5.jar --engineInstanceId AVdCBkQ9MQIAc35ceaur --engine-variant file:/data/pio_tmpl/classification/engine.json --ip 0.0.0.0 --port 8000 --event-server-ip 0.0.0.0 --event-server-port 7070 --json-extractor Both --env PIO_STORAGE_SOURCES_HBASE_TYPE=hbase,PIO_ENV_LOADED=1,PIO_STORAGE_REPOSITORIES_METADATA_NAME=pio_meta,PIO_FS_BASEDIR=/data/PredictionIO-0.9.5/.pio_store,PIO_STORAGE_SOURCES_HBASE_HOME=/usr/lib/hbase,PIO_HOME=/data/PredictionIO-0.9.5,PIO_FS_ENGINESDIR=/data/PredictionIO-0.9.5/.pio_store/engines,PIO_STORAGE_SOURCES_LOCALFS_PATH=/data/PredictionIO-0.9.5/.pio_store/models,PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch,PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=ELASTICSEARCH,PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=LOCALFS,PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=pio_event,PIO_STORAGE_SOURCES_ELASTICSEARCH_HOME=/data/PredictionIO-0.9.5/vendors/elasticsearch-1.4.4,PIO_FS_TMPDIR=/data/PredictionIO-0.9.5/.pio_store/tmp,PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model,PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE=HBASE,PIO_CONF_DIR=/data/PredictionIO-0.9.5/conf,PIO_STORAGE_SOURCES_LOCALFS_TYPE=localfs [INFO] [Slf4jLogger] Slf4jLogger started [WARN] [WorkflowUtils$] Non-empty parameters supplied to org.template.classification.Preparator, but its constructor does not accept any arguments. Stubbing with empty parameters. [WARN] [WorkflowUtils$] Non-empty parameters supplied to org.template.classification.Serving, but its constructor does not accept any arguments. Stubbing with empty parameters. 
[INFO] [Slf4jLogger] Slf4jLogger started [INFO] [Remoting] Starting remoting [INFO] [Remoting] Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@10.0.31.59:49118] [INFO] [Remoting] Remoting now listens on addresses: [akka.tcp://sparkDriverActorSystem@10.0.31.59:49118] [INFO] [Engine] Using persisted model [INFO] [Engine] Loaded model org.apache.spark.mllib.classification.NaiveBayesModel for algorithm org.template.classification.NaiveBayesAlgorithm [INFO] [MasterActor] Undeploying any existing engine instance at http://0.0.0.0:8000 [WARN] [MasterActor] Nothing at http://0.0.0.0:8000 Uncaught error from thread [pio-server-akka.actor.default-dispatcher-3] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[pio-server] java.lang.AbstractMethodError at akka.actor.ActorLogging$class.$init$(Actor.scala:335) at spray.can.HttpManager.<init>(HttpManager.scala:29) at spray.can.HttpExt$$anonfun$1.apply(Http.scala:153) at spray.can.HttpExt$$anonfun$1.apply(Http.scala:153) at akka.actor.TypedCreatorFunctionConsumer.produce(Props.scala:401) at akka.actor.Props.newActor(Props.scala:339) at akka.actor.ActorCell.newActor(ActorCell.scala:534) at akka.actor.ActorCell.create(ActorCell.scala:560) at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:425) at akka.actor.ActorCell.systemInvoke(ActorCell.scala:447) at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262) at akka.dispatch.Mailbox.run(Mailbox.scala:218) at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386) at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [ERROR] [ActorSystemImpl]