As Sumit indicated, the "yarn logs --applicationId <application ID>" command should dump those logs.
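For reference, a typical sequence might look like the following (a sketch: it assumes log aggregation is enabled, which the configuration below shows via yarn.log-aggregation-enable=true, and that the application has finished or its logs have already been aggregated):

```shell
# Find the application ID if you don't have it handy
yarn application -list -appStates ALL

# Dump the aggregated container logs (AM and agent containers alike)
# using the application ID from the log below
yarn logs -applicationId application_1415211406300_0001
```

If the application is still running, the per-container logs are usually easier to reach through the ResourceManager web UI's container log links instead.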
— Jon

On Nov 5, 2014, at 2:13 PM, Pushkar Raste <[email protected]> wrote:

> I am new to both Slider and Hadoop. Where can I find the agent logs?
>
> On Wed, Nov 5, 2014 at 2:02 PM, Jon Maron <[email protected]> wrote:
>
>> It may help to provide any of the agent logs or memcached logs from the
>> node managers. This could occur for any number of reasons, including a
>> wrong java_home value.
>>
>> — Jon
>>
>> On Nov 5, 2014, at 1:55 PM, Pushkar Raste <[email protected]> wrote:
>>
>>> Maybe I should provide the entire log:
>>>
>>> 2014-11-05 18:27:24,345 [main] INFO Configuration.deprecation - slider.registry.path is deprecated. Instead, use hadoop.registry.zk.root
>>> 2014-11-05 18:27:24,350 [main] INFO appmaster.SliderAppMaster - AM configuration:
>>>   fs.defaultFS=hdfs://localhost:9000
>>>   hadoop.registry.zk.quorum=localhost:2181
>>>   hadoop.registry.zk.root=/registry
>>>   slider.registry.path=/registry
>>>   slider.yarn.queue=default
>>>   yarn.application.classpath=/usr/local/hadoop/etc/hadoop,/usr/local/hadoop/etc/hadoop/*,/usr/local/hadoop/share/hadoop/common/*,/usr/local/hadoop/share/hadoop/common/lib/*,/usr/local/hadoop/share/hadoop/hdfs/*,/usr/local/hadoop/share/hadoop/hdfs/lib/*,/usr/local/hadoop/share/hadoop/yarn/*,/usr/local/hadoop/share/hadoop/yarn/lib/*,/usr/local/hadoop/share/hadoop/mapreduce/*,/usr/local/hadoop/share/hadoop/mapreduce/lib/*
>>>   yarn.log-aggregation-enable=true
>>>   yarn.resourcemanager.address=localhost:8032
>>>   yarn.resourcemanager.scheduler.address=localhost:8030
>>>
>>> 2014-11-05 18:27:24,507 [main] INFO appmaster.SliderAppMaster - Cluster is insecure
>>> 2014-11-05 18:27:25,000 [main] INFO appmaster.SliderAppMaster - Login user is root (auth:SIMPLE)
>>> 2014-11-05 18:27:25,013 [openssl-001] INFO appmaster.SliderAppMaster - OpenSSL 1.0.1 14 Mar 2012
>>> 2014-11-05 18:27:25,213 [openssl-001] WARN appmaster.SliderAppMaster -
>>> 2014-11-05 18:27:25,214 [openssl-001] INFO appmaster.SliderAppMaster -
>>> 2014-11-05 18:27:25,231 [python-003] WARN appmaster.SliderAppMaster - Python 2.7.3
>>> 2014-11-05 18:27:25,431 [python-003] WARN appmaster.SliderAppMaster -
>>> 2014-11-05 18:27:25,431 [python-003] INFO appmaster.SliderAppMaster -
>>> 2014-11-05 18:27:25,434 [main] INFO appmaster.SliderAppMaster - Slider Core-0.51.0-incubating-SNAPSHOT Built against commit# bbde42bdf9 on Java 1.7.0_67 by praste
>>> 2014-11-05 18:27:25,434 [main] INFO appmaster.SliderAppMaster - Compiled against Hadoop 2.6.0-SNAPSHOT
>>> 2014-11-05 18:27:25,436 [main] INFO appmaster.SliderAppMaster - Hadoop runtime version (detached from b4446cb) with source checksum d2d3ea14a0fdbf31a0273fc4f2ad594b and build date 2014-10-29T18:31Z
>>> 2014-11-05 18:27:25,437 [main] INFO appmaster.SliderAppMaster - Application defined at hdfs://localhost:9000/user/root/.slider/cluster/cl2
>>> 2014-11-05 18:27:27,195 [main] INFO appmaster.SliderAppMaster - Deploying cluster:
>>> {
>>>   "internal" : {
>>>     "schema" : "http://example.org/specification/v2.0.0",
>>>     "metadata" : {
>>>       "create.hadoop.deployed.info" : "(detached from b4446cb) @d2d3ea14a0fdbf31a0273fc4f2ad594b",
>>>       "create.application.build.info" : "Slider Core-0.51.0-incubating-SNAPSHOT Built against commit# bbde42bdf9 on Java 1.7.0_67 by praste",
>>>       "create.hadoop.build.info" : "2.6.0-SNAPSHOT",
>>>       "create.time.millis" : "1415212024534",
>>>       "create.time" : "5 Nov 2014 18:27:04 GMT"
>>>     },
>>>     "global" : {
>>>       "internal.tmp.dir" : "hdfs://localhost:9000/user/root/.slider/cluster/cl2/tmp",
>>>       "internal.generated.conf.path" : "hdfs://localhost:9000/user/root/.slider/cluster/cl2/generated",
>>>       "internal.snapshot.conf.path" : "hdfs://localhost:9000/user/root/.slider/cluster/cl2/snapshot",
>>>       "internal.container.failure.shortlife" : "60000",
>>>       "slider.data.directory.permissions" : "0770",
>>>       "application.name" : "cl2",
>>>       "slider.cluster.directory.permissions" : "0770",
>>>       "internal.provider.name" : "agent",
>>>       "internal.am.tmp.dir" : "hdfs://localhost:9000/user/root/.slider/cluster/cl2/tmp/appmaster",
>>>       "internal.container.failure.threshold" : "5",
>>>       "internal.data.dir.path" : "hdfs://localhost:9000/user/root/.slider/cluster/cl2/database"
>>>     },
>>>     "credentials" : { },
>>>     "components" : { }
>>>   },
>>>   "resources" : {
>>>     "schema" : "http://example.org/specification/v2.0.0",
>>>     "metadata" : { },
>>>     "global" : { },
>>>     "credentials" : { },
>>>     "components" : {
>>>       "slider-appmaster" : {
>>>         "yarn.memory" : "256",
>>>         "yarn.vcores" : "1",
>>>         "yarn.component.instances" : "1"
>>>       },
>>>       "MEMCACHED" : {
>>>         "yarn.memory" : "256",
>>>         "yarn.role.priority" : "1",
>>>         "yarn.component.instances" : "1"
>>>       }
>>>     }
>>>   },
>>>   "appConf" : {
>>>     "schema" : "http://example.org/specification/v2.0.0",
>>>     "metadata" : { },
>>>     "global" : {
>>>       "site.fs.default.name" : "hdfs://localhost:9000",
>>>       "site.global.app_user" : "yarn",
>>>       "site.global.additional_cp" : "/usr/lib/hadoop/lib/*",
>>>       "zookeeper.hosts" : "localhost",
>>>       "site.global.pid_file" : "${AGENT_WORK_ROOT}/app/run/component.pid",
>>>       "java_home" : "/usr/lib/jvm/java-7-openjdk-amd64",
>>>       "site.fs.defaultFS" : "hdfs://localhost:9000",
>>>       "env.MALLOC_ARENA_MAX" : "4",
>>>       "zookeeper.path" : "/services/slider/users/root/cl2",
>>>       "site.global.memory_val" : "200M",
>>>       "site.global.listen_port" : "${MEMCACHED.ALLOCATED_PORT}{DO_NOT_PROPAGATE}",
>>>       "zookeeper.quorum" : "localhost:2181",
>>>       "site.global.xmx_val" : "256m",
>>>       "site.global.app_root" : "${AGENT_WORK_ROOT}/app/install/jmemcached-1.0.0",
>>>       "application.def" : ".slider/package/memcached/jmemcached-1.0.0.zip",
>>>       "site.global.xms_val" : "128m"
>>>     },
>>>     "credentials" : { },
>>>     "components" : {
>>>       "slider-appmaster" : {
>>>         "jvm.heapsize" : "256M"
>>>       },
>>>       "MEMCACHED" : { }
>>>     }
>>>   }
>>> }
>>> 2014-11-05 18:27:27,321 [main] INFO appmaster.SliderAppMaster - Conf dir /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/container_1415211406300_0001_01_000001/propagatedconf does not exist.
>>> 2014-11-05 18:27:27,321 [main] INFO appmaster.SliderAppMaster - Parent dir /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/container_1415211406300_0001_01_000001:
>>>   default_container_executor_session.sh
>>>   tmp
>>>   .default_container_executor_session.sh.crc
>>>   .default_container_executor.sh.crc
>>>   .launch_container.sh.crc
>>>   container_tokens
>>>   .container_tokens.crc
>>>   lib
>>>   confdir
>>>   launch_container.sh
>>>   expandedarchive
>>>   default_container_executor.sh
>>>
>>> 2014-11-05 18:27:27,321 [main] INFO appmaster.SliderAppMaster - Cluster provider type is agent
>>> 2014-11-05 18:27:27,392 [main] INFO appmaster.SliderAppMaster - RM is at localhost/127.0.0.1:8030
>>> 2014-11-05 18:27:27,465 [main] INFO appmaster.SliderAppMaster - AM for ID 1
>>> 2014-11-05 18:27:27,496 [main] INFO client.RMProxy - Connecting to ResourceManager at localhost/127.0.0.1:8030
>>> 2014-11-05 18:27:27,546 [main] INFO impl.NMClientAsyncImpl - Upper bound of the thread pool size is 500
>>> 2014-11-05 18:27:27,547 [main] INFO impl.ContainerManagementProtocolProxy - yarn.client.max-cached-nodemanagers-proxies : 0
>>> 2014-11-05 18:27:27,589 [main] INFO ipc.CallQueueManager - Using callQueue class java.util.concurrent.LinkedBlockingQueue
>>> 2014-11-05 18:27:27,601 [Socket Reader #1 for port 45997] INFO ipc.Server - Starting Socket Reader #1 for port 45997
>>> 2014-11-05 18:27:27,663 [IPC Server Responder] INFO ipc.Server - IPC Server Responder: starting
>>> 2014-11-05 18:27:27,668 [IPC Server listener on 45997] INFO ipc.Server - IPC Server listener on 45997: starting
>>> 2014-11-05 18:27:27,670 [main] INFO appmaster.SliderAppMaster - AM Server is listening at hdfs03:45997
>>> 2014-11-05 18:27:27,671 [main] INFO appmaster.SliderAppMaster - Starting Yarn registry
>>> 2014-11-05 18:27:27,793 [main] INFO appmaster.SliderAppMaster - Service YarnRegistry in state YarnRegistry: STARTED Connection="fixed ZK quorum "localhost:2181" " root="/registry" security disabled
>>> 2014-11-05 18:27:27,865 [main] INFO security.SecurityUtils - Generation of file with password
>>> 2014-11-05 18:27:27,866 [main] INFO security.CertificateManager - Initialization of root certificate
>>> 2014-11-05 18:27:27,866 [main] INFO security.CertificateManager - Certificate exists:false
>>> 2014-11-05 18:27:27,867 [main] INFO security.CertificateManager - Generation of server certificate
>>> 2014-11-05 18:27:29,387 [main] INFO security.SecurityUtils - Command openssl genrsa -des3 -passout pass:**** -out /tmp/sec1415212047809/security/ca.key 4096 was finished with exit code: 0 - the operation was completed successfully.
>>> 2014-11-05 18:27:29,412 [main] INFO security.SecurityUtils - Command openssl req -passin pass:**** -new -key /tmp/sec1415212047809/security/ca.key -out /tmp/sec1415212047809/security/ca.csr -batch was finished with exit code: 0 - the operation was completed successfully.
>>> 2014-11-05 18:27:29,449 [main] INFO security.SecurityUtils - Command openssl ca -create_serial -out /tmp/sec1415212047809/security/ca.crt -days 365 -keyfile /tmp/sec1415212047809/security/ca.key -key aaHDf9hgq9DQsd1ZeaBzaucfB9DBbAv5BXpjPVaUMR1ptmGOSD -selfsign -extensions jdk7_ca -config /tmp/sec1415212047809/security/ca.config -batch -infiles /tmp/sec1415212047809/security/ca.csr was finished with exit code: 0 - the operation was completed successfully.
>>> 2014-11-05 18:27:29,461 [main] INFO security.SecurityUtils - Command openssl pkcs12 -export -in /tmp/sec1415212047809/security/ca.crt -inkey /tmp/sec1415212047809/security/ca.key -certfile /tmp/sec1415212047809/security/ca.crt -out /tmp/sec1415212047809/security/keystore.p12 -password pass:**** -passin pass:**** was finished with exit code: 0 - the operation was completed successfully.
>>> 2014-11-05 18:27:29,463 [main] INFO appmaster.SliderAppMaster - AM classpath:
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/container_1415211406300_0001_01_000001/confdir/
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/30/commons-net-3.1.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/45/xmlenc-0.52.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/28/hadoop-mapreduce-client-app-2.6.0-20141029.183317-386.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/12/commons-lang-2.6.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/78/zookeeper-3.4.5.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/19/metrics-core-3.0.1.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/90/commons-configuration-1.6.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/22/slf4j-api-1.7.5.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/37/netty-3.2.2.Final.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/72/gson-2.2.2.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/54/hadoop-yarn-server-common-2.6.0-20141029.183249-395.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/79/servlet-api-2.5-20081211.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/66/jline-0.9.94.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/18/hadoop-common-2.6.0-20141029.183134-513.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/92/jersey-core-1.9.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/14/api-util-1.0.0-M20.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/36/hadoop-mapreduce-client-common-2.6.0-20141029.183314-386.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/80/asm-3.1.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/15/curator-recipes-2.6.0.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/44/hadoop-yarn-server-web-proxy-2.6.0-20141029.183252-395.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/95/jersey-guice-1.9.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/50/paranamer-2.3.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/27/commons-digester-1.8.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/74/slider.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/38/commons-beanutils-1.7.0.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/93/javax.inject-1.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/91/netty-all-4.0.23.Final.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/20/jsr305-1.3.9.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/73/hadoop-yarn-client-2.6.0-20141029.183300-387.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/96/curator-client-2.4.1.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/59/xml-apis-1.3.04.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/55/guava-11.0.2.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/85/api-asn1-api-1.0.0-M20.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/100/guice-3.0.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/88/hadoop-client-2.6.0-20141029.183339-384.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/65/protobuf-java-2.5.0.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/99/curator-framework-2.4.1.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/21/hamcrest-core-1.3.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/84/jetty-util-6.1.26.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/13/jettison-1.1.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/87/commons-cli-1.2.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/61/servlet-api-2.5.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/57/hadoop-yarn-registry-2.6.0-20141029.183307-93.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/23/slf4j-log4j12-1.6.1.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/32/xercesImpl-2.9.1.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/35/jcommander-1.30.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/16/hadoop-mapreduce-client-shuffle-2.6.0-20141029.183316-386.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/64/jaxb-impl-2.2.3-1.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/49/hadoop-mapreduce-client-jobclient-2.6.0-20141029.183321-385.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/97/hadoop-mapreduce-client-core-2.6.0-20141029.183311-387.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/17/hadoop-hdfs-2.6.0-20141029.183220-396.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/67/snappy-java-1.0.4.1.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/58/commons-collections-3.2.1.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/43/hadoop-yarn-api-2.6.0-20141029.183244-396.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/26/aopalliance-1.0.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/77/jackson-mapper-asl-1.9.13.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/11/commons-logging-1.1.3.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/81/commons-httpclient-3.1.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/31/hadoop-annotations-2.6.0-20141029.183101-515.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/24/commons-codec-1.4.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/29/commons-compress-1.4.1.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/63/slider-core-0.51.0-incubating-SNAPSHOT.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/53/log4j-1.2.17.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/41/apacheds-kerberos-codec-2.0.0-M15.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/34/junit-4.11.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/83/jetty-sslengine-6.1.26.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/47/jackson-core-asl-1.9.13.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/89/commons-math3-3.1.1.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/46/commons-beanutils-core-1.8.0.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/10/htrace-core-3.0.4.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/33/jetty-6.1.26.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/75/jackson-jaxrs-1.9.13.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/76/hadoop-auth-2.6.0-20141029.183111-515.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/69/apacheds-i18n-2.0.0-M15.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/51/commons-io-2.4.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/70/jaxb-api-2.2.7.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/86/jersey-server-1.9.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/82/jersey-json-1.9.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/56/guice-servlet-3.0.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/48/jackson-xc-1.9.13.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/68/hadoop-yarn-common-2.6.0-20141029.183246-396.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/60/jersey-client-1.9.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/98/jsr311-api-1.1.1.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/40/stax-api-1.0.1.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/71/leveldbjni-all-1.8.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/39/xz-1.0.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/94/avro-1.7.4.jar
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/container_1415211406300_0001_01_000001/$CLASSPATH
>>> file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/container_1415211406300_0001_01_000001/$HADOOP_CONF_DIR
>>> file:/usr/local/hadoop/etc/hadoop/
>>>
>>> 2014-11-05 18:27:29,514 [main] INFO mortbay.log - Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
>>> 2014-11-05 18:27:29,596 [main] INFO mortbay.log - jetty-6.1.26
>>> Nov 05, 2014 6:27:29 PM com.sun.jersey.api.core.PackagesResourceConfig init
>>> INFO: Scanning for root resource and provider classes in the packages: org.apache.slider.server.appmaster.web.rest.agent
>>> Nov 05, 2014 6:27:29 PM com.sun.jersey.api.core.ScanningResourceConfig logClasses
>>> INFO: Root resource classes found: class org.apache.slider.server.appmaster.web.rest.agent.AgentWebServices
>>> Nov 05, 2014 6:27:29 PM com.sun.jersey.api.core.ScanningResourceConfig init
>>> INFO: No provider classes found.
>>> Nov 05, 2014 6:27:29 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
>>> INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
>>> 2014-11-05 18:27:31,375 [main] INFO mortbay.log - Started [email protected]:51682
>>> 2014-11-05 18:27:31,402 [main] INFO mortbay.log - Started [email protected]:59454
>>> 2014-11-05 18:27:31,467 [main] INFO http.HttpRequestLog - Http request log for http.requests.slideram is not defined
>>> 2014-11-05 18:27:31,470 [main] INFO http.HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
>>> 2014-11-05 18:27:31,477 [main] INFO http.HttpServer2 - Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context slideram
>>> 2014-11-05 18:27:31,477 [main] INFO http.HttpServer2 - Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context static
>>> 2014-11-05 18:27:31,483 [main] INFO http.HttpServer2 - adding path spec: /slideram/*
>>> 2014-11-05 18:27:31,483 [main] INFO http.HttpServer2 - adding path spec: /ws/*
>>> 2014-11-05 18:27:31,487 [main] INFO http.HttpServer2 - Jetty bound to port 46717
2014-11-05 18:27:31,487 [main] INFO mortbay.log - jetty-6.1.26 >>> 2014-11-05 18:27:31,516 [main] INFO mortbay.log - Extract >>> >> jar:file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1415211406300_0001/filecache/74/slider.jar!/webapps/slideram >>> to /tmp/Jetty_0_0_0_0_46717_slideram____xx95bl/webapp >>> 2014-11-05 18:27:31,753 [main] INFO mortbay.log - Started HttpServer2$ >>> SelectChannelConnectorWithSafeStartup@0.0.0.0:46717 >>> 2014-11-05 18:27:31,753 [main] INFO webapp.WebApps - Web app /slideram >>> started at 46717 >>> 2014-11-05 18:27:32,173 [main] INFO webapp.WebApps - Registered webapp >>> guice modules >>> 2014-11-05 18:27:32,176 [main] INFO appmaster.SliderAppMaster - >> Connecting >>> to RM at 45997,address tracking URL=http://hdfs03:46717 >>> 2014-11-05 18:27:32,319 [main] INFO appmaster.SliderAppMaster - Token >>> YARN_AM_RM_TOKEN >>> 2014-11-05 18:27:32,321 [main] INFO agent.AgentUtils - Reading metainfo >> at >>> .slider/package/memcached/jmemcached-1.0.0.zip >>> 2014-11-05 18:27:32,374 [main] INFO tools.SliderUtils - Reading >>> metainfo.xml of size 1898 >>> 2014-11-05 18:27:32,477 [main] INFO agent.HeartbeatMonitor - Starting >>> heartbeat monitor with interval 60000 >>> 2014-11-05 18:27:32,592 [main] INFO state.AppState - Adding role >> MEMCACHED >>> 2014-11-05 18:27:32,592 [main] INFO state.AppState - Role MEMCACHED >>> assigned priority 1 >>> 2014-11-05 18:27:32,592 [main] INFO state.AppState - Role MEMCACHED >> flexed >>> from 0 to 1 >>> 2014-11-05 18:27:32,737 [main] INFO appmaster.SliderAppMaster - Chaos >>> monkey disabled >>> 2014-11-05 18:27:32,737 [main] INFO appmaster.SliderAppMaster - >>> HADOOP_USER_NAME='root' >>> 2014-11-05 18:27:32,761 [main] INFO appmaster.SliderAppMaster - Registry >>> service username =root >>> 2014-11-05 18:27:32,870 [main] INFO appmaster.SliderAppMaster - Service >>> Record >>> ServiceRecord{description='Slider Application Master'; external >> endpoints: >>> {Endpoint{api='org.apache.slider.appmaster', 
addressType='host/port', >>> protocolType='hadoop/protobuf', addresses=[ [ "hdfs03" "45997" ] ] }; >>> Endpoint{api='org.apache.http.UI', addressType='uri', >> protocolType='webui', >>> addresses=[ [ "http://hdfs03:46717" ] ] }; >>> Endpoint{api='org.apache.slider.management', addressType='uri', >>> protocolType='REST', addresses=[ [ " >> http://hdfs03:46717/ws/v1/slider/mgmt" >>> ] ] }; Endpoint{api='org.apache.slider.publisher', addressType='uri', >>> protocolType='REST', addresses=[ [ " >>> http://hdfs03:46717/ws/v1/slider/publisher" ] ] }; >>> Endpoint{api='org.apache.slider.registry', addressType='uri', >>> protocolType='REST', addresses=[ [ " >>> http://hdfs03:46717/ws/v1/slider/registry" ] ] }; >>> Endpoint{api='org.apache.slider.publisher.configurations', >>> addressType='uri', protocolType='REST', addresses=[ [ " >>> http://hdfs03:46717/ws/v1/slider/publisher/slider" ] ] }; >>> Endpoint{api='org.apache.slider.publisher.exports', addressType='uri', >>> protocolType='REST', addresses=[ [ " >>> http://hdfs03:46717/ws/v1/slider/publisher/exports" ] ] }; }; internal >>> endpoints: {Endpoint{api='org.apache.slider.agents.secure', >>> addressType='uri', protocolType='REST', addresses=[ [ " >>> https://hdfs03:59454/ws/v1/slider/agents" ] ] }; >>> Endpoint{api='org.apache.slider.agents.oneway', addressType='uri', >>> protocolType='REST', addresses=[ [ " >> https://hdfs03:51682/ws/v1/slider/agents" >>> ] ] }; }, attributes: {"yarn:persistence"="application" >>> "yarn:id"="application_1415211406300_0001" }} >>> 2014-11-05 18:27:32,978 [main] INFO zk.RegistryOperationsService - Bound >>> at /users/root/services/org-apache-slider/cl2 : >>> ServiceRecord{description='Slider Application Master'; external >> endpoints: >>> {Endpoint{api='org.apache.slider.appmaster', addressType='host/port', >>> protocolType='hadoop/protobuf', addresses=[ [ "hdfs03" "45997" ] ] }; >>> Endpoint{api='org.apache.http.UI', addressType='uri', >> protocolType='webui', >>> addresses=[ [ 
"http://hdfs03:46717" ] ] }; >>> Endpoint{api='org.apache.slider.management', addressType='uri', >>> protocolType='REST', addresses=[ [ " >> http://hdfs03:46717/ws/v1/slider/mgmt" >>> ] ] }; Endpoint{api='org.apache.slider.publisher', addressType='uri', >>> protocolType='REST', addresses=[ [ " >>> http://hdfs03:46717/ws/v1/slider/publisher" ] ] }; >>> Endpoint{api='org.apache.slider.registry', addressType='uri', >>> protocolType='REST', addresses=[ [ " >>> http://hdfs03:46717/ws/v1/slider/registry" ] ] }; >>> Endpoint{api='org.apache.slider.publisher.configurations', >>> addressType='uri', protocolType='REST', addresses=[ [ " >>> http://hdfs03:46717/ws/v1/slider/publisher/slider" ] ] }; >>> Endpoint{api='org.apache.slider.publisher.exports', addressType='uri', >>> protocolType='REST', addresses=[ [ " >>> http://hdfs03:46717/ws/v1/slider/publisher/exports" ] ] }; }; internal >>> endpoints: {Endpoint{api='org.apache.slider.agents.secure', >>> addressType='uri', protocolType='REST', addresses=[ [ " >>> https://hdfs03:59454/ws/v1/slider/agents" ] ] }; >>> Endpoint{api='org.apache.slider.agents.oneway', addressType='uri', >>> protocolType='REST', addresses=[ [ " >> https://hdfs03:51682/ws/v1/slider/agents" >>> ] ] }; }, attributes: {"yarn:persistence"="application" >>> "yarn:id"="application_1415211406300_0001" }} >>> 2014-11-05 18:27:33,071 [main] INFO appmaster.SliderAppMaster - >> Registered >>> service under /users/root/services/org-apache-slider/cl2; absolute path >>> /registry/users/root/services/org-apache-slider/cl2 >>> 2014-11-05 18:27:33,087 [main] INFO zk.RegistryOperationsService - Bound >>> at >>> >> /users/root/services/org-apache-slider/cl2/components/appattempt-1415211406300-0001-000001 >>> : ServiceRecord{description='Slider Application Master'; external >>> endpoints: {Endpoint{api='org.apache.slider.appmaster', >>> addressType='host/port', protocolType='hadoop/protobuf', addresses=[ [ >>> "hdfs03" "45997" ] ] }; Endpoint{api='org.apache.http.UI', >>> 
addressType='uri', protocolType='webui', addresses=[ [ " >> http://hdfs03:46717" >>> ] ] }; Endpoint{api='org.apache.slider.management', addressType='uri', >>> protocolType='REST', addresses=[ [ " >> http://hdfs03:46717/ws/v1/slider/mgmt" >>> ] ] }; Endpoint{api='org.apache.slider.publisher', addressType='uri', >>> protocolType='REST', addresses=[ [ " >>> http://hdfs03:46717/ws/v1/slider/publisher" ] ] }; >>> Endpoint{api='org.apache.slider.registry', addressType='uri', >>> protocolType='REST', addresses=[ [ " >>> http://hdfs03:46717/ws/v1/slider/registry" ] ] }; >>> Endpoint{api='org.apache.slider.publisher.configurations', >>> addressType='uri', protocolType='REST', addresses=[ [ " >>> http://hdfs03:46717/ws/v1/slider/publisher/slider" ] ] }; >>> Endpoint{api='org.apache.slider.publisher.exports', addressType='uri', >>> protocolType='REST', addresses=[ [ " >>> http://hdfs03:46717/ws/v1/slider/publisher/exports" ] ] }; }; internal >>> endpoints: {Endpoint{api='org.apache.slider.agents.secure', >>> addressType='uri', protocolType='REST', addresses=[ [ " >>> https://hdfs03:59454/ws/v1/slider/agents" ] ] }; >>> Endpoint{api='org.apache.slider.agents.oneway', addressType='uri', >>> protocolType='REST', addresses=[ [ " >> https://hdfs03:51682/ws/v1/slider/agents" >>> ] ] }; }, attributes: {"yarn:persistence"="application" >>> "yarn:id"="application_1415211406300_0001" }} >>> 2014-11-05 18:27:33,096 [main] INFO appmaster.SliderAppMaster - RM >> Webapp >>> address 0.0.0.0:8088 >>> 2014-11-05 18:27:33,096 [main] INFO appmaster.SliderAppMaster - slider >>> Webapp address http://hdfs03:46717 >>> 2014-11-05 18:27:33,096 [main] INFO appmaster.SliderAppMaster - >>> Application Master Initialization Completed >>> 2014-11-05 18:27:33,096 [main] INFO appmaster.SliderAppMaster - Queue >>> Processing started >>> 2014-11-05 18:27:33,102 [AmExecutor-005] INFO actions.QueueService - >>> QueueService processor started >>> 2014-11-05 18:27:33,111 [AmExecutor-006] INFO 
actions.QueueExecutor - >>> Queue Executor run() started >>> 2014-11-05 18:27:33,169 [main] INFO agent.AgentClientProvider - >> Validating >>> app definition .slider/package/memcached/jmemcached-1.0.0.zip >>> 2014-11-05 18:27:33,223 [AmExecutor-006] INFO state.AppState - Reviewing >>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, >>> actual=0, requested=0, releasing=0, failed=0, started=0, startFailed=0, >>> completed=0, failureMessage=''} >>> 2014-11-05 18:27:33,223 [AmExecutor-006] INFO state.AppState - >> MEMCACHED: >>> Asking for 1 more nodes(s) for a total of 1 >>> 2014-11-05 18:27:33,233 [AmExecutor-006] INFO state.AppState - Container >>> ask is Capability[<memory:256, vCores:1>]Priority[1073741825] and label = >>> null >>> 2014-11-05 18:27:34,422 [AMRM Heartbeater thread] INFO >> impl.AMRMClientImpl >>> - Received new token for : hdfs03:59013 >>> 2014-11-05 18:27:34,424 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersAllocated(1) >>> 2014-11-05 18:27:34,428 [AMRM Callback Handler Thread] INFO >> state.AppState >>> - Assigning role MEMCACHED to container >>> container_1415211406300_0001_01_000002, on hdfs03:59013, >>> 2014-11-05 18:27:34,434 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Diagnostics: >>> RoleStatus{name='slider-appmaster', key=0, minimum=0, maximum=1, >> desired=1, >>> actual=1, requested=0, releasing=0, failed=0, started=1, startFailed=0, >>> completed=0, failureMessage=''} >>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, >>> actual=1, requested=0, releasing=0, failed=0, started=0, startFailed=0, >>> completed=0, failureMessage=''} >>> >>> 2014-11-05 18:27:34,455 [RoleLaunchService-007] INFO >>> agent.AgentProviderService - Build launch context for Agent >>> 2014-11-05 18:27:34,456 [RoleLaunchService-007] INFO >>> agent.AgentProviderService - AGENT_WORK_ROOT set to $PWD >>> 2014-11-05 18:27:34,456 [RoleLaunchService-007] INFO >>> 
agent.AgentProviderService - AGENT_LOG_ROOT set to <LOG_DIR> >>> 2014-11-05 18:27:34,456 [RoleLaunchService-007] INFO >>> agent.AgentProviderService - PYTHONPATH set to >> ./infra/agent/slider-agent/ >>> 2014-11-05 18:27:34,475 [RoleLaunchService-007] INFO >>> agent.AgentProviderService - Using >>> ./infra/agent/slider-agent/agent/main.py for agent. >>> 2014-11-05 18:27:34,486 [RoleLaunchService-007] INFO >>> appmaster.RoleLaunchService - Starting container with command: python >>> ./infra/agent/slider-agent/agent/main.py --label >>> container_1415211406300_0001_01_000002___MEMCACHED --zk-quorum >>> localhost:2181 --zk-reg-path >>> /registry/users/root/services/org-apache-slider/cl2 > >>> <LOG_DIR>/slider-agent.out 2>&1 ; >>> 2014-11-05 18:27:34,510 [AmExecutor-006] WARN appmaster.SliderAppMaster >> - >>> No delegation tokens obtained and set for launch context >>> 2014-11-05 18:27:34,527 >>> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO >>> impl.NMClientAsyncImpl - Processing Event EventType: START_CONTAINER for >>> Container container_1415211406300_0001_01_000002 >>> 2014-11-05 18:27:34,531 >>> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO >>> impl.ContainerManagementProtocolProxy - Opening proxy : hdfs03:59013 >>> 2014-11-05 18:27:34,606 >>> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO >>> appmaster.SliderAppMaster - Started Container >>> container_1415211406300_0001_01_000002 >>> 2014-11-05 18:27:34,804 >>> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO >>> appmaster.SliderAppMaster - Deployed instance of role MEMCACHED onto >>> container_1415211406300_0001_01_000002 >>> 2014-11-05 18:27:34,805 [AmExecutor-006] INFO appmaster.SliderAppMaster >> - >>> Registering component container_1415211406300_0001_01_000002 >>> 2014-11-05 18:27:34,812 >>> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #1] INFO >>> impl.NMClientAsyncImpl - Processing 
Event EventType: QUERY_CONTAINER for >>> Container container_1415211406300_0001_01_000002 >>> 2014-11-05 18:27:34,812 >>> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #1] INFO >>> impl.ContainerManagementProtocolProxy - Opening proxy : hdfs03:59013 >>> 2014-11-05 18:27:34,820 [AmExecutor-006] INFO >> zk.RegistryOperationsService >>> - Bound at >>> >> /users/root/services/org-apache-slider/cl2/components/container-1415211406300-0001-01-000002 >>> : ServiceRecord{description='MEMCACHED'; external endpoints: {}; internal >>> endpoints: {}, attributes: {"yarn:persistence"="container" >>> "yarn:id"="container-1415211406300-0001-01-000002" }} >>> Nov 05, 2014 6:27:35 PM com.sun.jersey.spi.inject.Errors >>> processErrorMessages >>> WARNING: The following warnings have been detected with resource and/or >>> provider classes: >>> WARNING: A sub-resource method, public javax.ws.rs.core.Response >>> >> org.apache.slider.server.appmaster.web.rest.agent.AgentResource.endpointRoot(), >>> with URI template, "/", is treated as a resource method >>> 2014-11-05 18:27:35,994 [945188962@qtp-1799750533-5] INFO >>> agent.AgentProviderService - Handling registration: responseId=-1 >>> timestamp=1415212055696 >>> label=container_1415211406300_0001_01_000002___MEMCACHED >>> hostname=hdfs03 >>> expectedState=INIT >>> actualState=INIT >>> >>> 2014-11-05 18:27:35,995 [945188962@qtp-1799750533-5] INFO >>> agent.AgentProviderService - Registration response: >>> RegistrationResponse{response=OK, responseId=0, statusCommands=null} >>> 2014-11-05 18:27:46,089 [945188962@qtp-1799750533-5] INFO >>> agent.AgentProviderService - Installing MEMCACHED on >>> container_1415211406300_0001_01_000002. >>> 2014-11-05 18:27:46,278 [945188962@qtp-1799750533-5] INFO >>> agent.AgentProviderService - Component operation. 
Status: IN_PROGRESS >>> 2014-11-05 18:27:46,628 [945188962@qtp-1799750533-5] INFO >>> agent.AgentProviderService - Recording allocated port for >>> global.listen_port as 56536{DO_NOT_PROPAGATE} >>> 2014-11-05 18:27:46,628 [945188962@qtp-1799750533-5] WARN >>> agent.AgentProviderService - Failed to parse 56536{DO_NOT_PROPAGATE}: >>> java.lang.NumberFormatException: For input string: >> "56536{DO_NOT_PROPAGATE}" >>> 2014-11-05 18:27:46,628 [945188962@qtp-1799750533-5] INFO >>> agent.AgentProviderService - Publishing hdfs03:56536{DO_NOT_PROPAGATE} >> for >>> name host_port and container container_1415211406300_0001_01_000002 >>> 2014-11-05 18:27:46,629 [945188962@qtp-1799750533-5] INFO >>> agent.AgentProviderService - publishing >>> PublishedConfiguration{description='ComponentInstanceData' entries = 1} >>> 2014-11-05 18:27:46,629 [945188962@qtp-1799750533-5] INFO >>> agent.AgentProviderService - Component operation. Status: COMPLETED >>> 2014-11-05 18:27:46,629 [AmExecutor-006] INFO appmaster.SliderAppMaster >> - >>> Registering component container_1415211406300_0001_01_000002 >>> 2014-11-05 18:27:46,638 [945188962@qtp-1799750533-5] INFO >>> agent.AgentProviderService - Updating log and pwd folders for container >>> container_1415211406300_0001_01_000002 >>> 2014-11-05 18:27:46,638 [945188962@qtp-1799750533-5] INFO >>> agent.AgentProviderService - Updating log and pwd folders for container >>> container_1415211406300_0001_01_000002 >>> 2014-11-05 18:27:46,639 [945188962@qtp-1799750533-5] INFO >>> agent.AgentProviderService - Starting MEMCACHED on >>> container_1415211406300_0001_01_000002. 
>>> 2014-11-05 18:27:46,643 [AmExecutor-006] INFO >> zk.RegistryOperationsService >>> - Bound at >>> >> /users/root/services/org-apache-slider/cl2/components/container-1415211406300-0001-01-000002 >>> : ServiceRecord{description='MEMCACHED'; external endpoints: {}; internal >>> endpoints: {}, attributes: {"yarn:persistence"="container" >>> "yarn:id"="container-1415211406300-0001-01-000002" }} >>> 2014-11-05 18:27:48,648 [945188962@qtp-1799750533-5] INFO >>> agent.AgentProviderService - Component operation. Status: IN_PROGRESS >>> 2014-11-05 18:27:48,957 [945188962@qtp-1799750533-5] INFO >>> agent.AgentProviderService - Component operation. Status: COMPLETED >>> 2014-11-05 18:27:48,957 [945188962@qtp-1799750533-5] INFO >>> agent.AgentProviderService - Requesting applied config for MEMCACHED on >>> container_1415211406300_0001_01_000002. >>> 2014-11-05 18:27:50,921 [945188962@qtp-1799750533-5] INFO >>> agent.AgentProviderService - Processing 1 status reports. >>> 2014-11-05 18:27:50,921 [945188962@qtp-1799750533-5] INFO >>> agent.AgentProviderService - Status report: >>> ComponentStatus{componentName='MEMCACHED', msg='null', status='null', >>> serviceName='cl2', clusterName='cl2', roleCommand='GET_CONFIG'} >>> 2014-11-05 18:27:50,921 [945188962@qtp-1799750533-5] INFO >>> agent.AgentProviderService - Received and processed config for >>> container_1415211406300_0001_01_000002___MEMCACHED >>> 2014-11-05 18:28:22,735 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersCompleted([1] >>> 2014-11-05 18:28:22,736 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Container Completion for >>> containerID=container_1415211406300_0001_01_000002, state=COMPLETE, >>> exitStatus=0, diagnostics= >>> 2014-11-05 18:28:22,738 [AMRM Callback Handler Thread] INFO >> state.AppState >>> - Failed container in role[1] : MEMCACHED >>> 2014-11-05 18:28:22,738 [AMRM Callback Handler Thread] INFO >> state.AppState >>> - Current count of failed 
role[1] MEMCACHED = 1 >>> 2014-11-05 18:28:22,804 [AMRM Callback Handler Thread] INFO >> state.AppState >>> - Removing node ID container_1415211406300_0001_01_000002 >>> 2014-11-05 18:28:22,804 [AMRM Callback Handler Thread] ERROR >>> appmaster.SliderAppMaster - Role instance RoleInstance{role='MEMCACHED', >>> id='container_1415211406300_0001_01_000002', >>> container=ContainerID=container_1415211406300_0001_01_000002 >>> nodeID=hdfs03:59013 http=hdfs03:8042 priority=1073741825, >>> createTime=1415212054511, startTime=1415212054606, released=false, >>> roleId=1, host=hdfs03, hostURL=http://hdfs03:8042, state=5, exitCode=0, >>> command='python ./infra/agent/slider-agent/agent/main.py --label >>> container_1415211406300_0001_01_000002___MEMCACHED --zk-quorum >>> localhost:2181 --zk-reg-path >>> /registry/users/root/services/org-apache-slider/cl2 > >>> <LOG_DIR>/slider-agent.out 2>&1 ; ', diagnostics='', output=null, >>> environment=[AGENT_WORK_ROOT="$PWD", HADOOP_USER_NAME="root", >>> AGENT_LOG_ROOT="<LOG_DIR>", PYTHONPATH="./infra/agent/slider-agent/", >>> SLIDER_PASSPHRASE="qGnrsDpoLqO9TrXEdpIPQwmTgiUfPlEMj5VAaMmaxAZiS8rS9L", >>> MALLOC_ARENA_MAX="4"]} failed >>> 2014-11-05 18:28:22,805 [AMRM Callback Handler Thread] INFO >>> agent.AgentProviderService - Removing container specific data for >>> container_1415211406300_0001_01_000002 >>> 2014-11-05 18:28:22,805 [AMRM Callback Handler Thread] INFO >>> agent.AgentProviderService - publishing >>> PublishedConfiguration{description='ComponentInstanceData' entries = 0} >>> 2014-11-05 18:28:22,805 [AMRM Callback Handler Thread] INFO >>> agent.AgentProviderService - Removing component status for label >>> container_1415211406300_0001_01_000002___MEMCACHED >>> 2014-11-05 18:28:22,806 [AmExecutor-006] INFO appmaster.SliderAppMaster >> - >>> Unregistering component container_1415211406300_0001_01_000002 >>> 2014-11-05 18:28:22,817 [AmExecutor-006] INFO state.AppState - Reviewing >>> RoleStatus{name='MEMCACHED', key=1, 
minimum=0, maximum=1, desired=1, >>> actual=0, requested=0, releasing=0, failed=1, started=1, startFailed=1, >>> completed=0, failureMessage='Failure >> container_1415211406300_0001_01_000002 >>> on host hdfs03: '} >>> 2014-11-05 18:28:22,818 [AmExecutor-006] INFO state.AppState - >> MEMCACHED: >>> Asking for 1 more nodes(s) for a total of 1 >>> 2014-11-05 18:28:22,818 [AmExecutor-006] INFO state.AppState - Container >>> ask is Capability[<memory:256, vCores:1>]Priority[1073741825] and label = >>> null >>> 2014-11-05 18:28:24,753 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersAllocated(1) >>> 2014-11-05 18:28:24,754 [AMRM Callback Handler Thread] INFO >> state.AppState >>> - Assigning role MEMCACHED to container >>> container_1415211406300_0001_01_000003, on hdfs03:59013, >>> 2014-11-05 18:28:24,754 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Diagnostics: >>> RoleStatus{name='slider-appmaster', key=0, minimum=0, maximum=1, >> desired=1, >>> actual=1, requested=0, releasing=0, failed=0, started=1, startFailed=0, >>> completed=0, failureMessage=''} >>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, >>> actual=1, requested=0, releasing=0, failed=1, started=1, startFailed=1, >>> completed=0, failureMessage='Failure >> container_1415211406300_0001_01_000002 >>> on host hdfs03: '} >>> >>> 2014-11-05 18:28:24,756 [RoleLaunchService-007] INFO >>> agent.AgentProviderService - Build launch context for Agent >>> 2014-11-05 18:28:24,759 [RoleLaunchService-007] INFO >>> agent.AgentProviderService - AGENT_WORK_ROOT set to $PWD >>> 2014-11-05 18:28:24,759 [RoleLaunchService-007] INFO >>> agent.AgentProviderService - AGENT_LOG_ROOT set to <LOG_DIR> >>> 2014-11-05 18:28:24,759 [RoleLaunchService-007] INFO >>> agent.AgentProviderService - PYTHONPATH set to >> ./infra/agent/slider-agent/ >>> 2014-11-05 18:28:24,767 [RoleLaunchService-007] INFO >>> agent.AgentProviderService - Using >>> 
./infra/agent/slider-agent/agent/main.py for agent. >>> 2014-11-05 18:28:24,776 [RoleLaunchService-007] INFO >>> appmaster.RoleLaunchService - Starting container with command: python >>> ./infra/agent/slider-agent/agent/main.py --label >>> container_1415211406300_0001_01_000003___MEMCACHED --zk-quorum >>> localhost:2181 --zk-reg-path >>> /registry/users/root/services/org-apache-slider/cl2 > >>> <LOG_DIR>/slider-agent.out 2>&1 ; >>> 2014-11-05 18:28:24,780 [AmExecutor-006] WARN appmaster.SliderAppMaster >> - >>> No delegation tokens obtained and set for launch context >>> 2014-11-05 18:28:24,782 >>> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #2] INFO >>> impl.NMClientAsyncImpl - Processing Event EventType: START_CONTAINER for >>> Container container_1415211406300_0001_01_000003 >>> 2014-11-05 18:28:24,783 >>> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #2] INFO >>> impl.ContainerManagementProtocolProxy - Opening proxy : hdfs03:59013 >>> 2014-11-05 18:28:24,811 >>> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #2] INFO >>> appmaster.SliderAppMaster - Started Container >>> container_1415211406300_0001_01_000003 >>> 2014-11-05 18:28:24,857 >>> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #2] INFO >>> appmaster.SliderAppMaster - Deployed instance of role MEMCACHED onto >>> container_1415211406300_0001_01_000003 >>> 2014-11-05 18:28:24,857 [AmExecutor-006] INFO appmaster.SliderAppMaster >> - >>> Registering component container_1415211406300_0001_01_000003 >>> 2014-11-05 18:28:24,859 >>> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #3] INFO >>> impl.NMClientAsyncImpl - Processing Event EventType: QUERY_CONTAINER for >>> Container container_1415211406300_0001_01_000003 >>> 2014-11-05 18:28:24,859 >>> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #3] INFO >>> impl.ContainerManagementProtocolProxy - Opening proxy : hdfs03:59013 >>> 2014-11-05 18:28:24,867 
[AmExecutor-006] INFO >> zk.RegistryOperationsService >>> - Bound at >>> >> /users/root/services/org-apache-slider/cl2/components/container-1415211406300-0001-01-000003 >>> : ServiceRecord{description='MEMCACHED'; external endpoints: {}; internal >>> endpoints: {}, attributes: {"yarn:persistence"="container" >>> "yarn:id"="container-1415211406300-0001-01-000003" }} >>> 2014-11-05 18:28:25,441 [945188962@qtp-1799750533-5] INFO >>> agent.AgentProviderService - Handling registration: responseId=-1 >>> timestamp=1415212105292 >>> label=container_1415211406300_0001_01_000003___MEMCACHED >>> hostname=hdfs03 >>> expectedState=INIT >>> actualState=INIT >>> >>> 2014-11-05 18:28:25,441 [945188962@qtp-1799750533-5] INFO >>> agent.AgentProviderService - Registration response: >>> RegistrationResponse{response=OK, responseId=0, statusCommands=null} >>> 2014-11-05 18:28:25,764 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersAllocated(1) >>> 2014-11-05 18:28:25,764 [AMRM Callback Handler Thread] INFO >> state.AppState >>> - Discarding surplus container container_1415211406300_0001_01_000004 on >>> hdfs03:59013 >>> 2014-11-05 18:28:25,766 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Diagnostics: >>> RoleStatus{name='slider-appmaster', key=0, minimum=0, maximum=1, >> desired=1, >>> actual=1, requested=0, releasing=0, failed=0, started=1, startFailed=0, >>> completed=0, failureMessage=''} >>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, >>> actual=1, requested=0, releasing=0, failed=1, started=2, startFailed=1, >>> completed=0, failureMessage='Failure >> container_1415211406300_0001_01_000002 >>> on host hdfs03: '} >>> >>> 2014-11-05 18:28:26,779 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersCompleted([1] >>> 2014-11-05 18:28:26,780 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Container Completion for >>> 
>>> containerID=container_1415211406300_0001_01_000004, state=COMPLETE, exitStatus=-100, diagnostics=Container released by application
>>> 2014-11-05 18:28:26,781 [AmExecutor-006] INFO appmaster.SliderAppMaster - Unregistering component container_1415211406300_0001_01_000004
>>> 2014-11-05 18:28:26,793 [AmExecutor-006] INFO state.AppState - Reviewing RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, actual=1, requested=0, releasing=0, failed=1, started=2, startFailed=1, completed=0, failureMessage='Failure container_1415211406300_0001_01_000002 on host hdfs03: '}
>>> 2014-11-05 18:28:35,463 [945188962@qtp-1799750533-5] INFO agent.AgentProviderService - Installing MEMCACHED on container_1415211406300_0001_01_000003.
>>> 2014-11-05 18:28:35,838 [945188962@qtp-1799750533-5] INFO agent.AgentProviderService - Recording allocated port for global.listen_port as 44378{DO_NOT_PROPAGATE}
>>> 2014-11-05 18:28:35,838 [945188962@qtp-1799750533-5] WARN agent.AgentProviderService - Failed to parse 44378{DO_NOT_PROPAGATE}: java.lang.NumberFormatException: For input string: "44378{DO_NOT_PROPAGATE}"
>>> 2014-11-05 18:28:35,838 [945188962@qtp-1799750533-5] INFO agent.AgentProviderService - Publishing hdfs03:44378{DO_NOT_PROPAGATE} for name host_port and container container_1415211406300_0001_01_000003
>>> 2014-11-05 18:28:35,838 [945188962@qtp-1799750533-5] INFO agent.AgentProviderService - publishing PublishedConfiguration{description='ComponentInstanceData' entries = 1}
>>> 2014-11-05 18:28:35,838 [AmExecutor-006] INFO appmaster.SliderAppMaster - Registering component container_1415211406300_0001_01_000003
>>> 2014-11-05 18:28:35,839 [945188962@qtp-1799750533-5] INFO agent.AgentProviderService - Component operation. Status: COMPLETED
>>> 2014-11-05 18:28:35,839 [945188962@qtp-1799750533-5] INFO agent.AgentProviderService - Updating log and pwd folders for container container_1415211406300_0001_01_000003
>>> 2014-11-05 18:28:35,840 [945188962@qtp-1799750533-5] INFO agent.AgentProviderService - Updating log and pwd folders for container container_1415211406300_0001_01_000003
>>> 2014-11-05 18:28:35,840 [945188962@qtp-1799750533-5] INFO agent.AgentProviderService - Starting MEMCACHED on container_1415211406300_0001_01_000003.
>>> 2014-11-05 18:28:35,849 [AmExecutor-006] INFO zk.RegistryOperationsService - Bound at /users/root/services/org-apache-slider/cl2/components/container-1415211406300-0001-01-000003 : ServiceRecord{description='MEMCACHED'; external endpoints: {}; internal endpoints: {}, attributes: {"yarn:persistence"="container" "yarn:id"="container-1415211406300-0001-01-000003" }}
>>> 2014-11-05 18:28:37,806 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Component operation. Status: IN_PROGRESS
>>> 2014-11-05 18:28:47,827 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Component operation. Status: COMPLETED
>>> 2014-11-05 18:28:47,827 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Requesting applied config for MEMCACHED on container_1415211406300_0001_01_000003.
>>> 2014-11-05 18:28:57,856 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Processing 1 status reports.
>>> 2014-11-05 18:28:57,856 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Status report: ComponentStatus{componentName='MEMCACHED', msg='null', status='null', serviceName='cl2', clusterName='cl2', roleCommand='GET_CONFIG'}
>>> 2014-11-05 18:28:57,856 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Received and processed config for container_1415211406300_0001_01_000003___MEMCACHED
>>> 2014-11-05 18:29:19,016 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - onContainersCompleted([1]
>>> 2014-11-05 18:29:19,017 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - Container Completion for containerID=container_1415211406300_0001_01_000003, state=COMPLETE, exitStatus=0, diagnostics=
>>> 2014-11-05 18:29:19,017 [AMRM Callback Handler Thread] INFO state.AppState - Failed container in role[1] : MEMCACHED
>>> 2014-11-05 18:29:19,018 [AMRM Callback Handler Thread] INFO state.AppState - Current count of failed role[1] MEMCACHED = 2
>>> 2014-11-05 18:29:19,081 [AMRM Callback Handler Thread] INFO state.AppState - Removing node ID container_1415211406300_0001_01_000003
>>> 2014-11-05 18:29:19,082 [AMRM Callback Handler Thread] ERROR appmaster.SliderAppMaster - Role instance RoleInstance{role='MEMCACHED', id='container_1415211406300_0001_01_000003', container=ContainerID=container_1415211406300_0001_01_000003 nodeID=hdfs03:59013 http=hdfs03:8042 priority=1073741825, createTime=1415212104781, startTime=1415212104811, released=false, roleId=1, host=hdfs03, hostURL=http://hdfs03:8042, state=5, exitCode=0, command='python ./infra/agent/slider-agent/agent/main.py --label container_1415211406300_0001_01_000003___MEMCACHED --zk-quorum localhost:2181 --zk-reg-path /registry/users/root/services/org-apache-slider/cl2 > <LOG_DIR>/slider-agent.out 2>&1 ; ', diagnostics='', output=null, environment=[AGENT_WORK_ROOT="$PWD", HADOOP_USER_NAME="root", AGENT_LOG_ROOT="<LOG_DIR>", PYTHONPATH="./infra/agent/slider-agent/", SLIDER_PASSPHRASE="qGnrsDpoLqO9TrXEdpIPQwmTgiUfPlEMj5VAaMmaxAZiS8rS9L", MALLOC_ARENA_MAX="4"]} failed
>>> 2014-11-05 18:29:19,082 [AMRM Callback Handler Thread] INFO agent.AgentProviderService - Removing container specific data for container_1415211406300_0001_01_000003
>>> 2014-11-05 18:29:19,082 [AMRM Callback Handler Thread] INFO agent.AgentProviderService - publishing PublishedConfiguration{description='ComponentInstanceData' entries = 0}
>>> 2014-11-05 18:29:19,083 [AMRM Callback Handler Thread] INFO agent.AgentProviderService - Removing component status for label container_1415211406300_0001_01_000003___MEMCACHED
>>> 2014-11-05 18:29:19,083 [AmExecutor-006] INFO appmaster.SliderAppMaster - Unregistering component container_1415211406300_0001_01_000003
>>> 2014-11-05 18:29:19,095 [AmExecutor-006] INFO state.AppState - Reviewing RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, actual=0, requested=0, releasing=0, failed=2, started=2, startFailed=2, completed=0, failureMessage='Failure container_1415211406300_0001_01_000003 on host hdfs03: '}
>>> 2014-11-05 18:29:19,096 [AmExecutor-006] INFO state.AppState - MEMCACHED: Asking for 1 more nodes(s) for a total of 1
>>> 2014-11-05 18:29:19,096 [AmExecutor-006] INFO state.AppState - Container ask is Capability[<memory:256, vCores:1>]Priority[1073741825] and label = null
>>> 2014-11-05 18:29:21,038 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - onContainersAllocated(1)
>>> 2014-11-05 18:29:21,038 [AMRM Callback Handler Thread] INFO state.AppState - Assigning role MEMCACHED to container container_1415211406300_0001_01_000005, on hdfs03:59013,
>>> 2014-11-05 18:29:21,038 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - Diagnostics: RoleStatus{name='slider-appmaster', key=0, minimum=0, maximum=1, desired=1, actual=1, requested=0, releasing=0, failed=0, started=1, startFailed=0, completed=0, failureMessage=''}
>>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, actual=1, requested=0, releasing=0, failed=2, started=2, startFailed=2, completed=0, failureMessage='Failure container_1415211406300_0001_01_000003 on host hdfs03: '}
>>>
>>> 2014-11-05 18:29:21,041 [RoleLaunchService-007] INFO agent.AgentProviderService - Build launch context for Agent
>>> 2014-11-05 18:29:21,043 [RoleLaunchService-007] INFO agent.AgentProviderService - AGENT_WORK_ROOT set to $PWD
>>> 2014-11-05 18:29:21,043 [RoleLaunchService-007] INFO agent.AgentProviderService - AGENT_LOG_ROOT set to <LOG_DIR>
>>> 2014-11-05 18:29:21,043 [RoleLaunchService-007] INFO agent.AgentProviderService - PYTHONPATH set to ./infra/agent/slider-agent/
>>> 2014-11-05 18:29:21,053 [RoleLaunchService-007] INFO agent.AgentProviderService - Using ./infra/agent/slider-agent/agent/main.py for agent.
>>> 2014-11-05 18:29:21,063 [RoleLaunchService-007] INFO appmaster.RoleLaunchService - Starting container with command: python ./infra/agent/slider-agent/agent/main.py --label container_1415211406300_0001_01_000005___MEMCACHED --zk-quorum localhost:2181 --zk-reg-path /registry/users/root/services/org-apache-slider/cl2 > <LOG_DIR>/slider-agent.out 2>&1 ;
>>> 2014-11-05 18:29:21,066 [AmExecutor-006] WARN appmaster.SliderAppMaster - No delegation tokens obtained and set for launch context
>>> 2014-11-05 18:29:21,068 [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #4] INFO impl.NMClientAsyncImpl - Processing Event EventType: START_CONTAINER for Container container_1415211406300_0001_01_000005
>>> 2014-11-05 18:29:21,069 [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #4] INFO impl.ContainerManagementProtocolProxy - Opening proxy : hdfs03:59013
>>> 2014-11-05 18:29:21,093 [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #4] INFO appmaster.SliderAppMaster - Started Container container_1415211406300_0001_01_000005
>>> 2014-11-05 18:29:21,137 [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #4] INFO appmaster.SliderAppMaster - Deployed instance of role MEMCACHED onto container_1415211406300_0001_01_000005
>>> 2014-11-05 18:29:21,138 [AmExecutor-006] INFO appmaster.SliderAppMaster - Registering component container_1415211406300_0001_01_000005
>>> 2014-11-05 18:29:21,138 [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #5] INFO impl.NMClientAsyncImpl - Processing Event EventType: QUERY_CONTAINER for Container container_1415211406300_0001_01_000005
>>> 2014-11-05 18:29:21,139 [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #5] INFO impl.ContainerManagementProtocolProxy - Opening proxy : hdfs03:59013
>>> 2014-11-05 18:29:21,149 [AmExecutor-006] INFO zk.RegistryOperationsService - Bound at /users/root/services/org-apache-slider/cl2/components/container-1415211406300-0001-01-000005 : ServiceRecord{description='MEMCACHED'; external endpoints: {}; internal endpoints: {}, attributes: {"yarn:persistence"="container" "yarn:id"="container-1415211406300-0001-01-000005" }}
>>> 2014-11-05 18:29:21,695 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Handling registration: responseId=-1
>>> timestamp=1415212161561
>>> label=container_1415211406300_0001_01_000005___MEMCACHED
>>> hostname=hdfs03
>>> expectedState=INIT
>>> actualState=INIT
>>>
>>> 2014-11-05 18:29:21,695 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Registration response: RegistrationResponse{response=OK, responseId=0, statusCommands=null}
>>> 2014-11-05 18:29:22,049 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - onContainersAllocated(1)
>>> 2014-11-05 18:29:22,050 [AMRM Callback Handler Thread] INFO state.AppState - Discarding surplus container container_1415211406300_0001_01_000006 on hdfs03:59013
>>> 2014-11-05 18:29:22,050 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - Diagnostics: RoleStatus{name='slider-appmaster', key=0, minimum=0, maximum=1, desired=1, actual=1, requested=0, releasing=0, failed=0, started=1, startFailed=0, completed=0, failureMessage=''}
>>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, actual=1, requested=0, releasing=0, failed=2, started=3, startFailed=2, completed=0, failureMessage='Failure container_1415211406300_0001_01_000003 on host hdfs03: '}
>>>
>>> 2014-11-05 18:29:23,060 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - onContainersCompleted([1]
>>> 2014-11-05 18:29:23,061 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - Container Completion for containerID=container_1415211406300_0001_01_000006, state=COMPLETE, exitStatus=-100, diagnostics=Container released by application
>>> 2014-11-05 18:29:23,061 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - onContainersAllocated(1)
>>> 2014-11-05 18:29:23,062 [AMRM Callback Handler Thread] INFO state.AppState - Discarding surplus container container_1415211406300_0001_01_000007 on hdfs03:59013
>>> 2014-11-05 18:29:23,062 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - Diagnostics: RoleStatus{name='slider-appmaster', key=0, minimum=0, maximum=1, desired=1, actual=1, requested=0, releasing=0, failed=0, started=1, startFailed=0, completed=0, failureMessage=''}
>>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, actual=1, requested=0, releasing=0, failed=2, started=3, startFailed=2, completed=0, failureMessage='Failure container_1415211406300_0001_01_000003 on host hdfs03: '}
>>>
>>> 2014-11-05 18:29:23,062 [AmExecutor-006] INFO appmaster.SliderAppMaster - Unregistering component container_1415211406300_0001_01_000006
>>> 2014-11-05 18:29:23,072 [AmExecutor-006] INFO state.AppState - Reviewing RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, actual=1, requested=0, releasing=0, failed=2, started=3, startFailed=2, completed=0, failureMessage='Failure container_1415211406300_0001_01_000003 on host hdfs03: '}
>>> 2014-11-05 18:29:24,082 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - onContainersCompleted([1]
>>> 2014-11-05 18:29:24,084 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - Container Completion for containerID=container_1415211406300_0001_01_000007, state=COMPLETE, exitStatus=-100, diagnostics=Container released by application
>>> 2014-11-05 18:29:24,084 [AmExecutor-006] INFO appmaster.SliderAppMaster - Unregistering component container_1415211406300_0001_01_000007
>>> 2014-11-05 18:29:24,098 [AmExecutor-006] INFO state.AppState - Reviewing RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, actual=1, requested=0, releasing=0, failed=2, started=3, startFailed=2, completed=0, failureMessage='Failure container_1415211406300_0001_01_000003 on host hdfs03: '}
>>> 2014-11-05 18:29:31,718 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Installing MEMCACHED on container_1415211406300_0001_01_000005.
>>> 2014-11-05 18:29:31,865 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Component operation. Status: IN_PROGRESS
>>> 2014-11-05 18:29:32,061 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Recording allocated port for global.listen_port as 53569{DO_NOT_PROPAGATE}
>>> 2014-11-05 18:29:32,062 [2076218862@qtp-1799750533-6] WARN agent.AgentProviderService - Failed to parse 53569{DO_NOT_PROPAGATE}: java.lang.NumberFormatException: For input string: "53569{DO_NOT_PROPAGATE}"
>>> 2014-11-05 18:29:32,062 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Publishing hdfs03:53569{DO_NOT_PROPAGATE} for name host_port and container container_1415211406300_0001_01_000005
>>> 2014-11-05 18:29:32,063 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - publishing PublishedConfiguration{description='ComponentInstanceData' entries = 1}
>>> 2014-11-05 18:29:32,063 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Component operation. Status: COMPLETED
>>> 2014-11-05 18:29:32,063 [AmExecutor-006] INFO appmaster.SliderAppMaster - Registering component container_1415211406300_0001_01_000005
>>> 2014-11-05 18:29:32,065 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Updating log and pwd folders for container container_1415211406300_0001_01_000005
>>> 2014-11-05 18:29:32,066 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Updating log and pwd folders for container container_1415211406300_0001_01_000005
>>> 2014-11-05 18:29:32,069 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Starting MEMCACHED on container_1415211406300_0001_01_000005.
>>> 2014-11-05 18:29:32,075 [AmExecutor-006] INFO zk.RegistryOperationsService - Bound at /users/root/services/org-apache-slider/cl2/components/container-1415211406300-0001-01-000005 : ServiceRecord{description='MEMCACHED'; external endpoints: {}; internal endpoints: {}, attributes: {"yarn:persistence"="container" "yarn:id"="container-1415211406300-0001-01-000005" }}
>>> 2014-11-05 18:29:34,070 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Component operation. Status: COMPLETED
>>> 2014-11-05 18:29:34,070 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Requesting applied config for MEMCACHED on container_1415211406300_0001_01_000005.
>>> 2014-11-05 18:29:36,213 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Processing 1 status reports.
>>> 2014-11-05 18:29:36,213 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Status report: ComponentStatus{componentName='MEMCACHED', msg='null', status='null', serviceName='cl2', clusterName='cl2', roleCommand='GET_CONFIG'}
>>> 2014-11-05 18:29:36,213 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Received and processed config for container_1415211406300_0001_01_000005___MEMCACHED
>>> 2014-11-05 18:30:07,282 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - onContainersCompleted([1]
>>> 2014-11-05 18:30:07,283 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - Container Completion for containerID=container_1415211406300_0001_01_000005, state=COMPLETE, exitStatus=0, diagnostics=
>>> 2014-11-05 18:30:07,298 [AMRM Callback Handler Thread] INFO state.AppState - Failed container in role[1] : MEMCACHED
>>> 2014-11-05 18:30:07,298 [AMRM Callback Handler Thread] INFO state.AppState - Current count of failed role[1] MEMCACHED = 3
>>> 2014-11-05 18:30:07,361 [AMRM Callback Handler Thread] INFO state.AppState - Removing node ID container_1415211406300_0001_01_000005
>>> 2014-11-05 18:30:07,362 [AMRM Callback Handler Thread] ERROR appmaster.SliderAppMaster - Role instance RoleInstance{role='MEMCACHED', id='container_1415211406300_0001_01_000005', container=ContainerID=container_1415211406300_0001_01_000005 nodeID=hdfs03:59013 http=hdfs03:8042 priority=1073741825, createTime=1415212161067, startTime=1415212161093, released=false, roleId=1, host=hdfs03, hostURL=http://hdfs03:8042, state=5, exitCode=0, command='python ./infra/agent/slider-agent/agent/main.py --label container_1415211406300_0001_01_000005___MEMCACHED --zk-quorum localhost:2181 --zk-reg-path /registry/users/root/services/org-apache-slider/cl2 > <LOG_DIR>/slider-agent.out 2>&1 ; ', diagnostics='', output=null, environment=[AGENT_WORK_ROOT="$PWD", HADOOP_USER_NAME="root", AGENT_LOG_ROOT="<LOG_DIR>", PYTHONPATH="./infra/agent/slider-agent/", SLIDER_PASSPHRASE="qGnrsDpoLqO9TrXEdpIPQwmTgiUfPlEMj5VAaMmaxAZiS8rS9L", MALLOC_ARENA_MAX="4"]} failed
>>> 2014-11-05 18:30:07,362 [AMRM Callback Handler Thread] INFO agent.AgentProviderService - Removing container specific data for container_1415211406300_0001_01_000005
>>> 2014-11-05 18:30:07,362 [AMRM Callback Handler Thread] INFO agent.AgentProviderService - publishing PublishedConfiguration{description='ComponentInstanceData' entries = 0}
>>> 2014-11-05 18:30:07,362 [AMRM Callback Handler Thread] INFO agent.AgentProviderService - Removing component status for label container_1415211406300_0001_01_000005___MEMCACHED
>>> 2014-11-05 18:30:07,363 [AmExecutor-006] INFO appmaster.SliderAppMaster - Unregistering component container_1415211406300_0001_01_000005
>>> 2014-11-05 18:30:07,374 [AmExecutor-006] INFO state.AppState - Reviewing RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, actual=0, requested=0, releasing=0, failed=3, started=3, startFailed=3, completed=0, failureMessage='Failure container_1415211406300_0001_01_000005 on host hdfs03: '}
>>> 2014-11-05 18:30:07,374 [AmExecutor-006] INFO state.AppState - MEMCACHED: Asking for 1 more nodes(s) for a total of 1
>>> 2014-11-05 18:30:07,374 [AmExecutor-006] INFO state.AppState - Container ask is Capability[<memory:256, vCores:1>]Priority[1073741825] and label = null
>>> 2014-11-05 18:30:09,319 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - onContainersAllocated(1)
>>> 2014-11-05 18:30:09,320 [AMRM Callback Handler Thread] INFO state.AppState - Assigning role MEMCACHED to container container_1415211406300_0001_01_000008, on hdfs03:59013,
>>> 2014-11-05 18:30:09,320 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - Diagnostics: RoleStatus{name='slider-appmaster', key=0, minimum=0, maximum=1, desired=1, actual=1, requested=0, releasing=0, failed=0, started=1, startFailed=0, completed=0, failureMessage=''}
>>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, actual=1, requested=0, releasing=0, failed=3, started=3, startFailed=3, completed=0, failureMessage='Failure container_1415211406300_0001_01_000005 on host hdfs03: '}
>>>
>>> 2014-11-05 18:30:09,321 [RoleLaunchService-007] INFO agent.AgentProviderService - Build launch context for Agent
>>> 2014-11-05 18:30:09,322 [RoleLaunchService-007] INFO agent.AgentProviderService - AGENT_WORK_ROOT set to $PWD
>>> 2014-11-05 18:30:09,322 [RoleLaunchService-007] INFO agent.AgentProviderService - AGENT_LOG_ROOT set to <LOG_DIR>
>>> 2014-11-05 18:30:09,322 [RoleLaunchService-007] INFO agent.AgentProviderService - PYTHONPATH set to ./infra/agent/slider-agent/
>>> 2014-11-05 18:30:09,327 [RoleLaunchService-007] INFO agent.AgentProviderService - Using ./infra/agent/slider-agent/agent/main.py for agent.
>>> 2014-11-05 18:30:09,332 [RoleLaunchService-007] INFO appmaster.RoleLaunchService - Starting container with command: python ./infra/agent/slider-agent/agent/main.py --label container_1415211406300_0001_01_000008___MEMCACHED --zk-quorum localhost:2181 --zk-reg-path /registry/users/root/services/org-apache-slider/cl2 > <LOG_DIR>/slider-agent.out 2>&1 ;
>>> 2014-11-05 18:30:09,334 [AmExecutor-006] WARN appmaster.SliderAppMaster - No delegation tokens obtained and set for launch context
>>> 2014-11-05 18:30:09,335 [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #6] INFO impl.NMClientAsyncImpl - Processing Event EventType: START_CONTAINER for Container container_1415211406300_0001_01_000008
>>> 2014-11-05 18:30:09,335 [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #6] INFO impl.ContainerManagementProtocolProxy - Opening proxy : hdfs03:59013
>>> 2014-11-05 18:30:09,355 [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #6] INFO appmaster.SliderAppMaster - Started Container container_1415211406300_0001_01_000008
>>> 2014-11-05 18:30:09,406 [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #6] INFO appmaster.SliderAppMaster - Deployed instance of role MEMCACHED onto container_1415211406300_0001_01_000008
>>> 2014-11-05 18:30:09,408 [AmExecutor-006] INFO appmaster.SliderAppMaster - Registering component container_1415211406300_0001_01_000008
>>> 2014-11-05 18:30:09,409 [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #7] INFO impl.NMClientAsyncImpl - Processing Event EventType: QUERY_CONTAINER for Container container_1415211406300_0001_01_000008
>>> 2014-11-05 18:30:09,410 [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #7] INFO impl.ContainerManagementProtocolProxy - Opening proxy : hdfs03:59013
>>> 2014-11-05 18:30:09,420 [AmExecutor-006] INFO zk.RegistryOperationsService - Bound at /users/root/services/org-apache-slider/cl2/components/container-1415211406300-0001-01-000008 : ServiceRecord{description='MEMCACHED'; external endpoints: {}; internal endpoints: {}, attributes: {"yarn:persistence"="container" "yarn:id"="container-1415211406300-0001-01-000008" }}
>>> 2014-11-05 18:30:09,974 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Handling registration: responseId=-1
>>> timestamp=1415212209811
>>> label=container_1415211406300_0001_01_000008___MEMCACHED
>>> hostname=hdfs03
>>> expectedState=INIT
>>> actualState=INIT
>>>
>>> 2014-11-05 18:30:09,974 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Registration response: RegistrationResponse{response=OK, responseId=0, statusCommands=null}
>>> 2014-11-05 18:30:10,332 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - onContainersAllocated(1)
>>> 2014-11-05 18:30:10,333 [AMRM Callback Handler Thread] INFO state.AppState - Discarding surplus container container_1415211406300_0001_01_000009 on hdfs03:59013
>>> 2014-11-05 18:30:10,333 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - Diagnostics: RoleStatus{name='slider-appmaster', key=0, minimum=0, maximum=1, desired=1, actual=1, requested=0, releasing=0, failed=0, started=1, startFailed=0, completed=0, failureMessage=''}
>>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, actual=1, requested=0, releasing=0, failed=3, started=4, startFailed=3, completed=0, failureMessage='Failure container_1415211406300_0001_01_000005 on host hdfs03: '}
>>>
>>> 2014-11-05 18:30:11,350 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - onContainersCompleted([1]
>>> 2014-11-05 18:30:11,351 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - Container Completion for containerID=container_1415211406300_0001_01_000009, state=COMPLETE, exitStatus=-100, diagnostics=Container released by application
>>> 2014-11-05 18:30:11,351 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - onContainersAllocated(1)
>>> 2014-11-05 18:30:11,351 [AmExecutor-006] INFO appmaster.SliderAppMaster - Unregistering component container_1415211406300_0001_01_000009
>>> 2014-11-05 18:30:11,351 [AMRM Callback Handler Thread] INFO state.AppState - Discarding surplus container container_1415211406300_0001_01_000010 on hdfs03:59013
>>> 2014-11-05 18:30:11,354 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - Diagnostics: RoleStatus{name='slider-appmaster', key=0, minimum=0, maximum=1, desired=1, actual=1, requested=0, releasing=0, failed=0, started=1, startFailed=0, completed=0, failureMessage=''}
>>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, actual=1, requested=0, releasing=0, failed=3, started=4, startFailed=3, completed=0, failureMessage='Failure container_1415211406300_0001_01_000005 on host hdfs03: '}
>>>
>>> 2014-11-05 18:30:11,364 [AmExecutor-006] INFO state.AppState - Reviewing RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, actual=1, requested=0, releasing=0, failed=3, started=4, startFailed=3, completed=0, failureMessage='Failure container_1415211406300_0001_01_000005 on host hdfs03: '}
>>> 2014-11-05 18:30:12,363 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - onContainersCompleted([1]
>>> 2014-11-05 18:30:12,366 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - Container Completion for containerID=container_1415211406300_0001_01_000010, state=COMPLETE, exitStatus=-100, diagnostics=Container released by application
>>> 2014-11-05 18:30:12,366 [AmExecutor-006] INFO appmaster.SliderAppMaster - Unregistering component container_1415211406300_0001_01_000010
>>> 2014-11-05 18:30:12,368 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - onContainersAllocated(1)
>>> 2014-11-05 18:30:12,368 [AMRM Callback Handler Thread] INFO state.AppState - Discarding surplus container container_1415211406300_0001_01_000011 on hdfs03:59013
>>> 2014-11-05 18:30:12,368 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - Diagnostics: RoleStatus{name='slider-appmaster', key=0, minimum=0, maximum=1, desired=1, actual=1, requested=0, releasing=0, failed=0, started=1, startFailed=0, completed=0, failureMessage=''}
>>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, actual=1, requested=0, releasing=0, failed=3, started=4, startFailed=3, completed=0, failureMessage='Failure container_1415211406300_0001_01_000005 on host hdfs03: '}
>>>
>>> 2014-11-05 18:30:12,378 [AmExecutor-006] INFO state.AppState - Reviewing RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, actual=1, requested=0, releasing=0, failed=3, started=4, startFailed=3, completed=0, failureMessage='Failure container_1415211406300_0001_01_000005 on host hdfs03: '}
>>> 2014-11-05 18:30:13,377 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - onContainersCompleted([1]
>>> 2014-11-05 18:30:13,379 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - Container Completion for containerID=container_1415211406300_0001_01_000011, state=COMPLETE, exitStatus=-100, diagnostics=Container released by application
>>> 2014-11-05 18:30:13,380 [AmExecutor-006] INFO appmaster.SliderAppMaster - Unregistering component container_1415211406300_0001_01_000011
>>> 2014-11-05 18:30:13,392 [AmExecutor-006] INFO state.AppState - Reviewing RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, actual=1, requested=0, releasing=0, failed=3, started=4, startFailed=3, completed=0, failureMessage='Failure container_1415211406300_0001_01_000005 on host hdfs03: '}
>>> 2014-11-05 18:30:19,993 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Installing MEMCACHED on container_1415211406300_0001_01_000008.
>>> 2014-11-05 18:30:20,122 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Component operation. Status: IN_PROGRESS
>>> 2014-11-05 18:30:20,308 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Recording allocated port for global.listen_port as 36419{DO_NOT_PROPAGATE}
>>> 2014-11-05 18:30:20,308 [2076218862@qtp-1799750533-6] WARN agent.AgentProviderService - Failed to parse 36419{DO_NOT_PROPAGATE}: java.lang.NumberFormatException: For input string: "36419{DO_NOT_PROPAGATE}"
>>> 2014-11-05 18:30:20,309 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Publishing hdfs03:36419{DO_NOT_PROPAGATE} for name host_port and container container_1415211406300_0001_01_000008
>>> 2014-11-05 18:30:20,309 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - publishing PublishedConfiguration{description='ComponentInstanceData' entries = 1}
>>> 2014-11-05 18:30:20,309 [AmExecutor-006] INFO appmaster.SliderAppMaster - Registering component container_1415211406300_0001_01_000008
>>> 2014-11-05 18:30:20,310 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Component operation. Status: COMPLETED
>>> 2014-11-05 18:30:20,312 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Updating log and pwd folders for container container_1415211406300_0001_01_000008
>>> 2014-11-05 18:30:20,312 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Updating log and pwd folders for container container_1415211406300_0001_01_000008
>>> 2014-11-05 18:30:20,313 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Starting MEMCACHED on container_1415211406300_0001_01_000008.
>>> 2014-11-05 18:30:20,321 [AmExecutor-006] INFO zk.RegistryOperationsService - Bound at /users/root/services/org-apache-slider/cl2/components/container-1415211406300-0001-01-000008 : ServiceRecord{description='MEMCACHED'; external endpoints: {}; internal endpoints: {}, attributes: {"yarn:persistence"="container" "yarn:id"="container-1415211406300-0001-01-000008" }}
>>> 2014-11-05 18:30:22,330 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Component operation. Status: IN_PROGRESS
>>> 2014-11-05 18:30:22,523 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Component operation. Status: COMPLETED
>>> 2014-11-05 18:30:22,523 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Requesting applied config for MEMCACHED on container_1415211406300_0001_01_000008.
>>> 2014-11-05 18:30:24,519 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Processing 1 status reports.
>>> 2014-11-05 18:30:24,519 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Status report: ComponentStatus{componentName='MEMCACHED', msg='null', status='null', serviceName='cl2', clusterName='cl2', roleCommand='GET_CONFIG'}
>>> 2014-11-05 18:30:24,519 [2076218862@qtp-1799750533-6] INFO agent.AgentProviderService - Received and processed config for container_1415211406300_0001_01_000008___MEMCACHED
>>> 2014-11-05 18:30:55,583 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - onContainersCompleted([1]
>>> 2014-11-05 18:30:55,584 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - Container Completion for containerID=container_1415211406300_0001_01_000008, state=COMPLETE, exitStatus=0, diagnostics=
>>> 2014-11-05 18:30:55,584 [AMRM Callback Handler Thread] INFO state.AppState - Failed container in role[1] : MEMCACHED
>>> 2014-11-05 18:30:55,585 [AMRM Callback Handler Thread] INFO state.AppState - Current count of failed role[1] MEMCACHED = 4
>>> 2014-11-05 18:30:55,649 [AMRM Callback Handler Thread] INFO state.AppState - Removing node ID container_1415211406300_0001_01_000008
>>> 2014-11-05 18:30:55,649 [AMRM Callback Handler Thread] ERROR appmaster.SliderAppMaster - Role instance RoleInstance{role='MEMCACHED', id='container_1415211406300_0001_01_000008', container=ContainerID=container_1415211406300_0001_01_000008 nodeID=hdfs03:59013 http=hdfs03:8042 priority=1073741825, createTime=1415212209334, startTime=1415212209355, released=false, roleId=1, host=hdfs03, hostURL=http://hdfs03:8042, state=5, exitCode=0, command='python ./infra/agent/slider-agent/agent/main.py --label container_1415211406300_0001_01_000008___MEMCACHED --zk-quorum localhost:2181 --zk-reg-path /registry/users/root/services/org-apache-slider/cl2 > <LOG_DIR>/slider-agent.out 2>&1 ; ', diagnostics='', output=null, environment=[AGENT_WORK_ROOT="$PWD", HADOOP_USER_NAME="root", AGENT_LOG_ROOT="<LOG_DIR>", PYTHONPATH="./infra/agent/slider-agent/", SLIDER_PASSPHRASE="qGnrsDpoLqO9TrXEdpIPQwmTgiUfPlEMj5VAaMmaxAZiS8rS9L", MALLOC_ARENA_MAX="4"]} failed
>>> 2014-11-05 18:30:55,650 [AMRM Callback Handler Thread] INFO agent.AgentProviderService - Removing container specific data for container_1415211406300_0001_01_000008
>>> 2014-11-05 18:30:55,650 [AMRM Callback Handler Thread] INFO agent.AgentProviderService - publishing PublishedConfiguration{description='ComponentInstanceData' entries = 0}
>>> 2014-11-05 18:30:55,650 [AMRM Callback Handler Thread] INFO agent.AgentProviderService - Removing component status for label container_1415211406300_0001_01_000008___MEMCACHED
>>> 2014-11-05 18:30:55,651 [AmExecutor-006] INFO appmaster.SliderAppMaster - Unregistering component container_1415211406300_0001_01_000008
>>> 2014-11-05 18:30:55,662 [AmExecutor-006] INFO state.AppState - Reviewing RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, actual=0, requested=0, releasing=0, failed=4, started=4, startFailed=4, completed=0, failureMessage='Failure container_1415211406300_0001_01_000008 on host hdfs03: '}
>>> 2014-11-05 18:30:55,662 [AmExecutor-006] INFO state.AppState - MEMCACHED: Asking for 1 more nodes(s) for a total of 1
>>> 2014-11-05 18:30:55,663 [AmExecutor-006] INFO state.AppState - Container ask is Capability[<memory:256, vCores:1>]Priority[1073741825] and label = null
>>> 2014-11-05 18:30:57,603 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - onContainersAllocated(1)
>>> 2014-11-05 18:30:57,603 [AMRM Callback Handler Thread] INFO state.AppState - Assigning role MEMCACHED to container container_1415211406300_0001_01_000012, on hdfs03:59013,
>>> 2014-11-05 18:30:57,604 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - Diagnostics: RoleStatus{name='slider-appmaster', key=0, minimum=0, maximum=1, desired=1, actual=1, requested=0, releasing=0, failed=0, started=1, startFailed=0, completed=0, failureMessage=''}
>>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, actual=1, requested=0, releasing=0, failed=4, started=4, startFailed=4, completed=0, failureMessage='Failure container_1415211406300_0001_01_000008 on host hdfs03: '}
>>>
>>> 2014-11-05 18:30:57,606 [RoleLaunchService-007] INFO agent.AgentProviderService - Build launch context for Agent
>>> 2014-11-05 18:30:57,608 [RoleLaunchService-007] INFO agent.AgentProviderService - AGENT_WORK_ROOT set to $PWD
>>> 2014-11-05 18:30:57,608 [RoleLaunchService-007] INFO agent.AgentProviderService - AGENT_LOG_ROOT set to <LOG_DIR>
>>> 2014-11-05 18:30:57,608 [RoleLaunchService-007] INFO agent.AgentProviderService - PYTHONPATH set to ./infra/agent/slider-agent/
>>> 2014-11-05 18:30:57,615 [RoleLaunchService-007] INFO agent.AgentProviderService - Using ./infra/agent/slider-agent/agent/main.py for agent.
>>> 2014-11-05 18:30:57,624 [RoleLaunchService-007] INFO appmaster.RoleLaunchService - Starting container with command: python ./infra/agent/slider-agent/agent/main.py --label container_1415211406300_0001_01_000012___MEMCACHED --zk-quorum localhost:2181 --zk-reg-path /registry/users/root/services/org-apache-slider/cl2 > <LOG_DIR>/slider-agent.out 2>&1 ;
>>> 2014-11-05 18:30:57,627 [AmExecutor-006] WARN appmaster.SliderAppMaster - No delegation tokens obtained and set for launch context
>>> 2014-11-05 18:30:57,629 [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #8] INFO impl.NMClientAsyncImpl - Processing Event EventType: START_CONTAINER for Container container_1415211406300_0001_01_000012
>>> 2014-11-05 18:30:57,629 [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #8] INFO impl.ContainerManagementProtocolProxy - Opening proxy : hdfs03:59013
>>> 2014-11-05 18:30:57,662 [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #8] INFO appmaster.SliderAppMaster - Started Container container_1415211406300_0001_01_000012
>>> 2014-11-05 18:30:57,701 [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #8] INFO appmaster.SliderAppMaster - Deployed instance of role MEMCACHED onto container_1415211406300_0001_01_000012
>>> 2014-11-05 18:30:57,701 [AmExecutor-006] INFO appmaster.SliderAppMaster - Registering component container_1415211406300_0001_01_000012
>>> 2014-11-05 18:30:57,703 [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #9] INFO impl.NMClientAsyncImpl - Processing Event EventType: QUERY_CONTAINER for Container container_1415211406300_0001_01_000012
>>> 2014-11-05 18:30:57,704 [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #9] INFO impl.ContainerManagementProtocolProxy - Opening proxy : hdfs03:59013
>>> 2014-11-05 18:30:57,713
[AmExecutor-006] INFO >> zk.RegistryOperationsService >>> - Bound at >>> >> /users/root/services/org-apache-slider/cl2/components/container-1415211406300-0001-01-000012 >>> : ServiceRecord{description='MEMCACHED'; external endpoints: {}; internal >>> endpoints: {}, attributes: {"yarn:persistence"="container" >>> "yarn:id"="container-1415211406300-0001-01-000012" }} >>> 2014-11-05 18:30:58,250 [2076218862@qtp-1799750533-6] INFO >>> agent.AgentProviderService - Handling registration: responseId=-1 >>> timestamp=1415212258113 >>> label=container_1415211406300_0001_01_000012___MEMCACHED >>> hostname=hdfs03 >>> expectedState=INIT >>> actualState=INIT >>> >>> 2014-11-05 18:30:58,250 [2076218862@qtp-1799750533-6] INFO >>> agent.AgentProviderService - Registration response: >>> RegistrationResponse{response=OK, responseId=0, statusCommands=null} >>> 2014-11-05 18:30:58,617 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersAllocated(1) >>> 2014-11-05 18:30:58,617 [AMRM Callback Handler Thread] INFO >> state.AppState >>> - Discarding surplus container container_1415211406300_0001_01_000013 on >>> hdfs03:59013 >>> 2014-11-05 18:30:58,618 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Diagnostics: >>> RoleStatus{name='slider-appmaster', key=0, minimum=0, maximum=1, >> desired=1, >>> actual=1, requested=0, releasing=0, failed=0, started=1, startFailed=0, >>> completed=0, failureMessage=''} >>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, >>> actual=1, requested=0, releasing=0, failed=4, started=5, startFailed=4, >>> completed=0, failureMessage='Failure >> container_1415211406300_0001_01_000008 >>> on host hdfs03: '} >>> >>> 2014-11-05 18:30:59,646 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersCompleted([1] >>> 2014-11-05 18:30:59,648 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Container Completion for >>> 
containerID=container_1415211406300_0001_01_000013, state=COMPLETE, >>> exitStatus=-100, diagnostics=Container released by application >>> 2014-11-05 18:30:59,649 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersAllocated(1) >>> 2014-11-05 18:30:59,649 [AmExecutor-006] INFO appmaster.SliderAppMaster >> - >>> Unregistering component container_1415211406300_0001_01_000013 >>> 2014-11-05 18:30:59,649 [AMRM Callback Handler Thread] INFO >> state.AppState >>> - Discarding surplus container container_1415211406300_0001_01_000014 on >>> hdfs03:59013 >>> 2014-11-05 18:30:59,649 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Diagnostics: >>> RoleStatus{name='slider-appmaster', key=0, minimum=0, maximum=1, >> desired=1, >>> actual=1, requested=0, releasing=0, failed=0, started=1, startFailed=0, >>> completed=0, failureMessage=''} >>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, >>> actual=1, requested=0, releasing=0, failed=4, started=5, startFailed=4, >>> completed=0, failureMessage='Failure >> container_1415211406300_0001_01_000008 >>> on host hdfs03: '} >>> >>> 2014-11-05 18:30:59,662 [AmExecutor-006] INFO state.AppState - Reviewing >>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, >>> actual=1, requested=0, releasing=0, failed=4, started=5, startFailed=4, >>> completed=0, failureMessage='Failure >> container_1415211406300_0001_01_000008 >>> on host hdfs03: '} >>> 2014-11-05 18:31:00,662 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersCompleted([1] >>> 2014-11-05 18:31:00,664 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Container Completion for >>> containerID=container_1415211406300_0001_01_000014, state=COMPLETE, >>> exitStatus=-100, diagnostics=Container released by application >>> 2014-11-05 18:31:00,665 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersAllocated(1) >>> 2014-11-05 
18:31:00,665 [AmExecutor-006] INFO appmaster.SliderAppMaster >> - >>> Unregistering component container_1415211406300_0001_01_000014 >>> 2014-11-05 18:31:00,665 [AMRM Callback Handler Thread] INFO >> state.AppState >>> - Discarding surplus container container_1415211406300_0001_01_000015 on >>> hdfs03:59013 >>> 2014-11-05 18:31:00,665 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Diagnostics: >>> RoleStatus{name='slider-appmaster', key=0, minimum=0, maximum=1, >> desired=1, >>> actual=1, requested=0, releasing=0, failed=0, started=1, startFailed=0, >>> completed=0, failureMessage=''} >>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, >>> actual=1, requested=0, releasing=0, failed=4, started=5, startFailed=4, >>> completed=0, failureMessage='Failure >> container_1415211406300_0001_01_000008 >>> on host hdfs03: '} >>> >>> 2014-11-05 18:31:00,679 [AmExecutor-006] INFO state.AppState - Reviewing >>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, >>> actual=1, requested=0, releasing=0, failed=4, started=5, startFailed=4, >>> completed=0, failureMessage='Failure >> container_1415211406300_0001_01_000008 >>> on host hdfs03: '} >>> 2014-11-05 18:31:01,676 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersCompleted([1] >>> 2014-11-05 18:31:01,679 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Container Completion for >>> containerID=container_1415211406300_0001_01_000015, state=COMPLETE, >>> exitStatus=-100, diagnostics=Container released by application >>> 2014-11-05 18:31:01,680 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersAllocated(1) >>> 2014-11-05 18:31:01,680 [AmExecutor-006] INFO appmaster.SliderAppMaster >> - >>> Unregistering component container_1415211406300_0001_01_000015 >>> 2014-11-05 18:31:01,680 [AMRM Callback Handler Thread] INFO >> state.AppState >>> - Discarding surplus container 
container_1415211406300_0001_01_000016 on >>> hdfs03:59013 >>> 2014-11-05 18:31:01,681 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Diagnostics: >>> RoleStatus{name='slider-appmaster', key=0, minimum=0, maximum=1, >> desired=1, >>> actual=1, requested=0, releasing=0, failed=0, started=1, startFailed=0, >>> completed=0, failureMessage=''} >>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, >>> actual=1, requested=0, releasing=0, failed=4, started=5, startFailed=4, >>> completed=0, failureMessage='Failure >> container_1415211406300_0001_01_000008 >>> on host hdfs03: '} >>> >>> 2014-11-05 18:31:01,692 [AmExecutor-006] INFO state.AppState - Reviewing >>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, >>> actual=1, requested=0, releasing=0, failed=4, started=5, startFailed=4, >>> completed=0, failureMessage='Failure >> container_1415211406300_0001_01_000008 >>> on host hdfs03: '} >>> 2014-11-05 18:31:02,686 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersCompleted([1] >>> 2014-11-05 18:31:02,687 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Container Completion for >>> containerID=container_1415211406300_0001_01_000016, state=COMPLETE, >>> exitStatus=-100, diagnostics=Container released by application >>> 2014-11-05 18:31:02,688 [AmExecutor-006] INFO appmaster.SliderAppMaster >> - >>> Unregistering component container_1415211406300_0001_01_000016 >>> 2014-11-05 18:31:02,717 [AmExecutor-006] INFO state.AppState - Reviewing >>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, >>> actual=1, requested=0, releasing=0, failed=4, started=5, startFailed=4, >>> completed=0, failureMessage='Failure >> container_1415211406300_0001_01_000008 >>> on host hdfs03: '} >>> 2014-11-05 18:31:08,268 [2076218862@qtp-1799750533-6] INFO >>> agent.AgentProviderService - Installing MEMCACHED on >>> container_1415211406300_0001_01_000012. 
>>> 2014-11-05 18:31:08,630 [2076218862@qtp-1799750533-6] INFO >>> agent.AgentProviderService - Recording allocated port for >>> global.listen_port as 53983{DO_NOT_PROPAGATE} >>> 2014-11-05 18:31:08,630 [2076218862@qtp-1799750533-6] WARN >>> agent.AgentProviderService - Failed to parse 53983{DO_NOT_PROPAGATE}: >>> java.lang.NumberFormatException: For input string: >> "53983{DO_NOT_PROPAGATE}" >>> 2014-11-05 18:31:08,631 [2076218862@qtp-1799750533-6] INFO >>> agent.AgentProviderService - Publishing hdfs03:53983{DO_NOT_PROPAGATE} >> for >>> name host_port and container container_1415211406300_0001_01_000012 >>> 2014-11-05 18:31:08,631 [2076218862@qtp-1799750533-6] INFO >>> agent.AgentProviderService - publishing >>> PublishedConfiguration{description='ComponentInstanceData' entries = 1} >>> 2014-11-05 18:31:08,631 [AmExecutor-006] INFO appmaster.SliderAppMaster >> - >>> Registering component container_1415211406300_0001_01_000012 >>> 2014-11-05 18:31:08,632 [2076218862@qtp-1799750533-6] INFO >>> agent.AgentProviderService - Component operation. Status: COMPLETED >>> 2014-11-05 18:31:08,632 [2076218862@qtp-1799750533-6] INFO >>> agent.AgentProviderService - Updating log and pwd folders for container >>> container_1415211406300_0001_01_000012 >>> 2014-11-05 18:31:08,632 [2076218862@qtp-1799750533-6] INFO >>> agent.AgentProviderService - Updating log and pwd folders for container >>> container_1415211406300_0001_01_000012 >>> 2014-11-05 18:31:08,632 [2076218862@qtp-1799750533-6] INFO >>> agent.AgentProviderService - Starting MEMCACHED on >>> container_1415211406300_0001_01_000012. 
>>> 2014-11-05 18:31:08,642 [AmExecutor-006] INFO >> zk.RegistryOperationsService >>> - Bound at >>> >> /users/root/services/org-apache-slider/cl2/components/container-1415211406300-0001-01-000012 >>> : ServiceRecord{description='MEMCACHED'; external endpoints: {}; internal >>> endpoints: {}, attributes: {"yarn:persistence"="container" >>> "yarn:id"="container-1415211406300-0001-01-000012" }} >>> 2014-11-05 18:31:10,600 [2076218862@qtp-1799750533-6] INFO >>> agent.AgentProviderService - Component operation. Status: IN_PROGRESS >>> 2014-11-05 18:31:10,781 [2076218862@qtp-1799750533-6] INFO >>> agent.AgentProviderService - Component operation. Status: COMPLETED >>> 2014-11-05 18:31:10,781 [2076218862@qtp-1799750533-6] INFO >>> agent.AgentProviderService - Requesting applied config for MEMCACHED on >>> container_1415211406300_0001_01_000012. >>> 2014-11-05 18:31:12,778 [2076218862@qtp-1799750533-6] INFO >>> agent.AgentProviderService - Processing 1 status reports. >>> 2014-11-05 18:31:12,778 [2076218862@qtp-1799750533-6] INFO >>> agent.AgentProviderService - Status report: >>> ComponentStatus{componentName='MEMCACHED', msg='null', status='null', >>> serviceName='cl2', clusterName='cl2', roleCommand='GET_CONFIG'} >>> 2014-11-05 18:31:12,778 [2076218862@qtp-1799750533-6] INFO >>> agent.AgentProviderService - Received and processed config for >>> container_1415211406300_0001_01_000012___MEMCACHED >>> 2014-11-05 18:31:43,875 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersCompleted([1] >>> 2014-11-05 18:31:43,876 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Container Completion for >>> containerID=container_1415211406300_0001_01_000012, state=COMPLETE, >>> exitStatus=0, diagnostics= >>> 2014-11-05 18:31:43,889 [AMRM Callback Handler Thread] INFO >> state.AppState >>> - Failed container in role[1] : MEMCACHED >>> 2014-11-05 18:31:43,890 [AMRM Callback Handler Thread] INFO >> state.AppState >>> - Current count of failed 
role[1] MEMCACHED = 5 >>> 2014-11-05 18:31:43,959 [AMRM Callback Handler Thread] INFO >> state.AppState >>> - Removing node ID container_1415211406300_0001_01_000012 >>> 2014-11-05 18:31:43,960 [AMRM Callback Handler Thread] ERROR >>> appmaster.SliderAppMaster - Role instance RoleInstance{role='MEMCACHED', >>> id='container_1415211406300_0001_01_000012', >>> container=ContainerID=container_1415211406300_0001_01_000012 >>> nodeID=hdfs03:59013 http=hdfs03:8042 priority=1073741825, >>> createTime=1415212257628, startTime=1415212257662, released=false, >>> roleId=1, host=hdfs03, hostURL=http://hdfs03:8042, state=5, exitCode=0, >>> command='python ./infra/agent/slider-agent/agent/main.py --label >>> container_1415211406300_0001_01_000012___MEMCACHED --zk-quorum >>> localhost:2181 --zk-reg-path >>> /registry/users/root/services/org-apache-slider/cl2 > >>> <LOG_DIR>/slider-agent.out 2>&1 ; ', diagnostics='', output=null, >>> environment=[AGENT_WORK_ROOT="$PWD", HADOOP_USER_NAME="root", >>> AGENT_LOG_ROOT="<LOG_DIR>", PYTHONPATH="./infra/agent/slider-agent/", >>> SLIDER_PASSPHRASE="qGnrsDpoLqO9TrXEdpIPQwmTgiUfPlEMj5VAaMmaxAZiS8rS9L", >>> MALLOC_ARENA_MAX="4"]} failed >>> 2014-11-05 18:31:43,960 [AMRM Callback Handler Thread] INFO >>> agent.AgentProviderService - Removing container specific data for >>> container_1415211406300_0001_01_000012 >>> 2014-11-05 18:31:43,960 [AMRM Callback Handler Thread] INFO >>> agent.AgentProviderService - publishing >>> PublishedConfiguration{description='ComponentInstanceData' entries = 0} >>> 2014-11-05 18:31:43,961 [AMRM Callback Handler Thread] INFO >>> agent.AgentProviderService - Removing component status for label >>> container_1415211406300_0001_01_000012___MEMCACHED >>> 2014-11-05 18:31:43,961 [AmExecutor-006] INFO appmaster.SliderAppMaster >> - >>> Unregistering component container_1415211406300_0001_01_000012 >>> 2014-11-05 18:31:43,975 [AmExecutor-006] INFO state.AppState - Reviewing >>> RoleStatus{name='MEMCACHED', key=1, 
minimum=0, maximum=1, desired=1, >>> actual=0, requested=0, releasing=0, failed=5, started=5, startFailed=5, >>> completed=0, failureMessage='Failure >> container_1415211406300_0001_01_000012 >>> on host hdfs03: '} >>> 2014-11-05 18:31:43,975 [AmExecutor-006] INFO state.AppState - >> MEMCACHED: >>> Asking for 1 more nodes(s) for a total of 1 >>> 2014-11-05 18:31:43,975 [AmExecutor-006] INFO state.AppState - Container >>> ask is Capability[<memory:256, vCores:1>]Priority[1073741825] and label = >>> null >>> 2014-11-05 18:31:45,892 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersAllocated(1) >>> 2014-11-05 18:31:45,892 [AMRM Callback Handler Thread] INFO >> state.AppState >>> - Assigning role MEMCACHED to container >>> container_1415211406300_0001_01_000017, on hdfs03:59013, >>> 2014-11-05 18:31:45,892 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Diagnostics: >>> RoleStatus{name='slider-appmaster', key=0, minimum=0, maximum=1, >> desired=1, >>> actual=1, requested=0, releasing=0, failed=0, started=1, startFailed=0, >>> completed=0, failureMessage=''} >>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, >>> actual=1, requested=0, releasing=0, failed=5, started=5, startFailed=5, >>> completed=0, failureMessage='Failure >> container_1415211406300_0001_01_000012 >>> on host hdfs03: '} >>> >>> 2014-11-05 18:31:45,895 [RoleLaunchService-007] INFO >>> agent.AgentProviderService - Build launch context for Agent >>> 2014-11-05 18:31:45,896 [RoleLaunchService-007] INFO >>> agent.AgentProviderService - AGENT_WORK_ROOT set to $PWD >>> 2014-11-05 18:31:45,896 [RoleLaunchService-007] INFO >>> agent.AgentProviderService - AGENT_LOG_ROOT set to <LOG_DIR> >>> 2014-11-05 18:31:45,897 [RoleLaunchService-007] INFO >>> agent.AgentProviderService - PYTHONPATH set to >> ./infra/agent/slider-agent/ >>> 2014-11-05 18:31:45,903 [RoleLaunchService-007] INFO >>> agent.AgentProviderService - Using >>> 
./infra/agent/slider-agent/agent/main.py for agent. >>> 2014-11-05 18:31:45,910 [RoleLaunchService-007] INFO >>> appmaster.RoleLaunchService - Starting container with command: python >>> ./infra/agent/slider-agent/agent/main.py --label >>> container_1415211406300_0001_01_000017___MEMCACHED --zk-quorum >>> localhost:2181 --zk-reg-path >>> /registry/users/root/services/org-apache-slider/cl2 > >>> <LOG_DIR>/slider-agent.out 2>&1 ; >>> 2014-11-05 18:31:45,912 [AmExecutor-006] WARN appmaster.SliderAppMaster >> - >>> No delegation tokens obtained and set for launch context >>> 2014-11-05 18:31:45,913 >>> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO >>> impl.NMClientAsyncImpl - Processing Event EventType: START_CONTAINER for >>> Container container_1415211406300_0001_01_000017 >>> 2014-11-05 18:31:45,913 >>> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO >>> impl.ContainerManagementProtocolProxy - Opening proxy : hdfs03:59013 >>> 2014-11-05 18:31:45,942 >>> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO >>> appmaster.SliderAppMaster - Started Container >>> container_1415211406300_0001_01_000017 >>> 2014-11-05 18:31:45,994 >>> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO >>> appmaster.SliderAppMaster - Deployed instance of role MEMCACHED onto >>> container_1415211406300_0001_01_000017 >>> 2014-11-05 18:31:45,994 [AmExecutor-006] INFO appmaster.SliderAppMaster >> - >>> Registering component container_1415211406300_0001_01_000017 >>> 2014-11-05 18:31:45,995 >>> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #1] INFO >>> impl.NMClientAsyncImpl - Processing Event EventType: QUERY_CONTAINER for >>> Container container_1415211406300_0001_01_000017 >>> 2014-11-05 18:31:45,995 >>> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #1] INFO >>> impl.ContainerManagementProtocolProxy - Opening proxy : hdfs03:59013 >>> 2014-11-05 18:31:46,008 
[AmExecutor-006] INFO >> zk.RegistryOperationsService >>> - Bound at >>> >> /users/root/services/org-apache-slider/cl2/components/container-1415211406300-0001-01-000017 >>> : ServiceRecord{description='MEMCACHED'; external endpoints: {}; internal >>> endpoints: {}, attributes: {"yarn:persistence"="container" >>> "yarn:id"="container-1415211406300-0001-01-000017" }} >>> 2014-11-05 18:31:46,531 [2035700975@qtp-1799750533-7] INFO >>> agent.AgentProviderService - Handling registration: responseId=-1 >>> timestamp=1415212306377 >>> label=container_1415211406300_0001_01_000017___MEMCACHED >>> hostname=hdfs03 >>> expectedState=INIT >>> actualState=INIT >>> >>> 2014-11-05 18:31:46,531 [2035700975@qtp-1799750533-7] INFO >>> agent.AgentProviderService - Registration response: >>> RegistrationResponse{response=OK, responseId=0, statusCommands=null} >>> 2014-11-05 18:31:46,907 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersAllocated(1) >>> 2014-11-05 18:31:46,908 [AMRM Callback Handler Thread] INFO >> state.AppState >>> - Discarding surplus container container_1415211406300_0001_01_000018 on >>> hdfs03:59013 >>> 2014-11-05 18:31:46,908 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Diagnostics: >>> RoleStatus{name='slider-appmaster', key=0, minimum=0, maximum=1, >> desired=1, >>> actual=1, requested=0, releasing=0, failed=0, started=1, startFailed=0, >>> completed=0, failureMessage=''} >>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, >>> actual=1, requested=0, releasing=0, failed=5, started=6, startFailed=5, >>> completed=0, failureMessage='Failure >> container_1415211406300_0001_01_000012 >>> on host hdfs03: '} >>> >>> 2014-11-05 18:31:47,921 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersCompleted([1] >>> 2014-11-05 18:31:47,922 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Container Completion for >>> 
containerID=container_1415211406300_0001_01_000018, state=COMPLETE, >>> exitStatus=-100, diagnostics=Container released by application >>> 2014-11-05 18:31:47,922 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersAllocated(1) >>> 2014-11-05 18:31:47,922 [AmExecutor-006] INFO appmaster.SliderAppMaster >> - >>> Unregistering component container_1415211406300_0001_01_000018 >>> 2014-11-05 18:31:47,923 [AMRM Callback Handler Thread] INFO >> state.AppState >>> - Discarding surplus container container_1415211406300_0001_01_000019 on >>> hdfs03:59013 >>> 2014-11-05 18:31:47,923 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Diagnostics: >>> RoleStatus{name='slider-appmaster', key=0, minimum=0, maximum=1, >> desired=1, >>> actual=1, requested=0, releasing=0, failed=0, started=1, startFailed=0, >>> completed=0, failureMessage=''} >>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, >>> actual=1, requested=0, releasing=0, failed=5, started=6, startFailed=5, >>> completed=0, failureMessage='Failure >> container_1415211406300_0001_01_000012 >>> on host hdfs03: '} >>> >>> 2014-11-05 18:31:47,933 [AmExecutor-006] INFO state.AppState - Reviewing >>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, >>> actual=1, requested=0, releasing=0, failed=5, started=6, startFailed=5, >>> completed=0, failureMessage='Failure >> container_1415211406300_0001_01_000012 >>> on host hdfs03: '} >>> 2014-11-05 18:31:48,934 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersCompleted([1] >>> 2014-11-05 18:31:48,939 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Container Completion for >>> containerID=container_1415211406300_0001_01_000019, state=COMPLETE, >>> exitStatus=-100, diagnostics=Container released by application >>> 2014-11-05 18:31:48,940 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersAllocated(1) >>> 2014-11-05 
18:31:48,940 [AmExecutor-006] INFO appmaster.SliderAppMaster >> - >>> Unregistering component container_1415211406300_0001_01_000019 >>> 2014-11-05 18:31:48,940 [AMRM Callback Handler Thread] INFO >> state.AppState >>> - Discarding surplus container container_1415211406300_0001_01_000020 on >>> hdfs03:59013 >>> 2014-11-05 18:31:48,940 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Diagnostics: >>> RoleStatus{name='slider-appmaster', key=0, minimum=0, maximum=1, >> desired=1, >>> actual=1, requested=0, releasing=0, failed=0, started=1, startFailed=0, >>> completed=0, failureMessage=''} >>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, >>> actual=1, requested=0, releasing=0, failed=5, started=6, startFailed=5, >>> completed=0, failureMessage='Failure >> container_1415211406300_0001_01_000012 >>> on host hdfs03: '} >>> >>> 2014-11-05 18:31:48,951 [AmExecutor-006] INFO state.AppState - Reviewing >>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, >>> actual=1, requested=0, releasing=0, failed=5, started=6, startFailed=5, >>> completed=0, failureMessage='Failure >> container_1415211406300_0001_01_000012 >>> on host hdfs03: '} >>> 2014-11-05 18:31:49,948 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersCompleted([1] >>> 2014-11-05 18:31:49,950 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Container Completion for >>> containerID=container_1415211406300_0001_01_000020, state=COMPLETE, >>> exitStatus=-100, diagnostics=Container released by application >>> 2014-11-05 18:31:49,950 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersAllocated(1) >>> 2014-11-05 18:31:49,950 [AmExecutor-006] INFO appmaster.SliderAppMaster >> - >>> Unregistering component container_1415211406300_0001_01_000020 >>> 2014-11-05 18:31:49,950 [AMRM Callback Handler Thread] INFO >> state.AppState >>> - Discarding surplus container 
container_1415211406300_0001_01_000021 on >>> hdfs03:59013 >>> 2014-11-05 18:31:49,951 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Diagnostics: >>> RoleStatus{name='slider-appmaster', key=0, minimum=0, maximum=1, >> desired=1, >>> actual=1, requested=0, releasing=0, failed=0, started=1, startFailed=0, >>> completed=0, failureMessage=''} >>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, >>> actual=1, requested=0, releasing=0, failed=5, started=6, startFailed=5, >>> completed=0, failureMessage='Failure >> container_1415211406300_0001_01_000012 >>> on host hdfs03: '} >>> >>> 2014-11-05 18:31:49,963 [AmExecutor-006] INFO state.AppState - Reviewing >>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, >>> actual=1, requested=0, releasing=0, failed=5, started=6, startFailed=5, >>> completed=0, failureMessage='Failure >> container_1415211406300_0001_01_000012 >>> on host hdfs03: '} >>> 2014-11-05 18:31:50,965 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersCompleted([1] >>> 2014-11-05 18:31:50,967 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Container Completion for >>> containerID=container_1415211406300_0001_01_000021, state=COMPLETE, >>> exitStatus=-100, diagnostics=Container released by application >>> 2014-11-05 18:31:50,967 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersAllocated(1) >>> 2014-11-05 18:31:50,967 [AmExecutor-006] INFO appmaster.SliderAppMaster >> - >>> Unregistering component container_1415211406300_0001_01_000021 >>> 2014-11-05 18:31:50,967 [AMRM Callback Handler Thread] INFO >> state.AppState >>> - Discarding surplus container container_1415211406300_0001_01_000022 on >>> hdfs03:59013 >>> 2014-11-05 18:31:50,968 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Diagnostics: >>> RoleStatus{name='slider-appmaster', key=0, minimum=0, maximum=1, >> desired=1, >>> actual=1, requested=0, 
releasing=0, failed=0, started=1, startFailed=0, >>> completed=0, failureMessage=''} >>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, >>> actual=1, requested=0, releasing=0, failed=5, started=6, startFailed=5, >>> completed=0, failureMessage='Failure >> container_1415211406300_0001_01_000012 >>> on host hdfs03: '} >>> >>> 2014-11-05 18:31:50,979 [AmExecutor-006] INFO state.AppState - Reviewing >>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, >>> actual=1, requested=0, releasing=0, failed=5, started=6, startFailed=5, >>> completed=0, failureMessage='Failure >> container_1415211406300_0001_01_000012 >>> on host hdfs03: '} >>> 2014-11-05 18:31:51,976 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - onContainersCompleted([1] >>> 2014-11-05 18:31:51,977 [AMRM Callback Handler Thread] INFO >>> appmaster.SliderAppMaster - Container Completion for >>> containerID=container_1415211406300_0001_01_000022, state=COMPLETE, >>> exitStatus=-100, diagnostics=Container released by application >>> 2014-11-05 18:31:51,978 [AmExecutor-006] INFO appmaster.SliderAppMaster >> - >>> Unregistering component container_1415211406300_0001_01_000022 >>> 2014-11-05 18:31:51,990 [AmExecutor-006] INFO state.AppState - Reviewing >>> RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, >>> actual=1, requested=0, releasing=0, failed=5, started=6, startFailed=5, >>> completed=0, failureMessage='Failure >> container_1415211406300_0001_01_000012 >>> on host hdfs03: '} >>> 2014-11-05 18:31:56,552 [2035700975@qtp-1799750533-7] INFO >>> agent.AgentProviderService - Installing MEMCACHED on >>> container_1415211406300_0001_01_000017. >>> 2014-11-05 18:31:56,692 [2035700975@qtp-1799750533-7] INFO >>> agent.AgentProviderService - Component operation. 
Status: IN_PROGRESS >>> 2014-11-05 18:31:56,879 [2035700975@qtp-1799750533-7] INFO >>> agent.AgentProviderService - Recording allocated port for >>> global.listen_port as 35825{DO_NOT_PROPAGATE} >>> 2014-11-05 18:31:56,879 [2035700975@qtp-1799750533-7] WARN >>> agent.AgentProviderService - Failed to parse 35825{DO_NOT_PROPAGATE}: >>> java.lang.NumberFormatException: For input string: >> "35825{DO_NOT_PROPAGATE}" >>> 2014-11-05 18:31:56,880 [2035700975@qtp-1799750533-7] INFO >>> agent.AgentProviderService - Publishing hdfs03:35825{DO_NOT_PROPAGATE} >> for >>> name host_port and container container_1415211406300_0001_01_000017 >>> 2014-11-05 18:31:56,880 [2035700975@qtp-1799750533-7] INFO >>> agent.AgentProviderService - publishing >>> PublishedConfiguration{description='ComponentInstanceData' entries = 1} >>> 2014-11-05 18:31:56,881 [2035700975@qtp-1799750533-7] INFO >>> agent.AgentProviderService - Component operation. Status: COMPLETED >>> 2014-11-05 18:31:56,881 [AmExecutor-006] INFO appmaster.SliderAppMaster >> - >>> Registering component container_1415211406300_0001_01_000017 >>> 2014-11-05 18:31:56,881 [2035700975@qtp-1799750533-7] INFO >>> agent.AgentProviderService - Updating log and pwd folders for container >>> container_1415211406300_0001_01_000017 >>> 2014-11-05 18:31:56,881 [2035700975@qtp-1799750533-7] INFO >>> agent.AgentProviderService - Updating log and pwd folders for container >>> container_1415211406300_0001_01_000017 >>> 2014-11-05 18:31:56,882 [2035700975@qtp-1799750533-7] INFO >>> agent.AgentProviderService - Starting MEMCACHED on >>> container_1415211406300_0001_01_000017. 
>>> 2014-11-05 18:31:56,896 [AmExecutor-006] INFO zk.RegistryOperationsService - Bound at /users/root/services/org-apache-slider/cl2/components/container-1415211406300-0001-01-000017 : ServiceRecord{description='MEMCACHED'; external endpoints: {}; internal endpoints: {}, attributes: {"yarn:persistence"="container" "yarn:id"="container-1415211406300-0001-01-000017" }}
>>> 2014-11-05 18:31:58,889 [2035700975@qtp-1799750533-7] INFO agent.AgentProviderService - Component operation. Status: IN_PROGRESS
>>> 2014-11-05 18:31:59,074 [2035700975@qtp-1799750533-7] INFO agent.AgentProviderService - Component operation. Status: COMPLETED
>>> 2014-11-05 18:31:59,074 [2035700975@qtp-1799750533-7] INFO agent.AgentProviderService - Requesting applied config for MEMCACHED on container_1415211406300_0001_01_000017.
>>> 2014-11-05 18:32:01,069 [2035700975@qtp-1799750533-7] INFO agent.AgentProviderService - Processing 1 status reports.
>>> 2014-11-05 18:32:01,069 [2035700975@qtp-1799750533-7] INFO agent.AgentProviderService - Status report: ComponentStatus{componentName='MEMCACHED', msg='null', status='null', serviceName='cl2', clusterName='cl2', roleCommand='GET_CONFIG'}
>>> 2014-11-05 18:32:01,070 [2035700975@qtp-1799750533-7] INFO agent.AgentProviderService - Received and processed config for container_1415211406300_0001_01_000017___MEMCACHED
>>> 2014-11-05 18:32:32,165 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - onContainersCompleted([1]
>>> 2014-11-05 18:32:32,166 [AMRM Callback Handler Thread] INFO appmaster.SliderAppMaster - Container Completion for containerID=container_1415211406300_0001_01_000017, state=COMPLETE, exitStatus=0, diagnostics=
>>> 2014-11-05 18:32:32,166 [AMRM Callback Handler Thread] INFO state.AppState - Failed container in role[1] : MEMCACHED
>>> 2014-11-05 18:32:32,166 [AMRM Callback Handler Thread] INFO state.AppState - Current count of failed role[1] MEMCACHED = 6
>>> 2014-11-05 18:32:32,229 [AMRM Callback Handler Thread] INFO state.AppState - Removing node ID container_1415211406300_0001_01_000017
>>> 2014-11-05 18:32:32,230 [AMRM Callback Handler Thread] ERROR appmaster.SliderAppMaster - Role instance RoleInstance{role='MEMCACHED', id='container_1415211406300_0001_01_000017', container=ContainerID=container_1415211406300_0001_01_000017 nodeID=hdfs03:59013 http=hdfs03:8042 priority=1073741825, createTime=1415212305913, startTime=1415212305942, released=false, roleId=1, host=hdfs03, hostURL=http://hdfs03:8042, state=5, exitCode=0, command='python ./infra/agent/slider-agent/agent/main.py --label container_1415211406300_0001_01_000017___MEMCACHED --zk-quorum localhost:2181 --zk-reg-path /registry/users/root/services/org-apache-slider/cl2 > <LOG_DIR>/slider-agent.out 2>&1 ; ', diagnostics='', output=null, environment=[AGENT_WORK_ROOT="$PWD", HADOOP_USER_NAME="root", AGENT_LOG_ROOT="<LOG_DIR>", PYTHONPATH="./infra/agent/slider-agent/", SLIDER_PASSPHRASE="qGnrsDpoLqO9TrXEdpIPQwmTgiUfPlEMj5VAaMmaxAZiS8rS9L", MALLOC_ARENA_MAX="4"]} failed
>>> 2014-11-05 18:32:32,230 [AMRM Callback Handler Thread] INFO agent.AgentProviderService - Removing container specific data for container_1415211406300_0001_01_000017
>>> 2014-11-05 18:32:32,230 [AMRM Callback Handler Thread] INFO agent.AgentProviderService - publishing PublishedConfiguration{description='ComponentInstanceData' entries = 0}
>>> 2014-11-05 18:32:32,230 [AMRM Callback Handler Thread] INFO agent.AgentProviderService - Removing component status for label container_1415211406300_0001_01_000017___MEMCACHED
>>> 2014-11-05 18:32:32,231 [AmExecutor-006] INFO appmaster.SliderAppMaster - Unregistering component container_1415211406300_0001_01_000017
>>> 2014-11-05 18:32:32,243 [AmExecutor-006] INFO state.AppState - Reviewing RoleStatus{name='MEMCACHED', key=1, minimum=0, maximum=1, desired=1, actual=0, requested=0, releasing=0, failed=6, started=6, startFailed=6, completed=0, failureMessage='Failure container_1415211406300_0001_01_000017 on host hdfs03: '}
>>> 2014-11-05 18:32:32,247 [AmExecutor-006] ERROR appmaster.SliderAppMaster - Cluster teardown triggered
>>> org.apache.slider.core.exceptions.TriggerClusterTeardownException: Unstable Application Instance : - failed with component MEMCACHED failing 6 times (6 in startup); threshold is 5 - last failure: Failure container_1415211406300_0001_01_000017 on host hdfs03:
>>> org.apache.slider.core.exceptions.TriggerClusterTeardownException: Unstable Application Instance : - failed with component MEMCACHED failing 6 times (6 in startup); threshold is 5 - last failure: Failure container_1415211406300_0001_01_000017 on host hdfs03:
>>> at org.apache.slider.server.appmaster.state.AppState.checkFailureThreshold(AppState.java:1593)
>>> at org.apache.slider.server.appmaster.state.AppState.reviewOneRole(AppState.java:1655)
>>> at org.apache.slider.server.appmaster.state.AppState.reviewRequestAndReleaseNodes(AppState.java:1572)
>>> at org.apache.slider.server.appmaster.SliderAppMaster.executeNodeReview(SliderAppMaster.java:1578)
>>> at org.apache.slider.server.appmaster.SliderAppMaster.handleReviewAndFlexApplicationSize(SliderAppMaster.java:1564)
>>> at org.apache.slider.server.appmaster.actions.ReviewAndFlexApplicationSize.execute(ReviewAndFlexApplicationSize.java:41)
>>> at org.apache.slider.server.appmaster.actions.QueueExecutor.run(QueueExecutor.java:73)
>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> at java.lang.Thread.run(Thread.java:745)
>>> 2014-11-05 18:32:32,249 [AmExecutor-006] INFO appmaster.SliderAppMaster - SliderAppMasterApi.stopCluster: Unstable Application Instance : - failed with component MEMCACHED failing 6 times (6 in startup); threshold is 5 - last failure: Failure container_1415211406300_0001_01_000017 on host hdfs03:
>>> 2014-11-05 18:32:32,250 [main] INFO appmaster.SliderAppMaster - Triggering shutdown of the AM: stop: exit code = 72, FAILED: Unstable Application Instance : - failed with component MEMCACHED failing 6 times (6 in startup); threshold is 5 - last failure: Failure container_1415211406300_0001_01_000017 on host hdfs03: ;
>>> 2014-11-05 18:32:32,250 [main] INFO appmaster.SliderAppMaster - Process has exited with exit code 0 mapped to 0 -ignoring
>>> 2014-11-05 18:32:32,251 [main] INFO workflow.WorkflowCompositeService - Child service completed Service RoleLaunchService in state RoleLaunchService: STOPPED
>>> 2014-11-05 18:32:32,251 [main] INFO state.AppState - Releasing 0 containers
>>> 2014-11-05 18:32:32,252 [main] INFO appmaster.SliderAppMaster - Application completed. Signalling finish to RM
>>> 2014-11-05 18:32:32,252 [main] INFO appmaster.SliderAppMaster - Unregistering AM status=FAILED message=Unstable Application Instance : - failed with component MEMCACHED failing 6 times (6 in startup); threshold is 5 - last failure: Failure container_1415211406300_0001_01_000017 on host hdfs03:
>>> 2014-11-05 18:32:32,281 [main] INFO impl.AMRMClientImpl - Waiting for application to be successfully unregistered.
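The teardown above is a simple counting rule: once a component's failure count exceeds the configured threshold (here 5), the AM declares the application instance unstable and stops it. A minimal sketch of that decision, with the message format approximated from the log (Slider's real logic lives in AppState.checkFailureThreshold; names here are hypothetical):

```python
class TriggerClusterTeardownError(RuntimeError):
    """Raised when a component has failed more often than the threshold allows."""

def check_failure_threshold(component, failed, start_failed, threshold=5):
    # Mirrors the log: MEMCACHED failed 6 times (6 in startup) against a
    # threshold of 5, so the whole application instance is torn down.
    if failed > threshold:
        raise TriggerClusterTeardownError(
            "Unstable Application Instance : - failed with component %s "
            "failing %d times (%d in startup); threshold is %d"
            % (component, failed, start_failed, threshold))
```

In other words, the sixth container exit is what pushes the count over the threshold and converts an ordinary container failure into a whole-cluster FAILED state.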
>>> 2014-11-05 18:32:32,386 [main] INFO appmaster.SliderAppMaster - Exiting AM; final exit code = 0
>>> 2014-11-05 18:32:32,389 [main] INFO util.ExitUtil - Exiting with status 0
>>> 2014-11-05 18:32:32,395 [Shutdown] INFO mortbay.log - Shutdown hook executing
>>> 2014-11-05 18:32:32,400 [Shutdown] INFO mortbay.log - Stopped [email protected]:59454
>>> 2014-11-05 18:32:32,406 [Shutdown] INFO mortbay.log - Stopped [email protected]:51682
>>> 2014-11-05 18:32:32,413 [Thread-1] INFO mortbay.log - Stopped HttpServer2$[email protected]:0
>>> 2014-11-05 18:32:32,512 [Shutdown] INFO mortbay.log - Shutdown hook complete
>>> 2014-11-05 18:32:32,536 [Thread-1] INFO ipc.Server - Stopping server on 45997
>>> 2014-11-05 18:32:32,538 [IPC Server listener on 45997] INFO ipc.Server - Stopping IPC Server listener on 45997
>>> 2014-11-05 18:32:32,548 [IPC Server Responder] INFO ipc.Server - Stopping IPC Server Responder
>>> 2014-11-05 18:32:32,551 [Thread-1] INFO impl.ContainerManagementProtocolProxy - Opening proxy : hdfs03:59013
>>> 2014-11-05 18:32:32,609 [Thread-1] INFO impl.ContainerManagementProtocolProxy - Opening proxy : hdfs03:59013
>>> 2014-11-05 18:32:32,637 [Thread-1] INFO impl.ContainerManagementProtocolProxy - Opening proxy : hdfs03:59013
>>> 2014-11-05 18:32:32,665 [Thread-1] INFO impl.ContainerManagementProtocolProxy - Opening proxy : hdfs03:59013
>>> 2014-11-05 18:32:32,684 [Thread-1] INFO impl.ContainerManagementProtocolProxy - Opening proxy : hdfs03:59013
>>> 2014-11-05 18:32:32,706 [Thread-1] INFO impl.ContainerManagementProtocolProxy - Opening proxy : hdfs03:59013
>>> 2014-11-05 18:32:32,732 [AMRM Callback Handler Thread] INFO impl.AMRMClientAsyncImpl - Interrupted while waiting for queue
>>> java.lang.InterruptedException
>>> at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
>>> at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2052)
>>> at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>> at org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$CallbackHandlerThread.run(AMRMClientAsyncImpl.java:274)
>>> 2014-11-05 18:32:32,734 [AmExecutor-005] INFO actions.QueueService - QueueService processor terminated
>>> 2014-11-05 18:32:32,734 [AmExecutor-006] WARN actions.ActionStopQueue - STOP
>>> 2014-11-05 18:32:32,734 [AmExecutor-006] INFO actions.QueueExecutor - Queue Executor run() stopped
>>>
>>> On Wed, Nov 5, 2014 at 1:35 PM, Pushkar Raste <[email protected]> wrote:
>>>
>>>> I tried to deploy jmemcached using the latest slider built from the dev branch. I see the following error:
>>>>
>>>> 2014-11-05 18:28:22,804 [AMRM Callback Handler Thread] ERROR appmaster.SliderAppMaster - Role instance RoleInstance{role='MEMCACHED', id='container_1415211406300_0001_01_000002', container=ContainerID=container_1415211406300_0001_01_000002 nodeID=hdfs03:59013 http=hdfs03:8042 priority=1073741825, createTime=1415212054511, startTime=1415212054606, released=false, roleId=1, host=hdfs03, hostURL=http://hdfs03:8042, state=5, exitCode=0, command='python ./infra/agent/slider-agent/agent/main.py --label container_1415211406300_0001_01_000002___MEMCACHED --zk-quorum localhost:2181 --zk-reg-path /registry/users/root/services/org-apache-slider/cl2 > <LOG_DIR>/slider-agent.out 2>&1 ; ', diagnostics='', output=null, environment=[AGENT_WORK_ROOT="$PWD", HADOOP_USER_NAME="root", AGENT_LOG_ROOT="<LOG_DIR>", PYTHONPATH="./infra/agent/slider-agent/", SLIDER_PASSPHRASE="qGnrsDpoLqO9TrXEdpIPQwmTgiUfPlEMj5VAaMmaxAZiS8rS9L", MALLOC_ARENA_MAX="4"]} failed
>>
>> --
>> CONFIDENTIALITY NOTICE
>> NOTICE: This message is intended for the use of the individual or entity to which it is addressed and may contain information that is confidential, privileged and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient, you are hereby notified that any printing, copying, dissemination, distribution, disclosure or forwarding of this communication is strictly prohibited. If you have received this communication in error, please contact the sender immediately and delete it from your system. Thank You.
