Found the cause of the problem: the OpenJDK 8 JVM!! All the tests completed
successfully when Java 7 was installed, on both x86 and POWER...
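
For anyone hitting the same thing, a quick sanity check of which JVM is active before running the Spark-on-YARN tests (just a sketch; it assumes a POSIX shell and that `java`, if installed, is on the PATH):

```shell
# Report which java binary is on PATH and its version banner.
# The tests above passed with Java 7 and failed with OpenJDK 8,
# so the active JVM is the first thing to verify.
java_path="$(command -v java || echo not-found)"
echo "java binary: ${java_path}"
if [ "${java_path}" != "not-found" ]; then
    # java prints its version banner on stderr, not stdout
    java -version 2>&1 | head -n 1
fi
```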

On Thu, Feb 18, 2016 at 1:59 PM, MrAsanjar . <[email protected]> wrote:

>  Evans, we couldn't use the Bigtop Provisioner since the ppc artifacts are
> not in the repository yet. That is why we had to make the script. Thanks
>
>
> On Thu, Feb 18, 2016 at 1:40 PM, Konstantin Boudnik <[email protected]>
> wrote:
>
>> On Fri, Feb 19, 2016 at 02:07AM, Evans Ye wrote:
>> > Have you tried Bigtop Provisioner?
>> >
>> > https://cwiki.apache.org/confluence/display/BIGTOP/Bigtop+Provisioner+User+Guide
>> >
>> > -----
>> > $ cd bigtop/bigtop-deploy/vm/vagrant-puppet-vm
>> > $ cat vagrantconfig.yaml  # You just need to update the repo and add spark
>> > into the component list
>> > memory_size: 4096
>> > number_cpus: 1
>> > box: "puppetlabs/centos-7.0-64-nocm"
>> > repo: "http://bigtop-repos.s3.amazonaws.com/releases/1.1.0/centos/7/x86_64"
>> > num_instances: 1
>> > distro: centos
>> > components: [hadoop, yarn, spark]
>> > enable_local_repo: false
>> > run_smoke_tests: false
>> > smoke_test_components: [mapreduce, pig]
>> > jdk: "java-1.7.0-openjdk-devel.x86_64"
>> >
>> > $ ./docker-hadoop.sh -c 1   # wait for 5mins
>> > $ vagrant ssh bigtop1
>> > $ cd /usr/lib/spark
>> > $ spark-submit --class org.apache.spark.examples.SparkPi --master
>> > yarn-client /usr/lib/spark/lib/spark-examples-1.5.1-hadoop2.7.1.jar 10
>> > ...
>> > Pi is roughly 3.144728
>>
>> Good enough for 1.1.0 release ;)
>>
>> But seriously: we have a really good tool to provision clusters quickly and
>> painlessly, with Puppet guaranteeing consistency.
>>
>> Cos
>>
>> > -----
>> >
>> > Is this what you want?
>> > I can run this either on Docker or on a CentOS 7 VM.
>> > Sorry I don't have PPC machine to test.
>> >
>> >
>> > 2016-02-18 11:46 GMT+08:00 MrAsanjar . <[email protected]>:
>> >
>> > > I have built a single-node Hadoop/Spark sandbox based on the latest
>> > > Apache Bigtop 1.1.0 build. Spark in standalone mode + HDFS functions
>> > > perfectly; however, it fails if yarn-client/yarn-cluster mode is used as
>> > > follows:
>> > >
>> > > *>>spark-submit --class org.apache.spark.examples.SparkPi --master
>> > > yarn-client /usr/lib/spark/lib/spark-examples-1.5.1-hadoop2.7.1.jar 10*
>> > > 16/02/17 05:19:52 ERROR YarnClientSchedulerBackend: Yarn application has
>> > > already exited with state FINISHED!
>> > > Exception in thread "main" java.lang.IllegalStateException: Cannot call
>> > > methods on a stopped SparkContext
>> > >     at org.apache.spark.SparkContext.org$apache$spark$SparkContext$$assertNotStopped(SparkContext.scala:104)
>> > >     ......
>> > >
>> > >
>> > > Looking at the YARN application log file, there is a *RECEIVED SIGNAL 15:
>> > > SIGTERM* termination signal from the YARN container.
>> > > >>*yarn logs -applicationId application_1455683261278_0001*
>> > >
>> > > YARN executor launch context:
>> > >   env:
>> > >     CLASSPATH ->
>> > >
>> > >
>> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/*<CPS>$HADOOP_COMMON_HOME/lib/*<CPS>$HADOOP_HDFS_HOME/*<CPS>$HADOOP_HDFS_HOME/lib/*<CPS>$HADOOP_MAPRED_HOME/*<CPS>$HADOOP_MAPRED_HOME/lib/*<CPS>$HADOOP_YARN_HOME/*<CPS>$HADOOP_YARN_HOME/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
>> > >     SPARK_LOG_URL_STDERR -> http://2657cd5dc2f6:8042/node/containerlogs
>> > >
>> > >
>> ===============================================================================
>> > >
>> > > 16/02/17 04:29:18 INFO impl.ContainerManagementProtocolProxy: Opening
>> > > proxy : 2657cd5dc2f6:33785
>> > > 16/02/17 04:29:18 ERROR yarn.ApplicationMaster: *RECEIVED SIGNAL 15:
>> > > SIGTERM*
>> > > 16/02/17 04:29:18 INFO yarn.ApplicationMaster: Final app status:
>> > > UNDEFINED, exitCode: 0, (reason: Shutdown hook called before final
>> > > status was reported.)
>> > > 16/02/17 04:29:18 INFO yarn.ApplicationMaster: Unregistering
>> > > ApplicationMaster with UNDEFINED (diag message: Shutdown hook called
>> > > before final status was reported.)
>> > > 16/02/17 04:29:18 INFO impl.AMRMClientImpl: Waiting for application to
>> > > be successfully unregistered.
>> > > 16/02/17 04:29:18 INFO yarn.ApplicationMaster: Deleting staging
>> > > directory .sparkStaging/application_1455683261278_0001
>> > > 16/02/17 04:29:18 INFO util.ShutdownHookManager: Shutdown hook called
>> > > End of LogType:stderr
>> > >
>> > >
>> > > BTW, I have successfully tested Hadoop YARN by running a
>> > > TeraGen/TeraSort MapReduce job.
>> > > Before I start debugging, has anyone tested Spark in yarn-client mode?
>> > >
>>
>
>
