OK, it works now! Thank you.
And I have another question, about the shell interpreter.
The shell interpreter works when I type simple commands, but it fails when I try to
start spark-shell or spark-submit, or even run sudo. I would like to know what the
shell interpreter's limits are. Why does it always report
Process exited with an error: 1 (Exit value: 1) when I type sudo or spark-submit?
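For example, a %sh paragraph with simple commands like these runs fine (the exact
commands here are just illustrations, not the precise ones I tried):

    %sh
    echo hello
    ls /tmp

but paragraphs like these always fail with Exit value: 1:

    %sh
    sudo whoami

    %sh
    spark-submit --version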

And when I type spark-shell, it outputs the following:


setting ulimit -m to 57671680
Spark assembly has been built with Hive, including Datanucleus jars on classpath
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/pipeline/zeppelin-manager/zeppelin-0.5.0-incubating-SNAPSHOT/interpreter/sh/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/pipeline/zeppelin-manager/zeppelin-0.5.0-incubating-SNAPSHOT/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/simple-spark.1.3.0/lib/spark-assembly-1.3.0-hadoop2.0.0-mr1-cdh4.2.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException:  (No such file or directory)
        at java.io.FileOutputStream.open(Native Method)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:142)
        at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
        at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
        at org.apache.log4j.DailyRollingFileAppender.activateOptions(DailyRollingFileAppender.java:223)
        at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
        at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
        at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
        at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
        at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
        at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
        at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
        at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
        at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
        at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
        at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:64)
        at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:285)
        at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:155)
        at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:132)
        at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:685)
        at org.apache.hadoop.security.UserGroupInformation.<clinit>(UserGroupInformation.java:78)
        at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:1996)
        at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:1996)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:1996)
        at org.apache.spark.SecurityManager.<init>(SecurityManager.scala:207)
        at org.apache.spark.repl.SparkIMain.<init>(SparkIMain.scala:118)
        at org.apache.spark.repl.SparkILoop$SparkILoopInterpreter.<init>(SparkILoop.scala:187)
        at org.apache.spark.repl.SparkILoop.createInterpreter(SparkILoop.scala:216)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:948)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
        at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:944)
        at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1058)
        at org.apache.spark.repl.Main$.main(Main.scala:31)
        at org.apache.spark.repl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
log4j:ERROR Either File or DatePattern options are not set for appender [dailyfile].
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.3.0
      /_/

Using Scala version 2.10.4 (OpenJDK 64-Bit Server VM, Java 1.7.0_71)
Type in expressions to have them evaluated.
Type :help for more information.
org.apache.spark.SparkException: Found both spark.executor.extraClassPath and SPARK_CLASSPATH. Use only the former.
        at org.apache.spark.SparkConf$$anonfun$validateSettings$6$$anonfun$apply$7.apply(SparkConf.scala:339)
        at org.apache.spark.SparkConf$$anonfun$validateSettings$6$$anonfun$apply$7.apply(SparkConf.scala:337)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at org.apache.spark.SparkConf$$anonfun$validateSettings$6.apply(SparkConf.scala:337)
        at org.apache.spark.SparkConf$$anonfun$validateSettings$6.apply(SparkConf.scala:325)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.SparkConf.validateSettings(SparkConf.scala:325)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:197)
        at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1016)
        at $iwC$$iwC.<init>(<console>:9)
        at $iwC.<init>(<console>:18)
        at <init>(<console>:20)
        at .<init>(<console>:24)
        at .<clinit>(<console>)
        at .<init>(<console>:7)
        at .<clinit>(<console>)
        at $print(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
        at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
        at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
        at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:856)
        at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:901)
        at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:813)
        at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:123)
        at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:122)
        at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
        at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:122)
        at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:973)
        at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:157)
        at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:106)
        at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:990)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
        at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:944)
        at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1058)
        at org.apache.spark.repl.Main$.main(Main.scala:31)
        at org.apache.spark.repl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

java.lang.NullPointerException
        at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:141)
        at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:49)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1027)
        at $iwC$$iwC.<init>(<console>:9)
<console>:10: error: not found: value sqlContext
       import sqlContext.implicits._
              ^
<console>:10: error: not found: value sqlContext
       import sqlContext.sql
              ^

scala> Stopping spark context.
<console>:14: error: not found: value sc
              sc.stop()
              ^
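For reference, my reading of the two errors above (this is only my guess, and the paths
below are placeholders I made up, not my real configuration):

1) The SparkException says to use only spark.executor.extraClassPath, so I think the
   executor classpath should move from the SPARK_CLASSPATH environment variable into
   the Spark config, roughly:

    # conf/spark-env.sh: comment out the old variable
    # export SPARK_CLASSPATH=/path/to/extra/jars/*

    # conf/spark-defaults.conf: declare the same jars the new way
    spark.executor.extraClassPath  /path/to/extra/jars/*
    spark.driver.extraClassPath    /path/to/extra/jars/*

2) The log4j error says the [dailyfile] appender has no File or DatePattern set, which
   would correspond to entries like these in whatever log4j.properties spark-shell
   picks up here:

    log4j.appender.dailyfile.File=/path/to/logs/zeppelin-sh-interpreter.log
    log4j.appender.dailyfile.DatePattern='.'yyyy-MM-dd

Does that look like the right direction, and would it explain the "not found: value sc"
errors above?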

> On Jul 22, 2015, at 8:36 PM, Alexander Bezzubov <abezzu...@nflabs.com> wrote:
> 
> Hi,
> 
> those are indeed valid steps, and you do not need to build a custom
> Spark version.
> 
> Hope this helps!
> 
> On Tue, Jul 21, 2015 at 7:46 PM, 江之源 <jiangzhiy...@liulishuo.com> wrote:
>> Hi,
>> Installing Zeppelin with z-manager was a last resort, because I tried to
>> install Zeppelin manually and failed. I have tried many, many times.
>> My cluster runs Spark 1.3.0 and Hadoop 2.0.0-cdh4.5.0, and the deploy mode is standalone.
>> I will install Zeppelin manually right now, so could you please check my
>> steps:
>> 
>> 1. git clone the repository from GitHub.
>> 2. mvn clean package
>> 3. mvn install -DskipTests -Dspark.version=1.3.0 -Dhadoop.version=2.0.0-cdh4.5.0
>>    (does Zeppelin support cdh4.5.0?)
>>    Do I have to do a custom Spark build, i.e. something like -Dspark.version=1.1.0-Custom?
>> 4. Modify my master to spark://...:7077
>> Is that everything, or did I miss something? Please tell me (a combined sketch of these steps is just below).
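>> For clarity, here is a combined sketch of steps 1-4 as one sequence (the repository
>> URL and the zeppelin-env.sh variable are my assumptions; the version properties are
>> the ones from step 3):
>>
>>    # clone and build against my cluster's Spark/Hadoop versions
>>    git clone https://github.com/apache/incubator-zeppelin.git
>>    cd incubator-zeppelin
>>    mvn clean package -DskipTests -Dspark.version=1.3.0 -Dhadoop.version=2.0.0-cdh4.5.0
>>
>>    # point Zeppelin at the standalone master (step 4), e.g. in conf/zeppelin-env.sh
>>    export MASTER=spark://...:7077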
>> thanks
>> jzy
>> 
>> On Jul 21, 2015, at 5:48 PM, Alexander Bezzubov <abezzu...@nflabs.com> wrote:
>> 
>> Hi,
>> 
>> thank you for your interest in the project!
>> 
>> It seems like the best way to get Zeppelin up and running in your case
>> would be to build it manually with the relevant Spark/Hadoop options, as
>> described here:
>> http://zeppelin.incubator.apache.org/docs/install/install.html
>> 
>> Please, let me know if that helps.
>> 
>> --
>> BR,
>> Alex
>> 
>> On Tue, Jul 21, 2015 at 11:35 AM, 江之源 <jiangzhiy...@liulishuo.com> wrote:
>> 
>> Hi,
>> I installed Zeppelin a while ago, but it always failed on my server
>> cluster. Then I happened to find z-manager, installed Zeppelin with it, and it
>> succeeded on my server. But when I want to read an HDFS file like this:
>> 
>> sc.textFile("hdfs://llscluster/tmp/jzyresult/part-04093").count()
>> 
>> 
>> it throws this error on my cluster: Job aborted due to stage failure: Task 15
>> in stage 6.0 failed 4 times, most recent failure: Lost task 15.3 in stage
>> 6.0 (TID 386, lls7): java.io.EOFException
>>
>> When I switch it to local mode, it can read the HDFS file successfully.
>> My cluster is Spark 1.3.0 with Hadoop 2.0.0-CDH4.5.0, but the install options only
>> offer Spark 1.3.0 with Hadoop 2.0.0-CDH4.7.0. Is that why reading the HDFS
>> file fails?
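>> (As a quick sanity check on the mismatch, I could run something like the following;
>> the Zeppelin directory name is just an example taken from my install:)
>>
>>    hadoop version    # on the cluster, reports 2.0.0-cdh4.5.0
>>    ls zeppelin-0.5.0-incubating-SNAPSHOT/interpreter/spark    # see which hadoop/spark jars Zeppelin bundles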
>> Look forward to your reply!
>> THANK YOU!
>> JZY
>> 
>> 
>> 
>> 
>> --
>> --
>> Kind regards,
>> Alexander.
>> 
>> 
> 
> 
> 
> -- 
> --
> Kind regards,
> Alexander.
