[jira] [Commented] (SPARK-1644) hql(CREATE TABLE IF NOT EXISTS src (key INT, value STRING)) throw an exception

2014-04-27 Thread witgo (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13982353#comment-13982353
 ] 

witgo commented on SPARK-1644:
--

The org.datanucleus:* artifacts can't be packaged into spark-assembly-*.jar:
{code}
Caused by: org.datanucleus.exceptions.NucleusUserException: Persistence process 
has been specified to use a ClassLoaderResolver of name datanucleus yet this 
has not been found by the DataNucleus plugin mechanism. Please check your 
CLASSPATH and plugin specification.
{code}

The {{plugin.xml}} and {{MANIFEST.MF}} files are damaged.
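DataNucleus locates its plugin metadata through the {{plugin.xml}} and {{MANIFEST.MF}} entries inside each jar, so an assembly merge that drops or mangles them produces the exception above. A hedged Python sketch (illustration only, not Spark code; the entry names come from the comment) of checking whether those entries survive in a jar:

```python
import io
import zipfile

def has_datanucleus_metadata(jar_bytes: bytes) -> bool:
    """Return True if the DataNucleus plugin descriptors are present in the jar."""
    with zipfile.ZipFile(io.BytesIO(jar_bytes)) as jar:
        names = set(jar.namelist())
    return "plugin.xml" in names and "META-INF/MANIFEST.MF" in names

# Build a toy "assembly jar" that, like the damaged assembly, lacks plugin.xml.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr("META-INF/MANIFEST.MF", "Manifest-Version: 1.0\n")
print(has_datanucleus_metadata(buf.getvalue()))  # False
```

The same check against the real spark-assembly jar would show whether the merge preserved the descriptors.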

  hql(CREATE TABLE IF NOT EXISTS src (key INT, value STRING)) throw an 
 exception
 -

 Key: SPARK-1644
 URL: https://issues.apache.org/jira/browse/SPARK-1644
 Project: Spark
  Issue Type: Bug
  Components: SQL
Reporter: witgo
 Attachments: spark.log


 cat conf/hive-site.xml
 {code:xml}
 <configuration>
   <property>
     <name>javax.jdo.option.ConnectionURL</name>
     <value>jdbc:postgresql://bj-java-hugedata1:7432/hive</value>
   </property>
   <property>
     <name>javax.jdo.option.ConnectionDriverName</name>
     <value>org.postgresql.Driver</value>
   </property>
   <property>
     <name>javax.jdo.option.ConnectionUserName</name>
     <value>hive</value>
   </property>
   <property>
     <name>javax.jdo.option.ConnectionPassword</name>
     <value>passwd</value>
   </property>
   <property>
     <name>hive.metastore.local</name>
     <value>false</value>
   </property>
   <property>
     <name>hive.metastore.warehouse.dir</name>
     <value>hdfs://host:8020/user/hive/warehouse</value>
   </property>
 </configuration>
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (SPARK-1644) hql(CREATE TABLE IF NOT EXISTS src (key INT, value STRING)) throw an exception

2014-04-27 Thread witgo (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13982353#comment-13982353
 ] 

witgo edited comment on SPARK-1644 at 4/27/14 3:18 PM:
---

The org.datanucleus:* artifacts can't be packaged into spark-assembly-*.jar:
{code}
Caused by: org.datanucleus.exceptions.NucleusUserException: Persistence process 
has been specified to use a ClassLoaderResolver of name datanucleus yet this 
has not been found by the DataNucleus plugin mechanism. Please check your 
CLASSPATH and plugin specification.
{code}

The {{plugin.xml}} and {{MANIFEST.MF}} files are damaged.


was (Author: witgo):
The org.datanucleus:*  can't be packaged into spark-assembly-*.jar 
{code}
Caused by: org.datanucleus.exceptions.NucleusUserException: Persistence process 
has been specified to use a ClassLoaderResolver of name datanucleus yet this 
has not been found by the DataNucleus plugin mechanism. Please check your 
CLASSPATH and plugin specification.
{code}

{code}plugin.xml{code}{code}MANIFEST.MF{code} is damaged

  hql(CREATE TABLE IF NOT EXISTS src (key INT, value STRING)) throw an 
 exception
 -

 Key: SPARK-1644
 URL: https://issues.apache.org/jira/browse/SPARK-1644
 Project: Spark
  Issue Type: Bug
  Components: SQL
Reporter: witgo
 Attachments: spark.log


 cat conf/hive-site.xml
 {code:xml}
 <configuration>
   <property>
     <name>javax.jdo.option.ConnectionURL</name>
     <value>jdbc:postgresql://bj-java-hugedata1:7432/hive</value>
   </property>
   <property>
     <name>javax.jdo.option.ConnectionDriverName</name>
     <value>org.postgresql.Driver</value>
   </property>
   <property>
     <name>javax.jdo.option.ConnectionUserName</name>
     <value>hive</value>
   </property>
   <property>
     <name>javax.jdo.option.ConnectionPassword</name>
     <value>passwd</value>
   </property>
   <property>
     <name>hive.metastore.local</name>
     <value>false</value>
   </property>
   <property>
     <name>hive.metastore.warehouse.dir</name>
     <value>hdfs://host:8020/user/hive/warehouse</value>
   </property>
 </configuration>
 {code}





[jira] [Updated] (SPARK-1644) hql(CREATE TABLE IF NOT EXISTS src (key INT, value STRING)) throw an exception

2014-04-26 Thread witgo (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-1644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

witgo updated SPARK-1644:
-

Attachment: (was: spark.log)

  hql(CREATE TABLE IF NOT EXISTS src (key INT, value STRING)) throw an 
 exception
 -

 Key: SPARK-1644
 URL: https://issues.apache.org/jira/browse/SPARK-1644
 Project: Spark
  Issue Type: Bug
  Components: SQL
Reporter: witgo

 cat conf/hive-site.xml
 {code:xml}
 <configuration>
   <property>
     <name>javax.jdo.option.ConnectionURL</name>
     <value>jdbc:postgresql://bj-java-hugedata1:7432/hive</value>
   </property>
   <property>
     <name>javax.jdo.option.ConnectionDriverName</name>
     <value>org.postgresql.Driver</value>
   </property>
   <property>
     <name>javax.jdo.option.ConnectionUserName</name>
     <value>hive</value>
   </property>
   <property>
     <name>javax.jdo.option.ConnectionPassword</name>
     <value>passwd</value>
   </property>
   <property>
     <name>hive.metastore.local</name>
     <value>false</value>
   </property>
   <property>
     <name>hive.metastore.warehouse.dir</name>
     <value>hdfs://host:8020/user/hive/warehouse</value>
   </property>
 </configuration>
 {code}





[jira] [Updated] (SPARK-1644) hql(CREATE TABLE IF NOT EXISTS src (key INT, value STRING)) throw an exception

2014-04-26 Thread witgo (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-1644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

witgo updated SPARK-1644:
-

Attachment: spark.log

  hql(CREATE TABLE IF NOT EXISTS src (key INT, value STRING)) throw an 
 exception
 -

 Key: SPARK-1644
 URL: https://issues.apache.org/jira/browse/SPARK-1644
 Project: Spark
  Issue Type: Bug
  Components: SQL
Reporter: witgo

 cat conf/hive-site.xml
 {code:xml}
 <configuration>
   <property>
     <name>javax.jdo.option.ConnectionURL</name>
     <value>jdbc:postgresql://bj-java-hugedata1:7432/hive</value>
   </property>
   <property>
     <name>javax.jdo.option.ConnectionDriverName</name>
     <value>org.postgresql.Driver</value>
   </property>
   <property>
     <name>javax.jdo.option.ConnectionUserName</name>
     <value>hive</value>
   </property>
   <property>
     <name>javax.jdo.option.ConnectionPassword</name>
     <value>passwd</value>
   </property>
   <property>
     <name>hive.metastore.local</name>
     <value>false</value>
   </property>
   <property>
     <name>hive.metastore.warehouse.dir</name>
     <value>hdfs://host:8020/user/hive/warehouse</value>
   </property>
 </configuration>
 {code}





[jira] [Created] (SPARK-1629) Spark Core missing commons-lang dependence

2014-04-25 Thread witgo (JIRA)
witgo created SPARK-1629:


 Summary:  Spark Core missing commons-lang dependence
 Key: SPARK-1629
 URL: https://issues.apache.org/jira/browse/SPARK-1629
 Project: Spark
  Issue Type: Bug
Reporter: witgo








[jira] [Comment Edited] (SPARK-1629) Spark Core missing commons-lang dependence

2014-04-25 Thread witgo (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13980886#comment-13980886
 ] 

witgo edited comment on SPARK-1629 at 4/25/14 11:06 AM:


Hi Sean Owen, see 
[Utils.scala|https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/util/Utils.scala#L33]


was (Author: witgo):
Hi Sean Owen see 
[Utils.scala|https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/util/Utils.scala#L33]

  Spark Core missing commons-lang dependence
 ---

 Key: SPARK-1629
 URL: https://issues.apache.org/jira/browse/SPARK-1629
 Project: Spark
  Issue Type: Bug
Reporter: witgo







[jira] [Created] (SPARK-1609) Executor fails to start

2014-04-24 Thread witgo (JIRA)
witgo created SPARK-1609:


 Summary: Executor fails to start
 Key: SPARK-1609
 URL: https://issues.apache.org/jira/browse/SPARK-1609
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Reporter: witgo
Priority: Blocker


{code}
export SPARK_JAVA_OPTS="-server -Dspark.ui.killEnabled=false \
-Dspark.akka.askTimeout=120 -Dspark.akka.timeout=120 \
-Dspark.locality.wait=1 \
-Dspark.storage.blockManagerTimeoutIntervalMs=600 \
-Dspark.storage.memoryFraction=0.7 \
-Dspark.broadcast.factory=org.apache.spark.broadcast.TorrentBroadcastFactory"
{code}
With these options, the executor fails to start.
{code}
export SPARK_JAVA_OPTS="-Dspark.ui.killEnabled=false \
-Dspark.akka.askTimeout=120 -Dspark.akka.timeout=120 \
-Dspark.locality.wait=1 \
-Dspark.storage.blockManagerTimeoutIntervalMs=600 \
-Dspark.storage.memoryFraction=0.7 \
-Dspark.broadcast.factory=org.apache.spark.broadcast.TorrentBroadcastFactory"
{code}
Without {{-server}}, the executor starts and works.









[jira] [Updated] (SPARK-1609) Executor fails to start when use spark-submit

2014-04-24 Thread witgo (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

witgo updated SPARK-1609:
-

Summary: Executor fails to start when use spark-submit  (was: Executor 
fails to start)

 Executor fails to start when use spark-submit
 -

 Key: SPARK-1609
 URL: https://issues.apache.org/jira/browse/SPARK-1609
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Reporter: witgo
Priority: Blocker
 Attachments: spark.log


 {code}
 export SPARK_JAVA_OPTS="-server -Dspark.ui.killEnabled=false \
 -Dspark.akka.askTimeout=120 -Dspark.akka.timeout=120 \
 -Dspark.locality.wait=1 \
 -Dspark.storage.blockManagerTimeoutIntervalMs=600 \
 -Dspark.storage.memoryFraction=0.7 \
 -Dspark.broadcast.factory=org.apache.spark.broadcast.TorrentBroadcastFactory"
 {code}
 With these options, the executor fails to start.
 {code}
 export SPARK_JAVA_OPTS="-Dspark.ui.killEnabled=false \
 -Dspark.akka.askTimeout=120 -Dspark.akka.timeout=120 \
 -Dspark.locality.wait=1 \
 -Dspark.storage.blockManagerTimeoutIntervalMs=600 \
 -Dspark.storage.memoryFraction=0.7 \
 -Dspark.broadcast.factory=org.apache.spark.broadcast.TorrentBroadcastFactory"
 {code}
 Without {{-server}}, the executor starts and works.





[jira] [Updated] (SPARK-1609) Executor fails to start when use spark-submit

2014-04-24 Thread witgo (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

witgo updated SPARK-1609:
-

Attachment: (was: spark.log)

 Executor fails to start when use spark-submit
 -

 Key: SPARK-1609
 URL: https://issues.apache.org/jira/browse/SPARK-1609
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Reporter: witgo
Priority: Blocker
 Attachments: spark.log


 {code}
 export SPARK_JAVA_OPTS="-server -Dspark.ui.killEnabled=false \
 -Dspark.akka.askTimeout=120 -Dspark.akka.timeout=120 \
 -Dspark.locality.wait=1 \
 -Dspark.storage.blockManagerTimeoutIntervalMs=600 \
 -Dspark.storage.memoryFraction=0.7 \
 -Dspark.broadcast.factory=org.apache.spark.broadcast.TorrentBroadcastFactory"
 {code}
 With these options, the executor fails to start.
 {code}
 export SPARK_JAVA_OPTS="-Dspark.ui.killEnabled=false \
 -Dspark.akka.askTimeout=120 -Dspark.akka.timeout=120 \
 -Dspark.locality.wait=1 \
 -Dspark.storage.blockManagerTimeoutIntervalMs=600 \
 -Dspark.storage.memoryFraction=0.7 \
 -Dspark.broadcast.factory=org.apache.spark.broadcast.TorrentBroadcastFactory"
 {code}
 Without {{-server}}, the executor starts and works.





[jira] [Updated] (SPARK-1609) Executor fails to start when use spark-submit

2014-04-24 Thread witgo (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

witgo updated SPARK-1609:
-

Attachment: (was: spark.log)

 Executor fails to start when use spark-submit
 -

 Key: SPARK-1609
 URL: https://issues.apache.org/jira/browse/SPARK-1609
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Reporter: witgo
Priority: Blocker
 Attachments: spark.log


 {code}
 export SPARK_JAVA_OPTS="-server -Dspark.ui.killEnabled=false \
 -Dspark.akka.askTimeout=120 -Dspark.akka.timeout=120 \
 -Dspark.locality.wait=1 \
 -Dspark.storage.blockManagerTimeoutIntervalMs=600 \
 -Dspark.storage.memoryFraction=0.7 \
 -Dspark.broadcast.factory=org.apache.spark.broadcast.TorrentBroadcastFactory"
 {code}
 With these options, the executor fails to start.
 {code}
 export SPARK_JAVA_OPTS="-Dspark.ui.killEnabled=false \
 -Dspark.akka.askTimeout=120 -Dspark.akka.timeout=120 \
 -Dspark.locality.wait=1 \
 -Dspark.storage.blockManagerTimeoutIntervalMs=600 \
 -Dspark.storage.memoryFraction=0.7 \
 -Dspark.broadcast.factory=org.apache.spark.broadcast.TorrentBroadcastFactory"
 {code}
 Without {{-server}}, the executor starts and works.





[jira] [Commented] (SPARK-1609) Executor fails to start when use spark-submit

2014-04-24 Thread witgo (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13980637#comment-13980637
 ] 

witgo commented on SPARK-1609:
--

{code}
Spark Executor Command: /opt/jdk1.8.0/bin/java -cp 
:/opt/spark/classes/echo-1.0-SNAPSHOT.jar:/opt/spark/classes/toona-assembly-1.0.0-SNAPSHOT.jar:/opt/spark/spark-1.0.0-cdh3/conf:/opt/spark/spark-1.0.0-cdh3/lib/spark-assembly-1.0.0-SNAPSHOT-hadoop0.20.2-cdh3u5.jar
 -Xss2m -Dspark.ui.killEnabled=false -Xms5120M -Xmx5120M 
org.apache.spark.executor.CoarseGrainedExecutorBackend 
akka.tcp://spark@spark:47185/user/CoarseGrainedScheduler 7 spark 4 
akka.tcp://sparkWorker@spark:35646/user/Worker app-20140425183255-
{code}
It fails with:
{code}
Invalid thread stack size: -Xss2m -Dspark.ui.killEnabled=false
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
{code}
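The "Invalid thread stack size" error suggests the whole option string was handed to the JVM as a single argument. A hedged Python sketch (illustration only; Spark's launcher is Scala, and this function name is invented) of splitting such a string into separate arguments:

```python
import shlex

def split_java_opts(opts: str) -> list[str]:
    """Split an extraJavaOptions-style string into separate JVM arguments.

    Passed as one argv entry, "-Xss2m -Dspark.ui.killEnabled=false" is read
    by the JVM as a single malformed flag, matching the error above.
    """
    return shlex.split(opts)

print(split_java_opts("-Xss2m -Dspark.ui.killEnabled=false"))
# ['-Xss2m', '-Dspark.ui.killEnabled=false']
```

{{shlex.split}} also handles quoted values with embedded spaces, which naive whitespace splitting would break.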

 Executor fails to start when use spark-submit
 -

 Key: SPARK-1609
 URL: https://issues.apache.org/jira/browse/SPARK-1609
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Reporter: witgo
Priority: Blocker
 Attachments: spark.log


 {code}
 export SPARK_JAVA_OPTS="-server -Dspark.ui.killEnabled=false \
 -Dspark.akka.askTimeout=120 -Dspark.akka.timeout=120 \
 -Dspark.locality.wait=1 \
 -Dspark.storage.blockManagerTimeoutIntervalMs=600 \
 -Dspark.storage.memoryFraction=0.7 \
 -Dspark.broadcast.factory=org.apache.spark.broadcast.TorrentBroadcastFactory"
 {code}
 With these options, the executor fails to start.
 {code}
 export SPARK_JAVA_OPTS="-Dspark.ui.killEnabled=false \
 -Dspark.akka.askTimeout=120 -Dspark.akka.timeout=120 \
 -Dspark.locality.wait=1 \
 -Dspark.storage.blockManagerTimeoutIntervalMs=600 \
 -Dspark.storage.memoryFraction=0.7 \
 -Dspark.broadcast.factory=org.apache.spark.broadcast.TorrentBroadcastFactory"
 {code}
 Without {{-server}}, the executor starts and works.





[jira] [Updated] (SPARK-1609) Executor fails to start when Command.extraJavaOptions contains multiple Java options

2014-04-24 Thread witgo (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

witgo updated SPARK-1609:
-

Summary: Executor fails to start when Command.extraJavaOptions contains 
multiple Java options  (was: Executor fails to start when use spark-submit)

 Executor fails to start when Command.extraJavaOptions contains multiple Java 
 options
 

 Key: SPARK-1609
 URL: https://issues.apache.org/jira/browse/SPARK-1609
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Reporter: witgo
Priority: Blocker
 Attachments: spark.log


 {code}
 export SPARK_JAVA_OPTS="-server -Dspark.ui.killEnabled=false \
 -Dspark.akka.askTimeout=120 -Dspark.akka.timeout=120 \
 -Dspark.locality.wait=1 \
 -Dspark.storage.blockManagerTimeoutIntervalMs=600 \
 -Dspark.storage.memoryFraction=0.7 \
 -Dspark.broadcast.factory=org.apache.spark.broadcast.TorrentBroadcastFactory"
 {code}
 With these options, the executor fails to start.
 {code}
 export SPARK_JAVA_OPTS="-Dspark.ui.killEnabled=false \
 -Dspark.akka.askTimeout=120 -Dspark.akka.timeout=120 \
 -Dspark.locality.wait=1 \
 -Dspark.storage.blockManagerTimeoutIntervalMs=600 \
 -Dspark.storage.memoryFraction=0.7 \
 -Dspark.broadcast.factory=org.apache.spark.broadcast.TorrentBroadcastFactory"
 {code}
 Without {{-server}}, the executor starts and works.





[jira] [Commented] (SPARK-1525) TaskSchedulerImpl should decrease availableCpus by spark.task.cpus not 1

2014-04-17 Thread witgo (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13972957#comment-13972957
 ] 

witgo commented on SPARK-1525:
--

The latest code already fixes this bug.

 TaskSchedulerImpl should decrease availableCpus by spark.task.cpus not 1
 

 Key: SPARK-1525
 URL: https://issues.apache.org/jira/browse/SPARK-1525
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Reporter: YanTang Zhai
Priority: Minor

 TaskSchedulerImpl always decreases availableCpus by 1 in the resourceOffers 
 process, even when spark.task.cpus is more than 1, which schedules too many 
 tasks onto a node.
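A minimal Python sketch (illustration only; TaskSchedulerImpl is Scala, and the function names here are invented) of why decrementing the offer by 1 over-commits CPUs when spark.task.cpus is greater than 1:

```python
def schedulable_tasks(available_cpus: int, cpus_per_task: int) -> int:
    """Fixed behavior: decrement the offer by cpus_per_task per launched task."""
    launched = 0
    while available_cpus >= cpus_per_task:
        available_cpus -= cpus_per_task
        launched += 1
    return launched

def buggy_schedulable_tasks(available_cpus: int, cpus_per_task: int) -> int:
    """Behavior described above: always decrement by 1 regardless of task size."""
    launched = 0
    while available_cpus >= cpus_per_task:
        available_cpus -= 1
        launched += 1
    return launched

# With an 8-core offer and spark.task.cpus=4, only 2 tasks actually fit,
# but the buggy loop hands out 5 (needing 20 cores).
print(schedulable_tasks(8, 4), buggy_schedulable_tasks(8, 4))  # 2 5
```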





[jira] [Commented] (SPARK-1518) Spark master doesn't compile against hadoop-common trunk

2014-04-17 Thread witgo (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973084#comment-13973084
 ] 

witgo commented on SPARK-1518:
--

As the Hadoop API changes, some methods have been removed.
The Hadoop-related code in Spark Core should be split out into separate modules, 
as was done for YARN.

 Spark master doesn't compile against hadoop-common trunk
 

 Key: SPARK-1518
 URL: https://issues.apache.org/jira/browse/SPARK-1518
 Project: Spark
  Issue Type: Bug
Reporter: Marcelo Vanzin

 FSDataOutputStream::sync() has disappeared from trunk in Hadoop; 
 FileLogger.scala is calling it.
 I've changed it locally to hsync() so I can compile the code, but haven't 
 checked yet whether those are equivalent. hsync() seems to have been there 
 forever, so it hopefully works with all versions Spark cares about.





[jira] [Created] (SPARK-1500) add with-hive argument to make-distribution.sh

2014-04-15 Thread witgo (JIRA)
witgo created SPARK-1500:


 Summary:  add with-hive argument to make-distribution.sh
 Key: SPARK-1500
 URL: https://issues.apache.org/jira/browse/SPARK-1500
 Project: Spark
  Issue Type: Improvement
  Components: Build
Affects Versions: 1.0.0
Reporter: witgo








[jira] [Created] (SPARK-1509) add zipWithIndex zipWithUniqueId methods to java api

2014-04-15 Thread witgo (JIRA)
witgo created SPARK-1509:


 Summary: add zipWithIndex zipWithUniqueId methods to java api
 Key: SPARK-1509
 URL: https://issues.apache.org/jira/browse/SPARK-1509
 Project: Spark
  Issue Type: Bug
  Components: Java API
Reporter: witgo








[jira] [Comment Edited] (SPARK-1479) building spark on 2.0.0-cdh4.4.0 failed

2014-04-13 Thread witgo (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13967769#comment-13967769
 ] 

witgo edited comment on SPARK-1479 at 4/13/14 9:34 AM:
---

The CDH 4.4.0 YARN API has changed. Right now Spark doesn't support 
cdh4.4, cdh4.5, or cdh4.6; however, Spark does support cdh4.3.


was (Author: witgo):
CDH4.4.0 yarn api has changed .Right now Spark doesn't support 
cdh4.4,cdh4.5,cdh4.6

 building spark on 2.0.0-cdh4.4.0 failed
 ---

 Key: SPARK-1479
 URL: https://issues.apache.org/jira/browse/SPARK-1479
 Project: Spark
  Issue Type: Question
 Environment: 2.0.0-cdh4.4.0
 Scala code runner version 2.10.4 -- Copyright 2002-2013, LAMP/EPFL
 spark 0.9.1
 java version 1.6.0_32
Reporter: jackielihf
 Attachments: mvn.log


 [INFO] 
 
 [ERROR] Failed to execute goal 
 net.alchim31.maven:scala-maven-plugin:3.1.5:compile (scala-compile-first) on 
 project spark-yarn-alpha_2.10: Execution scala-compile-first of goal 
 net.alchim31.maven:scala-maven-plugin:3.1.5:compile failed. CompileFailed - 
 [Help 1]
 org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
 goal net.alchim31.maven:scala-maven-plugin:3.1.5:compile 
 (scala-compile-first) on project spark-yarn-alpha_2.10: Execution 
 scala-compile-first of goal 
 net.alchim31.maven:scala-maven-plugin:3.1.5:compile failed.
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:225)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
   at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
   at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
   at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
   at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
   at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
   at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
   at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
   at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
   at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
 Caused by: org.apache.maven.plugin.PluginExecutionException: Execution 
 scala-compile-first of goal 
 net.alchim31.maven:scala-maven-plugin:3.1.5:compile failed.
   at 
 org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:110)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
   ... 19 more
 Caused by: Compilation failed
   at sbt.compiler.AnalyzingCompiler.call(AnalyzingCompiler.scala:76)
   at sbt.compiler.AnalyzingCompiler.compile(AnalyzingCompiler.scala:35)
   at sbt.compiler.AnalyzingCompiler.compile(AnalyzingCompiler.scala:29)
   at 
 sbt.compiler.AggressiveCompile$$anonfun$4$$anonfun$compileScala$1$1.apply$mcV$sp(AggressiveCompile.scala:71)
   at 
 sbt.compiler.AggressiveCompile$$anonfun$4$$anonfun$compileScala$1$1.apply(AggressiveCompile.scala:71)
   at 
 sbt.compiler.AggressiveCompile$$anonfun$4$$anonfun$compileScala$1$1.apply(AggressiveCompile.scala:71)
   at 
 sbt.compiler.AggressiveCompile.sbt$compiler$AggressiveCompile$$timed(AggressiveCompile.scala:101)
   at 
 sbt.compiler.AggressiveCompile$$anonfun$4.compileScala$1(AggressiveCompile.scala:70)
   at 
 sbt.compiler.AggressiveCompile$$anonfun$4.apply(AggressiveCompile.scala:88)
   at 
 sbt.compiler.AggressiveCompile$$anonfun$4.apply(AggressiveCompile.scala:60)
   at 
 sbt.inc.IncrementalCompile$$anonfun$doCompile$1.apply(Compile.scala:24)
   at 
 sbt.inc.IncrementalCompile$$anonfun$doCompile$1.apply(Compile.scala:22)
   at sbt.inc.Incremental$.cycle(Incremental.scala:40)
   at 

[jira] [Created] (SPARK-1477) Add the lifecycle interface

2014-04-12 Thread witgo (JIRA)
witgo created SPARK-1477:


 Summary: Add the lifecycle interface
 Key: SPARK-1477
 URL: https://issues.apache.org/jira/browse/SPARK-1477
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 1.0.0
Reporter: witgo


In the current Spark code, many interfaces and classes define their own stop and start 
methods, e.g. [SchedulerBackend|https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/SchedulerBackend.scala], [HttpServer|https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/HttpServer.scala], [ContextCleaner|https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/ContextCleaner.scala]. We should introduce a common lifecycle interface to improve the code.
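The idea could be sketched as follows (Python for illustration only; Spark would define this as a Scala trait, and the class names here are invented):

```python
from abc import ABC, abstractmethod

class Lifecycle(ABC):
    """A shared start/stop contract, as the proposal suggests."""

    @abstractmethod
    def start(self) -> None: ...

    @abstractmethod
    def stop(self) -> None: ...

class ManagedHttpServer(Lifecycle):
    """Hypothetical component adopting the interface instead of ad-hoc methods."""

    def __init__(self) -> None:
        self.running = False

    def start(self) -> None:
        self.running = True

    def stop(self) -> None:
        self.running = False

server = ManagedHttpServer()
server.start()
print(server.running)  # True
server.stop()
print(server.running)  # False
```

With one contract, SparkContext could start and stop heterogeneous components uniformly instead of knowing each one's ad-hoc method names.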





[jira] [Commented] (SPARK-1479) building spark on 2.0.0-cdh4.4.0 failed

2014-04-12 Thread witgo (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13967719#comment-13967719
 ] 

witgo commented on SPARK-1479:
--

Can you provide the full logs? e.g.:
{code}
mvn -Pyarn-alpha -Dhadoop.version=2.0.0-cdh4.4.0 -Dyarn.version=2.0.0-cdh4.4.0 \
-DskipTests clean package -X > mvn.log
{code}

 building spark on 2.0.0-cdh4.4.0 failed
 ---

 Key: SPARK-1479
 URL: https://issues.apache.org/jira/browse/SPARK-1479
 Project: Spark
  Issue Type: Bug
 Environment: 2.0.0-cdh4.4.0
 Scala code runner version 2.10.4 -- Copyright 2002-2013, LAMP/EPFL
 spark 0.9.1
 java version 1.6.0_32
Reporter: jackielihf

 [INFO] 
 
 [ERROR] Failed to execute goal 
 net.alchim31.maven:scala-maven-plugin:3.1.5:compile (scala-compile-first) on 
 project spark-yarn-alpha_2.10: Execution scala-compile-first of goal 
 net.alchim31.maven:scala-maven-plugin:3.1.5:compile failed. CompileFailed - 
 [Help 1]
 org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
 goal net.alchim31.maven:scala-maven-plugin:3.1.5:compile 
 (scala-compile-first) on project spark-yarn-alpha_2.10: Execution 
 scala-compile-first of goal 
 net.alchim31.maven:scala-maven-plugin:3.1.5:compile failed.
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:225)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
   at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
   at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
   at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
   at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
   at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
   at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
   at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
   at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
   at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
 Caused by: org.apache.maven.plugin.PluginExecutionException: Execution 
 scala-compile-first of goal 
 net.alchim31.maven:scala-maven-plugin:3.1.5:compile failed.
   at 
 org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:110)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
   ... 19 more
 Caused by: Compilation failed
   at sbt.compiler.AnalyzingCompiler.call(AnalyzingCompiler.scala:76)
   at sbt.compiler.AnalyzingCompiler.compile(AnalyzingCompiler.scala:35)
   at sbt.compiler.AnalyzingCompiler.compile(AnalyzingCompiler.scala:29)
   at 
 sbt.compiler.AggressiveCompile$$anonfun$4$$anonfun$compileScala$1$1.apply$mcV$sp(AggressiveCompile.scala:71)
   at 
 sbt.compiler.AggressiveCompile$$anonfun$4$$anonfun$compileScala$1$1.apply(AggressiveCompile.scala:71)
   at 
 sbt.compiler.AggressiveCompile$$anonfun$4$$anonfun$compileScala$1$1.apply(AggressiveCompile.scala:71)
   at 
 sbt.compiler.AggressiveCompile.sbt$compiler$AggressiveCompile$$timed(AggressiveCompile.scala:101)
   at 
 sbt.compiler.AggressiveCompile$$anonfun$4.compileScala$1(AggressiveCompile.scala:70)
   at 
 sbt.compiler.AggressiveCompile$$anonfun$4.apply(AggressiveCompile.scala:88)
   at 
 sbt.compiler.AggressiveCompile$$anonfun$4.apply(AggressiveCompile.scala:60)
   at 
 sbt.inc.IncrementalCompile$$anonfun$doCompile$1.apply(Compile.scala:24)
   at 
 sbt.inc.IncrementalCompile$$anonfun$doCompile$1.apply(Compile.scala:22)
   at sbt.inc.Incremental$.cycle(Incremental.scala:40)
   at sbt.inc.Incremental$.compile(Incremental.scala:25)
   at sbt.inc.IncrementalCompile$.apply(Compile.scala:20)
   at 

[jira] [Created] (SPARK-1470) Use the scala-logging wrapper instead of the directly sfl4j api

2014-04-10 Thread witgo (JIRA)
witgo created SPARK-1470:


 Summary: Use the scala-logging wrapper instead of the directly 
sfl4j api
 Key: SPARK-1470
 URL: https://issues.apache.org/jira/browse/SPARK-1470
 Project: Spark
  Issue Type: New Feature
Reporter: witgo


Spark Catalyst currently uses scalalogging-slf4j, but Spark Core uses the 
slf4j API directly.
We should use the scalalogging-slf4j wrapper instead of the slf4j API directly.





[jira] [Comment Edited] (SPARK-1413) Parquet messes up stdout and stdin when used in Spark REPL

2014-04-08 Thread witgo (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13959784#comment-13959784
 ] 

witgo edited comment on SPARK-1413 at 4/8/14 8:11 AM:
--

You can try this  [PR|https://github.com/apache/spark/pull/325]



was (Author: witgo):
Try [the PR 325|https://github.com/apache/spark/pull/325]

 Parquet messes up stdout and stdin when used in Spark REPL
 --

 Key: SPARK-1413
 URL: https://issues.apache.org/jira/browse/SPARK-1413
 Project: Spark
  Issue Type: Bug
  Components: SQL
Reporter: Matei Zaharia
Assignee: Michael Armbrust
Priority: Critical
 Fix For: 1.0.0


 I have a simple Parquet file in "foos.parquet", but after I type this code, 
 it freezes the shell, to the point where I can't read or write stuff:
 {code}
 scala> val qc = new org.apache.spark.sql.SQLContext(sc); import qc._
 qc: org.apache.spark.sql.SQLContext = org.apache.spark.sql.SQLContext@1c0c8826
 import qc._
 scala> qc.parquetFile("foos.parquet").saveAsTextFile("bar")
 {code}
 The job itself completes successfully, and "bar" contains the right text, but 
 I can no longer see commands I type in, or further log output.





[jira] [Commented] (SPARK-1420) The maven build error for Spark Catalyst

2014-04-05 Thread witgo (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961126#comment-13961126
 ] 

witgo commented on SPARK-1420:
--

{code}
mvn -Pyarn -Dhadoop.version=2.3.0 -Dyarn.version=2.3.0 -DskipTests install
{code}
fails with:
{code}
[ERROR] 
/Users/witgo/work/code/java/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/ScalaReflection.scala:31:
 object runtime is not a member of package reflect
[ERROR]   import scala.reflect.runtime.universe._
{code}

 The maven build error for Spark Catalyst
 

 Key: SPARK-1420
 URL: https://issues.apache.org/jira/browse/SPARK-1420
 Project: Spark
  Issue Type: Bug
  Components: Build
Reporter: witgo







[jira] [Commented] (SPARK-1413) Parquet messes up stdout and stdin when used in Spark REPL

2014-04-04 Thread witgo (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13959784#comment-13959784
 ] 

witgo commented on SPARK-1413:
--

Try [the PR 325|https://github.com/apache/spark/pull/325]

 Parquet messes up stdout and stdin when used in Spark REPL
 --

 Key: SPARK-1413
 URL: https://issues.apache.org/jira/browse/SPARK-1413
 Project: Spark
  Issue Type: Bug
  Components: SQL
Reporter: Matei Zaharia
Assignee: Michael Armbrust
Priority: Critical
 Fix For: 1.0.0


 I have a simple Parquet file in "foos.parquet", but after I type this code, 
 it freezes the shell, to the point where I can't read or write stuff:
 {code}
 scala> val qc = new org.apache.spark.sql.SQLContext(sc); import qc._
 qc: org.apache.spark.sql.SQLContext = org.apache.spark.sql.SQLContext@1c0c8826
 import qc._
 scala> qc.parquetFile("foos.parquet").saveAsTextFile("bar")
 {code}
 The job itself completes successfully, and "bar" contains the right text, but 
 I can no longer see commands I type in, or further log output.


