Re: Spark 2.2.1 - Operation not allowed: alter table replace columns

2018-12-19 Thread Jiaan Geng
This SQL syntax is not supported yet. Please use ALTER TABLE ... CHANGE COLUMN instead.
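
For example (the database, table and column names below are made up, and note that in Spark 2.x CHANGE COLUMN is quite restricted; as far as I know the new column name and type must match the old ones, so in practice only the comment can change):

# hypothetical database/table/column; run from a shell with the spark-sql CLI
spark-sql -e "ALTER TABLE my_db.my_table CHANGE COLUMN col_a col_a INT COMMENT 'updated comment'"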







Re: spark 2.2.1

2018-02-02 Thread Mihai Iacob
Turns out it was the master recovery directory that was messing things up. What was written there had been written by Spark 2.0.2, and after replacing the master the recovery process would fail with that error, but there were no clues that that was what was happening.
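
For anyone else who hits this: the standalone master's FILESYSTEM recovery state is just serialized Java objects on disk, so state written by an older Spark version may fail to deserialize after an upgrade. A rough sketch (paths below are only examples) of how that directory is typically configured in conf/spark-env.sh, and how the stale state can be cleared before starting the upgraded master:

# conf/spark-env.sh on the master; /var/spark/recovery is only an example path
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=FILESYSTEM -Dspark.deploy.recoveryDirectory=/var/spark/recovery"

# before starting the upgraded master, move the old serialized state aside
# so the new master does not try to deserialize objects written by Spark 2.0.2
mv /var/spark/recovery /var/spark/recovery.bak-2.0.2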
 
Regards, 

Mihai Iacob
DSX Local - Security, IBM Analytics
 
 
- Original message -
From: Bill Schwanitz
To: Mihai Iacob
Cc: User
Subject: Re: spark 2.2.1
Date: Fri, Feb 2, 2018 8:23 AM
What version of java? 
 
On Feb 1, 2018 11:30 AM, "Mihai Iacob"  wrote:

I am setting up a Spark 2.2.1 cluster; however, when I bring up the master and workers (both on Spark 2.2.1) I get the error below. I tried Spark 2.2.0 and get the same error. It works fine on Spark 2.0.2. Have you seen this before? Any idea what's wrong?
 
I found this, but it's in a different situation: https://github.com/apache/spark/pull/19802
 
18/02/01 05:07:22 ERROR Utils: Exception encountered
java.io.InvalidClassException: org.apache.spark.rpc.RpcEndpointRef; local class incompatible: stream classdesc serialVersionUID = -1223633663228316618, local class serialVersionUID = 1835832137613908542
        at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:687)
        at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1885)
        at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
        at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1885)
        at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2042)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
        at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:563)
        at org.apache.spark.deploy.master.WorkerInfo$$anonfun$readObject$1.apply$mcV$sp(WorkerInfo.scala:52)
        at org.apache.spark.deploy.master.WorkerInfo$$anonfun$readObject$1.apply(WorkerInfo.scala:51)
        at org.apache.spark.deploy.master.WorkerInfo$$anonfun$readObject$1.apply(WorkerInfo.scala:51)
        at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1303)
        at org.apache.spark.deploy.master.WorkerInfo.readObject(WorkerInfo.scala:51)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1158)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2178)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:433)
        at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
        at org.apache.spark.deploy.master.FileSystemPersistenceEngine.org$apache$spark$deploy$master$FileSystemPersistenceEngine$$deserializeFromFile(FileSystemPersistenceEngine.scala:80)
        at org.apache.spark.deploy.master.FileSystemPersistenceEngine$$anonfun$read$1.apply(FileSystemPersistenceEngine.scala:56)
        at org.apache.spark.deploy.master.FileSystemPersistenceEngine$$anonfun$read$1.apply(FileSystemPersistenceEngine.scala:56)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
        at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
        at org.apache.spark.deploy.master.FileSystemPersistenceEngine.read(FileSystemPersistenceEngine.scala:56)
        at org.apache.spark.deploy.master.PersistenceEngine$$anonfun$readPersistedData$1.apply(PersistenceEngine.scala:87)
        at org.apache.spark.deploy.master.PersistenceEngine$$anonfun$readPersistedData$1.apply(PersistenceEngine.scala:86)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
        at org.apache.spark.rpc.netty.NettyRpcEnv.deserialize(NettyRpcEnv.scala:316)
        packet_write_wait: Connection to 9.30.118.193 port 22: Broken pipe
        Data(PersistenceEngine.scala:86)
 
 
 

Re: spark 2.2.1

2018-02-02 Thread Bill Schwanitz
What version of java?



Re: Spark 2.2.1 worker invocation

2017-12-26 Thread Felix Cheung
I think you are looking for spark.executor.extraJavaOptions

https://spark.apache.org/docs/latest/configuration.html#runtime-environment
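
As a rough sketch of how that might look at submit time (the class name, jar, master URL and library path below are placeholders), the same path can be passed to both the driver and the executors:

# class name, jar, master URL and /usr/local/lib are placeholders
spark-submit \
  --master spark://master-host:7077 \
  --class com.example.MyApp \
  --conf "spark.driver.extraJavaOptions=-Djava.library.path=/usr/local/lib" \
  --conf "spark.executor.extraJavaOptions=-Djava.library.path=/usr/local/lib" \
  my-app.jar

spark.driver.extraLibraryPath and spark.executor.extraLibraryPath, on the same page, may also be worth a look for native libraries.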


From: Christopher Piggott 
Sent: Tuesday, December 26, 2017 8:00:56 AM
To: user@spark.apache.org
Subject: Spark 2.2.1 worker invocation

I need to set java.library.path to get access to some native code.  Following 
directions, I made a spark-env.sh:

#!/usr/bin/env bash
export LD_LIBRARY_PATH="/usr/local/lib/libcdfNativeLibrary.so:/usr/local/lib/libcdf.so:${LD_LIBRARY_PATH}"
export SPARK_WORKER_OPTS=-Djava.library.path=/usr/local/lib
export SPARK_WORKER_MEMORY=2g

to no avail.  (I tried both with and without exporting the environment).  
Looking at how the worker actually starts up:

 /usr/lib/jvm/default/bin/java -cp /home/spark/conf/:/home/spark/jars/* -Xmx1024M -Dspark.driver.port=37219 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@10.1.1.1:37219 --executor-id 113 --hostname 10.2.2.1 --cores 8 --app-id app-20171225145607-0003 --worker-url spark://Worker@10.2.2.1:35449



It doesn't seem to take any options.  I put an 'echo' in just to confirm that 
spark-env.sh is getting invoked (and it is).

So, just out of curiosity, I tried to troubleshoot this:



spark@node2-1:~$ grep -R SPARK_WORKER_OPTS *
conf/spark-env.sh:export SPARK_WORKER_OPTS=-Djava.library.path=/usr/local/lib
conf/spark-env.sh.template:# - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y")


The variable doesn't seem to get referenced anywhere in the Spark distribution. I checked a number of other options in spark-env.sh.template and they didn't seem to be referenced either. I expected to find them in various startup scripts.

I can probably "fix" my problem by hacking the lower-level startup scripts, but 
first I'd like to inquire about what's going on here.  How and where are these 
variables actually used?
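
A rough way to double-check what a running executor JVM actually received (purely a troubleshooting sketch; it just greps the live process list on a worker node for the launch command shown above):

# on a worker node: print the -D/-X flags of a running executor JVM,
# to confirm whether -Djava.library.path made it onto the command line
ps -ef | grep '[C]oarseGrainedExecutorBackend' | tr -s ' ' '\n' | grep -E '^-D|^-X'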