What do you mean by compile target?
I've found that Apache Zeppelin handles multiple Spark versions using a
profile for each Spark version:
https://github.com/apache/zeppelin/blob/master/spark/pom.xml#L185
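For reference, the pattern looks roughly like this (a minimal sketch; the
profile ids and version values are illustrative, not copied from Zeppelin's
pom):

    <profiles>
      <profile>
        <id>spark-2.4</id>
        <properties>
          <spark.version>2.4.7</spark.version>
          <scala.binary.version>2.11</scala.binary.version>
        </properties>
      </profile>
      <profile>
        <id>spark-3.0</id>
        <properties>
          <spark.version>3.0.1</spark.version>
          <scala.binary.version>2.12</scala.binary.version>
        </properties>
      </profile>
    </profiles>

You would then build against a given Spark version from a single branch,
e.g. mvn package -Pspark-3.0.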
Do you think this method is better?
Netanel Malka,
Big Data Consultant
I am not sure it should be a branch. It is common to handle this as a
compile target rather than a separate branch; a separate branch might make
releases harder. There are a few projects that handle multiple Spark target
versions like this.
On Wed, Nov 11, 2020 at 12:56 PM
Hi Netanel,
That links to this git submodule:
https://github.com/jiayuasu/jts/blob/1.16.x/modules/core/pom.xml#L6
I can easily fix this by changing the version number here to 1.16.2,
dropping the "SNAPSHOT" suffix:
https://github.com/jiayuasu/jts/blob/1.16.x/modules/core/pom.xml#L6
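In other words, it would be a one-line edit to that version element (a
sketch of the proposed change; the current SNAPSHOT value is assumed from
the discussion above):

    <!-- was: <version>1.16.2-SNAPSHOT</version> -->
    <version>1.16.2</version>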
Will this solve the problem?
OK, I agree. I am going to create a branch for spark-2.3/2.4. Regarding the
compilers used in each branch:
- For Sedona on Spark 3.0, I will compile using Scala 2.12.
- For Sedona on Spark 2.4, I will compile using Scala 2.11.
- For the Java code in both branches, I will compile using Java 1.8.
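Concretely, each branch's pom could pin these versions via properties (an
illustrative sketch only; the exact patch versions are examples, not final):

    <!-- Spark 3.0 branch -->
    <properties>
      <spark.version>3.0.1</spark.version>
      <scala.version>2.12.10</scala.version>
      <scala.binary.version>2.12</scala.binary.version>
      <maven.compiler.source>1.8</maven.compiler.source>
      <maven.compiler.target>1.8</maven.compiler.target>
    </properties>

    <!-- Spark 2.4 branch: same layout, with spark.version 2.4.x,
         scala.version 2.11.12, and scala.binary.version 2.11 -->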
Hi,
I also think that we need to support 2.4.
I saw that even Apache Spark itself still releases 2.4.x artifacts (2.4.7 on
Sep 12, 2020).
I also asked about it on us...@spark.apache.org, and Sean Owen answered the
question:
"I don't think there's an official EOL for Spark 2.4.x but would expect