[ 
https://issues.apache.org/jira/browse/SPARK-26296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

M. Le Bihan updated SPARK-26296:
--------------------------------
    Description: 
I am not using _Scala_ when I am programming _Spark_, but plain _Java_. I 
believe those using _PySpark_ are not using it either. 
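
To make concrete what "plain _Java_" means here, this is a minimal sketch of 
a _Spark_ job written without any _Scala_ appearing in user code (the class 
name and input path are only illustrative, not from this issue):
{code:java}
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class PlainJavaSpark {
    public static void main(String[] args) {
        // Local master and input path are illustrative only.
        SparkConf conf = new SparkConf()
                .setAppName("plain-java-word-count")
                .setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            JavaRDD<String> lines = sc.textFile("input.txt");
            // flatMap takes a plain Java lambda; no Scala types are visible.
            long words = lines
                    .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
                    .count();
            System.out.println("Word count: " + words);
        }
    }
}
{code}
Yet the Scala runtime and the Scala-built _Spark_ jars still sit underneath 
such a program, which is the point of this request.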

 

But _Spark_ has been built on _Scala_ instead of plain _Java_, and this is a 
cause of trouble, especially when upgrading the JDK. We are waiting to use 
JDK 11, and [Scala is still holding Spark back on the previous version of the 
JDK|https://docs.scala-lang.org/overviews/jdk-compatibility/overview.html].

_Big Data_ programming should not force developers to get by with _Scala_ 
when it is not the language they have chosen.

 

Having a _Spark_ without _Scala_, the same way it is possible to have a 
_Spark_ without _Hadoop_, would comfort me: a whole cause of issues would 
disappear.

Providing an optional _spark-scala_ artifact would be fine, as those who do 
not need it wouldn't download it and wouldn't be affected by it. 
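
For illustration only, consuming such a split might look like this in a Maven 
POM. These artifact ids are hypothetical: only _spark-scala_ is proposed 
above, and _spark-core-java_ is my own placeholder for a Scala-free core:
{code:xml}
<!-- Hypothetical sketch: neither artifact id exists today. -->
<!-- A Scala-free core for plain-Java users: -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core-java</artifactId>
  <version>${spark.version}</version>
</dependency>
<!-- The proposed optional Scala layer, declared only by those who want it: -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-scala</artifactId>
  <version>${spark.version}</version>
</dependency>
{code}
Java-only projects would simply omit the second dependency.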

  was:
I am not using _Scala_ when I am programming _Spark_ but plain _Java_. I 
believe those using _PySpark_ are not using it either. 

 

But _Spark_ has been built on _Scala_ instead of plain _Java_, and this is a 
cause of trouble, especially when upgrading the JDK. We are waiting to use 
JDK 11, and _Scala_ is still holding _Spark_ back on the previous version of 
the JDK.

_Big Data_ programming should not force developers to get by with _Scala_ 
when it is not the language they have chosen.

 

Having a _Spark_ without _Scala_, the same way it is possible to have a 
_Spark_ without _Hadoop_, would comfort me: a whole cause of issues would 
disappear.

Providing an optional _spark-scala_ artifact would be fine, as those who do 
not need it wouldn't download it and wouldn't be affected by it. 


> Base Spark on Java rather than on Scala, offering Scala as an option on top 
> of Spark
> -------------------------------------------------------------------------------
>
>                 Key: SPARK-26296
>                 URL: https://issues.apache.org/jira/browse/SPARK-26296
>             Project: Spark
>          Issue Type: Wish
>          Components: Spark Core
>    Affects Versions: 2.4.0
>            Reporter: M. Le Bihan
>            Priority: Minor
>
> I am not using _Scala_ when I am programming _Spark_, but plain _Java_. I 
> believe those using _PySpark_ are not using it either. 
>  
> But _Spark_ has been built on _Scala_ instead of plain _Java_, and this is 
> a cause of trouble, especially when upgrading the JDK. We are waiting to 
> use JDK 11, and [Scala is still holding Spark back on the previous version 
> of the 
> JDK|https://docs.scala-lang.org/overviews/jdk-compatibility/overview.html].
> _Big Data_ programming should not force developers to get by with _Scala_ 
> when it is not the language they have chosen.
>  
> Having a _Spark_ without _Scala_, the same way it is possible to have a 
> _Spark_ without _Hadoop_, would comfort me: a whole cause of issues would 
> disappear.
> Providing an optional _spark-scala_ artifact would be fine, as those who do 
> not need it wouldn't download it and wouldn't be affected by it. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
