[Spark SS] SPARK-23541 Backward Compatibility on 2.3.2

2019-09-26 Thread Ahn, Daniel
Has this fix (https://issues.apache.org/jira/browse/SPARK-23541) been tested
for backward compatibility with 2.3.2? The fix version in Jira is 2.4.0, but
from a quick review of the pull request
(https://github.com/apache/spark/pull/20698), it looks like all of the code
changes are limited to spark-sql-kafka-0-10.
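
For reference, the feature that PR appears to add is the Kafka source
"minPartitions" option (an assumption from the PR; the broker address and
topic name below are placeholders). A minimal Structured Streaming sketch:

    import org.apache.spark.sql.SparkSession

    object MinPartitionsSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("minpartitions-sketch")
          .getOrCreate()

        // Kafka source; "minPartitions" is the option SPARK-23541 / PR 20698
        // appears to introduce in the 2.4.0 connector (assumption).
        val stream = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "events")
          .option("minPartitions", "32")
          .load()

        stream.writeStream
          .format("console")
          .start()
          .awaitTermination()
      }
    }

One way to probe the compatibility question would be to pin the 2.4.0
spark-sql-kafka-0-10 artifact in a 2.3.2 application and run a job like the
above; since that is exactly what is being asked here, treat it as an
experiment rather than a guarantee.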



Re: backward compatibility

2017-01-10 Thread Marco Mistroni
I think the old APIs are still supported, but you are advised to migrate. I
migrated a few apps from 1.6 to 2.0 with minimal changes.
Hope this helps.
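
For readers following along, a minimal sketch of what such a migration
typically involves (the app name and file path are hypothetical); the old
entry points still compile in 2.0 but are deprecated:

    import org.apache.spark.sql.SparkSession

    object MigrationSketch {
      def main(args: Array[String]): Unit = {
        // Spark 1.x style (still compiles in 2.0, but deprecated):
        //   val sc = new SparkContext(new SparkConf().setAppName("legacy-app"))
        //   val sqlContext = new SQLContext(sc)
        //   val df = sqlContext.read.json("people.json")

        // Spark 2.0 style: SparkSession is the single entry point.
        val spark = SparkSession.builder()
          .appName("migrated-app")
          .getOrCreate()

        // The SparkContext is still reachable for lower-level RDD work.
        val sc = spark.sparkContext
        println(sc.version)

        val df = spark.read.json("people.json") // same DataFrameReader API as 1.x
        df.show()
      }
    }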



backward compatibility

2017-01-10 Thread pradeepbill
hi there, I am using spark 1.4 code and now we plan to move to spark 2.0. When
I check the documentation below, only a few features are listed as backward
compatible. Does that mean I have to change most of my code? Please advise.

One of the largest changes in Spark 2.0 is the new updated APIs:

- Unifying DataFrame and Dataset: In Scala and Java, DataFrame and Dataset
  have been unified, i.e. DataFrame is just a type alias for Dataset of Row
  (see the sketch after this list). In Python and R, given the lack of type
  safety, DataFrame is the main programming interface.
- *SparkSession: new entry point that replaces the old SQLContext and
  HiveContext for DataFrame and Dataset APIs. SQLContext and HiveContext are
  kept for backward compatibility.*
- A new, streamlined configuration API for SparkSession
- Simpler, more performant accumulator API
- A new, improved Aggregator API for typed aggregation in Datasets
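
To illustrate the first item above, a small sketch (the data is hypothetical)
showing that in Scala a DataFrame really is just a Dataset of Row:

    import org.apache.spark.sql.{DataFrame, Dataset, Row, SparkSession}

    // Top-level case class so Spark can derive an Encoder for it.
    case class Person(name: String, age: Int)

    object UnificationSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("unification-sketch")
          .master("local[*]") // local mode, for illustration only
          .getOrCreate()
        import spark.implicits._

        val df: DataFrame = Seq(("alice", 30), ("bob", 25)).toDF("name", "age")

        // DataFrame is a type alias for Dataset[Row]: no conversion needed.
        val ds: Dataset[Row] = df
        ds.printSchema() // identical schema, same underlying plan

        // The same data viewed as a typed Dataset again.
        val typed: Dataset[Person] = df.as[Person]
        typed.filter(_.age > 26).show()
      }
    }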


thanks
Pradeep






Backward compatibility with org.apache.spark.sql.api.java.Row class

2015-05-13 Thread Emerson Castañeda
Hello everyone

I'm adopting the latest version of Apache Spark in my project, moving from
*1.2.x* to *1.3.x*, and the only significant incompatibility so far is
related to the *Row* class.

Any idea what happened to the *org.apache.spark.sql.api.java.Row* class in
Apache Spark 1.3?


The migration guide in the Spark SQL and DataFrames documentation (Spark
1.3.0) does not mention anything about it:
https://spark.apache.org/docs/1.3.1/sql-programming-guide.html#upgrading-from-spark-sql-10-12-to-13


Looking around, there is a new *Row* interface in the *org.apache.spark.sql*
package, but I'm not 100% sure whether this is related to my question, or how
to proceed with the upgrade.

Note that this new *Row* interface was not available in the previous Spark
versions (*1.0.0*, *1.1.0*, *1.2.0*).

Thanks in advance

Emerson


Re: Backward compatibility with org.apache.spark.sql.api.java.Row class

2015-05-13 Thread Michael Armbrust
Sorry for missing that in the upgrade guide. As part of unifying the Java and
Scala interfaces, we got rid of the Java-specific Row. You are correct in
assuming that you should now use Row from org.apache.spark.sql in both Scala
and Java.
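
A minimal sketch of the replacement (the values are hypothetical): the single
org.apache.spark.sql.Row is used from both Scala and Java, with the same
positional accessors the Java-specific class had:

    import org.apache.spark.sql.Row

    object RowSketch {
      def main(args: Array[String]): Unit = {
        // org.apache.spark.sql.Row replaces org.apache.spark.sql.api.java.Row
        // for Java and Scala callers alike.
        val row: Row = Row("alice", 30)

        val name = row.getString(0) // positional accessors, as before
        val age  = row.getInt(1)
        println(s"$name is $age")
      }
    }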
