Hi Minudika,

To add to what Oscar said, this blog post [1] should clarify it for you.
Also, this should have been posted on the user list, not the dev list.

[1]
https://databricks.com/blog/2016/07/14/a-tale-of-three-apache-spark-apis-rdds-dataframes-and-datasets.html
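
For a quick Java sketch of what Oscar described (the Person bean class and the
people.json path below are just hypothetical placeholders, not anything from the
docs), the conversion looks roughly like this. In Java a DataFrame is already a
Dataset<Row>, and you get a typed Dataset via as() plus a bean encoder:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import java.io.Serializable;

public class DataFrameToDataset {

    // Hypothetical bean used only for illustration; your own schema will differ.
    // It needs getters/setters so Encoders.bean can map the columns.
    public static class Person implements Serializable {
        private String name;
        private int age;

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public int getAge() { return age; }
        public void setAge(int age) { this.age = age; }
    }

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("DataFrameToDataset")
                .master("local[*]")
                .getOrCreate();

        // In Java, "DataFrame" is just an alias for Dataset<Row>.
        Dataset<Row> df = spark.read().json("people.json"); // hypothetical input

        // Convert the untyped Dataset<Row> into a typed Dataset<Person>.
        Dataset<Person> people = df.as(Encoders.bean(Person.class));

        people.show();
        spark.stop();
    }
}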

Cheers,
Sachith

On Thu, Aug 18, 2016 at 8:43 PM, Oscar Batori <oscarbat...@gmail.com> wrote:

> From the docs
> <https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.package@DataFrame=org.apache.spark.sql.Dataset[org.apache.spark.sql.Row]>,
> DataFrame is just Dataset[Row]. There are various converters for subtypes of
> Product if you want, using "as[T]", where T <: Product, or where there is an
> implicit encoder in scope, I believe.
>
> Also, this is probably a user list question.
>
>
> On Thu, Aug 18, 2016 at 10:59 AM Minudika Malshan <minudika...@gmail.com>
> wrote:
>
>> Hi all,
>>
>> Most Spark ML algorithms require a dataset to train the model.
>> I would like to know how to convert a spark *data-frame* to a *dataset*
>> using Java.
>> Your support is much appreciated.
>>
>> Thank you!
>> Minudika
>>
>


-- 
Sachith Withana
Software Engineer; WSO2 Inc.; http://wso2.com
E-mail: sachith AT wso2.com
M: +94715518127
LinkedIn: https://lk.linkedin.com/in/sachithwithana
