Hi,

You mentioned:

I already solved it using DF and spark sql ...

Are you referring to this code, which is a classic analytics query:

SELECT DATE, balance,
       SUM(balance) OVER (ORDER BY DATE ROWS BETWEEN UNBOUNDED PRECEDING
                          AND CURRENT ROW) AS daily_balance
FROM   table


So how did you solve it using DF in the first place?
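
For reference, a rough DataFrame equivalent of that window query might look
like this (an untested sketch; df, DATE and balance are assumed names):

  import org.apache.spark.sql.expressions.Window
  import org.apache.spark.sql.functions._

  // running sum of balance from the first row up to the current row,
  // matching ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
  val w = Window.orderBy("DATE").rowsBetween(Long.MinValue, 0)
  df.select(col("DATE"), col("balance"),
            sum("balance").over(w).as("daily_balance"))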


HTH


Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 1 August 2016 at 07:04, Sri <kali.tumm...@gmail.com> wrote:

> Hi ,
>
> Just wondering how Spark SQL works behind the scenes: does it not convert
> SQL to some Scala RDD code, or to Scala?
>
> How would one write the SQL below in Scala or as a Scala RDD?
>
> SELECT DATE, balance,
>        SUM(balance) OVER (ORDER BY DATE ROWS BETWEEN UNBOUNDED PRECEDING
>                           AND CURRENT ROW) AS daily_balance
> FROM   table
>
>
> Thanks
> Sri
> Sent from my iPhone
>
> On 31 Jul 2016, at 13:21, Jacek Laskowski <ja...@japila.pl> wrote:
>
> Hi,
>
> Impossible - see
>
> http://www.scala-lang.org/api/current/index.html#scala.collection.Seq@sliding(size:Int,step:Int):Iterator[Repr]
>
> I tried to show you why you ended up with "non-empty iterator" after
> println. You should really start with
> http://www.scala-lang.org/documentation/
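>
> For example, in a plain Scala REPL (a minimal illustration):
>
> val it = Seq(1.0, 2.0, 3.0).sliding(2, 1)   // Iterator[Seq[Double]]
> println(it)          // prints "non-empty iterator" -- an Iterator's toString
> println(it.toList)   // List(List(1.0, 2.0), List(2.0, 3.0))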
>
> Pozdrawiam,
> Jacek Laskowski
> ----
> https://medium.com/@jaceklaskowski/
> Mastering Apache Spark 2.0 http://bit.ly/mastering-apache-spark
> Follow me at https://twitter.com/jaceklaskowski
>
>
> On Sun, Jul 31, 2016 at 8:49 PM, sri hari kali charan Tummala
> <kali.tumm...@gmail.com> wrote:
>
> Tuple
>
>
> [Lscala.Tuple2;@65e4cb84
>
>
> On Sun, Jul 31, 2016 at 1:00 AM, Jacek Laskowski <ja...@japila.pl> wrote:
>
>
> Hi,
>
>
> What's the result type of sliding(2,1)?
>
> Pozdrawiam,
> Jacek Laskowski
> ----
> https://medium.com/@jaceklaskowski/
> Mastering Apache Spark 2.0 http://bit.ly/mastering-apache-spark
> Follow me at https://twitter.com/jaceklaskowski
>
>
>
> On Sun, Jul 31, 2016 at 9:23 AM, sri hari kali charan Tummala
> <kali.tumm...@gmail.com> wrote:
>
> Tried this, no luck. What is the "non-empty iterator" here?
>
>
> Output:
>
> (-987,non-empty iterator)
> (-987,non-empty iterator)
> (-987,non-empty iterator)
> (-987,non-empty iterator)
> (-987,non-empty iterator)
>
>
>
> sc.textFile(file).keyBy(x => x.split("\\~")(0))
>   .map(x => x._2.split("\\~"))
>   .map(x => (x(0), x(2)))
>   .map { case (key, value) =>
>     (key, value.toArray.toSeq.sliding(2, 1).map(x => x.sum / x.size)) }
>   .foreach(println)
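>
> For reference, the "non-empty iterator" is just the Iterator's toString;
> forcing it with .toList makes the contents printable. A minimal, untested
> tweak of the snippet above (note it still averages the character codes of
> the field, which is probably not the intent):
>
> sc.textFile(file).keyBy(x => x.split("\\~")(0))
>   .map(x => x._2.split("\\~"))
>   .map(x => (x(0), x(2)))
>   .map { case (key, value) =>
>     // .toList materializes the Iterator so println shows the values
>     (key, value.toArray.toSeq.sliding(2, 1).map(x => x.sum / x.size).toList) }
>   .foreach(println)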
>
>
>
> On Sun, Jul 31, 2016 at 12:03 AM, sri hari kali charan Tummala
> <kali.tumm...@gmail.com> wrote:
>
>
> Hi All,
>
>
> I managed to write it using the sliding function, but can I get the key as
> well in my output?
>
>
> sc.textFile(file).keyBy(x => x.split("\\~")(0))
>   .map(x => x._2.split("\\~"))
>   .map(x => x(2).toDouble)
>   .toArray().sliding(2, 1)
>   .map(x => (x, x.size))
>   .foreach(println)
>
>
>
> At the moment my output is:
>
> 75.0
> -25.0
> 50.0
> -50.0
> -100.0
>
>
> I want the key too: how do I get the moving-average output based on key?
>
>
>
> 987,75.0
> 987,-25
> 987,50.0
>
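> One way to keep the key is to group by key and slide within each group;
> a rough, untested sketch (it assumes each key's values fit in memory and
> leaves out ordering by date):
>
> sc.textFile(file)
>   .map(line => line.split("\\~"))
>   .map(cols => (cols(0), cols(2).toDouble))   // (key, balance)
>   .groupByKey()                               // gather all balances per key
>   .flatMap { case (key, values) =>
>     values.toSeq.sliding(2, 1)                // windows of size 2, step 1
>       .map(w => (key, w.sum / w.size))        // (key, moving average)
>   }
>   .foreach(println)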
>
> Thanks
>
> Sri
>
>
> On Sat, Jul 30, 2016 at 11:40 AM, sri hari kali charan Tummala
> <kali.tumm...@gmail.com> wrote:
>
>
> For knowledge: just wondering how to write it up in Scala or as a Spark RDD.
>
>
> Thanks
>
> Sri
>
>
> On Sat, Jul 30, 2016 at 11:24 AM, Jacek Laskowski <ja...@japila.pl> wrote:
>
>
> Why?
>
>
> Pozdrawiam,
> Jacek Laskowski
> ----
> https://medium.com/@jaceklaskowski/
> Mastering Apache Spark 2.0 http://bit.ly/mastering-apache-spark
> Follow me at https://twitter.com/jaceklaskowski
>
>
>
> On Sat, Jul 30, 2016 at 4:42 AM, kali.tumm...@gmail.com
> <kali.tumm...@gmail.com> wrote:
>
> Hi All,
>
> I managed to write the business requirement in Spark SQL and Hive. I am
> still learning Scala; how would the SQL below be written using Spark RDDs,
> not Spark DataFrames?
>
>
> SELECT DATE, balance,
>        SUM(balance) OVER (ORDER BY DATE ROWS BETWEEN UNBOUNDED PRECEDING
>                           AND CURRENT ROW) AS daily_balance
> FROM   table
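>
> A running sum over an RDD could be sketched like this (untested; it assumes
> the series is small enough to collect to the driver for a global ordering,
> and the sample rows are made up for illustration):
>
> // hypothetical sample standing in for the real table's (DATE, balance) rows
> val rows = sc.parallelize(Seq(
>   ("2016-01-01", 75.0), ("2016-01-02", -25.0), ("2016-01-03", 50.0)))
>
> rows.sortByKey()                    // ORDER BY DATE
>   .collect()                        // assumes the series fits on the driver
>   .scanLeft(("", 0.0, 0.0)) { case ((_, _, acc), (d, b)) => (d, b, acc + b) }
>   .drop(1)                          // drop the scanLeft seed
>   .foreach { case (d, b, cum) => println(s"$d,$b,$cum") }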
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/sql-to-spark-scala-rdd-tp27433.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
>
>
> ---------------------------------------------------------------------
>
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
> --
> Thanks & Regards
> Sri Tummala