Hi,
Hive has a metastore, and HiveServer2 listens for SQL requests; with the
help of the metastore, the query is executed and the result is passed back.
HiveServer2 is essentially a customised Thrift service, so Hive acts as a
service. Via a programming language, we can use Hive as a
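As a rough illustration of using Hive as a service, here is a minimal Scala sketch that talks to HiveServer2 over its Thrift-based JDBC endpoint. The host, port, database, and user are assumed values, and the Hive JDBC driver (org.apache.hive.jdbc.HiveDriver) would need to be on the classpath for the query itself to run:

```scala
import java.sql.{DriverManager, ResultSet}

// Build a HiveServer2 JDBC URL; host/port/db here are assumed values.
def hiveUrl(host: String, port: Int, db: String): String =
  s"jdbc:hive2://$host:$port/$db"

// Run a query against HiveServer2 and print the first column of each row.
// Requires the Hive JDBC driver on the classpath and a running HiveServer2.
def queryHive(url: String, user: String, sql: String): Unit = {
  val conn = DriverManager.getConnection(url, user, "")
  try {
    val rs: ResultSet = conn.createStatement().executeQuery(sql)
    while (rs.next()) println(rs.getString(1))
  } finally conn.close()
}

val url = hiveUrl("localhost", 10000, "default")
// queryHive(url, "hive", "SELECT name FROM animals")  // needs a live HiveServer2
```

The point is only that HiveServer2 exposes SQL as a network service, so any language with a JDBC/Thrift client can submit queries to it.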
Hi,
I have read in many materials (including the book "Spark: The Definitive
Guide") that Spark is a compiler.
In my understanding, our program is used up to the point of DAG generation.
This portion can be written in any language: Java, Scala, R, or Python.
Post that (executing the DAG), the
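To make the "program is used only up to DAG generation" idea concrete, here is a toy Scala sketch (NOT Spark's actual internals): transformations merely record a lineage, and nothing executes until an action is called. The ToyRDD name and its methods are invented for illustration:

```scala
// Toy sketch: transformations only record a chain of work (the "DAG");
// data moves only when an action such as collect() is invoked.
case class ToyRDD(compute: () => Seq[Int]) {
  def map(f: Int => Int): ToyRDD        = ToyRDD(() => compute().map(f))     // lazy
  def filter(p: Int => Boolean): ToyRDD = ToyRDD(() => compute().filter(p))  // lazy
  def collect(): Seq[Int]               = compute()                          // action
}

val lineage = ToyRDD(() => Seq(1, 2, 3, 4)).map(_ * 2).filter(_ > 4)
// The lineage exists, but no data has been processed yet.
val result = lineage.collect()  // Seq(6, 8)
```

Because the recorded plan, not the frontend code, is what gets executed, the language used to build it (Java, Scala, R, Python) becomes irrelevant once the DAG is handed to the engine.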
That's a very good idea, thanks for sharing German!
On Tue, Mar 16, 2021 at 7:08 PM German Schiavon wrote:
> Hi all,
>
> I guess you could do something like this too:
>
> [image: Captura de pantalla 2021-03-16 a las 14.35.46.png]
>
> On Tue, 16 Mar 2021 at 13:31,
> scala> def positions(length: Long, numSlices: Int): Iterator[(Int, Int)] = {
> | (0 until numSlices).iterator.map { i =>
> | val start = ((i * length) / numSlices).toInt
> | val end = (((i + 1) * length) / numSlices).toInt
> | (start, end)
> | }
> | }
> positions: (length: Long, numSlices: Int)Iterator[(Int, Int)]
>
> scala> positions(5, 12).foreach(println)
> (0,0)
> (0,0)
> (0,1)
> (1,1)
> (1,2)
> (2,2)
> (2,2)
> (2,3)
> (3,3)
> (3,4)
> (4,4)
> (4,5)
Hi,
I have a question with respect to default partitioning in RDDs.

case class Animal(id: Int, name: String)

val myRDD = session.sparkContext.parallelize(Array(
  Animal(1, "Lion"),
  Animal(2, "Elephant"),
  Animal(3, "Jaguar"),
  Animal(4, "Tiger"),
  Animal(5, "Chetah")
))

Console println