Congrats! That's a really impressive and useful addition to Spark. I just
recently discovered a similar feature in pandas and really enjoyed using it.
Regards, Heiko
On 21.03.2014 at 02:11, Reynold Xin r...@databricks.com wrote:
Hi All,
I'm excited to announce a new module in Spark
Hi
Where can I find the equivalent of the graphx example
(http://spark.apache.org/docs/0.9.0/graphx-programming-guide.html#examples) in
Java? For example, how does the following translate to Java?
val users: RDD[(VertexId, (String, String))] =
  sc.parallelize(Array((3L, ("rxin", "student")),
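As of 0.9, GraphX itself only exposes a Scala API, so there is no drop-in Java equivalent of the full example. The vertex-RDD construction, though, can be sketched through Spark's Java API, where a Scala tuple becomes scala.Tuple2 and VertexId is just a Long on the Scala side. A minimal, untested sketch — class and variable names here are illustrative, not from the guide:

```java
import java.util.Arrays;
import java.util.List;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class GraphXUsersSketch {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local", "graphx-users");

        // Scala's (3L, ("rxin", "student")) becomes a nested Tuple2;
        // VertexId is a Scala type alias for Long.
        List<Tuple2<Long, Tuple2<String, String>>> vertexData = Arrays.asList(
            new Tuple2<Long, Tuple2<String, String>>(
                3L, new Tuple2<String, String>("rxin", "student")));

        JavaRDD<Tuple2<Long, Tuple2<String, String>>> users =
            sc.parallelize(vertexData);

        System.out.println(users.count());
        sc.stop();
    }
}
```

Note that handing this to GraphX's Graph constructor would still mean dropping down to the Scala API (e.g. via JavaRDD.rdd()), since GraphX has no Java wrapper in 0.9.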
Congrats Michael and all for getting this so far. Spark SQL and Catalyst will
make it much easier to use structured data in Spark, and open the door for some
very cool extensions later.
Matei
On Mar 20, 2014, at 11:15 PM, Heiko Braun ike.br...@googlemail.com wrote:
Congrats! That's a really
Awesome news!
It would be great if there were any examples or use cases to look at.
We are looking into Shark / the Ooyala job server to provide in-memory SQL
analytics and model serving/scoring features for dashboard apps...
Does this feature have different use cases than Shark, or is it cleaner than Hive?
It would be great if there were any examples or use cases to look at.
There are examples in the Spark documentation. Patrick posted an updated
copy here so people can see them before 1.0 is released:
http://people.apache.org/~pwendell/catalyst-docs/sql-programming-guide.html
Does this feature
Hey Everyone,
Here is a pretty major (but source-compatible) change we are considering
making to the RDD API for 1.0. Java and Python APIs would remain the same,
but users of Scala would likely need fewer casts. This would be
especially true for libraries whose functions take RDDs as
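The quoted proposal is truncated, but it reads like the variance discussion from around that time (making RDD's type parameter covariant) — treat that as my assumption. This toy Java sketch, using a hypothetical Box class as a stand-in for RDD, shows why an invariant type parameter forces casts in library signatures and how a covariant-style signature avoids them:

```java
// Toy stand-in for RDD; names are illustrative, not Spark API.
class Box<T> {
    private final T value;
    Box(T value) { this.value = value; }
    T get() { return value; }
}

public class VarianceSketch {
    // Invariant signature: accepts exactly Box<Number>, so a Box<Integer>
    // cannot be passed (it will not even compile without an unsafe cast).
    static double readInvariant(Box<Number> b) { return b.get().doubleValue(); }

    // Wildcard ("covariant-style") signature: any Box of a Number subtype works.
    static double readCovariant(Box<? extends Number> b) { return b.get().doubleValue(); }

    public static void main(String[] args) {
        Box<Integer> ints = new Box<>(42);
        // readInvariant(ints);   // compile error: Box<Integer> is not a Box<Number>
        System.out.println(readCovariant(ints)); // prints 42.0
    }
}
```

In Scala, declaring the parameter covariant at the definition site (RDD[+T]) would give this flexibility everywhere without per-method wildcards, which is presumably why the change could stay source compatible.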
That would be awesome. I support this!
On Fri, Mar 21, 2014 at 7:28 PM, Michael Armbrust
mich...@databricks.com wrote:
Hey Everyone,
Here is a pretty major (but source compatible) change we are considering
making to the RDD API for 1.0. Java and Python APIs would remain the same,
but users