We have a complex application that has been running in production for a
couple of months and heavily uses Spark in Scala.
Just to give you some insight into the complexity - our source data is
not that huge (only about 500,000 complex elements), but we have more
than a billion transformations and intermediat
(ForkJoinWorkerThread.java:107)
kind regards
reinis
On 25.05.2015 23:09, Reinis Vicups wrote:
Great hints, you guys!
Yes, spark-shell worked fine with Mesos as master. I haven't tried to
execute multiple RDD actions in a row, though (I did a couple of
successful counts on the HBase tables I am working wi
Programming Scala, 2nd Edition
<http://shop.oreilly.com/product/0636920033073.do> (O'Reilly)
Typesafe <http://typesafe.com>
@deanwampler <http://twitter.com/deanwampler>
http://polyglotprogramming.com
On Mon, May 25, 2015 at 12:06 PM, Reinis Vicups <mailto:sp...@orbit-x.de>> wrote:
well)
thanks
reinis
On 25.05.2015 17:07, Iulian Dragoș wrote:
On Mon, May 25, 2015 at 2:43 PM, Reinis Vicups <mailto:sp...@orbit-x.de>> wrote:
Hello,
I am using Spark 1.3.1-hadoop2.4 with Mesos 0.22.1 with zookeeper
and running on a cluster with 3 nodes on 64bit u
Hello,
I am using Spark 1.3.1-hadoop2.4 with Mesos 0.22.1 with ZooKeeper and
running on a cluster with 3 nodes on 64-bit Ubuntu.
My application is compiled with Spark 1.3.1 (apparently with a Mesos
0.21.0 dependency), Hadoop 2.5.1-mapr-1503, and Akka 2.3.10. Only with
this combination have I suc
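For reference, the version combination described above would look roughly like this in an sbt build definition (a sketch only; the exact artifact names, group IDs, and the MapR repository resolver are assumptions, not taken from the original build):

```scala
// build.sbt (sketch; versions as stated above, coordinates assumed)
resolvers += "mapr-releases" at "http://repository.mapr.com/maven/" // assumed repo for -mapr artifacts

libraryDependencies ++= Seq(
  "org.apache.spark"  %% "spark-core"    % "1.3.1" % "provided", // pulls in mesos 0.21.0 transitively
  "org.apache.hadoop"  % "hadoop-client" % "2.5.1-mapr-1503",
  "com.typesafe.akka" %% "akka-actor"    % "2.3.10"
)
```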
Hello,
I have two weird effects when working with spark-shell:
1. This code, executed in spark-shell, causes the exception below. At the
same time it works perfectly when submitted with spark-submit!:
import org.apache.hadoop.hbase.{HConstants, HBaseConfiguration}
import org.apache.hadoop.hbas
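The second import line is cut off in the archive, so here is a minimal sketch of the usual way to count an HBase table from spark-shell with these imports, using Spark's standard newAPIHadoopRDD API; the ZooKeeper quorum and table name are hypothetical placeholders, and this needs a running HBase cluster (in spark-shell, `sc` already exists, so the SparkContext setup would be skipped):

```scala
import org.apache.hadoop.hbase.{HConstants, HBaseConfiguration}
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.{SparkConf, SparkContext}

// Point the HBase client at the cluster (placeholder quorum).
val hbaseConf = HBaseConfiguration.create()
hbaseConf.set(HConstants.ZOOKEEPER_QUORUM, "zk1,zk2,zk3")
hbaseConf.set(TableInputFormat.INPUT_TABLE, "my_table") // hypothetical table name

val sc = new SparkContext(new SparkConf().setAppName("hbase-count"))

// Read the table as an RDD of (row key, row result) pairs.
val rdd = sc.newAPIHadoopRDD(hbaseConf,
  classOf[TableInputFormat],
  classOf[ImmutableBytesWritable],
  classOf[Result])

println(rdd.count()) // a simple action, as in the counts mentioned above
```

When this works from spark-submit but fails in spark-shell, the usual suspects are classpath/classloader differences between the two launchers.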
I am humbly bumping this since even after another week of trying I
haven't had any luck fixing this yet.
On 14.09.2014 19:21, Reinis Vicups wrote:
I did actually try Sean's suggestion just before I posted for the first
time in this thread. I got an error when doing this and thought that I
a
I did actually try Sean's suggestion just before I posted for the first
time in this thread. I got an error when doing this and thought that I
was not understanding what Sean was suggesting.
Now I re-attempted your suggestions with Spark 1.0.0-cdh5.1.0, HBase
0.98.1-cdh5.1.0, and Hadoop 2.3.0-cdh