to you
*Romi Kuntsman*, *Big Data Engineer*
http://www.totango.com
On Thu, Jan 15, 2015 at 1:52 PM, preeze etan...@gmail.com wrote:
From the official spark documentation
(http://spark.apache.org/docs/1.2.0/running-on-yarn.html):
In yarn-cluster mode, the Spark driver runs inside an application
by retrying).
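For context, a yarn-cluster submission where YARN retries a failed driver might look like the sketch below. This is a command-line sketch only: the application class and jar are placeholders, and `spark.yarn.maxAppAttempts` is the setting that bounds how many times YARN re-runs the ApplicationMaster (and with it the driver).

```
# Sketch: submit in yarn-cluster mode; on failure, YARN restarts the
# ApplicationMaster (which hosts the driver) up to the configured attempts.
spark-submit \
  --master yarn-cluster \
  --conf spark.yarn.maxAppAttempts=2 \
  --class com.example.MyApp \
  my-app.jar
# com.example.MyApp and my-app.jar are placeholders for your application.
```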
*Romi Kuntsman*, *Big Data Engineer*
http://www.totango.com
On Wed, Apr 1, 2015 at 12:58 PM, Gil Vernik g...@il.ibm.com wrote:
I actually saw the same issue, where we analyzed a container with a few
hundred GBs of zip files - one was corrupted and Spark exited with an
Exception
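A defensive pattern for that case (a sketch only - `sc` is assumed to be a live SparkContext, and the path and record handling are illustrative) is to wrap per-file decompression in `Try`, so a single corrupt archive is skipped instead of failing the whole job:

```scala
import java.util.zip.ZipInputStream
import scala.util.{Try, Success}

// binaryFiles gives (path, PortableDataStream) pairs, one per file
val contents = sc.binaryFiles("hdfs:///data/archives/*.zip").flatMap {
  case (path, stream) =>
    Try {
      val zis = new ZipInputStream(stream.open())
      // ... read zip entries and parse records here ...
      zis.close()
      Seq(path) // placeholder for the real parsed records
    } match {
      case Success(records) => records
      case _                => Seq.empty // corrupt file: skip instead of crash
    }
}
```

The trade-off is that corrupt files are silently dropped, so counting or logging the failures is usually worth adding.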
be in Spark 2.0)
*Romi Kuntsman*, *Big Data Engineer*
http://www.totango.com
On Fri, Nov 6, 2015 at 2:53 PM, Jean-Baptiste Onofré <j...@nanthrax.net>
wrote:
> Hi Sean,
>
> Happy to see this discussion.
>
> I'm working on a PoC to run Camel on Spark Streaming. The purpose is t
multiple
levels of aggregations, iterative machine learning algorithms, etc.
Sending the whole "workplan" to the Spark framework would be, as I see it,
the next step of its evolution, much like stored procedures send logic with
many SQL queries to the database.
Was it clearer this time?
If they have a problem managing memory, shouldn't there be an OOM?
Why does AppClient throw an NPE?
*Romi Kuntsman*, *Big Data Engineer*
http://www.totango.com
On Mon, Nov 9, 2015 at 4:59 PM, Akhil Das <ak...@sigmoidanalytics.com>
wrote:
> Is that all you have in the executo
timeout etc)
*Romi Kuntsman*, *Big Data Engineer*
http://www.totango.com
On Mon, Nov 9, 2015 at 6:00 PM, Akhil Das <ak...@sigmoidanalytics.com>
wrote:
> Did you find anything regarding the OOM in the executor logs?
>
> Thanks
> Best Regards
>
> On Mon, Nov 9, 2015 at 8
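On a real cluster, the executor logs would be checked via the YARN container logs (e.g. `yarn logs -applicationId <appId>`). The snippet below is illustrative only - it fabricates a tiny executor log under `/tmp` and scans it the same way you would scan the real ones for OOMs:

```shell
# Fabricate a sample executor stderr log (content is made up for the demo)
mkdir -p /tmp/executor-logs-demo
printf 'INFO Executor: running task 3.0\nERROR Executor: java.lang.OutOfMemoryError: Java heap space\n' \
  > /tmp/executor-logs-demo/stderr

# Count log lines mentioning an OOM, case-insensitively
grep -ci "OutOfMemoryError" /tmp/executor-logs-demo/stderr   # prints 1
```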
different, and building the framework around that will
benefit each of those flows (like events instead of microbatches in
streaming, worker-side intermediate processing in batch, etc).
So where is the best way to have a full Spark 2.0 discussion?
*Romi Kuntsman*, *Big Data Engineer*
http://www.totango.com
103)
at org.apache.spark.SparkContext.getSchedulingMode(SparkContext.scala:1501)
at org.apache.spark.SparkContext.postEnvironmentUpdate(SparkContext.scala:2005)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:543)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:61)
Thanks!
*Romi Kuntsman*, *Big Data Engineer*
http://www.totango.com
https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark
*Romi Kuntsman*, *Big Data Engineer*
http://www.totango.com
On Fri, Oct 30, 2015 at 1:25 PM, Saurabh Shah <shahsaurabh0...@gmail.com>
wrote:
> Hello, my name is Saurabh Shah and I am a second year undergraduate
SparkContext is available on the driver, not on the executors.
To read from Cassandra, you can use something like this:
https://github.com/datastax/spark-cassandra-connector/blob/master/doc/2_loading.md
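Following that doc, a minimal read looks like the sketch below - assuming the spark-cassandra-connector is on the classpath, `spark.cassandra.connection.host` is set on the SparkContext, and the keyspace/table names are made up:

```scala
// The implicit that adds cassandraTable() to SparkContext
import com.datastax.spark.connector._

// The call is made on the driver; the actual reads run on the executors
val rows  = sc.cassandraTable("my_keyspace", "my_table")
val count = rows.count()
```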
*Romi Kuntsman*, *Big Data Engineer*
http://www.totango.com
On Mon, Sep 21, 2015 at 2:27 PM
again.
*Romi Kuntsman*, *Big Data Engineer*
http://www.totango.com
On Thu, Sep 17, 2015 at 10:07 AM, Gil Vernik <g...@il.ibm.com> wrote:
> Hi,
>
> I have the following case, which I am not sure how to resolve.
>
> My code uses HadoopRDD and creates various RDDs on top of i
Hi Michael,
What about the memory leak bug?
https://issues.apache.org/jira/browse/SPARK-11293
Even after the memory rewrite in 1.6.0, it still happens in some cases.
Will it be fixed for 1.6.1?
Thanks,
*Romi Kuntsman*, *Big Data Engineer*
http://www.totango.com
On Mon, Feb 1, 2016 at 9:59 PM
Sounds fair. Is it to avoid cluttering Maven Central with too many
intermediate versions?
What do I need to add in my pom.xml section to make it work?
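If the RC is staged on repository.apache.org, something like this `<repositories>` entry would be the sketch - note the `orgapachespark-XXXX` staging repository id is a placeholder that changes with every release candidate:

```xml
<!-- Assumption: the RC artifacts sit in an Apache staging repository;
     replace orgapachespark-XXXX with the id announced for that RC. -->
<repositories>
  <repository>
    <id>spark-rc-staging</id>
    <url>https://repository.apache.org/content/repositories/orgapachespark-XXXX/</url>
  </repository>
</repositories>
```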
*Romi Kuntsman*, *Big Data Engineer*
http://www.totango.com
On Tue, Feb 23, 2016 at 9:34 AM, Reynold Xin <r...@databricks.com> wrote:
Is it possible to make RC versions available via Maven? (many projects do
that)
That will make integration much easier, so many more people can test the
version before the final release.
Thanks!
*Romi Kuntsman*, *Big Data Engineer*
http://www.totango.com
On Tue, Feb 23, 2016 at 8:07 AM, Luciano
+1 for Java 8 only
I think it will make it easier to build a unified API for Java and Scala,
instead of Java wrappers over the Scala API.
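To illustrate the style gap (plain JDK code, not the Spark API - just a sketch of why Java 8 narrows the distance to Scala):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class LambdaDemo {
    // Pre-Java-8 style: the anonymous inner classes a Java API had to require
    static final Function<Integer, Integer> DOUBLE_OLD =
        new Function<Integer, Integer>() {
            @Override
            public Integer apply(Integer x) { return x * 2; }
        };

    // Java 8 style: the same function as a lambda, close to a Scala one-liner
    static final Function<Integer, Integer> DOUBLE_NEW = x -> x * 2;

    static List<Integer> doubleAll(List<Integer> xs) {
        return xs.stream().map(DOUBLE_NEW).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(doubleAll(Arrays.asList(1, 2, 3))); // prints [2, 4, 6]
    }
}
```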
On Mar 24, 2016 11:46 AM, "Stephen Boesch" wrote:
> +1 for java8 only +1 for 2.11+ only .At this point scala libraries
> supporting
You can also claim that there's a whole section of "Migrating from 1.6 to
2.0" missing there:
https://spark.apache.org/docs/2.0.0-preview/sql-programming-guide.html#migration-guide
*Romi Kuntsman*, *Big Data Engineer*
http://www.totango.com
On Tue, Jul 5, 2016 at 12:24 PM, nihed mb