Hi folks,
I often run into Spark compilation issues in IntelliJ, and they waste a lot of
my time. I googled around and found that others have hit similar issues, but
there seems to be no good solution so far. Still, I am wondering whether anyone
here has a reliable fix. The issue happens sometimes, I don't know
Hi Ted,
thanks for the update. The build with sbt is in progress on my box.
Regards
JB
On 11/03/2015 03:31 PM, Ted Yu wrote:
Interesting, the SBT builds were not all failing:
https://amplab.cs.berkeley.edu/jenkins/job/Spark-Master-SBT/
FYI
On Tue, Nov 3, 2015 at 5:58 AM, Jean-Baptiste Onofré
Hi Julio,
Can you please cite references based on the distributed implementation?
On Tue, Nov 3, 2015 at 8:52 PM, Julio Antonio Soto de Vicente <
ju...@esbet.es> wrote:
> Hi,
> It is my understanding that little research has been done yet on distributed
> computation (without access to shared
Sergio, you are not alone for sure. Check the RowSimilarity implementation
[SPARK-4823]. It has been there for 6 months. It is very likely that
contributions which don't merge into the version of Spark they were developed
against will never be merged, because Spark changes quite significantly from
version to version if
Thanks for your response. I was worried about #3, versus being able to use the
objects directly. #2 seems to be the dealbreaker for my use case, right?
Even if I am using Tachyon for caching, if an executor is lost, then
that partition is lost for the purposes of Spark?
On Tue, Nov 3, 2015 at 5:53
Alright, we'll just stick with normal caching then.
Just for future reference, how much work would it be to get it to retain
the partitions in Tachyon? This is especially helpful in a multi-tenant
situation, where many users each have their own persistent Spark contexts,
but where the notebooks
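For context, the difference being discussed can be sketched in a few lines. This is an illustrative Spark 1.x snippet, not code from the thread; it assumes an existing SparkContext `sc` on a cluster with Tachyon configured as the external block store, and the input path is a placeholder:

```scala
import org.apache.spark.storage.StorageLevel

// "Normal" caching: blocks live in executor memory (and are lost
// along with the executor that holds them).
val events = sc.textFile("hdfs:///path/to/events")  // illustrative path
val inHeap = events.persist(StorageLevel.MEMORY_ONLY)

// Off-heap caching: blocks are written to the external block store
// (Tachyon in Spark 1.x). Per the discussion above, Spark still drops
// the partitions when the owning executor is lost.
val offHeap = events.map(_.toLowerCase).persist(StorageLevel.OFF_HEAP)
offHeap.count()  // materializes the cached blocks
```

The sketch only illustrates the storage levels involved; whether `OFF_HEAP` partitions could survive executor loss is exactly the open question in the thread.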
If you are using Spark with Mesos fine grained mode, can you please respond
to this email explaining why you use it over the coarse grained mode?
Thanks.
We "used" Spark on Mesos to build an interactive data analysis platform
because an interactive session can be long and might not use Spark for
the entire session. Coarse-grained mode is very wasteful of resources
because it holds them for the entire session. Therefore,
we use fine-grained mode. Coarse-grained mode keeps JVMs around, which often
leads to OOMs; an OOM in turn kills the entire executor, causing entire
stages to be retried. In fine-grained mode, only the task fails and
subsequently gets retried, without taking out an entire stage or worse.
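For reference, the mode being debated is selected per application with the `spark.mesos.coarse` setting. A minimal sketch of the two submissions follows; the ZooKeeper address, class, and jar names are placeholders:

```shell
# Fine-grained mode (the default on Mesos in Spark 1.x): each Spark task
# runs as its own Mesos task, so resources are released as tasks finish.
spark-submit \
  --master mesos://zk://zk1:2181/mesos \
  --conf spark.mesos.coarse=false \
  --class com.example.MyApp myapp.jar

# Coarse-grained mode: long-lived executors hold their resources for the
# lifetime of the application; spark.cores.max caps the total cores held.
spark-submit \
  --master mesos://zk://zk1:2181/mesos \
  --conf spark.mesos.coarse=true \
  --conf spark.cores.max=8 \
  --class com.example.MyApp myapp.jar
```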
On Tue, Nov
Soren,
If I understand how Mesos works correctly, even fine-grained mode keeps
the JVMs around?
On Tue, Nov 3, 2015 at 4:22 PM, Soren Macbeth wrote:
> we use fine-grained mode. coarse-grained mode keeps JVMs around which
> often leads to OOMs, which in turn kill the
Please vote on releasing the following candidate as Apache Spark version
1.5.2. The vote is open until Sat Nov 7, 2015 at 00:00 UTC and passes if a
majority of at least 3 +1 PMC votes are cast.
[ ] +1 Release this package as Apache Spark 1.5.2
[ ] -1 Do not release this package because ...
The
Fine-grained mode does reuse the same JVM, but it may get a different
placement or a different number of allocated cores for the same total memory
allocation.
Tim
Sent from my iPhone
> On Nov 3, 2015, at 6:00 PM, Reynold Xin wrote:
>
> Soren,
>
> If I understand how Mesos works
Hi,
We are using Mesos fine-grained mode because it lets multiple instances of
Spark share machines, and each application gets its resources allocated
dynamically. Thanks & Regards, Meethu M
On Wednesday, 4 November 2015 5:24 AM, Reynold Xin
wrote:
If you
Hi Justin,
The Dataset API proposal is available here:
https://issues.apache.org/jira/browse/SPARK-.
-Sandy
On Tue, Nov 3, 2015 at 1:41 PM, Justin Uang wrote:
> Hi,
>
> I was looking through some of the PRs slated for 1.6.0 and I noted
> something called a Dataset,
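For readers following along, the proposal describes a typed API layered on top of DataFrames. The following is a hypothetical sketch of the kind of code it enables, based on the public design discussion at the time; the API was still experimental, so names and signatures may differ, and it assumes a `SQLContext` named `sqlContext` with its implicits in scope:

```scala
import sqlContext.implicits._

// A Dataset is typed by a user class, unlike an untyped DataFrame Row.
case class Person(name: String, age: Int)

val ds = sqlContext.createDataset(Seq(Person("Ann", 34), Person("Bob", 17)))

// Transformations take ordinary typed lambdas, checked at compile time,
// instead of DataFrame's string/Column expressions.
val adultNames = ds.filter(_.age >= 18).map(_.name)
adultNames.collect()
```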
Hello,
I'm trying to get maxCores and memoryPerExecutorMB into /api/v1 for this
ticket: https://issues.apache.org/jira/browse/SPARK-10565
I can't figure out which getApplicationInfoList is used by
ApiRootResource.scala.
It's attached in SparkUI, but SparkUI's doesn't have start / end times