I haven't looked closely at this, but I think your proposal makes sense.
On Sun, Apr 17, 2016 at 6:40 PM, Niranda Perera
wrote:
> Hi guys,
>
> Any update on this?
>
> Best
>
> On Tue, Apr 12, 2016 at 12:46 PM, Niranda Perera wrote:
>
>>
Also hitting this: https://github.com/apache/spark/pull/12455.
On Sun, Apr 17, 2016 at 9:22 PM, Hyukjin Kwon wrote:
> +1
>
> Yea, I am facing this problem as well,
> https://github.com/apache/spark/pull/12452
>
> I thought they were spurious because the tests passed in
Hi guys,
Any update on this?
Best
On Tue, Apr 12, 2016 at 12:46 PM, Niranda Perera
wrote:
> Hi all,
>
> I have encountered a small issue in the standalone recovery mode.
>
> Let's say there was an application A running in the cluster. Due to some
> issue, the entire
+1
Yea, I am facing this problem as well,
https://github.com/apache/spark/pull/12452
I thought they were spurious because the tests passed in my local environment.
2016-04-18 3:26 GMT+09:00 Kazuaki Ishizaki:
> I realized that recent Jenkins builds across different pull requests always
I also find version 3, without the `_`, more readable.
On Sun, Apr 17, 2016 at 3:02 AM, Mark Hamstra
wrote:
> I actually find my version of 3 more readable than the one with the `_`,
> which looks too much like a partially applied function. It's a minor
> issue, though.
>
>
Does that not mean that GC settings with concurrent collectors should be
preferred over parallel collectors, at least on the driver side? If so, why
not have a concurrent collector specified by default when the driver JVM is
launched, unless the user overrides it?
Your understanding is correct. If the driver is stuck in GC, then during
that period it cannot schedule any tasks.
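For reference, a hedged sketch of how a concurrent collector can be enabled
for the driver (CMS here, which was typical on JVMs of that era; the app
name is invented for illustration). Since the driver JVM is already running
by the time application code executes, the flags have to be supplied at
launch:

    import org.apache.spark.SparkConf

    object DriverGcSettings {
      def main(args: Array[String]): Unit = {
        // In client mode the driver JVM is launched by spark-submit, so GC
        // flags must be passed there, e.g.
        //   spark-submit --driver-java-options "-XX:+UseConcMarkSweepGC" ...
        // or persistently in conf/spark-defaults.conf:
        //   spark.driver.extraJavaOptions  -XX:+UseConcMarkSweepGC
        // Setting the property via SparkConf (below) only takes effect when
        // the conf is read before the driver JVM starts (cluster deploy mode).
        val conf = new SparkConf()
          .setAppName("concurrent-gc-driver") // illustrative name
          .set("spark.driver.extraJavaOptions", "-XX:+UseConcMarkSweepGC")
        println(conf.get("spark.driver.extraJavaOptions"))
      }
    }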
On Sun, Apr 17, 2016 at 10:27 AM, Rahul Tanwani
wrote:
> Hi Devs,
>
> In case of stop the world GC events on the driver JVM, since all the
> application
I realized that recent Jenkins builds across different pull requests always
fail in the following two tests:
"SPARK-8020: set sql conf in spark conf"
"SPARK-9757 Persist Parquet relation with decimal column"
Here are examples.
https://github.com/apache/spark/pull/11956 (consoleFull:
Take a look at spark-testing-base.
https://github.com/holdenk/spark-testing-base/blob/master/README.md
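A minimal sketch along the lines of the spark-testing-base README (assuming
the ScalaTest of that era; the `SharedSparkContext` trait supplies the `sc`
used below, and the suite name is invented):

    import com.holdenkarau.spark.testing.SharedSparkContext
    import org.scalatest.FunSuite

    class WordCountSpec extends FunSuite with SharedSparkContext {
      test("counts words with a shared SparkContext") {
        val counts = sc.parallelize(Seq("a", "b", "a"))
          .map(word => (word, 1))
          .reduceByKey(_ + _)
          .collectAsMap()
        assert(counts("a") == 2 && counts("b") == 1)
      }
    }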
On Apr 17, 2016 10:28 AM, "Evan Chan" wrote:
> What I want to find out is how to run tests like Spark's with
> local-cluster, just like that suite, but in your own
On Sat, Apr 16, 2016 at 11:12 PM, Reynold Xin wrote:
> First, really thank you for leading the discussion.
>
> I am concerned that it'd hurt Spark more than it helps. As many others
> have pointed out, this unnecessarily creates a new tier of connectors or
> 3rd party libraries
What I want to find out is how to run tests like Spark's with
local-cluster, just like that suite, but in your own projects. Has
anyone done this?
On Sun, Apr 17, 2016 at 5:37 AM, Takeshi Yamamuro wrote:
> Hi,
> Is it a bad idea to create a `SparkContext` in
Hi,
Is it a bad idea to create a `SparkContext` in `local-cluster` mode
yourself, like '
https://github.com/apache/spark/blob/master/core/src/test/scala/org/apache/spark/ShuffleSuite.scala#L55
'?
// maropu
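For anyone trying this outside Spark's own tree: a minimal sketch along the
lines of the ShuffleSuite linked above. Note that `local-cluster` spawns
real worker processes and generally needs a built Spark distribution with
SPARK_HOME set, so it may not work from an arbitrary project:

    import org.apache.spark.{SparkConf, SparkContext}

    object LocalClusterExample {
      def main(args: Array[String]): Unit = {
        // Master string is [workers, coresPerWorker, memoryPerWorkerMB],
        // the same values ShuffleSuite uses.
        val conf = new SparkConf()
          .setMaster("local-cluster[2,1,1024]")
          .setAppName("local-cluster-example")
        val sc = new SparkContext(conf)
        try {
          // A real shuffle across the two worker processes.
          val counts = sc.parallelize(1 to 100, 4)
            .map(i => (i % 2, 1))
            .reduceByKey(_ + _)
            .collectAsMap()
          assert(counts(0) == 50 && counts(1) == 50)
        } finally {
          sc.stop()
        }
      }
    }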
On Sun, Apr 17, 2016 at 9:47 AM, Evan Chan wrote:
> Hey
I actually find my version of 3 more readable than the one with the `_`,
which looks too much like a partially applied function. It's a minor
issue, though.
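To make the comparison concrete, a small self-contained sketch of the
variants under discussion (the function name is invented; the thread does
not show the PR's actual code):

    object MapStyles {
      def double(x: Int): Int = x * 2

      def main(args: Array[String]): Unit = {
        val xs = Seq(1, 2, 3)
        val a = xs.map(x => double(x)) // explicit lambda
        val b = xs.map(double(_))      // placeholder; reads like a partially applied function
        val c = xs.map(double)         // plain method reference ("version 3")
        assert(a == b && b == c)       // all three are equivalent
      }
    }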
On Sat, Apr 16, 2016 at 11:56 PM, Hyukjin Kwon wrote:
> Hi Mark,
>
> I know, but that could harm readability. AFAIK,
Hi Mark,
I know, but that could harm readability. AFAIK, for this reason, it is not
(or only rarely) used in Spark.
2016-04-17 15:54 GMT+09:00 Mark Hamstra:
> FWIW, 3 should work as just `.map(function)`.
>
> On Sat, Apr 16, 2016 at 11:48 PM, Reynold Xin
FWIW, 3 should work as just `.map(function)`.
On Sat, Apr 16, 2016 at 11:48 PM, Reynold Xin wrote:
> Hi Hyukjin,
>
> Thanks for asking.
>
> For 1 the change is almost always better.
>
> For 2 it depends on the context. In general, if the types are not obvious, it
> helps
Hi Hyukjin,
Thanks for asking.
For 1 the change is almost always better.
For 2 it depends on the context. In general, if the types are not obvious, it
helps readability to declare them explicitly.
For 3 again it depends on context.
So while it is a good idea to change 1 to reflect a more
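To make point 2 above concrete, a small sketch (the function and values are
invented for illustration):

    object TypeAnnotationStyle {
      def parseIds(csv: String): Seq[Int] =
        csv.split(",").map(_.trim.toInt).toList

      def main(args: Array[String]): Unit = {
        // The result type is not obvious at the call site, so an explicit
        // annotation helps the reader.
        val ids: Seq[Int] = parseIds("1, 2, 3")
        // The type is obvious from the literal; an annotation would be noise.
        val limit = 10
        println(ids.filter(_ < limit)) // List(1, 2, 3)
      }
    }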
First, really thank you for leading the discussion.
I am concerned that it'd hurt Spark more than it helps. As many others have
pointed out, this unnecessarily creates a new tier of connectors or 3rd
party libraries appearing to be endorsed by the Spark PMC or the ASF. We
can alleviate this
Hi all,
First of all, I am sorry that this is relatively trivial and minor, but I
just want to be clear on this and careful about future PRs.
Recently, I submitted a PR (https://github.com/apache/spark/pull/12413)
about Scala style, and it was merged. In this PR, I changed