In the past, the Spark community has created preview packages (not official
releases) and used those as opportunities to ask community members to test
the upcoming versions of Apache Spark. Several people in the Apache
community have suggested we conduct votes for these preview packages and
turn th
> Another clarification: not databricks, but the Apache Spark PMC grants
> access to the JIRA / wiki. That said... I'm not actually sure how it's done.
word. i'll make the changes if we need to.
>
> i can't give you permissions -- that has to be (most likely) through
> someone @ databricks, like michael.
>
Another clarification: not databricks, but the Apache Spark PMC grants
access to the JIRA / wiki. That said... I'm not actually sure how it's done.
Yep. Let's hold on. :)
On Tue, May 24, 2016 at 3:45 PM, shane knapp wrote:
> > Sure, could you give me the permission for Spark Jira?
> >
> > Although we haven't decided yet, I can add a Travis-related section
> > (summarizing current configurations, expected VM HW, etc.).
> >
> i can't give you
> Sure, could you give me the permission for Spark Jira?
>
> Although we haven't decided yet, I can add a Travis-related section
> (summarizing current configurations, expected VM HW, etc.).
>
i can't give you permissions -- that has to be (most likely) through
someone @ databricks, like michael.
Thank you, Shane.
Sure, could you give me the permission for Spark Jira?
Although we haven't decided yet, I can add a Travis-related section
(summarizing current configurations, expected VM HW, etc.).
That will be helpful for further discussions.
It's just a Wiki, you can delete the Travis Sect
> As Sean said, Vanzin made a PR for JDK7 compilation. We can ignore the issue
> of JDK7 compilation.
>
vanzin and i are working together on this right now... we currently
have java 7u79 installed on all of the workers. if some random test
failures keep happening during his tests, i will roll out
Hi, All.
As Sean said, Vanzin made a PR for JDK7 compilation. We can ignore the
issue of JDK7 compilation.
The remaining issues are the java-linter and maven installation test.
To: Michael
For the rate limit, the Apache Foundation seems to use 30 concurrent builds,
according to the INFRA blog.
https://blog
Thanks, Koert. This is great. Please keep them coming.
On Tue, May 24, 2016 at 9:27 AM, Koert Kuipers wrote:
> https://issues.apache.org/jira/browse/SPARK-15507
>
> On Tue, May 24, 2016 at 12:21 PM, Ted Yu wrote:
>
>> Please log a JIRA.
>>
>> Thanks
>>
>> On Tue, May 24, 2016 at 8:33 AM, Koert
https://issues.apache.org/jira/browse/SPARK-15507
On Tue, May 24, 2016 at 12:21 PM, Ted Yu wrote:
> Please log a JIRA.
>
> Thanks
>
> On Tue, May 24, 2016 at 8:33 AM, Koert Kuipers wrote:
>
>> hello,
>> as we continue to test spark 2.0 SNAPSHOT in-house we ran into the
>> following trying to po
+1 (non-binding)
I think this is an important step to improve Spark as an Apache project.
.. Owen
On Mon, May 23, 2016 at 11:18 AM, Holden Karau wrote:
> +1 non-binding (as a contributor anything which speed things up is worth
> a try, and git blame is a good enough substitute for the list whe
Please log a JIRA.
Thanks
On Tue, May 24, 2016 at 8:33 AM, Koert Kuipers wrote:
> hello,
> as we continue to test spark 2.0 SNAPSHOT in-house we ran into the
> following trying to port an existing application from spark 1.6.1 to spark
> 2.0.0-SNAPSHOT.
>
> given this code:
>
> case class Test(a
The first item as a whole should be null; please refer to the JIRA.
Sent from my iPhone
> On May 24, 2016, at 7:31 AM, Koert Kuipers wrote:
>
> got it, but i assume that's an internal implementation detail, and it should
> show null, not -1?
>
>> On Tue, May 24, 2016 at 3:10 AM, Zhan Zhang wro
hello,
as we continue to test spark 2.0 SNAPSHOT in-house we ran into the
following trying to port an existing application from spark 1.6.1 to spark
2.0.0-SNAPSHOT.
given this code:
case class Test(a: Int, b: String)
val rdd = sc.parallelize(List(Row(List(Test(5, "ha"), Test(6, "ba")
val sche
got it, but i assume that's an internal implementation detail, and it should
show null, not -1?
On Tue, May 24, 2016 at 3:10 AM, Zhan Zhang wrote:
> The reason for "-1" is that the default value for Integer is -1 if the
> value is null.
>
> def defaultValue(jt: String): String = jt match {
>
Do you need more information?
> On 23 May 2016, at 19:16, Ovidiu-Cristian MARCU
> wrote:
>
> Yes,
>
> git log
> commit dafcb05c2ef8e09f45edfb7eabf58116c23975a0
> Author: Sameer Agarwal <sam...@databricks.com>
> Date: Sun May 22 23:32:39 2016 -0700
>
> for #2 see my comments in https
The reason for "-1" is that the default value for Integer is -1 if the
value is null.
def defaultValue(jt: String): String = jt match {
  ...
  case JAVA_INT => "-1"
  ...
}
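The snippet above explains the -1 Koert saw: generated rows keep primitives in pre-initialized slots guarded by a separate null flag, so reading the slot without checking the flag surfaces the sentinel default instead of null. A minimal sketch of that idea in plain Python (not Spark's actual codegen; the class and method names here are made up for illustration):

```python
# Sketch of a generated-row layout: a primitive int slot plus a null flag.
# The slot is pre-filled with the sentinel from `case JAVA_INT => "-1"`;
# only the null flag says whether a real value is present.
class SketchRow:
    def __init__(self, value=None):
        self.is_null = value is None
        self.int_slot = -1 if value is None else value  # sentinel default

    def get_int_unchecked(self):
        # Reading the primitive slot without consulting the null flag
        # leaks the sentinel -1 instead of null.
        return self.int_slot

    def get(self):
        # Correct read: check the null flag first.
        return None if self.is_null else self.int_slot

row = SketchRow()               # a null Integer field
print(row.get_int_unchecked())  # -1, the confusing value from the thread
print(row.get())                # None, the expected null
```

Any consumer that forgets the null-flag check sees -1, which is why the fix belongs in the read path rather than in the default value itself.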