-1 (with my Apache member hat on, non-binding)
I'll continue discussion in the other thread, but I don't think we should
share signing keys.
On Fri, Sep 15, 2017 at 5:14 PM, Holden Karau wrote:
> Indeed it's limited to people with login permissions on the Jenkins host
> (and perhaps further limited, ...)
Oh yes and to keep people more informed I've been updating a PR for the
release documentation as I go to write down some of this unwritten
knowledge -- https://github.com/apache/spark-website/pull/66
On Fri, Sep 15, 2017 at 5:12 PM Holden Karau wrote:
> Also continuing the discussion from the vote threads ...
Indeed it's limited to people with login permissions on the Jenkins host
(and perhaps further limited, I'm not certain). Shane probably knows more
about the ACLs, so I'll ask him in the other thread for specifics.
This is maybe branching a bit from the question of the current RC though,
so I'd ...
Also continuing the discussion from the vote threads, Shane probably has
the best idea on the ACLs for Jenkins so I've CC'd him as well.
On Fri, Sep 15, 2017 at 5:09 PM Holden Karau wrote:
> Changing the release jobs, beyond the available parameters, right now
> depends on Josh Rosen, as there are some scripts ...
Changing the release jobs, beyond the available parameters, right now
depends on Josh Rosen, as there are some scripts which generate the jobs
which aren't public. I've done temporary fixes in the past with the Python
packaging, but my understanding is that in the medium term it requires
access to ...
I think this needs to be fixed. It's true that there are barriers to
publication, but the signature is what we use to authenticate Apache
releases.
If Patrick's key is available on Jenkins for any Spark committer to use,
then the chances of a compromise are much higher than for a normal RM key.
rb
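Since the thread turns on what a release signature actually proves, here is a
minimal sketch of the sign/verify round trip using a throwaway GnuPG key in an
isolated keyring (all file names and the key identity are invented for
illustration; real Apache releases are verified against keys published in the
project's KEYS file, not a throwaway key like this):

```shell
# Isolated GnuPG home so nothing touches the real keyring.
export GNUPGHOME="$(mktemp -d)"

# Throwaway signing key (illustration only; a real RM key has a passphrase).
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Test RM <rm@example.invalid>" default default never

# Stand-in for a release artifact.
echo "pretend release tarball" > spark-x.y.z-bin.tgz

# Detached, ASCII-armored signature -- what an RM produces per artifact.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --armor --detach-sign spark-x.y.z-bin.tgz

# What a downloader runs; exits non-zero if the bits or the key don't match.
gpg --verify spark-x.y.z-bin.tgz.asc spark-x.y.z-bin.tgz
```

The point of the detached signature is exactly the concern raised above:
whoever holds the private key can produce a signature that verifies, so the
key's availability defines who can produce an "authentic" release.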
I'm not familiar with the release procedure, can you send a link to this
Jenkins job? Can anyone run this job, or is it limited to committers?
rb
On Fri, Sep 15, 2017 at 12:28 PM, Holden Karau wrote:
> That's a good question, I built the release candidate; however, the Jenkins
> scripts don't take a parameter for configuring who signs them ...
Xiao, if it doesn't apply or you've changed your mind, it would be rad if you
could re-vote.
On Fri, Sep 15, 2017 at 2:22 PM, Felix Cheung
wrote:
> Yes ;)
Yes ;)
From: Xiao Li
Sent: Friday, September 15, 2017 2:22:03 PM
To: Holden Karau
Cc: Ryan Blue; Denny Lee; Felix Cheung; Sean Owen; dev@spark.apache.org
Subject: Re: [VOTE] Spark 2.1.2 (RC1)
Sorry, this release candidate is 2.1.2. The issue is in 2.2.1.
2017-09-15 14:21 GMT-07:00 Xiao Li :
> -1
>
> See the discussion in https://github.com/apache/spark/pull/19074
>
> Xiao
>
>
>
> 2017-09-15 12:28 GMT-07:00 Holden Karau :
>
>> That's a good question, I built the release candidate however ...
-1
See the discussion in https://github.com/apache/spark/pull/19074
Xiao
2017-09-15 12:28 GMT-07:00 Holden Karau :
> That's a good question, I built the release candidate; however, the Jenkins
> scripts don't take a parameter for configuring who signs them; rather, they
> always sign them with Patrick's key. ...
Can we just create those tables once locally using official Spark versions
and commit them? Then the unit tests can just read these files and don't
need to download Spark.
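A sketch of the committed-fixture idea (file names, layout, and the JSON
format are all invented for illustration; the real tables would be written
once by an official Spark build and checked in): the test then only reads
the committed copy, with no network access or Spark download.

```python
import json
import os
import tempfile

# One-time, manual step: generate the golden fixture with a real Spark
# version and commit the resulting file. Faked here with plain JSON.
fixture_dir = tempfile.mkdtemp()
fixture_path = os.path.join(fixture_dir, "hive_table_v2.1.2.json")
with open(fixture_path, "w") as f:
    json.dump({"schema": ["id", "name"], "rows": [[1, "a"], [2, "b"]]}, f)

def load_golden_table(path):
    """Unit-test side: read the committed fixture; no download needed."""
    with open(path) as f:
        return json.load(f)

table = load_golden_table(fixture_path)
assert table["schema"] == ["id", "name"]
assert len(table["rows"]) == 2
```

The trade-off is the usual one with golden files: they must be regenerated
by hand whenever the official format changes.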
On Thu, Sep 14, 2017 at 8:13 AM, Sean Owen wrote:
> I think the download could use the Apache mirror, yeah. I don't know if
Yeah, I had meant to ask about that in the past. While I presume Patrick
consents to this and all that, it does mean that anyone with access to said
Jenkins scripts can create a signed Spark release, regardless of who they
are.
I haven't thought through whether that's a theoretical issue we can ignore ...
That's a good question, I built the release candidate; however, the Jenkins
scripts don't take a parameter for configuring who signs them; rather, they
always sign them with Patrick's key. You can see this from previous
releases which were managed by other folks but still signed by Patrick.
On Fri, Sep ...
The signature is valid, but why was the release signed with Patrick
Wendell's private key? Did Patrick build the release candidate?
rb
On Fri, Sep 15, 2017 at 6:36 AM, Denny Lee wrote:
> +1 (non-binding)
>
> On Thu, Sep 14, 2017 at 10:57 PM Felix Cheung
> wrote:
>
>> +1 tested SparkR package on Windows, r-hub, Ubuntu.
Thank you, Ryan!
Yes. Right. If we turn off `spark.sql.hive.convertMetastoreParquet`, Spark
pads with spaces.
For ORC CHAR, it's the same. ORC only handles truncation on write.
The padding is handled by Hive side in `HiveCharWritable` via
`HiveBaseChar.java` on read.
Spark ORCFileFormat uses HiveCharWritable ...
My guess is that this is because Parquet doesn't have a CHAR type; the
padding should be applied to strings by Spark for Parquet.
The reason, from Parquet's perspective, not to support CHAR is that we have
no expectation that it is a portable type. Non-SQL writers aren't going to
pad values with spaces, and ...
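For what it's worth, the CHAR(n) behavior under discussion can be sketched in
a few lines (pure illustration, not Spark's or Hive's actual code path): the
writer truncates anything longer than n, and a Hive-style reader pads back out
to n with trailing spaces on the way in. Since Parquet has no CHAR type, a
reader that skips the read-side step simply loses the padding.

```python
def char_write(value: str, n: int) -> str:
    # On write, CHAR(n) truncates anything longer than n
    # (roughly what ORC handles on the write path).
    return value[:n]

def char_read(stored: str, n: int) -> str:
    # On read, a Hive-style reader pads with trailing spaces back to
    # length n (roughly the HiveBaseChar behavior described above).
    return stored.ljust(n)

assert char_write("abcdef", 4) == "abcd"   # truncated on write
assert char_read("ab", 4) == "ab  "        # padded back on read
```

This makes the portability point concrete: the stored bytes for "ab" carry no
record of n, so only a CHAR-aware reader reconstructs the padded value.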
+1 (non-binding)
On Thu, Sep 14, 2017 at 10:57 PM Felix Cheung
wrote:
> +1 tested SparkR package on Windows, r-hub, Ubuntu.
>
> --
> From: Sean Owen
> Sent: Thursday, September 14, 2017 3:12 PM
> Subject: Re: [VOTE] Spark 2.1.2 (RC1)
> To: Holden Karau ,
>
>
>
> +1
>
Hello, guys.
I'm a contributor to the Apache Ignite project, which is self-described as an
in-memory computing platform.
It has Data Grid features: a distributed, transactional key-value store
[1], distributed SQL support [2], etc. [3]
Currently, I'm working on integration between Ignite and Spark [4] ...
I'm working on updating to Scala 2.12 and have hit a compile error in
Scala 2.12 that I'm struggling to design a fix for (one that doesn't modify
the API significantly). If you run "./dev/change-scala-version.sh 2.12" and
compile, you'll see errors like...
[error]
/Users/srowen/Documents/Cloudera/spark/co