.
-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org
--
Luciano Resende
http://people.apache.org/~lresende
http://twitter.com/lresende1975
http://lresende.blogspot.com/
they're missing a credit are welcome to ping me
directly to get it fixed.
+1, this will help give people a bit of credit, and I guess it also makes
it easier to recognize community contributors on their path to becoming
committers.
--
Luciano Resende
)
at
org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testCmdBuilder(SparkSubmitCommandBuilderSuite.java:174)
at
org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testDriverCmdBuilder(SparkSubmitCommandBuilderSuite.java:51)
--
Luciano Resende
to expand the build capacity for Spark, so I will use some of
the nodes he is preparing to test/run these builds for now.
Please let me know if there is anything else needed around this.
[1] https://github.com/apache/spark/pull/8101
[2] https://issues.apache.org/jira/browse/SPARK-10521
--
Luciano
Apache
> >> >> > mirror?
> >> >> > The latest version is 2.7.1, by the way.
> >> >> >
> >> >> >
> >> >> > you should be grabbing the artifacts off the ASF and then verifying
> >> >> > their
> >> >> > SHA1 checksums as published on the ASF HTTPS web site
> >> >> >
> >> >> >
> >> >> > The problem with the Apache mirrors, if I am not mistaken, is that
> >> >> > you
> >> >> > cannot use a single URL that automatically redirects you to a
> working
> >> >> > mirror
> >> >> > to download Hadoop. You have to pick a specific mirror and pray it
> >> >> > doesn't
> >> >> > disappear tomorrow.
> >> >> >
> >> >> >
> >> >> > They don't go away, especially http://mirror.ox.ac.uk , and in the US
> >> >> > the apache.osuosl.org, OSU being where a lot of the ASF servers are
> >> >> > kept.
> >> >> >
> >> >> > full list with availability stats
> >> >> >
> >> >> > http://www.apache.org/mirrors/
> >> >> >
> >> >> >
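The verification step suggested above (download from any mirror, then compare against the SHA-1 published on the ASF HTTPS site) boils down to hashing the downloaded artifact. A minimal Python sketch; the file name in the comment is illustrative:

```python
import hashlib

def sha1_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-1 of a downloaded artifact so it can be compared
    with the checksum published on the ASF HTTPS site."""
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        # Read in chunks so large release tarballs don't load into memory.
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

# e.g. compare sha1_of("hadoop-2.7.1.tar.gz") with the published .sha1 value
```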
>
>
>
--
Luciano Resende
--
Luciano Resende
with the Spark build ?
My goal is to integrate some DB2 JDBC Dialect tests as mentioned in
SPARK-10521
[1] https://issues.apache.org/jira/browse/SPARK-9818
[2] https://issues.apache.org/jira/browse/SPARK-6136
[3] https://issues.apache.org/jira/browse/SPARK-10521
--
Luciano Resende
wrote:
> SPARK-9818 you link to actually links to a pull request trying to bring
> them back.
>
>
> On Mon, Sep 14, 2015 at 1:34 PM, Luciano Resende <luckbr1...@gmail.com>
> wrote:
>
>> I was looking for the code mentioned in SPARK-9818 and SPARK-6136 that
>> sup
ithub.com/EntilZha | LinkedIn:
> https://www.linkedin.com/in/pedrorodriguezscience
>
>
--
Luciano Resende
d and install SparkR source package
>> >> > separately
>> >> > from the Spark distribution?
>> >> >
>> >> > An R user can simply download the Spark distribution, which contains
>> >> > the SparkR source and binary package, and directly use SparkR. No need
>> >> > to install the SparkR package at all.
>> >> >
>> >> >
>> >> >
>> >> > From: Hossein [mailto:fal...@gmail.com]
>> >> > Sent: Tuesday, September 22, 2015 9:19 AM
>> >> > To: dev@spark.apache.org
>> >> > Subject: SparkR package path
>> >> >
>> >> >
>> >> >
>> >> > Hi dev list,
>> >> >
>> >> >
>> >> >
>> >> > SparkR backend assumes SparkR source files are located under
>> >> > "SPARK_HOME/R/lib/." This directory is created by running
>> >> > R/install-dev.sh.
>> >> > This setting makes sense for Spark developers, but if an R user
>> >> > downloads
>> >> > and installs the SparkR source package, the source files are going
>> >> > to be placed in different locations.
>> >> >
>> >> >
>> >> >
>> >> > In the R runtime it is easy to find the location of package files using
>> >> > path.package("SparkR"). But we need to make some changes to the R backend
>> >> > and/or spark-submit so that the JVM process learns the location of
>> >> > worker.R, daemon.R, and shell.R from the R runtime.
>> >> >
>> >> >
>> >> >
>> >> > Do you think this change is feasible?
>> >> >
>> >> >
>> >> >
>> >> > Thanks,
>> >> >
>> >> > --Hossein
>> >>
>> >>
>> >
>> >
>>
>
>
--
Luciano Resende
dy
> present in 1.5.0 will not block this release.
>
> ===
> What should happen to JIRA tickets still targeting 1.5.1?
> ===
> Please target 1.5.2 or 1.6.0.
>
>
>
>
--
Luciano Resende
.
- Documentation: document the release version of public API methods
--
Luciano Resende
personally did.
--
Luciano Resende
d help with installing the DB2 JDBC driver on the Jenkins slave
machines
- We could also create a new profile for the DB2 Docker tests, so that
these tests run only when that profile is enabled.
I could probably think of other options, but they would sound a lot
hackier.
Thoughts ? Some su
transformer.
>- Spark SQL's partition discovery has been changed to only discover
>partition directories that are children of the given path. (i.e. if
>path="/my/data/x=1" then x=1 will no longer be considered a partition
>but only children of x=1.) This behavior can be overridden by manually
>specifying the basePath that partitioning discovery should start with (
>SPARK-11678 <https://issues.apache.org/jira/browse/SPARK-11678>).
>- When casting a value of an integral type to timestamp (e.g. casting
>a long value to timestamp), the value is treated as being in seconds
>instead of milliseconds (SPARK-11724
><https://issues.apache.org/jira/browse/SPARK-11724>).
>- With the improved query planner for queries having distinct
>aggregations (SPARK-9241
><https://issues.apache.org/jira/browse/SPARK-9241>), the plan of a
>query having a single distinct aggregation has been changed to a more
>robust version. To switch back to the plan generated by Spark 1.5's
>planner, please set spark.sql.specializeSingleDistinctAggPlanning to
>true (SPARK-12077 <https://issues.apache.org/jira/browse/SPARK-12077>).
>
>
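The integral-to-timestamp change above can be illustrated outside Spark. A plain-Python sketch of the new semantics only (this is not Spark code): the value is interpreted as seconds since the epoch, where older versions read it as milliseconds.

```python
from datetime import datetime, timezone

def integral_to_timestamp(value: int) -> datetime:
    # Spark 1.6 semantics: an integral value cast to timestamp is taken
    # as seconds since the epoch (previously: milliseconds).
    return datetime.fromtimestamp(value, tz=timezone.utc)

print(integral_to_timestamp(1450000000).isoformat())
# 1450000000 seconds -> 2015-12-13T09:46:40+00:00
```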
--
Luciano Resende
ting utilities, so I'll take a peek at those to
> see if there are any specifics of their solution that we should adapt for
> Spark.
>
> On Wed, Oct 21, 2015 at 1:16 PM, Luciano Resende <luckbr1...@gmail.com>
> wrote:
>
>> I have started looking into PR-8101 [1] and
On Mon, Jun 6, 2016 at 12:05 PM, Reynold Xin wrote:
> The bahir one was a good argument actually. I just clicked the button to
> push it into Maven central.
>
>
Thank You !!!
t end users -- even with caveats and warnings.
>
> (And I think that's right!)
>
>
In this case, I would only expect the 2.0.0 preview to be treated as just
any other release, period.
--
Luciano Resende
his should be considered as any other release, published permanently,
which at some point will become obsolete and users will move on to more
stable releases.
Thanks
--
Luciano Resende
ld a developer choose a preview, alpha, beta
compared to the GA 2.0 release ?
As for the being-stale part, this is true for every release anyone puts
out there.
--
Luciano Resende
I recently used labels to mark a couple of JIRAs that my team and I have an
interest in, so it's easier to share a query and check their status. But I
noticed that these labels were removed.
Are there any issues with labeling JIRAs ? Any other suggestions ?
--
Luciano Resende
based on a nightly build.
https://github.com/lresende/docker-spark
One question though, how often should the image be updated ? Every night ?
Every week ? I could see if I can automate the build + publish in a CI job
on one of our Jenkins servers (Apache or something)...
--
Luciano Resende
Client from Almworks?
> It's free and I highly recommend it, and IIRC it lets you manage some
> private labels locally.
>
>
The issue with maintaining anything locally is that it's then not easily
shareable (e.g. I can't just send a link to a query)
The question is more like, what issues
> it's not from the Apache project.
>
>
+1
--
Luciano Resende
ter + releasenotes.
>
>
Well, if we consider the worst case scenario, and we have a jira, let's say
with a few labels, what harm does it make ?
--
Luciano Resende
ming Yanbo!
>
> Matei
>
>
--
Luciano Resende
so, there is no Python support, no samples on the PR demonstrating how to
use the security capabilities, and no documentation updates.
Thanks
--
Luciano Resende
The Apache Bahir project is voting a release based on Spark 2.0.0-preview.
https://www.mail-archive.com/dev@bahir.apache.org/msg00085.html
It currently provides the following Apache Spark Streaming connectors:
streaming-akka
streaming-mqtt
streaming-twitter
streaming-zeromq
ions and APIs. Please join me in welcoming
> Herman and Wenchen.
>
> Matei
>
Congratulations !!!
--
Luciano Resende
>>
>>
>
--
Luciano Resende
n nexus
for review.
The other option is to add the RC into
https://dist.apache.org/repos/dist/dev/
--
Luciano Resende
ot be that expensive to maintain.
>
>
Subprojects, or even sending this back to the incubator as a "connectors
project", is better than a public GitHub repo per package in my opinion.
Now, if this move is signaling to customers that the Streaming API
as in 1.x is going away in favor of the
its own
set of committers, etc., which puts less burden on the Spark PMC.
Anyway, my main issue here is not who and how it's going to be managed, but
that it continues under Apache governance.
--
Luciano Resende
be easier upgrade I guess ?).
>> >
>> >
>> > Proposal is for 1.6x line to continue to be supported with critical
>> fixes; newer features will require 2.x and so jdk8
>> >
>> > Regards
>> > Mridul
>> >
>> >
>>
>>
>>
>>
>
>
> --
> Michael Gummelt
> Software Engineer
> Mesosphere
>
--
Luciano Resende
o keep the code in the main
> >>>>> Spark repo, right?
> >>>>>
> >>>>> iii. Usability
> >>>>>
> >>>>> This is another thing I don't see discussed. For Scala-based code
> >>>>> things don't change much, I guess, if the artifact names don't change
> >>>>> (another reason to keep things in the ASF?), but what about python?
> >>>>> How are pyspark users expected to get that code going forward, since
> >>>>> it's not in Spark's pyspark.zip anymore?
> >>>>>
> >>>>>
> >>>>> Is there an easy way of keeping these things within the ASF Spark
> >>>>> project? I think that would be better for everybody.
> >>>>>
> >>>>> --
> >>>>> Marcelo
> >>>>>
--
Luciano Resende
the Apache rule (due to
> licensing issue) can go into GitHub Spark-Extra (or Spark-Package). It's
> like the ServiceMix Extra or Camel Extra on github.
>
>
We could look into this, but it might be a "Spark Extra discussion" on how
we can help foster a community around the non-comp
ava 7 and still be able to test things in 1.8, including lambdas,
>>>>> which seems to be the main thing you were worried about.
>>>>>
>>>>>
>>>>> > On Thu, Mar 24, 2016 at 4:48 PM, Marcelo Vanzin <van...@cloudera.com>
>>>>> wrote:
>>>>> >>
>>>>> >> On Thu, Mar 24, 2016 at 4:46 PM, Reynold Xin <r...@databricks.com>
>>>>> wrote:
>>>>> >> > Actually it's *way* harder to upgrade Scala from 2.10 to 2.11,
>>>>> than
>>>>> >> > upgrading the JVM runtime from 7 to 8, because Scala 2.10 and
>>>>> 2.11 are
>>>>> >> > not
>>>>> >> > binary compatible, whereas JVM 7 and 8 are binary compatible
>>>>> except
>>>>> >> > certain
>>>>> >> > esoteric cases.
>>>>> >>
>>>>> >> True, but ask anyone who manages a large cluster how long it would
>>>>> >> take them to upgrade the jdk across their cluster and validate all
>>>>> >> their applications and everything... binary compatibility is a tiny
>>>>> >> drop in that bucket.
>>>>> >>
>>>>> >> --
>>>>> >> Marcelo
>>>>> >
>>>>> >
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Marcelo
>>>>>
>>>>>
>>>>>
>>>
>>
>
--
Luciano Resende
a project name that does not have "Spark"
as part of it, and we will provide an update here when we find a suitable
name. Suggestions are welcome (please send them directly to my inbox to
avoid flooding the mailing list).
Thanks
On Sun, Apr 17, 2016 at 9:16 AM, Luciano Resende <luck
t or something)
Some of these are described at
https://github.com/SparkTC/redrock/blob/master/twitter-decahose/src/main/scala/com/decahose/ApplicationContext.scala
--
Luciano Resende
On Sunday, May 22, 2016, Matei Zaharia wrote:
> It looks like the discussion thread on this has only had positive replies,
> so I'm going to call a VOTE. The proposal is to remove the maintainer
> process in
>
nt above, if "Spark-Extras" is a more acceptable project name, I believe
this is ok as well.
I also understand that the Spark PMC might have concerns with branding, and
that's why we are inviting all members of the Spark PMC to join the project
and help oversee and manage the project.
ile also considering a way to move code to a
maintenance mode location.
--
Luciano Resende
C would be willing to
continue the development of the remaining connectors that stayed in the
Spark 2.0 codebase in the "Spark Extras" project.
Thanks in advance, and we welcome any feedback around this proposal before
we present to the Apache Board for consideration.
On Sat, Mar 26, 20
h out to us that maintain the active ecosystem projects.
> (I’m not saying you should put me in :) but rather suggesting that if
> this is your aim, it would be good to reach out beyond just the Spark PMC
> members.
>
> thanks,
> Evan
>
> On Apr 17, 2016, at 9:16 AM, Luciano Resend
commit list gets write access to this extras
> repo, moving things is straightforward. Release wise, things could/should
> be in sync.
> >
> > If there's a risk, it's the eternal problem of the contrib/ dir.
> Stuff ends up there that never gets maintained. I don't see that being any
> worse than if things were thrown to the wind of a thousand github repos: at
> least now there'd be a central issue tracking location.
>
--
Luciano Resende
PMC has been added to it, but I don't feel comfortable adding
names to it at my will. And I have updated the list of committers and
currently we have the following on the draft proposal:
Initial PMC
- Luciano Resende (lresende AT apache DOT org) (Apache Member)
- Chris Mattmann (
=
> Critical bugs impacting major functionalities.
>
> Bugs already present in 1.x, missing features, or bugs related to new
> features will not necessarily block this release. Note that historically
> Spark documentation has been published on the website separately from the
>
-1 votes. I will
> work on packaging the new release next week.
>
>
> +1
>
> Reynold Xin*
> Sean Owen*
> Shivaram Venkataraman*
> Jonathan Kelly
> Joseph E. Gonzalez*
> Krishna Sankar
> Dongjoon Hyun
> Ricardo Almeida
> Joseph Bradley*
> Matei Zahari
h a spark server
running on that same machine).
From sbt, I think you can just use publishTo and define a local repository,
something like:

publishTo := Some("Local Maven Repository" at "file://" + Path.userHome.absolutePath + "/.m2/repository")
--
Luciano Resende
o improve its usability and performance.
>
> Please join me in welcoming the two!
>
>
>
--
Luciano Resende
> at org.glassfish.jersey.client.JerseyInvocation.
> validateHttpMethodAndEntity(JerseyInvocation.java:126)
> ...
> 16/09/06 11:52:00 INFO SparkContext: Invoking stop() from shutdown hook
> 16/09/06 11:52:00 INFO MapOutputTrackerMasterEndpoint:
> MapOutputTrackerMasterEndpoint stopped!
>
>
>
> Thanks
> -suresh
>
>
--
Luciano Resende
atively simple use of Docker here, I wonder whether we could just write
> some simple scripting over the `docker` command-line tool instead of
> pulling in such a problematic library.
>
> On Wed, Sep 7, 2016 at 2:36 PM Luciano Resende <luckbr1...@gmail.com>
> wrote:
>
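The "simple scripting over the `docker` command-line tool" idea quoted above amounts to a thin subprocess wrapper. A hypothetical Python sketch (the helper name and the example call are illustrative, not from the PR):

```python
import subprocess

def cli(command: str, *args: str) -> str:
    """Run a command-line tool and return its stripped stdout.
    The DB2 docker tests would then reduce to calls like
    cli("docker", "run", "-d", ...) instead of a client library."""
    result = subprocess.run(
        [command, *args], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()
```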
>> I
2.0
> <https://databricks.com/blog/2016/07/28/continuous-applications-evolving-streaming-in-apache-spark-2-0.html>
> databricks.com
> Apache Spark 2.0 lays the foundation for Continuous Applications, a
> simplified and unified way to write end-to-end streaming applications that
> reacts to data in real-time.
>
>
--
Luciano Resende
> A: Please mark the fix version as 2.0.2, rather than 2.0.1. If a new RC
> (i.e. RC4) is cut, I will change the fix version of those patches to 2.0.1.
>
>
>
--
Luciano Resende
.html
>>>
>>> We would like to acknowledge all community members for contributing
>>> patches to this release.
>>>
>>>
>>>
>>
>>
>> --
>> --
>> Cheers,
>> Praj
>>
>
>
--
Luciano Resende
repositories/releases/
> org/apache/spark/spark-core_2.11/
> -- Not sure why they haven't synced to maven central yet
>
> Shivaram
>
> On Wed, Oct 5, 2016 at 8:37 PM, Luciano Resende <luckbr1...@gmail.com>
> wrote:
> > It usually doesn't take that long to be sy
ct, just
> >>> means it would no longer be published as a Maven artifact. (These have
> >>> never been bundled in the main Spark artifacts.)
> >>>
> >>> I wanted to give a heads up to see if anyone a) believes this
> >>> conclusion is wrong or b) wants to take it up with legal@? I'm
&
make it easy to do so) -
> as they are explicitly agreeing to additional licenses.
>
> Regards
> Mridul
>
>
+1, by providing instructions on how the user would build, and attaching
the license details to the instructions, we are then safe on the legal
aspects of it.
--
Luciano
ease mark the fix version as 2.0.2, rather than 2.0.1. If a new RC
> (i.e. RC5) is cut, I will change the fix version of those patches to 2.0.1.
>
>
>
--
Luciano Resende
Congratulations Sean !!!
On Monday, October 3, 2016, Reynold Xin wrote:
> Hi all,
>
> Xiao Li, aka gatorsmile, has recently been elected as an Apache Spark
> committer. Xiao has been a super active contributor to Spark SQL. Congrats
> and welcome, Xiao!
>
> - Reynold
>
>
atches merging into branch-2.0 from
> now on?
> A: Please mark the fix version as 2.0.3, rather than 2.0.2. If a new RC
> (i.e. RC2) is cut, I will change the fix version of those patches to 2.0.2.
>
>
>
--
Luciano Resende
>>>>>>> > as well. We should also start publishing checksums in the Spark
>>>>>>> VOTE thread,
>>>>>>> > which are currently missing. The risk I'm concerned about is that
>>>>>>> if the key
>>>>>>> > were compromised, it would be possible to replace binaries with
>>>>>>> perfectly
>>>>>>> > valid ones, at least on some mirrors. If the Apache copy were
>>>>>>> replaced, then
>>>>>>> > we wouldn't even be able to catch that it had happened. Given the
>>>>>>> high
>>>>>>> > profile of Spark and the number of companies that run it, I think
>>>>>>> we need to
>>>>>>> > take extra care to make sure that can't happen, even if it is an
>>>>>>> annoyance
>>>>>>> > for the release managers.
>>>>>>>
>>>>>>> --
>>>>>>> Marcelo
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> Cell: 425-233-8271
>>> Twitter: https://twitter.com/holdenkarau
>>>
>>
>>
>
>
> --
> Ryan Blue
> Software Engineer
> Netflix
>
--
Luciano Resende
; the jobs. I suppose it would be ideal, in any event, for the actual
>>>>>>> release
>>>>>>> manager to sign.
>>>>>>>
>>>>>>> On Fri, Sep 15, 2017 at 8:28 PM Holden Karau <hol...@pigscanfly.ca>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> That's a good question, I built the release candidate however the
>>>>>>>> Jenkins scripts don't take a parameter for configuring who signs them
>>>>>>>> rather it always signs them with Patrick's key. You can see this from
>>>>>>>> previous releases which were managed by other folks but still signed by
>>>>>>>> Patrick.
>>>>>>>>
>>>>>>>> On Fri, Sep 15, 2017 at 12:16 PM, Ryan Blue <rb...@netflix.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> The signature is valid, but why was the release signed with
>>>>>>>>> Patrick Wendell's private key? Did Patrick build the release
>>>>>>>>> candidate?
>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>
>
--
Luciano Resende
se this thread or a new specific
one to discuss any details.
Thanks
--
Luciano Resende
code
> freeze" so bug fixes only. If you're uncertain if something should be back
> ported please reach out. If you do commit to branch-2.1 please tag your
> JIRA issue fix version for 2.1.3 and if we cut another RC I'll move the
> 2.1.3 fixed into 2.1.2 as appropriate.
>
> *