are at best
> 1 per year. That is too slow to respond to a critical vulnerability.
>
> On Wed, Mar 11, 2020 at 5:02 PM Igor Dvorzhak
> wrote:
>
> > Generally I'm for updating dependencies, but I think that Hadoop should
> > stick to semantic versioning and not make
Generally I'm for updating dependencies, but I think that Hadoop should
stick to semantic versioning and not make major or minor dependency
updates in subminor (patch) releases.
For example, Hadoop 3.2.1 updated Guava to 27.0-jre, and because of this
Spark 3.0 is stuck on Hadoop 3.2.0 - they use
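The objection rests on semantic-versioning arithmetic: a Guava jump to 27.0-jre is a major-version change, which shouldn't ride along in a patch-level Hadoop release. A minimal sketch of that version arithmetic, assuming nothing from Hadoop's codebase (the `SemVer` class and its method names are illustrative):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Minimal semantic-version sketch; names here are illustrative only. */
public final class SemVer {
    // major.minor[.patch], with any trailing suffix such as "-jre" ignored.
    private static final Pattern P =
        Pattern.compile("(\\d+)\\.(\\d+)(?:\\.(\\d+))?.*");

    public final int major, minor, patch;

    SemVer(int major, int minor, int patch) {
        this.major = major;
        this.minor = minor;
        this.patch = patch;
    }

    /** Parses versions like "3.2.1" or "27.0-jre". */
    public static SemVer parse(String version) {
        Matcher m = P.matcher(version);
        if (!m.matches()) throw new IllegalArgumentException(version);
        return new SemVer(Integer.parseInt(m.group(1)),
                          Integer.parseInt(m.group(2)),
                          m.group(3) == null ? 0 : Integer.parseInt(m.group(3)));
    }

    /** True when moving to {@code next} changes at most the patch number. */
    public boolean patchLevelChangeOnly(SemVer next) {
        return major == next.major && minor == next.minor;
    }
}
```

Under this rule a 3.2.0 → 3.2.1 release may only carry patch-level changes, while any major or minor bump of a dependency waits for the next minor release.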
How will this proposal impact public APIs? I.e., does Hadoop expose any
Guava classes in its client APIs that would require recompiling all client
applications because they would need to use the shaded Guava classes?
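For context on why recompilation comes up at all: Hadoop's shaded third-party artifacts relocate Guava under the `org.apache.hadoop.thirdparty` prefix, so a class compiled against plain Guava references a different fully qualified name than the relocated one. A small sketch of that distinction (the `present` helper is hypothetical; the two Guava class names are the real unshaded and relocated names):

```java
/** Sketch: class identity on the JVM is the fully qualified name, which is
 *  exactly what shading rewrites. The helper method is illustrative. */
public final class ShadedGuavaCheck {
    /** Returns whether a class with this exact name is on the classpath. */
    public static boolean present(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Unshaded name: what client code compiled against plain Guava uses.
        System.out.println(present("com.google.common.base.Preconditions"));
        // Relocated name used by Hadoop's shaded third-party artifacts.
        System.out.println(present(
            "org.apache.hadoop.thirdparty.com.google.common.base.Preconditions"));
    }
}
```

If client-facing signatures ever mention a Guava type, switching that type to the relocated name is a source-incompatible change, which is the crux of the question above.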
On Sat, Apr 4, 2020 at 12:13 PM Wei-Chiu Chuang wrote:
> Hi Hadoop devs,
>
> I spent a good
What will the solution be for object stores that need fast and correct
commit algorithms?
On Wed, Sep 23, 2020 at 11:42 AM Steve Loughran
wrote:
> I've got a PR up to completely remove the v2 commit algorithm
>
> https://github.com/apache/hadoop/pull/2320
>
> That may seem like overkill, but while
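The v1/v2 distinction under discussion: v1 keeps task output private under a job-attempt directory and only publishes it at job commit, while v2 renames each task's files straight into the destination at task commit, which is faster but can leave partial output visible if the job fails part-way. A simplified local-filesystem sketch of the two behaviors (not Hadoop's actual FileOutputCommitter code; directory names are illustrative):

```java
import java.io.IOException;
import java.nio.file.*;

/** Simplified model of the v1 vs. v2 commit algorithms, not Hadoop's code. */
public class CommitSketch {

    // v1: task commit moves output under the job-attempt dir; nothing is
    // visible in the destination until job commit runs.
    static void v1TaskCommit(Path taskDir, Path jobAttemptDir) throws IOException {
        Files.createDirectories(jobAttemptDir);
        Files.move(taskDir, jobAttemptDir.resolve(taskDir.getFileName()));
    }

    // v1: job commit publishes all committed task output in one pass.
    static void v1JobCommit(Path jobAttemptDir, Path dest) throws IOException {
        Files.createDirectories(dest);
        try (DirectoryStream<Path> tasks = Files.newDirectoryStream(jobAttemptDir)) {
            for (Path task : tasks) {
                try (DirectoryStream<Path> parts = Files.newDirectoryStream(task)) {
                    for (Path part : parts) {
                        Files.move(part, dest.resolve(part.getFileName()));
                    }
                }
            }
        }
    }

    // v2: task commit renames directly into the destination, so a job that
    // fails after some tasks committed leaves their output visible.
    static void v2TaskCommit(Path taskDir, Path dest) throws IOException {
        Files.createDirectories(dest);
        try (DirectoryStream<Path> parts = Files.newDirectoryStream(taskDir)) {
            for (Path part : parts) {
                Files.move(part, dest.resolve(part.getFileName()));
            }
        }
    }
}
```

On an object store the renames are copies rather than metadata operations, which is why v2's shortcut was attractive for speed and why its partial-output window is the correctness problem cited for removal.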
+1 to re-focusing on the 3.4 branch and upgrading it to Java 11/17, instead
of making potentially breaking changes to 3.3.
On Tue, Mar 28, 2023 at 11:17 AM Chris Nauroth wrote:
> In theory, I like the idea of setting aside Java 8. Unfortunately, I don't
> know that upgrading within the 3.3 line
Igor Dvorzhak created MAPREDUCE-7185:
Summary: Parallelize part files move in FileOutputCommitter
Key: MAPREDUCE-7185
URL: https://issues.apache.org/jira/browse/MAPREDUCE-7185
Project: Hadoop Map
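The summary proposes parallelizing the part-file moves that FileOutputCommitter otherwise performs one at a time at commit. A hedged sketch of that idea using a thread pool (the method, class name, and pool size are illustrative, not taken from the actual patch):

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

/** Illustrative sketch of parallel part-file moves; not the actual patch. */
public class ParallelPartFileMove {

    /** Moves every part file into {@code dest}, spreading the renames
     *  across a fixed-size thread pool instead of doing them serially. */
    static void moveAll(List<Path> parts, Path dest, int threads)
            throws IOException, InterruptedException, ExecutionException {
        Files.createDirectories(dest);
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<Path>> futures = new ArrayList<>();
            for (Path part : parts) {
                // Each rename is an independent task; order does not matter.
                futures.add(pool.submit(() ->
                    Files.move(part, dest.resolve(part.getFileName()))));
            }
            for (Future<Path> f : futures) {
                f.get();  // propagate any rename failure to the caller
            }
        } finally {
            pool.shutdown();
        }
    }
}
```

The payoff is largest on filesystems where a rename is a copy (object stores), since the per-file latency then dominates commit time and the moves overlap cleanly.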