Amareshwari ported this in MAPREDUCE-355:
https://issues.apache.org/jira/browse/MAPREDUCE-355
It has not been backported to the 1.x line, but it is in the 2.x branch. -C
On Mon, Sep 16, 2013 at 4:34 PM, Ivan Balashov wrote:
> Hi,
>
> Just wondering if there is any particular reason that 'mapred
On Thu, Sep 13, 2012 at 7:04 AM, Martin Dobmeier wrote:
> What exactly is a segment? Is it the number of spills?
A segment in this context is a fraction of spill output for a
particular reduce. Each spill contains a segment for every reduce.
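The relationship described above (every spill contains one segment per reduce) can be sketched in a few lines. This is an illustrative model only, not Hadoop code; the partitioning function and names are made up for the example:

```python
# Illustrative sketch (not Hadoop internals): each time the map-side
# buffer fills, its contents are partitioned by reduce and written out
# as one spill, so every spill contains one segment per reduce.

def spill(records, num_reduces):
    """Partition one buffer-full of (key, value) records into segments."""
    segments = [[] for _ in range(num_reduces)]
    for key, value in records:
        # hash partitioning stands in for Hadoop's Partitioner here
        segments[hash(key) % num_reduces].append((key, value))
    return segments  # one segment per reduce, some possibly empty

buffer1 = [("a", 1), ("b", 2), ("c", 3)]
spill_out = spill(buffer1, 4)
print(len(spill_out))  # 4 segments, one per configured reduce
```

Note that a segment can be empty if no keys in that spill hash to the corresponding reduce; it still counts as a segment of the spill.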
> What does "0 segments left" mean? Does it mean that
On Tue, Sep 18, 2012 at 7:02 AM, Martin Dobmeier wrote:
> Ah, alright. But why is Hadoop telling me that there are 117 segments given
> that only 96 reducers have been configured?
> (btw, I'm using Hadoop 1.0.0)
There were 117 spills, so the merger starts with 117 files, then does an
intermediate merge
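The arithmetic behind the intermediate merges can be approximated as follows. This is a rough sketch under the assumption that each intermediate merge collapses up to io.sort.factor segments into one; Hadoop's actual merge planner is more subtle (it sizes the first pass so later passes are full), so treat the pass count as an approximation:

```python
# Rough sketch (not Hadoop's actual merge planner): while there are more
# segments than io.sort.factor, each intermediate merge turns `factor`
# segments into 1, and a final merge consumes whatever remains.

def count_merge_passes(num_segments, factor):
    passes = 0
    while num_segments > factor:
        # one intermediate merge: `factor` segments become 1
        num_segments = num_segments - factor + 1
        passes += 1
    return passes + 1  # plus the final merge

# e.g. 117 spill segments with the default io.sort.factor of 10
print(count_merge_passes(117, 10))  # → 13
```

With only 5 segments and a factor of 10 there are no intermediate merges at all, just the single final merge.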
It was ported in a later version:
https://issues.apache.org/jira/browse/MAPREDUCE-355
-C
On Wed, Oct 10, 2012 at 7:47 AM, Sigurd Spieckermann wrote:
> Hi,
>
> I've just noticed that the join-package only exists in the old map-reduce
> API. Is there a particular reason why it's not in the new API?
See https://issues.apache.org/jira/browse/MAPREDUCE-355 (not in 1.x series) -C
On Tue, Nov 13, 2012 at 8:26 AM, Guang Yang wrote:
> Hi,
>
> I'm trying to use Hadoop map-side join in my application and wondering if
> anybody knows if there's a way to use it with the new Hadoop API
> ("org.apache.h
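The join package being asked about implements a map-side merge join over inputs that are sorted and partitioned identically. The core idea can be sketched language-neutrally; this is not the CompositeInputFormat API, just an illustration of the streaming merge it relies on (and it assumes unique keys per side, which real inputs need not satisfy):

```python
# Sketch of the map-side merge-join idea: two inputs sorted on the same
# key can be inner-joined in one streaming pass with no shuffle, which
# is why both inputs must already be sorted and partitioned the same way.

def merge_inner_join(left, right):
    """Inner-join two key-sorted lists of (key, value) pairs."""
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        lk, rk = left[i][0], right[j][0]
        if lk < rk:
            i += 1          # left key has no partner yet; advance left
        elif lk > rk:
            j += 1          # right key has no partner yet; advance right
        else:
            result.append((lk, left[i][1], right[j][1]))
            i += 1
            j += 1
    return result

a = [("a", 1), ("b", 2), ("d", 4)]
b = [("b", 20), ("c", 30), ("d", 40)]
print(merge_inner_join(a, b))  # → [('b', 2, 20), ('d', 4, 40)]
```

Because the join happens in the mappers, any shuffle beforehand would destroy the ordering the merge depends on; that constraint is the whole point of doing it map-side.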
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
CVE-2017-3161: Apache Hadoop NameNode XSS vulnerability
Severity: Important
Vendor: The Apache Software Foundation
Versions affected: Hadoop 2.6.x and earlier
Description:
The HDFS web UI is vulnerable to a cross-site scripting (XSS) attack
throu
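Reflected XSS of this kind is generally fixed by escaping untrusted request parameters before echoing them into HTML. The following is a generic illustration of that class of fix, not the actual Hadoop patch; the function name and page fragment are made up:

```python
import html

# Generic illustration of the fix for reflected XSS: any request-supplied
# value must be HTML-escaped before being written into a page, so that
# markup in the input renders as inert text instead of executing.

def render_filename(untrusted):
    return "<p>File: %s</p>" % html.escape(untrusted)

print(render_filename('<script>alert(1)</script>'))
# → <p>File: &lt;script&gt;alert(1)&lt;/script&gt;</p>
```

The escaped output displays the attacker's input literally in the browser rather than running it as script.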
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
CVE-2017-3162: Apache Hadoop DataNode web UI vulnerability
Severity: Important
Vendor: The Apache Software Foundation
Versions affected: Hadoop 2.6.x and earlier
Description:
HDFS clients interact with a servlet on the DataNode to browse the
HDFS
On Tue, Feb 20, 2018 at 3:09 AM, Lars Francke wrote:
> Is this intentional or just oversight/inconsistencies?
The release candidate (RC) tags are created during votes. They can
probably be cleaned up after the release is published.
At a glance, rel/ looks correct. The hash should match the RC tag