[jira] [Resolved] (HADOOP-16853) ITestS3GuardOutOfBandOperations failing on versioned S3 buckets

2020-02-24 Thread Mingliang Liu (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-16853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mingliang Liu resolved HADOOP-16853.

Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to {{trunk}}. Thanks [~ste...@apache.org] for reporting and fixing. 
Thanks [~gabor.bota] for review.

> ITestS3GuardOutOfBandOperations failing on versioned S3 buckets
> ---
>
> Key: HADOOP-16853
> URL: https://issues.apache.org/jira/browse/HADOOP-16853
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.3.0
>
>
> org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations.testListingDelete[auth=true]
> is failing because the deleted file can still be read when the S3Guard entry
> has the versionId.
> Proposed: if the FS is versioned and the file status has a versionId, then
> switch to tests that assert the file is readable, rather than tests that
> assert it isn't there.
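The proposed switch boils down to a single predicate on the two conditions. A minimal sketch, assuming hypothetical boolean flags; expectReadableAfterDelete() is illustrative and not part of the real ITestS3GuardOutOfBandOperations:

```java
// A minimal sketch of the proposed assertion switch; the flags and the
// helper name are illustrative, not the actual test code.
public class VersionedReadCheck {

    /**
     * On a versioned bucket a delete only adds a delete marker, so a reader
     * holding the old versionId can still fetch the object. In that case the
     * test must assert the file is readable instead of asserting absence.
     */
    public static boolean expectReadableAfterDelete(boolean fsIsVersioned,
                                                    boolean statusHasVersionId) {
        return fsIsVersioned && statusHasVersionId;
    }
}
```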



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] EOL Hadoop branch-2.8

2020-02-24 Thread Wei-Chiu Chuang
Looking at the EOL policy wiki:
https://cwiki.apache.org/confluence/display/HADOOP/EOL+%28End-of-life%29+Release+Branches

The Hadoop community can still elect to make security update for EOL'ed
releases.

I think the EOL is to give downstream applications (such as HBase) clearer
guidance on which Hadoop release lines are still active. Additionally, I don't
think it is sustainable to maintain 6 concurrent release lines in a project
this big, which is why I wanted to start this discussion.

Thoughts?

On Mon, Feb 24, 2020 at 10:22 AM Sunil Govindan  wrote:

> Hi Wei-Chiu
>
> Extremely sorry for the late reply here.
> Could you please add more clarity on what will happen to branch-2.8 when
> we call it EOL. Does this mean no more releases will come out of this
> branch, or are there additional guidelines?
>
> - Sunil
>
>
> On Mon, Feb 24, 2020 at 11:47 PM Wei-Chiu Chuang
>  wrote:
>
> > This thread has been running for 7 days and no -1.
> >
> > Don't think we've established a formal EOL process, but to publicize the
> > EOL, I am going to file a jira, update the wiki and post the announcement
> > to general@ and user@
> >
> > On Wed, Feb 19, 2020 at 1:40 PM Dinesh Chitlangia  >
> > wrote:
> >
> > > Thanks Wei-Chiu for initiating this.
> > >
> > > +1 for 2.8 EOL.
> > >
> > > On Tue, Feb 18, 2020 at 10:48 PM Akira Ajisaka 
> > > wrote:
> > >
> > > > Thanks Wei-Chiu for starting the discussion,
> > > >
> > > > +1 for the EoL.
> > > >
> > > > -Akira
> > > >
> > > > On Tue, Feb 18, 2020 at 4:59 PM Ayush Saxena 
> > wrote:
> > > >
> > > > > Thanx Wei-Chiu for initiating this
> > > > > +1 for marking 2.8 EOL
> > > > >
> > > > > -Ayush
> > > > >
> > > > > > On 17-Feb-2020, at 11:14 PM, Wei-Chiu Chuang  >
> > > > wrote:
> > > > > >
> > > > > > The last Hadoop 2.8.x release, 2.8.5, was GA on September 15th
> > 2018.
> > > > > >
> > > > > > It's been 17 months since the release and the community by and
> > large
> > > > have
> > > > > > moved up to 2.9/2.10/3.x.
> > > > > >
> > > > > > With Hadoop 3.3.0 over the horizon, is it time to start the EOL
> > > > > discussion
> > > > > > and reduce the number of active branches?
> > > > >
> > > > >
> > > > >
> > > >
> > >
> >
>


[jira] [Created] (HADOOP-16880) Declare 2.8.x release line EOL

2020-02-24 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HADOOP-16880:


 Summary: Declare 2.8.x release line EOL
 Key: HADOOP-16880
 URL: https://issues.apache.org/jira/browse/HADOOP-16880
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 2.8.6
Reporter: Wei-Chiu Chuang


The Hadoop community discussed the 2.8.x release line EOL plan:
https://s.apache.org/hadoop2.8eol

The first 2.8 release, 2.8.0, was released on 03/27/2017.
The last 2.8 release, 2.8.5, was released on 09/15/2018.

Per the [Hadoop EOL 
policy|https://cwiki.apache.org/confluence/display/HADOOP/EOL+%28End-of-life%29+Release+Branches], 
2.8.5 was released nearly 18 months ago, and with no volunteer taking up the RM 
role for 2.8.6, it is time to start the EOL process.

I am going to update the wiki and post an announcement to the user/general 
mailing list declaring 2.8 EOL.






Re: [DISCUSS] EOL Hadoop branch-2.8

2020-02-24 Thread Sunil Govindan
Hi Wei-Chiu

Extremely sorry for the late reply here.
Could you please add more clarity on what will happen to branch-2.8 when we
call it EOL. Does this mean no more releases will come out of this branch, or
are there additional guidelines?

- Sunil


On Mon, Feb 24, 2020 at 11:47 PM Wei-Chiu Chuang
 wrote:

> This thread has been running for 7 days and no -1.
>
> Don't think we've established a formal EOL process, but to publicize the
> EOL, I am going to file a jira, update the wiki and post the announcement
> to general@ and user@
>
> On Wed, Feb 19, 2020 at 1:40 PM Dinesh Chitlangia 
> wrote:
>
> > Thanks Wei-Chiu for initiating this.
> >
> > +1 for 2.8 EOL.
> >
> > On Tue, Feb 18, 2020 at 10:48 PM Akira Ajisaka 
> > wrote:
> >
> > > Thanks Wei-Chiu for starting the discussion,
> > >
> > > +1 for the EoL.
> > >
> > > -Akira
> > >
> > > On Tue, Feb 18, 2020 at 4:59 PM Ayush Saxena 
> wrote:
> > >
> > > > Thanx Wei-Chiu for initiating this
> > > > +1 for marking 2.8 EOL
> > > >
> > > > -Ayush
> > > >
> > > > > On 17-Feb-2020, at 11:14 PM, Wei-Chiu Chuang 
> > > wrote:
> > > > >
> > > > > The last Hadoop 2.8.x release, 2.8.5, was GA on September 15th
> 2018.
> > > > >
> > > > > It's been 17 months since the release and the community by and
> large
> > > have
> > > > > moved up to 2.9/2.10/3.x.
> > > > >
> > > > > With Hadoop 3.3.0 over the horizon, is it time to start the EOL
> > > > discussion
> > > > > and reduce the number of active branches?
> > > >
> > > >
> > >
> >
>


Re: [DISCUSS] EOL Hadoop branch-2.8

2020-02-24 Thread Wei-Chiu Chuang
This thread has been running for 7 days with no -1.

I don't think we've established a formal EOL process, but to publicize the
EOL, I am going to file a jira, update the wiki, and post the announcement
to general@ and user@.

On Wed, Feb 19, 2020 at 1:40 PM Dinesh Chitlangia 
wrote:

> Thanks Wei-Chiu for initiating this.
>
> +1 for 2.8 EOL.
>
> On Tue, Feb 18, 2020 at 10:48 PM Akira Ajisaka 
> wrote:
>
> > Thanks Wei-Chiu for starting the discussion,
> >
> > +1 for the EoL.
> >
> > -Akira
> >
> > On Tue, Feb 18, 2020 at 4:59 PM Ayush Saxena  wrote:
> >
> > > Thanx Wei-Chiu for initiating this
> > > +1 for marking 2.8 EOL
> > >
> > > -Ayush
> > >
> > > > On 17-Feb-2020, at 11:14 PM, Wei-Chiu Chuang 
> > wrote:
> > > >
> > > > The last Hadoop 2.8.x release, 2.8.5, was GA on September 15th 2018.
> > > >
> > > > It's been 17 months since the release and the community by and large
> > have
> > > > moved up to 2.9/2.10/3.x.
> > > >
> > > > With Hadoop 3.3.0 over the horizon, is it time to start the EOL
> > > discussion
> > > > and reduce the number of active branches?
> > >
> > >
> >
>


[jira] [Created] (HADOOP-16879) s3a mkdirs() to not check dest for a dir marker

2020-02-24 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16879:
---

 Summary: s3a mkdirs() to not check dest for a dir marker
 Key: HADOOP-16879
 URL: https://issues.apache.org/jira/browse/HADOOP-16879
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.2.1
Reporter: Steve Loughran


S3A innerMkdirs() calls getFileStatus() to probe the dest path for being a file 
or a dir, then rejects or no-ops accordingly.

The HEAD on path + "/" in that code may add a 404 to the S3 load balancers' 
cache, so subsequent probes for the path fail.

Proposed: only probe for a file (HEAD), then LIST underneath.

If no entry is found: probe for the parent being a dir (LIST; HEAD + "/"); if 
it is, create the marker entry. If not, start the walk (or should we then 
check?)

This increases the cost of mkdir on an existing empty dir marker and reduces it 
on a non-empty dir. It creates dir markers above dir markers to avoid those 
cached 404s.
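The probe order proposed above can be sketched as a small decision function. The flags and the Outcome enum below are illustrative stand-ins for the S3A internals, not the real innerMkdirs() code:

```java
// Illustrative sketch of the proposed mkdirs probe order; not the S3A code.
public class MkdirsProbe {

    public enum Outcome { REJECT_FILE, NO_OP_EXISTING_DIR, CREATE_MARKER }

    /**
     * Probe the destination without a HEAD on path + "/" (which can leave a
     * cached 404 in the S3 load balancers): HEAD for a plain file first,
     * then LIST underneath the path.
     */
    public static Outcome probe(boolean headFindsFile, boolean listFindsChildren) {
        if (headFindsFile) {
            return Outcome.REJECT_FILE;          // dest is a file: mkdirs fails
        }
        if (listFindsChildren) {
            return Outcome.NO_OP_EXISTING_DIR;   // dest already acts as a dir
        }
        return Outcome.CREATE_MARKER;            // nothing found: create marker
    }
}
```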







Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-02-24 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/

[Feb 23, 2020 8:55:39 AM] (ayushsaxena) HDFS-15041. Make MAX_LOCK_HOLD_MS and 
full queue size configurable.
[Feb 23, 2020 6:37:18 PM] (ayushsaxena) HDFS-15176. Enable GcTimePercentage 
Metric in NameNode's JvmMetrics.




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use the clone method. At TaskStatus.java:[lines 39-346]
   The equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId. At WorkerId.java:[line 114]
   org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for a null argument. At WorkerId.java:[lines 114-115]
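The two WorkerId warnings describe the standard equals() hardening pattern: guard against null and against arguments of a different runtime type before casting. A toy sketch (this WorkerId is illustrative, not the MaWo implementation):

```java
// Toy WorkerId showing the equals() pattern FindBugs asks for; not the
// MaWo implementation.
public class WorkerId {
    private final String id;

    public WorkerId(String id) {
        this.id = id;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        // Rejects null and non-WorkerId arguments in one check, before the cast.
        if (o == null || getClass() != o.getClass()) {
            return false;
        }
        return id.equals(((WorkerId) o).id);
    }

    @Override
    public int hashCode() {
        return id.hashCode();
    }
}
```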

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null, in org.apache.hadoop.fs.cosn.BufferPool.createDir(String). At BufferPool.java:[line 66]
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer. At CosNInputStream.java:[line 87]
   Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]). At CosNativeFileSystemStore.java:[line 199]
   Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]). At CosNativeFileSystemStore.java:[line 178]
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up a java.io.InputStream; the obligation to clean up the resource created at CosNativeFileSystemStore.java:[line 252] is not discharged.
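The "reliance on default encoding" warnings are fixed by naming the charset explicitly rather than calling new String(byte[]), whose result depends on the platform default. A sketch; bytesToString() is an illustrative helper, not the CosNativeFileSystemStore code:

```java
import java.nio.charset.StandardCharsets;

// Sketch of the default-encoding fix: pass an explicit charset.
public class EncodingFix {
    public static String bytesToString(byte[] raw) {
        // new String(raw) would use the JVM's default charset; be explicit.
        return new String(raw, StandardCharsets.UTF_8);
    }
}
```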

Failed junit tests :

   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.TestDFSStorageStateRecovery 
   hadoop.hdfs.TestErasureCodingPolicyWithSnapshot 
   hadoop.hdfs.TestReconstructStripedFile 
   hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor 
   hadoop.hdfs.TestErasureCodingExerciseAPIs 
   hadoop.hdfs.server.namenode.TestQuotaByStorageType 
   hadoop.hdfs.TestReadStripedFileWithDNFailure 
   hadoop.hdfs.TestFileChecksum 
   hadoop.hdfs.TestGetBlocks 
   hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy 
   hadoop.hdfs.TestDFSStripedOutputStream 
   hadoop.hdfs.TestErasureCodeBenchmarkThroughput 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant 
   hadoop.hdfs.TestSetrepDecreasing 
   hadoop.hdfs.TestDFSStripedInputStream 
   hadoop.hdfs.TestLeaseRecovery 
   

[jira] [Created] (HADOOP-16878) Copy command in FileUtil to LOG at warn level if the source and destination is the same

2020-02-24 Thread Gabor Bota (Jira)
Gabor Bota created HADOOP-16878:
---

 Summary: Copy command in FileUtil to LOG at warn level if the 
source and destination is the same
 Key: HADOOP-16878
 URL: https://issues.apache.org/jira/browse/HADOOP-16878
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.3.0
Reporter: Gabor Bota
Assignee: Gabor Bota


We encountered an error during a test in our QE when the file destination and 
source paths were the same. This happened during an ADLS test, and there were 
no meaningful error messages, so it was hard to find the root cause of the 
failure.
The error we saw was that the file size changed during the copy operation: 
creating the new file at the destination, which is the same as the source, 
truncates the file to zero length. After that, reading the source file fails 
because the file size changed during the operation.

I propose that {{FileUtil}} log at least at warn level if the source and 
destination of the copy operation are the same, so debugging issues like this 
will be easier in the future.
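The check itself is cheap: normalize both paths and compare before copying. A minimal sketch under assumptions; FileUtil's real copy() signature is more involved, and isSamePath() is an illustrative helper whose caller would emit the warning:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch of the proposed same-path guard; illustrative, not FileUtil itself.
public class CopyGuard {
    /** True when src and dst resolve to the same location, i.e. the copy
     *  would truncate its own source and a warning should be logged. */
    public static boolean isSamePath(String src, String dst) {
        Path a = Paths.get(src).toAbsolutePath().normalize();
        Path b = Paths.get(dst).toAbsolutePath().normalize();
        return a.equals(b);
    }
}
```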






Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-02-24 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
   Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean). At ColumnRWHelper.java:[line 335]
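The unbox-then-rebox warning means a boxed value was unwrapped with longValue() only to be re-wrapped with Long.valueOf(); the fix is to reuse the already-boxed reference. A sketch; readValue() is an illustrative helper, not the ColumnRWHelper code:

```java
import java.util.Map;

// Sketch of the reboxing fix: return the boxed value directly.
public class ReboxFix {
    public static Long readValue(Map<String, Long> results, String key) {
        // Flagged form: return Long.valueOf(results.get(key).longValue());
        return results.get(key); // no longValue()/valueOf round trip
    }
}
```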

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem 
   hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/diff-compile-cc-root-jdk1.8.0_242.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/diff-compile-javac-root-jdk1.8.0_242.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_242.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [232K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
  [36K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/606/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]