[jira] [Resolved] (HADOOP-16658) S3A connector does not support including the token renewer in the token identifier

2019-10-23 Thread Steve Loughran (Jira)
[ https://issues.apache.org/jira/browse/HADOOP-16658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-16658. - Fix Version/s: 3.3.0 Resolution: Fixed > S3A connector does not support
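(For context: the "renewer" in the issue title is the principal a client names when requesting delegation tokens, so that a service such as the YARN RM can renew them later. A minimal client-side sketch using the standard Hadoop FileSystem API follows; the bucket name and renewer string are illustrative assumptions, not taken from the Jira.)

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.security.Credentials;
    import org.apache.hadoop.security.token.Token;

    public class S3ADelegationTokenSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical bucket; assumes the S3A connector is configured to issue delegation tokens.
            FileSystem fs = FileSystem.get(new Path("s3a://example-bucket/").toUri(), conf);

            // The renewer is supplied here; HADOOP-16658 is about S3A recording it
            // in the token identifier it issues.
            Credentials creds = new Credentials();
            Token<?>[] tokens = fs.addDelegationTokens("yarn", creds);
            for (Token<?> t : tokens) {
                System.out.println("Issued token kind: " + t.getKind());
            }
        }
    }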

[jira] [Resolved] (HADOOP-16316) S3A delegation tests fail if you set fs.s3a.secret.key

2019-10-23 Thread Steve Loughran (Jira)
[ https://issues.apache.org/jira/browse/HADOOP-16316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-16316. - Resolution: Duplicate HADOOP-16477 is the same issue; this came first but that has a
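(For readers unfamiliar with the property named in the title: fs.s3a.secret.key, together with fs.s3a.access.key, supplies static S3A credentials, and the Jira concerns the delegation-token test suite colliding with such a setting. A minimal sketch of setting them programmatically, with placeholder values only:)

    import org.apache.hadoop.conf.Configuration;

    public class S3ACredentialConfigSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Static credentials of the kind the delegation-token tests tripped over.
            conf.set("fs.s3a.access.key", "AKIA-PLACEHOLDER");
            conf.set("fs.s3a.secret.key", "placeholder-secret");
            System.out.println("fs.s3a.secret.key is set: "
                + (conf.get("fs.s3a.secret.key") != null));
        }
    }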

Re: [VOTE] Release Apache Hadoop 2.10.0 (RC1)

2019-10-23 Thread epa...@apache.org
Hi Jonathan, Thanks very much for all of your work on this release. I have a concern about cross-queue (inter-queue) preemption in 2.10. In 2.8, on a 6 node pseudo-cluster, preempting from one queue to meet the needs of another queue seems to work as expected. However, 2.10 in the same
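(Since the concern above hinges on inter-queue preemption, here is a minimal sketch of the CapacityScheduler settings that enable it, which may help others reproduce the 2.8 vs 2.10 comparison. The queue layout and capacities are assumptions, not taken from the reporter's setup; in a real cluster these live in yarn-site.xml and capacity-scheduler.xml rather than being set in code.)

    import org.apache.hadoop.conf.Configuration;

    public class PreemptionConfigSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Enable the scheduler monitor that drives inter-queue preemption.
            conf.setBoolean("yarn.resourcemanager.scheduler.monitor.enable", true);
            conf.set("yarn.resourcemanager.scheduler.monitor.policies",
                "org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity."
                    + "ProportionalCapacityPreemptionPolicy");
            // Example two-queue layout whose guaranteed capacities preemption should enforce.
            conf.set("yarn.scheduler.capacity.root.queues", "a,b");
            conf.set("yarn.scheduler.capacity.root.a.capacity", "50");
            conf.set("yarn.scheduler.capacity.root.b.capacity", "50");
            System.out.println("Preemption monitor enabled: "
                + conf.getBoolean("yarn.resourcemanager.scheduler.monitor.enable", false));
        }
    }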

Re: [VOTE] Release Apache Hadoop 2.10.0 (RC1)

2019-10-23 Thread Jonathan Hung
Hi Eric, thanks for trying it out. We talked about this in today's YARN community sync-up; summarizing here for everyone else: I don't think it's worth delaying the 2.10.0 release further; we can address this in a subsequent 2.10.x release. Wangda mentioned it might be related to changes in

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-10-23 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1298/ [Oct 22, 2019 1:04:02 PM] (ayushsaxena) HDFS-14918. Remove useless getRedundancyThread from [Oct 22, 2019 1:14:22 PM] (ayushsaxena) HDFS-14915. Move Superuser Check Before Taking Lock For Encryption API.

Re: [Discuss] Hadoop-Ozone repository mailing list configurations

2019-10-23 Thread Elek, Marton
Thanks for reporting this problem, Rohith. Yes, it seems to be configured with the wrong mailing list. I think the right fix is to create ozone-dev@ and ozone-issues@ and use them instead of hdfs-(dev/issues). Are there any objections to creating new ozone-* mailing lists? Thanks, Marton

Re: [Discuss] Hadoop-Ozone repository mailing list configurations

2019-10-23 Thread Matt Foley
Definitely yes on ‘ozone-issues’. Whether we want to keep ozone-dev and hdfs-dev together or separate, I’m neutral. Thanks, —Matt On Oct 23, 2019, at 2:11 PM, Elek, Marton wrote: Thanks for reporting this problem, Rohith. Yes, it seems to be configured with the wrong mailing list. I think the

Re: [Discuss] Hadoop-Ozone repository mailing list configurations

2019-10-23 Thread Wangda Tan
We're going to fix the Submarine email list issues once the spin-off work starts. On Wed, Oct 23, 2019 at 2:39 PM Matt Foley wrote: > Definitely yes on ‘ozone-issues’. Whether we want to keep ozone-dev and > hdfs-dev together or separate, I’m neutral. > Thanks, > —Matt > > On Oct 23, 2019, at 2:11

Reminder: APAC Hadoop storage community sync

2019-10-23 Thread Wei-Chiu Chuang
10pm Wednesday PDT = tonight, 1pm Thursday CST = today. Feel free to join Zoom and chat. Join Zoom Meeting: https://cloudera.zoom.us/j/880548968 Past sessions: https://docs.google.com/document/d/1jXM5Ujvf-zhcyw_5kiQVx6g-HeKe-YGnFS_1-qFXomI/edit Also a heads-up: on November 20/21, Feilong from

Re: [VOTE] Release Apache Hadoop 2.10.0 (RC1)

2019-10-23 Thread Konstantin Shvachko
+1 on RC1 - Verified signatures - Verified maven artifacts on Nexus for sources - Checked rat reports - Checked documentation - Checked packaging contents - Built from sources on RHEL 7 box - Ran unit tests for new HDFS features with Java 8 Thanks, --Konstantin On Tue, Oct 22, 2019 at 2:55 PM

Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-10-23 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/484/ No changes