Re: Reverted YARN-10063 from branch-3.2

2020-04-23 Thread Wilfred Spiegelenburg
Sorry for that. I thought I had reverted it, but either I did not push or the 
push failed.
I just checked, and my local branch-3.2 has the revert in it, dated about two 
hours after the original commit.

Wilfred

> On 24 Apr 2020, at 05:52, Wei-Chiu Chuang  
> wrote:
> 
> It broke the build (see here) so I reverted the commit. Looks like it was 
> unintentional.


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17011) Trailing whitespace in fs.defaultFS will crash namenode and datanode

2020-04-23 Thread Ctest (Jira)
Ctest created HADOOP-17011:
--

 Summary: Trailing whitespace in fs.defaultFS will crash namenode 
and datanode
 Key: HADOOP-17011
 URL: https://issues.apache.org/jira/browse/HADOOP-17011
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Reporter: Ctest


*Problem:*

Currently, `getDefaultUri` uses `conf.get` to read the value of 
`fs.defaultFS`, which means that trailing whitespace after a valid URI 
is not removed and can stop the namenode and datanode from starting up.
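The failure can be illustrated outside Hadoop with `java.net.URI` alone (a minimal sketch; the class name is arbitrary):

```java
import java.net.URI;

class TrailingWhitespaceDemo {
    public static void main(String[] args) {
        // A trimmed value parses fine.
        System.out.println(URI.create("hdfs://localhost:9000").getAuthority());

        // The same value with a trailing space fails inside URI.create,
        // which is the exception the namenode hits at startup.
        try {
            URI.create("hdfs://localhost:9000 ");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```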

 

*How to reproduce (Hadoop-2.8.5):*

Set the following property in core-site.xml (note the trailing whitespace 
after 9000) and start HDFS:

{code:java}
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000 </value>
</property>
{code}

The namenode and datanode won’t start, and the log shows:
{code:java}
2020-04-23 11:09:48,198 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: 
Failed to start namenode.
java.lang.IllegalArgumentException: Illegal character in authority at index 7: 
hdfs://localhost:9000 
at java.net.URI.create(URI.java:852)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.setClientNamenodeAddress(NameNode.java:440)
at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:897)
at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:885)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1626)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1694)
Caused by: java.net.URISyntaxException: Illegal character in authority at index 
7: hdfs://localhost:9000 
at java.net.URI$Parser.fail(URI.java:2848)
at java.net.URI$Parser.parseAuthority(URI.java:3186)
at java.net.URI$Parser.parseHierarchical(URI.java:3097)
at java.net.URI$Parser.parse(URI.java:3053)
at java.net.URI.(URI.java:588)
at java.net.URI.create(URI.java:850)
... 5 more
{code}
 

*Solution:*

Use `getTrimmed` instead of `get` for `fs.defaultFS`:
{code:java}
public static URI getDefaultUri(Configuration conf) {
  URI uri =
      URI.create(fixName(conf.getTrimmed(FS_DEFAULT_NAME_KEY, DEFAULT_FS)));
  if (uri.getScheme() == null) {
    throw new IllegalArgumentException("No scheme in default FS: " + uri);
  }
  return uri;
}
{code}
I have submitted a patch against trunk for this issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




Reverted YARN-10063 from branch-3.2

2020-04-23 Thread Wei-Chiu Chuang
It broke the build (see here) so I reverted the commit. Looks like it was 
unintentional.


Next week's Hadoop storage community call: HDFS-15289

2020-04-23 Thread Wei-Chiu Chuang
Hi!

I am inviting @Uma Maheswara Rao Gangumalla to talk about a new feature
proposal, HDFS-15289, and to invite the community to discuss it. This
feature is built upon ViewFS and overcomes its limitations. It is intended
to bridge Ozone and HDFS, making the migration to Ozone more seamless. It
can also make migration to the cloud easier.

For details, please check out the design doc in the JIRA attachment.

April 29th (Wednesday) US Pacific: 10am, GMT 5pm, India 10:30pm

Please join via Zoom:
https://cloudera.zoom.us/j/880548968

Past meeting minutes
https://docs.google.com/document/d/1jXM5Ujvf-zhcyw_5kiQVx6g-HeKe-YGnFS_1-qFXomI/edit


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-04-23 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1478/

[Apr 22, 2020 4:39:48 AM] (github) Hadoop 16857. ABFS: Stop CustomTokenProvider 
retry logic to depend on
[Apr 22, 2020 7:36:33 PM] (liuml07) HADOOP-17001. The suffix name of the 
unified compression class.
[Apr 22, 2020 8:31:02 PM] (liuml07) HDFS-15276. Concat on INodeRefernce fails 
with illegal state exception.




-1 overall


The following subsystems voted -1:
asflicense compile findbugs mvnsite pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Possible null pointer dereference of effectiveDirective in 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheDirective(CacheDirectiveInfo,
 EnumSet, boolean) Dereferenced at FSNamesystem.java:effectiveDirective in 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheDirective(CacheDirectiveInfo,
 EnumSet, boolean) Dereferenced at FSNamesystem.java:[line 7444] 
   Possible null pointer dereference of ret in 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(String, String, 
boolean) Dereferenced at FSNamesystem.java:ret in 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(String, String, 
boolean) Dereferenced at FSNamesystem.java:[line 3213] 
   Possible null pointer dereference of res in 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(String, String, 
boolean, Options$Rename[]) Dereferenced at FSNamesystem.java:res in 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(String, String, 
boolean, Options$Rename[]) Dereferenced at FSNamesystem.java:[line 3248] 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common
 
   org.apache.hadoop.yarn.server.webapp.WebServiceClient.sslFactory should 
be package protected At WebServiceClient.java: At WebServiceClient.java:[line 
42] 

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 

Failed junit tests :

   hadoop.hdfs.server.namenode.ha.TestConfiguredFailoverProxyProvider 
   
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1478/artifact/out/patch-compile-root.txt
  [720K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1478/artifact/out/patch-compile-root.txt
  [720K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1478/artifact/out/patch-compile-root.txt
  [720K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1478/artifact/out/diff-checkstyle-root.txt
  [16M]

   mvnsite:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1478/artifact/out/patch-mvnsite-root.txt
  [284K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1478/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1478/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1478/artifact/out/diff-patch-shellcheck.txt
  [16K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1478/artifact/out/diff-patch-shelldocs.txt
  [96K]

   

[jira] [Resolved] (HADOOP-16914) Adding Output Stream Counters in ABFS

2020-04-23 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16914.
-
Fix Version/s: 3.3.0
   Resolution: Fixed

> Adding Output Stream Counters in ABFS
> -
>
> Key: HADOOP-16914
> URL: https://issues.apache.org/jira/browse/HADOOP-16914
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.2.1
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
> Fix For: 3.3.0
>
>
> AbfsOutputStream does not have any counters that can be populated or referred 
> to when needed for finding bottlenecks in that area.
> Purpose:
>  * Create an interface and implementation class for all the AbfsOutputStream 
> counters.
>  * Populate the counters in AbfsOutputStream in appropriate places.
>  * Override toString() to see the counters in logs.






Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-04-23 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/664/

[Apr 22, 2020 9:53:15 PM] (liuml07) HDFS-15276. Concat on INodeRefernce fails 
with illegal state exception.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-minikdc 
   Possible null pointer dereference in 
org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called 
method Dereferenced at 
MiniKdc.java:org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value 
of called method Dereferenced at MiniKdc.java:[line 515] 

FindBugs :

   module:hadoop-common-project/hadoop-auth 
   
org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest,
 HttpServletResponse) makes inefficient use of keySet iterator instead of 
entrySet iterator At MultiSchemeAuthenticationHandler.java:of keySet iterator 
instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 
192] 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) 
unconditionally sets the field unknownValue At CipherSuite.java:unknownValue At 
CipherSuite.java:[line 44] 
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) 
unconditionally sets the field unknownValue At 
CryptoProtocolVersion.java:unknownValue At CryptoProtocolVersion.java:[line 67] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of 
called method Dereferenced at 
FileUtil.java:org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to 
return value of called method Dereferenced at FileUtil.java:[line 118] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, 
File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path,
 File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:[line 383] 
   Useless condition:lazyPersist == true at this point At 
CommandWithDestination.java:[line 502] 
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) 
incorrectly handles double value At DoubleWritable.java: At 
DoubleWritable.java:[line 78] 
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles double value At DoubleWritable.java:int) 
incorrectly handles double value At DoubleWritable.java:[line 97] 
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly 
handles float value At FloatWritable.java: At FloatWritable.java:[line 71] 
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles float value At FloatWritable.java:int) 
incorrectly handles float value At FloatWritable.java:[line 89] 
   Possible null pointer dereference in 
org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return 
value of called method Dereferenced at 
IOUtils.java:org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) 
due to return value of called method Dereferenced at IOUtils.java:[line 389] 
   Possible bad parsing of shift operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At 
Utils.java:operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 
398] 
   
org.apache.hadoop.metrics2.lib.DefaultMetricsFactory.setInstance(MutableMetricsFactory)
 unconditionally sets the field mmfImpl At DefaultMetricsFactory.java:mmfImpl 
At DefaultMetricsFactory.java:[line 49] 
   
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.setMiniClusterMode(boolean) 
unconditionally sets the field miniClusterMode At 
DefaultMetricsSystem.java:miniClusterMode At DefaultMetricsSystem.java:[line 
92] 
   Useless object stored in variable seqOs of method 
org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.addOrUpdateToken(AbstractDelegationTokenIdentifier,
 

[jira] [Created] (HADOOP-17010) Add queue capacity weights support in FairCallQueue

2020-04-23 Thread Fengnan Li (Jira)
Fengnan Li created HADOOP-17010:
---

 Summary: Add queue capacity weights support in FairCallQueue
 Key: HADOOP-17010
 URL: https://issues.apache.org/jira/browse/HADOOP-17010
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Fengnan Li
Assignee: Fengnan Li


Right now in FairCallQueue, all subqueues share the same capacity: the total 
capacity is distributed evenly. This feature request is to let subqueues 
have different capacities, so that more important queues can have more 
capacity, resulting in fewer queue overflows and client backoffs.
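A weighted split could look like the following sketch (the class and method names are illustrative only, not the actual FairCallQueue API):

```java
import java.util.Arrays;

// Hypothetical helper: divide a total call-queue capacity across
// subqueues proportionally to integer weights.
class WeightedCapacity {
    static int[] split(int totalCapacity, int[] weights) {
        int weightSum = Arrays.stream(weights).sum();
        int[] caps = new int[weights.length];
        int assigned = 0;
        for (int i = 0; i < weights.length; i++) {
            // Integer division; remainders are handled below.
            caps[i] = totalCapacity * weights[i] / weightSum;
            assigned += caps[i];
        }
        // Give any rounding remainder to the highest-priority queue.
        caps[0] += totalCapacity - assigned;
        return caps;
    }

    public static void main(String[] args) {
        // Four subqueues, total capacity 1000, weights favouring queue 0.
        System.out.println(Arrays.toString(split(1000, new int[]{4, 3, 2, 1})));
    }
}
```

With equal weights this degenerates to today's even distribution, so the change would be backward compatible for existing configurations.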






Re: [Hadoop-3.3 Release update]- branch-3.3 has created

2020-04-23 Thread Akira Ajisaka
> Since blockers are not closed, I didn't cut the branch because
> multiple branches might confuse people or somebody might forget to commit.

The current situation is already confusing. The 3.3.1 version already
exists in JIRA, so some committers wrongly commit non-critical issues to
branch-3.3 and set the fix version to 3.3.1.
I think we should now cut branch-3.3.0 and freeze the source code except
for the blockers.

-Akira

On Tue, Apr 21, 2020 at 3:05 PM Brahma Reddy Battula 
wrote:

> Sure, I will do that.
>
> Since blockers are not closed, I didn't cut the branch because
> multiple branches might confuse people or somebody might forget to commit.
> Shall I wait till this weekend to create it?
>
> On Mon, Apr 20, 2020 at 11:57 AM Akira Ajisaka 
> wrote:
>
>> Hi Brahma,
>>
>> Thank you for preparing the release.
>> Could you cut branch-3.3.0? I would like to backport some fixes for 3.3.1
>> and not for 3.3.0.
>>
>> Thanks and regards,
>> Akira
>>
>> On Fri, Apr 17, 2020 at 11:11 AM Brahma Reddy Battula 
>> wrote:
>>
>>> Hi All,
>>>
>>> We are down to two blocker issues now (YARN-10194 and YARN-9848), which
>>> are in patch-available state. Hopefully we can get the RC out soon.
>>>
>>> Thanks to @Prabhu Joseph, @masakate, @akira and @Wei-Chiu Chuang and
>>> others for helping resolve the blockers.
>>>
>>>
>>>
>>> On Tue, Apr 14, 2020 at 10:49 PM Brahma Reddy Battula 
>>> wrote:
>>>

 @Prabhu Joseph 
 >>> Have committed the YARN blocker YARN-10219 to trunk and
 cherry-picked to branch-3.3. Right now, there are two blocker Jiras -
 YARN-10233 and HADOOP-16982
 which I will help to review and commit. Thanks.

 Looks like you committed YARN-10219. Noted YARN-10233 and HADOOP-16982 as
 blockers. (We have shipped many releases without YARN-10233, so it is not
 newly introduced.) Thanks.

 @Vinod Kumar Vavilapalli  ,@adam Antal,

 I noted YARN-9848 as a blocker as you mentioned above.

 @All,

 Currently, the following four blockers are pending for the 3.3.0 RC.

 HADOOP-16963,YARN-10233,HADOOP-16982 and YARN-9848.



 On Tue, Apr 14, 2020 at 8:11 PM Vinod Kumar Vavilapalli <
 vino...@apache.org> wrote:

> Looks like a really bad bug to me.
>
> +1 for revert and +1 for making that a 3.3.0 blocker. I think we should
> also revert it in a 3.2 maintenance release too.
>
> Thanks
> +Vinod
>
> > On Apr 14, 2020, at 5:03 PM, Adam Antal 
> > 
> wrote:
> >
> > Hi everyone,
> >
> > Sorry for coming a bit late with this, but there's also one jira that
> > can have a potential impact on clusters, and we should talk about it.
> >
> > Steven Rand found this problem earlier and commented to
> > https://issues.apache.org/jira/browse/YARN-4946.
> > The bug has impact on the RM state store: the RM does not delete
> apps - see
> > more details in his comment here:
> >
> https://issues.apache.org/jira/browse/YARN-4946?focusedCommentId=16898599=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16898599
> > .
> > (FYI He also created https://issues.apache.org/jira/browse/YARN-9848
> with
> > the revert task).
> >
> > It might not be an actual blocker, but since there wasn't any consensus
> > about a follow-up action, I thought we should decide how to proceed
> > before releasing 3.3.0.
> >
> > Regards,
> > Adam
> >
> > On Tue, Apr 14, 2020 at 9:35 AM Prabhu Joseph <
> prabhujose.ga...@gmail.com>
> > wrote:
> >
> >> Thanks Brahma for the update.
> >>
> >> Have committed the YARN blocker YARN-10219 to trunk and cherry-picked
> >> to branch-3.3. Right now, there are two blocker Jiras - YARN-10233 and
> >> HADOOP-16982 - which I will help to review and commit. Thanks.
> >>
> >> [image: Screen Shot 2020-04-14 at 1.01.51 PM.png]
> >>
> >> project in (YARN, HADOOP, MAPREDUCE, HDFS) AND priority in (Blocker,
> >> Critical) AND resolution = Unresolved AND "Target Version/s" =
> 3.3.0 ORDER
> >> BY priority DESC
> >>
> >>
> >> On Sun, Apr 12, 2020 at 12:19 AM Brahma Reddy Battula <
> bra...@apache.org>
> >> wrote:
> >>
> >>> *Pending for 3.3.0 Release:*
> >>>
> >>> One blocker (HADOOP-16963) needs confirmation, and the following jiras
> >>> are open as they need to be merged to other branches (I am tracking
> >>> them; ideally these can be closed and separate jiras raised to track).
> >>>
> >>>