Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-11 Thread Xiao Chen
+1 (binding)

- downloaded src tarball, verified md5
- built from source with jdk1.8.0_112
- started a pseudo cluster with hdfs and kms
- sanity checked encryption related operations working
- sanity checked webui and logs.
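
(For anyone repeating the checks above, a rough sketch of that flow; the key and
zone names are made up for illustration, and the pseudo cluster with HDFS and KMS
is assumed to already be running:)

  # verify the source tarball checksum against the published value
  md5sum hadoop-3.0.0-src.tar.gz

  # build from source with JDK 8
  tar xzf hadoop-3.0.0-src.tar.gz && cd hadoop-3.0.0-src
  mvn package -Pdist -DskipTests -Dtar

  # sanity check encryption-related operations against the running HDFS + KMS
  hadoop key create key1                               # key name is an example
  hdfs dfs -mkdir /zone1
  hdfs crypto -createZone -keyName key1 -path /zone1
  hdfs crypto -listZones
  hdfs dfs -put README.txt /zone1/ && hdfs dfs -cat /zone1/README.txt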

-Xiao

On Mon, Dec 11, 2017 at 6:10 PM, Aaron T. Myers  wrote:

> +1 (binding)
>
> - downloaded the src tarball and built the source (-Pdist -Pnative)
> - verified the checksum
> - brought up a secure pseudo distributed cluster
> - did some basic file system operations (mkdir, list, put, cat) and
> confirmed that everything was working
> - confirmed that the web UI worked
>
> Best,
> Aaron
>
> On Fri, Dec 8, 2017 at 12:31 PM, Andrew Wang 
> wrote:
>
> > Hi all,
> >
> > Let me start, as always, by thanking the efforts of all the contributors
> > who contributed to this release, especially those who jumped on the
> issues
> > found in RC0.
> >
> > I've prepared RC1 for Apache Hadoop 3.0.0. This release incorporates 302
> > fixed JIRAs since the previous 3.0.0-beta1 release.
> >
> > You can find the artifacts here:
> >
> > http://home.apache.org/~wang/3.0.0-RC1/
> >
> > I've done the traditional testing of building from the source tarball and
> > running a Pi job on a single node cluster. I also verified that the
> shaded
> > jars are not empty.
> >
> > Found one issue that create-release (probably due to the mvn deploy
> change)
> > didn't sign the artifacts, but I fixed that by calling mvn one more time.
> > Available here:
> >
> > https://repository.apache.org/content/repositories/orgapachehadoop-1075/
> >
> > This release will run the standard 5 days, closing on Dec 13th at 12:31pm
> > Pacific. My +1 to start.
> >
> > Best,
> > Andrew
> >
>


[jira] [Created] (HDFS-12917) Fix description errors in testErasureCodingConf.xml

2017-12-11 Thread chencan (JIRA)
chencan created HDFS-12917:
--

 Summary: Fix description errors in testErasureCodingConf.xml
 Key: HDFS-12917
 URL: https://issues.apache.org/jira/browse/HDFS-12917
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: chencan


In testErasureCodingConf.xml, there are two test cases whose description reads "getPolicy 
: get EC policy information at specified path, whick have an EC Policy" (note the typo "whick").






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-12-11 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/

[Dec 11, 2017 2:31:46 AM] (wwei) YARN-7608. Incorrect sTarget column causing 
DataTable warning on RM
[Dec 11, 2017 1:50:02 PM] (sunilg) YARN-7632. Effective min and max resource 
need to be set for auto




-1 overall


The following subsystems voted -1:
asflicense findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Possible null pointer dereference of replication in 
   org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType, Short, Byte); 
   dereferenced at INodeFile.java:[line 210] 

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
   org.apache.hadoop.yarn.api.records.Resource.getResources() may expose 
   internal representation by returning Resource.resources; 
   at Resource.java:[line 234] 

Failed junit tests :

   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 
   hadoop.hdfs.TestReplication 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestReadStripedFileWithDecoding 
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   hadoop.hdfs.TestErasureCodingPolicies 
   hadoop.hdfs.TestDFSStripedOutputStream 
   hadoop.hdfs.server.namenode.TestDecommissioningStatus 
   hadoop.hdfs.TestReconstructStripedFile 
   
hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch 
   
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 
   hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched 
   hadoop.yarn.client.api.impl.TestAMRMClientOnRMRestart 
   hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator 
   hadoop.mapreduce.v2.TestUberAM 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/diff-compile-javac-root.txt
  [280K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/whitespace-eol.txt
  [8.8M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/whitespace-tabs.txt
  [288K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/diff-javadoc-javadoc-root.txt
  [760K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [380K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [44K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [100K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [20K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt
  

Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-11 Thread Aaron T. Myers
+1 (binding)

- downloaded the src tarball and built the source (-Pdist -Pnative)
- verified the checksum
- brought up a secure pseudo distributed cluster
- did some basic file system operations (mkdir, list, put, cat) and
confirmed that everything was working
- confirmed that the web UI worked
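
(For reference, the basic file system operations above map to commands along
these lines; paths are just examples:)

  hdfs dfs -mkdir -p /user/atm/rc1-test
  hdfs dfs -ls /user/atm
  hdfs dfs -put /etc/hosts /user/atm/rc1-test/
  hdfs dfs -cat /user/atm/rc1-test/hosts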

Best,
Aaron

On Fri, Dec 8, 2017 at 12:31 PM, Andrew Wang 
wrote:

> Hi all,
>
> Let me start, as always, by thanking the efforts of all the contributors
> who contributed to this release, especially those who jumped on the issues
> found in RC0.
>
> I've prepared RC1 for Apache Hadoop 3.0.0. This release incorporates 302
> fixed JIRAs since the previous 3.0.0-beta1 release.
>
> You can find the artifacts here:
>
> http://home.apache.org/~wang/3.0.0-RC1/
>
> I've done the traditional testing of building from the source tarball and
> running a Pi job on a single node cluster. I also verified that the shaded
> jars are not empty.
>
> Found one issue that create-release (probably due to the mvn deploy change)
> didn't sign the artifacts, but I fixed that by calling mvn one more time.
> Available here:
>
> https://repository.apache.org/content/repositories/orgapachehadoop-1075/
>
> This release will run the standard 5 days, closing on Dec 13th at 12:31pm
> Pacific. My +1 to start.
>
> Best,
> Andrew
>


Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

2017-12-11 Thread Brahma Reddy Battula
+1 (non-binding), thanks Junping for driving this.


--Built from the source
--Installed a 3-node HA cluster
--Verified basic shell commands
--Browsed the HDFS/YARN web UI
--Ran sample pi, wordcount jobs (see the sketch below)
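
(A sketch of the sample-job runs mentioned above; the examples jar path and
version depend on the install layout:)

  # pi estimator: 10 maps, 100 samples each
  hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 10 100

  # wordcount over a small input directory
  hdfs dfs -mkdir -p /tmp/wc/in
  hdfs dfs -put etc/hadoop/*.xml /tmp/wc/in
  hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount /tmp/wc/in /tmp/wc/out
  hdfs dfs -cat /tmp/wc/out/part-r-00000 | head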

--Brahma Reddy Battula


On Tue, Dec 5, 2017 at 3:28 PM, Junping Du  wrote:

> Hi all,
>  I've created the first release candidate (RC0) for Apache Hadoop
> 2.8.3. This is our next maint release to follow up 2.8.2. It includes 79
> important fixes and improvements.
>
>   The RC artifacts are available at: http://home.apache.org/~
> junping_du/hadoop-2.8.3-RC0
>
>   The RC tag in git is: release-2.8.3-RC0
>
>   The maven artifacts are available via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1072
>
>   Please try the release and vote; the vote will run for the usual 5
> working days, ending on 12/12/2017 PST time.
>
> Thanks,
>
> Junping
>



-- 



--Brahma Reddy Battula


Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

2017-12-11 Thread Junping Du
Hi Konstantin,

 Thanks for the verification and comments. I was verifying your example below 
but found that it actually matches:


jduMBP:hadoop-2.8.3 jdu$ md5 ~/Downloads/hadoop-2.8.3-src.tar.gz
MD5 (/Users/jdu/Downloads/hadoop-2.8.3-src.tar.gz) = 
e53d04477b85e8b58ac0a26468f04736

What's your md5 checksum for the given source tarball?
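
(The .mds file prints the digest as spaced, upper-case hex, so a quick way to
compare the two forms, using the values already posted in this thread:)

  md5 ~/Downloads/hadoop-2.8.3-src.tar.gz
  # MD5 (...) = e53d04477b85e8b58ac0a26468f04736

  # normalize the spaced hex from the .mds file and compare
  echo "E5 3D 04 47 7B 85 E8 B5  8A C0 A2 64 68 F0 47 36" | tr -d ' ' | tr 'A-F' 'a-f'
  # e53d04477b85e8b58ac0a26468f04736  -> same digest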


Thanks,


Junping



From: Konstantin Shvachko 
Sent: Saturday, December 9, 2017 11:06 AM
To: Junping Du
Cc: common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org; 
mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

Hey Junping,

Could you please upload the mds with paths relative to the tar.gz etc. files rather 
than their full path
/build/source/target/artifacts/hadoop-2.8.3-src.tar.gz:
   MD5 = E5 3D 04 47 7B 85 E8 B5  8A C0 A2 64 68 F0 47 36

Otherwise mds don't match for me.
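
(One way to get relative paths in the .mds is to regenerate it from inside the
artifacts directory, assuming gpg's --print-mds option is available; plain
md5sum/sha256sum from that directory work as a fallback:)

  cd /build/source/target/artifacts
  gpg --print-mds hadoop-2.8.3-src.tar.gz > hadoop-2.8.3-src.tar.gz.mds   # relative path in output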

Thanks,
--Konstantin

On Tue, Dec 5, 2017 at 1:58 AM, Junping Du 
> wrote:
Hi all,
 I've created the first release candidate (RC0) for Apache Hadoop 2.8.3. 
This is our next maint release to follow up 2.8.2. It includes 79 important 
fixes and improvements.

  The RC artifacts are available at: 
http://home.apache.org/~junping_du/hadoop-2.8.3-RC0

  The RC tag in git is: release-2.8.3-RC0

  The maven artifacts are available via 
repository.apache.org at: 
https://repository.apache.org/content/repositories/orgapachehadoop-1072

  Please try the release and vote; the vote will run for the usual 5 
working days, ending on 12/12/2017 PST time.

Thanks,

Junping



Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

2017-12-11 Thread Junping Du
Thanks Eric for verification!

A kind reminder: the original due date for the 2.8.3 vote is tomorrow, but I have 
only received 1 binding vote so far - we have 76 PMC members and 127 committers!

I can understand the whole community is busy with three release RC votes (2.7.5, 
2.8.3 and 3.0.0), and it may be necessary to extend the voting period for a 
few more days. But please try as much as possible to verify our release bits. 
Thanks!


Thanks,


Junping



From: Eric Payne 
Sent: Monday, December 11, 2017 1:51 PM
To: Junping Du; common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org; 
mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

Thanks Junping for the hard work on this release.

+1 (binding)

On a 6 node pseudo cluster (4 NMs), I performed the following manual tests:

- Built and installed from source

- Successfully ran a stream job (see the sketch below)

- Verified that user weights are honored by assigning the appropriate amount of 
resources to the weighted users.

- Ensured that FairOrderingPolicy and FifoOrderingPolicy worked in the Capacity 
Scheduler as expected

- Applications with higher priorities are assigned containers as expected in 
the FifoOrderingPolicy of the Capacity Scheduler until the user reaches its 
user resource limit.
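
(For reference, a typical streaming invocation of this kind; the streaming jar
path/version and the input/output paths are just examples, not necessarily the
exact job that was run:)

  hdfs dfs -mkdir -p /tmp/stream/in && hdfs dfs -put /etc/hosts /tmp/stream/in
  hadoop jar share/hadoop/tools/lib/hadoop-streaming-*.jar \
    -input /tmp/stream/in -output /tmp/stream/out \
    -mapper cat -reducer 'wc -l'
  hdfs dfs -cat /tmp/stream/out/part-*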

Eric Payne



From: Junping Du 
To: "common-...@hadoop.apache.org" ; 
"hdfs-dev@hadoop.apache.org" ; 
"mapreduce-...@hadoop.apache.org" ; 
"yarn-...@hadoop.apache.org" 
Sent: Tuesday, December 5, 2017 3:58 AM
Subject: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

Hi all,
I've created the first release candidate (RC0) for Apache Hadoop 2.8.3. 
This is our next maint release to follow up 2.8.2. It includes 79 important 
fixes and improvements.

  The RC artifacts are available at: 
http://home.apache.org/~junping_du/hadoop-2.8.3-RC0

  The RC tag in git is: release-2.8.3-RC0

  The maven artifacts are available via repository.apache.org at: 
https://repository.apache.org/content/repositories/orgapachehadoop-1072

  Please try the release and vote; the vote will run for the usual 5 
working days, ending on 12/12/2017 PST time.

Thanks,

Junping




Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-11 Thread Andrew Wang
Good point on the mutability. Release tags are immutable, RCs are not.
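
(For anyone verifying, a sketch of pinning the vote to the exact commit, assuming
the RC tag follows the usual release-3.0.0-RC1 naming:)

  git fetch origin --tags
  git rev-parse "release-3.0.0-RC1^{commit}"
  # expect c25427ceca461ee979d30edd7a4b0f50718e6533, per Sangjin's note below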

On Mon, Dec 11, 2017 at 1:39 PM, Sangjin Lee  wrote:

> Thanks Andrew. For the record, the commit id would be
> c25427ceca461ee979d30edd7a4b0f50718e6533. I mention that for completeness
> because of the mutability of tags.
>
> On Mon, Dec 11, 2017 at 10:31 AM, Andrew Wang 
> wrote:
>
>> Sorry, forgot to push the tag. It's up there now.
>>
>> On Sun, Dec 10, 2017 at 8:31 PM, Vinod Kumar Vavilapalli <
>> vino...@apache.org> wrote:
>>
>>> I couldn't find the release tag for RC1 either - is it just me or has
>>> the release-process changed?
>>>
>>> +Vinod
>>>
>>> > On Dec 10, 2017, at 4:31 PM, Sangjin Lee  wrote:
>>> >
>>> > Hi Andrew,
>>> >
>>> > Thanks much for your effort! Just to be clear, could you please state
>>> the
>>> > git commit id of the RC1 we're voting for?
>>> >
>>> > Sangjin
>>> >
>>> > On Fri, Dec 8, 2017 at 12:31 PM, Andrew Wang >> >
>>> > wrote:
>>> >
>>> >> Hi all,
>>> >>
>>> >> Let me start, as always, by thanking the efforts of all the
>>> contributors
>>> >> who contributed to this release, especially those who jumped on the
>>> issues
>>> >> found in RC0.
>>> >>
>>> >> I've prepared RC1 for Apache Hadoop 3.0.0. This release incorporates
>>> 302
>>> >> fixed JIRAs since the previous 3.0.0-beta1 release.
>>> >>
>>> >> You can find the artifacts here:
>>> >>
>>> >> http://home.apache.org/~wang/3.0.0-RC1/
>>> >>
>>> >> I've done the traditional testing of building from the source tarball
>>> and
>>> >> running a Pi job on a single node cluster. I also verified that the
>>> shaded
>>> >> jars are not empty.
>>> >>
>>> >> Found one issue that create-release (probably due to the mvn deploy
>>> change)
>>> >> didn't sign the artifacts, but I fixed that by calling mvn one more
>>> time.
>>> >> Available here:
>>> >>
>>> >> https://repository.apache.org/content/repositories/orgapache
>>> hadoop-1075/
>>> >>
>>> >> This release will run the standard 5 days, closing on Dec 13th at
>>> 12:31pm
>>> >> Pacific. My +1 to start.
>>> >>
>>> >> Best,
>>> >> Andrew
>>> >>
>>>
>>>
>>
>


[jira] [Created] (HDFS-12916) HDFS commands throws error, when only shaded clients in classpath

2017-12-11 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDFS-12916:
-

 Summary: HDFS commands throws error, when only shaded clients in 
classpath
 Key: HDFS-12916
 URL: https://issues.apache.org/jira/browse/HDFS-12916
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Bharat Viswanadham


[root@n001 hadoop]# bin/hdfs dfs -rm /
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/htrace/core/Tracer$Builder
at org.apache.hadoop.fs.FsShell.run(FsShell.java:303)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
Caused by: java.lang.ClassNotFoundException: 
org.apache.htrace.core.Tracer$Builder
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 4 more
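
A quick way to check whether the shaded client runtime actually bundles the missing 
class, and what is on the classpath when the command fails (the jar name and location 
below are just examples for a 3.0.0 dist layout):

{code}
unzip -l share/hadoop/client/hadoop-client-runtime-3.0.0.jar | grep 'htrace/core/Tracer'
bin/hadoop classpath
{code}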






Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-11 Thread Sangjin Lee
Thanks Andrew. For the record, the commit id would be
c25427ceca461ee979d30edd7a4b0f50718e6533. I mention that for completeness
because of the mutability of tags.

On Mon, Dec 11, 2017 at 10:31 AM, Andrew Wang 
wrote:

> Sorry, forgot to push the tag. It's up there now.
>
> On Sun, Dec 10, 2017 at 8:31 PM, Vinod Kumar Vavilapalli <
> vino...@apache.org> wrote:
>
>> I couldn't find the release tag for RC1 either - is it just me or has the
>> release-process changed?
>>
>> +Vinod
>>
>> > On Dec 10, 2017, at 4:31 PM, Sangjin Lee  wrote:
>> >
>> > Hi Andrew,
>> >
>> > Thanks much for your effort! Just to be clear, could you please state
>> the
>> > git commit id of the RC1 we're voting for?
>> >
>> > Sangjin
>> >
>> > On Fri, Dec 8, 2017 at 12:31 PM, Andrew Wang 
>> > wrote:
>> >
>> >> Hi all,
>> >>
>> >> Let me start, as always, by thanking the efforts of all the
>> contributors
>> >> who contributed to this release, especially those who jumped on the
>> issues
>> >> found in RC0.
>> >>
>> >> I've prepared RC1 for Apache Hadoop 3.0.0. This release incorporates
>> 302
>> >> fixed JIRAs since the previous 3.0.0-beta1 release.
>> >>
>> >> You can find the artifacts here:
>> >>
>> >> http://home.apache.org/~wang/3.0.0-RC1/
>> >>
>> >> I've done the traditional testing of building from the source tarball
>> and
>> >> running a Pi job on a single node cluster. I also verified that the
>> shaded
>> >> jars are not empty.
>> >>
>> >> Found one issue that create-release (probably due to the mvn deploy
>> change)
>> >> didn't sign the artifacts, but I fixed that by calling mvn one more
>> time.
>> >> Available here:
>> >>
>> >> https://repository.apache.org/content/repositories/orgapache
>> hadoop-1075/
>> >>
>> >> This release will run the standard 5 days, closing on Dec 13th at
>> 12:31pm
>> >> Pacific. My +1 to start.
>> >>
>> >> Best,
>> >> Andrew
>> >>
>>
>>
>


Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

2017-12-11 Thread Eric Payne
Thanks Junping for the hard work on this release.
+1 (binding)
On a 6 node pseudo cluster (4 NMs), I performed the following manual tests:
- Built and installed from source
- Successfully ran a stream job
- Verified that user weights are honored by assigning the appropriate amount of 
resources to the weighted users.
- Ensured that FairOrderingPolicy and FifoOrderingPolicy worked in the Capacity 
Scheduler as expected
- Applications with higher priorities are assigned containers as expected in 
the FifoOrderingPolicy of the Capacity Scheduler until the user reaches its 
user resource limit.
Eric Payne


  From: Junping Du 
 To: "common-...@hadoop.apache.org" ; 
"hdfs-dev@hadoop.apache.org" ; 
"mapreduce-...@hadoop.apache.org" ; 
"yarn-...@hadoop.apache.org"  
 Sent: Tuesday, December 5, 2017 3:58 AM
 Subject: [VOTE] Release Apache Hadoop 2.8.3 (RC0)
   
Hi all,
    I've created the first release candidate (RC0) for Apache Hadoop 2.8.3. 
This is our next maint release to follow up 2.8.2. It includes 79 important 
fixes and improvements.

      The RC artifacts are available at: 
http://home.apache.org/~junping_du/hadoop-2.8.3-RC0

      The RC tag in git is: release-2.8.3-RC0

      The maven artifacts are available via repository.apache.org at: 
https://repository.apache.org/content/repositories/orgapachehadoop-1072

      Please try the release and vote; the vote will run for the usual 5 
working days, ending on 12/12/2017 PST time.

Thanks,

Junping

   

[jira] [Created] (HDFS-12915) Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy

2017-12-11 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-12915:
--

 Summary: Fix findbugs warning in 
INodeFile$HeaderFormat.getBlockLayoutRedundancy
 Key: HDFS-12915
 URL: https://issues.apache.org/jira/browse/HDFS-12915
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Wei-Chiu Chuang


It seems HDFS-12840 creates a new findbugs warning.

Possible null pointer dereference of replication in 
org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType, Short, Byte)
Bug type NP_NULL_ON_SOME_PATH
In class org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat
In method org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType, Short, Byte)
Value loaded from replication
Dereferenced at INodeFile.java:[line 210]
Known null at INodeFile.java:[line 207]

From a quick look at the patch, it seems bogus though. [~eddyxu] [~Sammi] would 
you please double check?
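
To reproduce locally, running the findbugs goal on the hdfs module should surface the 
same warning (assuming the findbugs-maven-plugin wired into the build; the invocation 
may also need -am to build dependencies first):

{code}
mvn -pl hadoop-hdfs-project/hadoop-hdfs compile findbugs:findbugs
# the report lands under hadoop-hdfs-project/hadoop-hdfs/target/findbugsXml.xml
{code}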






Re: [VOTE] Release Apache Hadoop 2.7.5 (RC1)

2017-12-11 Thread Kihwal Lee
+1 (binding)
- checked out the rc1 tag and built the source (-Pdist -Pnative)
- brought up a pseudo distributed cluster
- ran sample MR jobs
- verified web UIs working.

On Thu, Dec 7, 2017 at 9:22 PM, Konstantin Shvachko 
wrote:

> Hi everybody,
>
> I updated CHANGES.txt and fixed documentation links.
> Also committed  MAPREDUCE-6165, which fixes a consistently failing test.
>
> This is RC1 for the next dot release of Apache Hadoop 2.7 line. The
> previous one 2.7.4 was released August 4, 2017.
> Release 2.7.5 includes critical bug fixes and optimizations. See more
> details in Release Note:
> http://home.apache.org/~shv/hadoop-2.7.5-RC1/releasenotes.html
>
> The RC1 is available at: http://home.apache.org/~shv/hadoop-2.7.5-RC1/
>
> Please give it a try and vote on this thread. The vote will run for 5 days
> ending 12/13/2017.
>
> My up to date public key is available from:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Thanks,
> --Konstantin
>


Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-11 Thread Andrew Wang
Sorry, forgot to push the tag. It's up there now.

On Sun, Dec 10, 2017 at 8:31 PM, Vinod Kumar Vavilapalli  wrote:

> I couldn't find the release tag for RC1 either - is it just me or has the
> release-process changed?
>
> +Vinod
>
> > On Dec 10, 2017, at 4:31 PM, Sangjin Lee  wrote:
> >
> > Hi Andrew,
> >
> > Thanks much for your effort! Just to be clear, could you please state the
> > git commit id of the RC1 we're voting for?
> >
> > Sangjin
> >
> > On Fri, Dec 8, 2017 at 12:31 PM, Andrew Wang 
> > wrote:
> >
> >> Hi all,
> >>
> >> Let me start, as always, by thanking the efforts of all the contributors
> >> who contributed to this release, especially those who jumped on the
> issues
> >> found in RC0.
> >>
> >> I've prepared RC1 for Apache Hadoop 3.0.0. This release incorporates 302
> >> fixed JIRAs since the previous 3.0.0-beta1 release.
> >>
> >> You can find the artifacts here:
> >>
> >> http://home.apache.org/~wang/3.0.0-RC1/
> >>
> >> I've done the traditional testing of building from the source tarball
> and
> >> running a Pi job on a single node cluster. I also verified that the
> shaded
> >> jars are not empty.
> >>
> >> Found one issue that create-release (probably due to the mvn deploy
> change)
> >> didn't sign the artifacts, but I fixed that by calling mvn one more
> time.
> >> Available here:
> >>
> >> https://repository.apache.org/content/repositories/
> orgapachehadoop-1075/
> >>
> >> This release will run the standard 5 days, closing on Dec 13th at
> 12:31pm
> >> Pacific. My +1 to start.
> >>
> >> Best,
> >> Andrew
> >>
>
>


Re: [VOTE] Release Apache Hadoop 2.9.0 (RC0)

2017-12-11 Thread Ajay Kumar
+1 (non binding)
Thanks for working on this, Arun!!

Tested the below cases after building from source on a Mac, Java 1.8:
1) set up a small cluster
2) ran hdfs commands
3) ran wordcount, pi and TestDFSIO tests (see the sketch below).
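
(For the TestDFSIO part, a typical invocation; the tests jar path/version and the
size flags can differ between releases:)

  hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-*-tests.jar TestDFSIO -write -nrFiles 4 -size 16MB
  hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-*-tests.jar TestDFSIO -read -nrFiles 4 -size 16MB
  hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-*-tests.jar TestDFSIO -clean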

Thanks,
Ajay

On 12/10/17, 7:47 PM, "Vinod Kumar Vavilapalli"  wrote:

Missed this response on the old thread, but closing the loop here..

The incompatibility conundrum with dot-zeroes did indeed happen in early 2.x 
releases - multiple times at that. And the downstream projects did raise 
concerns about these unfixable situations.

I wasn't advocating a new formalism; this was more of a lesson taken from 
real-life experience that I wanted to share with fellow RMs - as IMO the effort 
was worth the value for the releases where I used it.

If RMs of these more recent releases choose not to do this because they perceive 
that a release won't run into those past issues at all, it's clearly their call. 
It's just that we are bound to potentially make the same mistakes and learn the 
same lesson all over again.

+Vinod

> On Nov 9, 2017, at 9:51 AM, Chris Douglas  wrote:
> 
> The labor required for these release formalisms is exceeding their
> value. Our minor releases have more bugs than our patch releases (we
> hope), but every consumer should understand how software versioning
> works. Every device I own has bugs on major OS updates. That doesn't
> imply that every minor release is strictly less stable than a patch
> release, and users need to be warned off it.
> 
> In contrast, we should warn users about features that compromise
> invariants like security or durability, either by design or due to
> their early stage of development. We can't reasonably expect them to
> understand those tradeoffs, since they depend on internal details of
> Hadoop.
> 
> On Wed, Nov 8, 2017 at 5:34 PM, Vinod Kumar Vavilapalli
> > wrote:
>> When we tried option (b), we used to make .0 a GA release, but downstream
>> projects like Tez, Hive, Spark would come back and find an incompatible
>> change - and now we were forced into a conundrum - is fixing this
>> incompatible change itself an incompatibility?
> 
> Every project takes these case-by-case. Most of the time we'll
> accommodate the old semantics- and we try to be explicit where we
> promise compatibility- but this isn't a logic problem, it's a
> practical one. If it's an easy fix to an obscure API, we probably
> won't even hear about it.
> 
>> Long story short, I'd just add to your voting thread and release notes
>> that 2.9.0 still needs to be tested downstream and so users may want to wait
>> for subsequent point releases.
> 
> It's uncomfortable to have four active release branches, with 3.1
> coming in early 2018. We all benefit from the shared deployment
> experiences that harden these releases, and fragmentation creates
> incentives to compete for that attention. Rather than tacitly
> scuffling over waning interest in the 2.x series, I'd endorse your
> other thread encouraging consolidation around 3.x.
> 
> To that end, there is no policy or precedent that requires that new
> minor releases be labeled as "alpha". If there is cause to believe
> that 2.9.0 is not ready to release in the stable line, then we
> shouldn't release it. -C
> 
>>> On Nov 8, 2017, at 12:43 AM, Subru Krishnan  wrote:
>>> 
>>> We are canceling the RC due to the issue that Rohith/Sunil identified. The
>>> issue was difficult to track down, as it only happens when you use an IP for
>>> ZK (it works fine with host names) and, moreover, only if ZK and RM are
>>> co-located on the same machine. We are hopeful to get the fix in tomorrow
>>> and roll out RC1.
>>> 
>>> Thanks to everyone for the extensive testing/validation. Hopefully the cost
>>> to replicate this with RC1 is much lower.
>>> 
>>> -Subru/Arun.
>>> 
>>> On Tue, Nov 7, 2017 at 5:27 PM, Konstantinos Karanasos 
>> 
 +1 from me too.
 
 Did the following:
 1) set up a 9-node cluster;
 2) ran some Gridmix jobs;
 3) ran (2) after enabling opportunistic containers (used a mix of
 guaranteed and opportunistic containers for each job);
 4) ran (3) but this time enabling distributed scheduling of 
opportunistic
 containers.
 
 All the above worked with no issues.
 
 Thanks for all the effort guys!
 
 Konstantinos
 
 
 
 Konstantinos
 
 On Tue, Nov 7, 2017 at 2:56 PM, Eric Badger 
 wrote:
 
> +1 (non-binding) pending the issue that Sunil/Rohith pointed out
> 
> - 

[jira] [Created] (HDFS-12914) Block report leases cause missing blocks until next report

2017-12-11 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-12914:
--

 Summary: Block report leases cause missing blocks until next report
 Key: HDFS-12914
 URL: https://issues.apache.org/jira/browse/HDFS-12914
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.8.0
Reporter: Daryn Sharp
Priority: Critical


{{BlockReportLeaseManager#checkLease}} will reject FBRs from DNs for conditions 
such as "unknown datanode", "not in pending set", "lease has expired", wrong 
lease id, etc.  Lease rejection does not throw an exception.  It returns false, 
which bubbles up to {{NameNodeRpcServer#blockReport}} and is interpreted as 
{{noStaleStorages}}.

A re-registering node whose FBR is rejected because of an invalid lease becomes 
active with _no blocks_.  A replication storm ensues, possibly causing DNs to 
temporarily go dead (HDFS-12645), leading to more FBR lease rejections on 
re-registration.  The cluster will have many "missing blocks" until the DNs' 
next FBRs are sent and/or forced.
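
For operators hitting this, the next FBR can be forced per datanode with dfsadmin 
(the host and port below are placeholders for the DN's IPC address):

{code}
hdfs dfsadmin -triggerBlockReport <datanode-host>:<ipc-port>
{code}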






[jira] [Created] (HDFS-12913) TestDNFencingWithReplication.testFencingStress:137 ? Runtime Deferred

2017-12-11 Thread Zsolt Venczel (JIRA)
Zsolt Venczel created HDFS-12913:


 Summary: TestDNFencingWithReplication.testFencingStress:137 ? 
Runtime Deferred
 Key: HDFS-12913
 URL: https://issues.apache.org/jira/browse/HDFS-12913
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Zsolt Venczel
Assignee: Zsolt Venczel


Once in every 5000 test runs, the following issue happens:
{code}
2017-12-11 10:33:09 [INFO] 
2017-12-11 10:33:09 [INFO] 
---
2017-12-11 10:33:09 [INFO]  T E S T S
2017-12-11 10:33:09 [INFO] 
---
2017-12-11 10:33:09 [INFO] Running 
org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
2017-12-11 10:37:32 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, 
Time elapsed: 262.641 s <<< FAILURE! - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
2017-12-11 10:37:32 [ERROR] 
testFencingStress(org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication)
  Time elapsed: 262.477 s  <<< ERROR!
2017-12-11 10:37:32 java.lang.RuntimeException: Deferred
2017-12-11 10:37:32 at 
org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
2017-12-11 10:37:32 at 
org.apache.hadoop.test.MultithreadedTestUtil$TestContext.stop(MultithreadedTestUtil.java:166)
2017-12-11 10:37:32 at 
org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:137)
2017-12-11 10:37:32 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
2017-12-11 10:37:32 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2017-12-11 10:37:32 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2017-12-11 10:37:32 at java.lang.reflect.Method.invoke(Method.java:498)
2017-12-11 10:37:32 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
2017-12-11 10:37:32 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2017-12-11 10:37:32 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
2017-12-11 10:37:32 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2017-12-11 10:37:32 at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
2017-12-11 10:37:32 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
2017-12-11 10:37:32 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
2017-12-11 10:37:32 at 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
2017-12-11 10:37:32 at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
2017-12-11 10:37:32 at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
2017-12-11 10:37:32 at 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
2017-12-11 10:37:32 at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
2017-12-11 10:37:32 at 
org.junit.runners.ParentRunner.run(ParentRunner.java:309)
2017-12-11 10:37:32 at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
2017-12-11 10:37:32 at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
2017-12-11 10:37:32 at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
2017-12-11 10:37:32 at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
2017-12-11 10:37:32 at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
2017-12-11 10:37:32 at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
2017-12-11 10:37:32 at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119)
2017-12-11 10:37:32 at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)
2017-12-11 10:37:32 Caused by: java.lang.RuntimeException: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): 
Operation category READ is not supported in state standby. Visit 
https://s.apache.org/sbnn-error
2017-12-11 10:37:32 at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
2017-12-11 10:37:32 at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1962)
2017-12-11 10:37:32 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1421)
2017-12-11 10:37:32 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1862)
2017-12-11 10:37:32 at