Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-11 Thread Xiao Chen
+1 (binding)

- downloaded src tarball, verified md5
- built from source with jdk1.8.0_112
- started a pseudo cluster with hdfs and kms
- sanity checked encryption related operations working
- sanity checked webui and logs.

-Xiao

On Mon, Dec 11, 2017 at 6:10 PM, Aaron T. Myers  wrote:

> +1 (binding)
>
> - downloaded the src tarball and built the source (-Pdist -Pnative)
> - verified the checksum
> - brought up a secure pseudo distributed cluster
> - did some basic file system operations (mkdir, list, put, cat) and
> confirmed that everything was working
> - confirmed that the web UI worked
>
> Best,
> Aaron
>
> On Fri, Dec 8, 2017 at 12:31 PM, Andrew Wang 
> wrote:
>
> > Hi all,
> >
> > Let me start, as always, by thanking the efforts of all the contributors
> > who contributed to this release, especially those who jumped on the
> issues
> > found in RC0.
> >
> > I've prepared RC1 for Apache Hadoop 3.0.0. This release incorporates 302
> > fixed JIRAs since the previous 3.0.0-beta1 release.
> >
> > You can find the artifacts here:
> >
> > http://home.apache.org/~wang/3.0.0-RC1/
> >
> > I've done the traditional testing of building from the source tarball and
> > running a Pi job on a single node cluster. I also verified that the
> shaded
> > jars are not empty.
> >
> > Found one issue that create-release (probably due to the mvn deploy
> change)
> > didn't sign the artifacts, but I fixed that by calling mvn one more time.
> > Available here:
> >
> > https://repository.apache.org/content/repositories/orgapachehadoop-1075/
> >
> > This release will run the standard 5 days, closing on Dec 13th at 12:31pm
> > Pacific. My +1 to start.
> >
> > Best,
> > Andrew
> >
>


[jira] [Created] (HADOOP-15110) Objects are getting logged when we got exception from AutoRenewalThreadForUserCreds

2017-12-11 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HADOOP-15110:
--

 Summary: Objects are getting logged when we got exception from 
AutoRenewalThreadForUserCreds
 Key: HADOOP-15110
 URL: https://issues.apache.org/jira/browse/HADOOP-15110
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0-alpha2, 2.8.0
Reporter: Harshakiran Reddy


*scenario*:
-

While running the renewal command for a principal, it prints the raw object 
references for *renewalFailures* and *renewalFailuresTotal*:

{noformat}
bin> ./hdfs dfs -ls /
2017-12-12 12:31:50,910 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-12-12 12:31:52,312 WARN security.UserGroupInformation: Exception encountered while running the renewal command for principal_name. (TGT end time:1513070122000, renewalFailures: org.apache.hadoop.metrics2.lib.MutableGaugeInt@1bbb43eb, renewalFailuresTotal: org.apache.hadoop.metrics2.lib.MutableGaugeLong@424a0549)
ExitCodeException exitCode=1: kinit: KDC can't fulfill requested option while renewing credentials

        at org.apache.hadoop.util.Shell.runCommand(Shell.java:994)
        at org.apache.hadoop.util.Shell.run(Shell.java:887)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1212)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:1306)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:1288)
        at org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:1067)
        at java.lang.Thread.run(Thread.java:745)
{noformat}

*Expected Result*:
The log should print user-understandable values (the gauge counts), not object 
references.
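A minimal, self-contained sketch of the fix direction (the nested class here is a stand-in for Hadoop's MutableGaugeInt, not the real one): concatenating the gauge object falls back to Object.toString() and prints `ClassName@hash`, while concatenating gauge.value() prints the count.

```java
// Stand-in for org.apache.hadoop.metrics2.lib.MutableGaugeInt, which does
// not override toString(); hence the "MutableGaugeInt@1bbb43eb" in the log.
public class GaugeLogging {
    static final class MutableGaugeInt {
        private int value;
        void incr() { value++; }          // bump the gauge
        int value() { return value; }     // numeric accessor the log message should use
    }

    public static void main(String[] args) {
        MutableGaugeInt renewalFailures = new MutableGaugeInt();
        renewalFailures.incr();
        // Current behavior: implicit toString() yields "renewalFailures: MutableGaugeInt@<hash>"
        System.out.println("renewalFailures: " + renewalFailures);
        // Proposed behavior: log the value, i.e. "renewalFailures: 1"
        System.out.println("renewalFailures: " + renewalFailures.value());
    }
}
```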



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-12-11 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/

[Dec 11, 2017 2:31:46 AM] (wwei) YARN-7608. Incorrect sTarget column causing 
DataTable warning on RM
[Dec 11, 2017 1:50:02 PM] (sunilg) YARN-7632. Effective min and max resource 
need to be set for auto




-1 overall


The following subsystems voted -1:
asflicense findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Possible null pointer dereference of replication in 
org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
 Short, Byte) Dereferenced at INodeFile.java:replication in 
org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
 Short, Byte) Dereferenced at INodeFile.java:[line 210] 

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
   org.apache.hadoop.yarn.api.records.Resource.getResources() may expose 
internal representation by returning Resource.resources At Resource.java:by 
returning Resource.resources At Resource.java:[line 234] 

Failed junit tests :

   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 
   hadoop.hdfs.TestReplication 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestReadStripedFileWithDecoding 
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   hadoop.hdfs.TestErasureCodingPolicies 
   hadoop.hdfs.TestDFSStripedOutputStream 
   hadoop.hdfs.server.namenode.TestDecommissioningStatus 
   hadoop.hdfs.TestReconstructStripedFile 
   
hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch 
   
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 
   hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched 
   hadoop.yarn.client.api.impl.TestAMRMClientOnRMRestart 
   hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator 
   hadoop.mapreduce.v2.TestUberAM 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/diff-compile-javac-root.txt
  [280K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/whitespace-eol.txt
  [8.8M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/whitespace-tabs.txt
  [288K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/diff-javadoc-javadoc-root.txt
  [760K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [380K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [44K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [100K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [20K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/619/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt
  

Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-11 Thread Aaron T. Myers
+1 (binding)

- downloaded the src tarball and built the source (-Pdist -Pnative)
- verified the checksum
- brought up a secure pseudo distributed cluster
- did some basic file system operations (mkdir, list, put, cat) and
confirmed that everything was working
- confirmed that the web UI worked

Best,
Aaron

On Fri, Dec 8, 2017 at 12:31 PM, Andrew Wang 
wrote:

> Hi all,
>
> Let me start, as always, by thanking the efforts of all the contributors
> who contributed to this release, especially those who jumped on the issues
> found in RC0.
>
> I've prepared RC1 for Apache Hadoop 3.0.0. This release incorporates 302
> fixed JIRAs since the previous 3.0.0-beta1 release.
>
> You can find the artifacts here:
>
> http://home.apache.org/~wang/3.0.0-RC1/
>
> I've done the traditional testing of building from the source tarball and
> running a Pi job on a single node cluster. I also verified that the shaded
> jars are not empty.
>
> Found one issue that create-release (probably due to the mvn deploy change)
> didn't sign the artifacts, but I fixed that by calling mvn one more time.
> Available here:
>
> https://repository.apache.org/content/repositories/orgapachehadoop-1075/
>
> This release will run the standard 5 days, closing on Dec 13th at 12:31pm
> Pacific. My +1 to start.
>
> Best,
> Andrew
>


Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

2017-12-11 Thread Brahma Reddy Battula
+1 (non-binding), thanks Junping for driving this.


--Built from the source
--Installed a 3-node HA cluster
--Verified basic shell commands
--Browsed the HDFS/YARN web UI
--Ran sample pi and wordcount jobs

--Brahma Reddy Battula


On Tue, Dec 5, 2017 at 3:28 PM, Junping Du  wrote:

> Hi all,
>  I've created the first release candidate (RC0) for Apache Hadoop
> 2.8.3. This is our next maint release to follow up 2.8.2. It includes 79
> important fixes and improvements.
>
>   The RC artifacts are available at: http://home.apache.org/~
> junping_du/hadoop-2.8.3-RC0
>
>   The RC tag in git is: release-2.8.3-RC0
>
>   The maven artifacts are available via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1072
>
>   Please try the release and vote; the vote will run for the usual 5
> working days, ending on 12/12/2017 PST time.
>
> Thanks,
>
> Junping
>





Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

2017-12-11 Thread Junping Du
Thanks Eric for verification!

A kind reminder: the original due date for the 2.8.3 vote is tomorrow, but I 
have received only 1 binding vote so far - we have 76 PMCs and 127 committers!

I understand the whole community is busy with voting on 3 release RCs (2.7.5, 
2.8.3 and 3.0.0), and it may be necessary to extend the voting period for a 
few more days. But please try as much as possible to verify our release bits. 
Thanks!


Thanks,


Junping



From: Eric Payne 
Sent: Monday, December 11, 2017 1:51 PM
To: Junping Du; common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

Thanks Junping for the hard work on this release.

+1 (binding)

On a 6 node pseudo cluster (4 NMs), I performed the following manual tests:

- Built and installed from source

- Successfully ran a stream job

- Verified that user weights are honored by assigning the appropriate amount of 
resources to the weighted users.

- Ensured that FairOrderingPolicy and FifoOrderingPolicy worked in the Capacity 
Scheduler as expected

- Applications with higher priorities are assigned containers as expected in 
the FifoOrderingPolicy of the Capacity Scheduler until the user reaches its 
user resource limit.

Eric Payne



From: Junping Du 
To: "common-dev@hadoop.apache.org" ; 
"hdfs-...@hadoop.apache.org" ; 
"mapreduce-...@hadoop.apache.org" ; 
"yarn-...@hadoop.apache.org" 
Sent: Tuesday, December 5, 2017 3:58 AM
Subject: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

Hi all,
I've created the first release candidate (RC0) for Apache Hadoop 2.8.3. 
This is our next maint release to follow up 2.8.2. It includes 79 important 
fixes and improvements.

  The RC artifacts are available at: 
http://home.apache.org/~junping_du/hadoop-2.8.3-RC0

  The RC tag in git is: release-2.8.3-RC0

  The maven artifacts are available via repository.apache.org at: 
https://repository.apache.org/content/repositories/orgapachehadoop-1072

  Please try the release and vote; the vote will run for the usual 5 
working days, ending on 12/12/2017 PST time.

Thanks,

Junping




Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

2017-12-11 Thread Junping Du
Hi Konstantin,

 Thanks for the verification and comments. I was verifying your example below 
but found that it actually matches:


jduMBP:hadoop-2.8.3 jdu$ md5 ~/Downloads/hadoop-2.8.3-src.tar.gz
MD5 (/Users/jdu/Downloads/hadoop-2.8.3-src.tar.gz) = 
e53d04477b85e8b58ac0a26468f04736

What's your md5 checksum for given source tar ball?


Thanks,


Junping
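For anyone cross-checking locally, here is a minimal self-contained Java sketch of the comparison being discussed (the tarball path is a placeholder; the expected digest is the one reported above):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class Md5Check {
    // Stream the file through MD5 and return the lowercase hex digest,
    // matching the format printed by the `md5` command above.
    static String md5Hex(InputStream in) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) > 0) {
            md.update(buf, 0, n);
        }
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        String expected = "e53d04477b85e8b58ac0a26468f04736"; // value quoted above
        try (InputStream in = Files.newInputStream(Paths.get("hadoop-2.8.3-src.tar.gz"))) {
            System.out.println(md5Hex(in).equals(expected) ? "OK" : "MISMATCH");
        }
    }
}
```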



From: Konstantin Shvachko 
Sent: Saturday, December 9, 2017 11:06 AM
To: Junping Du
Cc: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

Hey Junping,

Could you please upload the .mds files with paths relative to the tar.gz etc. 
files rather than their full build path, e.g.
/build/source/target/artifacts/hadoop-2.8.3-src.tar.gz:
   MD5 = E5 3D 04 47 7B 85 E8 B5  8A C0 A2 64 68 F0 47 36

Otherwise mds don't match for me.

Thanks,
--Konstantin

On Tue, Dec 5, 2017 at 1:58 AM, Junping Du 
> wrote:
Hi all,
 I've created the first release candidate (RC0) for Apache Hadoop 2.8.3. 
This is our next maint release to follow up 2.8.2. It includes 79 important 
fixes and improvements.

  The RC artifacts are available at: 
http://home.apache.org/~junping_du/hadoop-2.8.3-RC0

  The RC tag in git is: release-2.8.3-RC0

  The maven artifacts are available via 
repository.apache.org at: 
https://repository.apache.org/content/repositories/orgapachehadoop-1072

  Please try the release and vote; the vote will run for the usual 5 
working days, ending on 12/12/2017 PST time.

Thanks,

Junping



Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-11 Thread Andrew Wang
Good point on the mutability. Release tags are immutable, RCs are not.

On Mon, Dec 11, 2017 at 1:39 PM, Sangjin Lee  wrote:

> Thanks Andrew. For the record, the commit id would be
> c25427ceca461ee979d30edd7a4b0f50718e6533. I mention that for completeness
> because of the mutability of tags.
>
> On Mon, Dec 11, 2017 at 10:31 AM, Andrew Wang 
> wrote:
>
>> Sorry, forgot to push the tag. It's up there now.
>>
>> On Sun, Dec 10, 2017 at 8:31 PM, Vinod Kumar Vavilapalli <
>> vino...@apache.org> wrote:
>>
>>> I couldn't find the release tag for RC1 either - is it just me or has
>>> the release-process changed?
>>>
>>> +Vinod
>>>
>>> > On Dec 10, 2017, at 4:31 PM, Sangjin Lee  wrote:
>>> >
>>> > Hi Andrew,
>>> >
>>> > Thanks much for your effort! Just to be clear, could you please state
>>> the
>>> > git commit id of the RC1 we're voting for?
>>> >
>>> > Sangjin
>>> >
>>> > On Fri, Dec 8, 2017 at 12:31 PM, Andrew Wang >> >
>>> > wrote:
>>> >
>>> >> Hi all,
>>> >>
>>> >> Let me start, as always, by thanking the efforts of all the
>>> contributors
>>> >> who contributed to this release, especially those who jumped on the
>>> issues
>>> >> found in RC0.
>>> >>
>>> >> I've prepared RC1 for Apache Hadoop 3.0.0. This release incorporates
>>> 302
>>> >> fixed JIRAs since the previous 3.0.0-beta1 release.
>>> >>
>>> >> You can find the artifacts here:
>>> >>
>>> >> http://home.apache.org/~wang/3.0.0-RC1/
>>> >>
>>> >> I've done the traditional testing of building from the source tarball
>>> and
>>> >> running a Pi job on a single node cluster. I also verified that the
>>> shaded
>>> >> jars are not empty.
>>> >>
>>> >> Found one issue that create-release (probably due to the mvn deploy
>>> change)
>>> >> didn't sign the artifacts, but I fixed that by calling mvn one more
>>> time.
>>> >> Available here:
>>> >>
>>> >> https://repository.apache.org/content/repositories/orgapache
>>> hadoop-1075/
>>> >>
>>> >> This release will run the standard 5 days, closing on Dec 13th at
>>> 12:31pm
>>> >> Pacific. My +1 to start.
>>> >>
>>> >> Best,
>>> >> Andrew
>>> >>
>>>
>>>
>>
>


Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

2017-12-11 Thread Eric Payne
Thanks Junping for the hard work on this release.
+1 (binding)
On a 6 node pseudo cluster (4 NMs), I performed the following manual tests:
- Built and installed from source
- Successfully ran a stream job
- Verified that user weights are honored by assigning the appropriate amount of 
resources to the weighted users.
- Ensured that FairOrderingPolicy and FifoOrderingPolicy worked in the Capacity 
Scheduler as expected
- Applications with higher priorities are assigned containers as expected in 
the FifoOrderingPolicy of the Capacity Scheduler until the user reaches its 
user resource limit.
Eric Payne


  From: Junping Du 
 To: "common-dev@hadoop.apache.org" ; 
"hdfs-...@hadoop.apache.org" ; 
"mapreduce-...@hadoop.apache.org" ; 
"yarn-...@hadoop.apache.org"  
 Sent: Tuesday, December 5, 2017 3:58 AM
 Subject: [VOTE] Release Apache Hadoop 2.8.3 (RC0)
   
Hi all,
    I've created the first release candidate (RC0) for Apache Hadoop 2.8.3. 
This is our next maint release to follow up 2.8.2. It includes 79 important 
fixes and improvements.

      The RC artifacts are available at: 
http://home.apache.org/~junping_du/hadoop-2.8.3-RC0

      The RC tag in git is: release-2.8.3-RC0

      The maven artifacts are available via repository.apache.org at: 
https://repository.apache.org/content/repositories/orgapachehadoop-1072

      Please try the release and vote; the vote will run for the usual 5 
working days, ending on 12/12/2017 PST time.

Thanks,

Junping

   

Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-11 Thread Sangjin Lee
Thanks Andrew. For the record, the commit id would be
c25427ceca461ee979d30edd7a4b0f50718e6533. I mention that for completeness
because of the mutability of tags.

On Mon, Dec 11, 2017 at 10:31 AM, Andrew Wang 
wrote:

> Sorry, forgot to push the tag. It's up there now.
>
> On Sun, Dec 10, 2017 at 8:31 PM, Vinod Kumar Vavilapalli <
> vino...@apache.org> wrote:
>
>> I couldn't find the release tag for RC1 either - is it just me or has the
>> release-process changed?
>>
>> +Vinod
>>
>> > On Dec 10, 2017, at 4:31 PM, Sangjin Lee  wrote:
>> >
>> > Hi Andrew,
>> >
>> > Thanks much for your effort! Just to be clear, could you please state
>> the
>> > git commit id of the RC1 we're voting for?
>> >
>> > Sangjin
>> >
>> > On Fri, Dec 8, 2017 at 12:31 PM, Andrew Wang 
>> > wrote:
>> >
>> >> Hi all,
>> >>
>> >> Let me start, as always, by thanking the efforts of all the
>> contributors
>> >> who contributed to this release, especially those who jumped on the
>> issues
>> >> found in RC0.
>> >>
>> >> I've prepared RC1 for Apache Hadoop 3.0.0. This release incorporates
>> 302
>> >> fixed JIRAs since the previous 3.0.0-beta1 release.
>> >>
>> >> You can find the artifacts here:
>> >>
>> >> http://home.apache.org/~wang/3.0.0-RC1/
>> >>
>> >> I've done the traditional testing of building from the source tarball
>> and
>> >> running a Pi job on a single node cluster. I also verified that the
>> shaded
>> >> jars are not empty.
>> >>
>> >> Found one issue that create-release (probably due to the mvn deploy
>> change)
>> >> didn't sign the artifacts, but I fixed that by calling mvn one more
>> time.
>> >> Available here:
>> >>
>> >> https://repository.apache.org/content/repositories/orgapache
>> hadoop-1075/
>> >>
>> >> This release will run the standard 5 days, closing on Dec 13th at
>> 12:31pm
>> >> Pacific. My +1 to start.
>> >>
>> >> Best,
>> >> Andrew
>> >>
>>
>>
>


Re: [VOTE] Release Apache Hadoop 2.7.5 (RC1)

2017-12-11 Thread Kihwal Lee
+1 (binding)
- checked out the rc1 tag and built the source (-Pdist -Pnative)
- brought up a pseudo distributed cluster
- ran sample MR jobs
- verified web UIs working.

On Thu, Dec 7, 2017 at 9:22 PM, Konstantin Shvachko 
wrote:

> Hi everybody,
>
> I updated CHANGES.txt and fixed documentation links.
> Also committed  MAPREDUCE-6165, which fixes a consistently failing test.
>
> This is RC1 for the next dot release of the Apache Hadoop 2.7 line. The
> previous one, 2.7.4, was released August 4, 2017.
> Release 2.7.5 includes critical bug fixes and optimizations. See more
> details in Release Note:
> http://home.apache.org/~shv/hadoop-2.7.5-RC1/releasenotes.html
>
> RC1 is available at: http://home.apache.org/~shv/hadoop-2.7.5-RC1/
>
> Please give it a try and vote on this thread. The vote will run for 5 days
> ending 12/13/2017.
>
> My up to date public key is available from:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Thanks,
> --Konstantin
>


Re: [VOTE] Release Apache Hadoop 2.9.0 (RC0)

2017-12-11 Thread Ajay Kumar
+1 (non binding)
Thanks for working on this, Arun!!

Tested the cases below after building from source on a Mac, Java 1.8:
1) setup a small cluster
2) run hdfs commands
3) ran wordcount, pi and TestDFSIO tests.

Thanks,
Ajay

On 12/10/17, 7:47 PM, "Vinod Kumar Vavilapalli"  wrote:

Missed this response on the old thread, but closing the loop here..

The incompatibility conundrum with Dot-zeroes did indeed happen, in early 
2.x releases - multiple times at that. And the downstream projects did raise 
concerns at these unfixable situations.

I wasn't advocating a new formalism; this was more of a lesson taken from 
real-life experience that I wanted to share with fellow RMs - as IMO the effort 
was worth the value for the releases where I used it.

If the RMs of these more recent releases choose not to do this because they 
perceive that a release won't run into those past issues at all, it's clearly 
their call. It's just that we are bound to potentially make the same mistakes 
and learn the same lesson all over again.

+Vinod

> On Nov 9, 2017, at 9:51 AM, Chris Douglas  wrote:
> 
> The labor required for these release formalisms is exceeding their
> value. Our minor releases have more bugs than our patch releases (we
> hope), but every consumer should understand how software versioning
> works. Every device I own has bugs on major OS updates. That doesn't
> imply that every minor release is strictly less stable than a patch
> release, and users need to be warned off it.
> 
> In contrast, we should warn users about features that compromise
> invariants like security or durability, either by design or due to
> their early stage of development. We can't reasonably expect them to
> understand those tradeoffs, since they depend on internal details of
> Hadoop.
> 
> On Wed, Nov 8, 2017 at 5:34 PM, Vinod Kumar Vavilapalli
> > wrote:
>> When we tried option (b), we used to make .0 as a GA release, but 
downstream projects like Tez, Hive, Spark would come back and find an 
incompatible change - and now we were forced into a conundrum - is fixing this 
incompatible change itself an incompatibility?
> 
> Every project takes these case-by-case. Most of the time we'll
> accommodate the old semantics- and we try to be explicit where we
> promise compatibility- but this isn't a logic problem, it's a
> practical one. If it's an easy fix to an obscure API, we probably
> won't even hear about it.
> 
>> Long story short, I'd just add to your voting thread and release notes 
that 2.9.0 still needs to be tested downstream and so users may want to wait 
for subsequent point releases.
> 
> It's uncomfortable to have four active release branches, with 3.1
> coming in early 2018. We all benefit from the shared deployment
> experiences that harden these releases, and fragmentation creates
> incentives to compete for that attention. Rather than tacitly
> scuffling over waning interest in the 2.x series, I'd endorse your
> other thread encouraging consolidation around 3.x.
> 
> To that end, there is no policy or precedent that requires that new
> minor releases be labeled as "alpha". If there is cause to believe
> that 2.9.0 is not ready to release in the stable line, then we
> shouldn't release it. -C
> 
>>> On Nov 8, 2017, at 12:43 AM, Subru Krishnan  wrote:
>>> 
>>> We are canceling the RC due to the issue that Rohith/Sunil identified. 
The
>>> issue was difficult to track down as it only happens when you use IP 
for ZK
>>> (works fine with host names) and moreover if ZK and RM are co-located on
>>> same machine. We are hopeful to get the fix in tomorrow and roll out 
RC1.
>>> 
>>> Thanks to everyone for the extensive testing/validation. Hopefully cost 
to
>>> replicate with RC1 is much lower.
>>> 
>>> -Subru/Arun.
>>> 
>>> On Tue, Nov 7, 2017 at 5:27 PM, Konstantinos Karanasos 
>> 
 +1 from me too.
 
 Did the following:
 1) set up a 9-node cluster;
 2) ran some Gridmix jobs;
 3) ran (2) after enabling opportunistic containers (used a mix of
 guaranteed and opportunistic containers for each job);
 4) ran (3) but this time enabling distributed scheduling of 
opportunistic
 containers.
 
 All the above worked with no issues.
 
 Thanks for all the effort guys!
 
 Konstantinos
 
 On Tue, Nov 7, 2017 at 2:56 PM, Eric Badger 
 wrote:
 
> +1 (non-binding) pending the issue that Sunil/Rohith pointed out
> 
> - 

[jira] [Created] (HADOOP-15109) TestDFSIO -read -random doesn't work on file sized 4GB

2017-12-11 Thread zhoutai.zt (JIRA)
zhoutai.zt created HADOOP-15109:
---

 Summary: TestDFSIO -read -random doesn't work on file sized 4GB
 Key: HADOOP-15109
 URL: https://issues.apache.org/jira/browse/HADOOP-15109
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 3.0.0-beta1
Reporter: zhoutai.zt


TestDFSIO -read -random throws IllegalArgumentException on a 4GB file. The 
cause is:

{code:java}
private long nextOffset(long current) {
  if (skipSize == 0)
    return rnd.nextInt((int)(fileSize));
  if (skipSize > 0)
    return (current < 0) ? 0 : (current + bufferSize + skipSize);
  // skipSize < 0
  return (current < 0) ? Math.max(0, fileSize - bufferSize) :
         Math.max(0, current + skipSize);
}
{code}

When {color:#d04437}_fileSize_{color} exceeds the signed int range, 
(int)(fileSize) becomes negative and Random.nextInt throws 
IllegalArgumentException("n must be positive").







[jira] [Created] (HADOOP-15108) Testcase TestBalancer#testBalancerWithPinnedBlocks always fails

2017-12-11 Thread Jianfei Jiang (JIRA)
Jianfei Jiang created HADOOP-15108:
--

 Summary: Testcase TestBalancer#testBalancerWithPinnedBlocks always 
fails
 Key: HADOOP-15108
 URL: https://issues.apache.org/jira/browse/HADOOP-15108
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0-beta1
Reporter: Jianfei Jiang


When running the test cases without any code changes, the function 
testBalancerWithPinnedBlocks in TestBalancer.java never succeeds. I tried both 
Ubuntu 16.04 and Red Hat 7, so the failure does not seem to be related to a 
particular Linux environment. I am not sure whether there is a bug in this test 
case or whether I used the wrong environment and settings. Could anyone give 
some advice?

---
Test set: org.apache.hadoop.hdfs.server.balancer.TestBalancer
---
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 100.389 sec <<< 
FAILURE! - in org.apache.hadoop.hdfs.server.balancer.TestBalancer
testBalancerWithPinnedBlocks(org.apache.hadoop.hdfs.server.balancer.TestBalancer)
  Time elapsed: 100.134 sec  <<< ERROR!
java.lang.Exception: test timed out after 100000 milliseconds
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.DataStreamer.waitForAckedSeqno(DataStreamer.java:903)
        at org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:773)
        at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:870)
        at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:842)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
        at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
        at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:441)
        at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancerWithPinnedBlocks(TestBalancer.java:515)



