Re: [VOTE] Release Apache Hadoop 2.9.1 (RC0)

2018-04-27 Thread Rakesh Radhakrishnan
Thanks Sammi for getting this out!

+1 (binding)

Verified the following and looks fine to me.

 * Built from source.
 * Deployed 3 node cluster with NameNode HA.
 * Verified HDFS web UIs.
 * Tried out HDFS shell commands.
 * Ran Mover, Balancer tools.
 * Ran sample MapReduce jobs.


Rakesh

On Thu, Apr 19, 2018 at 2:27 PM, Chen, Sammi  wrote:

> Hi all,
>
> This is the first dot release of Apache Hadoop 2.9 line since 2.9.0 was
> released on November 17, 2017.
>
> It includes 208 changes. Among them are 9 blockers and 15 critical issues;
> the rest are normal bug fixes and feature improvements.
>
> Thanks to the many who contributed to the 2.9.1 development.
>
> The artifacts are available here:  https://dist.apache.org/repos/
> dist/dev/hadoop/2.9.1-RC0/
>
> The RC tag in git is release-2.9.1-RC0. Last git commit SHA is
> e30710aea4e6e55e69372929106cf119af06fd0e.
>
> The maven artifacts are available at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1115/
>
> My public key is available from:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Please try the release and vote; the vote will run for the usual 5 days,
> ending on 4/25/2018 PST time.
>
> Also, I would like to thank Lei (Eddy) Xu and Chris Douglas for their help
> during the RC preparation.
>
> Bests,
> Sammi Chen
>


Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-04-27 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/450/

[Apr 25, 2018 10:50:52 PM] (omalley) HDFS-8456. Introduce 
STORAGE_CONTAINER_SERVICE as a new NodeType.
[Apr 25, 2018 10:51:00 PM] (omalley) HDFS-8614. OzoneHandler : Add Quota 
Support. (Contributed by Anu
[Apr 25, 2018 10:51:00 PM] (omalley) HDFS-8448. Create REST Interface for 
Volumes. Contributed by Anu
[Apr 25, 2018 10:51:00 PM] (omalley) HDFS-8637. OzoneHandler : Add Error Table. 
(Contributed by Anu Engineer)
[Apr 25, 2018 10:51:00 PM] (omalley) HDFS-8634. OzoneHandler: Add userAuth 
Interface and Simple userAuth
[Apr 25, 2018 10:51:00 PM] (omalley) HDFS-8644. OzoneHandler : Add volume 
handler. (Contributed by Anu
[Apr 25, 2018 10:51:00 PM] (omalley) HDFS-8654. OzoneHandler : Add ACL support. 
(Contributed by Anu Engineer)
[Apr 25, 2018 10:51:00 PM] (omalley) HDFS-8680. OzoneHandler : Add Local 
StorageHandler support for volumes.
[Apr 25, 2018 10:51:00 PM] (omalley) HDFS-8717. OzoneHandler : Add common 
bucket objects. (Contributed by Anu
[Apr 25, 2018 10:51:00 PM] (omalley) HDFS-8753. Ozone: Unify 
StorageContainerConfiguration with
[Apr 25, 2018 10:51:00 PM] (omalley) HDFS-8695. OzoneHandler : Add Bucket REST 
Interface. (aengineer)
[Apr 25, 2018 10:51:36 PM] (omalley) HDFS-8527. OzoneHandler: Integration of 
REST interface and container
[Apr 25, 2018 10:51:39 PM] (omalley) HDFS-8757 : OzoneHandler : Add 
localStorageHandler support for Buckets.
[Apr 25, 2018 10:51:39 PM] (omalley) HDFS-9834. OzoneHandler : Enable 
MiniDFSCluster based testing for Ozone.
[Apr 25, 2018 10:51:39 PM] (omalley) HDFS-9845. OzoneHandler : Support List and 
Info Volumes. Contributed by
[Apr 25, 2018 10:51:51 PM] (omalley) HDFS-9853. Ozone: Add container 
definitions. Contributed by Anu
[Apr 25, 2018 10:51:53 PM] (omalley) HDFS-9848. Ozone: Add Ozone Client lib for 
volume handling. Contributed
[Apr 25, 2018 10:51:53 PM] (omalley) HDFS-9907. Exclude Ozone 
protobuf-generated classes from Findbugs
[Apr 25, 2018 10:51:53 PM] (omalley) HDFS-9873. Ozone: Add container transport 
server. Contributed by Anu
[Apr 25, 2018 10:51:58 PM] (omalley) HDFS-9891. Ozone: Add container transport 
client. Contributed by Anu
[Apr 25, 2018 10:52:00 PM] (omalley) HDFS-9920. Stop tracking CHANGES.txt in 
the HDFS-7240 feature branch.
[Apr 25, 2018 10:52:00 PM] (omalley) HDFS-9916. OzoneHandler : Add Key handler. 
Contributed by Anu Engineer.
[Apr 25, 2018 10:52:00 PM] (omalley) HDFS-9916. OzoneHandler : Add Key handler. 
Contributed by Anu Engineer.
[Apr 25, 2018 10:52:00 PM] (omalley) HDFS-9925. Ozone: Add Ozone Client lib for 
bucket handling. Contributed
[Apr 25, 2018 10:52:00 PM] (omalley) HDFS-9926. ozone : Add volume commands to 
CLI. Contributed by Anu
[Apr 25, 2018 10:52:00 PM] (omalley) HDFS-9944. Ozone : Add container 
dispatcher. Contributed by Anu
[Apr 25, 2018 10:52:00 PM] (omalley) HDFS-9961. Ozone: Add buckets commands to 
CLI. Contributed by Anu
[Apr 25, 2018 10:52:00 PM] (omalley) HDFS-10180. Ozone: Refactor container 
Namespace. Contributed by Anu
[Apr 25, 2018 10:52:00 PM] (omalley) HDFS-9960. OzoneHandler : Add localstorage 
support for keys. Contributed
[Apr 25, 2018 10:52:07 PM] (omalley) HDFS-10179. Ozone: Adding logging support. 
Contributed by Anu Engineer.
[Apr 25, 2018 10:52:09 PM] (omalley) HDFS-10196. Ozone : Enable better error 
reporting for failed commands in
[Apr 25, 2018 10:52:25 PM] (omalley) HDFS-10202. ozone : Add key commands to 
CLI. Contributed by Anu
[Apr 25, 2018 10:52:33 PM] (omalley) HDFS-10195. Ozone: Add container 
persistence. Contributed by Anu
[Apr 25, 2018 10:52:46 PM] (omalley) HDFS-8210. Ozone: Implement storage 
container manager. Contributed by
[Apr 25, 2018 10:52:48 PM] (omalley) HDFS-10238. Ozone : Add chunk persistance. 
Contributed by Anu Engineer.
[Apr 25, 2018 10:52:48 PM] (omalley) HDFS-10250. Ozone: Add key Persistence. 
Contributed by Anu Engineer.
[Apr 25, 2018 10:52:56 PM] (omalley) HDFS-10268. Ozone: end-to-end integration 
for create/get volumes,
[Apr 25, 2018 10:52:58 PM] (omalley) HDFS-10251. Ozone: Shutdown cleanly. 
Contributed by Anu Engineer
[Apr 25, 2018 10:52:58 PM] (omalley) HDFS-10278. Ozone: Add paging support to 
list Volumes. Contributed by
[Apr 25, 2018 10:53:05 PM] (omalley) HDFS-10232. Ozone: Make config key naming 
consistent. Contributed by Anu
[Apr 25, 2018 10:53:07 PM] (omalley) HDFS-10349. Ozone: StorageContainerManager 
fails to compile after merge
[Apr 25, 2018 10:53:07 PM] (omalley) HDFS-10351. Ozone: Optimize key writes to 
chunks by providing a bulk
[Apr 25, 2018 10:53:12 PM] (omalley) HDFS-10361. Support starting 
StorageContainerManager as a daemon.
[Apr 25, 2018 10:53:19 PM] (omalley) HDFS-10363. Ozone: Introduce new config 
keys for SCM service endpoints.
[Apr 25, 2018 10:53:21 PM] (omalley) HDFS-10420. Fix Ozone unit tests to use 
MiniOzoneCluster. Contributed by
[Apr 25, 2018 10:53:21 PM] (omalley) HDFS-10897. Ozone: SCM: Add NodeManager. 
Contributed by Anu Engineer.

Re: RE: [VOTE] Release Apache Hadoop 2.9.1 (RC0)

2018-04-27 Thread Eric Payne
Thanks Sammi for all of the hard work!

+1 (binding)
Tested the following:
- Build from source
- Deploy to pseudo cluster
- Various error cases where the AM fails to start. The failure reasons were
propagated to the GUI and to the client.
- Streaming mapred job
- Killing apps from the command line
- Intra-queue preemption
- Inter-queue preemption
- Verified preemption properties are refreshable
Thanks,
Eric Payne

On Wednesday, April 25, 2018, 12:12:24 AM CDT, Chen, Sammi wrote:

Paste the links here,

The artifacts are available here:  
https://dist.apache.org/repos/dist/dev/hadoop/2.9.1-RC0/  

The RC tag in git is release-2.9.1-RC0. Last git commit SHA is 
e30710aea4e6e55e69372929106cf119af06fd0e.

The maven artifacts are available at:
https://repository.apache.org/content/repositories/orgapachehadoop-1115/ 

My public key is available from:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS 


Bests,
Sammi

-Original Message-
From: Chen, Sammi [mailto:sammi.c...@intel.com] 
Sent: Wednesday, April 25, 2018 12:02 PM
To: junping...@apache.org
Cc: Hadoop Common ; Rushabh Shah 
; hdfs-dev ; 
mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: RE: [VOTE] Release Apache Hadoop 2.9.1 (RC0)


Thanks Jason Lowe for the quick investigation finding that the test failures
are limited to the tests themselves.

Based on the current facts, I would like to continue the VOTE for 2.9.1
RC0, and extend the vote deadline to the end of this week, 4/27.


I will add the following note to the final release notes:

HADOOP-15385: Test case failures in the hadoop-distcp project do not impact
the distcp function in 2.9.1.


Bests,
Sammi
From: 俊平堵 [mailto:junping...@apache.org]
Sent: Tuesday, April 24, 2018 11:50 PM
To: Chen, Sammi 
Cc: Hadoop Common ; Rushabh Shah 
; hdfs-dev ; 
mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 2.9.1 (RC0)

Thanks for reporting the issue, Rushabh! Actually, we found that these test
failures are test-only issues, not production issues, so they are not really
a solid blocker for the release. Anyway, I will let the RM of 2.9.1 decide
whether or not to cancel the RC for this test issue.

Thanks,

Junping


Chen, Sammi wrote on Tuesday, April 24, 2018 at 7:50 PM:
Hi Rushabh,

Thanks for reporting the issue.  I will upload a new RC candidate soon after
the test failure issue is resolved.


Bests,
Sammi Chen
From: Rushabh Shah [mailto:rusha...@oath.com]
Sent: Friday, April 20, 2018 5:12 AM
To: Chen, Sammi >
Cc: Hadoop Common 
>; hdfs-dev 
>; 
mapreduce-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 2.9.1 (RC0)

Hi Chen,
I am so sorry to bring this up now, but there are 16 tests failing in the
hadoop-distcp project.
I have opened a ticket and cc'ed Junping since he is a branch-2.8 committer,
but I missed pinging you.

IMHO we should fix the unit tests before we release, but I would leave it up
to the other members to give their opinion.





[jira] [Created] (HADOOP-15426) S3guard throttle event on delete => 400 error code => exception

2018-04-27 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15426:
---

 Summary: S3guard throttle event on delete => 400 error code => 
exception
 Key: HADOOP-15426
 URL: https://issues.apache.org/jira/browse/HADOOP-15426
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.2.0
Reporter: Steve Loughran


Managed to trigger this on a parallel test run:
{code}
org.apache.hadoop.fs.s3a.AWSServiceThrottledException: delete on 
s3a://hwdev-steve-ireland-new/fork-0005/test/existing-dir/existing-file: 
com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException: 
The level of configured provisioned throughput for the table was exceeded. 
Consider increasing your provisioning level with the UpdateTable API. (Service: 
AmazonDynamoDBv2; Status Code: 400; Error Code: 
ProvisionedThroughputExceededException; Request ID: 
RDM3370REDBBJQ0SLCLOFC8G43VV4KQNSO5AEMVJF66Q9ASUAAJG): The level of configured 
provisioned throughput for the table was exceeded. Consider increasing your 
provisioning level with the UpdateTable API. (Service: AmazonDynamoDBv2; Status 
Code: 400; Error Code: ProvisionedThroughputExceededException; Request ID: 
RDM3370REDBBJQ0SLCLOFC8G43VV4KQNSO5AEMVJF66Q9ASUAAJG)
at 

{code}

We should be able to handle this, though it's a 400 "bad things happened"
error, not the 503 from S3.
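
For illustration, a minimal sketch of the kind of bounded exponential backoff
that could absorb this throttle event; the helper class and method names here
are hypothetical, not the actual S3A/S3Guard retry code:

{code:java}
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException;

import java.io.IOException;
import java.util.concurrent.Callable;

/** Hypothetical retry helper; illustrative only. */
final class ThrottleRetry {
  private static final int MAX_ATTEMPTS = 5;
  private static final long BASE_DELAY_MS = 100;

  static <T> T retryOnThrottle(Callable<T> op) throws IOException {
    for (int attempt = 1; ; attempt++) {
      try {
        return op.call();
      } catch (ProvisionedThroughputExceededException e) {
        // DynamoDB signals throttling with this 400-level exception:
        // back off exponentially instead of failing the delete outright.
        if (attempt == MAX_ATTEMPTS) {
          throw new IOException(
              "DynamoDB still throttled after " + MAX_ATTEMPTS + " attempts", e);
        }
        try {
          Thread.sleep(BASE_DELAY_MS << (attempt - 1));
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          throw new IOException("interrupted during backoff", ie);
        }
      } catch (Exception e) {
        throw new IOException(e);
      }
    }
  }
}
{code}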






[jira] [Created] (HADOOP-15425) CopyFilesMapper.doCopyFile hangs with misconfigured sizeBuf

2018-04-27 Thread John Doe (JIRA)
John Doe created HADOOP-15425:
-

 Summary: CopyFilesMapper.doCopyFile hangs with misconfigured 
sizeBuf
 Key: HADOOP-15425
 URL: https://issues.apache.org/jira/browse/HADOOP-15425
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.5.0
Reporter: John Doe


When sizeBuf is configured to be 0, the for loop in the
DistCpV1$CopyFilesMapper.doCopyFile function hangs endlessly.
This is because when the buffer size (i.e., sizeBuf) is 0, bytesRead will
always be 0 on every bytesRead = in.read(buffer) call.
Here is the code snippet.

{code:java}
sizeBuf = job.getInt("copy.buf.size", 128 * 1024); //when copy.buf.size = 0
buffer = new byte[sizeBuf];

private long doCopyFile(FileStatus srcstat, Path tmpfile, Path absdst, 
Reporter reporter) throws IOException {
  FSDataInputStream in = null;
  FSDataOutputStream out = null;
  long bytesCopied = 0L;
  try {
Path srcPath = srcstat.getPath();
// open src file
in = srcPath.getFileSystem(job).open(srcPath);
reporter.incrCounter(Counter.BYTESEXPECTED, srcstat.getLen());
// open tmp file
out = create(tmpfile, reporter, srcstat);
    LOG.info("Copying file " + srcPath + " of size " + srcstat.getLen() + " bytes...");

// copy file
for(int bytesRead; (bytesRead = in.read(buffer)) >= 0; ) {
  out.write(buffer, 0, bytesRead);
  bytesCopied += bytesRead;
  reporter.setStatus(... );
}
  } finally {
checkAndClose(in);
checkAndClose(out);
  }
  return bytesCopied;
}
{code}
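
One defensive option, sketched against the snippet above (not the committed
patch): clamp copy.buf.size to a sane minimum before allocating the buffer,
so in.read(buffer) can never spin on a zero-length array.

{code:java}
// Sketch of a guard for copy.buf.size; not the actual DistCpV1 patch.
int configured = job.getInt("copy.buf.size", 128 * 1024);
if (configured <= 0) {
  LOG.warn("copy.buf.size = " + configured + " is invalid; using 4096");
  configured = 4096;
}
sizeBuf = configured;
buffer = new byte[sizeBuf];
{code}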








[jira] [Created] (HADOOP-15424) XDR.ensureFreeSpace hangs when a corrupted bytebuffer passed into the constructor

2018-04-27 Thread John Doe (JIRA)
John Doe created HADOOP-15424:
-

 Summary: XDR.ensureFreeSpace hangs when a corrupted bytebuffer 
passed into the constructor
 Key: HADOOP-15424
 URL: https://issues.apache.org/jira/browse/HADOOP-15424
 Project: Hadoop Common
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.5.0
Reporter: John Doe


When a corrupted ByteBuffer is passed into the constructor, i.e.,
bytebuffer.capacity() == 0, the while loop in the XDR.ensureFreeSpace
function hangs endlessly.
This is because the loop stride (newCapacity) is always 0, keeping the loop
index (newRemaining) forever below the upper bound (size).
Here is the code snippet.

{code:java}
  public XDR(ByteBuffer buf, State state) {
this.buf = buf;
this.state = state;
  }

  private void ensureFreeSpace(int size) {
Preconditions.checkState(state == State.WRITING);
if (buf.remaining() < size) {
  int newCapacity = buf.capacity() * 2;
  int newRemaining = buf.capacity() + buf.remaining();

  while (newRemaining < size) {
newRemaining += newCapacity;
newCapacity *= 2;
  }

  ByteBuffer newbuf = ByteBuffer.allocate(newCapacity);
  buf.flip();
  newbuf.put(buf);
  buf = newbuf;
}
  }
{code}
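
A sketch of one possible guard, reusing the fields from the snippet above
(not the committed fix): force the doubling stride to start from at least 1
so the loop always makes progress.

{code:java}
  private void ensureFreeSpace(int size) {
    Preconditions.checkState(state == State.WRITING);
    if (buf.remaining() < size) {
      // Guard: a zero-capacity buffer would give newCapacity = 0 and the
      // while loop below would never terminate. Start from at least 1.
      int newCapacity = Math.max(buf.capacity(), 1) * 2;
      int newRemaining = buf.capacity() + buf.remaining();

      while (newRemaining < size) {
        newRemaining += newCapacity;
        newCapacity *= 2;
      }

      ByteBuffer newbuf = ByteBuffer.allocate(newCapacity);
      buf.flip();
      newbuf.put(buf);
      buf = newbuf;
    }
  }
{code}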

 






Re: [VOTE] Release Apache Hadoop 2.9.1 (RC0)

2018-04-27 Thread Eric Badger
+1 (non-binding)

- Verified all hashes and checksums
- Built from source on macOS 10.13.4, Java 1.8.0u65
- Deployed a pseudo cluster
- Ran some example jobs

Eric

On Thu, Apr 26, 2018 at 11:16 PM, Takanobu Asanuma 
wrote:

> Thanks for working on this, Sammi!
>
> +1 (non-binding)
>- Verified checksums
>- Succeeded "mvn clean package -Pdist,native -Dtar -DskipTests"
>- Started hadoop cluster with 1 master and 5 slaves
>- Run TeraGen/TeraSort
>- Verified some hdfs operations
>- Verified Web UI (NameNode, ResourceManager(classic and V2),
> JobHistory, Timeline)
>
> Thanks,
> -Takanobu
>
> > -Original Message-
> > From: Jinhu Wu [mailto:jinhu.wu@gmail.com]
> > Sent: Friday, April 27, 2018 12:39 PM
> > To: Gabor Bota 
> > Cc: Chen, Sammi ; junping...@apache.org; Hadoop
> > Common ; Rushabh Shah ;
> > hdfs-dev ; mapreduce-...@hadoop.apache.org;
> > yarn-...@hadoop.apache.org
> > Subject: Re: [VOTE] Release Apache Hadoop 2.9.1 (RC0)
> >
> > Thanks Sammi for driving the release work!
> >
> > +1 (non-binding)
> >
> > based on the following verification work:
> > - build succeeded from source on Mac OSX 10.13.4, java version "1.8.0_151"
> > - ran hadoop-aliyun tests successfully on the cn-shanghai endpoint
> > - deployed a one-node cluster and verified the PI job
> > - verified a word-count job using hadoop-aliyun as storage.
> >
> > Thanks,
> > jinhu
> >
> > On Fri, Apr 27, 2018 at 12:45 AM, Gabor Bota 
> > wrote:
> >
> > >   Thanks for the work Sammi!
> > >
> > >   +1 (non-binding)
> > >
> > >-   checked out git tag release-2.9.1-RC0
> > >-   S3A unit (mvn test) and integration (mvn verify) test run were
> > >successful on us-west-2
> > >-   built from source on Mac OS X 10.13.4, openjdk 1.8.0_144 (zulu)
> > >-   deployed on a 3 node cluster
> > >-   verified pi job, teragen, terasort and teravalidate
> > >
> > >
> > >   Regards,
> > >   Gabor Bota
> > >
> > > On Wed, Apr 25, 2018 at 7:12 AM, Chen, Sammi 
> wrote:
> > >
> > > >
> > > > Paste the links here,
> > > >
> > > > The artifacts are available here:  https://dist.apache.org/repos/
> > > > dist/dev/hadoop/2.9.1-RC0/
> > > >
> > > > The RC tag in git is release-2.9.1-RC0. Last git commit SHA is
> > > > e30710aea4e6e55e69372929106cf119af06fd0e.
> > > >
> > > > The maven artifacts are available at:
> > > > https://repository.apache.org/content/repositories/orgapachehadoop-1115/
> > > >
> > > > My public key is available from:
> > > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > > >
> > > >
> > > > Bests,
> > > > Sammi
> > > > -Original Message-
> > > > From: Chen, Sammi [mailto:sammi.c...@intel.com]
> > > > Sent: Wednesday, April 25, 2018 12:02 PM
> > > > To: junping...@apache.org
> > > > Cc: Hadoop Common ; Rushabh Shah <
> > > > rusha...@oath.com>; hdfs-dev ;
> > > > mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
> > > > Subject: RE: [VOTE] Release Apache Hadoop 2.9.1 (RC0)
> > > >
> > > >
> > > > Thanks Jason Lowe for the quick investigation finding that the test
> > > > failures are limited to the tests themselves.
> > > >
> > > > Based on the current facts, I would like to continue the VOTE for
> > > > 2.9.1 RC0, and extend the vote deadline to the end of this week, 4/27.
> > > >
> > > >
> > > > I will add the following note to the final release notes:
> > > >
> > > > HADOOP-15385: Test case failures in the hadoop-distcp project do not
> > > > impact the distcp function in 2.9.1.
> > > >
> > > >
> > > > Bests,
> > > > Sammi
> > > > From: 俊平堵 [mailto:junping...@apache.org]
> > > > Sent: Tuesday, April 24, 2018 11:50 PM
> > > > To: Chen, Sammi 
> > > > Cc: Hadoop Common ; Rushabh Shah <
> > > > rusha...@oath.com>; hdfs-dev ;
> > > > mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
> > > > Subject: Re: [VOTE] Release Apache Hadoop 2.9.1 (RC0)
> > > >
> > > > Thanks for reporting the issue, Rushabh! Actually, we found that
> > > > these test failures are test-only issues, not production issues, so
> > > > they are not really a solid blocker for the release. Anyway, I will
> > > > let the RM of 2.9.1 decide whether or not to cancel the RC for this
> > > > test issue.
> > > >
> > > > Thanks,
> > > >
> > > > Junping
> > > >
> > > >
> > > > Chen, Sammi wrote on Tuesday, April 24, 2018 at 7:50 PM:
> > > > Hi Rushabh,
> > > >
> > > > Thanks for reporting the issue.  I will upload a new RC candidate
> > > > soon after the test failure issue is resolved.
> > > >
> > > >
> > > > Bests,
> > > > Sammi Chen
> > > > From: Rushabh 

Re: [NOTIFICATION] Hadoop trunk rebased

2018-04-27 Thread Allen Wittenauer

Did the patch that fixes the mountain of maven warnings get missed?

> On Apr 26, 2018, at 11:52 PM, Akira Ajisaka  wrote:
> 
> + common-dev and mapreduce-dev
> 
> On 2018/04/27 6:23, Owen O'Malley wrote:
>> As we discussed in hdfs-dev@hadoop, I did a force push to Hadoop's trunk to
>> replace the Ozone merge with a rebase.
>> That means that you'll need to rebase your branches.
>> .. Owen
> 





Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-04-27 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-hdds/common 
   Exceptional return value of java.io.File.mkdirs() ignored in
   org.apache.hadoop.utils.LevelDBStore.openDB(File, Options)
   At LevelDBStore.java:[line 79]

Failed junit tests :

   hadoop.hdfs.server.namenode.TestReencryptionWithKMS 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery
   hadoop.yarn.api.resource.TestPlacementConstraintTransformations 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/diff-compile-javac-root.txt
  [288K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/branch-findbugs-hadoop-hdds_common-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [36K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [24K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [20K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/branch-findbugs-hadoop-ozone_tools.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/branch-findbugs-hadoop-tools_hadoop-ozone.txt
  [12K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/diff-javadoc-javadoc-root.txt
  [760K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [500K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
  [40K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/764/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [84K]
   

[jira] [Created] (HADOOP-15423) Use single hash Path -> tuple(DirListingMetadata, PathMetadata) in LocalMetadataStore

2018-04-27 Thread Gabor Bota (JIRA)
Gabor Bota created HADOOP-15423:
---

 Summary: Use single hash Path -> tuple(DirListingMetadata, 
PathMetadata) in LocalMetadataStore
 Key: HADOOP-15423
 URL: https://issues.apache.org/jira/browse/HADOOP-15423
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Gabor Bota
Assignee: Gabor Bota


Right now the s3guard.LocalMetadataStore uses two HashMaps in the
implementation: one for the file hash and one for the dir hash.
{code:java}
  /** Contains directories and files. */
  private LruHashMap<Path, PathMetadata> fileHash;

  /** Contains directory listings. */
  private LruHashMap<Path, DirListingMetadata> dirHash;
{code}

It would be nice to have only one hash instead of these two for storing the
values. One idea for the implementation would be a class with nullable
fields:

{code:java}
  static class LocalMetaEntry {
@Nullable
public PathMetadata pathMetadata;
@Nullable
public DirListingMetadata dirListingMetadata;
  }
{code}

or a Pair (tuple):

{code:java}
Pair<PathMetadata, DirListingMetadata> metaEntry;
{code}

And only one hash/cache for these elements.
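
For illustration, a minimal sketch of the combined-map variant; the class
shape, LRU bound, and field names here are assumptions, not the final patch:

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;
import javax.annotation.Nullable;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.s3a.s3guard.DirListingMetadata;
import org.apache.hadoop.fs.s3a.s3guard.PathMetadata;

/** Illustrative sketch only; not the committed implementation. */
class CombinedLocalCache {

  static class LocalMetaEntry {
    @Nullable PathMetadata pathMetadata;
    @Nullable DirListingMetadata dirListingMetadata;
  }

  private static final int MAX_ENTRIES = 1024; // hypothetical LRU bound

  // One access-ordered LRU map keyed by Path, replacing fileHash and dirHash.
  private final Map<Path, LocalMetaEntry> cache =
      new LinkedHashMap<Path, LocalMetaEntry>(16, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<Path, LocalMetaEntry> eldest) {
          return size() > MAX_ENTRIES;
        }
      };
}
{code}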






[jira] [Created] (HADOOP-15422) s3guard doesnt init when the secrets are in the s3a URI

2018-04-27 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15422:
---

 Summary: s3guard doesnt init when the secrets are in the s3a URI
 Key: HADOOP-15422
 URL: https://issues.apache.org/jira/browse/HADOOP-15422
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.1.0
Reporter: Steve Loughran


If the AWS secrets are in the login portion of the s3a URI, S3Guard doesn't
init. Presumably this is related to the credential chain setup.
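
For context, a hypothetical repro sketch of the deprecated inline-secrets
URI form this refers to; the bucket and credential values are placeholders:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Hypothetical repro sketch; bucket and credentials are placeholders. */
public class InlineSecretsRepro {
  public static void main(String[] args) throws Exception {
    // Deprecated pattern: AWS key and secret embedded in the URI "login" part.
    Path path = new Path("s3a://ACCESSKEY:SECRETKEY@example-bucket/data");
    // Per this report, S3Guard fails to initialize for a URI of this form.
    FileSystem fs = path.getFileSystem(new Configuration());
    System.out.println(fs.getUri());
  }
}
{code}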




[jira] [Created] (HADOOP-15421) Stabilise/formalise the JSON _SUCCESS format used in the S3A committers

2018-04-27 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15421:
---

 Summary: Stabilise/formalise the JSON _SUCCESS format used in the 
S3A committers
 Key: HADOOP-15421
 URL: https://issues.apache.org/jira/browse/HADOOP-15421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 3.2.0
Reporter: Steve Loughran


The S3A committers rely on an atomic PUT to save a JSON summary of the job to
the dest FS, containing files, statistics, etc. This is for internal testing,
but it turns out to be useful for Spark integration testing, Hive, etc.

IBM's Stocator also generates a manifest.

Proposed: come up with an (extensible) design that we are happy with as a
long-lived format.
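
As a strawman for "extensible", a versioned POJO along these lines (all field
names here are hypothetical) would let old readers ignore fields they don't
know about:

{code:java}
import java.util.List;
import java.util.Map;

/** Strawman manifest POJO; every field name here is hypothetical. */
public class SuccessManifest {
  public int version = 1;          // bump on incompatible change
  public String committerName;     // e.g. "magic" or "staging"
  public String jobId;
  public long timestampMillis;
  public List<String> filenames;   // files committed by the job
  public Map<String, Long> stats;  // open-ended counters for extensibility
}
{code}

Serialized with a tolerant JSON binding (e.g. Jackson with unknown-field
failures disabled), old readers could keep working as new fields are added.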






[jira] [Resolved] (HADOOP-15417) retrieveBlock hangs when the configuration file is corrupted

2018-04-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15417.
-
Resolution: Won't Fix

> retrieveBlock hangs when the configuration file is corrupted
> 
>
> Key: HADOOP-15417
> URL: https://issues.apache.org/jira/browse/HADOOP-15417
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 0.23.0
>Reporter: John Doe
>Priority: Major
>
> The bufferSize is read from the configuration files.
> When the configuration file is corrupted, i.e., bufferSize = 0, numRead
> will always be 0, making the while loop's condition always true and
> hanging Jets3tFileSystemStore.retrieveBlock() endlessly.
> Here is the snippet of the code. 
> {code:java}
>   private int bufferSize;
>   this.bufferSize = conf.getInt( 
> S3FileSystemConfigKeys.S3_STREAM_BUFFER_SIZE_KEY, 
> S3FileSystemConfigKeys.S3_STREAM_BUFFER_SIZE_DEFAULT);
>   public File retrieveBlock(Block block, long byteRangeStart)
> throws IOException {
> File fileBlock = null;
> InputStream in = null;
> OutputStream out = null;
> try {
>   fileBlock = newBackupFile();
>   in = get(blockToKey(block), byteRangeStart);
>   out = new BufferedOutputStream(new FileOutputStream(fileBlock));
>   byte[] buf = new byte[bufferSize];
>   int numRead;
>   while ((numRead = in.read(buf)) >= 0) {
> out.write(buf, 0, numRead);
>   }
>   return fileBlock;
> } catch (IOException e) {
>   ...
> } finally {
>   ...
> }
>   }
> {code}
> Similar case: 
> [Hadoop-15415|https://issues.apache.org/jira/browse/HADOOP-15415].
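
For reference, a minimal defensive guard of the kind that would avoid the
spin, reusing the names from the quoted snippet (a sketch only, given the
Won't Fix resolution):

{code:java}
// Sketch only: validate the configured buffer size up front instead of
// spinning; HADOOP-15417 itself was resolved as Won't Fix.
this.bufferSize = conf.getInt(
    S3FileSystemConfigKeys.S3_STREAM_BUFFER_SIZE_KEY,
    S3FileSystemConfigKeys.S3_STREAM_BUFFER_SIZE_DEFAULT);
if (this.bufferSize <= 0) {
  throw new IOException("Invalid S3 stream buffer size: " + this.bufferSize);
}
{code}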






Re: [NOTIFICATION] Hadoop trunk rebased

2018-04-27 Thread Akira Ajisaka

+ common-dev and mapreduce-dev

On 2018/04/27 6:23, Owen O'Malley wrote:

As we discussed in hdfs-dev@hadoop, I did a force push to Hadoop's trunk to
replace the Ozone merge with a rebase.

That means that you'll need to rebase your branches.

.. Owen


