[jira] [Created] (HDFS-10810) Setreplication making file corrupted temporarily when batch IBR is enabled.

2016-08-26 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-10810:
---

 Summary: Setreplication making file corrupted temporarily when 
batch IBR is enabled.
 Key: HDFS-10810
 URL: https://issues.apache.org/jira/browse/HDFS-10810
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula



1) Batch IBR is enabled with the number of allowed committed blocks = 1.
2) One block is written and the file is closed without waiting for the IBR.
3) setReplication is called immediately on the file.

So until the finalized IBR is received, this block will be marked as corrupt.
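The window can be modeled with a toy simulation (a hypothetical Python stand-in, not the actual NameNode logic): until a finalized replica is reported via the batched IBR, a setReplication-triggered check sees zero finalized replicas for the committed block and transiently flags it.

```python
from dataclasses import dataclass

@dataclass
class Block:
    # Replicas confirmed finalized via (batched) incremental block reports.
    finalized_replicas: int = 0
    # File was closed while its last block was still only "committed".
    committed: bool = True

def looks_corrupt(block: Block, expected_replication: int) -> bool:
    """Toy check: with no finalized replicas on record yet, a
    replication-triggered scan treats the committed block as corrupt."""
    return (block.committed
            and block.finalized_replicas == 0
            and expected_replication > 0)

blk = Block()
assert looks_corrupt(blk, expected_replication=2)      # before the batched IBR arrives
blk.finalized_replicas = 1                             # finalized IBR received
assert not looks_corrupt(blk, expected_replication=2)  # state heals by itself
```

The point of the sketch is that the "corrupt" verdict is purely a function of not-yet-delivered reports, so it clears without intervention once the IBR lands.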



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 2.7.3 RC0

2016-08-26 Thread Andrew Wang
Thanks Jason for triaging and sorry for the slow follow up on my part, I
just filed YARN-5569 and YARN-5570 to track these two issues.

On Fri, Aug 5, 2016 at 8:43 AM, Jason Lowe  wrote:

> Both sound like real problems to me, and I think it's appropriate to file
> JIRAs to track them.
>
> Jason
>
>
> --
> *From:* Andrew Wang 
> *To:* Karthik Kambatla 
> *Cc:* larry mccay ; Vinod Kumar Vavilapalli <
> vino...@apache.org>; "common-...@hadoop.apache.org" <
> common-...@hadoop.apache.org>; "hdfs-dev@hadoop.apache.org" <
> hdfs-dev@hadoop.apache.org>; "yarn-...@hadoop.apache.org" <
> yarn-...@hadoop.apache.org>; "mapreduce-...@hadoop.apache.org" <
> mapreduce-...@hadoop.apache.org>
> *Sent:* Thursday, August 4, 2016 5:56 PM
> *Subject:* Re: [VOTE] Release Apache Hadoop 2.7.3 RC0
>
> Could a YARN person please comment on these two issues, one of which Vinay
> also hit? If someone already triaged or filed JIRAs, I missed it.
>
> On Mon, Jul 25, 2016 at 11:52 AM, Andrew Wang 
> wrote:
>
> > I'll also add that, as a YARN newbie, I did hit two usability issues.
> > These are very unlikely to be regressions, and I can file JIRAs if they
> > seem fixable.
> >
> > * I didn't have SSH to localhost set up (new laptop), and when I tried to
> > run the Pi job, it'd exit my window manager session. I feel there must
> be a
> > more developer-friendly solution here.
> > * If you start the NodeManager and not the RM, the NM has a handler for
> > SIGTERM and SIGINT that blocked my Ctrl-C and kill attempts during
> startup.
> > I had to kill -9 it.
> >
> > On Mon, Jul 25, 2016 at 11:44 AM, Andrew Wang 
> > wrote:
> >
> >> I got asked this off-list, so as a reminder, only PMC votes are binding
> >> on releases. Everyone is encouraged to vote on releases though!
> >>
> >> +1 (binding)
> >>
> >> * Downloaded source, built
> >> * Started up HDFS and YARN
> >> * Ran Pi job which as usual returned 4, and a little teragen
> >>
> >> On Mon, Jul 25, 2016 at 11:08 AM, Karthik Kambatla 
> >> wrote:
> >>
> >>> +1 (binding)
> >>>
> >>> * Downloaded and built from source
> >>> * Checked LICENSE and NOTICE
> >>> * Pseudo-distributed cluster with FairScheduler
> >>> * Ran MR and HDFS tests
> >>> * Verified basic UI
> >>>
> >>> On Sun, Jul 24, 2016 at 1:07 PM, larry mccay 
> wrote:
> >>>
> >>> > +1 binding
> >>> >
> >>> > * downloaded and built from source
> >>> > * checked LICENSE and NOTICE files
> >>> > * verified signatures
> >>> > * ran standalone tests
> >>> > * installed pseudo-distributed instance on my mac
> >>> > * ran through HDFS and mapreduce tests
> >>> > * tested credential command
> >>> > * tested webhdfs access through Apache Knox
> >>> >
> >>> >
> >>> > On Fri, Jul 22, 2016 at 10:15 PM, Vinod Kumar Vavilapalli <
> >>> > vino...@apache.org> wrote:
> >>> >
> >>> > > Hi all,
> >>> > >
> >>> > > I've created a release candidate RC0 for Apache Hadoop 2.7.3.
> >>> > >
> >>> > > As discussed before, this is the next maintenance release to follow
> >>> up
> >>> > > 2.7.2.
> >>> > >
> >>> > > The RC is available for validation at:
> >>> > > http://home.apache.org/~vinodkv/hadoop-2.7.3-RC0/ <
> >>> > > http://home.apache.org/~vinodkv/hadoop-2.7.3-RC0/>
> >>> > >
> >>> > > The RC tag in git is: release-2.7.3-RC0
> >>> > >
> >>> > > The maven artifacts are available via repository.apache.org <
> >>> > > http://repository.apache.org/> at
> >>> > > https://repository.apache.org/content/repositories/
> >>> orgapachehadoop-1040/
> >>> > <
> >>> > > https://repository.apache.org/content/repositories/
> >>> orgapachehadoop-1040/
> >>> > >
> >>> > >
> >>> > > The release-notes are inside the tar-balls at location
> >>> > > hadoop-common-project/hadoop-common/src/main/docs/
> releasenotes.html.
> >>> I
> >>> > > hosted this at
> >>> > > http://home.apache.org/~vinodkv/hadoop-2.7.3-RC0/releasenotes.html
> <
> >>> > > http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/
> releasenotes.html
> >>> >
> >>> > for
> >>> > > your quick perusal.
> >>> > >
> >>> > > As you may have noted, a very long fix-cycle for the License &
> Notice
> >>> > > issues (HADOOP-12893) caused 2.7.3 (along with every other Hadoop
> >>> > release)
> >>> > > to slip by quite a bit. This release's related discussion thread is
> >>> > linked
> >>> > > below: [1].
> >>> > >
> >>> > > Please try the release and vote; the vote will run for the usual 5
> >>> days.
> >>> > >
> >>> > > Thanks,
> >>> > > Vinod
> >>> > >
> >>> > > [1]: 2.7.3 release plan:
> >>> > > https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/
> >>> msg24439.html
> >>> > <
> >>> > > http://markmail.org/thread/6yv2fyrs4jlepmmr>
> >>> >
> >>>
> >>
> >>
> >
>
>
>


[jira] [Created] (HDFS-10809) getNumEncryptionZones causes NPE in branch-2.7

2016-08-26 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-10809:


 Summary: getNumEncryptionZones causes NPE in branch-2.7
 Key: HDFS-10809
 URL: https://issues.apache.org/jira/browse/HDFS-10809
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: encryption, namenode
Affects Versions: 2.7.4
Reporter: Zhe Zhang


This bug was caused by the fact that we backported HDFS-10458 from trunk down 
to branch-2.7, but HDFS-8721 initially went only down to branch-2.8. So from 
branch-2.8 and up, the order is HDFS-8721 -> HDFS-10458, while branch-2.7 has 
the reverse order. Hence the inconsistency.
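The failure mode can be illustrated abstractly (a hypothetical Python stand-in, not the actual Hadoop code): when a later change assumes state that an earlier change initializes, applying the two in reverse order leaves that state null, and the first dereference fails — analogous to the NPE from getNumEncryptionZones.

```python
class Namesystem:
    """Toy model: two 'patches' where one initializes state the other reads."""
    def __init__(self, apply_init_patch: bool):
        # Stand-in for the earlier patch: initializes the EZ manager.
        self.ez_manager = {"zones": 0} if apply_init_patch else None

    def get_num_encryption_zones(self) -> int:
        # Stand-in for the later patch: assumes ez_manager was initialized.
        return self.ez_manager["zones"]

ok = Namesystem(apply_init_patch=True)
assert ok.get_num_encryption_zones() == 0

broken = Namesystem(apply_init_patch=False)
try:
    broken.get_num_encryption_zones()   # Python's analogue of the Java NPE
except TypeError:
    pass
```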






[jira] [Resolved] (HDFS-10598) DiskBalancer does not execute multi-steps plan.

2016-08-26 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-10598.

Resolution: Fixed

Thanks Anu, I'll go ahead and resolve then. Feel free to ping this JIRA when 
you file a new one.

> DiskBalancer does not execute multi-steps plan.
> ---
>
> Key: HDFS-10598
> URL: https://issues.apache.org/jira/browse/HDFS-10598
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: diskbalancer
>Affects Versions: 3.0.0-beta1
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10598.00.patch
>
>
> I set up a 3-DN cluster, each DN with 2 small disks. After creating some 
> files to fill HDFS, I added two more small disks to one DN and ran the 
> diskbalancer on that DataNode.
> The disk usage before running diskbalancer:
> {code}
> /dev/loop0  3.9G  2.1G  1.6G 58%  /mnt/data1
> /dev/loop1  3.9G  2.6G  1.1G 71%  /mnt/data2
> /dev/loop2  3.9G  17M  3.6G 1%  /mnt/data3
> /dev/loop3  3.9G  17M  3.6G 1%  /mnt/data4
> {code}
> However, after running diskbalancer (i.e., {{-query}} shows {{PLAN_DONE}})
> {code}
> /dev/loop0  3.9G  1.2G  2.5G 32%  /mnt/data1
> /dev/loop1  3.9G  2.6G  1.1G 71%  /mnt/data2
> /dev/loop2  3.9G  953M  2.7G 26%  /mnt/data3
> /dev/loop3  3.9G  17M  3.6G 1%   /mnt/data4
> {code}
> It is suspicious that in {{DiskBalancerMover#copyBlocks}}, every return 
> calls {{this.setExitFlag}}, which prevents {{copyBlocks()}} from being called 
> multiple times from {{DiskBalancer#executePlan}}. 
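The suspected bug can be sketched with a toy model (hypothetical Python stand-in, not the actual DiskBalancer code): if the worker sets its exit flag on every return path, only the first step of a multi-step plan ever executes.

```python
class Mover:
    """Toy model of the suspected DiskBalancerMover behavior."""
    def __init__(self):
        self.should_run = True
        self.steps_done = []

    def copy_blocks(self, step):
        if not self.should_run:
            return                 # flag set by a previous step blocks all later steps
        self.steps_done.append(step)
        self.set_exit_flag()       # suspected bug: every return sets the exit flag

    def set_exit_flag(self):
        self.should_run = False

def execute_plan(mover, steps):
    # Analogous to DiskBalancer#executePlan calling copyBlocks per step.
    for step in steps:
        mover.copy_blocks(step)

m = Mover()
execute_plan(m, ["data1->data3", "data2->data4"])
assert m.steps_done == ["data1->data3"]   # only the first move runs
```

This matches the observed disk usage: data3 was balanced but data4 was left untouched.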






[jira] [Reopened] (HDFS-10598) DiskBalancer does not execute multi-steps plan.

2016-08-26 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reopened HDFS-10598:
-
  Assignee: Anu Engineer  (was: Lei (Eddy) Xu)

> DiskBalancer does not execute multi-steps plan.
> ---
>
> Key: HDFS-10598
> URL: https://issues.apache.org/jira/browse/HDFS-10598
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: diskbalancer
>Affects Versions: 3.0.0-beta1
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10598.00.patch
>
>






[jira] [Created] (HDFS-10807) Doc about upgrading to a version of HDFS with snapshots may be confusing

2016-08-26 Thread Mingliang Liu (JIRA)
Mingliang Liu created HDFS-10807:


 Summary: Doc about upgrading to a version of HDFS with snapshots 
may be confusing
 Key: HDFS-10807
 URL: https://issues.apache.org/jira/browse/HDFS-10807
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Mingliang Liu
Assignee: Mingliang Liu
Priority: Minor


{code}
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsSnapshots.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsSnapshots.md
index 94a37cd..d856e8c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsSnapshots.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsSnapshots.md
@@ -113,7 +113,7 @@ Upgrading to a version of HDFS with snapshots
 
 The HDFS snapshot feature introduces a new reserved path name used to
 interact with snapshots: `.snapshot`. When upgrading from an
-older version of HDFS, existing paths named `.snapshot` need
+older version of HDFS which does not support snapshots, existing paths named 
`.snapshot` need
 to first be renamed or deleted to avoid conflicting with the reserved path.
 See the upgrade section in
 [the HDFS user guide](HdfsUserGuide.html#Upgrade_and_Rollback)
{code}
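The upgrade precondition the doc describes — find and rename or delete any existing path named `.snapshot` before upgrading — can be sketched as a local scan (an illustrative hypothetical helper; on a real cluster you would list HDFS paths, e.g. via `hdfs dfs -ls -R`, rather than the local filesystem):

```python
import os
import tempfile

def find_reserved_snapshot_paths(root: str):
    """Return every directory or file literally named '.snapshot' under root;
    such paths conflict with the reserved name after the upgrade."""
    hits = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            if name == ".snapshot":
                hits.append(os.path.join(dirpath, name))
    return hits

with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "user", ".snapshot"))
    conflicts = find_reserved_snapshot_paths(root)
    assert len(conflicts) == 1 and conflicts[0].endswith(".snapshot")
```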






[jira] [Created] (HDFS-10806) Mapreduce jobs fail when StoragePolicy is set

2016-08-26 Thread Dennis Lattka (JIRA)
Dennis Lattka created HDFS-10806:


 Summary: Mapreduce jobs fail when StoragePolicy is set
 Key: HDFS-10806
 URL: https://issues.apache.org/jira/browse/HDFS-10806
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 2.7.1
 Environment: Apache Hadoop 10 Datanode cluster running version 2.7.1 
using SAS 15K 6Gbps HDD drives and SATA SSD drives. 
Reporter: Dennis Lattka


Before applying any StoragePolicy, any mapreduce job completes as expected. As 
soon as a StoragePolicy is set (tested using HOT and ONE_SSD), any mapreduce 
job fails with the following error: 

NOTE: I also tested this with Hadoop streaming using two Python scripts, one 
for the mapper and one for the reducer, and the error is identical.

ERROR:
[hdfs@hadoop-vm-client 16:25:53] /usr/hdp/current/hadoop-client/bin/yarn 
--config /usr/hdp/current/hadoop-client/conf jar 
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
randomtextwriter -D mapreduce.randomtextwriter.totalbytes=2147483648000 
/benchmarks/Wordcount/Input.11358
16/08/26 12:58:38 INFO impl.TimelineClientImpl: Timeline service address: 
http://hadoop-vm-rm.aae.lcl:8188/ws/v1/timeline/
16/08/26 12:58:38 INFO client.RMProxy: Connecting to ResourceManager at 
hadoop-vm-rm.aae.lcl/172.16.4.12:8050
Running 2000 maps.
Job started: Fri Aug 26 12:58:39 CDT 2016
16/08/26 12:58:39 INFO impl.TimelineClientImpl: Timeline service address: 
http://hadoop-vm-rm.aae.lcl:8188/ws/v1/timeline/
16/08/26 12:58:39 INFO client.RMProxy: Connecting to ResourceManager at 
hadoop-vm-rm.aae.lcl/172.16.4.12:8050
16/08/26 12:58:40 INFO mapreduce.JobSubmitter: Cleaning up the staging area 
/user/hdfs/.staging/job_1472151637713_0002
org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): 
java.lang.IllegalArgumentException
at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.getStorageTypeDeltas(FSDirectory.java:789)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:711)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.unprotectedSetReplication(FSDirAttrOp.java:397)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setReplication(FSDirAttrOp.java:151)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setReplication(FSNamesystem.java:1968)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setReplication(NameNodeRpcServer.java:740)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setReplication(ClientNamenodeProtocolServerSideTranslatorPB.java:440)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)

at org.apache.hadoop.ipc.Client.call(Client.java:1427)
at org.apache.hadoop.ipc.Client.call(Client.java:1358)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy22.setReplication(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setReplication(ClientNamenodeProtocolTranslatorPB.java:349)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:252)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
at com.sun.proxy.$Proxy23.setReplication(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.setReplication(DFSClient.java:1902)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$9.doCall(DistributedFileSystem.java:517)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$9.doCall(DistributedFileSystem.java:513)
at 

Re: Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-08-26 Thread Allen Wittenauer

> On Aug 26, 2016, at 7:55 AM, Apache Jenkins Server 
>  wrote:
> 
> 
>Failed CTEST tests :
> 
>   test_test_libhdfs_threaded_hdfs_static 
>   test_test_libhdfs_zerocopy_hdfs_static 


Something here likely broke these tests:

[Aug 24, 2016 7:47:52 AM] (aajisaka) HADOOP-13538. Deprecate getInstance and 
initialize methods with Path in
[Aug 24, 2016 1:46:47 PM] (daryn) HDFS-10762. Pass IIP for file status related 
methods
[Aug 24, 2016 1:57:23 PM] (kai.zheng) HDFS-8905. Refactor 
DFSInputStream#ReaderStrategy. Contributed by Kai
[Aug 24, 2016 2:17:05 PM] (kai.zheng) MAPREDUCE-6578. Add support for HDFS 
heterogeneous storage testing to
[Aug 24, 2016 2:40:51 PM] (jlowe) MAPREDUCE-6761. Regression when handling 
providers - invalid
[Aug 24, 2016 5:14:46 PM] (xiao) HADOOP-13396. Allow pluggable audit loggers in 
KMS. Contributed by Xiao
[Aug 24, 2016 8:21:08 PM] (kihwal) HDFS-10772. Reduce byte/string conversions 
for get listing. Contributed
[Aug 25, 2016 1:55:00 AM] (aajisaka) MAPREDUCE-6767. TestSlive fails after a 
common change. Contributed by
[Aug 25, 2016 4:54:57 AM] (aajisaka) HADOOP-13534. Remove unused 
TrashPolicy#getInstance and initialize code.







Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-08-26 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/145/

[Aug 25, 2016 8:40:34 AM] (varunsaxena) YARN-5537. Fix intermittent failure of
[Aug 25, 2016 8:48:26 AM] (vvasudev) YARN-5042. Mount /sys/fs/cgroup into 
Docker containers as read only
[Aug 25, 2016 1:58:42 PM] (aw) HADOOP-13532. Fix typo in 
hadoop_connect_to_hosts error message (Albert
[Aug 25, 2016 2:11:06 PM] (aw) HADOOP-13533. Do not require user to set 
HADOOP_SSH_OPTS to a non-null
[Aug 25, 2016 2:42:06 PM] (jlowe) YARN-5389. 
TestYarnClient#testReservationDelete fails. Contributed by
[Aug 25, 2016 4:00:44 PM] (xyao) HDFS-10748. 
TestFileTruncate#testTruncateWithDataNodesRestart runs
[Aug 25, 2016 4:44:13 PM] (kihwal) HADOOP-13465. Design Server.Call to be 
extensible for unified call
[Aug 25, 2016 4:50:12 PM] (weichiu) MAPREDUCE-6764. Teragen LOG initialization 
bug. Contributed by Yufei Gu.
[Aug 25, 2016 9:04:54 PM] (kihwal) Revert "HADOOP-13465. Design Server.Call to 
be extensible for unified
[Aug 27, 2016 2:54:25 AM] (kai.zheng) HDFS-10795. Fix an error in 
ReaderStrategy#ByteBufferStrategy.
[Aug 26, 2016 3:17:21 AM] (naganarasimha_gr) YARN-5564. Fix typo in




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 

Failed junit tests :

   hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency 
   
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 

Timed out junit tests :

   org.apache.hadoop.http.TestHttpServerLifecycle 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/145/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/145/artifact/out/diff-compile-javac-root.txt
  [172K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/145/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/145/artifact/out/diff-patch-pylint.txt
  [16K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/145/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/145/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/145/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/145/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/145/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   CTEST:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/145/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
  [24K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/145/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [120K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/145/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [144K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/145/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/145/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [36K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/145/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/145/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [268K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/145/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [124K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/145/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org




[jira] [Created] (HDFS-10805) Reduce runtime for append test

2016-08-26 Thread JIRA
Gergely Novák created HDFS-10805:


 Summary: Reduce runtime for append test
 Key: HDFS-10805
 URL: https://issues.apache.org/jira/browse/HDFS-10805
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Gergely Novák
Priority: Minor


{{testAppend}} takes by far the most time in the {{TestHDFSFileSystemContract}} 
suite: more than 1 min 45 sec, while all the other tests run in under 3 
seconds. The test performs 500 appends, which takes a lot of time. I suggest 
reducing the number of appends, as that won't change the test's strength, only 
its runtime. 
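The linear relationship between append count and runtime can be sketched (an illustrative hypothetical stand-in using the local filesystem, not the actual JUnit test against HDFS):

```python
import os
import tempfile

def run_append_test(path: str, num_appends: int, chunk: bytes = b"x" * 16) -> int:
    """Append num_appends chunks to path and return the final size.
    Cost grows linearly with num_appends, so a smaller count exercises
    the same append code path in a fraction of the time."""
    for _ in range(num_appends):
        with open(path, "ab") as f:
            f.write(chunk)
    return os.path.getsize(path)

with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "appended")
    # 50 appends cover the same behavior that 500 would, ~10x faster.
    assert run_append_test(p, 50) == 50 * 16
```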






[jira] [Created] (HDFS-10804) Use finer-granularity lock for ReplicaMap

2016-08-26 Thread Fenghua Hu (JIRA)
Fenghua Hu created HDFS-10804:
-

 Summary: Use finer-granularity lock for ReplicaMap
 Key: HDFS-10804
 URL: https://issues.apache.org/jira/browse/HDFS-10804
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Reporter: Fenghua Hu
Assignee: Fenghua Hu
Priority: Minor


In the current implementation, ReplicaMap takes an external object as the lock 
for synchronization.

In FsDatasetImpl#FsDatasetImpl(), the object used for synchronization is 
"this", i.e. the FsDatasetImpl instance: 
volumeMap = new ReplicaMap(this);

and in the private FsDatasetImpl#addVolume(), the "this" object is used for 
synchronization as well:
ReplicaMap tempVolumeMap = new ReplicaMap(this);

I am not sure we really need such a big object as FsDatasetImpl for ReplicaMap's 
synchronization. If it's not necessary, a finer-grained lock could reduce lock 
contention on the FsDatasetImpl object and improve performance. 

Could you please give me some suggestions? Thanks a lot!

Fenghua
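The proposal can be sketched with a toy stand-in (hypothetical Python, not the actual FsDatasetImpl/ReplicaMap code): give the map its own private lock instead of synchronizing on the enclosing dataset object, so map operations no longer contend with everything else serialized on that big lock.

```python
import threading

class ReplicaMap:
    """Toy stand-in: synchronize on a dedicated lock rather than on the
    enclosing dataset object, reducing contention on the coarse lock."""
    def __init__(self, lock=None):
        # Today's behavior: caller passes in the shared dataset-wide lock.
        # Proposed behavior: default to a private, finer-granularity lock.
        self._lock = lock if lock is not None else threading.Lock()
        self._replicas = {}

    def add(self, block_id, replica):
        with self._lock:
            self._replicas[block_id] = replica

    def get(self, block_id):
        with self._lock:
            return self._replicas.get(block_id)

shared = threading.Lock()       # analogous to locking on FsDatasetImpl itself
coarse = ReplicaMap(shared)     # current style: new ReplicaMap(this)
fine = ReplicaMap()             # proposal: map owns its lock

fine.add(1, "replica-1")
assert fine.get(1) == "replica-1"
```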


