[jira] [Updated] (HDFS-6867) For DFSOutputStream, do pipeline recovery for a single block in the background

2014-08-21 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-6867:


Attachment: HDFS-6867-design-20140821.pdf

New design document incorporating the progress on HDFS-3689. The general 
approach is separated from the choice of whether to use variable-sized blocks, 
so it can be assessed independently of the final outcome of HDFS-3689. 

> For DFSOutputStream, do pipeline recovery for a single block in the background
> --
>
> Key: HDFS-6867
> URL: https://issues.apache.org/jira/browse/HDFS-6867
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.0.0-alpha
>Reporter: Colin Patrick McCabe
> Attachments: HDFS-6867-design-20140820.pdf, 
> HDFS-6867-design-20140821.pdf
>
>
> For DFSOutputStream, we should be able to do pipeline recovery in the 
> background, while the user is continuing to write to the file.  This is 
> especially useful for long-lived clients that write to an HDFS file slowly. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6886) Use single editlog record for creating file + overwrite.

2014-08-21 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106540#comment-14106540
 ] 

Yi Liu commented on HDFS-6886:
--

Hi, thanks [~vinayrpet] for the review (the latest patch has a few changes, 
described below). To create or remove a file, we must check 'w' permission on 
its (ancestor) parent directory; creating a file with overwrite implies the old 
file is removed if it exists before the new one is created. Creating with 
overwrite additionally requires 'w' permission on the path itself. So we need 
to perform both checks.
The permission check then follows the same logic as the original code and the 
HDFS permissions guide (POSIX-like mode).
{code}
if (isPermissionEnabled) {
  // To remove a file, we need to check 'w' permission of parent
  checkParentAccess(pc, src, FsAction.WRITE);
}
{code}
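
Putting the two checks together, here is a minimal sketch of the logic described above (using the FSNamesystem-style helper names quoted in this thread; the exact shape and placement in the patch may differ):
{code}
// Sketch only: both checks for a create request with the overwrite flag set.
if (isPermissionEnabled) {
  if (overwrite && myFile != null) {
    // Removing the existing file requires 'w' on the parent directory ...
    checkParentAccess(pc, src, FsAction.WRITE);
    // ... and overwriting the path also requires 'w' on the path itself.
    checkPathAccess(pc, src, FsAction.WRITE);
  } else {
    // Plain create: 'w' on the closest existing ancestor is enough.
    checkAncestorAccess(pc, src, FsAction.WRITE);
  }
}
{code}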

> Use single editlog record for creating file + overwrite.
> 
>
> Key: HDFS-6886
> URL: https://issues.apache.org/jira/browse/HDFS-6886
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Critical
> Attachments: HDFS-6886.001.patch, HDFS-6886.002.patch, editsStored
>
>
> As discussed in HDFS-6871, per [~jingzhao] and [~cmccabe]'s suggestion, we 
> could make a further improvement in this JIRA by using a single editlog record 
> for creating a file with overwrite: the overwrite flag would be recorded in the 
> editlog record for the file creation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6908) incorrect snapshot directory diff generated by snapshot deletion

2014-08-21 Thread Juan Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Yu updated HDFS-6908:
--

Attachment: HDFS-6908.002.patch

> incorrect snapshot directory diff generated by snapshot deletion
> 
>
> Key: HDFS-6908
> URL: https://issues.apache.org/jira/browse/HDFS-6908
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Juan Yu
>Assignee: Juan Yu
>Priority: Critical
> Attachments: HDFS-6908.001.patch, HDFS-6908.002.patch
>
>
> In the following scenario, deleting a snapshot can generate an incorrect snapshot 
> directory diff and a corrupted fsimage; if you restart the NN after that, you will 
> get a NullPointerException.
> 1. create a directory and create a file under it
> 2. take a snapshot
> 3. create another file under that directory
> 4. take a second snapshot
> 5. delete both files and the directory
> 6. delete the second snapshot
> An incorrect directory diff is generated, and restarting the NN throws an NPE:
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.addToDeletedList(FSImageFormatPBSnapshot.java:246)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDeletedList(FSImageFormatPBSnapshot.java:265)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDirectoryDiffList(FSImageFormatPBSnapshot.java:328)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadSnapshotDiffSection(FSImageFormatPBSnapshot.java:192)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:254)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:168)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:208)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:906)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:892)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:715)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:653)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:276)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:882)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:629)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:498)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:554)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6908) incorrect snapshot directory diff generated by snapshot deletion

2014-08-21 Thread Juan Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Yu updated HDFS-6908:
--

Status: Patch Available  (was: Open)

> incorrect snapshot directory diff generated by snapshot deletion
> 
>
> Key: HDFS-6908
> URL: https://issues.apache.org/jira/browse/HDFS-6908
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Juan Yu
>Assignee: Juan Yu
>Priority: Critical
> Attachments: HDFS-6908.001.patch, HDFS-6908.002.patch
>
>
> In the following scenario, deleting a snapshot can generate an incorrect snapshot 
> directory diff and a corrupted fsimage; if you restart the NN after that, you will 
> get a NullPointerException.
> 1. create a directory and create a file under it
> 2. take a snapshot
> 3. create another file under that directory
> 4. take a second snapshot
> 5. delete both files and the directory
> 6. delete the second snapshot
> An incorrect directory diff is generated, and restarting the NN throws an NPE:
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.addToDeletedList(FSImageFormatPBSnapshot.java:246)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDeletedList(FSImageFormatPBSnapshot.java:265)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDirectoryDiffList(FSImageFormatPBSnapshot.java:328)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadSnapshotDiffSection(FSImageFormatPBSnapshot.java:192)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:254)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:168)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:208)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:906)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:892)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:715)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:653)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:276)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:882)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:629)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:498)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:554)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6886) Use single editlog record for creating file + overwrite.

2014-08-21 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106518#comment-14106518
 ] 

Vinayakumar B commented on HDFS-6886:
-

I have one comment

{code}if (overwrite) {
  // To remove a file, we need to check 'w' permission of parent
  checkParentAccess(pc, src, FsAction.WRITE);{code}
Here the explicit permission check for the delete is not required, as it would 
already be covered by the check below.
{code}if (isPermissionEnabled) {
  if (overwrite && myFile != null) {
checkPathAccess(pc, src, FsAction.WRITE);
  } else {
checkAncestorAccess(pc, src, FsAction.WRITE);
  }
}{code}

> Use single editlog record for creating file + overwrite.
> 
>
> Key: HDFS-6886
> URL: https://issues.apache.org/jira/browse/HDFS-6886
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Critical
> Attachments: HDFS-6886.001.patch, HDFS-6886.002.patch, editsStored
>
>
> As discussed in HDFS-6871, per [~jingzhao] and [~cmccabe]'s suggestion, we 
> could make a further improvement in this JIRA by using a single editlog record 
> for creating a file with overwrite: the overwrite flag would be recorded in the 
> editlog record for the file creation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6908) incorrect snapshot directory diff generated by snapshot deletion

2014-08-21 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106516#comment-14106516
 ] 

Jing Zhao commented on HDFS-6908:
-

Yeah, I think that is necessary when deleting a snapshot. But when deleting a 
dir/file from the current fsdir, I guess it should be OK to place 
{{cleanSubtreeRecursively}} at the end.

> incorrect snapshot directory diff generated by snapshot deletion
> 
>
> Key: HDFS-6908
> URL: https://issues.apache.org/jira/browse/HDFS-6908
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Juan Yu
>Assignee: Juan Yu
>Priority: Critical
> Attachments: HDFS-6908.001.patch
>
>
> In the following scenario, deleting a snapshot can generate an incorrect snapshot 
> directory diff and a corrupted fsimage; if you restart the NN after that, you will 
> get a NullPointerException.
> 1. create a directory and create a file under it
> 2. take a snapshot
> 3. create another file under that directory
> 4. take a second snapshot
> 5. delete both files and the directory
> 6. delete the second snapshot
> An incorrect directory diff is generated, and restarting the NN throws an NPE:
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.addToDeletedList(FSImageFormatPBSnapshot.java:246)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDeletedList(FSImageFormatPBSnapshot.java:265)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDirectoryDiffList(FSImageFormatPBSnapshot.java:328)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadSnapshotDiffSection(FSImageFormatPBSnapshot.java:192)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:254)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:168)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:208)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:906)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:892)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:715)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:653)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:276)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:882)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:629)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:498)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:554)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6912) HDFS Short-circuit read implementation throws SIGBUS from misc.Unsafe usage

2014-08-21 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106508#comment-14106508
 ] 

Xiaoyu Yao commented on HDFS-6912:
--

The mmap() function maps a file or shared-memory object into the address space 
of a process. The JVM uses the MAP_NORESERVE flag to minimize the swap space 
needed. According to http://man7.org/linux/man-pages/man2/mmap.2.html, no swap 
space is reserved for a mapping when MAP_NORESERVE is used. When there is not 
enough swap space available in the system, as reported in this test, the write 
fails and a SIGBUS or SIGSEGV signal is delivered to the writing process, as 
expected. Increasing the swap space will work around the problem unless the 
JVM changes its mmap flags.
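
To illustrate the failure mode, here is a standalone sketch (not HDFS code): writing through a memory-mapped file whose backing tmpfs has run out of space makes the page fault fail, and the kernel delivers SIGBUS to the process.
{code}
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MmapSigbusDemo {
  public static void main(String[] args) throws Exception {
    // Assumes /dev/shm is a tmpfs with less than 64 MB free.
    long size = 64L * 1024 * 1024;
    try (RandomAccessFile raf = new RandomAccessFile("/dev/shm/demo.mmap", "rw")) {
      raf.setLength(size); // sparse file: no tmpfs pages are backed yet
      MappedByteBuffer buf =
          raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, size);
      for (long i = 0; i < size; i += 4096) {
        // Each write faults in a page; once tmpfs is exhausted the process
        // is killed with SIGBUS, mirroring the crash reported in this issue.
        buf.put((int) i, (byte) 1);
      }
    }
  }
}
{code}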

> HDFS Short-circuit read implementation throws SIGBUS from misc.Unsafe usage
> ---
>
> Key: HDFS-6912
> URL: https://issues.apache.org/jira/browse/HDFS-6912
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.5.0
> Environment: HDFS Data node, with 8 gb tmpfs in /dev/shm
>Reporter: Gopal V
>
> The short-circuit reader throws SIGBUS errors from Unsafe code and crashes 
> the JVM when tmpfs on a disk is depleted.
> {code}
> ---  T H R E A D  ---
> Current thread (0x7eff387df800):  JavaThread "xxx" daemon [_thread_in_vm, 
> id=5880, stack(0x7eff28b93000,0x7eff28c94000)]
> siginfo:si_signo=SIGBUS: si_errno=0, si_code=2 (BUS_ADRERR), 
> si_addr=0x7eff3e51d000
> {code}
> The entire backtrace of the JVM crash is
> {code}
> Stack: [0x7eff28b93000,0x7eff28c94000],  sp=0x7eff28c90a10,  free 
> space=1014k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> V  [libjvm.so+0x88232c]  Unsafe_GetLongVolatile+0x6c
> j  sun.misc.Unsafe.getLongVolatile(Ljava/lang/Object;J)J+0
> j  org.apache.hadoop.hdfs.ShortCircuitShm$Slot.setFlag(J)V+8
> j  org.apache.hadoop.hdfs.ShortCircuitShm$Slot.makeValid()V+4
> j  
> org.apache.hadoop.hdfs.ShortCircuitShm.allocAndRegisterSlot(Lorg/apache/hadoop/hdfs/ExtendedBlockId;)Lorg/apache/hadoop/hdfs/ShortCircuitShm$Slot;+70
> j  
> org.apache.hadoop.hdfs.client.DfsClientShmManager$EndpointShmManager.allocSlotFromExistingShm(Lorg/apache/hadoop/hdfs/ExtendedBlockId;)Lorg/apache/hadoop/hdfs/ShortCircuitShm$Slot;+38
> j  
> org.apache.hadoop.hdfs.client.DfsClientShmManager$EndpointShmManager.allocSlot(Lorg/apache/hadoop/hdfs/net/DomainPeer;Lorg/apache/commons/lang/mutable/MutableBoolean;Ljava/lang/String;Lorg/apache/hadoop/hdfs/ExtendedBlockId;)Lorg/apache/hadoop/hdfs/ShortCircuitShm$Slot;+100
> j  
> org.apache.hadoop.hdfs.client.DfsClientShmManager.allocSlot(Lorg/apache/hadoop/hdfs/protocol/DatanodeInfo;Lorg/apache/hadoop/hdfs/net/DomainPeer;Lorg/apache/commons/lang/mutable/MutableBoolean;Lorg/apache/hadoop/hdfs/ExtendedBlockId;Ljava/lang/String;)Lorg/apache/hadoop/hdfs/ShortCircuitShm$Slot;+102
> j  
> org.apache.hadoop.hdfs.client.ShortCircuitCache.allocShmSlot(Lorg/apache/hadoop/hdfs/protocol/DatanodeInfo;Lorg/apache/hadoop/hdfs/net/DomainPeer;Lorg/apache/commons/lang/mutable/MutableBoolean;Lorg/apache/hadoop/hdfs/ExtendedBlockId;Ljava/lang/String;)Lorg/apache/hadoop/hdfs/ShortCircuitShm$Slot;+18
> j  
> org.apache.hadoop.hdfs.BlockReaderFactory.createShortCircuitReplicaInfo()Lorg/apache/hadoop/hdfs/client/ShortCircuitReplicaInfo;+151
> j  
> org.apache.hadoop.hdfs.client.ShortCircuitCache.create(Lorg/apache/hadoop/hdfs/ExtendedBlockId;Lorg/apache/hadoop/hdfs/client/ShortCircuitCache$ShortCircuitReplicaCreator;Lorg/apache/hadoop/util/Waitable;)Lorg/apache/hadoop/hdfs/client/ShortCircuitReplicaInfo;+46
> j  
> org.apache.hadoop.hdfs.client.ShortCircuitCache.fetchOrCreate(Lorg/apache/hadoop/hdfs/ExtendedBlockId;Lorg/apache/hadoop/hdfs/client/ShortCircuitCache$ShortCircuitReplicaCreator;)Lorg/apache/hadoop/hdfs/client/ShortCircuitReplicaInfo;+230
> j  
> org.apache.hadoop.hdfs.BlockReaderFactory.getBlockReaderLocal()Lorg/apache/hadoop/hdfs/BlockReader;+175
> j  
> org.apache.hadoop.hdfs.BlockReaderFactory.build()Lorg/apache/hadoop/hdfs/BlockReader;+87
> j  
> org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(J)Lorg/apache/hadoop/hdfs/protocol/DatanodeInfo;+291
> j  
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(Lorg/apache/hadoop/hdfs/DFSInputStream$ReaderStrategy;II)I+83
> j  org.apache.hadoop.hdfs.DFSInputStream.read([BII)I+15
> {code}
> This can be easily reproduced by starting the DataNode, filling up tmpfs (dd 
> if=/dev/zero bs=1M of=/dev/shm/dummy.zero) and running a simple task.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6908) incorrect snapshot directory diff generated by snapshot deletion

2014-08-21 Thread Juan Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106507#comment-14106507
 ] 

Juan Yu commented on HDFS-6908:
---

[~jingzhao] Thanks for the new unit test and for explaining the difference.
I assumed that when deleting a directory recursively, all children would be added 
to the diff list, but that's not how the implementation works: the snapshot diff 
only records the directory deletion, so the fix you suggested is better.
One more question: I think what's really needed is to call 
{{cleanSubtreeRecursively}} before {{destroyCreatedList}}, isn't it?
{code}
+  counts.add(currentINode.cleanSubtreeRecursively(snapshot, prior,
+  collectedBlocks, removedINodes, priorDeleted, countDiffChange));
  // delete everything in created list
  DirectoryDiff lastDiff = diffs.getLast();
  if (lastDiff != null) {
 counts.add(lastDiff.diff.destroyCreatedList(currentINode,
 collectedBlocks, removedINodes));
   }
 } else {
   // update prior
   prior = getDiffs().updatePrior(snapshot, prior);
@@ -739,7 +741,10 @@ boolean computeDiffBetweenSnapshots(Snapshot fromSnapshot,
   
   counts.add(getDiffs().deleteSnapshotDiff(snapshot, prior,
   currentINode, collectedBlocks, removedINodes, countDiffChange));
-  
+
+  counts.add(currentINode.cleanSubtreeRecursively(snapshot, prior,
+  collectedBlocks, removedINodes, priorDeleted, countDiffChange));
+
   // check priorDiff again since it may be created during the diff deletion
   if (prior != Snapshot.NO_SNAPSHOT_ID) {
 DirectoryDiff priorDiff = this.getDiffs().getDiffById(prior);
@@ -778,9 +783,7 @@ boolean computeDiffBetweenSnapshots(Snapshot fromSnapshot,
 }
   }
 }
-counts.add(currentINode.cleanSubtreeRecursively(snapshot, prior,
-collectedBlocks, removedINodes, priorDeleted, countDiffChange));
-
+
 if (currentINode.isQuotaSet()) {
   currentINode.getDirectoryWithQuotaFeature().addSpaceConsumed2Cache(
   -counts.get(Quota.NAMESPACE), -counts.get(Quota.DISKSPACE));
{code}

> incorrect snapshot directory diff generated by snapshot deletion
> 
>
> Key: HDFS-6908
> URL: https://issues.apache.org/jira/browse/HDFS-6908
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Juan Yu
>Assignee: Juan Yu
>Priority: Critical
> Attachments: HDFS-6908.001.patch
>
>
> In the following scenario, deleting a snapshot can generate an incorrect snapshot 
> directory diff and a corrupted fsimage; if you restart the NN after that, you will 
> get a NullPointerException.
> 1. create a directory and create a file under it
> 2. take a snapshot
> 3. create another file under that directory
> 4. take a second snapshot
> 5. delete both files and the directory
> 6. delete the second snapshot
> An incorrect directory diff is generated, and restarting the NN throws an NPE:
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.addToDeletedList(FSImageFormatPBSnapshot.java:246)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDeletedList(FSImageFormatPBSnapshot.java:265)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDirectoryDiffList(FSImageFormatPBSnapshot.java:328)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadSnapshotDiffSection(FSImageFormatPBSnapshot.java:192)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:254)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:168)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:208)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:906)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:892)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:715)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:653)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:276)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:882)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:629)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:498)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:554)
> {code}

[jira] [Updated] (HDFS-6905) fs-encryption merge triggered release audit failures

2014-08-21 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HDFS-6905:
-

   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks, Charles. Committed to trunk.

> fs-encryption merge triggered release audit failures
> 
>
> Key: HDFS-6905
> URL: https://issues.apache.org/jira/browse/HDFS-6905
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Charles Lamb
>Priority: Blocker
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: HDFS-6905.001.patch
>
>
> Release audit is failing on three files since the merge of fs-encryption code 
> due to missing Apache license:
> * hdfs/protocol/EncryptionZoneWithId.java
> * hdfs/server/namenode/EncryptionFaultInjector.java
> * hdfs/server/namenode/EncryptionZoneManager.java
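
For reference, the remedy is to prepend the license header to each of the three files listed above; the standard ASF header used throughout the Hadoop sources looks like this (shown as a Java block comment; the exact whitespace in the committed patch may differ):
{code}
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
{code}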



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6905) fs-encryption merge triggered release audit failures

2014-08-21 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106475#comment-14106475
 ] 

Alejandro Abdelnur commented on HDFS-6905:
--

+1

> fs-encryption merge triggered release audit failures
> 
>
> Key: HDFS-6905
> URL: https://issues.apache.org/jira/browse/HDFS-6905
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Charles Lamb
>Priority: Blocker
>  Labels: newbie
> Attachments: HDFS-6905.001.patch
>
>
> Release audit is failing on three files since the merge of fs-encryption code 
> due to missing Apache license:
> * hdfs/protocol/EncryptionZoneWithId.java
> * hdfs/server/namenode/EncryptionFaultInjector.java
> * hdfs/server/namenode/EncryptionZoneManager.java



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6826) Plugin interface to enable delegation of HDFS authorization assertions

2014-08-21 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HDFS-6826:
-

Attachment: HDFS-6826v7.1.patch

Uploaded a patch taking care of the test-case failures (it needed a bit of 
defensive coding, as the test cases do not always initialize everything) and 
fixing the javadoc warning.

The audit warnings are unrelated (they come from the fs-encryption merge and 
are already being addressed).

> Plugin interface to enable delegation of HDFS authorization assertions
> --
>
> Key: HDFS-6826
> URL: https://issues.apache.org/jira/browse/HDFS-6826
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 2.4.1
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: HDFS-6826-idea.patch, HDFS-6826-idea2.patch, 
> HDFS-6826v3.patch, HDFS-6826v4.patch, HDFS-6826v5.patch, HDFS-6826v6.patch, 
> HDFS-6826v7.1.patch, HDFS-6826v7.patch, 
> HDFSPluggableAuthorizationProposal-v2.pdf, 
> HDFSPluggableAuthorizationProposal.pdf
>
>
> When Hbase data, HiveMetaStore data or Search data is accessed via services 
> (Hbase region servers, HiveServer2, Impala, Solr) the services can enforce 
> permissions on corresponding entities (databases, tables, views, columns, 
> search collections, documents). It is desirable, when the data is accessed 
> directly by users accessing the underlying data files (i.e. from a MapReduce 
> job), that the permission of the data files map to the permissions of the 
> corresponding data entity (i.e. table, column family or search collection).
> To enable this we need to have the necessary hooks in place in the NameNode 
> to delegate authorization to an external system that can map HDFS 
> files/directories to data entities and resolve their permissions based on the 
> data entities permissions.
> I’ll be posting a design proposal in the next few days.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6899) Allow changing MiniDFSCluster volumes per DN and capacity per volume

2014-08-21 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106457#comment-14106457
 ] 

Jing Zhao commented on HDFS-6899:
-

Thanks Arpit! +1 for the patch.

> Allow changing MiniDFSCluster volumes per DN and capacity per volume
> 
>
> Key: HDFS-6899
> URL: https://issues.apache.org/jira/browse/HDFS-6899
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, test
>Affects Versions: 2.5.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-6899.01.patch, HDFS-6899.02.patch, 
> HDFS-6899.03.patch, HDFS-6899.04.patch
>
>
> MiniDFSCluster hardcodes the number of directories per volume to two. We propose 
> removing the hard-coded restriction.
> It would be useful to limit the capacity of individual storage directories 
> for testing purposes. There is already a way to do so for SimulatedFSDataset; 
> we can add one for real volumes as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6899) Allow changing MiniDFSCluster volumes per DN and capacity per volume

2014-08-21 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-6899:


Attachment: HDFS-6899.04.patch

> Allow changing MiniDFSCluster volumes per DN and capacity per volume
> 
>
> Key: HDFS-6899
> URL: https://issues.apache.org/jira/browse/HDFS-6899
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, test
>Affects Versions: 2.5.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-6899.01.patch, HDFS-6899.02.patch, 
> HDFS-6899.03.patch, HDFS-6899.04.patch
>
>
> MiniDFSCluster hardcodes the number of directories per volume to two. We propose 
> removing the hard-coded restriction.
> It would be useful to limit the capacity of individual storage directories 
> for testing purposes. There is already a way to do so for SimulatedFSDataset; 
> we can add one for real volumes as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6913) Some source files miss Apache license header

2014-08-21 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-6913.
-

Resolution: Duplicate

> Some source files miss Apache license header
> 
>
> Key: HDFS-6913
> URL: https://issues.apache.org/jira/browse/HDFS-6913
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zhijie Shen
>
> EncryptionFaultInjector, EncryptionZoneManager, EncryptionZoneWithId miss 
> Apache license header.
> See: 
> https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4816//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6913) Some source files miss Apache license header

2014-08-21 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106450#comment-14106450
 ] 

Arpit Agarwal commented on HDFS-6913:
-

Dup of HDFS-6905.

> Some source files miss Apache license header
> 
>
> Key: HDFS-6913
> URL: https://issues.apache.org/jira/browse/HDFS-6913
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zhijie Shen
>
> EncryptionFaultInjector, EncryptionZoneManager, EncryptionZoneWithId miss 
> Apache license header.
> See: 
> https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4816//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HDFS-6899) Allow changing MiniDFSCluster volumes per DN and capacity per volume

2014-08-21 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106445#comment-14106445
 ] 

Arpit Agarwal edited comment on HDFS-6899 at 8/22/14 4:33 AM:
--

Yeah I updated the patch with an additional assert for the storageCapacities 
subarray. The rest of the cases should all be covered already with asserts. 
Thanks!


was (Author: arpitagarwal):
Yeah I added an additional assert for the storageCapacities subarray. The rest 
of the cases should all be covered already with asserts. Thanks!

> Allow changing MiniDFSCluster volumes per DN and capacity per volume
> 
>
> Key: HDFS-6899
> URL: https://issues.apache.org/jira/browse/HDFS-6899
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, test
>Affects Versions: 2.5.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-6899.01.patch, HDFS-6899.02.patch, 
> HDFS-6899.03.patch
>
>
> MiniDFSCluster hardcodes the number of directories per volume to two. We propose 
> removing the hard-coded restriction.
> It would be useful to limit the capacity of individual storage directories 
> for testing purposes. There is already a way to do so for SimulatedFSDataset; 
> we can add one for real volumes as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6899) Allow changing MiniDFSCluster volumes per DN and capacity per volume

2014-08-21 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-6899:


Attachment: HDFS-6899.03.patch

Yeah I added an additional assert for the storageCapacities subarray. The rest 
of the cases should all be covered already with asserts. Thanks!

> Allow changing MiniDFSCluster volumes per DN and capacity per volume
> 
>
> Key: HDFS-6899
> URL: https://issues.apache.org/jira/browse/HDFS-6899
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, test
>Affects Versions: 2.5.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-6899.01.patch, HDFS-6899.02.patch, 
> HDFS-6899.03.patch
>
>
> MiniDFSCluster hardcodes the number of directories per volume to two. We propose 
> removing the hard-coded restriction.
> It would be useful to limit the capacity of individual storage directories 
> for testing purposes. There is already a way to do so for SimulatedFSDataset; 
> we can add one for real volumes as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6829) DFSAdmin refreshSuperUserGroupsConfiguration failed in security cluster

2014-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106441#comment-14106441
 ] 

Hadoop QA commented on HDFS-6829:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12660077/HDFS-6829.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 3 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.security.TestRefreshUserMappings
  org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7715//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7715//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7715//console

This message is automatically generated.

> DFSAdmin refreshSuperUserGroupsConfiguration failed in security cluster
> ---
>
> Key: HDFS-6829
> URL: https://issues.apache.org/jira/browse/HDFS-6829
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.4.1
>Reporter: zhaoyunjiong
>Assignee: zhaoyunjiong
>Priority: Minor
> Attachments: HDFS-6829.patch
>
>
> When we run the command "hadoop dfsadmin -refreshSuperUserGroupsConfiguration", 
> it fails and reports the following message:
> 14/08/05 21:32:06 WARN security.MultiRealmUserAuthentication: The 
> serverPrincipal = doesn't confirm to the standards
> refreshSuperUserGroupsConfiguration: null
> After checking the code, I found the bug is triggered for the following reasons:
> 1. We didn't set 
> CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY, which is needed 
> by RefreshUserMappingsProtocol. In DFSAdmin, if no 
> CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY is set, it will 
> try to use DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY: 
> conf.set(CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY, 
> conf.get(DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY, ""));
> 2. But we set DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY in 
> hdfs-site.xml.
> 3. DFSAdmin didn't load hdfs-site.xml.
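
A minimal sketch of the kind of fix point 3 implies (assuming the tool only needs hdfs-site.xml on its Configuration; the actual patch may take a different route):
{code}
// HdfsConfiguration registers hdfs-default.xml and hdfs-site.xml as default
// resources, so the NameNode principal key becomes visible to the fallback.
Configuration conf = new HdfsConfiguration();
conf.set(CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY,
    conf.get(DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY, ""));
{code}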



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6913) Some source files miss Apache license header

2014-08-21 Thread Zhijie Shen (JIRA)
Zhijie Shen created HDFS-6913:
-

 Summary: Some source files miss Apache license header
 Key: HDFS-6913
 URL: https://issues.apache.org/jira/browse/HDFS-6913
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Zhijie Shen


EncryptionFaultInjector, EncryptionZoneManager, EncryptionZoneWithId miss 
Apache license header.

See: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4816//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6840) Clients are always sent to the same datanode when read is off rack

2014-08-21 Thread Ashwin Shankar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106428#comment-14106428
 ] 

Ashwin Shankar commented on HDFS-6840:
--

Since we are planning to make a 2.5.1 release, should this JIRA be made a 
blocker for 2.5.1?


> Clients are always sent to the same datanode when read is off rack
> --
>
> Key: HDFS-6840
> URL: https://issues.apache.org/jira/browse/HDFS-6840
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Jason Lowe
>Assignee: Andrew Wang
>Priority: Critical
>
> After HDFS-6268 the sorting order of block locations is deterministic for a 
> given block and locality level (e.g.: local, rack. off-rack), so off-rack 
> clients all see the same datanode for the same block.  This leads to very 
> poor behavior in distributed cache localization and other scenarios where 
> many clients all want the same block data at approximately the same time.  
> The one datanode is crushed by the load while the other replicas only handle 
> local and rack-local requests.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6886) Use single editlog record for creating file + overwrite.

2014-08-21 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6886:
-

Attachment: HDFS-6886.002.patch

Updated the patch to fix the failure of _TestFileCreation.testOverwriteOpenForWrite_.

TestOfflineEditsViewer passes with {{editsStored}};
TestJournal is unrelated and passes locally;
TestPipelinesFailover is unrelated.

> Use single editlog record for creating file + overwrite.
> 
>
> Key: HDFS-6886
> URL: https://issues.apache.org/jira/browse/HDFS-6886
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Critical
> Attachments: HDFS-6886.001.patch, HDFS-6886.002.patch, editsStored
>
>
> As discussed in HDFS-6871, per [~jingzhao] and [~cmccabe]'s suggestion, we 
> could make a further improvement in this JIRA by using a single editlog record 
> for creating a file with overwrite: the overwrite flag would be recorded in the 
> editlog record for the file creation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-2975) Rename with overwrite flag true can make NameNode to stuck in safemode on NN (crash + restart).

2014-08-21 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G reassigned HDFS-2975:
-

Assignee: Yi Liu  (was: Uma Maheswara Rao G)

Yes, Nicholas. I looked at the code. This should be fixed.

> Rename with overwrite flag true can make NameNode to stuck in safemode on NN 
> (crash + restart).
> ---
>
> Key: HDFS-2975
> URL: https://issues.apache.org/jira/browse/HDFS-2975
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.24.0
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
>
> When we rename a file with the overwrite flag set to true, the destination 
> file's blocks are deleted. After deleting the blocks, whenever the 
> fsNameSystem lock is released, the NN can hand the invalidation work to the 
> corresponding DNs to delete the blocks.
> In parallel, it syncs the rename-related edits to the editlog file. If the NN 
> crashes at this step, before the edits are synced, it can get stuck in 
> safemode on restart. This is because the blocks were already deleted from the 
> DNs as part of the invalidations, but the dst file still exists since the 
> rename edits were not persisted in the log file, and no DN will report those 
> blocks now.
> This is similar to HDFS-2815
>  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6871) Improve NameNode performance when creating file

2014-08-21 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106392#comment-14106392
 ] 

Vinayakumar B commented on HDFS-6871:
-

Thanks [~umamahesh] for pointing to Jira.

> Improve NameNode performance when creating file  
> -
>
> Key: HDFS-6871
> URL: https://issues.apache.org/jira/browse/HDFS-6871
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, performance
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Critical
> Fix For: 2.6.0
>
> Attachments: HDFS-6871.001.patch, HDFS-6871.002.patch, 
> HDFS-6871.003.patch
>
>
> Creating a file with the overwrite flag causes the NN to flush edit logs and 
> block other requests if the file already exists.
> When we create a file with the overwrite flag (default is true) in HDFS, the NN 
> removes the original file if it exists. In FSNamesystem#startFileInternal the NN 
> already holds the write lock when it calls {{deleteInt}} for the existing file, 
> and {{deleteInt}} contains a logSync. So in this case logSync runs under the 
> write lock, which heavily affects NN performance. 
> We should skip the forced logSync in {{deleteInt}} in this case.
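
A sketch of the general lock/sync ordering being discussed (illustrative only, using FSNamesystem-style names; the actual patch may differ):
{code}
// Record the edits while holding the namesystem write lock, but defer the
// expensive sync until after the lock is released so other RPCs aren't
// blocked behind the edit-log flush.
writeLock();
try {
  // ... delete the existing file and log the create/delete edits (no sync) ...
} finally {
  writeUnlock();
}
getEditLog().logSync(); // flush the edits outside the write lock
{code}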



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6905) fs-encryption merge triggered release audit failures

2014-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106391#comment-14106391
 ] 

Hadoop QA commented on HDFS-6905:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12663416/HDFS-6905.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestEncryptionZones
  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
  
org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits

  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7708//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7708//console

This message is automatically generated.

> fs-encryption merge triggered release audit failures
> 
>
> Key: HDFS-6905
> URL: https://issues.apache.org/jira/browse/HDFS-6905
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Charles Lamb
>Priority: Blocker
>  Labels: newbie
> Attachments: HDFS-6905.001.patch
>
>
> Release audit is failing on three files since the merge of fs-encryption code 
> due to missing Apache license:
> * hdfs/protocol/EncryptionZoneWithId.java
> * hdfs/server/namenode/EncryptionFaultInjector.java
> * hdfs/server/namenode/EncryptionZoneManager.java



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6871) Improve NameNode performance when creating file

2014-08-21 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106389#comment-14106389
 ] 

Uma Maheswara Rao G commented on HDFS-6871:
---

We have one HDFS-2975

> Improve NameNode performance when creating file  
> -
>
> Key: HDFS-6871
> URL: https://issues.apache.org/jira/browse/HDFS-6871
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, performance
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Critical
> Fix For: 2.6.0
>
> Attachments: HDFS-6871.001.patch, HDFS-6871.002.patch, 
> HDFS-6871.003.patch
>
>
> Creating a file with the overwrite flag causes the NN to flush edit logs and 
> block other requests if the file already exists.
> When we create a file with the overwrite flag (default is true) in HDFS, the NN 
> removes the original file if it exists. In FSNamesystem#startFileInternal the NN 
> already holds the write lock when it calls {{deleteInt}} for the existing file, 
> and {{deleteInt}} contains a logSync. So in this case logSync runs under the 
> write lock, which heavily affects NN performance. 
> We should skip the forced logSync in {{deleteInt}} in this case.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6871) Improve NameNode performance when creating file

2014-08-21 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106388#comment-14106388
 ] 

Vinayakumar B commented on HDFS-6871:
-

Similar to create+overwrite, we need to delete blocks after logSync in the case 
of rename with overwrite as well. Maybe this can be filed as a separate JIRA.

> Improve NameNode performance when creating file  
> -
>
> Key: HDFS-6871
> URL: https://issues.apache.org/jira/browse/HDFS-6871
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, performance
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Critical
> Fix For: 2.6.0
>
> Attachments: HDFS-6871.001.patch, HDFS-6871.002.patch, 
> HDFS-6871.003.patch
>
>
> Creating a file with the overwrite flag causes the NN to flush edit logs and 
> block other requests if the file already exists.
> When we create a file with the overwrite flag (default is true) in HDFS, the NN 
> removes the original file if it exists. In FSNamesystem#startFileInternal the NN 
> already holds the write lock when it calls {{deleteInt}} for the existing file, 
> and {{deleteInt}} contains a logSync. So in this case logSync runs under the 
> write lock, which heavily affects NN performance. 
> We should skip the forced logSync in {{deleteInt}} in this case.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6581) Write to single replica in memory

2014-08-21 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106386#comment-14106386
 ] 

Gopal V commented on HDFS-6581:
---

Fair enough, the choice of backing layer should not be enforced by this patch.

In that respect, the huge advantage of this particular approach is that the 
HDFS mediatype [RAM] can now refer to both ramfs and tmpfs transparently 
according to admin configuration.

As far as tmpfs goes, it was used primarily because HDFS-4949 already has a 
hard-dependency on tmpfs for handling short-circuit reads (the skip-checksums 
Andrew was talking about is negotiated over tmpfs today).

I found "/dev/shm" references in HDFS code recently and hit issues with that - 
filed HDFS-6912, now that I remembered.

> Write to single replica in memory
> -
>
> Key: HDFS-6581
> URL: https://issues.apache.org/jira/browse/HDFS-6581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFSWriteableReplicasInMemory.pdf
>
>
> Per discussion with the community on HDFS-5851, we will implement writing to 
> a single replica in DN memory via DataTransferProtocol.
> This avoids some of the issues with short-circuit writes, which we can 
> revisit at a later time.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-4486) Add log category for long-running DFSClient notices

2014-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106383#comment-14106383
 ] 

Hadoop QA commented on HDFS-4486:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12663539/hdfs-4486-20140821-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestReservedRawPaths
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots
  org.apache.hadoop.hdfs.TestSetrepIncreasing
  org.apache.hadoop.hdfs.TestModTime
  org.apache.hadoop.hdfs.security.TestDelegationToken
  org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead
  org.apache.hadoop.hdfs.TestDisableConnCache
  org.apache.hadoop.hdfs.server.namenode.TestEditLogAutoroll
  
org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints
  org.apache.hadoop.hdfs.TestConnCache
  org.apache.hadoop.hdfs.TestDFSClientRetries
  org.apache.hadoop.hdfs.TestSetrepDecreasing
  org.apache.hadoop.hdfs.server.datanode.TestDiskError
  org.apache.hadoop.hdfs.server.namenode.TestHostsFiles
  org.apache.hadoop.hdfs.server.datanode.TestTransferRbw
  org.apache.hadoop.hdfs.server.namenode.TestFileContextAcl
  org.apache.hadoop.hdfs.server.namenode.TestDeleteRace
  org.apache.hadoop.hdfs.TestPread
  org.apache.hadoop.hdfs.server.namenode.TestLeaseManager
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotListing
  org.apache.hadoop.hdfs.server.datanode.TestStorageReport
  org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
  org.apache.hadoop.hdfs.TestReadWhileWriting
  org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache
  
org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock
  org.apache.hadoop.hdfs.server.namenode.ha.TestQuotasWithHA
  org.apache.hadoop.hdfs.server.namenode.ha.TestGetGroupsWithHA
  org.apache.hadoop.hdfs.server.namenode.TestAuditLogger
  
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS
  org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
  org.apache.hadoop.hdfs.TestBlocksScheduledCounter
  org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade
  
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
  org.apache.hadoop.hdfs.server.datanode.TestCachingStrategy
  org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby
  org.apache.hadoop.hdfs.server.datanode.TestDataNodeInitStorage
  org.apache.hadoop.hdfs.server.datanode.TestBlockReplacement
  
org.apache.hadoop.hdfs.server.blockmanagement.TestHeartbeatHandling
  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestInterDatanodeProtocol
  org.apache.hadoop.hdfs.security.token.block.TestBlockToken
  org.apache.hadoop.hdfs.server.namenode.ha.TestHAMetrics
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap
  org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount
  org.apache.hadoop.hdfs.TestFileAppend
  org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
  
org.apache.hadoop.hdfs.server.datanode.TestReadOnlyShar

[jira] [Created] (HDFS-6912) HDFS Short-circuit read implementation throws SIGBUS from misc.Unsafe usage

2014-08-21 Thread Gopal V (JIRA)
Gopal V created HDFS-6912:
-

 Summary: HDFS Short-circuit read implementation throws SIGBUS from 
misc.Unsafe usage
 Key: HDFS-6912
 URL: https://issues.apache.org/jira/browse/HDFS-6912
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: caching
Affects Versions: 2.5.0
 Environment: HDFS Data node, with 8 gb tmpfs in /dev/shm
Reporter: Gopal V


The short-circuit reader throws SIGBUS errors from Unsafe code and crashes the 
JVM when tmpfs on a disk is depleted.

{code}
---  T H R E A D  ---

Current thread (0x7eff387df800):  JavaThread "xxx" daemon [_thread_in_vm, 
id=5880, stack(0x7eff28b93000,0x7eff28c94000)]

siginfo:si_signo=SIGBUS: si_errno=0, si_code=2 (BUS_ADRERR), 
si_addr=0x7eff3e51d000
{code}

The entire backtrace of the JVM crash is

{code}
Stack: [0x7eff28b93000,0x7eff28c94000],  sp=0x7eff28c90a10,  free 
space=1014k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0x88232c]  Unsafe_GetLongVolatile+0x6c
j  sun.misc.Unsafe.getLongVolatile(Ljava/lang/Object;J)J+0
j  org.apache.hadoop.hdfs.ShortCircuitShm$Slot.setFlag(J)V+8
j  org.apache.hadoop.hdfs.ShortCircuitShm$Slot.makeValid()V+4
j  
org.apache.hadoop.hdfs.ShortCircuitShm.allocAndRegisterSlot(Lorg/apache/hadoop/hdfs/ExtendedBlockId;)Lorg/apache/hadoop/hdfs/ShortCircuitShm$Slot;+70
j  
org.apache.hadoop.hdfs.client.DfsClientShmManager$EndpointShmManager.allocSlotFromExistingShm(Lorg/apache/hadoop/hdfs/ExtendedBlockId;)Lorg/apache/hadoop/hdfs/ShortCircuitShm$Slot;+38
j  
org.apache.hadoop.hdfs.client.DfsClientShmManager$EndpointShmManager.allocSlot(Lorg/apache/hadoop/hdfs/net/DomainPeer;Lorg/apache/commons/lang/mutable/MutableBoolean;Ljava/lang/String;Lorg/apache/hadoop/hdfs/ExtendedBlockId;)Lorg/apache/hadoop/hdfs/ShortCircuitShm$Slot;+100
j  
org.apache.hadoop.hdfs.client.DfsClientShmManager.allocSlot(Lorg/apache/hadoop/hdfs/protocol/DatanodeInfo;Lorg/apache/hadoop/hdfs/net/DomainPeer;Lorg/apache/commons/lang/mutable/MutableBoolean;Lorg/apache/hadoop/hdfs/ExtendedBlockId;Ljava/lang/String;)Lorg/apache/hadoop/hdfs/ShortCircuitShm$Slot;+102
j  
org.apache.hadoop.hdfs.client.ShortCircuitCache.allocShmSlot(Lorg/apache/hadoop/hdfs/protocol/DatanodeInfo;Lorg/apache/hadoop/hdfs/net/DomainPeer;Lorg/apache/commons/lang/mutable/MutableBoolean;Lorg/apache/hadoop/hdfs/ExtendedBlockId;Ljava/lang/String;)Lorg/apache/hadoop/hdfs/ShortCircuitShm$Slot;+18
j  
org.apache.hadoop.hdfs.BlockReaderFactory.createShortCircuitReplicaInfo()Lorg/apache/hadoop/hdfs/client/ShortCircuitReplicaInfo;+151
j  
org.apache.hadoop.hdfs.client.ShortCircuitCache.create(Lorg/apache/hadoop/hdfs/ExtendedBlockId;Lorg/apache/hadoop/hdfs/client/ShortCircuitCache$ShortCircuitReplicaCreator;Lorg/apache/hadoop/util/Waitable;)Lorg/apache/hadoop/hdfs/client/ShortCircuitReplicaInfo;+46
j  
org.apache.hadoop.hdfs.client.ShortCircuitCache.fetchOrCreate(Lorg/apache/hadoop/hdfs/ExtendedBlockId;Lorg/apache/hadoop/hdfs/client/ShortCircuitCache$ShortCircuitReplicaCreator;)Lorg/apache/hadoop/hdfs/client/ShortCircuitReplicaInfo;+230
j  
org.apache.hadoop.hdfs.BlockReaderFactory.getBlockReaderLocal()Lorg/apache/hadoop/hdfs/BlockReader;+175
j  
org.apache.hadoop.hdfs.BlockReaderFactory.build()Lorg/apache/hadoop/hdfs/BlockReader;+87
j  
org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(J)Lorg/apache/hadoop/hdfs/protocol/DatanodeInfo;+291
j  
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(Lorg/apache/hadoop/hdfs/DFSInputStream$ReaderStrategy;II)I+83
j  org.apache.hadoop.hdfs.DFSInputStream.read([BII)I+15
{code}

This can be easily reproduced by starting the DataNode, filling up tmpfs (dd 
if=/dev/zero bs=1M of=/dev/shm/dummy.zero) and running a simple task.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6827) Both NameNodes stuck in STANDBY state due to HealthMonitor not aware of the target's status changing sometimes

2014-08-21 Thread Zesheng Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106364#comment-14106364
 ] 

Zesheng Wu commented on HDFS-6827:
--

Thanks [~vinayrpet], I will try the scenario in the latest trunk code soon.

> Both NameNodes stuck in STANDBY state due to HealthMonitor not aware of the 
> target's status changing sometimes
> --
>
> Key: HDFS-6827
> URL: https://issues.apache.org/jira/browse/HDFS-6827
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.4.1
>Reporter: Zesheng Wu
>Assignee: Zesheng Wu
>Priority: Critical
> Attachments: HDFS-6827.1.patch
>
>
> In our production cluster, we encountered a scenario like this: the ANN crashed 
> due to a write journal timeout and was restarted by the watchdog automatically, 
> but after restarting, both of the NNs were standby.
> Following are the logs of the scenario:
> # NN1 is down due to write journal timeout:
> {color:red}2014-08-03,23:02:02,219{color} INFO 
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG
> # ZKFC1 detected "connection reset by peer"
> {color:red}2014-08-03,23:02:02,560{color} ERROR 
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException 
> as:xx@xx.HADOOP (auth:KERBEROS) cause:java.io.IOException: 
> {color:red}Connection reset by peer{color}
> # NN1 was restarted successfully by the watchdog:
> 2014-08-03,23:02:07,884 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> Web-server up at: xx:13201
> 2014-08-03,23:02:07,884 INFO org.apache.hadoop.ipc.Server: IPC Server 
> Responder: starting
> {color:red}2014-08-03,23:02:07,884{color} INFO org.apache.hadoop.ipc.Server: 
> IPC Server listener on 13200: starting
> 2014-08-03,23:02:08,742 INFO org.apache.hadoop.ipc.Server: RPC server clean 
> thread started!
> 2014-08-03,23:02:08,743 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> Registered DFSClientInformation MBean
> 2014-08-03,23:02:08,744 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> NameNode up at: xx/xx:13200
> 2014-08-03,23:02:08,744 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services 
> required for standby state
> # ZKFC1 retried the connection and considered NN1 was healthy
> {color:red}2014-08-03,23:02:08,292{color} INFO org.apache.hadoop.ipc.Client: 
> Retrying connect to server: xx/xx:13200. Already tried 0 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1 
> SECONDS)
> # ZKFC1 still considered NN1 a healthy active NN and didn't trigger the 
> failover; as a result, both NNs were standby.
> The root cause of this bug is that the NN is restarted too quickly and the ZKFC 
> health monitor doesn't realize it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6888) Remove audit logging of getFIleInfo()

2014-08-21 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106362#comment-14106362
 ] 

Gera Shegalov commented on HDFS-6888:
-

[~kihwal], I am +1 for making some commands debug level, so we have an option 
to capture them in the logs and remove them.
[~airbots], how about making the list of DEBUG-level commands configurable via 
a csv list for conf.getTrimmedStrings instead of hardcoding it as in v2 of the 
patch?
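
To make that concrete, a minimal sketch of the csv-based approach (the config key 
and class name below are invented placeholders, not existing HDFS code):
{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;

/** Sketch only: csv-configured set of commands to audit-log at DEBUG level. */
class AuditDebugCommands {
  // Hypothetical key, shown for illustration.
  static final String AUDIT_DEBUG_CMDS_KEY = "dfs.namenode.audit.log.debug.cmdlist";

  private final Set<String> debugCmds;

  AuditDebugCommands(Configuration conf) {
    // getTrimmedStrings splits the comma-separated value and trims whitespace;
    // an unset key yields an empty array, so every command stays at INFO.
    debugCmds = new HashSet<String>(
        Arrays.asList(conf.getTrimmedStrings(AUDIT_DEBUG_CMDS_KEY)));
  }

  /** True if the given command (e.g. "getfileinfo") should be logged at DEBUG. */
  boolean isDebugCommand(String cmd) {
    return debugCmds.contains(cmd);
  }
}
{code}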


> Remove audit logging of getFIleInfo()
> -
>
> Key: HDFS-6888
> URL: https://issues.apache.org/jira/browse/HDFS-6888
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Kihwal Lee
>Assignee: Chen He
>  Labels: log
> Attachments: HDFS-6888-2.patch, HDFS-6888.patch
>
>
> The audit logging of getFileInfo() was added in HDFS-3733.  Since this is one 
> of the most frequently called methods, users have noticed that the audit log is 
> now filled with it.  Since we now have HTTP request logging, this seems 
> unnecessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6905) fs-encryption merge triggered release audit failures

2014-08-21 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106359#comment-14106359
 ] 

Chen He commented on HDFS-6905:
---

+1, lgtm

> fs-encryption merge triggered release audit failures
> 
>
> Key: HDFS-6905
> URL: https://issues.apache.org/jira/browse/HDFS-6905
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Charles Lamb
>Priority: Blocker
>  Labels: newbie
> Attachments: HDFS-6905.001.patch
>
>
> Release audit is failing on three files since the merge of fs-encryption code 
> due to missing Apache license:
> * hdfs/protocol/EncryptionZoneWithId.java
> * hdfs/server/namenode/EncryptionFaultInjector.java
> * hdfs/server/namenode/EncryptionZoneManager.java



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6886) Use single editlog record for creating file + overwrite.

2014-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106353#comment-14106353
 ] 

Hadoop QA commented on HDFS-6886:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12663393/HDFS-6886.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 3 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.qjournal.server.TestJournal
  org.apache.hadoop.hdfs.TestFileCreation
  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
  
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7705//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7705//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7705//console

This message is automatically generated.

> Use single editlog record for creating file + overwrite.
> 
>
> Key: HDFS-6886
> URL: https://issues.apache.org/jira/browse/HDFS-6886
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Critical
> Attachments: HDFS-6886.001.patch, editsStored
>
>
> As discussed in HDFS-6871, as [~jingzhao] and [~cmccabe]'s suggestion, we 
> could do further improvement to use one editlog record for creating file + 
> overwrite in this JIRA. We could record the overwrite flag in editlog for 
> creating file.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6729) Support maintenance mode for DN

2014-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106354#comment-14106354
 ] 

Hadoop QA commented on HDFS-6729:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12663545/HDFS-6729.000.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 3 
release audit warnings.

{color:red}-1 core tests{color}.  The test build failed in 
hadoop-hdfs-project/hadoop-hdfs 

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7714//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7714//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7714//console

This message is automatically generated.

> Support maintenance mode for DN
> ---
>
> Key: HDFS-6729
> URL: https://issues.apache.org/jira/browse/HDFS-6729
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-6729.000.patch
>
>
> Some maintenance work (e.g., upgrading RAM or adding disks) on a DataNode only 
> takes a short amount of time (e.g., 10 minutes). In these cases, the users do 
> not want missing blocks to be reported for this DN because the DN will be back 
> online shortly without data loss. Thus, we need a maintenance mode for a DN so 
> that maintenance work can be carried out on the DN without having to 
> decommission it or have the DN marked as dead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6581) Write to single replica in memory

2014-08-21 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106349#comment-14106349
 ] 

Colin Patrick McCabe commented on HDFS-6581:


bq. If ramfs ate 90% of memory, would a YARN task die, or would it just get its 
heap swapped?

Depends on how much memory 10% of memory is, whether swap is enabled, etc. etc.

The point I was trying to make is that if 90% of ram goes missing, you will 
notice.  Whereas if tmpfs starts swapping, you may not notice, unless you're 
paying careful attention to performance.  tmpfs doesn't have the predictability 
that ramfs does.

> Write to single replica in memory
> -
>
> Key: HDFS-6581
> URL: https://issues.apache.org/jira/browse/HDFS-6581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFSWriteableReplicasInMemory.pdf
>
>
> Per discussion with the community on HDFS-5851, we will implement writing to 
> a single replica in DN memory via DataTransferProtocol.
> This avoids some of the issues with short-circuit writes, which we can 
> revisit at a later time.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6910) Initial prototype implementation for replicas in memory using tmpfs

2014-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106346#comment-14106346
 ] 

Hadoop QA commented on HDFS-6910:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12663501/HDFS-6910.01.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 23 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to cause Findbugs 
(version 2.0.3) to fail.

{color:red}-1 release audit{color}.  The applied patch generated 3 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-httpfs:

  org.apache.hadoop.cli.TestCLI

  The test build failed in 
hadoop-hdfs-project/hadoop-hdfs-httpfs hadoop-hdfs-project/hadoop-hdfs 

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7712//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7712//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7712//console

This message is automatically generated.

> Initial prototype implementation for replicas in memory using tmpfs
> ---
>
> Key: HDFS-6910
> URL: https://issues.apache.org/jira/browse/HDFS-6910
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-6910.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6865) Byte array native checksumming on client side (HDFS changes)

2014-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106344#comment-14106344
 ] 

Hadoop QA commented on HDFS-6865:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12663419/HDFS-6865.4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 3 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.balancer.TestBalancer
  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

  The following test timeouts occurred in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots
org.apache.hadoop.hdfs.server.namenode.TestBlockUnderConstruction

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7707//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7707//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7707//console

This message is automatically generated.

> Byte array native checksumming on client side (HDFS changes)
> 
>
> Key: HDFS-6865
> URL: https://issues.apache.org/jira/browse/HDFS-6865
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, performance
>Reporter: James Thomas
>Assignee: James Thomas
> Attachments: HDFS-6865.2.patch, HDFS-6865.3.patch, HDFS-6865.4.patch, 
> HDFS-6865.patch
>
>
> Refactor FSOutputSummer to buffer data and use the native checksum 
> calculation functionality introduced in HADOOP-10975.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6888) Remove audit logging of getFIleInfo()

2014-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106334#comment-14106334
 ] 

Hadoop QA commented on HDFS-6888:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12663508/HDFS-6888-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 3 
release audit warnings.

{color:red}-1 core tests{color}.  The test build failed in 
hadoop-hdfs-project/hadoop-hdfs 

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7710//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7710//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7710//console

This message is automatically generated.

> Remove audit logging of getFIleInfo()
> -
>
> Key: HDFS-6888
> URL: https://issues.apache.org/jira/browse/HDFS-6888
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Kihwal Lee
>Assignee: Chen He
>  Labels: log
> Attachments: HDFS-6888-2.patch, HDFS-6888.patch
>
>
> The audit logging of getFileInfo() was added in HDFS-3733.  Since this is one 
> of the most frequently called methods, users have noticed that the audit log is 
> now filled with it.  Since we now have HTTP request logging, this seems 
> unnecessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6826) Plugin interface to enable delegation of HDFS authorization assertions

2014-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106314#comment-14106314
 ] 

Hadoop QA commented on HDFS-6826:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12663254/HDFS-6826v7.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.
See 
https://builds.apache.org/job/PreCommit-HDFS-Build/7709//artifact/trunk/patchprocess/diffJavadocWarnings.txt
 for details.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 3 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots
  org.apache.hadoop.hdfs.server.namenode.TestStorageRestore
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotManager
  org.apache.hadoop.hdfs.server.namenode.TestNameNodeRecovery
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap
  org.apache.hadoop.hdfs.server.namenode.TestEditLogRace
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots
  org.apache.hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
  org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot
  org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
  org.apache.hadoop.hdfs.server.namenode.TestSaveNamespace
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
  org.apache.hadoop.hdfs.server.namenode.TestFsLimits
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestXAttrWithSnapshot
  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
  
org.apache.hadoop.hdfs.server.namenode.TestValidateConfigurationSettings
  
org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog
  org.apache.hadoop.hdfs.server.namenode.TestStartup
  org.apache.hadoop.hdfs.server.namenode.TestFSPermissionChecker

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7709//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7709//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7709//console

This message is automatically generated.

> Plugin interface to enable delegation of HDFS authorization assertions
> --
>
> Key: HDFS-6826
> URL: https://issues.apache.org/jira/browse/HDFS-6826
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 2.4.1
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: HDFS-6826-idea.patch, HDFS-6826-idea2.patch, 
> HDFS-6826v3.patch, HDFS-6826v4.patch, HDFS-6826v5.patch, HDFS-6826v6.patch, 
> HDFS-6826v7.patch, HDFSPluggableAuthorizationProposal-v2.pdf, 
> HDFSPluggableAuthorizationProposal.pdf
>
>
> When Hbase data, HiveMetaStore data or Search data is accessed via services 
> (Hbase region servers, HiveServer2, Impala, Solr) the services can enforce 
> permissions on corresponding entities (databases, tables, views, columns, 
> search collections, documents). It is desirable, when the data is accessed 
> directly by users accessing the underlying data files (i.e. from a MapReduce 
> job), that the permission of the data files map to the permissions of the 
> corresponding data entity (i.e. table, column family or search collection).
> To enable this we need to have the necessary hooks in place in the Nam

[jira] [Commented] (HDFS-6581) Write to single replica in memory

2014-08-21 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106308#comment-14106308
 ] 

Gopal V commented on HDFS-6581:
---

bq. ramfs: causes applications to be aborted with OOM errors.

If ramfs ate 90% of memory, would a YARN task die, or would it just get its heap 
swapped?

> Write to single replica in memory
> -
>
> Key: HDFS-6581
> URL: https://issues.apache.org/jira/browse/HDFS-6581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFSWriteableReplicasInMemory.pdf
>
>
> Per discussion with the community on HDFS-5851, we will implement writing to 
> a single replica in DN memory via DataTransferProtocol.
> This avoids some of the issues with short-circuit writes, which we can 
> revisit at a later time.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6581) Write to single replica in memory

2014-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106302#comment-14106302
 ] 

Hadoop QA commented on HDFS-6581:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12661926/HDFSWriteableReplicasInMemory.pdf
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7713//console

This message is automatically generated.

> Write to single replica in memory
> -
>
> Key: HDFS-6581
> URL: https://issues.apache.org/jira/browse/HDFS-6581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFSWriteableReplicasInMemory.pdf
>
>
> Per discussion with the community on HDFS-5851, we will implement writing to 
> a single replica in DN memory via DataTransferProtocol.
> This avoids some of the issues with short-circuit writes, which we can 
> revisit at a later time.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-3689) Add support for variable length block

2014-08-21 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106298#comment-14106298
 ] 

Colin Patrick McCabe commented on HDFS-3689:


So, there are a few use-cases for variable-length blocks that we've kicked 
around in the past:

* Simpler implementation of append and pipeline recovery.  We could just start 
a new block and forget about the old blocks.  genstamp can go away, as well as 
all the pipeline recovery code and replica state machine.  Replicas are then 
either finalized or not, like in the original Hadoop versions.

* Make hdfsConcat fully generic, rather than requiring N-1 of the files being 
concatted to be exactly 1 block long like now.  This would make that call a lot 
more useful.  (Implemented above by Jing)

* Some file formats really, really want to have block-aligned records.  This is 
natural if you want to have one node process a set of records... you don't want 
"torn" records that span multiple datanodes.  Apache Parquet is certainly one 
of these formats; I think ORCFile is too.  Right now these file formats need to 
accept "torn" records or add padding.  I guess sparse files could make the 
padding less inefficient.

Disadvantages of variable-length blocks:

* As Doug pointed out, MapReduce InputFormats that use the number of blocks to 
decide on a good data split won't work too well.  I wonder how much effort it 
would take to convert these to take block length into account? (A rough sketch 
of that idea follows below.)

* Other applications may also be assuming fixed block sizes, although our APIs 
have never technically guaranteed that.
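
A rough, hypothetical sketch of that idea (VariableLengthSplitter is an invented 
name; it simply builds one split per block from the lengths the NameNode reports, 
instead of assuming a fixed block size):
{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

/** Sketch only: one split per block, using each block's reported length. */
class VariableLengthSplitter {
  static List<FileSplit> splitsForFile(FileSystem fs, Path path) throws IOException {
    FileStatus stat = fs.getFileStatus(path);
    BlockLocation[] blocks = fs.getFileBlockLocations(stat, 0, stat.getLen());
    List<FileSplit> splits = new ArrayList<FileSplit>();
    for (BlockLocation b : blocks) {
      // Each split covers exactly one block, however long that block happens to be.
      splits.add(new FileSplit(path, b.getOffset(), b.getLength(), b.getHosts()));
    }
    return splits;
  }
}
{code}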

> Add support for variable length block
> -
>
> Key: HDFS-3689
> URL: https://issues.apache.org/jira/browse/HDFS-3689
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs-client, namenode
>Affects Versions: 3.0.0
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: HDFS-3689.000.patch, HDFS-3689.001.patch
>
>
> Currently HDFS supports fixed length blocks. Supporting variable length block 
> will allow new use cases and features to be built on top of HDFS. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6376) Distcp data between two HA clusters requires another configuration

2014-08-21 Thread Craig Condit (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106301#comment-14106301
 ] 

Craig Condit commented on HDFS-6376:


Tested a slightly modified version of the patch against 2.2.0. Verified that 
distcp between two secure HA clusters works as expected.

> Distcp data between two HA clusters requires another configuration
> --
>
> Key: HDFS-6376
> URL: https://issues.apache.org/jira/browse/HDFS-6376
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, federation, hdfs-client
>Affects Versions: 2.3.0, 2.4.0
> Environment: Hadoop 2.3.0
>Reporter: Dave Marion
>Assignee: Dave Marion
> Fix For: 3.0.0
>
> Attachments: HDFS-6376-2.patch, HDFS-6376-3-branch-2.4.patch, 
> HDFS-6376-4-branch-2.4.patch, HDFS-6376-5-trunk.patch, 
> HDFS-6376-6-trunk.patch, HDFS-6376-7-trunk.patch, HDFS-6376-branch-2.4.patch, 
> HDFS-6376-patch-1.patch
>
>
> User has to create a third set of configuration files for distcp when 
> transferring data between two HA clusters.
> Consider the scenario in [1]. You cannot put all of the required properties 
> in core-site.xml and hdfs-site.xml for the client to resolve the location of 
> both active namenodes. If you do, then the datanodes from cluster A may join 
> cluster B. I can not find a configuration option that tells the datanodes to 
> federate blocks for only one of the clusters in the configuration.
> [1] 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201404.mbox/%3CBAY172-W2133964E0C283968C161DD1520%40phx.gbl%3E



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6376) Distcp data between two HA clusters requires another configuration

2014-08-21 Thread Craig Condit (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Craig Condit updated HDFS-6376:
---

Affects Version/s: 2.2.0

> Distcp data between two HA clusters requires another configuration
> --
>
> Key: HDFS-6376
> URL: https://issues.apache.org/jira/browse/HDFS-6376
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, federation, hdfs-client
>Affects Versions: 2.2.0, 2.3.0, 2.4.0
> Environment: Hadoop 2.3.0
>Reporter: Dave Marion
>Assignee: Dave Marion
> Fix For: 3.0.0
>
> Attachments: HDFS-6376-2.patch, HDFS-6376-3-branch-2.4.patch, 
> HDFS-6376-4-branch-2.4.patch, HDFS-6376-5-trunk.patch, 
> HDFS-6376-6-trunk.patch, HDFS-6376-7-trunk.patch, HDFS-6376-branch-2.4.patch, 
> HDFS-6376-patch-1.patch
>
>
> User has to create a third set of configuration files for distcp when 
> transferring data between two HA clusters.
> Consider the scenario in [1]. You cannot put all of the required properties 
> in core-site.xml and hdfs-site.xml for the client to resolve the location of 
> both active namenodes. If you do, then the datanodes from cluster A may join 
> cluster B. I can not find a configuration option that tells the datanodes to 
> federate blocks for only one of the clusters in the configuration.
> [1] 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201404.mbox/%3CBAY172-W2133964E0C283968C161DD1520%40phx.gbl%3E



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6888) Remove audit logging of getFIleInfo()

2014-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106295#comment-14106295
 ] 

Hadoop QA commented on HDFS-6888:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12663508/HDFS-6888-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 3 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
  org.apache.hadoop.hdfs.server.namenode.TestAuditLogger
  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
  org.apache.hadoop.hdfs.TestDecommission

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7703//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7703//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7703//console

This message is automatically generated.

> Remove audit logging of getFIleInfo()
> -
>
> Key: HDFS-6888
> URL: https://issues.apache.org/jira/browse/HDFS-6888
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Kihwal Lee
>Assignee: Chen He
>  Labels: log
> Attachments: HDFS-6888-2.patch, HDFS-6888.patch
>
>
> The audit logging of getFileInfo() was added in HDFS-3733.  Since this is one 
> of the most frequently called methods, users have noticed that the audit log is 
> now filled with it.  Since we now have HTTP request logging, this seems 
> unnecessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6867) For DFSOutputStream, do pipeline recovery for a single block in the background

2014-08-21 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106282#comment-14106282
 ] 

Colin Patrick McCabe commented on HDFS-6867:


Re: approach #2 in the design doc: there is some discussion of variable-length 
blocks ongoing at HDFS-3689.

> For DFSOutputStream, do pipeline recovery for a single block in the background
> --
>
> Key: HDFS-6867
> URL: https://issues.apache.org/jira/browse/HDFS-6867
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.0.0-alpha
>Reporter: Colin Patrick McCabe
> Attachments: HDFS-6867-design-20140820.pdf
>
>
> For DFSOutputStream, we should be able to do pipeline recovery in the 
> background, while the user is continuing to write to the file.  This is 
> especially useful for long-lived clients that write to an HDFS file slowly. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HDFS-6908) incorrect snapshot directory diff generated by snapshot deletion

2014-08-21 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106272#comment-14106272
 ] 

Jing Zhao edited comment on HDFS-6908 at 8/22/14 1:03 AM:
--

Thanks for the response, [~j...@cloudera.com].

bq. so there are create/delete pair operations for those files.

The challenge here is that we cannot guarantee we always have the create/delete 
pair. Imagine the deletion happens on the directory while the creation happens 
on a file under the directory. Then we cannot depend on the snapshot diff 
combination to clean the file. The following unit test (based on your original 
test case) demos the scenario (but with your patch the following test will hit 
another exception before the leaking check):
{code}
  @Test (timeout=60000)
  public void testDeleteSnapshot() throws Exception {
final Path root = new Path("/");

Path dir = new Path("/dir1");
Path file1 = new Path(dir, "file1");
DFSTestUtil.createFile(hdfs, file1, BLOCKSIZE, REPLICATION, seed);

hdfs.allowSnapshot(root);
hdfs.createSnapshot(root, "s1");

Path file2 = new Path(dir, "file2");
DFSTestUtil.createFile(hdfs, file2, BLOCKSIZE, REPLICATION, seed);
INodeFile file2Node = fsdir.getINode(file2.toString()).asFile();
long file2NodeId = file2Node.getId();

hdfs.createSnapshot(root, "s2");

// delete directory
assertTrue(hdfs.delete(dir, true));
assertNotNull(fsdir.getInode(file2NodeId));

// delete second snapshot
hdfs.deleteSnapshot(root, "s2");
assertTrue(fsdir.getInode(file2NodeId) == null);

NameNodeAdapter.enterSafeMode(cluster.getNameNode(), false);
NameNodeAdapter.saveNamespace(cluster.getNameNode());

// restart NN
cluster.restartNameNodes();
  }
{code}



was (Author: jingzhao):
Thanks for the response, [~j...@cloudera.com].

bq. so there are create/delete pair operations for those files.

The challenge here is that we cannot guarantee we always have the create/delete 
pair here. Imagine the deletion happens on the directory while the creation 
happens on a file under the directory. Then we cannot depend on the snapshot 
diff combination to clean the file. The following unit test (based on your 
original test case) demos the scenario (but with your patch the following test 
will fail before the leaking check):
{code}
  @Test (timeout=60000)
  public void testDeleteSnapshot() throws Exception {
final Path root = new Path("/");

Path dir = new Path("/dir1");
Path file1 = new Path(dir, "file1");
DFSTestUtil.createFile(hdfs, file1, BLOCKSIZE, REPLICATION, seed);

hdfs.allowSnapshot(root);
hdfs.createSnapshot(root, "s1");

Path file2 = new Path(dir, "file2");
DFSTestUtil.createFile(hdfs, file2, BLOCKSIZE, REPLICATION, seed);
INodeFile file2Node = fsdir.getINode(file2.toString()).asFile();
long file2NodeId = file2Node.getId();

hdfs.createSnapshot(root, "s2");

// delete directory
assertTrue(hdfs.delete(dir, true));
assertNotNull(fsdir.getInode(file2NodeId));

// delete second snapshot
hdfs.deleteSnapshot(root, "s2");
assertTrue(fsdir.getInode(file2NodeId) == null);

NameNodeAdapter.enterSafeMode(cluster.getNameNode(), false);
NameNodeAdapter.saveNamespace(cluster.getNameNode());

// restart NN
cluster.restartNameNodes();
  }
{code}


> incorrect snapshot directory diff generated by snapshot deletion
> 
>
> Key: HDFS-6908
> URL: https://issues.apache.org/jira/browse/HDFS-6908
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Juan Yu
>Assignee: Juan Yu
>Priority: Critical
> Attachments: HDFS-6908.001.patch
>
>
> In the following scenario, delete snapshot could generate incorrect snapshot 
> directory diff and corrupted fsimage, if you restart NN after that, you will 
> get NullPointerException.
> 1. create a directory and create a file under it
> 2. take a snapshot
> 3. create another file under that directory
> 4. take second snapshot
> 5. delete both files and the directory
> 6. delete second snapshot
> incorrect directory diff will be generated.
> Restart NN will throw NPE
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.addToDeletedList(FSImageFormatPBSnapshot.java:246)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDeletedList(FSImageFormatPBSnapshot.java:265)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDirectoryDiffList(FSImageFormatPBSnapshot.java:328)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadSnapshotDiffSectio

[jira] [Commented] (HDFS-6908) incorrect snapshot directory diff generated by snapshot deletion

2014-08-21 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106272#comment-14106272
 ] 

Jing Zhao commented on HDFS-6908:
-

Thanks for the response, [~j...@cloudera.com].

bq. so there are create/delete pair operations for those files.

The challenge here is that we cannot guarantee we always have the create/delete 
pair here. Imagine the deletion happens on the directory while the creation 
happens on a file under the directory. Then we cannot depend on the snapshot 
diff combination to clean the file. The following unit test (based on your 
original test case) demos the scenario (but with your patch the following test 
will fail before the leaking check):
{code}
  @Test (timeout=60000)
  public void testDeleteSnapshot() throws Exception {
final Path root = new Path("/");

Path dir = new Path("/dir1");
Path file1 = new Path(dir, "file1");
DFSTestUtil.createFile(hdfs, file1, BLOCKSIZE, REPLICATION, seed);

hdfs.allowSnapshot(root);
hdfs.createSnapshot(root, "s1");

Path file2 = new Path(dir, "file2");
DFSTestUtil.createFile(hdfs, file2, BLOCKSIZE, REPLICATION, seed);
INodeFile file2Node = fsdir.getINode(file2.toString()).asFile();
long file2NodeId = file2Node.getId();

hdfs.createSnapshot(root, "s2");

// delete directory
assertTrue(hdfs.delete(dir, true));
assertNotNull(fsdir.getInode(file2NodeId));

// delete second snapshot
hdfs.deleteSnapshot(root, "s2");
assertTrue(fsdir.getInode(file2NodeId) == null);

NameNodeAdapter.enterSafeMode(cluster.getNameNode(), false);
NameNodeAdapter.saveNamespace(cluster.getNameNode());

// restart NN
cluster.restartNameNodes();
  }
{code}


> incorrect snapshot directory diff generated by snapshot deletion
> 
>
> Key: HDFS-6908
> URL: https://issues.apache.org/jira/browse/HDFS-6908
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Juan Yu
>Assignee: Juan Yu
>Priority: Critical
> Attachments: HDFS-6908.001.patch
>
>
> In the following scenario, delete snapshot could generate incorrect snapshot 
> directory diff and corrupted fsimage, if you restart NN after that, you will 
> get NullPointerException.
> 1. create a directory and create a file under it
> 2. take a snapshot
> 3. create another file under that directory
> 4. take second snapshot
> 5. delete both files and the directory
> 6. delete second snapshot
> incorrect directory diff will be generated.
> Restart NN will throw NPE
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.addToDeletedList(FSImageFormatPBSnapshot.java:246)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDeletedList(FSImageFormatPBSnapshot.java:265)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDirectoryDiffList(FSImageFormatPBSnapshot.java:328)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadSnapshotDiffSection(FSImageFormatPBSnapshot.java:192)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:254)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:168)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:208)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:906)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:892)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:715)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:653)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:276)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:882)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:629)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:498)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:554)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6581) Write to single replica in memory

2014-08-21 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106267#comment-14106267
 ] 

Colin Patrick McCabe commented on HDFS-6581:


The key difference between tmpfs and ramfs is that unprivileged users can't be 
allowed write access to ramfs, since you can trivially fill up the entire 
memory by writing to ramfs.  tmpfs has a kernel-enforced size limit, and 
swapping.  Since the design outlined here doesn't require giving unprivileged 
users write access to the temporary area, it is compatible with *both* tmpfs 
and ramfs.

bq. I do prefer tmpfs because the OS caps tmpfs usage at the configured size, so 
the failure case is safer (DiskOutOfSpace instead of exhausting all RAM). Swap is 
not as much of a concern since it is usually disabled.

I can think of two cases where we might run out of memory:
1. The user configures the DN to use so much memory for cache that there is not 
enough memory to run other programs.

ramfs: causes applications to be aborted with OOM errors.
tmpfs: degrades performance to very slow levels by swapping out our "cached" 
files.

An OOM error is easy to diagnose.  Sluggish performance is not.  The ramfs 
behavior is better than the tmpfs behavior.

2. There is a bug in the DataNode causing it to try to cache more than it 
should.

ramfs: causes applications to be aborted with OOM errors.
tmpfs: degrades performance to very slow levels by swapping out our "cached" 
files.

The bug is easy to find when using ramfs, hard to find with tmpfs.

So I would say, tmpfs is always worse for us.  Swapping is just not something 
we ever want, and memory limits are something we enforce ourselves, so tmpfs's 
features don't help us.

bq. Agreed that plain LRU would be a poor choice. Perhaps a hybrid of MRU+LRU 
would be a good option. i.e. evict the most recently read replica, unless there 
are replicas older than some threshold, in which case evict the LRU one. The 
assumption being that a client is unlikely to reread from a recently read 
replica.

Yeah, we'll need some benchmarking on this probably.

bq. Yes I reviewed the former, it looks interesting with eviction in mind. I'll 
create a subtask to investigate eviction via truncate.

Yeah, thanks for the review on HDFS-6750.  As Todd pointed out, we probably 
want to give clients some warning before the truncate in HDFS-6581, just like 
we do with HDFS-4949 and the munlock...

bq. The DataNode does not create the RAM disk since we cannot require root. An 
administrator will have to configure the partition.

Yeah, that makes sense.  Similarly, for HDFS-4949, the administrator must set 
the ulimit for the DataNode before caching can work.



> Write to single replica in memory
> -
>
> Key: HDFS-6581
> URL: https://issues.apache.org/jira/browse/HDFS-6581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFSWriteableReplicasInMemory.pdf
>
>
> Per discussion with the community on HDFS-5851, we will implement writing to 
> a single replica in DN memory via DataTransferProtocol.
> This avoids some of the issues with short-circuit writes, which we can 
> revisit at a later time.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-4486) Add log category for long-running DFSClient notices

2014-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106255#comment-14106255
 ] 

Hadoop QA commented on HDFS-4486:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12663511/HDFS-4486-20140821.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl
  org.apache.hadoop.hdfs.TestSafeMode
  org.apache.hadoop.hdfs.TestHFlush
  org.apache.hadoop.hdfs.TestModTime
  
org.apache.hadoop.hdfs.server.datanode.TestReadOnlySharedStorage
  
org.apache.hadoop.hdfs.server.datanode.TestIncrementalBlockReports
  org.apache.hadoop.hdfs.TestReservedRawPaths
  org.apache.hadoop.hdfs.web.TestHttpsFileSystem
  
org.apache.hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation
  org.apache.hadoop.hdfs.TestBlocksScheduledCounter
  
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
  org.apache.hadoop.hdfs.TestAbandonBlock
  org.apache.hadoop.hdfs.TestSetTimes
  org.apache.hadoop.hdfs.TestDFSFinalize
  org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
  org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
  org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount
  org.apache.hadoop.hdfs.TestAppendDifferentChecksum
  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart
  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
  
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration
  org.apache.hadoop.hdfs.TestHDFSFileSystemContract
  
org.apache.hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport
  org.apache.hadoop.hdfs.TestDFSShell
  org.apache.hadoop.hdfs.TestMissingBlocksAlert
  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes
  org.apache.hadoop.hdfs.server.namenode.TestGenericJournalConf
  org.apache.hadoop.hdfs.TestFileCreationClient
  org.apache.hadoop.hdfs.TestClientReportBadBlock
  org.apache.hadoop.hdfs.web.TestWebHDFS
  org.apache.hadoop.hdfs.TestSmallBlock
  org.apache.hadoop.hdfs.TestHdfsAdmin
  
org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus
  
org.apache.hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork
  org.apache.hadoop.hdfs.web.TestWebHdfsTokens
  org.apache.hadoop.hdfs.server.datanode.TestTransferRbw
  org.apache.hadoop.hdfs.TestFileCreation
  org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
  
org.apache.hadoop.hdfs.server.datanode.TestBlockHasMultipleReplicasOnSameDN
  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestInterDatanodeProtocol
  org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
  org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics
  
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForAcl
  org.apache.hadoop.hdfs.TestRenameWhileOpen
  org.apache.hadoop.hdfs.TestDFSClientRetries
  org.apache.hadoop.hdfs.server.namenode.TestNameNodeXAttr
  org.apache.hadoop.hdfs.TestSnapshotCommands
  org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA
  org.apache.hadoop.hdfs.TestL

[jira] [Commented] (HDFS-6581) Write to single replica in memory

2014-08-21 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106245#comment-14106245
 ] 

Andrew Wang commented on HDFS-6581:
---

I also took a look at the doc, seems pretty reasonable. Had a few questions as 
well.

* Related to Colin's point about configuring separate pools of memory on the 
DN, I'd really like to see integration with the cache pools from HDFS-4949. 
Memory is ideally shareable between HDFS and YARN, and cache pools were 
designed with that in mind. Simple storage quotas do not fit as well.
* Quotas are also a very rigid policy and can result in under-utilization. 
Cache pools are more flexible, and can be extended to support fair share and 
more complex policies. Avoiding underutilization seems especially important for 
a limited resource like memory.
* Do you have any benchmarks? For the read side, we found checksum overhead to 
be substantial, essentially the cost of a copy. If we use tmpfs, it can swap, 
so we're forced to calculate checksums at both write and read time. My guess is 
also that a normal 1-replication write will be fairly fast because of the OS 
buffer cache, so it'd be nice to quantify the potential improvement.
* There's a mention of LAZY_PERSIST having a config option to unlink corrupt 
TMP files. It seems better for this to be per-file rather than NN-wide, since 
different clients might want different behavior.
* 5.2.2 lists a con of mmaped files as not having control over page writeback. 
Is this actually true when using mlock? Also not sure why memory pressure is 
worse with mmaped files compared to tmpfs. mmap might make eviction+SCR nicer 
too, since you can just drop the mlocks if you want to evict, and the client 
has a hope of falling back gracefully.

HSM-related questions
* Caveat, I'm not sure what the HSM APIs will look like, or how this will be 
integrated, so some of these might be out of scope.
* Will we support changing a file from DISK storage type to TMP storage type? I 
would say no, since cache directives seem better for read caching when 
something is already on disk.
* Will we support writing a file on both TMP and another storage type? Similar 
to the above, it also doesn't feel that useful.

> Write to single replica in memory
> -
>
> Key: HDFS-6581
> URL: https://issues.apache.org/jira/browse/HDFS-6581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFSWriteableReplicasInMemory.pdf
>
>
> Per discussion with the community on HDFS-5851, we will implement writing to 
> a single replica in DN memory via DataTransferProtocol.
> This avoids some of the issues with short-circuit writes, which we can 
> revisit at a later time.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6899) Allow changing MiniDFSCluster volumes per DN and capacity per volume

2014-08-21 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106239#comment-14106239
 ] 

Jing Zhao commented on HDFS-6899:
-

Thanks for the update, [~arpitagarwal]! The new patch looks good to me. Some 
minor comments:
# In the following code, if {{Builder#numDataNodes}} and/or 
{{Builder#storagesPerDatanode}} is called after {{storageCapacities(long[])}}, 
then the size of the capacities may be wrong. I think we can remove 
{{storageCapacities(long[])}} here.
# Maybe we can add some code in the following segment to check the correctness 
of the size of {{storageCapacities}}.
{code}
+    if (storageCapacities != null) {
+      for (int i = curDatanodesNum; i < curDatanodesNum + numDataNodes; ++i) {
+        List<? extends FsVolumeSpi> volumes = dns[i].getFSDataset().getVolumes();
+        assert volumes.size() == storagesPerDatanode;
+
+        for (int j = 0; j < volumes.size(); ++j) {
+          FsVolumeImpl volume = (FsVolumeImpl) volumes.get(j);
+          volume.setCapacityForTesting(storageCapacities[i][j]);
+        }
+      }
+    }
{code}
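
For example, a check along these lines before the loop (just a sketch reusing the 
variable names from the snippet above):
{code}
if (storageCapacities != null) {
  assert storageCapacities.length == numDataNodes
      : "expected one capacity array per new DataNode";
  for (long[] perDatanode : storageCapacities) {
    assert perDatanode.length == storagesPerDatanode
        : "expected one capacity per storage directory";
  }
}
{code}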

> Allow changing MiniDFSCluster volumes per DN and capacity per volume
> 
>
> Key: HDFS-6899
> URL: https://issues.apache.org/jira/browse/HDFS-6899
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, test
>Affects Versions: 2.5.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-6899.01.patch, HDFS-6899.02.patch
>
>
> MiniDFSCluster hardcodes the number of directories per volume to two. Propose 
> removing the hard-coded restriction.
> It would be useful to limit the capacity of individual storage directories 
> for testing purposes. There is already a way to do so for SimulatedFSDataset, 
> we can add one when using real volumes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HDFS-6581) Write to single replica in memory

2014-08-21 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106232#comment-14106232
 ] 

Arpit Agarwal edited comment on HDFS-6581 at 8/22/14 12:24 AM:
---

Thank you for taking the time to look at the doc and provide feedback.

bq. The problem with using tmpfs is that the system could move the data to swap 
at any time. In addition to performance problems, this could cause correctness 
problems later when we read back the data from swap (i.e. from the hard disk). 
Since we don't want to verify checksums here, we should use a storage method 
that we know never touches the disk. Tachyon uses ramfs instead of tmpfs for 
this reason.
The implementation makes no assumptions about the underlying partition, whether it 
is tmpfs or ramfs. I think renaming TMPFS to RAM as Gopal suggested will avoid 
confusion. I do prefer tmpfs since the OS caps tmpfs usage at the configured 
size, so the failure case is safer (DiskOutOfSpace instead of exhausting all 
RAM). Swap is not as much of a concern since it is usually disabled.

bq. An LRU replacement policy isn't a good choice. It's very easy for a batch 
job to kick out everything in memory before it can ever be used again 
(thrashing). An LFU (least frequently used) policy would be much better. We'd 
have to keep usage statistics to implement this, but that doesn't seem too bad.
Agreed that plain LRU would be a poor choice. Perhaps a hybrid of MRU+LRU would 
be a good option, i.e., evict the most recently read replica unless there are 
replicas older than some threshold, in which case evict the LRU one. The 
assumption is that a client is unlikely to re-read a recently read replica.
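
A minimal sketch of that hybrid policy (illustrative only; the Replica type and 
field names are not from the design doc or any patch):
{code}
import java.util.List;

class HybridEvictionSketch {
  static class Replica {
    long lastReadTimeMs; // illustrative field
  }

  // If the least recently read replica is older than the threshold, evict it
  // (LRU); otherwise evict the most recently read one (MRU), on the assumption
  // that a client is unlikely to re-read a replica it just finished reading.
  static Replica chooseVictim(List<Replica> replicas, long nowMs,
      long oldThresholdMs) {
    if (replicas.isEmpty()) {
      return null;
    }
    Replica lru = replicas.get(0);
    Replica mru = replicas.get(0);
    for (Replica r : replicas) {
      if (r.lastReadTimeMs < lru.lastReadTimeMs) {
        lru = r;
      }
      if (r.lastReadTimeMs > mru.lastReadTimeMs) {
        mru = r;
      }
    }
    return (nowMs - lru.lastReadTimeMs > oldThresholdMs) ? lru : mru;
  }
}
{code}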

bq. You can effectively revoke access to a block file stored in ramfs or tmpfs 
by truncating that file to 0 bytes. The client can hang on to the file 
descriptor, but this doesn't keep any data bytes in memory. So we can move 
things out of the cache even if the clients are unresponsive. Also see 
HDFS-6750 and HDFS-6036 for examples of how we can ask the clients to stop 
using a short-circuit replica before tearing it down.
Yes I reviewed the former, it looks interesting with eviction in mind. I'll 
create a subtask to investigate eviction via truncate.
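
A minimal sketch of truncate-based revocation (illustrative only; the block file 
handling is a stand-in for the real DataNode code):
{code}
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

class ReplicaRevocationSketch {
  // Truncate the replica file to 0 bytes. Readers keep a valid file descriptor,
  // but the data pages can be reclaimed; a read() then sees EOF and an mmap read
  // would take SIGBUS, so clients must be told to stop reading first.
  static void revoke(File blockFile) throws IOException {
    try (RandomAccessFile raf = new RandomAccessFile(blockFile, "rw")) {
      raf.setLength(0);
    }
  }
}
{code}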

bq. How is the maximum tmpfs/ramfs size per datanode configured? I think we 
should use the existing dfs.datanode.max.locked.memory property to configure 
this, for consistency. System administrators should not need to configure 
separate pools of memory for HDFS-4949 and this feature. It should be one 
memory size.
bq. Related to that, we might want to rename dfs.datanode.max.locked.memory to 
dfs.data.node.max.cache.memory or something.
The DataNode does not create the RAM disk since we cannot require root. An 
administrator will have to configure the partition.


was (Author: arpitagarwal):
Thank you for taking the time to look at the doc and provide feedback.

bq. The problem with using tmpfs is that the system could move the data to swap 
at any time. In addition to performance problems, this could cause correctness 
problems later when we read back the data from swap (i.e. from the hard disk). 
Since we don't want to verify checksums here, we should use a storage method 
that we know never touches the disk. Tachyon uses ramfs instead of tmpfs for 
this reason.
The implementation makes no assumptions about the underlying platform, whether it 
is tmpfs or ramfs. I think renaming TMPFS to RAM as Gopal suggested will avoid 
confusion. I do prefer tmpfs since the OS caps tmpfs usage at the configured 
size, so the failure case is safer (DiskOutOfSpace instead of exhausting all 
RAM). Swap is not as much of a concern since it is usually disabled.

bq. An LRU replacement policy isn't a good choice. It's very easy for a batch 
job to kick out everything in memory before it can ever be used again 
(thrashing). An LFU (least frequently used) policy would be much better. We'd 
have to keep usage statistics to implement this, but that doesn't seem too bad.
Agreed that plain LRU would be a poor choice. Perhaps a hybrid of MRU+LRU would 
be a good option, i.e., evict the most recently read replica unless there are 
replicas older than some threshold, in which case evict the LRU one. The 
assumption is that a client is unlikely to re-read a recently read replica.

bq. You can effectively revoke access to a block file stored in ramfs or tmpfs 
by truncating that file to 0 bytes. The client can hang on to the file 
descriptor, but this doesn't keep any data bytes in memory. So we can move 
things out of the cache even if the clients are unresponsive. Also see 
HDFS-6750 and HDFS-6036 for examples of how we can ask the clients to stop 
using a short-circuit replica before tearing it down.
Yes I reviewed the former, it looks interesting with eviction in mind. I'll 
create a subtask to investigate eviction via truncate.

[jira] [Commented] (HDFS-6581) Write to single replica in memory

2014-08-21 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106232#comment-14106232
 ] 

Arpit Agarwal commented on HDFS-6581:
-

Thank you for taking the time to look at the doc and provide feedback.

bq. The problem with using tmpfs is that the system could move the data to swap 
at any time. In addition to performance problems, this could cause correctness 
problems later when we read back the data from swap (i.e. from the hard disk). 
Since we don't want to verify checksums here, we should use a storage method 
that we know never touches the disk. Tachyon uses ramfs instead of tmpfs for 
this reason.
The implementation makes no assumptions about the underlying platform, whether it 
is tmpfs or ramfs. I think renaming TMPFS to RAM as Gopal suggested will avoid 
confusion. I do prefer tmpfs since the OS caps tmpfs usage at the configured 
size, so the failure case is safer (DiskOutOfSpace instead of exhausting all 
RAM). Swap is not as much of a concern since it is usually disabled.

bq. An LRU replacement policy isn't a good choice. It's very easy for a batch 
job to kick out everything in memory before it can ever be used again 
(thrashing). An LFU (least frequently used) policy would be much better. We'd 
have to keep usage statistics to implement this, but that doesn't seem too bad.
Agreed that plain LRU would be a poor choice. Perhaps a hybrid of MRU+LRU would 
be a good option, i.e., evict the most recently read replica unless there are 
replicas older than some threshold, in which case evict the LRU one. The 
assumption is that a client is unlikely to re-read a recently read replica.

bq. You can effectively revoke access to a block file stored in ramfs or tmpfs 
by truncating that file to 0 bytes. The client can hang on to the file 
descriptor, but this doesn't keep any data bytes in memory. So we can move 
things out of the cache even if the clients are unresponsive. Also see 
HDFS-6750 and HDFS-6036 for examples of how we can ask the clients to stop 
using a short-circuit replica before tearing it down.
Yes I reviewed the former, it looks interesting with eviction in mind. I'll 
create a subtask to investigate eviction via truncate.

bq. How is the maximum tmpfs/ramfs size per datanode configured? I think we 
should use the existing dfs.datanode.max.locked.memory property to configure 
this, for consistency. System administrators should not need to configure 
separate pools of memory for HDFS-4949 and this feature. It should be one 
memory size.
bq. Related to that, we might want to rename dfs.datanode.max.locked.memory to 
dfs.data.node.max.cache.memory or something.
The DataNode does not create the RAM disk since we cannot require root. An 
administrator will have to configure the partition.

> Write to single replica in memory
> -
>
> Key: HDFS-6581
> URL: https://issues.apache.org/jira/browse/HDFS-6581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFSWriteableReplicasInMemory.pdf
>
>
> Per discussion with the community on HDFS-5851, we will implement writing to 
> a single replica in DN memory via DataTransferProtocol.
> This avoids some of the issues with short-circuit writes, which we can 
> revisit at a later time.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6908) incorrect snapshot directory diff generated by snapshot deletion

2014-08-21 Thread Juan Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106231#comment-14106231
 ] 

Juan Yu commented on HDFS-6908:
---

Thanks [~jingzhao].
Because the directory is deleted, any file created between the prior snapshot 
and the snapshot being deleted must have been deleted as well, so there are 
create/delete operation pairs for those files. The file diff processing part 
will add such files to the removedINodes list. When I debugged the fix, I saw 
the inodes for those files were deleted correctly, with no leak, and the 
intermediate create/delete file changes were cleaned up after combining the 
diff with the prior one as well.

{code}
} else if (topNode.isFile() && topNode.asFile().isWithSnapshot()) {
INodeFile file = topNode.asFile();
counts.add(file.getDiffs().deleteSnapshotDiff(post, prior, file,
collectedBlocks, removedINodes, countDiffChange));
{code}

> incorrect snapshot directory diff generated by snapshot deletion
> 
>
> Key: HDFS-6908
> URL: https://issues.apache.org/jira/browse/HDFS-6908
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Juan Yu
>Assignee: Juan Yu
>Priority: Critical
> Attachments: HDFS-6908.001.patch
>
>
> In the following scenario, delete snapshot could generate incorrect snapshot 
> directory diff and corrupted fsimage, if you restart NN after that, you will 
> get NullPointerException.
> 1. create a directory and create a file under it
> 2. take a snapshot
> 3. create another file under that directory
> 4. take second snapshot
> 5. delete both files and the directory
> 6. delete second snapshot
> incorrect directory diff will be generated.
> Restart NN will throw NPE
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.addToDeletedList(FSImageFormatPBSnapshot.java:246)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDeletedList(FSImageFormatPBSnapshot.java:265)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDirectoryDiffList(FSImageFormatPBSnapshot.java:328)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadSnapshotDiffSection(FSImageFormatPBSnapshot.java:192)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:254)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:168)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:208)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:906)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:892)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:715)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:653)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:276)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:882)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:629)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:498)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:554)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6892) Add XDR packaging method for each NFS request

2014-08-21 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-6892:
-

Attachment: HDFS-6892.001.patch

Uploaded a patch with some unit tests. The unit tests are not e2e but can 
validate the added methods.
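
For context, such a serialize method just packs the request fields in RFC 1813 
wire order. The sketch below is conceptual only (plain DataOutputStream rather 
than the project's XDR class, and the opaque padding is omitted):
{code}
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

class CommitXdrSketch {
  // Conceptual illustration of packing COMMIT3 arguments: handle, offset, count.
  static byte[] packCommitArgs(byte[] fileHandle, long offset, int count)
      throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bos);
    out.writeInt(fileHandle.length); // variable-length opaque: length prefix
    out.write(fileHandle);           // handle bytes (4-byte padding omitted here)
    out.writeLong(offset);           // offset3 (uint64, big-endian)
    out.writeInt(count);             // count3 (uint32)
    out.flush();
    return bos.toByteArray();
  }
}
{code}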

> Add XDR packaging method for each NFS request
> -
>
> Key: HDFS-6892
> URL: https://issues.apache.org/jira/browse/HDFS-6892
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-6892.001.patch
>
>
> This method can be used for unit tests.
> Most requests implement this by overriding the RequestWithHandle#serialize() 
> method. However, some request classes missed it, e.g., COMMIT3Request, 
> MKDIR3Request, READDIR3Request, READDIRPLUS3Request, 
> RMDIR3Request, REMOVE3Request, SETATTR3Request, SYMLINK3Request. 
> RENAME3Request is another example.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6892) Add XDR packaging method for each NFS request

2014-08-21 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-6892:
-

Affects Version/s: 2.2.0

> Add XDR packaging method for each NFS request
> -
>
> Key: HDFS-6892
> URL: https://issues.apache.org/jira/browse/HDFS-6892
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-6892.001.patch
>
>
> This method can be used for unit tests.
> Most requests implement this by overriding the RequestWithHandle#serialize() 
> method. However, some request classes missed it, e.g., COMMIT3Request, 
> MKDIR3Request, READDIR3Request, READDIRPLUS3Request, 
> RMDIR3Request, REMOVE3Request, SETATTR3Request, SYMLINK3Request. 
> RENAME3Request is another example.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6892) Add XDR packaging method for each NFS request

2014-08-21 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-6892:
-

Status: Patch Available  (was: Open)

> Add XDR packaging method for each NFS request
> -
>
> Key: HDFS-6892
> URL: https://issues.apache.org/jira/browse/HDFS-6892
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-6892.001.patch
>
>
> This method can be used for unit tests.
> Most requests implement this by overriding the RequestWithHandle#serialize() 
> method. However, some request classes missed it, e.g., COMMIT3Request, 
> MKDIR3Request, READDIR3Request, READDIRPLUS3Request, 
> RMDIR3Request, REMOVE3Request, SETATTR3Request, SYMLINK3Request. 
> RENAME3Request is another example.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-4257) The ReplaceDatanodeOnFailure policies could have a forgiving option

2014-08-21 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106213#comment-14106213
 ] 

Gregory Chanan commented on HDFS-4257:
--

Looking forward to this, it would definitely help us in Solr, similar to the 
description for Flume given in HDFS-5131.
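
For reference, the existing behavior is selected via client configuration; a 
minimal sketch is below (property names as in hdfs-default.xml; the more 
forgiving option itself is what this JIRA proposes to add):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

class ReplaceDatanodePolicySketch {
  static Configuration clientConf() {
    Configuration conf = new HdfsConfiguration();
    // Existing knobs: enable the feature and pick one of NEVER / DEFAULT / ALWAYS.
    conf.setBoolean(
        "dfs.client.block.write.replace-datanode-on-failure.enable", true);
    conf.set(
        "dfs.client.block.write.replace-datanode-on-failure.policy", "DEFAULT");
    return conf;
  }
}
{code}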

> The ReplaceDatanodeOnFailure policies could have a forgiving option
> ---
>
> Key: HDFS-4257
> URL: https://issues.apache.org/jira/browse/HDFS-4257
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs-client
>Affects Versions: 2.0.2-alpha
>Reporter: Harsh J
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h4257_20140325.patch, h4257_20140325b.patch, 
> h4257_20140326.patch, h4257_20140819.patch
>
>
> Similar question has previously come over HDFS-3091 and friends, but the 
> essential problem is: "Why can't I write to my cluster of 3 nodes, when I 
> just have 1 node available at a point in time.".
> The policies cover the 4 options, with {{Default}} being default:
> {{Disable}} -> Disables the whole replacement concept by throwing out an 
> error (at the server) or acts as {{Never}} at the client.
> {{Never}} -> Never replaces a DN upon pipeline failures (not too desirable in 
> many cases).
> {{Default}} -> Replace based on a few conditions, but whose minimum never 
> touches 1. We always fail if only one DN remains and none others can be added.
> {{Always}} -> Replace no matter what. Fail if can't replace.
> Would it not make sense to have an option similar to Always/Default, where 
> despite _trying_, if it isn't possible to have > 1 DN in the pipeline, do not 
> fail. I think that is what the former write behavior was, and what fit with 
> the minimum replication factor allowed value.
> Why is it grossly wrong to pass a write from a client for a block with just 1 
> remaining replica in the pipeline (the minimum of 1 grows with the 
> replication factor demanded from the write), when replication is taken care 
> of immediately afterwards? How often have we seen missing blocks arise out of 
> allowing this + facing a big rack(s) failure or so?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6581) Write to single replica in memory

2014-08-21 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106203#comment-14106203
 ] 

Gopal V commented on HDFS-6581:
---

Swap in modern kernels is checksummed with crc32c.

Re: RAMfs, the [tmpfs] mediatype applies to anything which shows up as a 
directory. Perhaps a rename would make it clearer.

This approach also allows someone to mount some other kind of volatile 
storage (or even a real "ramdisk") if they want to.

> Write to single replica in memory
> -
>
> Key: HDFS-6581
> URL: https://issues.apache.org/jira/browse/HDFS-6581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFSWriteableReplicasInMemory.pdf
>
>
> Per discussion with the community on HDFS-5851, we will implement writing to 
> a single replica in DN memory via DataTransferProtocol.
> This avoids some of the issues with short-circuit writes, which we can 
> revisit at a later time.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6863) Archival Storage: Support migration for snapshot paths

2014-08-21 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao resolved HDFS-6863.
-

  Resolution: Fixed
Hadoop Flags: Reviewed

I've committed this. Thanks for the review, Nicholas!

> Archival Storage: Support migration for snapshot paths
> --
>
> Key: HDFS-6863
> URL: https://issues.apache.org/jira/browse/HDFS-6863
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer, namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-6863.000.patch, HDFS-6863.001.patch
>
>
> Per discussion in HDFS-6801, we use this jira to support migrate files and 
> directories that only exist in snapshots (i.e., files/dirs that have been 
> deleted from the current fsdir).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6906) Archival Storage: Add more tests for BlockStoragePolicy

2014-08-21 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106161#comment-14106161
 ] 

Jing Zhao commented on HDFS-6906:
-

+1. I will commit it shortly.

> Archival Storage: Add more tests for BlockStoragePolicy
> ---
>
> Key: HDFS-6906
> URL: https://issues.apache.org/jira/browse/HDFS-6906
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h6906_20140821.patch
>
>
> The logic for choosing storage types is tricky, especially when there are 
> chosen storage types and/or unavailable storage types.  Let's add more unit 
> tests.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6800) Determine how Datanode layout changes should interact with rolling upgrade

2014-08-21 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106137#comment-14106137
 ] 

Colin Patrick McCabe commented on HDFS-6800:


Thanks for fixing the DataNode usage message.  That bugged me.  The overall 
strategy looks good.  Starting the DN with {{\-rollback}} matches how we handle 
starting the NN during rollback.

{code}
   private void doTransition(DataNode datanode, StorageDirectory sd,
   NamespaceInfo nsInfo, StartupOption startOpt) throws IOException {
-if (startOpt == StartupOption.ROLLBACK) {
+if (startOpt == StartupOption.ROLLBACK && sd.getPreviousDir().exists()) {
   doRollback(sd, nsInfo); // rollback if applicable
+  // we have already restored everything in the trash by rolling back to
+  // the previous directory, so we must delete the trash to ensure
+  // that it's not restored by BPOfferService.signalRollingUpgrade()
+  FileUtil.fullyDelete(getTrashRootDir(sd));
 } else {
{code}

What if the rename inside doRollback succeeds, but the deletion of the trash 
fails?

I think to avoid this, we should have a process like this (sketched below):
1. Rename trash to trash.old.
2. doRollback (renames previous to current, etc.).
3. If doRollback succeeded, delete trash.old; if it failed, rename trash.old 
back to trash.

This means using try/catch and/or checking return booleans as needed.
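
A rough sketch of that sequence (illustrative only; {{RollbackAction}} and the 
trash location are stand-ins for the real DataStorage code):
{code}
import java.io.File;
import java.io.IOException;
import org.apache.hadoop.fs.FileUtil;

class TrashRollbackSketch {
  interface RollbackAction {
    void doRollback() throws IOException;
  }

  static void rollbackWithTrashGuard(File trash, RollbackAction action)
      throws IOException {
    File trashOld = new File(trash.getParentFile(), trash.getName() + ".old");
    // 1. Set the trash aside so a partial failure cannot restore stale data later.
    if (trash.exists() && !trash.renameTo(trashOld)) {
      throw new IOException("Failed to rename " + trash + " to " + trashOld);
    }
    try {
      // 2. Roll back (renames previous/ to current/, etc.).
      action.doRollback();
      // 3a. Success: the saved trash is no longer needed.
      FileUtil.fullyDelete(trashOld);
    } catch (IOException e) {
      // 3b. Failure: put the trash back where it was.
      if (trashOld.exists() && !trashOld.renameTo(trash)) {
        throw new IOException("Rollback failed and trash could not be restored", e);
      }
      throw e;
    }
  }
}
{code}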

> Determine how Datanode layout changes should interact with rolling upgrade
> --
>
> Key: HDFS-6800
> URL: https://issues.apache.org/jira/browse/HDFS-6800
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Colin Patrick McCabe
>Assignee: James Thomas
> Attachments: HDFS-6800.2.patch, HDFS-6800.3.patch, HDFS-6800.4.patch, 
> HDFS-6800.patch
>
>
> We need to handle attempts to rolling-upgrade the DataNode to a new storage 
> directory layout.
> One approach is to disallow such upgrades.  If we choose this approach, we 
> should make sure that the system administrator gets a helpful error message 
> and a clean failure when trying to use rolling upgrade to a version that 
> doesn't support it.  Based on the compatibility guarantees described in 
> HDFS-5535, this would mean that *any* future DataNode layout changes would 
> require a major version upgrade.
> Another approach would be to support rolling upgrade from an old DN storage 
> layout to a new layout.  This approach requires us to change our 
> documentation to explain to users that they should supply the {{\-rollback}} 
> command on the command-line when re-starting the DataNodes during rolling 
> rollback.  Currently the documentation just says to restart the DataNode 
> normally.
> Another issue here is that the DataNode's usage message describes rollback 
> options that no longer exist.  The help text says that the DN supports 
> {{\-rollingupgrade rollback}}, but this option was removed by HDFS-6005.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6908) incorrect snapshot directory diff generated by snapshot deletion

2014-08-21 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-6908:


Priority: Critical  (was: Major)

> incorrect snapshot directory diff generated by snapshot deletion
> 
>
> Key: HDFS-6908
> URL: https://issues.apache.org/jira/browse/HDFS-6908
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Juan Yu
>Assignee: Juan Yu
>Priority: Critical
> Attachments: HDFS-6908.001.patch
>
>
> In the following scenario, delete snapshot could generate incorrect snapshot 
> directory diff and corrupted fsimage, if you restart NN after that, you will 
> get NullPointerException.
> 1. create a directory and create a file under it
> 2. take a snapshot
> 3. create another file under that directory
> 4. take second snapshot
> 5. delete both files and the directory
> 6. delete second snapshot
> incorrect directory diff will be generated.
> Restart NN will throw NPE
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.addToDeletedList(FSImageFormatPBSnapshot.java:246)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDeletedList(FSImageFormatPBSnapshot.java:265)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDirectoryDiffList(FSImageFormatPBSnapshot.java:328)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadSnapshotDiffSection(FSImageFormatPBSnapshot.java:192)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:254)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:168)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:208)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:906)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:892)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:715)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:653)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:276)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:882)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:629)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:498)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:554)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6908) incorrect snapshot directory diff generated by snapshot deletion

2014-08-21 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-6908:


Component/s: snapshots

> incorrect snapshot directory diff generated by snapshot deletion
> 
>
> Key: HDFS-6908
> URL: https://issues.apache.org/jira/browse/HDFS-6908
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Juan Yu
>Assignee: Juan Yu
>Priority: Critical
> Attachments: HDFS-6908.001.patch
>
>
> In the following scenario, delete snapshot could generate incorrect snapshot 
> directory diff and corrupted fsimage, if you restart NN after that, you will 
> get NullPointerException.
> 1. create a directory and create a file under it
> 2. take a snapshot
> 3. create another file under that directory
> 4. take second snapshot
> 5. delete both files and the directory
> 6. delete second snapshot
> incorrect directory diff will be generated.
> Restart NN will throw NPE
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.addToDeletedList(FSImageFormatPBSnapshot.java:246)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDeletedList(FSImageFormatPBSnapshot.java:265)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDirectoryDiffList(FSImageFormatPBSnapshot.java:328)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadSnapshotDiffSection(FSImageFormatPBSnapshot.java:192)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:254)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:168)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:208)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:906)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:892)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:715)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:653)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:276)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:882)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:629)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:498)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:554)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6581) Write to single replica in memory

2014-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106135#comment-14106135
 ] 

Hadoop QA commented on HDFS-6581:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12661926/HDFSWriteableReplicasInMemory.pdf
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7706//console

This message is automatically generated.

> Write to single replica in memory
> -
>
> Key: HDFS-6581
> URL: https://issues.apache.org/jira/browse/HDFS-6581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFSWriteableReplicasInMemory.pdf
>
>
> Per discussion with the community on HDFS-5851, we will implement writing to 
> a single replica in DN memory via DataTransferProtocol.
> This avoids some of the issues with short-circuit writes, which we can 
> revisit at a later time.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6908) incorrect snapshot directory diff generated by snapshot deletion

2014-08-21 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106133#comment-14106133
 ] 

Jing Zhao commented on HDFS-6908:
-

For the current patch, another comment is that we can move the new unit test to 
TestSnapshotDeletion.java, and call {{hdfs.delete(file1, true);}} instead of 
{{hdfs.delete(file1);}}.

> incorrect snapshot directory diff generated by snapshot deletion
> 
>
> Key: HDFS-6908
> URL: https://issues.apache.org/jira/browse/HDFS-6908
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Juan Yu
>Assignee: Juan Yu
> Attachments: HDFS-6908.001.patch
>
>
> In the following scenario, delete snapshot could generate incorrect snapshot 
> directory diff and corrupted fsimage, if you restart NN after that, you will 
> get NullPointerException.
> 1. create a directory and create a file under it
> 2. take a snapshot
> 3. create another file under that directory
> 4. take second snapshot
> 5. delete both files and the directory
> 6. delete second snapshot
> incorrect directory diff will be generated.
> Restart NN will throw NPE
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.addToDeletedList(FSImageFormatPBSnapshot.java:246)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDeletedList(FSImageFormatPBSnapshot.java:265)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDirectoryDiffList(FSImageFormatPBSnapshot.java:328)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadSnapshotDiffSection(FSImageFormatPBSnapshot.java:192)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:254)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:168)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:208)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:906)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:892)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:715)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:653)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:276)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:882)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:629)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:498)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:554)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6908) incorrect snapshot directory diff generated by snapshot deletion

2014-08-21 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106127#comment-14106127
 ] 

Jing Zhao commented on HDFS-6908:
-

Thanks for working on this, [~j...@cloudera.com]! Actually this is a case the 
current code fails to cover. Your analysis makes sense to me.

However, for the fix, if we only call dir.removeChild, the inodes that were 
created between the prior snapshot and the one being deleted will still be kept 
in the created list, which can cause a leak. Maybe a better way to fix this is 
to call {{cleanSubtreeRecursively}} before {{cleanDeletedINode}}:
{code}
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java
index 9893bba..a4f69f0 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java
@@ -722,6 +722,8 @@ boolean computeDiffBetweenSnapshots(Snapshot fromSnapshot,
 counts.add(lastDiff.diff.destroyCreatedList(currentINode,
 collectedBlocks, removedINodes));
   }
+  counts.add(currentINode.cleanSubtreeRecursively(snapshot, prior,
+  collectedBlocks, removedINodes, priorDeleted, countDiffChange));
 } else {
   // update prior
   prior = getDiffs().updatePrior(snapshot, prior);
@@ -739,7 +741,10 @@ boolean computeDiffBetweenSnapshots(Snapshot fromSnapshot,
   
   counts.add(getDiffs().deleteSnapshotDiff(snapshot, prior,
   currentINode, collectedBlocks, removedINodes, countDiffChange));
-  
+
+  counts.add(currentINode.cleanSubtreeRecursively(snapshot, prior,
+  collectedBlocks, removedINodes, priorDeleted, countDiffChange));
+
   // check priorDiff again since it may be created during the diff deletion
   if (prior != Snapshot.NO_SNAPSHOT_ID) {
 DirectoryDiff priorDiff = this.getDiffs().getDiffById(prior);
@@ -778,9 +783,7 @@ boolean computeDiffBetweenSnapshots(Snapshot fromSnapshot,
 }
   }
 }
-counts.add(currentINode.cleanSubtreeRecursively(snapshot, prior,
-collectedBlocks, removedINodes, priorDeleted, countDiffChange));
-
+
 if (currentINode.isQuotaSet()) {
   currentINode.getDirectoryWithQuotaFeature().addSpaceConsumed2Cache(
   -counts.get(Quota.NAMESPACE), -counts.get(Quota.DISKSPACE));
{code}

> incorrect snapshot directory diff generated by snapshot deletion
> 
>
> Key: HDFS-6908
> URL: https://issues.apache.org/jira/browse/HDFS-6908
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Juan Yu
>Assignee: Juan Yu
> Attachments: HDFS-6908.001.patch
>
>
> In the following scenario, delete snapshot could generate incorrect snapshot 
> directory diff and corrupted fsimage, if you restart NN after that, you will 
> get NullPointerException.
> 1. create a directory and create a file under it
> 2. take a snapshot
> 3. create another file under that directory
> 4. take second snapshot
> 5. delete both files and the directory
> 6. delete second snapshot
> incorrect directory diff will be generated.
> Restart NN will throw NPE
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.addToDeletedList(FSImageFormatPBSnapshot.java:246)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDeletedList(FSImageFormatPBSnapshot.java:265)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDirectoryDiffList(FSImageFormatPBSnapshot.java:328)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadSnapshotDiffSection(FSImageFormatPBSnapshot.java:192)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:254)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:168)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:208)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:906)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:892)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:715)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:653)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSIm

[jira] [Updated] (HDFS-6729) Support maintenance mode for DN

2014-08-21 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-6729:


Attachment: HDFS-6729.000.patch

Updated the patch to correct the file name.

> Support maintenance mode for DN
> ---
>
> Key: HDFS-6729
> URL: https://issues.apache.org/jira/browse/HDFS-6729
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-6729.000.patch
>
>
> Some maintenance work (e.g., upgrading RAM or adding disks) on a DataNode only 
> takes a short amount of time (e.g., 10 minutes). In these cases, the users do 
> not want to report missing blocks on this DN because the DN will be back online 
> shortly without data loss. Thus, we need a maintenance mode for a DN so that 
> maintenance work can be carried out on the DN without having to decommission 
> it or the DN being marked as dead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6729) Support maintenance mode for DN

2014-08-21 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-6729:


Attachment: (was: hadoop-6729.000.patch)

> Support maintenance mode for DN
> ---
>
> Key: HDFS-6729
> URL: https://issues.apache.org/jira/browse/HDFS-6729
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-6729.000.patch
>
>
> Some maintenance work (e.g., upgrading RAM or adding disks) on a DataNode only 
> takes a short amount of time (e.g., 10 minutes). In these cases, the users do 
> not want to report missing blocks on this DN because the DN will be back online 
> shortly without data loss. Thus, we need a maintenance mode for a DN so that 
> maintenance work can be carried out on the DN without having to decommission 
> it or the DN being marked as dead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-4486) Add log category for long-running DFSClient notices

2014-08-21 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-4486:


Attachment: hdfs-4486-20140821-2.patch

This new patch removes noisy messages.

> Add log category for long-running DFSClient notices
> ---
>
> Key: HDFS-4486
> URL: https://issues.apache.org/jira/browse/HDFS-4486
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Todd Lipcon
>Assignee: Zhe Zhang
>Priority: Minor
> Attachments: HDFS-4486-20140820.patch, HDFS-4486-20140821.patch, 
> hdfs-4486-20140821-2.patch
>
>
> There are a number of features in the DFS client which are transparent but 
> can make a fairly big difference for performance -- two in particular are 
> short circuit reads and native checksumming. Because we don't want log spew 
> for clients like "hadoop fs -cat" we currently log only at DEBUG level when 
> these features are disabled. This makes it difficult to troubleshoot/verify 
> for long-running perf-sensitive clients like HBase.
> One simple solution is to add a new log category - eg 
> o.a.h.h.DFSClient.PerformanceAdvisory - which long-running clients could 
> enable at DEBUG level without getting the full debug spew.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6863) Archival Storage: Support migration for snapshot paths

2014-08-21 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106090#comment-14106090
 ] 

Tsz Wo Nicholas Sze commented on HDFS-6863:
---

Filed HDFS-6911 for #2.

> Archival Storage: Support migration for snapshot paths
> --
>
> Key: HDFS-6863
> URL: https://issues.apache.org/jira/browse/HDFS-6863
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer, namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-6863.000.patch, HDFS-6863.001.patch
>
>
> Per discussion in HDFS-6801, we use this jira to support migrate files and 
> directories that only exist in snapshots (i.e., files/dirs that have been 
> deleted from the current fsdir).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6911) Archival Storage: check if a block is already scheduled in Mover

2014-08-21 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDFS-6911:
-

 Summary: Archival Storage: check if a block is already scheduled 
in Mover
 Key: HDFS-6911
 URL: https://issues.apache.org/jira/browse/HDFS-6911
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


Similar to balancer, Mover should remember all blocks already scheduled to move 
(movedBlocks). Then, check it before scheduling a new block move.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6863) Archival Storage: Support migration for snapshot paths

2014-08-21 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106080#comment-14106080
 ] 

Tsz Wo Nicholas Sze commented on HDFS-6863:
---

A file may be available in two or more snapshots.  Then, the block could be 
scheduled to be moved twice.  We may
# For files in a snapshot, also check whether the file exists in some later 
snapshot.  Only move a file in its last available snapshot/current state.  
However, this leads to O(n*m) checks, where n is the number of files and m is 
the number of snapshots.
# Similar to the balancer, remember all blocks already scheduled to move 
(movedBlocks), and check it before scheduling a new block move (a rough sketch 
follows below).

I guess we need #2.

+1 for the current patch.
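
A rough sketch of the bookkeeping for #2 (illustrative only, not the Mover code; 
similar in spirit to the balancer's movedBlocks):
{code}
import java.util.HashSet;
import java.util.Set;

class ScheduledBlocksSketch {
  private final Set<Long> scheduledBlockIds = new HashSet<Long>();

  // Returns false if a move for this block was already scheduled, so the caller
  // can skip scheduling the same block again (e.g., via another snapshot path).
  synchronized boolean markScheduled(long blockId) {
    return scheduledBlockIds.add(blockId);
  }
}
{code}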

> Archival Storage: Support migration for snapshot paths
> --
>
> Key: HDFS-6863
> URL: https://issues.apache.org/jira/browse/HDFS-6863
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer, namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-6863.000.patch, HDFS-6863.001.patch
>
>
> Per discussion in HDFS-6801, we use this jira to support migrate files and 
> directories that only exist in snapshots (i.e., files/dirs that have been 
> deleted from the current fsdir).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5135) Umbrella JIRA for NFS end to end unit test frameworks

2014-08-21 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106073#comment-14106073
 ] 

Zhe Zhang commented on HDFS-5135:
-

I created a new class under 
{{hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3}}
 named {{TestRPCMessagesInNFS}}. The current working copy is attached. It is 
based on the original {{TestOutOfOrderWrite}} class. The problem I'm getting is 
that the {{messageReceived()}} method still receives the RPC message with the 
fragment header you mentioned above. Shouldn't it be taken off by 
{{RpcFrameDecoder}} before reaching {{messageReceived()}}?

> Umbrella JIRA for NFS end to end unit test frameworks
> -
>
> Key: HDFS-5135
> URL: https://issues.apache.org/jira/browse/HDFS-5135
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Zhe Zhang
> Attachments: TestRPCMessagesInNFS.java
>
>
> Currently, we have to manually start portmap and nfs3 processes to test patch 
> and new functionalities. This JIRA is to track the effort to introduce a test 
> framework to NFS unit test without starting standalone nfs3 processes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6855) Add a different end-to-end non-manual NFS test to replace TestOutOfOrderWrite

2014-08-21 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-6855:
-

Issue Type: Sub-task  (was: Improvement)
Parent: HDFS-5135

> Add a different end-to-end non-manual NFS test to replace TestOutOfOrderWrite
> -
>
> Key: HDFS-6855
> URL: https://issues.apache.org/jira/browse/HDFS-6855
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Brandon Li
>Assignee: Zhe Zhang
>
> TestOutOfOrderWrite is an end-to-end test with a TCP client. However, it's a 
> manual test and out-of-order write is covered by new added test in HDFS-6850.
> This JIRA is to track the effort of adding a new end-to-end test with more 
> test cases to replace TestOutOfOrderWrite.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5135) Umbrella JIRA for NFS end to end unit test frameworks

2014-08-21 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-5135:


Attachment: TestRPCMessagesInNFS.java

> Umbrella JIRA for NFS end to end unit test frameworks
> -
>
> Key: HDFS-5135
> URL: https://issues.apache.org/jira/browse/HDFS-5135
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Zhe Zhang
> Attachments: TestRPCMessagesInNFS.java
>
>
> Currently, we have to manually start portmap and nfs3 processes to test patch 
> and new functionalities. This JIRA is to track the effort to introduce a test 
> framework to NFS unit test without starting standalone nfs3 processes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6581) Write to single replica in memory

2014-08-21 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106069#comment-14106069
 ] 

Colin Patrick McCabe commented on HDFS-6581:


bq. If you truncate() a file while a reader has it mmapped, will the reader get 
0s or a bus error? I seem to recall it's the latter, which may be a bit nasty 
for a revocation path.

A reader which reads via read() will get EOF.  A reader reading via mmap will 
get SIGBUS.  It's not nasty, because we only do this after telling the client 
to stop reading using the mechanism in HDFS-5182.  The client only ever has 
problems if it goes rogue and decides to ignore our message telling it to stop 
reading.

> Write to single replica in memory
> -
>
> Key: HDFS-6581
> URL: https://issues.apache.org/jira/browse/HDFS-6581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFSWriteableReplicasInMemory.pdf
>
>
> Per discussion with the community on HDFS-5851, we will implement writing to 
> a single replica in DN memory via DataTransferProtocol.
> This avoids some of the issues with short-circuit writes, which we can 
> revisit at a later time.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-4486) Add log category for long-running DFSClient notices

2014-08-21 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106061#comment-14106061
 ] 

Colin Patrick McCabe commented on HDFS-4486:


{code}
+PerformanceAdvisory.LOG.debug("Failed to get a local block reader. " +
+"Falling back to a socket-based block reader.");
{code}

The problem with putting this log where you have it now is that it will be 
extremely noisy.  We should only fire off this log message when we try to do a 
short-circuit read and fail, not just any old time when the replicas are all 
remote or something.

For example, we should change this to a PerformanceAdvisory:
{code}
  if (LOG.isTraceEnabled()) {
LOG.trace(this + ": " + pathInfo + " is not " +
"usable for short circuit; giving up on BlockReaderLocal.");
  }
{code}

and probably this:
{code}
LOG.trace(this + ": not trying to create a remote block reader " +
"because the UNIX domain socket at " + pathInfo +
" is not usable.");
{code}

> Add log category for long-running DFSClient notices
> ---
>
> Key: HDFS-4486
> URL: https://issues.apache.org/jira/browse/HDFS-4486
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Todd Lipcon
>    Assignee: Zhe Zhang
>Priority: Minor
> Attachments: HDFS-4486-20140820.patch, HDFS-4486-20140821.patch
>
>
> There are a number of features in the DFS client which are transparent but 
> can make a fairly big difference for performance -- two in particular are 
> short circuit reads and native checksumming. Because we don't want log spew 
> for clients like "hadoop fs -cat" we currently log only at DEBUG level when 
> these features are disabled. This makes it difficult to troubleshoot/verify 
> for long-running perf-sensitive clients like HBase.
> One simple solution is to add a new log category - eg 
> o.a.h.h.DFSClient.PerformanceAdvisory - which long-running clients could 
> enable at DEBUG level without getting the full debug spew.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6729) Support maintenance mode for DN

2014-08-21 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-6729:


Attachment: (was: hadoop-6729.000.patch)

> Support maintenance mode for DN
> ---
>
> Key: HDFS-6729
> URL: https://issues.apache.org/jira/browse/HDFS-6729
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: hadoop-6729.000.patch
>
>
> Some maintenance work (e.g., upgrading RAM or adding disks) on a DataNode only 
> takes a short amount of time (e.g., 10 minutes). In these cases, the users do 
> not want to report missing blocks on this DN because the DN will be back online 
> shortly without data loss. Thus, we need a maintenance mode for a DN so that 
> maintenance work can be carried out on the DN without having to decommission 
> it or the DN being marked as dead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-6729) Support maintenance mode for DN

2014-08-21 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu reassigned HDFS-6729:
---

Assignee: Lei (Eddy) Xu

> Support maintenance mode for DN
> ---
>
> Key: HDFS-6729
> URL: https://issues.apache.org/jira/browse/HDFS-6729
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: hadoop-6729.000.patch
>
>
> Some maintenance work (e.g., upgrading RAM or adding disks) on a DataNode only 
> takes a short amount of time (e.g., 10 minutes). In these cases, the users do 
> not want to report missing blocks on this DN because the DN will be back online 
> shortly without data loss. Thus, we need a maintenance mode for a DN so that 
> maintenance work can be carried out on the DN without having to decommission 
> it or the DN being marked as dead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6729) Support maintenance mode for DN

2014-08-21 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-6729:


 Target Version/s: 3.0.0
Affects Version/s: (was: 2.4.0)
   2.5.0
   Status: Patch Available  (was: Open)

> Support maintenance mode for DN
> ---
>
> Key: HDFS-6729
> URL: https://issues.apache.org/jira/browse/HDFS-6729
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: hadoop-6729.000.patch
>
>
> Some maintenance work (e.g., upgrading RAM or adding disks) on a DataNode only 
> takes a short amount of time (e.g., 10 minutes). In these cases, the users do 
> not want to report missing blocks on this DN because the DN will be back online 
> shortly without data loss. Thus, we need a maintenance mode for a DN so that 
> maintenance work can be carried out on the DN without having to decommission 
> it or the DN being marked as dead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6729) Support maintenance mode for DN

2014-08-21 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-6729:


Attachment: hadoop-6729.000.patch

> Support maintenance mode for DN
> ---
>
> Key: HDFS-6729
> URL: https://issues.apache.org/jira/browse/HDFS-6729
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: hadoop-6729.000.patch
>
>
> Some maintenance work (e.g., upgrading RAM or adding disks) on a DataNode 
> takes only a short amount of time (e.g., 10 minutes). In these cases, users do 
> not want missing blocks to be reported for this DN, because the DN will be back 
> online shortly without data loss. Thus, we need a maintenance mode for a DN so 
> that maintenance work can be carried out on the DN without having to 
> decommission it or mark it as dead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6581) Write to single replica in memory

2014-08-21 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106042#comment-14106042
 ] 

Todd Lipcon commented on HDFS-6581:
---

If you truncate() a file while a reader has it mmapped, will the reader get 0s 
or a bus error? I seem to recall it's the latter, which may be a bit nasty for 
a revocation path.

Why not do what we discussed a couple months back and have a disk replica with 
mlock? I think this will prevent any writeback.
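
Not from any patch here, just a quick way to check the behavior asked about above: the following minimal, self-contained Java sketch maps a file, truncates it through a second handle, and then touches the mapping. On Linux, touching a page that was truncated away generally raises SIGBUS, which HotSpot surfaces to the reader as an InternalError rather than returning zeros; the path and sizes below are arbitrary.
{code}
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Sketch only: demonstrate what a reader of an mmapped file sees after the
// file is truncated underneath it.
public class TruncateWhileMapped {
  public static void main(String[] args) throws Exception {
    RandomAccessFile raf = new RandomAccessFile("/tmp/mmap-truncate-test", "rw");
    raf.setLength(4 * 4096);                                   // a few pages
    MappedByteBuffer buf =
        raf.getChannel().map(FileChannel.MapMode.READ_ONLY, 0, 4 * 4096);

    // Truncate through a separate handle while the mapping is still held.
    new RandomAccessFile("/tmp/mmap-truncate-test", "rw").setLength(0);

    try {
      System.out.println(buf.get(2 * 4096));                   // page is gone now
    } catch (Throwable t) {
      // Typically java.lang.InternalError caused by SIGBUS, not a zero read.
      System.out.println("Access after truncate failed: " + t);
    }
  }
}
{code}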

> Write to single replica in memory
> -
>
> Key: HDFS-6581
> URL: https://issues.apache.org/jira/browse/HDFS-6581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFSWriteableReplicasInMemory.pdf
>
>
> Per discussion with the community on HDFS-5851, we will implement writing to 
> a single replica in DN memory via DataTransferProtocol.
> This avoids some of the issues with short-circuit writes, which we can 
> revisit at a later time.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6729) Support maintenance mode for DN

2014-08-21 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-6729:


Attachment: hadoop-6729.000.patch

This patch adds support for marking a DataNode as being in maintenance mode, so 
that the system administrator can turn the DataNode off to upgrade it. An 
expiration time is set for maintenance mode; if the NameNode does not hear a 
heartbeat from the DataNode after the maintenance window expires, the NN 
considers the DataNode dead and the normal data recovery process kicks in to 
re-replicate its blocks.

The CLI and configuration file support for this feature will be added in 
another JIRA.
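
As a rough illustration of the expiration idea only (the class and field names below are made up for this sketch and are not the ones used in hadoop-6729.000.patch):
{code}
// Hypothetical sketch: tolerate missing heartbeats while a maintenance window
// is open; once it expires, fall back to the normal dead-node handling so
// block re-replication kicks in.
class MaintenanceState {
  private volatile long maintenanceExpiryMs = 0;   // 0 means not in maintenance

  void startMaintenance(long durationMs) {
    maintenanceExpiryMs = System.currentTimeMillis() + durationMs;
  }

  /** Called when deciding whether a silent DataNode should be marked dead. */
  boolean shouldMarkDead(long lastHeartbeatMs, long heartbeatExpireMs) {
    long now = System.currentTimeMillis();
    boolean heartbeatExpired = (now - lastHeartbeatMs) > heartbeatExpireMs;
    boolean inMaintenance = now < maintenanceExpiryMs;
    return heartbeatExpired && !inMaintenance;
  }
}
{code}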

> Support maintenance mode for DN
> ---
>
> Key: HDFS-6729
> URL: https://issues.apache.org/jira/browse/HDFS-6729
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.4.0
>Reporter: Lei (Eddy) Xu
> Attachments: hadoop-6729.000.patch
>
>
> Some maintenance work (e.g., upgrading RAM or adding disks) on a DataNode 
> takes only a short amount of time (e.g., 10 minutes). In these cases, users do 
> not want missing blocks to be reported for this DN, because the DN will be back 
> online shortly without data loss. Thus, we need a maintenance mode for a DN so 
> that maintenance work can be carried out on the DN without having to 
> decommission it or mark it as dead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6898) DN must reserve space for a full block when an RBW block is created

2014-08-21 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106041#comment-14106041
 ] 

Arpit Agarwal commented on HDFS-6898:
-

Cancelled, patch depends on HDFS-6899.

> DN must reserve space for a full block when an RBW block is created
> ---
>
> Key: HDFS-6898
> URL: https://issues.apache.org/jira/browse/HDFS-6898
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.5.0
>Reporter: Gopal V
>Assignee: Arpit Agarwal
> Attachments: HDFS-6898.01.patch, HDFS-6898.03.patch
>
>
> DN will successfully create two RBW blocks on the same volume even if the 
> free space is sufficient for just one full block.
> One or both block writers may subsequently get a DiskOutOfSpace exception. 
> This can be avoided by allocating space up front.
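
A minimal sketch of the up-front reservation idea from the quoted description; this is illustrative only and does not reflect the structure of the attached patches (the class and field names are made up):
{code}
import java.util.concurrent.atomic.AtomicLong;

// Sketch: reserve a full block's worth of space when an RBW replica is
// created, and release the unused reservation on finalize or abort.
class ReservingVolume {
  private final AtomicLong reservedBytes = new AtomicLong();
  private final long capacityBytes;
  private volatile long usedBytes;        // updated elsewhere as blocks land on disk

  ReservingVolume(long capacityBytes) {
    this.capacityBytes = capacityBytes;
  }

  boolean tryReserveForRbw(long blockSize) {
    while (true) {
      long reserved = reservedBytes.get();
      if (usedBytes + reserved + blockSize > capacityBytes) {
        return false;                     // would overcommit this volume
      }
      if (reservedBytes.compareAndSet(reserved, reserved + blockSize)) {
        return true;                      // reservation succeeded
      }
    }
  }

  void releaseReservation(long bytes) {
    reservedBytes.addAndGet(-bytes);      // called on finalize or abort
  }
}
{code}
With a check like this, the second RBW writer in the quoted scenario would be rejected at block creation time instead of failing later with a disk-out-of-space error.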



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6908) incorrect snapshot directory diff generated by snapshot deletion

2014-08-21 Thread Juan Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Yu updated HDFS-6908:
--

Attachment: HDFS-6908.001.patch

The problem is that when deleting a snapshot, HDFS cleans up inodes that are no 
longer in any snapshot; however, the logic is slightly wrong.
If an inode is a directory, it calls destroyCreatedList() to put the created 
child inodes (those created between the prior snapshot and the snapshot being 
deleted) into the removedINodes list, but it also clears the createdList. This 
breaks the create/delete pairing, so later, when the diff is combined with the 
prior diff, the delete operation survives.
I think that instead of calling destroyCreatedList(), it should just do 
dir.removeChild(c) and let the file diff handle the cleanup for files.
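
To make the pairing point concrete, here is a toy model (not HDFS code; the class and field names are made up) of why destroying the created list before the merge leaves a stale delete behind:
{code}
import java.util.ArrayList;
import java.util.List;

// Toy model of a directory diff: a name present in both lists is a
// create/delete pair and should cancel out when this diff is merged into the
// prior snapshot's diff.
class CreatedDeletedDiff {
  final List<String> created = new ArrayList<String>();
  final List<String> deleted = new ArrayList<String>();

  void combineInto(CreatedDeletedDiff prior) {
    for (String name : deleted) {
      if (created.remove(name)) {
        continue;                  // created and deleted in this interval: drop both
      }
      prior.deleted.add(name);     // genuine deletion, keep it
    }
    prior.created.addAll(created); // remaining creations survive (renames ignored)
  }
}
{code}
If the created list is emptied before combineInto() runs, which is what clearing the createdList amounts to, the remove() check never matches and the delete entry incorrectly survives in the prior diff.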


> incorrect snapshot directory diff generated by snapshot deletion
> 
>
> Key: HDFS-6908
> URL: https://issues.apache.org/jira/browse/HDFS-6908
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Juan Yu
>Assignee: Juan Yu
> Attachments: HDFS-6908.001.patch
>
>
> In the following scenario, deleting a snapshot can generate an incorrect 
> snapshot directory diff and a corrupted fsimage; if you restart the NN after 
> that, you will get a NullPointerException.
> 1. create a directory and create a file under it
> 2. take a snapshot
> 3. create another file under that directory
> 4. take second snapshot
> 5. delete both files and the directory
> 6. delete second snapshot
> An incorrect directory diff will be generated.
> Restarting the NN will then throw an NPE:
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.addToDeletedList(FSImageFormatPBSnapshot.java:246)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDeletedList(FSImageFormatPBSnapshot.java:265)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadDirectoryDiffList(FSImageFormatPBSnapshot.java:328)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadSnapshotDiffSection(FSImageFormatPBSnapshot.java:192)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:254)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:168)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:208)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:906)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:892)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:715)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:653)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:276)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:882)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:629)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:498)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:554)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6581) Write to single replica in memory

2014-08-21 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106025#comment-14106025
 ] 

Colin Patrick McCabe commented on HDFS-6581:


Looks good overall.  It's good to see progress on this.

Some comments about the design doc:

* Why not use ramfs instead of tmpfs?  ramfs can't swap.

** The problem with using tmpfs is that the system could move the data to swap 
at any time.  In addition to performance problems, this could cause correctness 
problems later when we read back the data from swap (i.e. from the hard disk).  
Since we don't want to verify checksums here, we should use a storage method 
that we know never touches the disk.  Tachyon uses ramfs instead of tmpfs for 
this reason.

* An LRU replacement policy isn't a good choice.  It's very easy for a batch 
job to kick out everything in memory before it can ever be used again 
(thrashing).  An LFU (least frequently used) policy would be much better.  We'd 
have to keep usage statistics to implement this, but that doesn't seem too bad.

* How is the maximum tmpfs/ramfs size per datanode configured?  I think we 
should use the existing {{dfs.datanode.max.locked.memory}} property to 
configure this, for consistency.  System administrators should not need to 
configure separate pools of memory for HDFS-4949 and this feature.  It should 
be one memory size.

** I also think that cache directives from HDFS-4949 should take precedence 
over this opportunistic write caching.  If we need to evict some HDFS-5851 
cache items to finish our HDFS-4949 caching, we should do that.

* Related to that, we might want to rename {{dfs.datanode.max.locked.memory}} 
to {{dfs.data.node.max.cache.memory}} or something.

* You can effectively revoke access to a block file stored in ramfs or tmpfs by 
truncating that file to 0 bytes.  The client can hang on to the file 
descriptor, but this doesn't keep any data bytes in memory.  So we can move 
things out of the cache even if the clients are unresponsive.  Also see 
HDFS-6750 and HDFS-6036 for examples of how we can ask the clients to stop 
using a short-circuit replica before tearing it down.
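
On the LRU vs. LFU point, a minimal LFU sketch with per-block use counters, purely illustrative (not from any patch; block IDs are plain longs here):
{code}
import java.util.HashMap;
import java.util.Map;

// Sketch: evict the least frequently used cached block. A batch job that
// touches each block exactly once cannot push out blocks that have been read
// many times, which is the thrashing problem plain LRU has.
class LfuEvictionPolicy {
  private final Map<Long, Long> useCount = new HashMap<Long, Long>();

  synchronized void recordUse(long blockId) {
    Long c = useCount.get(blockId);
    useCount.put(blockId, c == null ? 1L : c + 1L);
  }

  synchronized Long pickVictim() {
    Long victim = null;
    long fewestUses = Long.MAX_VALUE;
    for (Map.Entry<Long, Long> e : useCount.entrySet()) {
      if (e.getValue() < fewestUses) {
        fewestUses = e.getValue();
        victim = e.getKey();
      }
    }
    return victim;                        // null if nothing is cached
  }

  synchronized void onEvicted(long blockId) {
    useCount.remove(blockId);
  }
}
{code}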

> Write to single replica in memory
> -
>
> Key: HDFS-6581
> URL: https://issues.apache.org/jira/browse/HDFS-6581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFSWriteableReplicasInMemory.pdf
>
>
> Per discussion with the community on HDFS-5851, we will implement writing to 
> a single replica in DN memory via DataTransferProtocol.
> This avoids some of the issues with short-circuit writes, which we can 
> revisit at a later time.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6829) DFSAdmin refreshSuperUserGroupsConfiguration failed in security cluster

2014-08-21 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDFS-6829:
---

Status: Patch Available  (was: Open)

> DFSAdmin refreshSuperUserGroupsConfiguration failed in security cluster
> ---
>
> Key: HDFS-6829
> URL: https://issues.apache.org/jira/browse/HDFS-6829
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.4.1
>Reporter: zhaoyunjiong
>Assignee: zhaoyunjiong
>Priority: Minor
> Attachments: HDFS-6829.patch
>
>
> When we run the command "hadoop dfsadmin -refreshSuperUserGroupsConfiguration", 
> it fails and reports the message below:
> 14/08/05 21:32:06 WARN security.MultiRealmUserAuthentication: The 
> serverPrincipal = doesn't confirm to the standards
> refreshSuperUserGroupsConfiguration: null
> After checking the code, I found the bug is triggered for the following reasons:
> 1. We didn't set 
> CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY, which is needed 
> by RefreshUserMappingsProtocol. In DFSAdmin, if no 
> CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY is set, it will 
> fall back to DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY: 
> conf.set(CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY,   
> conf.get(DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY, ""));
> 2. But we set DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY in 
> hdfs-site.xml.
> 3. DFSAdmin didn't load hdfs-site.xml.
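
One way to avoid the empty-principal problem in the quoted description on the client side is sketched below; the attached HDFS-6829.patch may well take a different approach. Building the admin-side Configuration as an HdfsConfiguration registers hdfs-default.xml and hdfs-site.xml as default resources, so the NameNode principal is visible when the quoted fallback runs.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;

// Sketch only: ensure hdfs-site.xml is loaded before the principal fallback
// quoted above runs, so the service principal is not empty.
public class RefreshPrincipalSetup {
  public static void main(String[] args) {
    Configuration conf = new HdfsConfiguration();   // pulls in hdfs-site.xml
    conf.set(CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY,
        conf.get(DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY, ""));
    System.out.println("service principal = "
        + conf.get(CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY));
  }
}
{code}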



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6829) DFSAdmin refreshSuperUserGroupsConfiguration failed in security cluster

2014-08-21 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106014#comment-14106014
 ] 

Jitendra Nath Pandey commented on HDFS-6829:


+1. looks good to me

> DFSAdmin refreshSuperUserGroupsConfiguration failed in security cluster
> ---
>
> Key: HDFS-6829
> URL: https://issues.apache.org/jira/browse/HDFS-6829
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.4.1
>Reporter: zhaoyunjiong
>Assignee: zhaoyunjiong
>Priority: Minor
> Attachments: HDFS-6829.patch
>
>
> When we run the command "hadoop dfsadmin -refreshSuperUserGroupsConfiguration", 
> it fails and reports the message below:
> 14/08/05 21:32:06 WARN security.MultiRealmUserAuthentication: The 
> serverPrincipal = doesn't confirm to the standards
> refreshSuperUserGroupsConfiguration: null
> After checking the code, I found the bug is triggered for the following reasons:
> 1. We didn't set 
> CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY, which is needed 
> by RefreshUserMappingsProtocol. In DFSAdmin, if no 
> CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY is set, it will 
> fall back to DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY: 
> conf.set(CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY,   
> conf.get(DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY, ""));
> 2. But we set DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY in 
> hdfs-site.xml.
> 3. DFSAdmin didn't load hdfs-site.xml.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6898) DN must reserve space for a full block when an RBW block is created

2014-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106008#comment-14106008
 ] 

Hadoop QA commented on HDFS-6898:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12663482/HDFS-6898.03.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7702//console

This message is automatically generated.

> DN must reserve space for a full block when an RBW block is created
> ---
>
> Key: HDFS-6898
> URL: https://issues.apache.org/jira/browse/HDFS-6898
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.5.0
>Reporter: Gopal V
>Assignee: Arpit Agarwal
> Attachments: HDFS-6898.01.patch, HDFS-6898.03.patch
>
>
> DN will successfully create two RBW blocks on the same volume even if the 
> free space is sufficient for just one full block.
> One or both block writers may subsequently get a DiskOutOfSpace exception. 
> This can be avoided by allocating space up front.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-4486) Add log category for long-running DFSClient notices

2014-08-21 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-4486:


Attachment: HDFS-4486-20140821.patch

[~cmccabe] This new patch has more PerformanceAdvisory debugging messages.

> Add log category for long-running DFSClient notices
> ---
>
> Key: HDFS-4486
> URL: https://issues.apache.org/jira/browse/HDFS-4486
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Todd Lipcon
>Assignee: Zhe Zhang
>Priority: Minor
> Attachments: HDFS-4486-20140820.patch, HDFS-4486-20140821.patch
>
>
> There are a number of features in the DFS client which are transparent but 
> can make a fairly big difference for performance -- two in particular are 
> short circuit reads and native checksumming. Because we don't want log spew 
> for clients like "hadoop fs -cat" we currently log only at DEBUG level when 
> these features are disabled. This makes it difficult to troubleshoot/verify 
> for long-running perf-sensitive clients like HBase.
> One simple solution is to add a new log category - eg 
> o.a.h.h.DFSClient.PerformanceAdvisory - which long-running clients could 
> enable at DEBUG level without getting the full debug spew.
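
A minimal sketch of how such a category could be used from client code; the logger name expands the o.a.h.h abbreviation from the description, and the helper class name is made up and may not match the attached patches.
{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Sketch: a dedicated logger category that long-running clients can enable at
// DEBUG without turning on the full DFSClient debug spew.
public class PerformanceAdvisoryExample {
  private static final Log PERF_LOG =
      LogFactory.getLog("org.apache.hadoop.hdfs.DFSClient.PerformanceAdvisory");

  static void shortCircuitDisabled(String reason) {
    if (PERF_LOG.isDebugEnabled()) {
      PERF_LOG.debug("Short-circuit local reads are disabled: " + reason);
    }
  }
}
{code}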



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6892) Add XDR packaging method for each NFS request

2014-08-21 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-6892:
-

Description: 
This method can be used for unit tests.
Most requests implement this by overriding the RequestWithHandle#serialize() 
method. However, some request classes missed it, e.g., COMMIT3Request, 
MKDIR3Request, READDIR3Request, READDIRPLUS3Request, RMDIR3Request, 
REMOVE3Request, SETATTR3Request, SYMLINK3Request. RENAME3Request is another 
example. 

  was:
The method can be used for unit tests.
Most requests implement this by overriding the RequestWithHandle#serialize() 
method. However, some request classes missed it, e.g., COMMIT3Request, 
MKDIR3Request, READDIR3Request, READDIRPLUS3Request, RMDIR3Request, 
REMOVE3Request, SETATTR3Request, SYMLINK3Request. RENAME3Request is another 
example. 


> Add XDR packaging method for each NFS request
> -
>
> Key: HDFS-6892
> URL: https://issues.apache.org/jira/browse/HDFS-6892
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Brandon Li
>Assignee: Brandon Li
>
> This method can be used for unit tests.
> Most requests implement this by overriding the RequestWithHandle#serialize() 
> method. However, some request classes missed it, e.g., COMMIT3Request, 
> MKDIR3Request, READDIR3Request, READDIRPLUS3Request, RMDIR3Request, 
> REMOVE3Request, SETATTR3Request, SYMLINK3Request. RENAME3Request is another 
> example. 
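
Purely as an illustration of the round-trip such a serialize() method enables in unit tests; the types below are stand-ins defined in the sketch itself, not the actual classes in the nfs module.
{code}
import java.nio.ByteBuffer;
import java.nio.charset.Charset;

// Stand-in request type: package the fields so a test can feed the bytes back
// to the handler under test instead of hand-crafting the wire format.
class FakeMkdirRequest {
  final long fileHandleId;
  final String dirName;

  FakeMkdirRequest(long fileHandleId, String dirName) {
    this.fileHandleId = fileHandleId;
    this.dirName = dirName;
  }

  void serialize(ByteBuffer out) {
    byte[] name = dirName.getBytes(Charset.forName("UTF-8"));
    out.putLong(fileHandleId);
    out.putInt(name.length);
    out.put(name);
  }
}
{code}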



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-6892) Add XDR packaging method for each NFS request

2014-08-21 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li reassigned HDFS-6892:


Assignee: Brandon Li

> Add XDR packaging method for each NFS request
> -
>
> Key: HDFS-6892
> URL: https://issues.apache.org/jira/browse/HDFS-6892
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Brandon Li
>Assignee: Brandon Li
>
> The method can be used for unit tests.
> Most requests implement this by overriding the RequestWithHandle#serialize() 
> method. However, some request classes missed it, e.g., COMMIT3Request, 
> MKDIR3Request, READDIR3Request, READDIRPLUS3Request, RMDIR3Request, 
> REMOVE3Request, SETATTR3Request, SYMLINK3Request. RENAME3Request is another 
> example. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6888) Remove audit logging of getFileInfo()

2014-08-21 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HDFS-6888:
--

Attachment: HDFS-6888-2.patch

patch updated. Thank you for your comments. [~kihwal] and [~jira.shegalov].

> Remove audit logging of getFileInfo()
> -
>
> Key: HDFS-6888
> URL: https://issues.apache.org/jira/browse/HDFS-6888
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Kihwal Lee
>Assignee: Chen He
>  Labels: log
> Attachments: HDFS-6888-2.patch, HDFS-6888.patch
>
>
> The audit logging of getFileInfo() was added in HDFS-3733. Since this is one 
> of the most frequently called methods, users have noticed that the audit log 
> is now filled with these entries. Since we now have HTTP request logging, this 
> logging seems unnecessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6376) Distcp data between two HA clusters requires another configuration

2014-08-21 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105966#comment-14105966
 ] 

Jing Zhao commented on HDFS-6376:
-

Thanks for the response, [~dlmarion].

bq. I have been running a version of this patch for about 2 months on a test 
cluster. We are using Hadoop 2 so the patch that I am applying is a little 
different. 

Cool. Then could you also post an updated patch for hadoop 2, since we will 
eventually merge the patch to branch-2?

bq. Exclude seemed like a good term for that.

Sounds good to me. Let's keep the current name then.

Some other comments (sorry I should have posted them yesterday...):
# Minor: it may be better to wrap the logic of the following code into a new 
method in DFSUtil, since we use it in multiple places (a sketch of such a 
helper follows below).
{code}
+Map<String, Map<String, InetSocketAddress>> newAddressMap =
+DFSUtil.getNNServiceRpcAddresses(conf);
+
+for (String exclude : nameServiceExcludes)
+  newAddressMap.remove(exclude);
{code}
# Currently DFSUtil#getOnlyNameServiceIdOrNull returns null if there are more 
than two nameservices specified. A couple of places call this method, and it 
looks like DFSHAAdmin#resolveTarget may hit an issue if no -ns option is 
specified to HAAdmin. Thus I think we also need to add the exclude logic to 
DFSUtil#getOnlyNameServiceIdOrNull. And we need to add more tests for this new 
feature, e.g., to cover its usage in DFSHAAdmin.
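
A sketch of the wrapper suggested in item 1; the method name and exact signature are made up here, not taken from the patch.
{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.util.Collection;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSUtil;

// Hypothetical helper: fetch the NN service RPC addresses and drop the
// excluded nameservices in one place, instead of repeating the loop.
public class NameServiceAddressUtil {
  public static Map<String, Map<String, InetSocketAddress>>
      getNNServiceRpcAddressesExcluding(Configuration conf,
                                        Collection<String> excludes)
      throws IOException {
    Map<String, Map<String, InetSocketAddress>> addrs =
        DFSUtil.getNNServiceRpcAddresses(conf);
    for (String exclude : excludes) {
      addrs.remove(exclude);
    }
    return addrs;
  }
}
{code}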

> Distcp data between two HA clusters requires another configuration
> --
>
> Key: HDFS-6376
> URL: https://issues.apache.org/jira/browse/HDFS-6376
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, federation, hdfs-client
>Affects Versions: 2.3.0, 2.4.0
> Environment: Hadoop 2.3.0
>Reporter: Dave Marion
>Assignee: Dave Marion
> Fix For: 3.0.0
>
> Attachments: HDFS-6376-2.patch, HDFS-6376-3-branch-2.4.patch, 
> HDFS-6376-4-branch-2.4.patch, HDFS-6376-5-trunk.patch, 
> HDFS-6376-6-trunk.patch, HDFS-6376-7-trunk.patch, HDFS-6376-branch-2.4.patch, 
> HDFS-6376-patch-1.patch
>
>
> User has to create a third set of configuration files for distcp when 
> transferring data between two HA clusters.
> Consider the scenario in [1]. You cannot put all of the required properties 
> in core-site.xml and hdfs-site.xml for the client to resolve the location of 
> both active namenodes. If you do, then the datanodes from cluster A may join 
> cluster B. I can not find a configuration option that tells the datanodes to 
> federate blocks for only one of the clusters in the configuration.
> [1] 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201404.mbox/%3CBAY172-W2133964E0C283968C161DD1520%40phx.gbl%3E



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6581) Write to single replica in memory

2014-08-21 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105954#comment-14105954
 ] 

Arpit Agarwal commented on HDFS-6581:
-

Moved the patch to sub-task HDFS-6910.

> Write to single replica in memory
> -
>
> Key: HDFS-6581
> URL: https://issues.apache.org/jira/browse/HDFS-6581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFSWriteableReplicasInMemory.pdf
>
>
> Per discussion with the community on HDFS-5851, we will implement writing to 
> a single replica in DN memory via DataTransferProtocol.
> This avoids some of the issues with short-circuit writes, which we can 
> revisit at a later time.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6910) Initial prototype implementation for replicas in memory using tmpfs

2014-08-21 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-6910:


Status: Patch Available  (was: Open)

> Initial prototype implementation for replicas in memory using tmpfs
> ---
>
> Key: HDFS-6910
> URL: https://issues.apache.org/jira/browse/HDFS-6910
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-6910.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6910) Initial prototype implementation for replicas in memory using tmpfs

2014-08-21 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-6910:


Attachment: HDFS-6910.01.patch

> Initial prototype implementation for replicas in memory using tmpfs
> ---
>
> Key: HDFS-6910
> URL: https://issues.apache.org/jira/browse/HDFS-6910
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-6910.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6910) Initial prototype implementation for replicas in memory using tmpfs

2014-08-21 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-6910:
---

 Summary: Initial prototype implementation for replicas in memory 
using tmpfs
 Key: HDFS-6910
 URL: https://issues.apache.org/jira/browse/HDFS-6910
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal






--
This message was sent by Atlassian JIRA
(v6.2#6252)

