[jira] [Commented] (HDFS-8129) Erasure Coding: Maintain consistent naming for Erasure Coding related classes - EC/ErasureCoding

2015-05-01 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524253#comment-14524253
 ] 

Zhe Zhang commented on HDFS-8129:
-

+1 for EC. Filesystem and File System are both used, so FSXxx and FsXxx both 
look good. _Erasure_ and _Coding_ / _Code_ are always used as 2 words.

 Erasure Coding: Maintain consistent naming for Erasure Coding related classes 
 - EC/ErasureCoding
 

 Key: HDFS-8129
 URL: https://issues.apache.org/jira/browse/HDFS-8129
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
Priority: Minor

 Currently I see some classes named ErasureCode* and some named EC*.
 I feel we should maintain consistent naming across the project. This jira is to 
 correct the places where we named things differently so that the naming is uniform,
 and also to discuss which naming we should follow from now on when we 
 create new classes. 
 ErasureCoding* should be fine IMO. Let's discuss what others feel.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8311) DataStreamer.transfer() should timeout the socket InputStream.

2015-05-01 Thread Esteban Gutierrez (JIRA)
Esteban Gutierrez created HDFS-8311:
---

 Summary: DataStreamer.transfer() should timeout the socket 
InputStream.
 Key: HDFS-8311
 URL: https://issues.apache.org/jira/browse/HDFS-8311
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Esteban Gutierrez


While validating some HA failure modes we found that HDFS clients can take a 
long time to recover, or sometimes don't recover at all, because we don't set up 
the socket timeout on the InputStream:

{code}
private void transfer () { ...
...
 OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout);
 InputStream unbufIn = NetUtils.getInputStream(sock);
...
}
{code}

The InputStream should have its own timeout in the same way as the OutputStream.
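
A minimal sketch of that direction, assuming the two-argument 
{{NetUtils.getInputStream(Socket, long)}} overload and a {{readTimeout}} value 
chosen analogously to {{writeTimeout}} (illustrative only, not the committed change):

{code}
// Sketch: give the read side a socket timeout, mirroring the write side.
OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout);
InputStream unbufIn = NetUtils.getInputStream(sock, readTimeout);
{code}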




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7678) Erasure coding: DFSInputStream with decode functionality

2015-05-01 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524353#comment-14524353
 ] 

Andrew Wang commented on HDFS-7678:
---

Nits / comment requests:

* extra imports in DFSStripedInputStream
* I noticed these config keys aren't present in hdfs-default.xml, which is 
where we document the behavior of config keys. I bring it up because I was 
thinking about the semantic difference between overall timeout vs. per-op 
timeout. Since there are a couple EC keys that need documentation, I'm okay 
punting this to a quick follow-on.
* These same keys could also use javadoc on the DFSClientConf getters so we 
don't need to hunt in hdfs-default.xml or look at how the code uses the configs
* The importance of the largest read portion is not explained, nor is there any 
overall comment about the flow of the recovery logic. Some ascii art would help 
clarify all this; I drew something like this:

{noformat}
  +--+  +--+  |  +--+  +--+
+--+  |  |  |  |  |  |  |  |  |
|  |  |  |  +--+  |  |  |  |  |
+--+  +--+|  +--+  +--+
  |
 d1d2d3   |   p1p2
{noformat}

This way it becomes obvious that if d1 fails, you need to lengthen the read to d3 
in addition to grabbing the parity. The TODO to optimize the read is also 
clarified: we always calculate the max span and always do max-span reads for 
recovery, even though we might only need the part of d3 that's unfetched.

Related, I could see printing the plan like this to be nice for debugging. 
Also a nice illustration for the test cases.

Code:
* AFAICT we don't respect deadNodes in scheduleOneStripedRead, or during the 
initial planning phase. I mentioned this a bit in person, but we also need to 
think about the dead node marking policy for EC reads, since the timeout is 
aggressive.
* I don't understand the overall timeout handling, since if the overall timeout 
has elapsed, we still go through the switch and schedule recovery reads. Seems 
like something we should unit test.
* We also probably still want a per-operation timeout in addition to the 
overall timeout. If the per-op timeout hits, we keep trying to recover until 
the overall timeout runs out (or we run out of blocks); a sketch of this 
pattern follows after this list.
* The InterruptedException catch should probably rethrow an 
InterruptedIOException or similar; the idea is to get us out of this function 
ASAP while cleaning up state.
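
For illustration, a self-contained sketch of the per-op-timeout-inside-overall-deadline 
pattern described above, using only java.util.concurrent (this is not 
DFSStripedInputStream code; all names and numbers are made up):

{code}
import java.io.InterruptedIOException;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class TimeoutSketch {
  // Each submitted Callable stands in for one chunk read.
  static void readWithTimeouts() throws Exception {
    final long overallTimeoutMs = 9000;  // assumed overall budget
    final long perOpTimeoutMs = 1000;    // assumed per-operation budget
    final long deadline = System.nanoTime()
        + TimeUnit.MILLISECONDS.toNanos(overallTimeoutMs);

    ExecutorService pool = Executors.newFixedThreadPool(4);
    CompletionService<byte[]> reads = new ExecutorCompletionService<>(pool);
    int pending = 3;
    for (int i = 0; i < pending; i++) {
      reads.submit(() -> new byte[64 * 1024]);   // stand-in for one chunk read
    }
    try {
      while (pending > 0) {
        long remainingMs =
            TimeUnit.NANOSECONDS.toMillis(deadline - System.nanoTime());
        if (remainingMs <= 0) {
          // Overall budget exhausted: stop scheduling anything further.
          throw new InterruptedIOException("overall read timeout elapsed");
        }
        Future<byte[]> done =
            reads.poll(Math.min(remainingMs, perOpTimeoutMs), TimeUnit.MILLISECONDS);
        if (done == null) {
          // Per-op timeout fired: a real reader would schedule a recovery (parity)
          // read here instead of failing, as long as the overall budget remains.
          continue;
        }
        done.get();   // surfaces any read failure
        pending--;
      }
    } finally {
      pool.shutdownNow();
    }
  }
}
{code}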

Recommended follow-on work:
* fetchBlockByteRange is an outsize function and could use some splitting. We 
have this function 
* There's no abstraction here for cells or stripes, we're mucking directly with 
internal blocks and byte ranges. IMO all the logic should happen on these 
higher-level abstractions, then we turn it into blocks and byte ranges after. 
Max-span for instance is a hack, we should be calculating recovery reads 
cell-by-cell based on what we already have, then combining the cells for an 
internal block together to form the actual read.
* One thought regarding the threadpool. What we really want is to limit the 
concurrency per DN, i.e. a threadpool per DN. This works nicely with the read 
combining mentioned above, and does one better since it can coalesce recovery 
reads with already queued work. This is like how disk IO queues work; a rough 
sketch follows below.
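
A rough sketch of the per-DN queue idea; none of these names exist in DFSClient, 
and reads are simplified to plain Runnables just to make the shape concrete:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PerDatanodeReadPools {
  // One small executor per datanode: limits concurrency against each DN and gives
  // a natural queue where recovery reads line up behind (or can be coalesced with)
  // work already outstanding for that node.
  private final Map<String, ExecutorService> poolPerDn = new ConcurrentHashMap<>();
  private final int threadsPerDn;

  public PerDatanodeReadPools(int threadsPerDn) {
    this.threadsPerDn = threadsPerDn;
  }

  /** Submit one read against a given datanode address. */
  public void submitRead(String dnAddress, Runnable read) {
    poolPerDn
        .computeIfAbsent(dnAddress, dn -> Executors.newFixedThreadPool(threadsPerDn))
        .execute(read);
  }
}
{code}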

 Erasure coding: DFSInputStream with decode functionality
 

 Key: HDFS-7678
 URL: https://issues.apache.org/jira/browse/HDFS-7678
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Li Bo
Assignee: Zhe Zhang
 Attachments: BlockGroupReader.patch, HDFS-7678-HDFS-7285.002.patch, 
 HDFS-7678-HDFS-7285.003.patch, HDFS-7678-HDFS-7285.004.patch, 
 HDFS-7678-HDFS-7285.005.patch, HDFS-7678-HDFS-7285.006.patch, 
 HDFS-7678.000.patch, HDFS-7678.001.patch


 A block group reader will read data from a BlockGroup, whether in striping 
 layout or contiguous layout. Corrupt blocks can be known before 
 reading (told by the namenode), or may only be found during reading. The block 
 group reader needs to do decoding work when some blocks are found to be corrupt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7980) Incremental BlockReport will dramatically slow down the startup of a namenode

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524649#comment-14524649
 ] 

Hadoop QA commented on HDFS-7980:
-

(!) The patch artifact directory has been removed! 
This is a fatal error for test-patch.sh.  Aborting. 
Jenkins (node H3) information at 
https://builds.apache.org/job/PreCommit-HDFS-Build/10516/ may provide some 
hints.

 Incremental BlockReport will dramatically slow down the startup of  a namenode
 --

 Key: HDFS-7980
 URL: https://issues.apache.org/jira/browse/HDFS-7980
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Hui Zheng
Assignee: Walter Su
 Attachments: HDFS-7980.001.patch, HDFS-7980.002.patch, 
 HDFS-7980.003.patch, HDFS-7980.004.patch


 In the current implementation the datanode will call the 
 reportReceivedDeletedBlocks() method, which is an incremental block report, before 
 calling the bpNamenode.blockReport() method. So in a large (several thousands 
 of datanodes) and busy cluster it will slow down (by more than one hour) the 
 startup of the namenode. 
 {code}
 List<DatanodeCommand> blockReport() throws IOException {
 // send block report if timer has expired.
 final long startTime = now();
 if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
   return null;
 }
 final ArrayList<DatanodeCommand> cmds = new ArrayList<DatanodeCommand>();
 // Flush any block information that precedes the block report. Otherwise
 // we have a chance that we will miss the delHint information
 // or we will report an RBW replica after the BlockReport already reports
 // a FINALIZED one.
 reportReceivedDeletedBlocks();
 lastDeletedReport = startTime;
 ...
 // Send the reports to the NN.
 int numReportsSent = 0;
 int numRPCs = 0;
 boolean success = false;
 long brSendStartTime = now();
 try {
   if (totalBlockCount < dnConf.blockReportSplitThreshold) {
 // Below split threshold, send all reports in a single message.
 DatanodeCommand cmd = bpNamenode.blockReport(
 bpRegistration, bpos.getBlockPoolId(), reports);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4937) ReplicationMonitor can infinite-loop in BlockPlacementPolicyDefault#chooseRandom()

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524647#comment-14524647
 ] 

Hadoop QA commented on HDFS-4937:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12595453/HDFS-4937.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10547/console |


This message was automatically generated.

 ReplicationMonitor can infinite-loop in 
 BlockPlacementPolicyDefault#chooseRandom()
 --

 Key: HDFS-4937
 URL: https://issues.apache.org/jira/browse/HDFS-4937
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.0.4-alpha, 0.23.8
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HDFS-4937.patch


 When a large number of nodes are removed by refreshing the node lists, the 
 network topology is updated. If the refresh happens at the right moment, the 
 replication monitor thread may get stuck in the while loop of {{chooseRandom()}}. 
 This is because the cached cluster size is used in the terminal condition 
 check of the loop. This usually happens when a block with a high replication 
 factor is being processed. Since replicas/rack is also calculated beforehand, 
 no node choice may satisfy the goodness criteria if the refresh removed racks. 
 All nodes will end up in the excluded list, but its size will still be less 
 than the cached cluster size, so the loop runs forever. This was observed 
 in a production environment.
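 For illustration (numbers assumed, not from the report): if the cached cluster size 
 is 1000 but the refresh leaves only 600 live nodes, the excluded list can never grow 
 past 600 entries, so the loop's exit check against the stale size of 1000 is never 
 satisfied and {{chooseRandom()}} spins forever.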



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4977) Change Checkpoint Size of web ui of SecondaryNameNode

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524639#comment-14524639
 ] 

Hadoop QA commented on HDFS-4977:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12591878/HDFS-4977.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10544/console |


This message was automatically generated.

 Change Checkpoint Size of web ui of SecondaryNameNode
 ---

 Key: HDFS-4977
 URL: https://issues.apache.org/jira/browse/HDFS-4977
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.0.4-alpha
Reporter: Shinichi Yamashita
Priority: Minor
  Labels: newbie
 Attachments: HDFS-4977-2.patch, HDFS-4977.patch, HDFS-4977.patch


 The checkpoint of SecondaryNameNode after 2.0 is controlled by 
 dfs.namenode.checkpoint.period and dfs.namenode.checkpoint.txns.
 Because Checkpoint Size is still displayed in status.jsp of SecondaryNameNode, it 
 should be modified accordingly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4325) ClientProtocol.createSymlink parameter dirPerm invalid

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524642#comment-14524642
 ] 

Hadoop QA commented on HDFS-4325:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12594139/HDFS-4325.v1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10545/console |


This message was automatically generated.

 ClientProtocol.createSymlink parameter dirPerm invalid
 --

 Key: HDFS-4325
 URL: https://issues.apache.org/jira/browse/HDFS-4325
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client, namenode
Affects Versions: 2.0.4-alpha
Reporter: Binglin Chang
Assignee: Binglin Chang
 Attachments: HDFS-4325.v1.patch


 {code}
* @param link The path of the link being created.
* @param dirPerm permissions to use when creating parent directories
* @param createParent - if true then missing parent dirs are created
*   if false then parent must exist
 {code}
 According to the javadoc, auto-created parent dirs' permissions will be dirPerm, 
 but in fact directory permissions are always inherited from the parent directory 
 plus u+wx.
 IMHO, createSymlink behavior should be the same as create, which also inherits 
 the parent dir permission, so the current behavior makes sense, but the related 
 dirPerm parameter should be removed because it is invalid and confusing. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4777) File creation with overwrite flag set to true results in logSync holding namesystem lock

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524648#comment-14524648
 ] 

Hadoop QA commented on HDFS-4777:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12581106/HDFS-4777.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10548/console |


This message was automatically generated.

 File creation with overwrite flag set to true results in logSync holding 
 namesystem lock
 

 Key: HDFS-4777
 URL: https://issues.apache.org/jira/browse/HDFS-4777
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.0, 2.0.0-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-4777.patch


 FSNamesystem#startFileInternal calls delete. The delete method releases the write 
 lock, so parts of the startFileInternal code are unintentionally executed without 
 the write lock being held.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3618) SSH fencing option may incorrectly succeed if nc (netcat) command not present

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524650#comment-14524650
 ] 

Hadoop QA commented on HDFS-3618:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12597900/HDFS-3618.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10549/console |


This message was automatically generated.

 SSH fencing option may incorrectly succeed if nc (netcat) command not present
 -

 Key: HDFS-3618
 URL: https://issues.apache.org/jira/browse/HDFS-3618
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: auto-failover
Affects Versions: 2.0.0-alpha
Reporter: Brahma Reddy Battula
Assignee: Vinayakumar B
 Attachments: HDFS-3618.patch, HDFS-3618.patch, HDFS-3618.patch, 
 zkfc.txt, zkfc_threaddump.out


 Started NNs and zkfcs on SUSE 11.
 SUSE 11 has netcat installed, and netcat -z works (but nc -z won't 
 work).
 While executing the following command we got command not found, so rc was 
 something other than zero and we assumed that the server was down. Here we end up 
 not checking whether the service is actually down or not:
 {code}
 LOG.info(
     "Indeterminate response from trying to kill service. " +
     "Verifying whether it is running using nc...");
 rc = execCommand(session, "nc -z " + serviceAddr.getHostName() +
     " " + serviceAddr.getPort());
 if (rc == 0) {
   // the service is still listening - we are unable to fence
   LOG.warn("Unable to fence - it is running but we cannot kill it");
   return false;
 } else {
   LOG.info("Verified that the service is down.");
   return true;  
 }
 {code}
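 One possible direction (a sketch only, not the committed fix): distinguish the 
 shell's "command not found" exit code (127) from a real connection-refused result, 
 so a missing nc binary is treated as "unable to verify" rather than "service is 
 down":
 {code}
 rc = execCommand(session, "nc -z " + serviceAddr.getHostName() +
     " " + serviceAddr.getPort());
 if (rc == 0) {
   LOG.warn("Unable to fence - it is running but we cannot kill it");
   return false;
 } else if (rc == 127) {
   // hypothetical handling: nc is missing, so we cannot conclude the service is down
   LOG.warn("nc not found on target host - cannot verify that the service is down");
   return false;
 } else {
   LOG.info("Verified that the service is down.");
   return true;
 }
 {code}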



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4870) periodically re-resolve hostnames in included and excluded datanodes list

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524646#comment-14524646
 ] 

Hadoop QA commented on HDFS-4870:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12585903/HDFS-4870.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10546/console |


This message was automatically generated.

 periodically re-resolve hostnames in included and excluded datanodes list
 -

 Key: HDFS-4870
 URL: https://issues.apache.org/jira/browse/HDFS-4870
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.5-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-4870.001.patch


 We currently only resolve the hostnames in the included and excluded 
 datanodes list once-- when the list is read.  The rationale for this is that 
 in big clusters, DNS resolution for thousands of nodes can take a long time 
 (when generating a datanode list in getDatanodeListForReport, for example).  
 However, if the DNS information changes for one of these hosts, we should 
 reflect that.  A background thread could do these DNS resolutions every few 
 minutes without blocking any foreground operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6262) HDFS doesn't raise FileNotFoundException if the source of a rename() is missing

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524857#comment-14524857
 ] 

Hadoop QA commented on HDFS-6262:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12641192/HDFS-6262.2.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10632/console |


This message was automatically generated.

 HDFS doesn't raise FileNotFoundException if the source of a rename() is 
 missing
 ---

 Key: HDFS-6262
 URL: https://issues.apache.org/jira/browse/HDFS-6262
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.0
Reporter: Steve Loughran
Assignee: Akira AJISAKA
 Attachments: HDFS-6262.2.patch, HDFS-6262.patch


 HDFS's {{rename(src, dest)}} returns false if src does not exist - all the 
 other filesystems raise {{FileNotFoundException}}.
 This behaviour is defined in {{FSDirectory.unprotectedRenameTo()}} - the 
 attempt is logged, but the operation then just returns false.
 I propose changing the behaviour of {{DistributedFileSystem}} to be the same 
 as that of the others - and of {{FileContext}}, which does reject renames with 
 nonexistent sources.
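 For illustration, a small client-side sketch of the semantics being asked for, 
 written against the public FileSystem/Path API (assumed helper name; this is not 
 the attached patch):
 {code}
 // Sketch: fail fast with FileNotFoundException when the rename source is missing.
 static void renameStrict(FileSystem fs, Path src, Path dst) throws IOException {
   if (!fs.exists(src)) {
     throw new FileNotFoundException("rename source does not exist: " + src);
   }
   if (!fs.rename(src, dst)) {
     throw new IOException("rename failed: " + src + " -> " + dst);
   }
 }
 {code}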



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4504) DFSOutputStream#close doesn't always release resources (such as leases)

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524859#comment-14524859
 ] 

Hadoop QA commented on HDFS-4504:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12599081/HDFS-4504.016.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10633/console |


This message was automatically generated.

 DFSOutputStream#close doesn't always release resources (such as leases)
 ---

 Key: HDFS-4504
 URL: https://issues.apache.org/jira/browse/HDFS-4504
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-4504.001.patch, HDFS-4504.002.patch, 
 HDFS-4504.007.patch, HDFS-4504.008.patch, HDFS-4504.009.patch, 
 HDFS-4504.010.patch, HDFS-4504.011.patch, HDFS-4504.014.patch, 
 HDFS-4504.015.patch, HDFS-4504.016.patch


 {{DFSOutputStream#close}} can throw an {{IOException}} in some cases.  One 
 example is if there is a pipeline error and then pipeline recovery fails.  
 Unfortunately, in this case, some of the resources used by the 
 {{DFSOutputStream}} are leaked.  One particularly important resource is file 
 leases.
 So it's possible for a long-lived HDFS client, such as Flume, to write many 
 blocks to a file, but then fail to close it.  Unfortunately, the 
 {{LeaseRenewerThread}} inside the client will continue to renew the lease for 
 the undead file.  Future attempts to close the file will just rethrow the 
 previous exception, and no progress can be made by the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4273) Fix some issue in DFSInputstream

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524845#comment-14524845
 ] 

Hadoop QA commented on HDFS-4273:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12621932/HDFS-4273.v8.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10630/console |


This message was automatically generated.

 Fix some issue in DFSInputstream
 

 Key: HDFS-4273
 URL: https://issues.apache.org/jira/browse/HDFS-4273
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor
 Attachments: HDFS-4273-v2.patch, HDFS-4273.patch, HDFS-4273.v3.patch, 
 HDFS-4273.v4.patch, HDFS-4273.v5.patch, HDFS-4273.v6.patch, 
 HDFS-4273.v7.patch, HDFS-4273.v8.patch, TestDFSInputStream.java


 The following issues in DFSInputStream are addressed in this jira:
 1. read may not retry enough in some cases, causing early failure.
 Assume the following call logic:
 {noformat} 
 readWithStrategy()
   -> blockSeekTo()
   -> readBuffer()
      -> reader.doRead()
      -> seekToNewSource() add currentNode to deadnode, wish to get a 
         different datanode
         -> blockSeekTo()
            -> chooseDataNode()
               -> block missing, clear deadNodes and pick the currentNode again
      seekToNewSource() return false
   readBuffer() re-throw the exception quit loop
 readWithStrategy() got the exception, and may fail the read call before 
 tried MaxBlockAcquireFailures.
 {noformat} 
 2. In a multi-threaded scenario (like HBase), DFSInputStream.failures has a race 
 condition: it is cleared to 0 while it is still used by another thread. So it is 
 possible that some read thread may never quit. Changing failures to a local 
 variable solves this issue.
 3. If the local datanode is added to deadNodes, it will not be removed from 
 deadNodes when the DN comes back alive. We need a way to remove the local datanode 
 from deadNodes when it becomes live again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5745) Unnecessary disk check triggered when socket operation has problem.

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524839#comment-14524839
 ] 

Hadoop QA commented on HDFS-5745:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12622369/HDFS-5745.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10628/console |


This message was automatically generated.

 Unnecessary disk check triggered when socket operation has problem.
 ---

 Key: HDFS-5745
 URL: https://issues.apache.org/jira/browse/HDFS-5745
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 1.2.1
Reporter: MaoYuan Xian
 Attachments: HDFS-5745.patch


 When a BlockReceiver data transfer fails, SocketOutputStream 
 translates the exception into an IOException with the message "The stream is 
 closed":
 2014-01-06 11:48:04,716 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 IOException in BlockReceiver.run():
 java.io.IOException: The stream is closed
 at org.apache.hadoop.net.SocketOutputStream.write
 at java.io.BufferedOutputStream.flushBuffer
 at java.io.BufferedOutputStream.flush
 at java.io.DataOutputStream.flush
 at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run
 at java.lang.Thread.run
 This causes the checkDiskError method of the DataNode to be called and triggers a 
 disk scan.
 Can we make a modification like the one below in checkDiskError to avoid this 
 unnecessary disk scan?:
 {code}
 --- a/src/hdfs/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 +++ b/src/hdfs/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 @@ -938,7 +938,8 @@ public class DataNode extends Configured
   || e.getMessage().startsWith("An established connection was aborted")
   || e.getMessage().startsWith("Broken pipe")
   || e.getMessage().startsWith("Connection reset")
 - || e.getMessage().contains("java.nio.channels.SocketChannel")) {
 + || e.getMessage().contains("java.nio.channels.SocketChannel")
 + || e.getMessage().startsWith("The stream is closed")) {
     LOG.info("Not checking disk as checkDiskError was called on a network " +
       "related exception"); 
     return;
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4861) BlockPlacementPolicyDefault does not consider decommissioning racks

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524841#comment-14524841
 ] 

Hadoop QA commented on HDFS-4861:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12638791/HDFS-4861-v2.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10629/console |


This message was automatically generated.

 BlockPlacementPolicyDefault does not consider decommissioning racks
 ---

 Key: HDFS-4861
 URL: https://issues.apache.org/jira/browse/HDFS-4861
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.7, 2.1.0-beta
Reporter: Kihwal Lee
Assignee: Rushabh S Shah
 Attachments: HDFS-4861-v2.patch, HDFS-4861.patch


 getMaxNodesPerRack() calculates the max replicas/rack like this:
 {code}
 int maxNodesPerRack = (totalNumOfReplicas-1)/clusterMap.getNumOfRacks()+2;
 {code}
 Since this does not consider the racks that are being decommissioned and the 
 decommissioning state is only checked later in isGoodTarget(), certain blocks 
 are not replicated even when there are many racks and nodes.
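 For illustration (with assumed numbers): with replication factor 10 and 5 racks, 
 maxNodesPerRack = (10-1)/5 + 2 = 3. If 2 of those 5 racks are entirely 
 decommissioning, only 3 racks can actually accept replicas, i.e. at most 3*3 = 9 
 placements, so the 10th replica can never be placed even though many racks and 
 nodes exist.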



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3503) Move LengthInputStream and PositionTrackingInputStream to common

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524837#comment-14524837
 ] 

Hadoop QA commented on HDFS-3503:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12639073/h3503_20140407.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10626/console |


This message was automatically generated.

 Move LengthInputStream and PositionTrackingInputStream to common
 

 Key: HDFS-3503
 URL: https://issues.apache.org/jira/browse/HDFS-3503
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Attachments: h3503_20140328.patch, h3503_20140407.patch


 We have LengthInputStream in org.apache.hadoop.hdfs.server.datanode.fsdataset 
 and PositionTrackingInputStream in 
 org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.  These two classes 
 are generally useful.  Let's move them to org.apache.hadoop.io.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6588) Investigating removing getTrueCause method in Server.java

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524866#comment-14524866
 ] 

Hadoop QA commented on HDFS-6588:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12657013/HDFS-6588.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10636/console |


This message was automatically generated.

 Investigating removing getTrueCause method in Server.java
 -

 Key: HDFS-6588
 URL: https://issues.apache.org/jira/browse/HDFS-6588
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security, webhdfs
Affects Versions: 2.5.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HDFS-6588.001.patch, HDFS-6588.001.patch, 
 HDFS-6588.001.patch, HDFS-6588.001.patch


 When addressing Daryn Sharp's comment for HDFS-6475 quoted below:
 {quote}
 What I'm saying is I think the patch adds too much unnecessary code. Filing 
 an improvement to delete all but a few lines of the code changed in this 
 patch seems a bit odd. I think you just need to:
 - Delete getTrueCause entirely instead of moving it elsewhere
 - In saslProcess, just throw the exception instead of running it through 
 getTrueCause since it's not a InvalidToken wrapping another exception 
 anymore.
 - Keep your 3-line change to unwrap SecurityException in toResponse
 {quote}
 There are multiple test failures after making the suggested changes. Filing 
 this jira to dedicate it to investigating the removal of the getTrueCause method.
 More detail will be put in the first comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6526) Implement HDFS TtlManager

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524868#comment-14524868
 ] 

Hadoop QA commented on HDFS-6526:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12651960/HDFS-6526.1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10637/console |


This message was automatically generated.

 Implement HDFS TtlManager
 -

 Key: HDFS-6526
 URL: https://issues.apache.org/jira/browse/HDFS-6526
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client, namenode
Affects Versions: 2.4.0
Reporter: Zesheng Wu
Assignee: Zesheng Wu
 Attachments: HDFS-6526.1.patch


 This issue is used to track development of the HDFS TtlManager; for details see 
 HDFS-6382.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6525) FsShell supports HDFS TTL

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524849#comment-14524849
 ] 

Hadoop QA commented on HDFS-6525:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12652123/HDFS-6525.2.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10631/console |


This message was automatically generated.

 FsShell supports HDFS TTL
 -

 Key: HDFS-6525
 URL: https://issues.apache.org/jira/browse/HDFS-6525
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client, tools
Affects Versions: 2.4.0
Reporter: Zesheng Wu
Assignee: Zesheng Wu
 Attachments: HDFS-6525.1.patch, HDFS-6525.2.patch


 This issue is used to track development of HDFS TTL support for FsShell; 
 for details see HDFS-6382.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5464) Simplify block report diff calculation

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524860#comment-14524860
 ] 

Hadoop QA commented on HDFS-5464:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12655886/h5464_20140715b.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10634/console |


This message was automatically generated.

 Simplify block report diff calculation
 --

 Key: HDFS-5464
 URL: https://issues.apache.org/jira/browse/HDFS-5464
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Attachments: h5464_20131105.patch, h5464_20131105b.patch, 
 h5464_20131105c.patch, h5464_20140715.patch, h5464_20140715b.patch


 The current calculation in BlockManager.reportDiff(..) is unnecessarily 
 complicated.  We could simplify the calculation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5730) Inconsistent Audit logging for HDFS APIs

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524865#comment-14524865
 ] 

Hadoop QA commented on HDFS-5730:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12623677/HDFS-5730.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10635/console |


This message was automatically generated.

 Inconsistent Audit logging for HDFS APIs
 

 Key: HDFS-5730
 URL: https://issues.apache.org/jira/browse/HDFS-5730
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.2.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Attachments: HDFS-5730.patch, HDFS-5730.patch


 When looking at the audit logs in HDFS, I am seeing some inconsistencies between 
 what was logged with audit earlier and what has been added recently.
 For more details please check the comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8306) Generate ACL and Xattr outputs in OIV XML outputs

2015-05-01 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524099#comment-14524099
 ] 

Lei (Eddy) Xu commented on HDFS-8306:
-

These test failures are not relevant; I've run them locally on my laptop.

Also, the findbugs warnings are about {{DataStreamer#LastException}}, which is 
not part of the change included in this patch.


 Generate ACL and Xattr outputs in OIV XML outputs
 -

 Key: HDFS-8306
 URL: https://issues.apache.org/jira/browse/HDFS-8306
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Attachments: HDFS-8306.000.patch


 Currently, in the {{hdfs oiv}} XML outputs, not all fields of the fsimage are 
 output. This makes inspecting the {{fsimage}} from the XML output less practical, 
 and it also prevents recovering an fsimage from the XML file.
 This JIRA adds ACLs and XAttrs to the XML output as the first step toward 
 achieving the goal described in HDFS-8061.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8306) Generate ACL and Xattr outputs in OIV XML outputs

2015-05-01 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524189#comment-14524189
 ] 

Andrew Wang commented on HDFS-8306:
---

Nice patch here Eddy. Overall it looks good, just two comments:

* I think xattr keys can be binary too, so would need to base64 them too. Check 
if this is true for me. Should unit test this case then too.
* It's possible we add more data to the fsimage. In this case, I'd really like 
to see a compile time error saying that support for this new data hasn't been 
added to OIV. This would also give me some confidence that our current OIV dump 
is complete. We can do this as a follow-on, might require some refactoring.

 Generate ACL and Xattr outputs in OIV XML outputs
 -

 Key: HDFS-8306
 URL: https://issues.apache.org/jira/browse/HDFS-8306
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Attachments: HDFS-8306.000.patch


 Currently, in the {{hdfs oiv}} XML outputs, not all fields of the fsimage are 
 output. This makes inspecting the {{fsimage}} from the XML output less practical, 
 and it also prevents recovering an fsimage from the XML file.
 This JIRA adds ACLs and XAttrs to the XML output as the first step toward 
 achieving the goal described in HDFS-8061.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7758) Retire FsDatasetSpi#getVolumes() and use FsDatasetSpi#getVolumeRefs() instead

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524360#comment-14524360
 ] 

Hadoop QA commented on HDFS-7758:
-

\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 37s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 21 new or modified test files. |
| {color:green}+1{color} | javac |   7m 30s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 38s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 44s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m 19s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  6s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 13s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 164m 29s | Tests passed in hadoop-hdfs. 
|
| | | 206m 14s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12729840/HDFS-7758.007.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 3393461 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10509/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10509/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10509/console |


This message was automatically generated.

 Retire FsDatasetSpi#getVolumes() and use FsDatasetSpi#getVolumeRefs() instead
 -

 Key: HDFS-7758
 URL: https://issues.apache.org/jira/browse/HDFS-7758
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
 Attachments: HDFS-7758.000.patch, HDFS-7758.001.patch, 
 HDFS-7758.002.patch, HDFS-7758.003.patch, HDFS-7758.004.patch, 
 HDFS-7758.005.patch, HDFS-7758.006.patch, HDFS-7758.007.patch


 HDFS-7496 introduced reference counting of the volume instances being used, to 
 prevent race conditions when hot swapping a volume.
 However, {{FsDatasetSpi#getVolumes()}} can still leak a volume instance 
 without increasing its reference count. In this JIRA, we retire 
 {{FsDatasetSpi#getVolumes()}} and propose {{FsDatasetSpi#getVolumeRefs()}} 
 and similar methods to access {{FsVolume}}. This makes sure that a consumer 
 of {{FsVolume}} always holds a correct reference count.
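 For illustration, the reference-counted access pattern this change points at, 
 assuming the {{FsVolumeReference}} / {{obtainReference()}} API introduced by 
 HDFS-7496 (sketch only, not code from the patch):
 {code}
 // The reference is held for exactly the scope of use and released on close(),
 // so the volume cannot be removed by a hot swap while it is still in use.
 try (FsVolumeReference ref = volume.obtainReference()) {
   FsVolumeSpi v = ref.getVolume();
   // ... use v ...
 }
 {code}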



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4812) add hdfsReadFully, hdfsWriteFully

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524624#comment-14524624
 ] 

Hadoop QA commented on HDFS-4812:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12582553/HDFS-4812.001.patch |
| Optional Tests | javac unit |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10539/console |


This message was automatically generated.

 add hdfsReadFully, hdfsWriteFully
 -

 Key: HDFS-4812
 URL: https://issues.apache.org/jira/browse/HDFS-4812
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-4812.001.patch


 It would be nice to have {{hdfsReadFully}} and {{hdfsWriteFully}} in libhdfs. 
  The current APIs don't guarantee that we read or write as much as we're told 
 to do.  We have readFully and writeFully in Java, but not in libhdfs at the 
 moment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4730) KeyManagerFactory.getInstance supports SunX509 & ibmX509 in HsftpFileSystem.java

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524620#comment-14524620
 ] 

Hadoop QA commented on HDFS-4730:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12579988/HDFS-4730_trunk.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10536/console |


This message was automatically generated.

 KeyManagerFactory.getInstance supports SunX509 & ibmX509 in 
 HsftpFileSystem.java
 

 Key: HDFS-4730
 URL: https://issues.apache.org/jira/browse/HDFS-4730
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang
  Labels: patch
 Attachments: HDFS-4730-v1.patch, HDFS-4730_trunk.patch, 
 HDFS-4730_trunk.patch


 In IBM Java, SunX509 should be ibmX509, so use SSLFactory.SSLCERTIFICATE to 
 load it dynamically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-923) libhdfs hdfs_read example uses hdfsRead wrongly

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524623#comment-14524623
 ] 

Hadoop QA commented on HDFS-923:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12440608/hdfs-923.patch |
| Optional Tests | javac unit |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10538/console |


This message was automatically generated.

 libhdfs hdfs_read example uses hdfsRead wrongly
 ---

 Key: HDFS-923
 URL: https://issues.apache.org/jira/browse/HDFS-923
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs
Reporter: Ruyue Ma
Assignee: Ruyue Ma
 Attachments: hdfs-923.patch


 In the examples of libhdfs, hdfs_read.c uses hdfsRead incorrectly. 
 {noformat}
 // read from the file
 tSize curSize = bufferSize;
 for (; curSize == bufferSize;) {
 curSize = hdfsRead(fs, readFile, (void*)buffer, curSize);
 }
 {noformat} 
 the condition curSize == bufferSize is problematic: hdfsRead may return fewer 
 bytes than requested even when more data remains, which can end the loop early.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4660) Duplicated checksum on DN in a recovered pipeline

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524622#comment-14524622
 ] 

Hadoop QA commented on HDFS-4660:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12576518/HDFS-4660.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10537/console |


This message was automatically generated.

 Duplicated checksum on DN in a recovered pipeline
 -

 Key: HDFS-4660
 URL: https://issues.apache.org/jira/browse/HDFS-4660
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Peng Zhang
Priority: Critical
 Attachments: HDFS-4660.patch


 pipeline DN1 -> DN2 -> DN3
 stop DN2
 pipeline added node DN4 located at 2nd position
 DN1 -> DN4 -> DN3
 recover RBW
 DN4 after recover rbw
 2013-04-01 21:02:31,570 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Recover 
 RBW replica 
 BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_1004
 2013-04-01 21:02:31,570 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: 
 Recovering ReplicaBeingWritten, blk_-9076133543772600337_1004, RBW
   getNumBytes() = 134144
   getBytesOnDisk() = 134144
   getVisibleLength()= 134144
 end at chunk (134144/512=262)
 DN3 after recover rbw
 2013-04-01 21:02:31,575 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Recover 
 RBW replica 
 BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_1004
 2013-04-01 21:02:31,575 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: 
 Recovering ReplicaBeingWritten, blk_-9076133543772600337_1004, RBW
   getNumBytes() = 134028 
   getBytesOnDisk() = 134028
   getVisibleLength()= 134028
 client send packet after recover pipeline
 offset=133632  len=1008
 DN4 after flush 
 2013-04-01 21:02:31,779 DEBUG 
 org.apache.hadoop.hdfs.server.datanode.DataNode: FlushOrsync, file 
 offset:134640; meta offset:1063
 // meta end position should be floor(134640/512)*4 + 7 == 1059, but now it is 
 1063.
 DN3 after flush
 2013-04-01 21:02:31,782 DEBUG 
 org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: 
 BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_1005, 
 type=LAST_IN_PIPELINE, downstreams=0:[]: enqueue Packet(seqno=219, 
 lastPacketInBlock=false, offsetInBlock=134640, 
 ackEnqueueNanoTime=8817026136871545)
 2013-04-01 21:02:31,782 DEBUG 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Changing 
 meta file offset of block 
 BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_1005 from 
 1055 to 1051
 2013-04-01 21:02:31,782 DEBUG 
 org.apache.hadoop.hdfs.server.datanode.DataNode: FlushOrsync, file 
 offset:134640; meta offset:1059
 After checking the meta on DN4, I found the checksum of chunk 262 is duplicated, 
 but the data is not.
 Later, after the block was finalized, DN4's scanner detected the bad block and then 
 reported it to the NN. The NN sent a command to delete this block and to replicate 
 the block from another DN in the pipeline to satisfy the replica count.
 I think this is because BlockReceiver skips data bytes already written, 
 but does not skip checksum bytes already written. And the function 
 adjustCrcFilePosition is only used for the last non-completed chunk, but
 not for this situation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4916) DataTransfer may mask the IOException during block transfering

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524870#comment-14524870
 ] 

Hadoop QA commented on HDFS-4916:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12588510/4916.v0.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10638/console |


This message was automatically generated.

 DataTransfer may mask the IOException during block transfering
 --

 Key: HDFS-4916
 URL: https://issues.apache.org/jira/browse/HDFS-4916
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.0.4-alpha, 2.0.5-alpha
Reporter: Zesheng Wu
Priority: Critical
 Attachments: 4916.v0.patch


 When a new datanode is added to the pipeline, the client will trigger the 
 block transfer process. In the current implementation, the source datanode calls 
 the run() method of DataTransfer to transfer the block; this method 
 masks IOExceptions during the transfer, so the client does not 
 realize the failure during the transfer and, as a result, 
 mistakes the failed transfer for a successful one. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4754) Add an API in the namenode to mark a datanode as stale

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524872#comment-14524872
 ] 

Hadoop QA commented on HDFS-4754:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12596540/4754.v4.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10639/console |


This message was automatically generated.

 Add an API in the namenode to mark a datanode as stale
 --

 Key: HDFS-4754
 URL: https://issues.apache.org/jira/browse/HDFS-4754
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, namenode
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Critical
 Attachments: 4754.v1.patch, 4754.v2.patch, 4754.v4.patch, 
 4754.v4.patch


 HDFS has had stale-datanode detection since HDFS-3703, based on a timeout that 
 defaults to 30s.
 There are two reasons to add an API that marks a node as stale even before the 
 timeout is reached:
  1) ZooKeeper can detect that a client is dead at any moment, so for HBase we 
 sometimes start the recovery before the node is marked stale (even with 
 reasonable settings such as stale: 20s; HBase ZK timeout: 30s).
  2) Some third parties could detect that a node is dead before the timeout, 
 hence saving us the cost of retrying. An example of such hardware is Arista, 
 presented here by [~tsuna] 
 http://tsunanet.net/~tsuna/fsf-hbase-meetup-april13.pdf, and confirmed in 
 HBASE-6290.
 As usual, even if the node is dead it can come back before the 10 minute 
 limit, so I would propose to set a time bound. The API would be
 namenode.markStale(String ipAddress, int port, long durationInMs);
 After durationInMs, the namenode would again rely only on its heartbeats to 
 decide.
 Thoughts?
 If there are no objections, and if nobody on the HDFS dev team has the time to 
 spend on it, I will give it a try for branch 2 & 3.
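 A minimal sketch of what the proposed call could look like; this RPC does not 
 exist today, and the names below simply follow the proposal above:
 {code}
 import java.io.IOException;

 // Hypothetical admin-facing interface shaped after the proposal above.
 public interface StaleNodeAdminProtocol {
   /**
    * Treat the given datanode as stale for durationInMs; afterwards the
    * namenode falls back to its normal heartbeat-based staleness decision.
    */
   void markStale(String ipAddress, int port, long durationInMs) throws IOException;
 }
 {code}
 A ZooKeeper-watching process (HBase, or a switch-level monitor) would then call 
 markStale as soon as it believes the node is dead, instead of waiting for the 
 staleness timeout.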



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6813) WebHdfsFileSystem#OffsetUrlInputStream should implement PositionedReadable with thead-safe.

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524873#comment-14524873
 ] 

Hadoop QA commented on HDFS-6813:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12659533/HDFS-6813.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10640/console |


This message was automatically generated.

 WebHdfsFileSystem#OffsetUrlInputStream should implement PositionedReadable 
 with thead-safe.
 ---

 Key: HDFS-6813
 URL: https://issues.apache.org/jira/browse/HDFS-6813
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-6813.001.patch


 The {{PositionedReadable}} contract requires that implementations of its 
 interfaces be thread-safe.
 OffsetUrlInputStream (the WebHdfsFileSystem input stream) does not implement 
 these interfaces in a thread-safe way; this JIRA is to fix that.
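 A rough sketch of the kind of change implied, not the actual WebHDFS code: the 
 positioned read is serialized on the stream's lock so concurrent callers cannot 
 interleave seek and read:
 {code}
 // Inside the input stream class (sketch only): save the position, do the
 // positioned read, and restore, all under the stream's own lock.
 @Override
 public int read(long position, byte[] buffer, int offset, int length)
     throws IOException {
   synchronized (this) {
     long oldPos = getPos();              // remember the sequential-read offset
     try {
       seek(position);
       return read(buffer, offset, length);
     } finally {
       seek(oldPos);                      // restore so sequential reads are unaffected
     }
   }
 }
 {code}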



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8249) Separate HdfsConstants into the client and the server side class

2015-05-01 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-8249:
--
Hadoop Flags: Reviewed

+1 patch looks good.

 Separate HdfsConstants into the client and the server side class
 

 Key: HDFS-8249
 URL: https://issues.apache.org/jira/browse/HDFS-8249
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-8249.000.patch, HDFS-8249.001.patch, 
 HDFS-8249.002.patch, HDFS-8249.003.patch, HDFS-8249.004.patch


 The constants in {{HdfsConstants}} are used by both the client side and the 
 server side. There are two types of constants in the class:
 1. Constants that are used internally by the servers or not part of the APIs. 
 These constants are free to evolve without breaking compatibilities. For 
 example, {{MAX_PATH_LENGTH}} is used by the NN to enforce the length of the 
 path does not go too long. Developers are free to change the name of the 
 constants and to move it around if necessary.
 1. Constants that are used by the clients, but are not part of the APIs. For 
 example, {{QUOTA_DONT_SET}} represents an unlimited quota. The value is part 
 of the wire protocol but the name is not. Developers are free to rename the 
 constants but are not allowed to change the value of the constants.
 1. Constants that are parts of the APIs. For example, {{SafeModeAction}} is 
 used in {{DistributedFileSystem}}. Changing the name / value of the constant 
 will break binary compatibility, but not source code compatibility.
 This jira proposes to separate the above three types of constants into 
 different classes:
 * Creating a new class {{HdfsConstantsServer}} to hold the first type of 
 constants.
 * Move {{HdfsConstants}} into the {{hdfs-client}} package. The work of 
 separating the second and the third types of constants will be postponed in a 
 separate jira.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8312) Trash does not descent into child directories to check for permissions

2015-05-01 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma reassigned HDFS-8312:
--

Assignee: Takanobu Asanuma

 Trash does not descent into child directories to check for permissions
 --

 Key: HDFS-8312
 URL: https://issues.apache.org/jira/browse/HDFS-8312
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS, security
Affects Versions: 2.2.0, 2.6.0
Reporter: Eric Yang
Assignee: Takanobu Asanuma

 HDFS trash does not descend into child directories to check whether the user 
 has permission to delete their files.  For example:
 Run the following command to initialize directory structure as super user:
 {code}
 hadoop fs -mkdir /BSS/level1
 hadoop fs -mkdir /BSS/level1/level2
 hadoop fs -mkdir /BSS/level1/level2/level3
 hadoop fs -put /tmp/appConfig.json /BSS/level1/level2/level3/testfile.txt
 hadoop fs -chown user1:users /BSS/level1/level2/level3/testfile.txt
 hadoop fs -chown -R user1:users /BSS/level1
 hadoop fs -chown -R 750 /BSS/level1
 hadoop fs -chmod -R 640 /BSS/level1/level2/level3/testfile.txt
 hadoop fs -chmod 775 /BSS
 {code}
 Change to a normal user called user2. 
 When trash is enabled:
 {code}
 sudo su user2 -
 hadoop fs -rm -r /BSS/level1
 15/05/01 16:51:20 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
 Deletion interval = 3600 minutes, Emptier interval = 0 minutes.
 Moved: 'hdfs://bdvs323.svl.ibm.com:9000/BSS/level1' to trash at: 
 hdfs://bdvs323.svl.ibm.com:9000/user/user2/.Trash/Current
 {code}
 When trash is disabled:
 {code}
 /opt/ibm/biginsights/IHC/bin/hadoop fs -Dfs.trash.interval=0 -rm -r 
 /BSS/level1
 15/05/01 16:58:31 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
 Deletion interval = 0 minutes, Emptier interval = 0 minutes.
 rm: Permission denied: user=user2, access=ALL, 
 inode=/BSS/level1:user1:users:drwxr-x---
 {code}
 There is an inconsistency between the trash behavior and the delete behavior.  
 When trash is enabled, files owned by user1 are deleted by user2.  It looks 
 like trash does not recursively validate whether the child directory files can 
 be removed.
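 One way to make the trash path consistent with plain delete would be to run 
 the same recursive permission check before moving anything; a rough sketch 
 (not a patch), assuming FileSystem#access from Hadoop 2.6+ is available:
 {code}
 import java.io.IOException;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsAction;

 // Sketch: before moving a tree to trash, verify the caller could actually
 // delete it; fs.access() throws AccessControlException when access is denied.
 class TrashPermissionCheck {
   static void checkRecursiveDelete(FileSystem fs, Path dir) throws IOException {
     for (FileStatus child : fs.listStatus(dir)) {
       if (child.isDirectory()) {
         checkRecursiveDelete(fs, child.getPath());   // check children first
       }
     }
     fs.access(dir, FsAction.WRITE);   // need WRITE on the dir to remove entries
   }
 }
 {code}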



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8312) Trash does not descent into child directories to check for permissions

2015-05-01 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-8312:
---
Assignee: (was: Takanobu Asanuma)

 Trash does not descent into child directories to check for permissions
 --

 Key: HDFS-8312
 URL: https://issues.apache.org/jira/browse/HDFS-8312
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS, security
Affects Versions: 2.2.0, 2.6.0
Reporter: Eric Yang

 HDFS trash does not descend into child directories to check whether the user 
 has permission to delete their files.  For example:
 Run the following command to initialize directory structure as super user:
 {code}
 hadoop fs -mkdir /BSS/level1
 hadoop fs -mkdir /BSS/level1/level2
 hadoop fs -mkdir /BSS/level1/level2/level3
 hadoop fs -put /tmp/appConfig.json /BSS/level1/level2/level3/testfile.txt
 hadoop fs -chown user1:users /BSS/level1/level2/level3/testfile.txt
 hadoop fs -chown -R user1:users /BSS/level1
 hadoop fs -chown -R 750 /BSS/level1
 hadoop fs -chmod -R 640 /BSS/level1/level2/level3/testfile.txt
 hadoop fs -chmod 775 /BSS
 {code}
 Change to a normal user called user2. 
 When trash is enabled:
 {code}
 sudo su user2 -
 hadoop fs -rm -r /BSS/level1
 15/05/01 16:51:20 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
 Deletion interval = 3600 minutes, Emptier interval = 0 minutes.
 Moved: 'hdfs://bdvs323.svl.ibm.com:9000/BSS/level1' to trash at: 
 hdfs://bdvs323.svl.ibm.com:9000/user/user2/.Trash/Current
 {code}
 When trash is disabled:
 {code}
 /opt/ibm/biginsights/IHC/bin/hadoop fs -Dfs.trash.interval=0 -rm -r 
 /BSS/level1
 15/05/01 16:58:31 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
 Deletion interval = 0 minutes, Emptier interval = 0 minutes.
 rm: Permission denied: user=user2, access=ALL, 
 inode=/BSS/level1:user1:users:drwxr-x---
 {code}
 There is an inconsistency between the trash behavior and the delete behavior.  
 When trash is enabled, files owned by user1 are deleted by user2.  It looks 
 like trash does not recursively validate whether the child directory files can 
 be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3488) BlockPoolSliceScanner#getNewBlockScanTime does not handle numbers > 31 bits properly

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524606#comment-14524606
 ] 

Hadoop QA commented on HDFS-3488:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12574374/HDFS-3488.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10534/console |


This message was automatically generated.

 BlockPoolSliceScanner#getNewBlockScanTime does not handle numbers > 31 bits 
 properly
 

 Key: HDFS-3488
 URL: https://issues.apache.org/jira/browse/HDFS-3488
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-3488.001.patch


 This code does not handle the case where period > 2**31 properly:
 {code}
 long period = Math.min(scanPeriod, 
Math.max(blockMap.size(),1) * 600 * 1000L);
 int periodInt = Math.abs((int)period);
 return System.currentTimeMillis() - scanPeriod + 
 DFSUtil.getRandom().nextInt(periodInt);
 {code}
 So, for example, if period = 0x1, we'll map that to 0, and so forth.
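 A hedged sketch of one way to keep the random offset well defined for large 
 periods, clamping before the cast instead of relying on Math.abs (illustrative, 
 not the committed fix):
 {code}
 // Sketch only: clamp the period to the int range before casting so
 // nextInt() always receives a positive, meaningful bound.
 long period = Math.min(scanPeriod,
     Math.max(blockMap.size(), 1) * 600L * 1000L);
 int periodInt = (int) Math.min(period, Integer.MAX_VALUE);
 if (periodInt <= 0) {
   periodInt = 1;                    // nextInt requires a strictly positive bound
 }
 return System.currentTimeMillis() - scanPeriod
     + DFSUtil.getRandom().nextInt(periodInt);
 {code}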



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3627) OfflineImageViewer oiv Indented processor prints out the Java class name in the DELEGATION_KEY field

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524609#comment-14524609
 ] 

Hadoop QA commented on HDFS-3627:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12574930/HDFS-3627.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10535/console |


This message was automatically generated.

 OfflineImageViewer oiv Indented processor prints out the Java class name in 
 the DELEGATION_KEY field
 

 Key: HDFS-3627
 URL: https://issues.apache.org/jira/browse/HDFS-3627
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Ravi Prakash
Priority: Minor
 Attachments: HDFS-3627.patch, HDFS-3627.patch, HDFS-3627.patch, 
 HDFS-3627.patch, HDFS-3627.patch, HDFS-3627.patch


 Instead of the contents of the delegation key, this is printed out:
 DELEGATION_KEY = 
 org.apache.hadoop.security.token.delegation.DelegationKey@1e2ca7
 DELEGATION_KEY = 
 org.apache.hadoop.security.token.delegation.DelegationKey@105bd58
 DELEGATION_KEY = 
 org.apache.hadoop.security.token.delegation.DelegationKey@1d1e730
 DELEGATION_KEY = 
 org.apache.hadoop.security.token.delegation.DelegationKey@1a116c9
 DELEGATION_KEY = 
 org.apache.hadoop.security.token.delegation.DelegationKey@df1832
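 The output above is just the default Object#toString (ClassName@hashCode). A 
 minimal illustration of printing the key's fields instead; getKeyId() and 
 getExpiryDate() are assumed to be the relevant DelegationKey accessors:
 {code}
 // Illustration only: format the delegation key's fields rather than the
 // default Object#toString.
 String formatDelegationKey(DelegationKey key) {
   return "DELEGATION_KEY = id=" + key.getKeyId()
       + ", expiryDate=" + key.getExpiryDate();
 }
 {code}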



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5319) Links resolving either from active/standby should be same (example clicking on datanodes from Standby)

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524809#comment-14524809
 ] 

Hadoop QA commented on HDFS-5319:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12607249/HDFS-5319-v1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10614/console |


This message was automatically generated.

 Links resolving either from active/standby should be same (example clicking 
 on datanodes from Standby)
 --

 Key: HDFS-5319
 URL: https://issues.apache.org/jira/browse/HDFS-5319
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.5-alpha
Reporter: Siqi Li
Assignee: Siqi Li
Priority: Minor
 Attachments: HDFS-5319-v1.patch


 Clicking live nodes from the standby namenode throws the exception "Operation 
 category READ is not supported in state standby".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6093) Expose more caching information for debugging by users

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524812#comment-14524812
 ] 

Hadoop QA commented on HDFS-6093:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12635714/hdfs-6093-4.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10615/console |


This message was automatically generated.

 Expose more caching information for debugging by users
 --

 Key: HDFS-6093
 URL: https://issues.apache.org/jira/browse/HDFS-6093
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: caching
Affects Versions: 2.4.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-6093-1.patch, hdfs-6093-2.patch, hdfs-6093-3.patch, 
 hdfs-6093-4.patch


 When users submit a new cache directive, it's unclear if the NN has 
 recognized it and is actively trying to cache it, or if it's hung for some 
 other reason. It'd be nice to expose a pending caching/uncaching count the 
 same way we expose pending replication work.
 It'd also be nice to display the aggregate cache capacity and usage in 
 dfsadmin -report, since we already have it as a metric and expose it 
 per-DN in report output.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6092) DistributedFileSystem#getCanonicalServiceName() and DistributedFileSystem#getUri() may return inconsistent results w.r.t. port

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524799#comment-14524799
 ] 

Hadoop QA commented on HDFS-6092:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12634494/HDFS-6092-v4.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10610/console |


This message was automatically generated.

 DistributedFileSystem#getCanonicalServiceName() and 
 DistributedFileSystem#getUri() may return inconsistent results w.r.t. port
 --

 Key: HDFS-6092
 URL: https://issues.apache.org/jira/browse/HDFS-6092
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Ted Yu
 Attachments: HDFS-6092-v4.patch, haosdent-HDFS-6092-v2.patch, 
 haosdent-HDFS-6092.patch, hdfs-6092-v1.txt, hdfs-6092-v2.txt, hdfs-6092-v3.txt


 I discovered this when working on HBASE-10717
 Here is sample code to reproduce the problem:
 {code}
 Path desPath = new Path("hdfs://127.0.0.1/");
 FileSystem desFs = desPath.getFileSystem(conf);
 
 String s = desFs.getCanonicalServiceName();
 URI uri = desFs.getUri();
 {code}
 Canonical name string contains the default port - 8020
 But uri doesn't contain port.
 This would result in the following exception:
 {code}
 testIsSameHdfs(org.apache.hadoop.hbase.util.TestFSHDFSUtils)  Time elapsed: 
 0.001 sec   ERROR!
 java.lang.IllegalArgumentException: port out of range:-1
 at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143)
 at java.net.InetSocketAddress.&lt;init&gt;(InetSocketAddress.java:224)
 at 
 org.apache.hadoop.hbase.util.FSHDFSUtils.getNNAddresses(FSHDFSUtils.java:88)
 {code}
 Thanks to Brando Li who helped debug this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6165) hdfs dfs -rm -r and hdfs dfs -rmdir commands can't remove empty directory

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524804#comment-14524804
 ] 

Hadoop QA commented on HDFS-6165:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12642655/HDFS-6165.006.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10612/console |


This message was automatically generated.

 hdfs dfs -rm -r and hdfs dfs -rmdir commands can't remove empty directory 
 --

 Key: HDFS-6165
 URL: https://issues.apache.org/jira/browse/HDFS-6165
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.3.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
Priority: Minor
 Attachments: HDFS-6165.001.patch, HDFS-6165.002.patch, 
 HDFS-6165.003.patch, HDFS-6165.004.patch, HDFS-6165.004.patch, 
 HDFS-6165.005.patch, HDFS-6165.006.patch, HDFS-6165.006.patch


 Given a directory owned by user A with WRITE permission containing an empty 
 directory owned by user B, it is not possible to delete user B's empty 
 directory with either hdfs dfs -rm -r or hdfs dfs -rmdir, because the 
 current implementation requires FULL permission on the empty directory and 
 throws an exception. 
 On Linux, by contrast, the rm -r and rmdir commands can remove an empty 
 directory as long as the parent directory has WRITE permission (and the prefix 
 components of the path have EXECUTE permission). On the tested OSes, some 
 prompt the user for confirmation and some don't.
 Here's a reproduction:
 {code}
 [root@vm01 ~]# hdfs dfs -ls /user/
 Found 4 items
 drwxr-xr-x   - userabc users   0 2013-05-03 01:55 /user/userabc
 drwxr-xr-x   - hdfssupergroup  0 2013-05-03 00:28 /user/hdfs
 drwxrwxrwx   - mapred  hadoop  0 2013-05-03 00:13 /user/history
 drwxr-xr-x   - hdfssupergroup  0 2013-04-14 16:46 /user/hive
 [root@vm01 ~]# hdfs dfs -ls /user/userabc
 Found 8 items
 drwx--   - userabc users  0 2013-05-02 17:00 /user/userabc/.Trash
 drwxr-xr-x   - userabc users  0 2013-05-03 01:34 /user/userabc/.cm
 drwx--   - userabc users  0 2013-05-03 01:06 
 /user/userabc/.staging
 drwxr-xr-x   - userabc users  0 2013-04-14 18:31 /user/userabc/apps
 drwxr-xr-x   - userabc users  0 2013-04-30 18:05 /user/userabc/ds
 drwxr-xr-x   - hdfsusers  0 2013-05-03 01:54 /user/userabc/foo
 drwxr-xr-x   - userabc users  0 2013-04-30 16:18 
 /user/userabc/maven_source
 drwxr-xr-x   - hdfsusers  0 2013-05-03 01:40 
 /user/userabc/test-restore
 [root@vm01 ~]# hdfs dfs -ls /user/userabc/foo/
 [root@vm01 ~]# sudo -u userabc hdfs dfs -rm -r -skipTrash /user/userabc/foo
 rm: Permission denied: user=userabc, access=ALL, 
 inode=/user/userabc/foo:hdfs:users:drwxr-xr-x
 {code}
 The super user can delete the directory.
 {code}
 [root@vm01 ~]# sudo -u hdfs hdfs dfs -rm -r -skipTrash /user/userabc/foo
 Deleted /user/userabc/foo
 {code}
 The same is not true for files, however. They have the correct behavior.
 {code}
 [root@vm01 ~]# sudo -u hdfs hdfs dfs -touchz /user/userabc/foo-file
 [root@vm01 ~]# hdfs dfs -ls /user/userabc/
 Found 8 items
 drwx--   - userabc users  0 2013-05-02 17:00 /user/userabc/.Trash
 drwxr-xr-x   - userabc users  0 2013-05-03 01:34 /user/userabc/.cm
 drwx--   - userabc users  0 2013-05-03 01:06 
 /user/userabc/.staging
 drwxr-xr-x   - userabc users  0 2013-04-14 18:31 /user/userabc/apps
 drwxr-xr-x   - userabc users  0 2013-04-30 18:05 /user/userabc/ds
 -rw-r--r--   1 hdfsusers  0 2013-05-03 02:11 
 /user/userabc/foo-file
 drwxr-xr-x   - userabc users  0 2013-04-30 16:18 
 /user/userabc/maven_source
 drwxr-xr-x   - hdfsusers  0 2013-05-03 01:40 
 /user/userabc/test-restore
 [root@vm01 ~]# sudo -u userabc hdfs dfs -rm -skipTrash /user/userabc/foo-file
 Deleted /user/userabc/foo-file
 {code}
 Using hdfs dfs -rmdir command:
 {code}
 bash-4.1$ hadoop fs -lsr /
 lsr: DEPRECATED: Please use 'ls -R' instead.
 drwxr-xr-x   - hdfs supergroup  0 2014-03-25 16:29 /user
 drwxr-xr-x   - hdfs   supergroup  0 2014-03-25 16:28 /user/hdfs
 drwxr-xr-x   - usrabc users   0 2014-03-28 23:39 /user/usrabc
 drwxr-xr-x   - abcabc 0 2014-03-28 23:39 
 /user/usrabc/foo-empty1
 [root@vm01 usrabc]# su usrabc
 [usrabc@vm01 ~]$ hdfs dfs 

[jira] [Commented] (HDFS-5951) Provide diagnosis information in the Web UI

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524806#comment-14524806
 ] 

Hadoop QA commented on HDFS-5951:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12628903/HDFS-5951.000.patch |
| Optional Tests |  |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10613/console |


This message was automatically generated.

 Provide diagnosis information in the Web UI
 ---

 Key: HDFS-5951
 URL: https://issues.apache.org/jira/browse/HDFS-5951
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5951.000.patch, diagnosis-failure.png, 
 diagnosis-succeed.png


 HDFS should provide operation statistics in its UI. It can go one step 
 further by leveraging the information to diagnose common problems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5301) adding block pool % for each namespace on federated namenode webUI

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524801#comment-14524801
 ] 

Hadoop QA commented on HDFS-5301:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12606846/HDFS-5301-v1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10611/console |


This message was automatically generated.

 adding block pool % for each namespace on federated namenode webUI
 --

 Key: HDFS-5301
 URL: https://issues.apache.org/jira/browse/HDFS-5301
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.5-alpha
Reporter: Siqi Li
Assignee: Siqi Li
Priority: Minor
 Attachments: HDFS-5301-v1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6193) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524797#comment-14524797
 ] 

Hadoop QA commented on HDFS-6193:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12643130/HDFS-6193-branch-2.4.v02.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10609/console |


This message was automatically generated.

 HftpFileSystem open should throw FileNotFoundException for non-existing paths
 -

 Key: HDFS-6193
 URL: https://issues.apache.org/jira/browse/HDFS-6193
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
 Attachments: HDFS-6193-branch-2.4.0.v01.patch, 
 HDFS-6193-branch-2.4.v02.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; the error is deferred 
 until the next read. That is counterintuitive and not how the local FS or 
 HDFS work. In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of code that is broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.
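 A sketch of the client-side behaviour being asked for: check existence up 
 front so open() fails fast, in the POSIX ENOENT spirit (illustrative; the real 
 fix lives inside WebHdfsFileSystem/HftpFileSystem):
 {code}
 // Sketch only: surface FileNotFoundException at open() time instead of
 // deferring the error to the first read().
 public FSDataInputStream open(Path f, int bufferSize) throws IOException {
   getFileStatus(f);               // throws FileNotFoundException if f is missing
   return doOpen(f, bufferSize);   // doOpen: placeholder for the lazy HTTP open
 }
 {code}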



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6596) Improve InputStream when read spans two blocks

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524795#comment-14524795
 ] 

Hadoop QA commented on HDFS-6596:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12653150/HDFS-6596.3.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10608/console |


This message was automatically generated.

 Improve InputStream when read spans two blocks
 --

 Key: HDFS-6596
 URL: https://issues.apache.org/jira/browse/HDFS-6596
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.4.0
Reporter: Zesheng Wu
Assignee: Zesheng Wu
 Attachments: HDFS-6596.1.patch, HDFS-6596.2.patch, HDFS-6596.2.patch, 
 HDFS-6596.2.patch, HDFS-6596.3.patch, HDFS-6596.3.patch


 In the current implementation of DFSInputStream, read(buffer, offset, length) 
 is implemented as follows:
 {code}
 int realLen = (int) Math.min(len, (blockEnd - pos + 1L));
 if (locatedBlocks.isLastBlockComplete()) {
   realLen = (int) Math.min(realLen, locatedBlocks.getFileLength());
 }
 int result = readBuffer(strategy, off, realLen, corruptedBlockMap);
 {code}
 From the above code, we can conclude that the read will return at most 
 (blockEnd - pos + 1) bytes. As a result, when a read spans two blocks, the 
 caller must call read() a second time to complete the request, and must wait a 
 second time to acquire the DFSInputStream lock (read() is synchronized on the 
 DFSInputStream). For latency-sensitive applications such as HBase, this 
 becomes a latency pain point under heavy contention. So we propose looping 
 internally in read() to do a best-effort read.
 The current implementation of pread (read(position, buffer, offset, 
 length)) already loops internally to do a best-effort read, so we can refactor 
 to support the same on normal reads.
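 A rough sketch of the internal loop being proposed, in the same spirit as the 
 existing pread path (names simplified, not the actual patch):
 {code}
 // Sketch only: keep reading across block boundaries until the request is
 // satisfied or EOF, so callers don't have to re-acquire the lock per block.
 public synchronized int read(byte[] buf, int off, int len) throws IOException {
   int total = 0;
   while (total < len) {
     int n = readOnce(buf, off + total, len - total);  // readOnce: at most one block
     if (n < 0) {                                      // EOF
       return (total == 0) ? -1 : total;
     }
     total += n;
     if (n == 0) {
       break;                                          // nothing more right now
     }
   }
   return total;
 }
 {code}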



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5177) blocksScheduled count should be decremented for abandoned blocks

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524815#comment-14524815
 ] 

Hadoop QA commented on HDFS-5177:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12616329/HDFS-5177.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10616/console |


This message was automatically generated.

 blocksScheduled  count should be decremented for abandoned blocks
 -

 Key: HDFS-5177
 URL: https://issues.apache.org/jira/browse/HDFS-5177
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-5177.patch, HDFS-5177.patch, HDFS-5177.patch


 DatanodeDescriptor#incBlocksScheduled() is called for all datanodes of the 
 block on each allocation, but the count should also be decremented for 
 abandoned blocks.
 When one of the datanodes is down and is allocated for the block along with 
 other live datanodes, the block will be abandoned, but the scheduled count 
 makes the other live datanodes look loaded even though they may not be.
 This scheduled count is only rolled every 20 minutes.
 The problem shows up when the rate of file creation is high: due to the 
 inflated scheduled count, the local datanode may be skipped for writes, and 
 sometimes writes can even fail on small clusters.
 So we need to decrement the unnecessary count when a block is abandoned.
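 A minimal sketch of the intended accounting (not the attached patch), assuming 
 a decrement counterpart to incBlocksScheduled() exists or is added:
 {code}
 // Sketch only: when a block is abandoned, undo the scheduled-block count on
 // every datanode that had been chosen as a target for it.
 void onBlockAbandoned(DatanodeDescriptor[] chosenTargets) {
   for (DatanodeDescriptor dn : chosenTargets) {
     dn.decBlocksScheduled();     // assumed counterpart to incBlocksScheduled()
   }
 }
 {code}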



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5699) NPE is thrown when DN is restarted while job execution

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524831#comment-14524831
 ] 

Hadoop QA commented on HDFS-5699:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12620331/HDFS-5699-0001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10624/console |


This message was automatically generated.

 NPE is thrown when DN is restarted while job execution
 --

 Key: HDFS-5699
 URL: https://issues.apache.org/jira/browse/HDFS-5699
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: sathish
Assignee: sathish
Priority: Minor
 Attachments: HDFS-5699-0001.patch


 1. Run jobs 
 2. Restart one DN 
 3. After the DN comes up, it should not throw an NPE in the DN logs 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8248) Store INodeId instead of the INodeFile object in BlockInfoContiguous

2015-05-01 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524169#comment-14524169
 ] 

Jing Zhao commented on HDFS-8248:
-

# Please add javadoc for {{BlockInfoContiguous#bcId}}
# Currently we only record the replication factor in INodeFile. For a file in 
snapshots, its block replication factor is actually the largest replication 
factor in (current state + snapshots). The replication factor recorded in 
BlockInfo also needs to follow this logic. Also note that the BlockInfo's 
replication factor may need to be updated when a snapshot is deleted.
# We can wrap the following two lines into a single function inside of 
BlockManager (a rough sketch of such a helper follows this list). 
{code}
long bcId = getBlockCollectionId(block);
bc = namesystem.getBlockCollection(bcId);
{code}
# Also need to fix the format in a couple of places in BlockManager.
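A rough sketch of the helper suggested in item 3, with the name and visibility 
assumed:
{code}
// Sketch of the suggested BlockManager helper: resolve a block to its owning
// BlockCollection via the stored inode id.
private BlockCollection getBlockCollection(BlockInfoContiguous block) {
  long bcId = getBlockCollectionId(block);
  return namesystem.getBlockCollection(bcId);
}
{code}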

 Store INodeId instead of the INodeFile object in BlockInfoContiguous
 

 Key: HDFS-8248
 URL: https://issues.apache.org/jira/browse/HDFS-8248
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-8248.000.patch, HDFS-8248.001.patch, 
 HDFS-8248.002.patch, HDFS-8248.003.patch


 Currently the namespace and the block manager are tightly coupled together. 
 There are two couplings in terms of implementation:
 1. The {{BlockInfoContiguous}} stores a reference of the {{INodeFile}} that 
 owns the block, so that the block manager can look up the corresponding file 
 when replicating blocks, recovering from pipeline failures, etc.
 1. The {{INodeFile}} stores {{BlockInfoContiguous}} objects that the file 
 owns.
 Decoupling the namespace and the block manager allows the BM to be separated 
 out from the Java heap or even as a standalone process. This jira proposes to 
 remove the first coupling by storing the id of the inode instead of the 
 object reference of {{INodeFile}} in the {{BlockInfoContiguous}} class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7949) WebImageViewer need support file size calculation with striped blocks

2015-05-01 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524237#comment-14524237
 ] 

Zhe Zhang commented on HDFS-7949:
-

Seems Jenkins is too busy. 

I read the patch again and it looks good. +1, and I just committed it. Thanks 
Rakesh for the contribution! I'll start our branch Jenkins just to make sure.

 WebImageViewer need support file size calculation with striped blocks
 -

 Key: HDFS-7949
 URL: https://issues.apache.org/jira/browse/HDFS-7949
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Hui Zheng
Assignee: Rakesh R
 Attachments: HDFS-7949-001.patch, HDFS-7949-002.patch, 
 HDFS-7949-003.patch, HDFS-7949-004.patch, HDFS-7949-005.patch, 
 HDFS-7949-006.patch, HDFS-7949-007.patch, HDFS-7949-HDFS-7285.08.patch, 
 HDFS-7949-HDFS-7285.08.patch


 The file size calculation should be changed when the blocks of the file are 
 striped in WebImageViewer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7949) WebImageViewer need support file size calculation with striped blocks

2015-05-01 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-7949:

   Resolution: Fixed
Fix Version/s: HDFS-7285
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

 WebImageViewer need support file size calculation with striped blocks
 -

 Key: HDFS-7949
 URL: https://issues.apache.org/jira/browse/HDFS-7949
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Hui Zheng
Assignee: Rakesh R
 Fix For: HDFS-7285

 Attachments: HDFS-7949-001.patch, HDFS-7949-002.patch, 
 HDFS-7949-003.patch, HDFS-7949-004.patch, HDFS-7949-005.patch, 
 HDFS-7949-006.patch, HDFS-7949-007.patch, HDFS-7949-HDFS-7285.08.patch, 
 HDFS-7949-HDFS-7285.08.patch


 The file size calculation should be changed when the blocks of the file are 
 striped in WebImageViewer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6757) Simplify lease manager with INodeID

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524512#comment-14524512
 ] 

Hadoop QA commented on HDFS-6757:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 42s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 9 new or modified test files. |
| {color:green}+1{color} | javac |   7m 28s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 40s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 16s | The applied patch generated  
22 new checkstyle issues (total was 904, now 907). |
| {color:red}-1{color} | whitespace |   0m 19s | The patch has 5  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  5s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 15s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 165m 24s | Tests passed in hadoop-hdfs. 
|
| | | 208m 47s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12729861/HDFS-6757.013.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 6f541ed |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/10513/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10513/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10513/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10513/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10513/console |


This message was automatically generated.

 Simplify lease manager with INodeID
 ---

 Key: HDFS-6757
 URL: https://issues.apache.org/jira/browse/HDFS-6757
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-6757.000.patch, HDFS-6757.001.patch, 
 HDFS-6757.002.patch, HDFS-6757.003.patch, HDFS-6757.004.patch, 
 HDFS-6757.005.patch, HDFS-6757.006.patch, HDFS-6757.007.patch, 
 HDFS-6757.008.patch, HDFS-6757.009.patch, HDFS-6757.010.patch, 
 HDFS-6757.011.patch, HDFS-6757.012.patch, HDFS-6757.013.patch


 Currently the lease manager records leases based on path instead of inode 
 ids. Therefore, the lease manager needs to carefully keep track of the path 
 of active leases during renames and deletes. This can be a non-trivial task.
 This jira proposes to simplify the logic by tracking leases using inodeids 
 instead of paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8249) Separate HdfsConstants into the client and the server side class

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524528#comment-14524528
 ] 

Hadoop QA commented on HDFS-8249:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 59s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 37 new or modified test files. |
| {color:green}+1{color} | javac |   7m 46s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 51s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 18s | The applied patch generated  6 
new checkstyle issues (total was 156, now 160). |
| {color:red}-1{color} | whitespace |   3m 11s | The patch has 13  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 40s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   5m  3s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 19s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 164m 30s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 15s | Tests passed in 
hadoop-hdfs-client. |
| {color:green}+1{color} | hdfs tests |   1m 43s | Tests passed in 
hadoop-hdfs-nfs. |
| {color:green}+1{color} | hdfs tests |   3m 54s | Tests passed in bkjournal. |
| | | 219m 51s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.TestFileTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12729877/HDFS-8249.004.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 6f541ed |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/10514/artifact/patchprocess/diffcheckstylehadoop-hdfs-client.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10514/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10514/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10514/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| hadoop-hdfs-nfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10514/artifact/patchprocess/testrun_hadoop-hdfs-nfs.txt
 |
| bkjournal test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10514/artifact/patchprocess/testrun_bkjournal.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10514/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10514/console |


This message was automatically generated.

 Separate HdfsConstants into the client and the server side class
 

 Key: HDFS-8249
 URL: https://issues.apache.org/jira/browse/HDFS-8249
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-8249.000.patch, HDFS-8249.001.patch, 
 HDFS-8249.002.patch, HDFS-8249.003.patch, HDFS-8249.004.patch


 The constants in {{HdfsConstants}} are used by both the client side and the 
 server side. There are two types of constants in the class:
 1. Constants that are used internally by the servers or not part of the APIs. 
 These constants are free to evolve without breaking compatibilities. For 
 example, {{MAX_PATH_LENGTH}} is used by the NN to enforce the length of the 
 path does not go too long. Developers are free to change the name of the 
 constants and to move it around if necessary.
 1. Constants that are used by the clients, but are not part of the APIs. For 
 example, {{QUOTA_DONT_SET}} represents an unlimited quota. The value is part 
 of the wire protocol but the name is not. Developers are free to rename the 
 constants but are not allowed to change the value of the constants.
 1. Constants that are parts of the APIs. For example, {{SafeModeAction}} is 
 used in {{DistributedFileSystem}}. Changing the name / value of the constant 
 will break binary compatibility, but not 

[jira] [Commented] (HDFS-3384) DataStreamer thread should be closed immediatly when failed to setup a PipelineForAppendOrRecovery

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524605#comment-14524605
 ] 

Hadoop QA commented on HDFS-3384:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12530164/HDFS-3384_2.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10533/console |


This message was automatically generated.

 DataStreamer thread should be closed immediatly when failed to setup a 
 PipelineForAppendOrRecovery
 --

 Key: HDFS-3384
 URL: https://issues.apache.org/jira/browse/HDFS-3384
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.0.0-alpha
Reporter: Brahma Reddy Battula
Assignee: amith
 Attachments: HDFS-3384.patch, HDFS-3384_2.patch, HDFS-3384_2.patch, 
 HDFS-3384_2.patch


 Scenario:
 =
 write a file
 corrupt block manually
 call append..
 {noformat}
 2012-04-19 09:33:10,776 INFO  hdfs.DFSClient 
 (DFSOutputStream.java:createBlockOutputStream(1059)) - Exception in 
 createBlockOutputStream
 java.io.EOFException: Premature EOF: no length prefix available
   at 
 org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:162)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1039)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:939)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
 2012-04-19 09:33:10,807 WARN  hdfs.DFSClient (DFSOutputStream.java:run(549)) 
 - DataStreamer Exception
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:510)
 2012-04-19 09:33:10,807 WARN  hdfs.DFSClient 
 (DFSOutputStream.java:hflush(1511)) - Error while syncing
 java.io.IOException: All datanodes 10.18.40.20:50010 are bad. Aborting...
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:908)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
 java.io.IOException: All datanodes 10.18.40.20:50010 are bad. Aborting...
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:908)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4406) read file failure,when the file is not close in secret mode

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524600#comment-14524600
 ] 

Hadoop QA commented on HDFS-4406:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12565451/HDFS-4404.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10532/console |


This message was automatically generated.

 read file failure,when the file is not close in secret mode
 ---

 Key: HDFS-4406
 URL: https://issues.apache.org/jira/browse/HDFS-4406
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: liaowenrui
Priority: Critical
 Attachments: BlockTokenSecretManager.patch, HDFS-4404.patch


 2013-01-14 18:27:06,216 WARN SecurityLogger.org.apache.hadoop.ipc.Server: 
 Auth failed for 160.172.0.11:45176:null
 2013-01-14 18:27:06,217 INFO org.apache.hadoop.ipc.Server: IPC Server 
 listener on 50020: readAndProcess threw exception 
 javax.security.sasl.SaslException: DIGEST-MD5: IO error acquiring password 
 [Caused by org.apache.hadoop.security.token.SecretManager$InvalidToken: Can't 
 re-compute password for block_token_identifier (expiryDate=1358195226206, 
 keyId=1639335405, userId=hbase, blockPoolId=BP-myhacluster-25656, 
 blockId=-6489888518203477527, access modes=[READ]), since the required block 
 key (keyID=1639335405) doesn't exist.] from client 160.172.0.11. Count of 
 bytes read: 0
 javax.security.sasl.SaslException: DIGEST-MD5: IO error acquiring password 
 [Caused by org.apache.hadoop.security.token.SecretManager$InvalidToken: Can't 
 re-compute password for block_token_identifier (expiryDate=1358195226206, 
 keyId=1639335405, userId=hbase, blockPoolId=BP-myhacluster-25656, 
 blockId=-6489888518203477527, access modes=[READ]), since the required block 
 key (keyID=1639335405) doesn't exist.]
 at 
 com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java:577)
 at 
 com.sun.security.sasl.digest.DigestMD5Server.evaluateResponse(DigestMD5Server.java:226)
 at 
 org.apache.hadoop.ipc.Server$Connection.saslReadAndProcess(Server.java:1199)
 at 
 org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1393)
 at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:710)
 at 
 org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:509)
 at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:484)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3272) Make it possible to state MIME type for a webhdfs OPEN operation's result

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524599#comment-14524599
 ] 

Hadoop QA commented on HDFS-3272:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12563833/HDFS-3272.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10531/console |


This message was automatically generated.

 Make it possible to state MIME type for a webhdfs OPEN operation's result
 -

 Key: HDFS-3272
 URL: https://issues.apache.org/jira/browse/HDFS-3272
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Affects Versions: 1.0.1, 2.0.2-alpha
Reporter: Steve Loughran
Priority: Minor
 Attachments: HDFS-3272.patch


 When you do a GET from the browser with webhdfs, you get the file, but it 
 comes over as a binary because the browser doesn't know what type it is. Having 
 a mime mapping table and the like would be one solution, but another is simply 
 to add a {{mime}} query parameter that would provide a string to be reflected 
 back to the caller as the Content-Type header in the HTTP response.
 e.g.
 {code}
 http://ranier:50070/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
  
 {code}
 would generate a 307 redirect to the datanode, with the 
 {code}
 http://dn1:50075/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
  
 {code}
 which would then generate the result
 {code}
 200 OK
 Content-Type:text/csv
 GATE4,eb8bd736445f415e18886ba037f84829,55000,2007-01-14,14:01:54,
 GATE4,ec58edcce1049fa665446dc1fa690638,8030803000,2007-01-14,13:52:31,
 ...
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3671) ByteRangeInputStream shouldn't require the content length header be present

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524596#comment-14524596
 ] 

Hadoop QA commented on HDFS-3671:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12540609/h3671_20120812.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10530/console |


This message was automatically generated.

 ByteRangeInputStream shouldn't require the content length header be present
 ---

 Key: HDFS-3671
 URL: https://issues.apache.org/jira/browse/HDFS-3671
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Priority: Critical
 Attachments: h3671_20120717.patch, h3671_20120719.patch, 
 h3671_20120812.patch


 Per HDFS-3318 the content length header check breaks distcp compatibility 
 with previous releases (0.20.2 and earlier, and 0.21). Like branch-1 this 
 check should be lenient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4311) repair test org.apache.hadoop.fs.http.server.TestHttpFSWithKerberos

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524727#comment-14524727
 ] 

Hadoop QA commented on HDFS-4311:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12611109/HDFS-4311--N5.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10577/console |


This message was automatically generated.

 repair test org.apache.hadoop.fs.http.server.TestHttpFSWithKerberos
 ---

 Key: HDFS-4311
 URL: https://issues.apache.org/jira/browse/HDFS-4311
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HDFS-4311--N1.patch, HDFS-4311--N2.patch, 
 HDFS-4311--N3.patch, HDFS-4311--N4.patch, HDFS-4311--N5.patch, HDFS-4311.patch


 Some of the test cases in this test class are failing because they are 
 affected by static state changed by the previous test cases. Namely this is 
 the static field org.apache.hadoop.security.UserGroupInformation.loginUser .
 The suggested patch solves this problem.
 Besides, the following improvements are done:
 1) parametrized the user principal and keytab values via system properties;
 2) shutdown of the Jetty server and the minicluster between the test cases is 
 added to make the test methods independent of each other.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3215) Block size is logging as zero Even blockrecevied command received by DN

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524734#comment-14524734
 ] 

Hadoop QA commented on HDFS-3215:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12614783/HDFS-3215.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10580/console |


This message was automatically generated.

 Block size is logging as zero Even blockrecevied command received by DN 
 

 Key: HDFS-3215
 URL: https://issues.apache.org/jira/browse/HDFS-3215
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Brahma Reddy Battula
Assignee: Shinichi Yamashita
Priority: Minor
 Attachments: HDFS-3215.patch, HDFS-3215.patch


 Scenario 1
 ==
 Start NN and DN.
 Write a file.
 Block size is logged as zero even though the blockReceived command was received by the DN.
  *NN log*
 2012-03-14 20:23:40,541 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
 NameSystem.allocateBlock: /hadoop-create-user.sh._COPYING_. 
 BP-1166515020-10.18.40.24-1331736264353 
 blk_1264419582929433995_1002{blockUCState=UNDER_CONSTRUCTION, 
 primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[XXX:50010|RBW]]}
 2012-03-14 20:24:26,357 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
 addStoredBlock: blockMap updated: XXX:50010 is added to 
 blk_1264419582929433995_1002{blockUCState=UNDER_CONSTRUCTION, 
 primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[XXX:50010|RBW]]} 
 size 0
  *DN log* 
 2012-03-14 20:24:17,519 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Receiving block 
 BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002 src: 
 /XXX:53141 dest: /XXX:50010
 2012-03-14 20:24:26,517 INFO 
 org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
 /XXX:53141, dest: /XXX:50010, bytes: 512, op: HDFS_WRITE, cliID: 
 DFSClient_NONMAPREDUCE_1612873957_1, offset: 0, srvID: 
 DS-1639667928-XXX-50010-1331736284942, blockid: 
 BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002, duration: 
 1286482503
 2012-03-14 20:24:26,517 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
 PacketResponder: 
 BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002, 
 type=LAST_IN_PIPELINE, downstreams=0:[] terminating
 2012-03-14 20:24:31,533 INFO 
 org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification 
 succeeded for BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5654) Add lock context support to FSNamesystemLock

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524740#comment-14524740
 ] 

Hadoop QA commented on HDFS-5654:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12618264/HDFS-5654.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10584/console |


This message was automatically generated.

 Add lock context support to FSNamesystemLock
 

 Key: HDFS-5654
 URL: https://issues.apache.org/jira/browse/HDFS-5654
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HDFS-5654.patch


 Supporting new methods of locking the namesystem, ie. coarse or fine-grain, 
 needs an api to manage the locks (or any object conforming to Lock interface) 
 held during access to the namespace.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5263) Delegation token is not created generateNodeDataHeader method of NamenodeJspHelper$NodeListJsp

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524716#comment-14524716
 ] 

Hadoop QA commented on HDFS-5263:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  1s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12606158/HDFS-5263-rev1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10570/console |


This message was automatically generated.

 Delegation token is not created generateNodeDataHeader method of 
 NamenodeJspHelper$NodeListJsp
 --

 Key: HDFS-5263
 URL: https://issues.apache.org/jira/browse/HDFS-5263
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode, webhdfs
Reporter: Vasu Mariyala
 Attachments: HDFS-5263-rev1.patch, HDFS-5263.patch


 When Kerberos authentication is enabled, we are unable to browse to the data 
 nodes using (Name node web page -> Live Nodes -> select any of the data 
 nodes). The reason behind this is that the delegation token is not provided as 
 part of the url in the method (generateNodeDataHeader method of NodeListJsp)
 {code}
   String url = HttpConfig.getSchemePrefix() + d.getHostName() + ":"
   + d.getInfoPort()
   + "/browseDirectory.jsp?namenodeInfoPort=" + nnHttpPort + "&dir="
   + URLEncoder.encode("/", "UTF-8")
   + JspHelper.getUrlParam(JspHelper.NAMENODE_ADDRESS, nnaddr);
 {code}
 But browsing the file system using the name node web page -> Browse the file 
 system -> any directory works fine, as the redirectToRandomDataNode 
 method of NamenodeJspHelper creates the delegation token:
 {code}
 redirectLocation = HttpConfig.getSchemePrefix() + fqdn + ":" + redirectPort
 + "/browseDirectory.jsp?namenodeInfoPort="
 + nn.getHttpAddress().getPort() + "&dir=/"
 + (tokenString == null ? "" :
    JspHelper.getDelegationTokenUrlParam(tokenString))
 + JspHelper.getUrlParam(JspHelper.NAMENODE_ADDRESS, addr);
 {code}
 I will work on providing a patch for this issue.
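
A minimal sketch of the likely fix, assuming the token string used by redirectToRandomDataNode is made available to NodeListJsp (hedged, not the actual patch):

{code}
// Sketch only: append the delegation token parameter to the datanode link,
// mirroring what redirectToRandomDataNode already does.
String url = HttpConfig.getSchemePrefix() + d.getHostName() + ":"
    + d.getInfoPort()
    + "/browseDirectory.jsp?namenodeInfoPort=" + nnHttpPort + "&dir="
    + URLEncoder.encode("/", "UTF-8")
    + (tokenString == null ? "" :
       JspHelper.getDelegationTokenUrlParam(tokenString))
    + JspHelper.getUrlParam(JspHelper.NAMENODE_ADDRESS, nnaddr);
{code}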



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5384) Add a new TransitionState to indicate NN is in transition from standby state to active state

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524726#comment-14524726
 ] 

Hadoop QA commented on HDFS-5384:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12609503/HDFS-5384.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10576/console |


This message was automatically generated.

 Add a new TransitionState to indicate NN is in transition from standby state 
 to active state
 

 Key: HDFS-5384
 URL: https://issues.apache.org/jira/browse/HDFS-5384
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ha, namenode
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-5384.000.patch, HDFS-5384.001.patch


 Currently in HA setup, when a NameNode is transitioning from standby to 
 active, the current code first sets the state of the NN to Active, then 
 starts the active service, during which the NN still needs to tail the 
 remaining editlog and may not be able to serve certain requests as expected 
 (such as HDFS-5322). 
 So it may be necessary to define a transition state to indicate that the NN has 
 left the previous state and is transitioning to the next state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5180) Output the processing time of slow RPC request to node's log

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524729#comment-14524729
 ] 

Hadoop QA commented on HDFS-5180:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12609752/HDFS-5180.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10578/console |


This message was automatically generated.

 Output the processing time of slow RPC request to node's log
 

 Key: HDFS-5180
 URL: https://issues.apache.org/jira/browse/HDFS-5180
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0
Reporter: Shinichi Yamashita
Assignee: Shinichi Yamashita
 Attachments: HDFS-5180.patch, HDFS-5180.patch


 In current trunk, the processing time of every RPC request is output to the log 
 at DEBUG level.
 When troubleshooting a large-scale cluster, the current implementation is hard 
 to work with.
 Therefore we should set a threshold and output only slow RPCs to the node's 
 log, so that abnormal behavior can be spotted.
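
A minimal sketch of the idea, using a hypothetical threshold key for illustration (not the actual patch):

{code}
// Sketch only: log at INFO when an RPC exceeds a configurable threshold and
// keep the existing DEBUG line for everything else.
// "dfs.namenode.slow.rpc.threshold.ms" is a hypothetical key name.
long thresholdMs = conf.getLong("dfs.namenode.slow.rpc.threshold.ms", 1000L);
long elapsedMs = Time.now() - callStartTime;
if (elapsedMs >= thresholdMs) {
  LOG.info("Slow RPC: " + call + " took " + elapsedMs + " ms");
} else if (LOG.isDebugEnabled()) {
  LOG.debug("RPC: " + call + " took " + elapsedMs + " ms");
}
{code}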



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5357) TestFileSystemAccessService failures in JDK7

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524722#comment-14524722
 ] 

Hadoop QA commented on HDFS-5357:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12608332/HDFS-5357v1.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10572/console |


This message was automatically generated.

 TestFileSystemAccessService failures in JDK7
 

 Key: HDFS-5357
 URL: https://issues.apache.org/jira/browse/HDFS-5357
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.9
Reporter: Robert Parker
Assignee: Robert Parker
 Attachments: HDFS-5357v1.patch


 junit.framework.AssertionFailedError: Expected Exception: ServiceException 
 got: ExceptionInInitializerError
   at junit.framework.Assert.fail(Assert.java:47)
   at 
 org.apache.hadoop.test.TestExceptionHelper$1.evaluate(TestExceptionHelper.java:56)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray2(ReflectionUtils.java:208)
   at 
 org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:159)
   at 
 org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:87)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:95)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5362) Add SnapshotException to terse exception group

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524725#comment-14524725
 ] 

Hadoop QA commented on HDFS-5362:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12609093/HDFS-5362.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10575/console |


This message was automatically generated.

 Add SnapshotException to terse exception group
 --

 Key: HDFS-5362
 URL: https://issues.apache.org/jira/browse/HDFS-5362
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 3.0.0
Reporter: Shinichi Yamashita
Assignee: Shinichi Yamashita
Priority: Minor
 Attachments: HDFS-5362.patch


 In trunk, the stack trace of SnapshotException is output to the NameNode's log 
 via the ipc.Server class.
 The message of SnapshotException is informative enough on its own, so the full 
 stack trace is unnecessary.
 We should therefore add SnapshotException to the terse exception group of 
 NameNodeRpcServer.
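
A minimal sketch of the change, assuming the existing terse-exception hook on the RPC server:

{code}
// Sketch only: register SnapshotException so ipc.Server logs just its message
// instead of a full stack trace.
this.clientRpcServer.addTerseExceptions(SnapshotException.class);
{code}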



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5549) Support for implementing custom FsDatasetSpi from outside the project

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524735#comment-14524735
 ] 

Hadoop QA commented on HDFS-5549:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12615703/HDFS-5549.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10581/console |


This message was automatically generated.

 Support for implementing custom FsDatasetSpi from outside the project
 -

 Key: HDFS-5549
 URL: https://issues.apache.org/jira/browse/HDFS-5549
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.0.0
Reporter: Ignacio Corderi
 Attachments: HDFS-5549.patch


 Visibility for multiple methods and a few classes was changed to public to 
 allow FsDatasetSpi<T> and all the related classes that need subtyping to be 
 fully implemented from outside the HDFS project.
 Block transfers were abstracted behind a factory, given that the behavior will be 
 changed for DataNodes using Kinetic drives. The existing DataNode to DataNode 
 block transfer functionality was moved to LegacyBlockTransferer; no new 
 configuration is needed to use this class and keep the behavior that is 
 currently present.
 DataNodes have an additional configuration key 
 DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY to override the default block 
 transfer behavior.
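
A minimal sketch of how such a pluggable factory might be resolved (the factory class names are illustrative, derived from the description rather than the patch):

{code}
// Sketch only: look up the block transferer factory from configuration,
// defaulting to the legacy DataNode-to-DataNode behaviour.
Class<? extends BlockTransfererFactory> clazz = conf.getClass(
    DFSConfigKeys.DFS_DATANODE_BLOCKTRANSFERER_FACTORY_KEY,
    LegacyBlockTransfererFactory.class,      // hypothetical default
    BlockTransfererFactory.class);           // hypothetical interface
BlockTransfererFactory factory = ReflectionUtils.newInstance(clazz, conf);
{code}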



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5639) rpc scheduler abstraction

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524737#comment-14524737
 ] 

Hadoop QA commented on HDFS-5639:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12617519/HDFS-5639-2.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10582/console |


This message was automatically generated.

 rpc scheduler abstraction
 -

 Key: HDFS-5639
 URL: https://issues.apache.org/jira/browse/HDFS-5639
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma
 Attachments: HDFS-5639-2.patch, HDFS-5639.patch


 We have run into various issues in namenode and hbase w.r.t. rpc handling in 
 multi-tenant clusters. Examples are
 https://issues.apache.org/jira/i#browse/HADOOP-9640
 https://issues.apache.org/jira/i#browse/HBASE-8836
 There are different ideas on how to prioritize rpc requests. It could be 
 based on user id, or on whether it is a read or a write request, or it could 
 use a specific rule such as treating a datanode's RPC as more important than a 
 client RPC.
 We want to enable people to implement and experiment with different rpc 
 schedulers.
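
A minimal sketch of what such an abstraction could look like (a hypothetical interface, for illustration only):

{code}
// Sketch only: a pluggable scheduler that assigns a priority level to each
// incoming call, e.g. by user, by read vs. write, or by caller type
// (datanode RPC vs. client RPC).
public interface RpcScheduler {
  /** @return the priority level for this call; 0 is the highest priority. */
  int getPriorityLevel(String userName, String protocol, String method);
}
{code}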



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5517) Lower the default maximum number of blocks per file

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524732#comment-14524732
 ] 

Hadoop QA commented on HDFS-5517:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12614116/HDFS-5517.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10579/console |


This message was automatically generated.

 Lower the default maximum number of blocks per file
 ---

 Key: HDFS-5517
 URL: https://issues.apache.org/jira/browse/HDFS-5517
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.2.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-5517.patch


 We introduced the maximum number of blocks per file in HDFS-4305, but we set 
 the default to 1MM. In practice this limit is so high as to never be hit, 
 whereas we know that an individual file with 10s of thousands of blocks can 
 cause problems. We should lower the default value, in my opinion to 10k.
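
For illustration, operators can also tighten the limit per cluster without a code change, assuming the key introduced in HDFS-4305 (a sketch, not part of this patch):

{code}
// Sketch only: override the per-file block limit on the NameNode side.
Configuration conf = new HdfsConfiguration();
conf.setLong(DFSConfigKeys.DFS_NAMENODE_MAX_BLOCKS_PER_FILE_KEY, 10000L);
{code}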



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5361) Change the unit of StartupProgress 'PercentComplete' to percentage

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524724#comment-14524724
 ] 

Hadoop QA commented on HDFS-5361:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12608564/HDFS-5361.3.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10574/console |


This message was automatically generated.

 Change the unit of StartupProgress 'PercentComplete' to percentage
 --

 Key: HDFS-5361
 URL: https://issues.apache.org/jira/browse/HDFS-5361
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.1.0-beta
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: metrics, newbie
 Attachments: HDFS-5361.2.patch, HDFS-5361.3.patch, HDFS-5361.patch


 Now the unit of the 'PercentComplete' metric is a ratio (maximum is 1.0). It's 
 confusing for users because its name includes percent.
 The metric should be multiplied by 100.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4960) Unnecessary .meta seeks even when skip checksum is true

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524739#comment-14524739
 ] 

Hadoop QA commented on HDFS-4960:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12591131/4960-trunk.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10583/console |


This message was automatically generated.

 Unnecessary .meta seeks even when skip checksum is true
 ---

 Key: HDFS-4960
 URL: https://issues.apache.org/jira/browse/HDFS-4960
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Varun Sharma
Assignee: Varun Sharma
 Attachments: 4960-branch2.patch, 4960-trunk.patch


 While attempting to benchmark an HBase + Hadoop 2.0 setup on SSDs, we found 
 unnecessary seeks into .meta files; each seek was a 7 byte read at the head 
 of the file that attempts to validate the version #. Since the client is 
 requesting no-checksum, we should not need to touch the .meta file at 
 all.
 Since the purpose of skip checksum is also to avoid the performance penalty 
 of the extra seek, we should not be seeking into .meta if skip checksum is 
 true.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5278) Reduce memory consumptions of TestDFSClientRetries

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524723#comment-14524723
 ] 

Hadoop QA commented on HDFS-5278:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12608351/HDFS-5278.001.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10573/console |


This message was automatically generated.

 Reduce memory consumptions of TestDFSClientRetries
 --

 Key: HDFS-5278
 URL: https://issues.apache.org/jira/browse/HDFS-5278
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Minor
 Attachments: HDFS-5278.000.patch, HDFS-5278.001.patch


 TestDFSClientRetries::testDFSClientRetriesOnBusyBlocks() spawns about 50 
 threads during the execution, each of which takes more than 6m of memory.  This 
 makes debugging it in eclipse under the default settings difficult, since it 
 triggers an OutOfMemoryException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8309) Skip unit test using DataNodeTestUtils#injectDataDirFailure() on Windows

2015-05-01 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8309:
-
Status: Patch Available  (was: Open)

 Skip unit test using DataNodeTestUtils#injectDataDirFailure() on Windows
 

 Key: HDFS-8309
 URL: https://issues.apache.org/jira/browse/HDFS-8309
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.7.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Attachments: HDFS-8309.00.patch


 As [~cnauroth] noted  in HDFS-7917 below, 
 DataNodeTestUtils.injectDataDirFailure() won't work for Windows as rename 
 will fail due to open handles on data node dir. This ticket is opened to skip 
 these tests for Windows. 
 bq.Unfortunately, I just remembered that the rename isn't going to work on 
 Windows. It typically doesn't allow you to rename a directory where there are 
 open file handles anywhere in the sub-tree. We'd have to shutdown the 
 DataNode before doing the rename and then start it up. By doing that, we'd be 
 changing the meaning of the test from covering an online failure to covering 
 a failure at DataNode startup, so I don't think we can make that change.
 Below are the two test cases that need to be fixed:
 # TestDataNodeVolumeFailure#testFailedVolumeBeingRemovedFromDataNode
 # TestDataNodeHotSwapVolumes.testDirectlyReloadAfterCheckDiskError
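
A minimal sketch of the skip guard, assuming the JUnit assume pattern commonly used in Hadoop tests:

{code}
// Sketch only: skip these tests on Windows, where renaming a directory with
// open handles is not supported.
org.junit.Assume.assumeTrue(!org.apache.hadoop.util.Shell.WINDOWS);
{code}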



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8305) HDFS INotify: the destination field of RenameOp should always end with the file name

2015-05-01 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524325#comment-14524325
 ] 

Lei (Eddy) Xu commented on HDFS-8305:
-

+1 non-binding.

 HDFS INotify: the destination field of RenameOp should always end with the 
 file name
 

 Key: HDFS-8305
 URL: https://issues.apache.org/jira/browse/HDFS-8305
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-8305.001.patch


 HDFS INotify: the destination field of RenameOp should always end with the 
 file name rather than sometimes being a directory name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6407) new namenode UI, lost ability to sort columns in datanode tab

2015-05-01 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524415#comment-14524415
 ] 

Benoy Antony commented on HDFS-6407:


The sorting seems to be correct for all the fields. Please let me know what 
makes you think otherwise and I can fix it.

The purpose of this jira is to put back sorting on the data nodes tab. Since it was 
easy to enable sorting on other tables using the plugin, sorting and pagination 
were added to those tables as well. 

The processing is done completely on the client side, so there is no impact 
on the server side. At this point, the process of enabling sorting and pagination 
for a table is simple and uniform. To give up this simplicity, there would have to 
be some noticeable delay with the current approach.

Tested with 6,000 files. It took time to render 6,000 items irrespective of 
whether sorting was enabled or not.  
If pagination is set, then items are painted instantly.

In any case, no noticeable delay is caused by sorting, so I am not sure whether 
there is any point in optimizing this client-side logic. 

One enhancement which would make the page render faster is to set the 
default page size to a reasonably high number like 200. In cases where there 
are 10,000+ items to paint, the page renders fast if the page size is 200 
items. If users want all items, they can select All to display all files, 
or search for the file in other pages. I will update the patch with the 
page size set to 200 items for all tables.







 new namenode UI, lost ability to sort columns in datanode tab
 -

 Key: HDFS-6407
 URL: https://issues.apache.org/jira/browse/HDFS-6407
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.0
Reporter: Nathan Roberts
Assignee: Benoy Antony
Priority: Minor
 Attachments: 002-datanodes-sorted-capacityUsed.png, 
 002-datanodes.png, 002-filebrowser.png, 002-snapshots.png, 
 HDFS-6407-002.patch, HDFS-6407.patch, browse_directory.png, datanodes.png, 
 snapshots.png


 The old UI supported clicking on a column header to sort on that column. The new UI 
 seems to have dropped this very useful feature.
 There are a few tables in the Namenode UI to display datanode information, 
 directory listings and snapshots.
 When there are many items in the tables, it is useful to have the ability to sort 
 on the different columns.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7678) Erasure coding: DFSInputStream with decode functionality

2015-05-01 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-7678:

Attachment: HDFS-7678-HDFS-7285.007.patch

 Erasure coding: DFSInputStream with decode functionality
 

 Key: HDFS-7678
 URL: https://issues.apache.org/jira/browse/HDFS-7678
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Li Bo
Assignee: Zhe Zhang
 Attachments: BlockGroupReader.patch, HDFS-7678-HDFS-7285.002.patch, 
 HDFS-7678-HDFS-7285.003.patch, HDFS-7678-HDFS-7285.004.patch, 
 HDFS-7678-HDFS-7285.005.patch, HDFS-7678-HDFS-7285.006.patch, 
 HDFS-7678-HDFS-7285.007.patch, HDFS-7678.000.patch, HDFS-7678.001.patch


 A block group reader will read data from a BlockGroup whether it is in striping 
 layout or contiguous layout. The corrupt blocks can be known before 
 reading (told by the namenode), or may only be found during reading. The block group 
 reader needs to do decoding work when some blocks are found to be corrupt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3618) SSH fencing option may incorrectly succeed if nc (netcat) command not present

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524690#comment-14524690
 ] 

Hadoop QA commented on HDFS-3618:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12597900/HDFS-3618.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10560/console |


This message was automatically generated.

 SSH fencing option may incorrectly succeed if nc (netcat) command not present
 -

 Key: HDFS-3618
 URL: https://issues.apache.org/jira/browse/HDFS-3618
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: auto-failover
Affects Versions: 2.0.0-alpha
Reporter: Brahma Reddy Battula
Assignee: Vinayakumar B
 Attachments: HDFS-3618.patch, HDFS-3618.patch, HDFS-3618.patch, 
 zkfc.txt, zkfc_threaddump.out


 Started NNs and ZKFCs on Suse11.
 Suse11 has netcat installed and netcat -z works (but nc -z won't 
 work).
 While executing the following command, we got command not found, hence rc is 
 non-zero and we assume that the server is down. Here we end up 
 without ever checking whether the service is down or not:
 {code}
 LOG.info(
 "Indeterminate response from trying to kill service. " +
 "Verifying whether it is running using nc...");
 rc = execCommand(session, "nc -z " + serviceAddr.getHostName() +
   " " + serviceAddr.getPort());
 if (rc == 0) {
   // the service is still listening - we are unable to fence
   LOG.warn("Unable to fence - it is running but we cannot kill it");
   return false;
 } else {
   LOG.info("Verified that the service is down.");
   return true;
 }
 {code}
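
A minimal sketch of a stricter check, relying on the usual shell convention that a missing command exits with 127 (illustrative only, not the committed fix):

{code}
// Sketch only: treat "command not found" (exit code 127) as indeterminate
// instead of concluding that the service is down.
if (rc == 0) {
  LOG.warn("Unable to fence - it is running but we cannot kill it");
  return false;
} else if (rc == 127) {
  LOG.warn("nc is not available on the target host; cannot verify that the service is down");
  return false;
} else {
  LOG.info("Verified that the service is down.");
  return true;
}
{code}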



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4870) periodically re-resolve hostnames in included and excluded datanodes list

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524702#comment-14524702
 ] 

Hadoop QA commented on HDFS-4870:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12585903/HDFS-4870.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10565/console |


This message was automatically generated.

 periodically re-resolve hostnames in included and excluded datanodes list
 -

 Key: HDFS-4870
 URL: https://issues.apache.org/jira/browse/HDFS-4870
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.5-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-4870.001.patch


 We currently only resolve the hostnames in the included and excluded 
 datanodes list once-- when the list is read.  The rationale for this is that 
 in big clusters, DNS resolution for thousands of nodes can take a long time 
 (when generating a datanode list in getDatanodeListForReport, for example).  
 However, if the DNS information changes for one of these hosts, we should 
 reflect that.  A background thread could do these DNS resolutions every few 
 minutes without blocking any foreground operations.
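
A minimal sketch of such a background refresher (the host-list accessors are hypothetical):

{code}
// Sketch only: periodically re-resolve the include/exclude host lists off the
// critical path so stale DNS entries are eventually corrected.
ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
executor.scheduleWithFixedDelay(new Runnable() {
  @Override
  public void run() {
    for (String host : hostFileManager.getAllHosts()) {         // hypothetical accessor
      try {
        InetAddress resolved = InetAddress.getByName(host);
        hostFileManager.updateResolvedAddress(host, resolved);  // hypothetical update
      } catch (UnknownHostException e) {
        LOG.warn("Could not re-resolve " + host, e);
      }
    }
  }
}, 5, 5, TimeUnit.MINUTES);
{code}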



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4660) Duplicated checksum on DN in a recovered pipeline

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524696#comment-14524696
 ] 

Hadoop QA commented on HDFS-4660:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12576518/HDFS-4660.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10562/console |


This message was automatically generated.

 Duplicated checksum on DN in a recovered pipeline
 -

 Key: HDFS-4660
 URL: https://issues.apache.org/jira/browse/HDFS-4660
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Peng Zhang
Priority: Critical
 Attachments: HDFS-4660.patch


 pipeline DN1 -> DN2 -> DN3
 stop DN2
 pipeline added node DN4 located at 2nd position
 DN1 -> DN4 -> DN3
 recover RBW
 DN4 after recover rbw
 2013-04-01 21:02:31,570 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Recover 
 RBW replica 
 BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_1004
 2013-04-01 21:02:31,570 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: 
 Recovering ReplicaBeingWritten, blk_-9076133543772600337_1004, RBW
   getNumBytes() = 134144
   getBytesOnDisk() = 134144
   getVisibleLength()= 134144
 end at chunk (134144/512=262)
 DN3 after recover rbw
 2013-04-01 21:02:31,575 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Recover 
 RBW replica 
 BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_10042013-04-01
  21:02:31,575 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: 
 Recovering ReplicaBeingWritten, blk_-9076133543772600337_1004, RBW
   getNumBytes() = 134028 
   getBytesOnDisk() = 134028
   getVisibleLength()= 134028
 client send packet after recover pipeline
 offset=133632  len=1008
 DN4 after flush 
 2013-04-01 21:02:31,779 DEBUG 
 org.apache.hadoop.hdfs.server.datanode.DataNode: FlushOrsync, file 
 offset:134640; meta offset:1063
 // meta end position should be floor(134640/512)*4 + 7 == 1059, but now it is 
 1063.
 DN3 after flush
 2013-04-01 21:02:31,782 DEBUG 
 org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: 
 BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_1005, 
 type=LAST_IN_PIPELINE, downstreams=0:[]: enqueue Packet(seqno=219, 
 lastPacketInBlock=false, offsetInBlock=134640, 
 ackEnqueueNanoTime=8817026136871545)
 2013-04-01 21:02:31,782 DEBUG 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Changing 
 meta file offset of block 
 BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_1005 from 
 1055 to 1051
 2013-04-01 21:02:31,782 DEBUG 
 org.apache.hadoop.hdfs.server.datanode.DataNode: FlushOrsync, file 
 offset:134640; meta offset:1059
 After checking meta on DN4, I found checksum of chunk 262 is duplicated, but 
 data not.
 Later after block was finalized, DN4's scanner detected bad block, and then 
 reported it to NM. NM send a command to delete this block, and replicate this 
 block from other DN in pipeline to satisfy duplication num.
 I think this is because BlockReceiver skips data bytes already written, 
 but does not skip checksum bytes already written. And the function 
 adjustCrcFilePosition is only used for the last non-completed chunk, but 
 not for this situation.
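
For reference, a small sketch of the expected meta-file length for a given byte count (a 7-byte header plus one 4-byte CRC per 512-byte chunk, counting the trailing partial chunk), which is where the 1059 above comes from:

{code}
// Sketch only: expected .meta length = 7-byte header + 4 bytes of CRC per
// (possibly partial) 512-byte data chunk.
static long expectedMetaLength(long blockBytes) {
  long numChunks = (blockBytes + 511) / 512;   // ceil(blockBytes / 512)
  return 7 + 4 * numChunks;
}
// expectedMetaLength(134640) == 7 + 4 * 263 == 1059
{code}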



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4812) add hdfsReadFully, hdfsWriteFully

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524710#comment-14524710
 ] 

Hadoop QA commented on HDFS-4812:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12582553/HDFS-4812.001.patch |
| Optional Tests | javac unit |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10567/console |


This message was automatically generated.

 add hdfsReadFully, hdfsWriteFully
 -

 Key: HDFS-4812
 URL: https://issues.apache.org/jira/browse/HDFS-4812
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-4812.001.patch


 It would be nice to have {{hdfsReadFully}} and {{hdfsWriteFully}} in libhdfs. 
  The current APIs don't guarantee that we read or write as much as we're told 
 to do.  We have readFully and writeFully in Java, but not in libhdfs at the 
 moment.
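
A minimal sketch of the loop such a call would wrap, expressed in Java terms since the native function would simply iterate over the short-read API (illustrative only):

{code}
// Sketch only: keep reading until the requested length is satisfied or EOF.
static void readFully(FSDataInputStream in, long pos, byte[] buf, int off, int len)
    throws IOException {
  while (len > 0) {
    int n = in.read(pos, buf, off, len);   // positional read, may return short
    if (n < 0) {
      throw new EOFException("Premature EOF while reading fully");
    }
    pos += n;
    off += n;
    len -= n;
  }
}
{code}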



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5040) Audit log for admin commands/ logging output of all DFS admin commands

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524721#comment-14524721
 ] 

Hadoop QA commented on HDFS-5040:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12607420/HDFS-5040.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10571/console |


This message was automatically generated.

 Audit log for admin commands/ logging output of all DFS admin commands
 --

 Key: HDFS-5040
 URL: https://issues.apache.org/jira/browse/HDFS-5040
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: namenode
Affects Versions: 3.0.0
Reporter: Raghu C Doppalapudi
Assignee: Shinichi Yamashita
 Attachments: HDFS-5040.patch, HDFS-5040.patch, HDFS-5040.patch


 Enable audit logging for all the admin commands, and also provide the ability to log all 
 the admin commands in a separate log file; at this point all the logging is 
 displayed on the console.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3627) OfflineImageViewer oiv Indented processor prints out the Java class name in the DELEGATION_KEY field

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524689#comment-14524689
 ] 

Hadoop QA commented on HDFS-3627:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12574930/HDFS-3627.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10559/console |


This message was automatically generated.

 OfflineImageViewer oiv Indented processor prints out the Java class name in 
 the DELEGATION_KEY field
 

 Key: HDFS-3627
 URL: https://issues.apache.org/jira/browse/HDFS-3627
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Ravi Prakash
Priority: Minor
 Attachments: HDFS-3627.patch, HDFS-3627.patch, HDFS-3627.patch, 
 HDFS-3627.patch, HDFS-3627.patch, HDFS-3627.patch


 Instead of the contents of the delegation key this is printed out
 DELEGATION_KEY = 
 org.apache.hadoop.security.token.delegation.DelegationKey@1e2ca7
 DELEGATION_KEY = 
 org.apache.hadoop.security.token.delegation.DelegationKey@105bd58
 DELEGATION_KEY = 
 org.apache.hadoop.security.token.delegation.DelegationKey@1d1e730
 DELEGATION_KEY = 
 org.apache.hadoop.security.token.delegation.DelegationKey@1a116c9
 DELEGATION_KEY = 
 org.apache.hadoop.security.token.delegation.DelegationKey@df1832



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3512) Delay in scanning blocks at DN side when there are huge number of blocks

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524691#comment-14524691
 ] 

Hadoop QA commented on HDFS-3512:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12531266/HDFS-3512.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10561/console |


This message was automatically generated.

 Delay in scanning blocks at DN side when there are huge number of blocks
 

 Key: HDFS-3512
 URL: https://issues.apache.org/jira/browse/HDFS-3512
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.0.0-alpha
Reporter: suja s
Assignee: amith
 Attachments: HDFS-3512.patch


 The block scanner maintains the full list of blocks at the DN side in a map and there 
 is no differentiation between the blocks which are already scanned and the 
 ones not scanned. For every check (i.e. every 5 secs) it will pick one block 
 and scan it. There are chances that it chooses a block which is already scanned, 
 which leads to further delay in scanning of blocks which are yet to be 
 scanned.
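
A minimal sketch of one way to avoid re-picking already scanned blocks (the data structures are illustrative, not the attached patch):

{code}
// Sketch only: keep unscanned and scanned blocks in separate sets so each
// cycle always picks a not-yet-scanned block first.
private final Set<Block> unscanned = new LinkedHashSet<Block>(allBlocks);
private final Set<Block> scanned = new HashSet<Block>();

Block pickNextBlock() {
  if (unscanned.isEmpty()) {
    // every block has been scanned once; start a new round
    unscanned.addAll(scanned);
    scanned.clear();
  }
  Iterator<Block> it = unscanned.iterator();
  Block b = it.next();
  it.remove();
  scanned.add(b);
  return b;
}
{code}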



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4387) libhdfs doesn't work with jamVM

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524697#comment-14524697
 ] 

Hadoop QA commented on HDFS-4387:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12564522/01.patch 
|
| Optional Tests | javac unit javadoc findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10563/console |


This message was automatically generated.

 libhdfs doesn't work with jamVM
 ---

 Key: HDFS-4387
 URL: https://issues.apache.org/jira/browse/HDFS-4387
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs
Affects Versions: 3.0.0
Reporter: Andy Isaacson
Priority: Minor
 Attachments: 01.patch


 Building and running tests on OpenJDK 7 on Ubuntu 12.10 fails with {{mvn test 
 -Pnative}}.  The output is hard to decipher but the underlying issue is that 
 {{test_libhdfs_native}} segfaults at startup.
 {noformat}
 (gdb) run
 Starting program: 
 /mnt/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/test_libhdfs_threaded
 [Thread debugging using libthread_db enabled]
 Using host libthread_db library /lib/x86_64-linux-gnu/libthread_db.so.1.
 Program received signal SIGSEGV, Segmentation fault.
 0x7739a897 in attachJNIThread (name=0x0, is_daemon=is_daemon@entry=0 
 '\000', group=0x0) at thread.c:768
 768 thread.c: No such file or directory.
 (gdb) where
 #0 0x7739a897 in attachJNIThread (name=0x0, 
 is_daemon=is_daemon@entry=0 '\000', group=0x0) at thread.c:768
 #1 0x77395020 in attachCurrentThread (is_daemon=0, args=0x0, 
 penv=0x7fffddb8) at jni.c:1454
 #2 Jam_AttachCurrentThread (vm=optimized out, penv=0x7fffddb8, 
 args=0x0) at jni.c:1466
 #3 0x77bcf979 in getGlobalJNIEnv () at 
 /mnt/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c:527
 #4 getJNIEnv () at 
 /mnt/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c:585
 #5 0x00402512 in nmdCreate (conf=conf@entry=0x7fffdeb0) at 
 /mnt/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/native_mini_dfs.c:49
 #6 0x004016e1 in main () at 
 /mnt/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test_libhdfs_threaded.c:283
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4924) Show NameNode state on dfsclusterhealth page

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524712#comment-14524712
 ] 

Hadoop QA commented on HDFS-4924:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12588932/HDFS-4924.trunk.1.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10568/console |


This message was automatically generated.

 Show NameNode state on dfsclusterhealth page
 

 Key: HDFS-4924
 URL: https://issues.apache.org/jira/browse/HDFS-4924
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: federation
Affects Versions: 2.1.0-beta
Reporter: Lohit Vijayarenu
Assignee: Lohit Vijayarenu
 Attachments: HDFS-4924.trunk.1.patch


 dfsclusterhealth.jsp shows a summary of multiple namenodes in the cluster. With 
 federation combined with HA it becomes difficult to quickly know the state of 
 NameNodes in the cluster. It would be good to show whether a NameNode is 
 Active/Standby on the summary page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5066) Inode tree with snapshot information visualization

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524713#comment-14524713
 ] 

Hadoop QA commented on HDFS-5066:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12604243/HDFS-5066.v3.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10569/console |


This message was automatically generated.

 Inode tree with snapshot information visualization 
 ---

 Key: HDFS-5066
 URL: https://issues.apache.org/jira/browse/HDFS-5066
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor
 Attachments: HDFS-5066.v1.patch, HDFS-5066.v2.patch, 
 HDFS-5066.v3.patch, visnap.png


 It would be nice to be able to visualize snapshot information, in order to 
 ease the understanding of related data structures. We can generate a graph from 
 in-memory inode links.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4160) libhdfs / fuse-dfs should implement O_CREAT | O_EXCL

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524706#comment-14524706
 ] 

Hadoop QA commented on HDFS-4160:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12585242/HDFS-4160.001.patch |
| Optional Tests | javac unit |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10566/console |


This message was automatically generated.

 libhdfs / fuse-dfs should implement O_CREAT | O_EXCL
 

 Key: HDFS-4160
 URL: https://issues.apache.org/jira/browse/HDFS-4160
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-4160.001.patch


 {{hdfsOpenFile}} contains this code:
 {code}
 if ((flags & O_CREAT) && (flags & O_EXCL)) {
   fprintf(stderr, "WARN: hdfs does not truly support O_CREATE & O_EXCL\n");
 }
 {code}
 But {{hdfsOpenFile}} could easily support *O_CREAT* | *O_EXCL* by calling 
 {{FileSystem#create}} with {{overwrite = false}}.
 We should do this.  It would also benefit {{fuse-dfs}}.
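
A minimal sketch of the Java-side call the native wrapper would make (hedged; the actual change is inside hdfsOpenFile):

{code}
// Sketch only: with overwrite=false, FileSystem#create fails if the file
// already exists, which matches O_CREAT | O_EXCL semantics.
FSDataOutputStream out = fs.create(path, false /* overwrite */);
{code}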



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4837) Allow DFSAdmin to run when HDFS is not the default file system

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524699#comment-14524699
 ] 

Hadoop QA commented on HDFS-4837:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12583706/HDFS-4837.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10564/console |


This message was automatically generated.

 Allow DFSAdmin to run when HDFS is not the default file system
 --

 Key: HDFS-4837
 URL: https://issues.apache.org/jira/browse/HDFS-4837
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Mostafa Elhemali
Assignee: Mostafa Elhemali
 Attachments: HDFS-4837.patch


 When Hadoop is running with a default file system other than HDFS, but still 
 has an HDFS namenode running, we are unable to run dfsadmin commands.
 I suggest that DFSAdmin use the same mechanism as the NameNode does today to get 
 its address: look at dfs.namenode.rpc-address, and if it is not set, fall back to 
 getting it from the default file system.
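
A minimal sketch of the proposed lookup (illustrative only):

{code}
// Sketch only: prefer dfs.namenode.rpc-address and only fall back to the
// default file system when it is not set.
String addr = conf.getTrimmed(DFSConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY);
InetSocketAddress nnAddr = (addr != null && !addr.isEmpty())
    ? NetUtils.createSocketAddr(addr)
    : NameNode.getAddress(conf);   // derived from fs.defaultFS today
{code}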



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6861) Separate Balancer specific logic form Dispatcher

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524880#comment-14524880
 ] 

Hadoop QA commented on HDFS-6861:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  1s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12662699/h6861_20140819.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10642/console |


This message was automatically generated.

 Separate Balancer specific logic from Dispatcher
 

 Key: HDFS-6861
 URL: https://issues.apache.org/jira/browse/HDFS-6861
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer  mover
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Attachments: h6861_20140818.patch, h6861_20140819.patch


 In order to balance the datanode storage utilization of a cluster, Balancer (1) 
 classifies datanodes into different groups (overUtilized, aboveAvgUtilized, 
 belowAvgUtilized and underUtilized), (2) chooses source and target datanode 
 pairs and (3) chooses blocks to move.  Some of this logic is in Dispatcher; 
 it is better to separate it out.  This JIRA is follow-on work to HDFS-6828.
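 A rough sketch of the classification step described above (illustrative only, not 
 the actual Balancer code; the threshold handling in the real implementation is 
 more involved):
 {code}
 // Group datanodes by utilization relative to the cluster average.
 double avg = totalDfsUsed * 100.0 / totalCapacity;
 for (DatanodeInfo dn : datanodes) {
   double util = dn.getDfsUsed() * 100.0 / dn.getCapacity();
   if (util > avg + threshold)        overUtilized.add(dn);
   else if (util > avg)               aboveAvgUtilized.add(dn);
   else if (util >= avg - threshold)  belowAvgUtilized.add(dn);
   else                               underUtilized.add(dn);
 }
 {code}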



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5887) Add suffix to generated protobuf class

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524877#comment-14524877
 ] 

Hadoop QA commented on HDFS-5887:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  1s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12661725/HDFS-5887.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10641/console |


This message was automatically generated.

 Add suffix to generated protobuf class
 --

 Key: HDFS-5887
 URL: https://issues.apache.org/jira/browse/HDFS-5887
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-5698 (FSImage in protobuf)
Reporter: Haohui Mai
Assignee: Tassapol Athiapinya
Priority: Minor
 Attachments: HDFS-5887.000.patch, 
 HDFS-5887.000.proto_files-only.patch, HDFS-5887.001.patch


 As suggested by [~tlipcon], the code is more readable if we give each class 
 generated by protobuf the suffix Proto.
 This JIRA proposes to rename the classes without introducing any functionality 
 changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8305) HDFS INotify: the destination field of RenameOp should always end with the file name

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524129#comment-14524129
 ] 

Hadoop QA commented on HDFS-8305:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m  8s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m 50s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 21s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 27s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 33s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 48s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 37s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 33s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   4m 23s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 113m 37s | Tests failed in hadoop-hdfs. |
| | | 162m 29s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestEncryptionZonesWithKMS |
| Timed out tests | org.apache.hadoop.hdfs.TestFileCreation |
|   | org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12729603/HDFS-8305.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 3393461 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10508/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10508/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10508/console |


This message was automatically generated.

 HDFS INotify: the destination field of RenameOp should always end with the 
 file name
 

 Key: HDFS-8305
 URL: https://issues.apache.org/jira/browse/HDFS-8305
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-8305.001.patch


 HDFS INotify: the destination field of RenameOp should always end with the 
 file name rather than sometimes being a directory name.
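 An illustrative consumer of the inotify stream (not part of the patch; exception 
 handling omitted) showing where the destination path is observed:
 {code}
 // Read rename events and inspect the destination path, which per this JIRA
 // should always end with the renamed file's name.
 HdfsAdmin admin = new HdfsAdmin(FileSystem.getDefaultUri(conf), conf);
 DFSInotifyEventInputStream stream = admin.getInotifyEventStream();
 EventBatch batch = stream.take();
 for (Event event : batch.getEvents()) {
   if (event.getEventType() == Event.EventType.RENAME) {
     Event.RenameEvent re = (Event.RenameEvent) event;
     System.out.println(re.getSrcPath() + " -> " + re.getDstPath());
   }
 }
 {code}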



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8309) Skip unit test using DataNodeTestUtils#injectDataDirFailure() on Windows

2015-05-01 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8309:
-
Attachment: HDFS-8309.00.patch

Attached a patch that ensures all tests using 
DataNodeTestUtils.injectDataDirFailure() are skipped on Windows, and updates the 
comments. 

 Skip unit test using DataNodeTestUtils#injectDataDirFailure() on Windows
 

 Key: HDFS-8309
 URL: https://issues.apache.org/jira/browse/HDFS-8309
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.7.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Attachments: HDFS-8309.00.patch


 As [~cnauroth] noted in HDFS-7917 (quoted below), 
 DataNodeTestUtils.injectDataDirFailure() won't work on Windows, as the rename 
 will fail due to open handles on the data node dir. This ticket is opened to skip 
 these tests on Windows. 
 bq.Unfortunately, I just remembered that the rename isn't going to work on 
 Windows. It typically doesn't allow you to rename a directory where there are 
 open file handles anywhere in the sub-tree. We'd have to shutdown the 
 DataNode before doing the rename and then start it up. By doing that, we'd be 
 changing the meaning of the test from covering an online failure to covering 
 a failure at DataNode startup, so I don't think we can make that change.
 Below are the two test cases that need to be fixed:
 # TestDataNodeVolumeFailure#testFailedVolumeBeingRemovedFromDataNode
 # TestDataNodeHotSwapVolumes.testDirectlyReloadAfterCheckDiskError
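 A minimal sketch of the skip pattern (variable names assumed; illustrative, not the 
 attached patch): JUnit's Assume turns the test into a skip when the condition does 
 not hold.
 {code}
 // Skip on Windows: the data-dir rename done by injectDataDirFailure() cannot
 // succeed there because of open file handles in the DataNode directory.
 assumeTrue(!Path.WINDOWS);
 DataNodeTestUtils.injectDataDirFailure(dataDir);
 {code}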



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8229) LAZY_PERSIST file gets deleted after NameNode restart.

2015-05-01 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8229:

   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.

Thanks for the contribution [~surendrasingh].

 LAZY_PERSIST file gets deleted after NameNode restart.
 --

 Key: HDFS-8229
 URL: https://issues.apache.org/jira/browse/HDFS-8229
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.6.0
Reporter: surendra singh lilhore
Assignee: surendra singh lilhore
 Fix For: 2.8.0

 Attachments: HDFS-8229.patch, HDFS-8229_1.patch, HDFS-8229_2.patch


 {code}
 2015-04-20 10:26:55,180 WARN 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Removing lazyPersist 
 file /LAZY_PERSIST/smallfile with no replicas.
 {code}
 After a NameNode restart and before the DataNodes' registration, if 
 {{LazyPersistFileScrubber}} runs it will delete lazy-persist files.
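 One possible shape of the guard (purely illustrative; the committed fix may differ 
 in detail): skip the scrub while the NameNode is still in startup safe mode, i.e. 
 before the DataNodes have had a chance to report their replicas.
 {code}
 // Inside the scrubber's periodic task: do nothing until we are out of safe mode,
 // so files are not declared replica-less just because DNs haven't registered yet.
 if (isInSafeMode()) {
   LOG.debug("Skipping lazy persist scrub while in safe mode");
   return;
 }
 {code}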



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8312) Trash does not descend into child directories to check for permissions

2015-05-01 Thread Eric Yang (JIRA)
Eric Yang created HDFS-8312:
---

 Summary: Trash does not descend into child directories to check 
for permissions
 Key: HDFS-8312
 URL: https://issues.apache.org/jira/browse/HDFS-8312
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS, security
Affects Versions: 2.6.0, 2.2.0
Reporter: Eric Yang


HDFS trash does not descend into child directories to check if the user has 
permission to delete the files.  For example:

Run the following command to initialize directory structure as super user:
{code}
hadoop fs -mkdir /BSS/level1
hadoop fs -mkdir /BSS/level1/level2
hadoop fs -mkdir /BSS/level1/level2/level3
hadoop fs -put /tmp/appConfig.json /BSS/level1/level2/level3/testfile.txt
hadoop fs -chown user1:users /BSS/level1/level2/level3/testfile.txt
hadoop fs -chown -R user1:users /BSS/level1
hadoop fs -chmod -R 750 /BSS/level1
hadoop fs -chmod -R 640 /BSS/level1/level2/level3/testfile.txt
hadoop fs -chmod 775 /BSS
{code}

Change to a normal user called user2. 

When trash is enabled:
{code}
sudo su user2 -
hadoop fs -rm -r /BSS/level1
15/05/01 16:51:20 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
Deletion interval = 3600 minutes, Emptier interval = 0 minutes.
Moved: 'hdfs://bdvs323.svl.ibm.com:9000/BSS/level1' to trash at: 
hdfs://bdvs323.svl.ibm.com:9000/user/user2/.Trash/Current
{code}

When trash is disabled:
{code}
/opt/ibm/biginsights/IHC/bin/hadoop fs -Dfs.trash.interval=0 -rm -r /BSS/level1
15/05/01 16:58:31 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
Deletion interval = 0 minutes, Emptier interval = 0 minutes.
rm: Permission denied: user=user2, access=ALL, 
inode=/BSS/level1:user1:users:drwxr-x---
{code}

There is an inconsistency between trash behavior and delete behavior.  When trash 
is enabled, files owned by user1 are deleted by user2.  It looks like trash does 
not recursively validate whether the files in child directories can be removed.
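A hedged sketch of what such a recursive check could look like (illustrative only, 
not the actual TrashPolicyDefault code), using the FileSystem#access API:
{code}
// Before moving a tree to trash, verify the caller could actually delete it:
// deleting an entry requires write access on its parent directory.
void checkDeletable(FileSystem fs, Path dir) throws IOException {
  fs.access(dir, FsAction.WRITE);
  for (FileStatus st : fs.listStatus(dir)) {
    if (st.isDirectory()) {
      checkDeletable(fs, st.getPath());
    }
  }
}
{code}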



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7348) Erasure Coding: striped block recovery

2015-05-01 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524425#comment-14524425
 ] 

Yi Liu commented on HDFS-7348:
--

Thanks Zhe for the review and good comments!
I am updating the patch and addressing your comments, and will reply to them later.

Just a quick response:
{quote}
The test failed on my local machine, reporting NPE when closing file:
{quote}
Are you referring to {{TestRecoverStripedFile.java}}? It runs successfully in my 
local env, and I also confirmed it succeeds on the latest branch.

 Erasure Coding: striped block recovery
 --

 Key: HDFS-7348
 URL: https://issues.apache.org/jira/browse/HDFS-7348
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Kai Zheng
Assignee: Yi Liu
 Attachments: ECWorker.java, HDFS-7348.001.patch


 This JIRA is to recover one or more missing striped blocks in a striped block 
 group.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8229) LAZY_PERSIST file gets deleted after NameNode restart.

2015-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524457#comment-14524457
 ] 

Hudson commented on HDFS-8229:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #7716 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7716/])
HDFS-8229. LAZY_PERSIST file gets deleted after NameNode restart. (Contributed 
by Surendra Singh Lilhore) (arp: rev 6f541edce0ed64bf316276715c4bc07794ff20ac)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyPersistTestCase.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


 LAZY_PERSIST file gets deleted after NameNode restart.
 --

 Key: HDFS-8229
 URL: https://issues.apache.org/jira/browse/HDFS-8229
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.6.0
Reporter: surendra singh lilhore
Assignee: surendra singh lilhore
 Fix For: 2.8.0

 Attachments: HDFS-8229.patch, HDFS-8229_1.patch, HDFS-8229_2.patch


 {code}
 2015-04-20 10:26:55,180 WARN 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Removing lazyPersist 
 file /LAZY_PERSIST/smallfile with no replicas.
 {code}
 After a NameNode restart and before the DataNodes' registration, if 
 {{LazyPersistFileScrubber}} runs it will delete lazy-persist files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3927) DFSInputStream#blockSeekTo may print incorrect warn msg for IOException

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524569#comment-14524569
 ] 

Hadoop QA commented on HDFS-3927:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12544787/HDFS-3927.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10523/console |


This message was automatically generated.

 DFSInputStream#blockSeekTo may print incorrect warn msg for IOException
 ---

 Key: HDFS-3927
 URL: https://issues.apache.org/jira/browse/HDFS-3927
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Attachments: HDFS-3927.patch


 In DFSInputStream#blockSeekTo, for most IOExceptions (except 
 InvalidEncryptionKeyException and InvalidBlockTokenException), the current 
 code only prints "Failed to connect to " + targetAddr, which can be misleading 
 since there are other possible cases, e.g., the chosen node (remote/local) 
 can be connected but the block/replica cannot be found. The correct error message 
 (including the error message from the remote node) should be printed to the log.
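 A hedged sketch of a more informative warning (illustrative, not the attached 
 patch): keep the exception so the remote node's message is not lost.
 {code}
 // Log the block and the underlying exception instead of a generic connect failure.
 DFSClient.LOG.warn("Failed to read block " + targetBlock.getBlock()
     + " from " + targetAddr + ": " + ex);
 {code}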



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3383) libhdfs does not build on ARM because jni_md.h is not found

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524560#comment-14524560
 ] 

Hadoop QA commented on HDFS-3383:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12525971/HDFS-3383.patch |
| Optional Tests | javadoc javac unit |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10522/console |


This message was automatically generated.

 libhdfs does not build on ARM because jni_md.h is not found
 ---

 Key: HDFS-3383
 URL: https://issues.apache.org/jira/browse/HDFS-3383
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs
Affects Versions: 0.23.1
 Environment: Linux 3.2.0-1412-omap4 #16-Ubuntu SMP PREEMPT Tue Apr 17 
 19:38:42 UTC 2012 armv7l armv7l armv7l GNU/Linux
 java version 1.7.0_04-ea
 Java(TM) SE Runtime Environment for Embedded (build 1.7.0_04-ea-b20, headless)
 Java HotSpot(TM) Embedded Server VM (build 23.0-b21, mixed mode, experimental)
Reporter: Trevor Robinson
 Attachments: HDFS-3383.patch


 The wrong include directory is used for jni_md.h:
 [INFO] --- make-maven-plugin:1.0-beta-1:make-install (compile) @ hadoop-hdfs 
 ---
 [INFO] /bin/bash ./libtool --tag=CC   --mode=compile gcc 
 -DPACKAGE_NAME=\"libhdfs\" -DPACKAGE_TARNAME=\"libhdfs\" 
 -DPACKAGE_VERSION=\"0.1.0\" -DPACKAGE_STRING=\"libhdfs 0.1.0\" 
 -DPACKAGE_BUGREPORT=\"omal...@apache.org\" -DPACKAGE_URL=\"\" 
 -DPACKAGE=\"libhdfs\" -DVERSION=\"0.1.0\" -DSTDC_HEADERS=1 
 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 
 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 
 -DHAVE_UNISTD_H=1 -DHAVE_DLFCN_H=1 -DLT_OBJDIR=\".libs/\" -DHAVE_STRDUP=1 
 -DHAVE_STRERROR=1 -DHAVE_STRTOUL=1 -DHAVE_FCNTL_H=1 -DHAVE__BOOL=1 
 -DHAVE_STDBOOL_H=1 -I. -g -O2 -DOS_LINUX -DDSO_DLFCN -DCPU=\"arm\" 
 -I/usr/lib/jvm/ejdk1.7.0_04/include -I/usr/lib/jvm/ejdk1.7.0_04/include/arm 
 -Wall -Wstrict-prototypes -MT hdfs.lo -MD -MP -MF .deps/hdfs.Tpo -c -o 
 hdfs.lo hdfs.c
 [INFO] libtool: compile:  gcc -DPACKAGE_NAME=\"libhdfs\" 
 -DPACKAGE_TARNAME=\"libhdfs\" -DPACKAGE_VERSION=\"0.1.0\" 
 -DPACKAGE_STRING=\"libhdfs 0.1.0\" 
 -DPACKAGE_BUGREPORT=\"omal...@apache.org\" -DPACKAGE_URL=\"\" 
 -DPACKAGE=\"libhdfs\" -DVERSION=\"0.1.0\" -DSTDC_HEADERS=1 
 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 
 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 
 -DHAVE_UNISTD_H=1 -DHAVE_DLFCN_H=1 -DLT_OBJDIR=\".libs/\" -DHAVE_STRDUP=1 
 -DHAVE_STRERROR=1 -DHAVE_STRTOUL=1 -DHAVE_FCNTL_H=1 -DHAVE__BOOL=1 
 -DHAVE_STDBOOL_H=1 -I. -g -O2 -DOS_LINUX -DDSO_DLFCN -DCPU=\"arm\" 
 -I/usr/lib/jvm/ejdk1.7.0_04/include -I/usr/lib/jvm/ejdk1.7.0_04/include/arm 
 -Wall -Wstrict-prototypes -MT hdfs.lo -MD -MP -MF .deps/hdfs.Tpo -c hdfs.c  
 -fPIC -DPIC -o .libs/hdfs.o
 [INFO] In file included from hdfs.h:33:0,
 [INFO]  from hdfs.c:19:
 [INFO] /usr/lib/jvm/ejdk1.7.0_04/include/jni.h:45:20: fatal error: jni_md.h: 
 No such file or directory
 [INFO] compilation terminated.
 [INFO] make: *** [hdfs.lo] Error 1
 The problem is caused by 
 hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4 overriding 
 supported_os=arm when host_cpu=arm*; supported_os should remain linux, 
 since it determines the jni_md.h include path. OpenJDK 6 and 7 (in Ubuntu 
 12.04, at least) and Oracle EJDK put jni_md.h in include/linux. Not sure 
 if/why this ever worked before.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3118) wiki and hadoop templates provides wrong superusergroup property instead of supergroup

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524548#comment-14524548
 ] 

Hadoop QA commented on HDFS-3118:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12519052/supergroup.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10520/console |


This message was automatically generated.

 wiki and hadoop templates provides wrong superusergroup property instead of 
 supergroup
 --

 Key: HDFS-3118
 URL: https://issues.apache.org/jira/browse/HDFS-3118
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 1.0.0, 1.0.1
 Environment: Used Debian package install
Reporter: Olivier Sallou
Priority: Minor
 Attachments: supergroup.patch


 The hdfs-site template and the wiki 
 (http://hadoop.apache.org/hdfs/docs/current/hdfs_permissions_guide.html#The+Super-User)
 refer to the property dfs.permissions.superusergroup to define the superuser 
 group.
 However, we must use the property dfs.permissions.supergroup, not 
 superusergroup, to make it work.
 In src/hdfs/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java, 
 supergroup is read via:
 this.supergroup = conf.get("dfs.permissions.supergroup", "supergroup");
 It does not make use of DFS_PERMISSIONS_SUPERUSERGROUP_KEY.
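 A hedged sketch of the consistency fix (illustrative; the real patch may differ): 
 read the group through the documented key and its default constant instead of the 
 hard-coded literals.
 {code}
 // DFSConfigKeys.DFS_PERMISSIONS_SUPERUSERGROUP_KEY is "dfs.permissions.superusergroup".
 this.supergroup = conf.get(DFSConfigKeys.DFS_PERMISSIONS_SUPERUSERGROUP_KEY,
                            DFSConfigKeys.DFS_PERMISSIONS_SUPERUSERGROUP_DEFAULT);
 {code}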



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3456) blockReportInterval is long value but when we take the random value it uses getRandom().nextInt,it is causing frequently BR

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524553#comment-14524553
 ] 

Hadoop QA commented on HDFS-3456:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12530588/h3456_20120601.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10521/console |


This message was automatically generated.

 blockReportInterval is long value but when we take the random value it uses 
 getRandom().nextInt,it is causing frequently BR
 ---

 Key: HDFS-3456
 URL: https://issues.apache.org/jira/browse/HDFS-3456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.0.0-alpha
Reporter: Brahma Reddy Battula
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Attachments: HDFS-3456.001.patch, HDFS-3456.002.patch, 
 HDFS-3456.003.patch, HDFS-3456.004.patch, h3456_20120601.patch


 blockReportInterval is a long value, but when we take the random value it uses 
 getRandom().nextInt().
 Due to this, offerService can throw an exception, since casting the long to int 
 may wrap around to a negative value.
 As a result, block reports may be sent very frequently.
 {code}
   if (resetBlockReportTime) {
     lastBlockReport = startTime -
         DFSUtil.getRandom().nextInt((int)(dnConf.blockReportInterval));
     resetBlockReportTime = false;
   }
 {code}
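 One way to avoid the int truncation (a sketch only, not necessarily the committed 
 fix) is to draw the random offset as a long:
 {code}
 // Pick a delay in [0, blockReportInterval) without casting the interval to int.
 long offset = (long) (DFSUtil.getRandom().nextDouble() * dnConf.blockReportInterval);
 lastBlockReport = startTime - offset;
 {code}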



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5296) DFS usage gets doubled in the WebUI of federated namenode

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524775#comment-14524775
 ] 

Hadoop QA commented on HDFS-5296:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12606661/HDFS-5296-v1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10601/console |


This message was automatically generated.

 DFS usage gets doubled in the WebUI of federated namenode
 -

 Key: HDFS-5296
 URL: https://issues.apache.org/jira/browse/HDFS-5296
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.5-alpha
Reporter: Siqi Li
Assignee: Siqi Li
Priority: Minor
 Attachments: BBF12817-B83E-4CC5-B0B8-4FA322E87FB7.png, 
 HDFS-5296-v1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3503) Move LengthInputStream and PositionTrackingInputStream to common

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524772#comment-14524772
 ] 

Hadoop QA commented on HDFS-3503:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12639073/h3503_20140407.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10598/console |


This message was automatically generated.

 Move LengthInputStream and PositionTrackingInputStream to common
 

 Key: HDFS-3503
 URL: https://issues.apache.org/jira/browse/HDFS-3503
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Attachments: h3503_20140328.patch, h3503_20140407.patch


 We have LengthInputStream in org.apache.hadoop.hdfs.server.datanode.fsdataset 
 and PositionTrackingInputStream in 
 org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.  These two classes 
 are generally useful.  Let's move them to org.apache.hadoop.io.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5951) Provide diagnosis information in the Web UI

2015-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14524770#comment-14524770
 ] 

Hadoop QA commented on HDFS-5951:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12628903/HDFS-5951.000.patch |
| Optional Tests |  |
| git revision | trunk / f1a152c |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10596/console |


This message was automatically generated.

 Provide diagnosis information in the Web UI
 ---

 Key: HDFS-5951
 URL: https://issues.apache.org/jira/browse/HDFS-5951
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5951.000.patch, diagnosis-failure.png, 
 diagnosis-succeed.png


 HDFS should provide operation statistics in its UI. It can go one step 
 further by leveraging this information to diagnose common problems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

