[jira] [Resolved] (HDFS-17249) Fix TestDFSUtil.testIsValidName() unit test failure

2023-11-13 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-17249.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

> Fix TestDFSUtil.testIsValidName() unit test failure
> ---
>
> Key: HDFS-17249
> URL: https://issues.apache.org/jira/browse/HDFS-17249
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: liuguanghua
>Assignee: liuguanghua
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> TestDFSUtil.testIsValidName fails on 
> assertFalse(DFSUtil.isValidName("/foo/:/bar")); fixed it and added a test 
> case in TestDFSUtil.testIsValidName.
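
For reference, a minimal sketch of the kind of check involved, assuming JUnit 4 
as used by TestDFSUtil; the assertions in the committed patch may differ:

{code:java}
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.apache.hadoop.hdfs.DFSUtil;
import org.junit.Test;

public class TestIsValidNameSketch {
  @Test
  public void testIsValidName() {
    // a path component containing ':' must be rejected as an HDFS name
    assertFalse(DFSUtil.isValidName("/foo/:/bar"));
    // a plain absolute path stays valid
    assertTrue(DFSUtil.isValidName("/foo/bar"));
  }
}
{code}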






[jira] [Resolved] (HDFS-16791) Add getEnclosingRoot() API to filesystem interface and implementations

2023-11-08 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-16791.
---
Resolution: Fixed

> Add getEnclosingRoot() API to filesystem interface and implementations
> --
>
> Key: HDFS-16791
> URL: https://issues.apache.org/jira/browse/HDFS-16791
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.6
>Reporter: Tom McCormick
>Assignee: Tom McCormick
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> At LinkedIn we run many HDFS volumes that are federated by either 
> ViewFilesystem or Router Based Federation. As our number of HDFS volumes 
> grows, we have a growing need to migrate data seamlessly across volumes.
> Many frameworks have a notion of staging or temp directories, but those 
> directories often live in random locations. We want an API getEnclosingRoot, 
> which provides the root path of a file or dataset.
> In ViewFilesystem / Router Based Federation, the enclosingRoot will be the 
> mount point.
> We will also take into account other restrictions on renames, such as 
> encryption zones.
> If there are several candidate paths (a mount point and an encryption zone), 
> we will return the longer path.
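
A hedged usage sketch of the proposed API; only getEnclosingRoot() comes from 
this issue, and the surrounding setup and paths are illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class EnclosingRootExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // hypothetical staging path; real locations are framework-dependent
    Path staging = new Path("/jobs/tmp/staging/job_123");
    // the deepest enclosing root: a mount point or an encryption zone
    Path root = fs.getEnclosingRoot(staging);
    // a rename kept under 'root' stays within a single volume
    System.out.println("enclosing root of " + staging + " is " + root);
  }
}
{code}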






[jira] [Created] (HDFS-17224) TestRollingUpgrade.testDFSAdminRollingUpgradeCommands failing

2023-10-13 Thread Steve Loughran (Jira)
Steve Loughran created HDFS-17224:
-

 Summary: TestRollingUpgrade.testDFSAdminRollingUpgradeCommands 
failing
 Key: HDFS-17224
 URL: https://issues.apache.org/jira/browse/HDFS-17224
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: dfsadmin, test
Affects Versions: 3.4.0
Reporter: Steve Loughran


TestRollingUpgrade.testDFSAdminRollingUpgradeCommands is failing because the 
static mbean isn't null. This is inevitably related to the fact that in test 
runs the JVM is reused, so the mbean may be present from a previous test, 
maybe one which didn't clean up.
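
One defensive option, sketched on the assumption that the leftover object is a 
JMX MBean registered by an earlier test in the reused JVM; the ObjectName 
pattern below is hypothetical:

{code:java}
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

import org.junit.Before;

public class RollingUpgradeSetupSketch {
  @Before
  public void unregisterLeftoverMBeans() throws Exception {
    MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
    // hypothetical pattern; match whatever MBean the test expects to be null
    ObjectName pattern =
        new ObjectName("Hadoop:service=NameNode,name=RollingUpgrade*");
    for (ObjectName name : mbs.queryNames(pattern, null)) {
      mbs.unregisterMBean(name);  // start from a clean slate
    }
  }
}
{code}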






[jira] [Created] (HDFS-17202) TestDFSAdmin.testAllDatanodesReconfig assertion failing (again)

2023-09-19 Thread Steve Loughran (Jira)
Steve Loughran created HDFS-17202:
-

 Summary: TestDFSAdmin.testAllDatanodesReconfig assertion failing 
(again)
 Key: HDFS-17202
 URL: https://issues.apache.org/jira/browse/HDFS-17202
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: dfsadmin, test
Affects Versions: 3.3.9
Reporter: Steve Loughran


Surfacing in the test run for HADOOP-18895 PR 
https://github.com/apache/hadoop/pull/6073

```
Expecting:
 <["started at Thu Sep 14 23:14:07 GMT 2023SUCCESS: Changed property 
dfs.datanode.peer.stats.enabled",
" From: "false"",
" To: "true"",
" and finished at Thu Sep 14 23:14:07 GMT 2023."]>
to contain subsequence:
 <["SUCCESS: Changed property dfs.datanode.peer.stats.enabled",
" From: "false"",
" To: "true""]>
```
Looks like some logging race condition again, as the "started at" output is on 
the same line as "SUCCESS". Maybe something needs to add a \n after the 
"started" message, or before "SUCCESS".






[jira] [Resolved] (HDFS-16934) org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig regression

2023-03-06 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-16934.
---
Fix Version/s: 3.4.0
   3.3.5
   Resolution: Fixed

Fixed; ran the new test on 3.3.5 to verify the backport/conflict resolution was OK.

> org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig regression
> -
>
> Key: HDFS-16934
> URL: https://issues.apache.org/jira/browse/HDFS-16934
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfsadmin, test
>Affects Versions: 3.4.0, 3.3.5, 3.3.9
>Reporter: Steve Loughran
>Assignee: Shilun Fan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.5
>
>
> Jenkins test failure as the logged output is in the wrong order for the 
> assertions. HDFS-16624 flipped the order...without that this would have 
> worked.
> {code}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:87)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at org.junit.Assert.assertTrue(Assert.java:53)
>   at 
> org.apache.hadoop.hdfs.tools.TestDFSAdmin.testAllDatanodesReconfig(TestDFSAdmin.java:1149)
> {code}
> Here the code is asserting about the contents of the output, 
> {code}
> assertTrue(outs.get(0).startsWith("Reconfiguring status for node"));
> assertTrue("SUCCESS: Changed property 
> dfs.datanode.peer.stats.enabled".equals(outs.get(2))
> || "SUCCESS: Changed property 
> dfs.datanode.peer.stats.enabled".equals(outs.get(1)));  // here
> assertTrue("\tFrom: \"false\"".equals(outs.get(3)) || "\tFrom: 
> \"false\"".equals(outs.get(2)));
> assertTrue("\tTo: \"true\"".equals(outs.get(4)) || "\tTo: 
> \"true\"".equals(outs.get(3)))
> {code}
> If you look at the log, the actual line is appearing in that list, just in a 
> different place: a race condition.
> {code}
> 2023-02-24 01:02:06,275 [Listener at localhost/41795] INFO  
> tools.TestDFSAdmin (TestDFSAdmin.java:testAllDatanodesReconfig(1146)) - 
> dfsadmin -status -livenodes output:
> 2023-02-24 01:02:06,276 [Listener at localhost/41795] INFO  
> tools.TestDFSAdmin 
> (TestDFSAdmin.java:lambda$testAllDatanodesReconfig$0(1147)) - Reconfiguring 
> status for node [127.0.0.1:41795]: started at Fri Feb 24 01:02:03 GMT 2023 
> and finished at Fri Feb 24 01:02:03 GMT 2023.
> 2023-02-24 01:02:06,276 [Listener at localhost/41795] INFO  
> tools.TestDFSAdmin 
> (TestDFSAdmin.java:lambda$testAllDatanodesReconfig$0(1147)) - Reconfiguring 
> status for node [127.0.0.1:34007]: started at Fri Feb 24 01:02:03 GMT 
> 2023SUCCESS: Changed property dfs.datanode.peer.stats.enabled
> 2023-02-24 01:02:06,277 [Listener at localhost/41795] INFO  
> tools.TestDFSAdmin 
> (TestDFSAdmin.java:lambda$testAllDatanodesReconfig$0(1147)) -  From: "false"
> 2023-02-24 01:02:06,277 [Listener at localhost/41795] INFO  
> tools.TestDFSAdmin 
> (TestDFSAdmin.java:lambda$testAllDatanodesReconfig$0(1147)) -  To: "true"
> 2023-02-24 01:02:06,277 [Listener at localhost/41795] INFO  
> tools.TestDFSAdmin 
> (TestDFSAdmin.java:lambda$testAllDatanodesReconfig$0(1147)) -  and finished 
> at Fri Feb 24 01:02:03 GMT 2023.
> 2023-02-24 01:02:06,277 [Listener at localhost/41795] INFO  
> tools.TestDFSAdmin 
> (TestDFSAdmin.java:lambda$testAllDatanodesReconfig$0(1147)) - SUCCESS: 
> Changed property dfs.datanode.peer.stats.enabled
> {code}
> We have a race condition in output generation, and the assertions are clearly 
> too brittle.
> For the 3.3.5 release I'm not going to make this a blocker. What I will do is 
> propose that the asserts move to AssertJ, with an assertion that the 
> collection "containsExactlyInAnyOrder" all the strings.
> That will
> 1. not be brittle.
> 2. give nice errors on failure






[jira] [Resolved] (HDFS-16935) TestFsDatasetImpl.testReportBadBlocks brittle

2023-03-01 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-16935.
---
Resolution: Fixed

> TestFsDatasetImpl.testReportBadBlocks brittle
> -
>
> Key: HDFS-16935
> URL: https://issues.apache.org/jira/browse/HDFS-16935
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.4.0, 3.3.5, 3.3.9
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> Jenkins failure as the sleep() time is not long enough
> {code}
> Failing for the past 1 build (Since #4 )
> Took 7.4 sec.
> Error Message
> expected:<1> but was:<0>
> Stacktrace
> java.lang.AssertionError: expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.failNotEquals(Assert.java:835)
>   at org.junit.Assert.assertEquals(Assert.java:647)
>   at org.junit.Assert.assertEquals(Assert.java:633)
> {code}
> The assert comes after a 3s sleep waiting for reports to come in.
> {code}
>   dataNode.reportBadBlocks(block, dataNode.getFSDataset()
>   .getFsVolumeReferences().get(0));
>   Thread.sleep(3000);   // 3s 
> sleep
>   BlockManagerTestUtil.updateState(cluster.getNamesystem()
>   .getBlockManager());
>   // Verify the bad block has been reported to namenode
>   Assert.assertEquals(1, 
> cluster.getNamesystem().getCorruptReplicaBlocks());  // here
> {code}
> LambdaTestUtils.eventually() should be used around this assert, maybe with an 
> even shorter initial delay so that on faster systems the test is faster.






[jira] [Created] (HDFS-16935) TestFsDatasetImpl.testReportBadBlocks brittle

2023-02-24 Thread Steve Loughran (Jira)
Steve Loughran created HDFS-16935:
-

 Summary: TestFsDatasetImpl.testReportBadBlocks brittle
 Key: HDFS-16935
 URL: https://issues.apache.org/jira/browse/HDFS-16935
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.4.0, 3.3.5, 3.3.9
Reporter: Steve Loughran


Jenkins failure as the sleep() time is not long enough
{code}
Failing for the past 1 build (Since #4 )
Took 7.4 sec.
Error Message
expected:<1> but was:<0>
Stacktrace
java.lang.AssertionError: expected:<1> but was:<0>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:633)
{code}

The assert comes after a 3s sleep waiting for reports to come in.
{code}
  dataNode.reportBadBlocks(block, dataNode.getFSDataset()
  .getFsVolumeReferences().get(0));
  Thread.sleep(3000);   // 3s sleep
  BlockManagerTestUtil.updateState(cluster.getNamesystem()
  .getBlockManager());
  // Verify the bad block has been reported to namenode
  Assert.assertEquals(1, 
cluster.getNamesystem().getCorruptReplicaBlocks());  // here
{code}

LambdaTestUtils.eventually() should be used around this assert, maybe with an 
even shorter initial delay so that on faster systems the test is faster.
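
A sketch of that shape, assuming an eventually(timeoutMillis, intervalMillis, 
closure) overload in LambdaTestUtils and the test's existing MiniDFSCluster 
field; the timings are illustrative:

{code:java}
import org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerTestUtil;
import org.apache.hadoop.test.LambdaTestUtils;
import org.junit.Assert;

// poll for up to 10s at 100ms intervals instead of a fixed 3s sleep
LambdaTestUtils.eventually(10_000, 100, () -> {
  BlockManagerTestUtil.updateState(cluster.getNamesystem().getBlockManager());
  Assert.assertEquals(1, cluster.getNamesystem().getCorruptReplicaBlocks());
});
{code}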







[jira] [Created] (HDFS-16934) org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig regression

2023-02-24 Thread Steve Loughran (Jira)
Steve Loughran created HDFS-16934:
-

 Summary: 
org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig regression
 Key: HDFS-16934
 URL: https://issues.apache.org/jira/browse/HDFS-16934
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: dfsadmin, test
Affects Versions: 3.4.0, 3.3.5, 3.3.9
Reporter: Steve Loughran


Jenkins test failure as the logged output is in the wrong order for the 
assertions. HDFS-16624 flipped the order...without that this would have worked.

{code}

java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:87)
at org.junit.Assert.assertTrue(Assert.java:42)
at org.junit.Assert.assertTrue(Assert.java:53)
at 
org.apache.hadoop.hdfs.tools.TestDFSAdmin.testAllDatanodesReconfig(TestDFSAdmin.java:1149)
{code}


Here the code is asserting about the contents of the output, 
{code}
assertTrue(outs.get(0).startsWith("Reconfiguring status for node"));
assertTrue("SUCCESS: Changed property 
dfs.datanode.peer.stats.enabled".equals(outs.get(2))
|| "SUCCESS: Changed property 
dfs.datanode.peer.stats.enabled".equals(outs.get(1)));  // here
assertTrue("\tFrom: \"false\"".equals(outs.get(3)) || "\tFrom: 
\"false\"".equals(outs.get(2)));
assertTrue("\tTo: \"true\"".equals(outs.get(4)) || "\tTo: 
\"true\"".equals(outs.get(3)))
{code}

If you look at the log, the actual line is appearing in that list, just in a 
different place: a race condition.
{code}
2023-02-24 01:02:06,275 [Listener at localhost/41795] INFO  tools.TestDFSAdmin 
(TestDFSAdmin.java:testAllDatanodesReconfig(1146)) - dfsadmin -status 
-livenodes output:
2023-02-24 01:02:06,276 [Listener at localhost/41795] INFO  tools.TestDFSAdmin 
(TestDFSAdmin.java:lambda$testAllDatanodesReconfig$0(1147)) - Reconfiguring 
status for node [127.0.0.1:41795]: started at Fri Feb 24 01:02:03 GMT 2023 and 
finished at Fri Feb 24 01:02:03 GMT 2023.
2023-02-24 01:02:06,276 [Listener at localhost/41795] INFO  tools.TestDFSAdmin 
(TestDFSAdmin.java:lambda$testAllDatanodesReconfig$0(1147)) - Reconfiguring 
status for node [127.0.0.1:34007]: started at Fri Feb 24 01:02:03 GMT 
2023SUCCESS: Changed property dfs.datanode.peer.stats.enabled
2023-02-24 01:02:06,277 [Listener at localhost/41795] INFO  tools.TestDFSAdmin 
(TestDFSAdmin.java:lambda$testAllDatanodesReconfig$0(1147)) -From: "false"
2023-02-24 01:02:06,277 [Listener at localhost/41795] INFO  tools.TestDFSAdmin 
(TestDFSAdmin.java:lambda$testAllDatanodesReconfig$0(1147)) -To: "true"
2023-02-24 01:02:06,277 [Listener at localhost/41795] INFO  tools.TestDFSAdmin 
(TestDFSAdmin.java:lambda$testAllDatanodesReconfig$0(1147)) -  and finished at 
Fri Feb 24 01:02:03 GMT 2023.
2023-02-24 01:02:06,277 [Listener at localhost/41795] INFO  tools.TestDFSAdmin 
(TestDFSAdmin.java:lambda$testAllDatanodesReconfig$0(1147)) - SUCCESS: Changed 
property dfs.datanode.peer.stats.enabled
{code}
We have a race condition in output generation, and the assertions are clearly 
too brittle.

For the 3.3.5 release I'm not going to make this a blocker. What I will do is 
propose that the asserts move to AssertJ, with an assertion that the collection 
"containsExactlyInAnyOrder" all the strings.

That will
1. not be brittle.
2. give nice errors on failure
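
A sketch of the AssertJ form; contains() is used here because the captured 
output holds extra lines, while containsExactlyInAnyOrder() would need the 
complete expected list:

{code:java}
import static org.assertj.core.api.Assertions.assertThat;

// outs is the List<String> of captured dfsadmin output lines
assertThat(outs)
    .describedAs("dfsadmin reconfig output")
    .contains(
        "SUCCESS: Changed property dfs.datanode.peer.stats.enabled",
        "\tFrom: \"false\"",
        "\tTo: \"true\"");
// on failure, AssertJ prints the full actual list: the "nice errors" part
{code}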







[jira] [Resolved] (HDFS-16853) The UT TestLeaseRecovery2#testHardLeaseRecoveryAfterNameNodeRestart failed because HADOOP-18324

2023-02-13 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-16853.
---
Fix Version/s: 3.3.5
   Resolution: Fixed

> The UT TestLeaseRecovery2#testHardLeaseRecoveryAfterNameNodeRestart failed 
> because HADOOP-18324
> ---
>
> Key: HDFS-16853
> URL: https://issues.apache.org/jira/browse/HDFS-16853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.5
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.3.5
>
>
> The UT TestLeaseRecovery2#testHardLeaseRecoveryAfterNameNodeRestart failed 
> with the error message "Waiting for cluster to become active". The blocking 
> jstack is as below:
> {code:java}
> "BP-1618793397-192.168.3.4-1669198559828 heartbeating to 
> localhost/127.0.0.1:54673" #260 daemon prio=5 os_prio=31 tid=0x
> 7fc1108fa000 nid=0x19303 waiting on condition [0x700017884000]
>    java.lang.Thread.State: WAITING (parking)
>         at sun.misc.Unsafe.park(Native Method)
>         - parking to wait for  <0x0007430a9ec0> (a 
> java.util.concurrent.SynchronousQueue$TransferQueue)
>         at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>         at 
> java.util.concurrent.SynchronousQueue$TransferQueue.awaitFulfill(SynchronousQueue.java:762)
>         at 
> java.util.concurrent.SynchronousQueue$TransferQueue.transfer(SynchronousQueue.java:695)
>         at 
> java.util.concurrent.SynchronousQueue.put(SynchronousQueue.java:877)
>         at 
> org.apache.hadoop.ipc.Client$Connection.sendRpcRequest(Client.java:1186)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1482)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1429)
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:258)
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:139)
>         at com.sun.proxy.$Proxy23.sendHeartbeat(Unknown Source)
>         at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClient
> SideTranslatorPB.java:168)
>         at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:570)
>         at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:714)
>         at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:915)
>         at java.lang.Thread.run(Thread.java:748)  {code}
> After looking into the code, I found that this bug was introduced by 
> HADOOP-18324: RpcRequestSender exited without cleaning up the 
> rpcRequestQueue, which left the BPServiceActor blocked sending a request.






[jira] [Resolved] (HDFS-16795) Use secure XML parser utils in hdfs classes

2022-10-20 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-16795.
---
Fix Version/s: 3.3.5
   Resolution: Fixed

> Use secure XML parser utils in hdfs classes
> ---
>
> Key: HDFS-16795
> URL: https://issues.apache.org/jira/browse/HDFS-16795
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.4
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.5, 3.3.9
>
>
> Uptakes HADOOP-18469






[jira] [Resolved] (HDFS-16755) TestQJMWithFaults.testUnresolvableHostName() can fail due to unexpected host resolution

2022-09-01 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-16755.
---
Fix Version/s: 3.3.9
   Resolution: Fixed

> TestQJMWithFaults.testUnresolvableHostName() can fail due to unexpected host 
> resolution
> ---
>
> Key: HDFS-16755
> URL: https://issues.apache.org/jira/browse/HDFS-16755
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.4.0, 3.3.9
> Environment: Running using both Maven Surefire and an IDE results in 
> a test failure.  Switching the name to "bogus.invalid" results in the 
> expected behavior, which depends on an UnknownHostException.
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> Tests that want to use an unresolvable address may find that it actually 
> resolves in some environments. Replacing host names like "bogus" with an 
> IETF RFC 2606 domain name avoids the issue.
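
A tiny illustration of why the reserved TLD is safe to rely on; RFC 2606 
reserves ".invalid", so resolution fails in every environment:

{code:java}
import java.net.InetSocketAddress;

public class UnresolvableHostExample {
  public static void main(String[] args) {
    // "bogus" may resolve on networks with catch-all DNS; "bogus.invalid" cannot
    InetSocketAddress addr = new InetSocketAddress("bogus.invalid", 8485);
    System.out.println("unresolved? " + addr.isUnresolved());  // expect: true
  }
}
{code}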






[jira] [Resolved] (HDFS-16711) Empty hadoop-client-api artifacts on maven central

2022-08-04 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-16711.
---
Resolution: Won't Fix

Closing as "cantfix", sorry.

> Empty hadoop-client-api artifacts on maven central
> --
>
> Key: HDFS-16711
> URL: https://issues.apache.org/jira/browse/HDFS-16711
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.2.3
>Reporter: Robin Wolters
>Priority: Major
>
> I observed that for at least version 3.2.3 the artifacts on maven central for 
> the shaded jars of both 
> [hadoop-client-api|https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-client-api/3.2.3/]
>  and 
> [hadoop-client-runtime|https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-client-runtime/3.2.3/]
>  are empty, i.e. of ~45KB size and do not contain any class files if 
> extracted (or listed with "jar tf").
> I've come across this [e-mail 
> thread|https://www.mail-archive.com/common-dev@hadoop.apache.org/msg37261.html]
>  suggesting that there was the same problem with version 3.3.3, which appears 
> to be fixed. Version 3.2.3 is mentioned as well; could it be that this 
> version simply wasn't re-released?






[jira] [Resolved] (HDFS-9475) execution of org.apache.hadoop.hdfs.net.TcpPeerServer.close() causes timeout on Hadoop-2.6.0 with IBM-JDK-1.8

2022-07-28 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-9475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-9475.
--
Resolution: Won't Fix

> execution of org.apache.hadoop.hdfs.net.TcpPeerServer.close() causes timeout 
> on Hadoop-2.6.0 with IBM-JDK-1.8
> -
>
> Key: HDFS-9475
> URL: https://issues.apache.org/jira/browse/HDFS-9475
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0, 2.7.1
> Environment: IBM JDK 1.8.0
> Architecture:  s390x GNU/Linux
>Reporter: Rakesh Sharma
>Priority: Blocker
>
> ---
> Test set: org.apache.hadoop.hdfs.server.balancer.TestBalancer
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 101.69 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.server.balancer.TestBalancer
> testTwoReplicaShouldNotInSameDN(org.apache.hadoop.hdfs.server.balancer.TestBalancer)
>   Time elapsed: 100.008 sec  <<< ERROR!
> java.lang.Exception: test timed out after 100000 milliseconds
>   at 
> java.nio.channels.spi.AbstractSelectableChannel.implCloseChannel(AbstractSelectableChannel.java:245)
>   at 
> java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:126)
>   at sun.nio.ch.ServerSocketAdaptor.close(ServerSocketAdaptor.java:149)
>   at 
> org.apache.hadoop.hdfs.net.TcpPeerServer.close(TcpPeerServer.java:153)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.kill(DataXceiverServer.java:223)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:1663)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:1750)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1721)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1705)
>   at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.testTwoReplicaShouldNotInSameDN(TestBalancer.java:1382)






[jira] [Created] (HDFS-16679) StandbyState link

2022-07-22 Thread Steve Loughran (Jira)
Steve Loughran created HDFS-16679:
-

 Summary: StandbyState link
 Key: HDFS-16679
 URL: https://issues.apache.org/jira/browse/HDFS-16679
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Steve Loughran









[jira] [Resolved] (HDFS-16651) io.netty:netty CVE-2019-20444, CVE-2019-20445

2022-07-06 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-16651.
---
Resolution: Duplicate

There are some outstanding netty upgrade JIRAs; remember to check before 
creating new ones.

Updating any jar is often traumatic; any help in testing is always welcome.

> io.netty:netty CVE-2019-20444, CVE-2019-20445
> -
>
> Key: HDFS-16651
> URL: https://issues.apache.org/jira/browse/HDFS-16651
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: 3.1.1
>Affects Versions: 3.3.3
>Reporter: Basavaraj
>Priority: Critical
> Attachments: netty_vuln.html
>
>
> The netty library has security issues. We need to upgrade netty-all.






[jira] [Resolved] (HDFS-16563) Namenode WebUI prints sensitive information on Token Expiry

2022-06-07 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-16563.
---
Resolution: Fixed

> Namenode WebUI prints sensitive information on Token Expiry
> ---
>
> Key: HDFS-16563
> URL: https://issues.apache.org/jira/browse/HDFS-16563
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namanode, security, webhdfs
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.4
>
> Attachments: image-2022-04-27-23-01-16-033.png, 
> image-2022-04-27-23-28-40-568.png
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Log in to the Namenode WebUI.
> Wait for the token to expire (or set the token refresh time 
> dfs.namenode.delegation.token.renew/update-interval to a lower value).
> Refresh the WebUI after the token expiry.
> The full token information gets printed in the WebUI.
>  
> !image-2022-04-27-23-01-16-033.png!
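
For a quicker reproduction, the intervals can be shortened in a test 
configuration; a sketch using the Java Configuration API, with the key names 
taken from the steps above and the values purely illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// shorten renew/update intervals so the delegation token expires in seconds
conf.setLong("dfs.namenode.delegation.token.renew-interval", 5_000L);
conf.setLong("dfs.namenode.delegation.token.update-interval", 5_000L);
{code}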






[jira] [Reopened] (HDFS-16563) Namenode WebUI prints sensitive information on Token Expiry

2022-06-07 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HDFS-16563:
---

> Namenode WebUI prints sensitive information on Token Expiry
> ---
>
> Key: HDFS-16563
> URL: https://issues.apache.org/jira/browse/HDFS-16563
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namanode, security, webhdfs
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.4
>
> Attachments: image-2022-04-27-23-01-16-033.png, 
> image-2022-04-27-23-28-40-568.png
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Log in to the Namenode WebUI.
> Wait for the token to expire (or set the token refresh time 
> dfs.namenode.delegation.token.renew/update-interval to a lower value).
> Refresh the WebUI after the token expiry.
> The full token information gets printed in the WebUI.
>  
> !image-2022-04-27-23-01-16-033.png!






[jira] [Resolved] (HDFS-14478) Add libhdfs APIs for openFile

2022-04-24 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-14478.
---
Fix Version/s: 3.3.4
   Resolution: Fixed

> Add libhdfs APIs for openFile
> -
>
> Key: HDFS-14478
> URL: https://issues.apache.org/jira/browse/HDFS-14478
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.4
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> HADOOP-15229 added a "FileSystem builder-based openFile() API" that allows 
> specifying configuration values for opening files (similar to HADOOP-14365).
> Support for {{openFile}} will be a little tricky as it is asynchronous and 
> {{FutureDataInputStreamBuilder#build}} returns a {{CompletableFuture}}.
> At a high level, the API for {{openFile}} could look something like this:
> {code:java}
> hdfsFile hdfsOpenFile(hdfsFS fs, const char* path, int flags,
>   int bufferSize, short replication, tSize blocksize);
> hdfsOpenFileBuilder *hdfsOpenFileBuilderAlloc(hdfsFS fs,
> const char *path);
> hdfsOpenFileBuilder *hdfsOpenFileBuilderMust(hdfsOpenFileBuilder *builder,
> const char *key, const char *value);
> hdfsOpenFileBuilder *hdfsOpenFileBuilderOpt(hdfsOpenFileBuilder *builder,
> const char *key, const char *value);
> hdfsOpenFileFuture *hdfsOpenFileBuilderBuild(hdfsOpenFileBuilder *builder);
> void hdfsOpenFileBuilderFree(hdfsOpenFileBuilder *builder);
> hdfsFile hdfsOpenFileFutureGet(hdfsOpenFileFuture *future);
> hdfsFile hdfsOpenFileFutureGetWithTimeout(hdfsOpenFileFuture *future,
> int64_t timeout, javaConcurrentTimeUnit timeUnit);
> int hdfsOpenFileFutureCancel(hdfsOpenFileFuture *future,
> int mayInterruptIfRunning);
> void hdfsOpenFileFutureFree(hdfsOpenFileFuture *future);
> {code}
> Instead of exposing all the functionality of {{CompletableFuture}}, libhdfs 
> would just expose the functionality of {{Future}}.
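
For context, a brief sketch of the underlying Java builder API from 
HADOOP-15229 that these C shims would wrap; the path is illustrative:

{code:java}
import java.util.concurrent.CompletableFuture;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FutureDataInputStreamBuilder;
import org.apache.hadoop.fs.Path;

public class OpenFileExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // build() is asynchronous and returns a CompletableFuture; this is why
    // the C API above needs the hdfsOpenFileFuture* calls
    FutureDataInputStreamBuilder builder = fs.openFile(new Path("/data/file"));
    CompletableFuture<FSDataInputStream> future = builder.build();
    try (FSDataInputStream in = future.get()) {  // Future-style blocking get
      System.out.println("first byte: " + in.read());
    }
  }
}
{code}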






[jira] [Resolved] (HDFS-16501) Print the exception when reporting a bad block

2022-04-19 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-16501.
---
Resolution: Fixed

> Print the exception when reporting a bad block
> --
>
> Key: HDFS-16501
> URL: https://issues.apache.org/jira/browse/HDFS-16501
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: qinyuren
>Assignee: qinyuren
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.4, 3.3.3
>
> Attachments: image-2022-03-10-19-27-31-622.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> !image-2022-03-10-19-27-31-622.png|width=847,height=27!
> Currently, the VolumeScanner will find a bad block and report it to the 
> namenode without printing the reason why the block is bad. I think we had 
> better print the exception in the log file.
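
A hypothetical shape for the improved log call (variable names illustrative); 
SLF4J attaches a trailing exception argument as the stack trace:

{code:java}
// before: the cause of the bad block is lost
// LOG.warn("Reporting bad {} on {}", block, volume);

// after: pass the exception so the reason lands in the datanode log
LOG.warn("Reporting bad {} on {}", block, volume, e);
{code}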






[jira] [Resolved] (HDFS-16437) ReverseXML processor doesn't accept XML files without the SnapshotDiffSection.

2022-04-18 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-16437.
---
Resolution: Fixed

> ReverseXML processor doesn't accept XML files without the SnapshotDiffSection.
> --
>
> Key: HDFS-16437
> URL: https://issues.apache.org/jira/browse/HDFS-16437
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.1, 3.3.0
>Reporter: yanbin.zhang
>Assignee: yanbin.zhang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.3.3, 3.2.3
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> In a cluster environment without snapshots, if you want to convert the 
> generated XML back to an fsimage, an error is reported.
> {code:java}
> // code placeholder
> [test@test001 ~]$ hdfs oiv -p ReverseXML -i fsimage_0257220.xml 
> -o fsimage_0257220
> OfflineImageReconstructor failed: FSImage XML ended prematurely, without 
> including section(s) SnapshotDiffSection
> java.io.IOException: FSImage XML ended prematurely, without including 
> section(s) SnapshotDiffSection
>         at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1765)
>         at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1842)
>         at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:211)
>         at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:149)
> 22/01/25 15:56:52 INFO util.ExitUtil: Exiting with status 1: ExitException 
> {code}






[jira] [Resolved] (HDFS-16355) Improve the description of dfs.block.scanner.volume.bytes.per.second

2022-04-18 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-16355.
---
Fix Version/s: 3.3.3
   (was: 3.4.0)
   (was: 3.3.4)
   Resolution: Fixed

Fixed in 3.3.3; updating fix versions as appropriate.

> Improve the description of dfs.block.scanner.volume.bytes.per.second
> 
>
> Key: HDFS-16355
> URL: https://issues.apache.org/jira/browse/HDFS-16355
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, hdfs
>Affects Versions: 3.3.1
>Reporter: guophilipse
>Assignee: guophilipse
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.2.4, 3.3.3
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> The datanode block scanner will be disabled if 
> `dfs.block.scanner.volume.bytes.per.second` is configured to be less than or 
> equal to zero; we can improve the description.






[jira] [Resolved] (HDFS-11041) Unable to unregister FsDatasetState MBean if DataNode is shutdown twice

2022-04-18 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-11041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-11041.
---
Fix Version/s: 3.3.3
   (was: 3.4.0)
   (was: 3.3.4)
   Resolution: Fixed

Fixed in 3.3.3; updating fix versions as appropriate.

> Unable to unregister FsDatasetState MBean if DataNode is shutdown twice
> ---
>
> Key: HDFS-11041
> URL: https://issues.apache.org/jira/browse/HDFS-11041
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Trivial
> Fix For: 2.10.2, 3.3.3, 3.2.3
>
> Attachments: HDFS-11041.01.patch, HDFS-11041.02.patch, 
> HDFS-11041.03.patch
>
>
> I saw an error message like the following in some tests
> {noformat}
> 2016-10-21 04:09:03,900 [main] WARN  util.MBeans 
> (MBeans.java:unregister(114)) - Error unregistering 
> Hadoop:service=DataNode,name=FSDatasetState-33cd714c-0b1a-471f-8efe-f431d7d874bc
> javax.management.InstanceNotFoundException: 
> Hadoop:service=DataNode,name=FSDatasetState-33cd714c-0b1a-471f-8efe-f431d7d874bc
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:427)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546)
>   at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:112)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.shutdown(FsDatasetImpl.java:2127)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:2016)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:1985)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1962)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1936)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1929)
>   at 
> org.apache.hadoop.hdfs.TestDatanodeReport.testDatanodeReport(TestDatanodeReport.java:144)
> {noformat}
> The test shuts down a datanode and then shuts down the cluster, which shuts 
> down that datanode twice. Resetting the FsDatasetSpi reference in DataNode to 
> null resolves the issue.
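
The fix described can be sketched as making the shutdown idempotent; the field 
and method names here are hypothetical:

{code:java}
// inside DataNode: guard against a second shutdown() unregistering the
// FSDatasetState MBean twice
synchronized void shutdownDataset() {
  if (data != null) {          // 'data' is the FsDatasetSpi reference
    data.shutdown();           // unregisters the MBean
    data = null;               // a repeat call is now a no-op
  }
}
{code}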






[jira] [Resolved] (HDFS-16428) Source path with storagePolicy cause wrong typeConsumed while rename

2022-04-18 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-16428.
---
Fix Version/s: 3.3.3
   (was: 3.4.0)
   (was: 3.3.4)
   Resolution: Fixed

Fixed in 3.3.3; updating fix versions as appropriate.

> Source path with storagePolicy cause wrong typeConsumed while rename
> 
>
> Key: HDFS-16428
> URL: https://issues.apache.org/jira/browse/HDFS-16428
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: lei w
>Assignee: lei w
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.3, 3.2.3
>
> Attachments: example.txt
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> When computing quota in a rename operation, we use the storage policy of the 
> target directory to compute the source quota usage. This causes a wrong 
> typeConsumed value when a storage policy has been set on the source path. I 
> provided a unit test to demonstrate this situation.






[jira] [Resolved] (HDFS-16507) [SBN read] Avoid purging edit log which is in progress

2022-04-18 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-16507.
---
Fix Version/s: 3.3.3
   Resolution: Fixed

Fixed in 3.3.3; updating fix versions as appropriate.

> [SBN read] Avoid purging edit log which is in progress
> --
>
> Key: HDFS-16507
> URL: https://issues.apache.org/jira/browse/HDFS-16507
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: tomscut
>Assignee: tomscut
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.2.4, 3.3.3
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> We introduced the [Standby Read] feature in branch-3.1.0, but found a FATAL 
> exception. It looks like it is purging an edit log which is in progress.
> According to the analysis, I suspect that the in-progress editlog to be 
> purged (after the SNN checkpoint) is not finalized (see HDFS-14317) before 
> the ANN rolls its own edits.
> The stack:
> {code:java}
> java.lang.Thread.getStackTrace(Thread.java:1552)
>     org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1032)
>     
> org.apache.hadoop.hdfs.server.namenode.FileJournalManager.purgeLogsOlderThan(FileJournalManager.java:185)
>     
> org.apache.hadoop.hdfs.server.namenode.JournalSet$5.apply(JournalSet.java:623)
>     
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:388)
>     
> org.apache.hadoop.hdfs.server.namenode.JournalSet.purgeLogsOlderThan(JournalSet.java:620)
>     
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.purgeLogsOlderThan(FSEditLog.java:1512)
> org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager.purgeOldStorage(NNStorageRetentionManager.java:177)
>     
> org.apache.hadoop.hdfs.server.namenode.FSImage.purgeOldStorage(FSImage.java:1249)
>     
> org.apache.hadoop.hdfs.server.namenode.ImageServlet$2.run(ImageServlet.java:617)
>     
> org.apache.hadoop.hdfs.server.namenode.ImageServlet$2.run(ImageServlet.java:516)
>     java.security.AccessController.doPrivileged(Native Method)
>     javax.security.auth.Subject.doAs(Subject.java:422)
>     
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>     
> org.apache.hadoop.hdfs.server.namenode.ImageServlet.doPut(ImageServlet.java:515)
>     javax.servlet.http.HttpServlet.service(HttpServlet.java:710)
>     javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>     org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
>     
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
>     
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1604)
>     
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>     org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>     
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>     org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>     
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>     
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>     
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>     
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>     org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>     
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>     
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>     
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>     
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>     
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>     org.eclipse.jetty.server.Server.handle(Server.java:539)
>     org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
>     
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>     
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>     org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>     
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>     
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>     
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>     
> 

[jira] [Resolved] (HDFS-16422) Fix thread safety of EC decoding during concurrent preads

2022-04-18 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-16422.
---
Fix Version/s: 3.3.3
   (was: 3.4.0)
   (was: 3.3.4)
   Resolution: Fixed

In release 3.3.3.

Note that in branch-3.3+ the commit message doesn't include the JIRA ID; it 
was added during the backport.

> Fix thread safety of EC decoding during concurrent preads
> -
>
> Key: HDFS-16422
> URL: https://issues.apache.org/jira/browse/HDFS-16422
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfsclient, ec, erasure-coding
>Affects Versions: 3.3.0, 3.3.1
>Reporter: daimin
>Assignee: daimin
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.3.3, 3.2.3
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Reading data on an erasure-coded file with missing replicas (internal blocks 
> of a block group) will cause online reconstruction: the data units are read 
> and decoded into the target missing data. Each DFSStripedInputStream object 
> has a RawErasureDecoder object, and when we do preads concurrently, 
> RawErasureDecoder.decode is invoked concurrently too. 
> RawErasureDecoder.decode is not thread safe; as a result, we occasionally 
> get wrong data from pread.
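
One way to restore safety, sketched without claiming it is the committed fix, 
is to serialize decode() calls on the shared per-stream decoder:

{code:java}
// decodeInputs/erasedIndexes/decodeOutputs are the reconstruction buffers
synchronized (decoder) {
  // decode the surviving data units into the erased (missing) units
  decoder.decode(decodeInputs, erasedIndexes, decodeOutputs);
}
{code}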






[jira] [Reopened] (HDFS-16355) Improve the description of dfs.block.scanner.volume.bytes.per.second

2022-04-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HDFS-16355:
---

> Improve the description of dfs.block.scanner.volume.bytes.per.second
> 
>
> Key: HDFS-16355
> URL: https://issues.apache.org/jira/browse/HDFS-16355
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, hdfs
>Affects Versions: 3.3.1
>Reporter: guophilipse
>Assignee: guophilipse
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.4, 3.3.4
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> The datanode block scanner will be disabled if 
> `dfs.block.scanner.volume.bytes.per.second` is configured to be less than or 
> equal to zero; we can improve the description.






[jira] [Reopened] (HDFS-16501) Print the exception when reporting a bad block

2022-04-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HDFS-16501:
---

> Print the exception when reporting a bad block
> --
>
> Key: HDFS-16501
> URL: https://issues.apache.org/jira/browse/HDFS-16501
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: qinyuren
>Assignee: qinyuren
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.4, 3.3.4
>
> Attachments: image-2022-03-10-19-27-31-622.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> !image-2022-03-10-19-27-31-622.png|width=847,height=27!
> Currently, the VolumeScanner will find a bad block and report it to the 
> namenode without printing the reason why the block is bad. I think we had 
> better print the exception in the log file.






[jira] [Reopened] (HDFS-11041) Unable to unregister FsDatasetState MBean if DataNode is shutdown twice

2022-04-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-11041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HDFS-11041:
---

> Unable to unregister FsDatasetState MBean if DataNode is shutdown twice
> ---
>
> Key: HDFS-11041
> URL: https://issues.apache.org/jira/browse/HDFS-11041
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Trivial
> Fix For: 3.4.0, 2.10.2, 3.2.3, 3.3.4
>
> Attachments: HDFS-11041.01.patch, HDFS-11041.02.patch, 
> HDFS-11041.03.patch
>
>
> I saw an error message like the following in some tests
> {noformat}
> 2016-10-21 04:09:03,900 [main] WARN  util.MBeans 
> (MBeans.java:unregister(114)) - Error unregistering 
> Hadoop:service=DataNode,name=FSDatasetState-33cd714c-0b1a-471f-8efe-f431d7d874bc
> javax.management.InstanceNotFoundException: 
> Hadoop:service=DataNode,name=FSDatasetState-33cd714c-0b1a-471f-8efe-f431d7d874bc
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:427)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546)
>   at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:112)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.shutdown(FsDatasetImpl.java:2127)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:2016)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:1985)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1962)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1936)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1929)
>   at 
> org.apache.hadoop.hdfs.TestDatanodeReport.testDatanodeReport(TestDatanodeReport.java:144)
> {noformat}
> The test shuts down a datanode and then shuts down the cluster, which shuts 
> down that datanode twice. Resetting the FsDatasetSpi reference in DataNode to 
> null resolves the issue.






[jira] [Reopened] (HDFS-16428) Source path with storagePolicy cause wrong typeConsumed while rename

2022-04-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HDFS-16428:
---

> Source path with storagePolicy cause wrong typeConsumed while rename
> 
>
> Key: HDFS-16428
> URL: https://issues.apache.org/jira/browse/HDFS-16428
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: lei w
>Assignee: lei w
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.3, 3.3.4
>
> Attachments: example.txt
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> When computing quota in a rename operation, we use the storage policy of the 
> target directory to compute the source quota usage. This causes a wrong 
> typeConsumed value when a storage policy has been set on the source path. I 
> provided a unit test to demonstrate this situation.






[jira] [Reopened] (HDFS-16507) [SBN read] Avoid purging edit log which is in progress

2022-04-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HDFS-16507:
---

> [SBN read] Avoid purging edit log which is in progress
> --
>
> Key: HDFS-16507
> URL: https://issues.apache.org/jira/browse/HDFS-16507
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: tomscut
>Assignee: tomscut
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.4, 3.3.4
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> We introduced the [Standby Read] feature in branch-3.1.0, but found a FATAL 
> exception. It looks like it is purging an edit log which is in progress.
> According to the analysis, I suspect that the in-progress editlog to be 
> purged (after the SNN checkpoint) is not finalized (see HDFS-14317) before 
> the ANN rolls its own edits.
> The stack:
> {code:java}
> java.lang.Thread.getStackTrace(Thread.java:1552)
>     org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1032)
>     
> org.apache.hadoop.hdfs.server.namenode.FileJournalManager.purgeLogsOlderThan(FileJournalManager.java:185)
>     
> org.apache.hadoop.hdfs.server.namenode.JournalSet$5.apply(JournalSet.java:623)
>     
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:388)
>     
> org.apache.hadoop.hdfs.server.namenode.JournalSet.purgeLogsOlderThan(JournalSet.java:620)
>     
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.purgeLogsOlderThan(FSEditLog.java:1512)
> org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager.purgeOldStorage(NNStorageRetentionManager.java:177)
>     
> org.apache.hadoop.hdfs.server.namenode.FSImage.purgeOldStorage(FSImage.java:1249)
>     
> org.apache.hadoop.hdfs.server.namenode.ImageServlet$2.run(ImageServlet.java:617)
>     
> org.apache.hadoop.hdfs.server.namenode.ImageServlet$2.run(ImageServlet.java:516)
>     java.security.AccessController.doPrivileged(Native Method)
>     javax.security.auth.Subject.doAs(Subject.java:422)
>     
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>     
> org.apache.hadoop.hdfs.server.namenode.ImageServlet.doPut(ImageServlet.java:515)
>     javax.servlet.http.HttpServlet.service(HttpServlet.java:710)
>     javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>     org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
>     
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
>     
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1604)
>     
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>     org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>     
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>     org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>     
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>     
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>     
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>     
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>     org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>     
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>     
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>     
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>     
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>     
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>     org.eclipse.jetty.server.Server.handle(Server.java:539)
>     org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
>     
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>     
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>     org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>     
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>     
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>     
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>     
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
> {code}

[jira] [Reopened] (HDFS-16437) ReverseXML processor doesn't accept XML files without the SnapshotDiffSection.

2022-04-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HDFS-16437:
---

> ReverseXML processor doesn't accept XML files without the SnapshotDiffSection.
> --
>
> Key: HDFS-16437
> URL: https://issues.apache.org/jira/browse/HDFS-16437
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.1, 3.3.0
>Reporter: yanbin.zhang
>Assignee: yanbin.zhang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.3, 3.3.4
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> In a cluster environment without snapshots, converting the generated XML back 
> into an fsimage fails with the error below.
> {code:java}
> // code placeholder
> [test@test001 ~]$ hdfs oiv -p ReverseXML -i fsimage_0257220.xml 
> -o fsimage_0257220
> OfflineImageReconstructor failed: FSImage XML ended prematurely, without 
> including section(s) SnapshotDiffSection
> java.io.IOException: FSImage XML ended prematurely, without including 
> section(s) SnapshotDiffSection
>         at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1765)
>         at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1842)
>         at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:211)
>         at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:149)
> 22/01/25 15:56:52 INFO util.ExitUtil: Exiting with status 1: ExitException 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-16422) Fix thread safety of EC decoding during concurrent preads

2022-04-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HDFS-16422:
---

> Fix thread safety of EC decoding during concurrent preads
> -
>
> Key: HDFS-16422
> URL: https://issues.apache.org/jira/browse/HDFS-16422
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfsclient, ec, erasure-coding
>Affects Versions: 3.3.0, 3.3.1
>Reporter: daimin
>Assignee: daimin
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.3, 3.3.4
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Reading data on an erasure-coded file with missing replicas (internal blocks 
> of a block group) will cause online reconstruction: read the dataUnits part of 
> the data and decode it into the missing target data. Each DFSStripedInputStream 
> object has a RawErasureDecoder object, and when we do preads concurrently, 
> RawErasureDecoder.decode is invoked concurrently too. RawErasureDecoder.decode 
> is not thread safe; as a result, we occasionally get wrong data back from pread.
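> 
> A minimal sketch of one possible fix, assuming we simply serialize access to 
> the shared decoder (the wrapper class below is illustrative, not the actual 
> patch):
> {code:java}
> import java.io.IOException;
> import java.nio.ByteBuffer;
> import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder;
> 
> /** Hypothetical wrapper that serializes calls into one shared decoder. */
> class SynchronizedDecoder {
>   private final RawErasureDecoder decoder;
> 
>   SynchronizedDecoder(RawErasureDecoder decoder) {
>     this.decoder = decoder;
>   }
> 
>   /** decode() mutates internal coder state, so guard it with a lock. */
>   synchronized void decode(ByteBuffer[] inputs, int[] erasedIndexes,
>       ByteBuffer[] outputs) throws IOException {
>     decoder.decode(inputs, erasedIndexes, outputs);
>   }
> }
> {code}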



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-16523) Fix dependency error in hadoop-hdfs on M1 Mac

2022-03-29 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-16523.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

> Fix dependency error in hadoop-hdfs on M1 Mac
> -
>
> Key: HDFS-16523
> URL: https://issues.apache.org/jira/browse/HDFS-16523
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
> Environment: M1 Pro Mac
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> hadoop-hdfs build is failing on docker with M1 Mac.
> {code}
> [WARNING]
> Dependency convergence error for
> org.fusesource.hawtjni:hawtjni-runtime:jar:1.11:provided paths to
> dependency are:
> +-org.apache.hadoop:hadoop-hdfs:jar:3.4.0-SNAPSHOT
>   +-org.openlabtesting.leveldbjni:leveldbjni-all:jar:1.8:compile
> +-org.openlabtesting.leveldbjni:leveldbjni:jar:1.8:provided
>   +-org.fusesource.hawtjni:hawtjni-runtime:jar:1.11:provided
> and
> +-org.apache.hadoop:hadoop-hdfs:jar:3.4.0-SNAPSHOT
>   +-org.openlabtesting.leveldbjni:leveldbjni-all:jar:1.8:compile
> +-org.fusesource.leveldbjni:leveldbjni-osx:jar:1.8:provided
>   +-org.fusesource.leveldbjni:leveldbjni:jar:1.8:provided
> +-org.fusesource.hawtjni:hawtjni-runtime:jar:1.9:provided
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-16383) hdfs fail

2021-12-14 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-16383.
---
Resolution: Invalid

> hdfs fail
> -
>
> Key: HDFS-16383
> URL: https://issues.apache.org/jira/browse/HDFS-16383
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: .
>Reporter: Pravin Pawar
>Priority: Minor
>
> .



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-16025) adls test suite TestAdlContractGetFileStatusLive failing with no assertJ on the classpath

2021-05-12 Thread Steve Loughran (Jira)
Steve Loughran created HDFS-16025:
-

 Summary: adls test suite TestAdlContractGetFileStatusLive failing 
with no assertJ on the classpath
 Key: HDFS-16025
 URL: https://issues.apache.org/jira/browse/HDFS-16025
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs/adl, test
Affects Versions: 3.3.1
Reporter: Steve Loughran


Reported on PR #2482: https://github.com/apache/hadoop/pull/2842 ; CNFE on 
assertJ assertions in adls test runs. 

Cause will be HADOOP-17281, which added the asserts to the existing fs contract 
test. We need to mark assertJ as an export of the hadoop-common suite, or work 
out why hadoop-azuredatalake isn't picking it up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15984) Deleted data on the Web UI must be saved to the trash

2021-04-16 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-15984.
---
Resolution: Duplicate

> Deleted data on the Web UI must be saved to the trash 
> --
>
> Key: HDFS-15984
> URL: https://issues.apache.org/jira/browse/HDFS-15984
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Bhavik Patel
>Priority: Major
>
> If we delete data from the Web UI then it should first be moved to the 
> configured/default Trash directory and removed only after the trash interval 
> time. Currently, data is removed from the system directly [this behavior 
> should be the same as the CLI command].
>  
> This can be helpful when the user accidentally deletes data from the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15983) Deleted data on the Web UI must be saved to the trash

2021-04-16 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-15983.
---
Resolution: Duplicate

> Deleted data on the Web UI must be saved to the trash 
> --
>
> Key: HDFS-15983
> URL: https://issues.apache.org/jira/browse/HDFS-15983
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Bhavik Patel
>Priority: Major
>
> If we delete data from the Web UI then it should first be moved to the 
> configured/default Trash directory and removed only after the trash interval 
> time. Currently, data is removed from the system directly [this behavior 
> should be the same as the CLI command].
>  
> This can be helpful when the user accidentally deletes data from the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-12263) Revise StreamCapabilities doc to describe the API usage and the requirements for customized OutputStream implementation

2021-02-10 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-12263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-12263.
---
Resolution: Duplicate

> Revise StreamCapabilities doc to describe the API usage and the requirements 
> for customized OutputStream implementation
> 
>
> Key: HDFS-12263
> URL: https://issues.apache.org/jira/browse/HDFS-12263
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Major
>
> [~busbey] raised the concern that we should call out the expected way to call 
> {{StreamCapabilities}} from the client side. This doc should also describe the 
> rules for any {{FSOutputStream}} implementation to follow.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15680) Disable Broken Azure Junits

2020-11-26 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-15680.
---
Resolution: Not A Problem

> Disable Broken Azure Junits
> ---
>
> Key: HDFS-15680
> URL: https://issues.apache.org/jira/browse/HDFS-15680
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> There are 6 test classes that have been failing on Yetus for several months. 
> They contribute to more than 41 failing tests, which makes reviewing every 
> Yetus report a pain in the neck. Another point is to save resources and avoid 
> tying up ports, memory, and CPU.
> Over the last month, there was some effort to bring Yetus back to a stable 
> state. However, there has been no progress in addressing the Azure failures.
> Generally, I do not like to disable failing tests, but for this specific 
> case, I do not think it makes any sense to have 41 failing tests from one 
> module for several months. Whenever someone finds that those tests are 
> useful, they can re-enable the tests on Yetus *_after_* the test is fixed.
> Following a PR, I have to review that my patch does not cause any failures 
> (including changed error messages in existing tests). A thorough review takes 
> a considerable amount of time browsing the nightly builds and GitHub reports.
> So, please consider how much time has been spent reviewing those stack traces 
> over the last months.
> Finally, this is one of the reasons developers tend to ignore the reports: it 
> would take too much time to review, and by default the errors are considered 
> irrelevant.
> CC: [~aajisaka], [~elgoiri], [~weichiu], [~ayushtkn]
> {code:bash}
>   hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked 
>hadoop.fs.azure.TestNativeAzureFileSystemMocked 
>hadoop.fs.azure.TestBlobMetadata 
>hadoop.fs.azure.TestNativeAzureFileSystemConcurrency 
>hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck 
>hadoop.fs.azure.TestNativeAzureFileSystemContractMocked 
>hadoop.fs.azure.TestWasbFsck 
>hadoop.fs.azure.TestOutOfBandAzureBlobOperations 
> {code}
> {code:bash}
> org.apache.hadoop.fs.azure.TestBlobMetadata.testFolderMetadata
> org.apache.hadoop.fs.azure.TestBlobMetadata.testFirstContainerVersionMetadata
> org.apache.hadoop.fs.azure.TestBlobMetadata.testPermissionMetadata
> org.apache.hadoop.fs.azure.TestBlobMetadata.testOldPermissionMetadata
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency.testNoTempBlobsVisible
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency.testLinkBlobs
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testListStatusRootDir
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testRenameDirectoryMoveToExistingDirectory
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testListStatus
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testRenameDirectoryAsExistingDirectory
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testRenameToDirWithSamePrefixAllowed
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testLSRootDir
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testDeleteRecursively
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck.testWasbFsck
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testChineseCharactersFolderRename
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRedoRenameFolderInFolderListingWithZeroByteRenameMetadata
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRedoRenameFolderInFolderListing
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testUriEncoding
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testDeepFileCreation
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testListDirectory
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRedoRenameFolderRenameInProgress
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRenameFolder
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRenameImplicitFolder
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRedoRenameFolder
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testStoreDeleteFolder
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRename
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked.testListStatus
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked.testRenameDirectoryAsEmptyDirectory
> {code}

[jira] [Resolved] (HDFS-15673) Cannot set priority of datanode process 6667

2020-11-07 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-15673.
---
Resolution: Invalid

> Cannot set priority of datanode process 6667
> 
>
> Key: HDFS-15673
> URL: https://issues.apache.org/jira/browse/HDFS-15673
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: 3.1.1
> Environment: ambari 2.7.4.0 HDP-3.1.4.0 hadoop3.1.1
>Reporter: chenxiaoyong
>Priority: Major
>
> ambari 2.7.4.0 HDP-3.1.4.0 hadoop3.1.1
> When I start the datanodes via Ambari, 4 datanodes start successfully and 1 
> datanode fails to start:
> {code:java}
> stderr:   /var/lib/ambari-agent/data/errors-909.txt
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HDFS/package/scripts/datanode.py",
>  line 126, in 
> DataNode().execute()
>   File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
> line 352, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HDFS/package/scripts/datanode.py",
>  line 68, in start
> datanode(action="start")
>   File "/usr/lib/ambari-agent/lib/ambari_commons/os_family_impl.py", line 89, 
> in thunk
> return fn(*args, **kwargs)
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HDFS/package/scripts/hdfs_datanode.py",
>  line 71, in datanode
> create_log_dir=True
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HDFS/package/scripts/utils.py",
>  line 261, in service
> Execute(daemon_cmd, not_if=process_id_exists_command, 
> environment=hadoop_env_exports)
>   File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 
> 166, in __init__
> self.env.run()
>   File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", 
> line 263, in action_run
> returns=self.resource.returns)
>   File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 
> 72, in inner
> result = function(command, **kwargs)
>   File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 
> 102, in checked_call
> tries=tries, try_sleep=try_sleep, 
> timeout_kill_strategy=timeout_kill_strategy, returns=returns)
>   File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 
> 150, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 
> 314, in _call
> raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 
> 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  
> /usr/hdp/3.1.4.0-315/hadoop/bin/hdfs --config 
> /usr/hdp/3.1.4.0-315/hadoop/conf --daemon start datanode'' returned 1. ERROR: 
> Cannot set priority of datanode process 31009
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15587) Hadoop Client version 3.2.1 vulnerability

2020-09-18 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-15587.
---
Resolution: Invalid

> Hadoop Client version 3.2.1 vulnerability
> -
>
> Key: HDFS-15587
> URL: https://issues.apache.org/jira/browse/HDFS-15587
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: Laszlo Czol
>Priority: Minor
>
>  I'm having a problem using hadoop-client version 3.2.1 in my dependency 
> tree. It has a vulnerable jar: org.apache.hadoop : 
> hadoop-mapreduce-client-core : 3.2.1. The code for the vulnerability is 
> CVE-2017-3166: basically, _if a file in an encryption zone with access 
> permissions that make it world readable is localized via YARN's localization 
> mechanism, that file will be stored in a world-readable location and can be 
> shared freely with any application that requests to localize that file_. The 
> problem is that if I update to the 3.3.0 hadoop-client version the 
> vulnerability remains, and I would not downgrade to version 2.8.1, which is 
> the next non-vulnerable version.
> Do you have any roadmap or any plan for this?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15471) TestHDFSContractMultipartUploader fails on trunk

2020-08-28 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-15471.
---
Fix Version/s: 3.3.1
   Resolution: Fixed

> TestHDFSContractMultipartUploader fails on trunk
> 
>
> Key: HDFS-15471
> URL: https://issues.apache.org/jira/browse/HDFS-15471
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Ahmed Hussein
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available, test
> Fix For: 3.3.1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {{TestHDFSContractMultipartUploader}} fails on trunk with 
> {{IllegalArgumentException}}
> {code:bash}
> [ERROR] 
> testConcurrentUploads(org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader)
>   Time elapsed: 0.127 s  <<< ERROR!
> java.lang.IllegalArgumentException
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:127)
>   at 
> org.apache.hadoop.test.LambdaTestUtils$ProportionalRetryInterval.(LambdaTestUtils.java:907)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractMultipartUploaderTest.testConcurrentUploads(AbstractContractMultipartUploaderTest.java:815)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15371) Nonstandard characters exist in NameNode.java

2020-07-14 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-15371.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

fixed in trunk, thanks

> Nonstandard characters exist in NameNode.java
> -
>
> Key: HDFS-15371
> URL: https://issues.apache.org/jira/browse/HDFS-15371
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: jianghua zhu
>Assignee: Zhao Yi Ming
>Priority: Minor
> Fix For: 3.4.0
>
>
> In NameNode.java, DFS_HA_ZKFC_PORT_KEY has non-standard characters after it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-15466) remove src/main/resources/META-INF/services/org.apache.hadoop.fs.MultipartUploader file

2020-07-13 Thread Steve Loughran (Jira)
Steve Loughran created HDFS-15466:
-

 Summary: remove 
src/main/resources/META-INF/services/org.apache.hadoop.fs.MultipartUploader 
file
 Key: HDFS-15466
 URL: https://issues.apache.org/jira/browse/HDFS-15466
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: fs, fs/s3
Affects Versions: 3.3.1
Reporter: Steve Loughran


Follow-on to HDFS-13934. (and as usual, only noticed once that is in)

we no longer need the service declarations in
src/main/resources/META-INF/services/org.apache.hadoop.fs.MultipartUploader

There's no harm in having them there (the service loading is no longer used), 
but we should still cut it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-13934) Multipart uploaders to be created through API call to FileSystem/FileContext, not service loader

2020-07-13 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-13934.
---
Fix Version/s: 3.3.1
   Resolution: Fixed

merged to trunk & 3.3. No plans to backport further

> Multipart uploaders to be created through API call to FileSystem/FileContext, 
> not service loader
> 
>
> Key: HDFS-13934
> URL: https://issues.apache.org/jira/browse/HDFS-13934
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, fs/s3, hdfs
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.1
>
>
> the Multipart Uploaders are created via service loaders. This is troublesome
> # HADOOP-12636, HADOOP-13323, HADOOP-13625 highlight how the load process 
> forces the transient loading of dependencies.  If a dependent class cannot be 
> loaded (e.g aws-sdk is not on the classpath), that service won't load. 
> Without error handling round the load process, this stops any uploader from 
> loading. Even with that error handling, the performance hit of that load, 
> especially with reshaded dependencies, hurts performance (HADOOP-13138).
> # it makes wrapping the load with any filter impossible, and stops transitive 
> binding through viewFS, mocking, etc.
> # It complicates security in a kerberized world. If you have an FS instance 
> of user A, then you should be able to create an MPU instance with that user's 
> permissions. currently, if a service were to try to create one, you'd be 
> looking at doAs() games around the service loading, and a more complex bind 
> process.
> Proposed
> # remove the service loader mech entirely
> # add to FS & FC as createMultipartUploader(path) call, which will create one 
> bound to the current FS, with its permissions, DTs, etc.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-15435) HdfsDtFetcher only fetches first DT of a filesystem

2020-06-24 Thread Steve Loughran (Jira)
Steve Loughran created HDFS-15435:
-

 Summary: HdfsDtFetcher only fetches first DT of a filesystem
 Key: HDFS-15435
 URL: https://issues.apache.org/jira/browse/HDFS-15435
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs, security
Affects Versions: 3.3.0
Reporter: Steve Loughran


Similar to HDFS-15433: only a single DT per FS is picked up.

Here the fault is in org.apache.hadoop.hdfs.HdfsDtFetcher. 
Found in HADOOP-17077



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-15433) hdfs fetchdt command only fetches first DT of a filesystem

2020-06-24 Thread Steve Loughran (Jira)
Steve Loughran created HDFS-15433:
-

 Summary: hdfs fetchdt command only fetches first DT of a filesystem
 Key: HDFS-15433
 URL: https://issues.apache.org/jira/browse/HDFS-15433
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs
Affects Versions: 3.3.0
Reporter: Steve Loughran


the {{hdfs fetchdt}} command only fetches the first DT of a filesystem, not any 
other tokens issued (e.g. KMS tokens).
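
For contrast, a minimal sketch of collecting every token a filesystem issues, 
using the standard FileSystem API (the renewer name is illustrative):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;

static void printAllTokens(FileSystem fs) throws IOException {
  Credentials creds = new Credentials();
  // addDelegationTokens() asks the FS for *all* of its tokens (HDFS, KMS, ...)
  Token<?>[] issued = fs.addDelegationTokens("yarn", creds);
  for (Token<?> t : issued) {
    System.out.println(t.getKind() + " for " + t.getService());
  }
}
{code}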



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15101) Only few files[5] created with empty content out of 10000 files ingested to hdfs

2020-01-10 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-15101.
---
Resolution: Cannot Reproduce

> Only few files[5] created with empty content out of 10000 files ingested to 
> hdfs
> 
>
> Key: HDFS-15101
> URL: https://issues.apache.org/jira/browse/HDFS-15101
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
> Environment: Production
>Reporter: Chandrashekar S
>Priority: Major
>
> When we ingest files through Spark Streaming, we find that a few files are 
> empty and 99.9% of files are created properly with contents. In the YARN logs 
> we found that an interruption was generated at each failure to write contents 
> to a file.
>  
>  
>  
> 20/01/08 16:43:16 INFO DFSClient: Exception in createBlockOutputStream
> java.io.InterruptedIOException: Interrupted while waiting for IO on channel 
> java.nio.channels.SocketChannel[connected local=/10.136.184.59:51154 
> remote=/10.136.184.59:1019]. 75000 millis timeout left.
>  at 
> org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
>  at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>  at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
>  at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
>  at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
>  at java.io.FilterInputStream.read(FilterInputStream.java:83)
>  at java.io.FilterInputStream.read(FilterInputStream.java:83)
>  at 
> org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2390)
>  at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1455)
>  at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1374)
>  at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:552)
> 20/01/08 16:43:16 INFO BlockManager: Removing RDD 6
> 20/01/08 16:43:16 INFO DFSClient: Abandoning 
> BP-383742638-10.136.184.33-1429219667936:blk_2355453392_1283401139
> 20/01/08 16:43:16 WARN Client: interrupted waiting to send rpc request to 
> server
> java.lang.InterruptedException
>  at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:404)
>  at java.util.concurrent.FutureTask.get(FutureTask.java:191)
>  at org.apache.hadoop.ipc.Client$Connection.sendRpcRequest(Client.java:1094)
>  at org.apache.hadoop.ipc.Client.call(Client.java:1457)
>  at org.apache.hadoop.ipc.Client.call(Client.java:1398)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>  at com.sun.proxy.$Proxy17.abandonBlock(Unknown Source)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.abandonBlock(ClientNamenodeProtocolTranslatorPB.java:436)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:291)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:203)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:185)
>  at com.sun.proxy.$Proxy18.abandonBlock(Unknown Source)
>  at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1378)
>  at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:552)
> 20/01/08 16:43:16 WARN DFSClient: DataStreamer Exception
> java.io.IOException: java.lang.InterruptedException
>  at org.apache.hadoop.ipc.Client.call(Client.java:1463)
>  at org.apache.hadoop.ipc.Client.call(Client.java:1398)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>  at com.sun.proxy.$Proxy17.abandonBlock(Unknown Source)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.abandonBlock(ClientNamenodeProtocolTranslatorPB.java:436)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:291)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:203)
>  at 
> 

[jira] [Created] (HDFS-15042) add more tests for ByteBufferPositionedReadable

2019-12-09 Thread Steve Loughran (Jira)
Steve Loughran created HDFS-15042:
-

 Summary: add more tests for ByteBufferPositionedReadable 
 Key: HDFS-15042
 URL: https://issues.apache.org/jira/browse/HDFS-15042
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: fs, test
Affects Versions: 3.3.0
Reporter: Steve Loughran
Assignee: Steve Loughran


There are a few corner cases of ByteBufferPositionedReadable which need to be 
tested, mainly illegal read positions. Add them.
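
One illustrative corner case, assuming the stream implements 
ByteBufferPositionedReadable (which exact exception a negative position must 
raise is part of what the tests need to pin down):

{code:java}
import java.io.EOFException;
import java.nio.ByteBuffer;
import org.apache.hadoop.fs.FSDataInputStream;

void expectFailureOnNegativePosition(FSDataInputStream in) throws Exception {
  ByteBuffer buf = ByteBuffer.allocate(16);
  try {
    in.read(-1L, buf);  // ByteBufferPositionedReadable#read(long, ByteBuffer)
    throw new AssertionError("expected a failure for a negative position");
  } catch (EOFException | IllegalArgumentException expected) {
    // the contract test asserts which of these is the required behaviour
  }
}
{code}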



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2112) rename is behaving different compared with HDFS

2019-09-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDDS-2112.
--
Resolution: Won't Fix

> rename is behaving different compared with HDFS
> ---
>
> Key: HDDS-2112
> URL: https://issues.apache.org/jira/browse/HDDS-2112
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Istvan Fajth
>Priority: Major
> Attachments: demonstrative_test.patch
>
>
> I am attaching a patch file, that introduces two new tests for the 
> OzoneFileSystem implementation which demonstrates the expected behaviour.
> Case 1:
> Given a file "/source/subdir/file" and a directory "/target"
> When fs.rename("/source/subdir/file", "/target/subdir/file") is called
> Then DistributedFileSystem (HDFS) returns false from the method, while 
> OzoneFileSystem throws a FileNotFoundException as "/target/subdir" does not 
> exist.
> The expected behaviour would be to return false in this case instead of 
> throwing an exception, so that it behaves the same as DistributedFileSystem.
>  
> Case 2:
> Given a directory "/source" and a file "/targetFile"
> When fs.rename("/source", "/targetFile") is called
> Then DistributedFileSystem (HDFS) returns false from the method, while 
> OzoneFileSystem throws a FileAlreadyExistsException as "/targetFile" does 
> exist.
> The expected behaviour would be to return false in this case instead of 
> throwing an exception, so that it behaves the same as DistributedFileSystem.
>  
> It may also be considered a bug in HDFS; however, it is not clear from the 
> FileSystem interface's documentation on the two rename methods in which cases 
> an exception should be thrown and in which cases returning false is the 
> expected behaviour.
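> A sketch of Case 1's expectation as a contract-style assertion (paths taken 
> from the description; assertFalse is the usual JUnit assert):
> {code:java}
> Path src = new Path("/source/subdir/file");
> Path dst = new Path("/target/subdir/file");  // /target/subdir does not exist
> // DistributedFileSystem returns false here; OzoneFileSystem currently
> // throws FileNotFoundException instead.
> assertFalse(fs.rename(src, dst));
> {code}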



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14319) checksumFS doesn't wrap concat(): concatenated files don't have checksums

2019-02-26 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-14319:
-

 Summary: checksumFS doesn't wrap concat(): concatenated files 
don't have checksums
 Key: HDFS-14319
 URL: https://issues.apache.org/jira/browse/HDFS-14319
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs
Affects Versions: 3.2.0
Reporter: Steve Loughran


Follow-on from HADOOP-16107. FilterFS passes through the concat operation, and 
checksum FS doesn't override that call -so files created through concat *do not 
have checksums*.

If people are using a checksummed fs directly with the expectation that they 
will, that expectation is not being met. 

What to do?

* fail always?
* fail if checksums are enabled?
* try and implement the concat operation from raw local up at the checksum level

append() just gives up always; doing the same for concat would be the simplest. 
Again, this brings us back to "need a way to see if an FS supports a feature 
before invocation"; here checksum fs would reject both append and concat.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14093) HDFS to pass new next read pos tests

2018-11-21 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-14093:
-

 Summary: HDFS to pass new next read pos tests
 Key: HDFS-14093
 URL: https://issues.apache.org/jira/browse/HDFS-14093
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 3.3.0
Reporter: Steve Loughran
 Attachments: HADOOP-15870-002.patch

submit patches of HADOOP-15920 to the HDFS yetus runs, see what they say, tune 
tests/HDFS as appropriate



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14060) HDFS fetchdt command to return error codes on success/failure

2018-11-09 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-14060:
-

 Summary: HDFS fetchdt command to return error codes on 
success/failure
 Key: HDFS-14060
 URL: https://issues.apache.org/jira/browse/HDFS-14060
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 3.3.0
Reporter: Steve Loughran


The {{hdfs fetchdt}} command always returns 0, even when there's been an error 
(no token issued, no file to load, usage, etc). This means it's not that useful 
as a command line tool for testing or in scripts.

Proposed: exit non-zero for errors; reuse LauncherExitCodes for these.
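
A sketch of the proposal, assuming the constants in 
org.apache.hadoop.service.launcher.LauncherExitCodes; fetchTokens() is a 
hypothetical worker method:

{code:java}
import org.apache.hadoop.service.launcher.LauncherExitCodes;

int runFetchdt(String[] args) {
  try {
    fetchTokens(args);                      // hypothetical worker method
    return LauncherExitCodes.EXIT_SUCCESS;  // 0 only when a token was fetched
  } catch (IllegalArgumentException usage) {
    return LauncherExitCodes.EXIT_USAGE;    // bad command line
  } catch (Exception e) {
    return LauncherExitCodes.EXIT_FAIL;     // anything else is non-zero
  }
}
{code}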



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13951) HDFS DelegationTokenFetcher can't print non-HDFS tokens in a tokenfile

2018-10-02 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-13951:
-

 Summary: HDFS DelegationTokenFetcher can't print non-HDFS tokens 
in a tokenfile
 Key: HDFS-13951
 URL: https://issues.apache.org/jira/browse/HDFS-13951
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 3.2.0
Reporter: Steve Loughran
Assignee: Steve Loughran


the fetchdt command can fetch tokens for filesystems other than hdfs (s3a, 
abfs, etc), but it can't print them, as it assumes all tokens in the file are 
subclasses of 
{{org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier}} 
& uses this fact in its decoding. It deserializes the token byte array without 
checking kind and so ends up with invalid data.

Fix: ask the tokens to decode themselves; only call toStableString() if an HDFS 
token.
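
A sketch of the fix's shape, assuming we branch on the token kind before 
decoding (the printing details are illustrative):

{code:java}
import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;

void printTokens(Credentials creds) {
  for (Token<?> token : creds.getAllTokens()) {
    if (DelegationTokenIdentifier.HDFS_DELEGATION_KIND.equals(token.getKind())) {
      // only an HDFS token is safe to decode with the HDFS identifier class
      System.out.println(token.toString());
    } else {
      // let non-HDFS tokens (s3a, abfs, KMS, ...) render themselves
      System.out.println(token.getKind() + " token for " + token.getService());
    }
  }
}
{code}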



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-8878) An HDFS built-in DistCp

2018-09-25 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-8878.
--
Resolution: Duplicate

> An HDFS built-in DistCp 
> 
>
> Key: HDFS-8878
> URL: https://issues.apache.org/jira/browse/HDFS-8878
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Linxiao Jin
>Assignee: Linxiao Jin
>Priority: Major
>
> For now, we use DistCp to do directory copies, which works quite well. 
> However, it would be better if there were an HDFS built-in, efficient, 
> directory copy tool. It could be faster by cutting off the redundant 
> communication between HDFS, YARN and MapReduce. It could also release the 
> resources DistCp consumes in the job tracker and YARN, and be easier to debug.
> We need more discussion on the new protocol between NN and DN from different 
> clusters to achieve HDFS-level command sending and data transfer. One 
> available hacky solution could be: the srcNN gets the block distribution of 
> the target file and asks each datanode to start a DFSClient and copy its local 
> short-circuited block as a file in the dst cluster. After all the block files 
> in the dst cluster are completed, use a DFSClient to concat them together to 
> form the target destination file. There might be a more optimized solution: 
> implement a newly designed protocol to communicate across clusters rather 
> than via DFSClient, using methods from a lower layer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13940) Implement Multipart-aware distcp-equivalent

2018-09-25 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-13940:
-

 Summary: Implement Multipart-aware distcp-equivalent
 Key: HDFS-13940
 URL: https://issues.apache.org/jira/browse/HDFS-13940
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: fs
Affects Versions: 3.3.0
Reporter: Steve Loughran


the per-block upload would permit a high-performance distcp
* within the same FS
* when writing to object stores with multipart uploads

Implement this, if initially as a proof of concept to validate the completeness 
of the API.

One thing to consider is throttling upload bandwidth better: there's the 
potential here to overload the long-haul links, even when uploading a 
few files (as the peak bandwidth which a single file being copied may use 
is O(blocks)).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13937) Multipart Uploader APIs to be marked as private/unstable in 3.2.0

2018-09-24 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-13937:
-

 Summary: Multipart Uploader APIs to be marked as private/unstable 
in 3.2.0
 Key: HDFS-13937
 URL: https://issues.apache.org/jira/browse/HDFS-13937
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: fs
Affects Versions: 3.2.0
Reporter: Steve Loughran
Assignee: Steve Loughran


HDFS-13717 shows that the MPU stuff isn't yet stable. Mark the interfaces as 
private/unstable and postpone the rest of that patch until after.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13936) multipart upload to HDFS to support 0 byte upload

2018-09-24 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-13936:
-

 Summary: multipart upload to HDFS to support 0 byte upload
 Key: HDFS-13936
 URL: https://issues.apache.org/jira/browse/HDFS-13936
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: fs, hdfs
Affects Versions: 3.2.0
Reporter: Steve Loughran
Assignee: Ewan Higgs


MPUs to HDFS fail as you can't concat an empty block. 

Whatever uploads to HDFS needs to recognise the specific case of a "0-byte 
file" and, rather than trying to concat things, just create a 0-byte file there.

Without this, you can't use MPU as a replacement for distcp or alternative 
commit protocols.
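
A sketch of the special case, assuming the HDFS uploader tracks the paths of 
its uploaded parts (partPaths and dest are illustrative):

{code:java}
// on complete(): no parts uploaded means there is nothing to concat
if (partPaths.isEmpty()) {
  fs.create(dest).close();  // just materialize a 0-byte file
} else {
  fs.concat(dest, partPaths.toArray(new Path[0]));
}
{code}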



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13934) Multipart uploaders to be created through API call to FileSystem/FileContext, not service loader

2018-09-23 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-13934:
-

 Summary: Multipart uploaders to be created through API call to 
FileSystem/FileContext, not service loader
 Key: HDFS-13934
 URL: https://issues.apache.org/jira/browse/HDFS-13934
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: fs, fs/s3, hdfs
Affects Versions: 3.2.0
Reporter: Steve Loughran


the Multipart Uploaders are created via service loaders. This is troublesome

# HADOOP-12636, HADOOP-13323, HADOOP-13625 highlight how the load process 
forces the transient loading of dependencies.  If a dependent class cannot be 
loaded (e.g aws-sdk is not on the classpath), that service won't load. Without 
error handling round the load process, this stops any uploader from loading. 
Even with that error handling, the performance hit of that load, especially 
with reshaded dependencies, hurts performance (HADOOP-13138).
# it makes wrapping the load with any filter impossible, and stops transitive 
binding through viewFS 
# It complicates security in a kerberized world. If you have an FS instance of 
user A, then you should be able to create an MPU instance with that user's 
permissions. currently, if a service were to try to create one, you'd be 
looking at doAs() games around the service loading, and a more complex bind 
process.

Proposed
# remove the service loader mech entirely
# add to FS & FC as createMultipartUploader(path) call, which will create one 
bound to the current FS, with its permissions, DTs, etc.
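
A sketch of the proposed call in point 2 (the method name follows the proposal 
text; the final merged API may differ in detail):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.MultipartUploader;
import org.apache.hadoop.fs.Path;

FileSystem fs = FileSystem.get(new Configuration());
Path dest = new Path("/user/alice/uploads");  // illustrative path
// created through the FS instance, so it inherits that user's permissions,
// delegation tokens, etc.: no service loader involved
MultipartUploader uploader = fs.createMultipartUploader(dest);
{code}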




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-13766) HDFS Classes used for implementation of Multipart uploads to move to hadoop-common JAR

2018-09-22 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-13766.
---
Resolution: Duplicate

> HDFS Classes used for implementation of Multipart uploads to move to 
> hadoop-common JAR
> --
>
> Key: HDFS-13766
> URL: https://issues.apache.org/jira/browse/HDFS-13766
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Ewan Higgs
>Priority: Blocker
>
> the multipart upload API uses classes which are only in {{hadoop-hdfs-client}}
> These need to be moved to hadoop-common so that cloud deployments which don't 
> have the hdfs-client JAR on their CP (HD/I, possibly google dataproc) can 
> implement and use the API.
> Sorry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13766) HDFS Classes used for implementation of Multipart uploads to move to hadoop-common JAR

2018-07-25 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-13766:
-

 Summary: HDFS Classes used for implementation of Multipart uploads 
to move to hadoop-common JAR
 Key: HDFS-13766
 URL: https://issues.apache.org/jira/browse/HDFS-13766
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: 3.2.0
Reporter: Steve Loughran


the multipart upload API uses classes which are only in {{hadoop-hdfs-client}}

These need to be moved to hadoop-common so that cloud deployments which don't 
have the hdfs-client JAR on their CP (HD/I, possibly google dataproc) can 
implement and use the API.

Sorry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13713) Add specification of new API to FS specification, with contract tests

2018-07-02 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-13713:
-

 Summary: Add specification of new API to FS specification, with 
contract tests
 Key: HDFS-13713
 URL: https://issues.apache.org/jira/browse/HDFS-13713
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: fs, test
Affects Versions: 3.2.0
Reporter: Steve Loughran
Assignee: Ewan Higgs


There's nothing in the FS spec covering the new API. Add it in a new .md file

* add FS model with the notion of a function mapping (uploadID -> Upload), the 
operations (list, commit, abort). The [TLA+ 
model|https://issues.apache.org/jira/secure/attachment/12865161/objectstore.pdf] 
of HADOOP-13786 shows how to do this.
* Contract tests of not just the successful path, but all the invalid ones.
* implementations of the contract tests of all FSs which support the new API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-12260) StreamCapabilities.StreamCapability should be public.

2018-03-12 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-12260.
---
Resolution: Won't Fix

> StreamCapabilities.StreamCapability should be public.
> -
>
> Key: HDFS-12260
> URL: https://issues.apache.org/jira/browse/HDFS-12260
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Priority: Major
>
> Clients should use the {{StreamCapability}} enum instead of a raw string to 
> query the capability of an OutputStream, for better type safety, IDE support, 
> etc.
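> For reference, the string-based query that this resolution keeps, using the 
> constants in {{StreamCapabilities}} that wrap the raw strings:
> {code:java}
> FSDataOutputStream out = fs.create(path);
> if (out.hasCapability(StreamCapabilities.HFLUSH)) {
>   out.hflush();  // safe: the stream declared the capability
> }
> {code}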



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13113) Use Log.*(Object, Throwable) overload to log exceptions

2018-02-06 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-13113:
-

 Summary: Use Log.*(Object, Throwable) overload to log exceptions
 Key: HDFS-13113
 URL: https://issues.apache.org/jira/browse/HDFS-13113
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode, nfs
Affects Versions: 2.4.0
Reporter: Steve Loughran
Assignee: Andras Bokor


FYI, In HADOOP-10571, [~boky01] is going to clean up a lot of the log 
statements, including some in Datanode and elsewhere.

I'm provisionally +1 on that, but want to run it on the standalone tests (Yetus 
has already done them), and give the HDFS developers warning of a change which 
is going to touch their codebase.

If anyone doesn't want the logging improvements, now is your chance to say so
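
The shape of the cleanup, with an illustrative message and block variable:

{code:java}
// preferred: the (Object, Throwable) overload keeps the full stack trace
LOG.warn("Failed to transfer " + block, e);

// rather than losing the trace by folding the exception into the message
LOG.warn("Failed to transfer " + block + ": " + e);
{code}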



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-12831) HDFS throws FileNotFoundException on getFileBlockLocations(path-to-directory)

2017-11-17 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-12831:
-

 Summary: HDFS throws FileNotFoundException on 
getFileBlockLocations(path-to-directory)
 Key: HDFS-12831
 URL: https://issues.apache.org/jira/browse/HDFS-12831
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.8.1
Reporter: Steve Loughran


The HDFS implementation of {{getFileBlockLocations(path, offset, len)}} throws 
an exception if the path references a directory. 

The base implementation (and all other filesystems) just returns an empty array, 
something implemented in {{getFileBlockLocations(filestatus, offset, len)}} and 
written up in filesystem.md as the correct behaviour. 

# has been shown to break things: SPARK-14959
# there are no contract tests for these APIs; this shows up in HADOOP-15044. 
# even if this is considered a wontfix, it should raise something like 
{{PathIsDirectoryException}} rather than FNFE
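
The documented behaviour, sketched against the FileStatus-based overload that 
the base class implements (dirPath is illustrative):

{code:java}
FileStatus st = fs.getFileStatus(dirPath);           // dirPath is a directory
BlockLocation[] locs = fs.getFileBlockLocations(st, 0, 1);
// filesystem.md: a directory yields an empty array...
assert locs.length == 0;
// ...but the HDFS path-based overload currently throws FileNotFoundException:
fs.getFileBlockLocations(dirPath, 0, 1);
{code}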



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-12101) DFSClient.rename() to unwrap ParentNotDirectoryException; define policy for renames under a file

2017-07-07 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-12101:
-

 Summary: DFSClient.rename() to unwrap ParentNotDirectoryException; 
define policy for renames under a file
 Key: HDFS-12101
 URL: https://issues.apache.org/jira/browse/HDFS-12101
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.8.1
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor


HADOOP-14630 adds some contract tests trying to create files or rename files 
*under other files*.

On a rename under an existing file (or dir under an existing file), HDFS fails 
throwing 
{{org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.ParentNotDirectoryException)}}.
 

# is throwing an exception here what people agree is the correct behaviour? If 
so, it can go into the filesystem spec, with tests set up to expect it and 
object stores tweaked for consistency. If not, HDFS needs a change.
# At the very least, HDFS should be unwrapping the exception.
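
A sketch of the unwrap in point 2, assuming it lands in DFSClient.rename() 
(the surrounding call is illustrative):

{code:java}
try {
  namenode.rename2(src, dst, options);
} catch (RemoteException re) {
  // surface the real cause instead of the wrapped RemoteException
  throw re.unwrapRemoteException(ParentNotDirectoryException.class,
      FileAlreadyExistsException.class,
      FileNotFoundException.class);
}
{code}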



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-11431) hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider

2017-03-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HDFS-11431:
---

> hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider
> ---
>
> Key: HDFS-11431
> URL: https://issues.apache.org/jira/browse/HDFS-11431
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, hdfs-client
>Affects Versions: 2.8.0, 3.0.0-alpha3
>Reporter: Steven Rand
>Assignee: Steven Rand
>Priority: Blocker
>  Labels: maven
> Fix For: 2.8.0, 3.0.0-alpha3
>
> Attachments: HDFS-11431-branch-2.8.0.001.patch, 
> HDFS-11431-branch-2.8.0.002.patch
>
>
> The {{hadoop-hdfs-client-2.8.0.jar}} file does not include the 
> {{ConfiguredFailoverProxyProvider}} class. This breaks client applications 
> that use this class to communicate with the active NameNode in an HA 
> deployment of HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-11214) Upgrade netty-all to 4.1.1.Final

2016-12-06 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-11214:
-

 Summary: Upgrade netty-all to 4.1.1.Final
 Key: HDFS-11214
 URL: https://issues.apache.org/jira/browse/HDFS-11214
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 2.7.3
Reporter: Steve Loughran
Assignee: Ted Yu
 Attachments: HADOOP-13866.v1.patch

Upgrade Netty

this is a clone of HADOOP-13866, created to kick off yetus on HDFS, that being 
where netty is used



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-10668) Fix intermittently failing UT TestDataNodeMXBean#testDataNodeMXBeanBlockCount

2016-07-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HDFS-10668:
---

Breaking the build, I'm afraid. From Jenkins:
{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-hdfs: Compilation failure: Compilation 
failure:
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-branch2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMXBean.java:[123,38]
 local variable mbs is accessed from within inner class; needs to be declared 
final
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-branch2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMXBean.java:[123,43]
 local variable mxbeanName is accessed from within inner class; needs to be 
declared final
[ERROR] -> [Help 1]
{code}

Looks like a simple fix; reverting the change from 2.8+ until a new patch is ready.
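For context, a minimal sketch of what the compiler is asking for (hypothetical 
names; branch-2 still targets Java 7, which lacks Java 8's "effectively final" 
capture rule, which is why the patch built on trunk but not here):

{code}
import java.util.concurrent.Callable;

public class InnerClassCapture {
  public static void main(String[] args) throws Exception {
    // On Java 7, locals referenced from an anonymous inner class must be
    // declared final.
    final String mxbeanName = "Hadoop:service=DataNode,name=DataNodeInfo";
    Callable<String> probe = new Callable<String>() {
      @Override
      public String call() {
        return "polling " + mxbeanName; // legal: mxbeanName is final
      }
    };
    System.out.println(probe.call());
  }
}
{code}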

> Fix intermittently failing UT TestDataNodeMXBean#testDataNodeMXBeanBlockCount
> -
>
> Key: HDFS-10668
> URL: https://issues.apache.org/jira/browse/HDFS-10668
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-10668.000.patch
>
>
> h6.Error Message
> {code}
> After delete one file expected:<4> but was:<5>
> {code}
> h6. Stacktrace
> {code}
> java.lang.AssertionError: After delete one file expected:<4> but was:<5>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeMXBean.testDataNodeMXBeanBlockCount(TestDataNodeMXBean.java:124)
> {code}
> Sample failing Jenkins pre-commit built, see 
> [here|https://builds.apache.org/job/PreCommit-HDFS-Build/16094/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeMXBean/testDataNodeMXBeanBlockCount/].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-10669) error while creating collection in solr

2016-07-23 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-10669.
---
Resolution: Invalid

Closing this as invalid; this is something to take up on the Solr user mailing 
lists, I'm afraid.

https://wiki.apache.org/hadoop/InvalidJiraIssues

> error while creating collection in solr
> ---
>
> Key: HDFS-10669
> URL: https://issues.apache.org/jira/browse/HDFS-10669
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: solr cloud mode on apache hadoop
>Reporter: SHIVADEEP GUNDOJU
>
> Hello Team,
> I have configured Solr in cloud mode on my 4-node Apache Hadoop cluster. I 
> have created a collection with the name "tweets" and am able to use the 
> collection without any issues.
> When I try to create a new collection, I get the error below, but the 
> directory gets created under solr in HDFS. Please help.
> user@Hadoop3:/usr/local/solr_download/solr-5.5.2$ sudo ./bin/solr create -c 
> tweets1  -d data_driven_schema_configs
> Connecting to ZooKeeper at localhost:9983 ...
> Re-using existing configuration directory tweets1
> Creating new collection 'tweets1' using command:
> http://172.16.16.129:8983/solr/admin/collections?action=CREATE=tweets1=1=1=1=tweets1
> ERROR: Failed to create collection 'tweets1' due to: 
> {172.16.16.129:8983_solr=org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://172.16.16.129:8983/solr: Error CREATEing SolrCore 
> 'tweets1_shard1_replica1': Unable to create core [tweets1_shard1_replica1] 
> Caused by: Illegal pattern component: T}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-10635) expected/actual parameters inverted in TestGlobPaths assertEquals

2016-07-15 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-10635:
-

 Summary: expected/actual parameters inverted in TestGlobPaths 
assertEquals
 Key: HDFS-10635
 URL: https://issues.apache.org/jira/browse/HDFS-10635
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.8.0
Reporter: Steve Loughran
Priority: Minor


Pretty much all the assertEquals clauses in {{TestGlobPaths}} place the actual 
value first, expected second. That's the wrong order and leads to misleading 
failure messages.
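A tiny illustration of why the order matters (hypothetical values; the JUnit 4 
signature is {{assertEquals(expected, actual)}}):

{code}
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class ArgumentOrderTest {

  private String glob() {
    return "/a/b"; // stand-in for the value under test
  }

  @Test
  public void testExpectedFirst() {
    // Correct order: if glob() returned "/a/c", JUnit would report
    // "expected:</a/b> but was:</a/c>".
    assertEquals("/a/b", glob());
    // The inverted form, assertEquals(glob(), "/a/b"), passes or fails
    // identically, but a failure message swaps the two values and points
    // the reader at the wrong side.
  }
}
{code}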



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-10484) Can not read file from java.io.IOException: Need XXX bytes, but only YYY bytes available

2016-06-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-10484.
---
Resolution: Cannot Reproduce

> Can not read file from java.io.IOException: Need XXX bytes, but only YYY  
> bytes available
> -
>
> Key: HDFS-10484
> URL: https://issues.apache.org/jira/browse/HDFS-10484
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.0.0-alpha
> Environment: Cloudera 4.1.2,  hadoop-hdfs-2.0.0+552-1.cdh4.1.2.p0.27
>Reporter: pt
>
> We are running the CDH 4.1.2 distro and trying to read a file from HDFS. It 
> ends up with an exception at the datanode saying:
> 2016-06-02 10:43:26,354 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> DatanodeRegistration(X.X.X.X, 
> storageID=DS-404876644-X.X.X.X-50010-1462535537579, infoPort=50075, 
> ipcPort=50020, storageInfo=lv=-40;cid=cluster18;nsid=2115086255;c=0):Got 
> exception while serving 
> BP-2091182050-X.X.X.X-1358362115729:blk_5037101550399368941_420502314 to 
> /X.X.X.X:58614
> java.io.IOException: Need 10172416 bytes, but only 10072576 bytes available
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.waitForMinLength(BlockSender.java:387)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:189)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:268)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:88)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:63)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:219)
> at java.lang.Thread.run(Thread.java:662)
> 2016-06-02 10:43:26,354 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: 
> app112.rutarget.ru:50010:DataXceiver error processing READ_BLOCK operation 
> src: /X.X.X.X:58614 dest: /X.X.X.X:50010
> java.io.IOException: Need 10172416 bytes, but only 10072576 bytes available
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.waitForMinLength(BlockSender.java:387)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:189)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:268)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:88)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:63)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:219)
> at java.lang.Thread.run(Thread.java:662)
> FSCK shows the file as being open for write; however, the HDFS client that 
> handled writes to this file closed it a long time ago -- so the file has been 
> stuck in RBW for the last few days. How can we get the actual data block in 
> this case? I found only the binary .meta file on the datanode but not the 
> actual block with data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-10418) NPE in TestDistributedFileSystem.testDFSCloseOrdering

2016-05-17 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-10418.
---
Resolution: Duplicate

you're right: closing

> NPE in TestDistributedFileSystem.testDFSCloseOrdering
> -
>
> Key: HDFS-10418
> URL: https://issues.apache.org/jira/browse/HDFS-10418
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, test
>Affects Versions: 2.8.0
> Environment: Jenkins
>Reporter: Steve Loughran
>Priority: Critical
>
> Jenkins is failing with an NPE in close(): the close op assumes there's 
> always a StorageStatistics instance. If there isn't, you get a stack trace.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-10418) NPE in TestDistributedFileSystem.testDFSCloseOrdering

2016-05-17 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-10418:
-

 Summary: NPE in TestDistributedFileSystem.testDFSCloseOrdering
 Key: HDFS-10418
 URL: https://issues.apache.org/jira/browse/HDFS-10418
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, test
Affects Versions: 2.8.0
 Environment: Jenkins
Reporter: Steve Loughran
Priority: Critical


Jenkins is failing with an NPE in close(): the close op assumes there's always 
a StorageStatistics instance. If there isn't, you get a stack trace.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-10394) move declaration of okhttp version from hdfs-client to hadoop-project POM

2016-05-12 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-10394:
-

 Summary: move declaration of okhttp version from hdfs-client to 
hadoop-project POM
 Key: HDFS-10394
 URL: https://issues.apache.org/jira/browse/HDFS-10394
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 2.8.0
Reporter: Steve Loughran
Priority: Minor


The POM dependency on okhttp in hadoop-hdfs-client declares its version in that 
POM rather than in the parent POM.

The root declaration, including the version, must go into 
hadoop-project/pom.xml so that it's easy to track use and there is only one 
place to change if this version were ever to be incremented. As it stands, if 
any other module picked up the library, it could adopt a different version.
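A sketch of the intended layout (the coordinates shown illustrate the pattern 
and are not checked against the tree):

{code}
<!-- hadoop-project/pom.xml: the one place that owns the version -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.squareup.okhttp</groupId>
      <artifactId>okhttp</artifactId>
      <version>2.4.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- hadoop-hdfs-client/pom.xml: no <version> element; it is inherited -->
<dependency>
  <groupId>com.squareup.okhttp</groupId>
  <artifactId>okhttp</artifactId>
</dependency>
{code}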



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-10304) implement moveToLocal or remove it from the usage list

2016-04-18 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-10304:
-

 Summary: implement moveToLocal or remove it from the usage list
 Key: HDFS-10304
 URL: https://issues.apache.org/jira/browse/HDFS-10304
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: scripts
Affects Versions: 2.8.0
Reporter: Steve Loughran
Priority: Minor


If you get the usage list of {{hdfs dfs}}, it tells you about "-moveToLocal".

If you try to use the command, it tells you "Option '-moveToLocal' is not 
implemented yet."

Either the command should be implemented, or it should be removed from the 
usage list, as it is not technically a command you can use, except in the 
special case of "I want my shell to print 'not implemented yet'".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-10291) TestShortCircuitLocalRead failing

2016-04-14 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-10291:
-

 Summary: TestShortCircuitLocalRead failing
 Key: HDFS-10291
 URL: https://issues.apache.org/jira/browse/HDFS-10291
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.8.0
Reporter: Steve Loughran
Assignee: Steve Loughran


{{TestShortCircuitLocalRead}} is failing as the length of the read is considered 
to be off the end of the buffer. There's an off-by-one error somewhere in the 
test or the new validation code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-10277) PositionedReadable test testReadFullyZeroByteFile failing in HDFS

2016-04-11 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-10277:
-

 Summary: PositionedReadable test testReadFullyZeroByteFile failing 
in HDFS
 Key: HDFS-10277
 URL: https://issues.apache.org/jira/browse/HDFS-10277
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.8.0
 Environment: Jenkins
Reporter: Steve Loughran
Assignee: Steve Loughran


Jenkins is failing after HADOOP-12994, at 
{{org.apache.hadoop.fs.contract.AbstractContractSeekTest.testReadFullyZeroByteFile(AbstractContractSeekTest.java:373)}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-10254) DfsClient undervalidates args for PositionedReadable operations

2016-04-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-10254.
---
   Resolution: Fixed
Fix Version/s: 2.8.0

> DfsClient undervalidates args for PositionedReadable operations
> ---
>
> Key: HDFS-10254
> URL: https://issues.apache.org/jira/browse/HDFS-10254
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
> Fix For: 2.8.0
>
>
> HDFS can do stricter checking of the inputs:
> # raise an exception on negative offset of destination buffer
> # explicitly raise an EOF exception if the file position is negative
> Optionally: short-circuit read/readfully operations if the byte range is 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-10255) ByteRangeInputStream.readFully leaks stream handles on failure

2016-04-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-10255.
---
   Resolution: Fixed
Fix Version/s: 2.8.0

> ByteRangeInputStream.readFully leaks stream handles on failure
> --
>
> Key: HDFS-10255
> URL: https://issues.apache.org/jira/browse/HDFS-10255
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0
>
>
> In {{ByteRangeInputStream.readFully}}, if the requested amount of data is out 
> of range, the EOFException is thrown without closing the input stream.
> Fix: move the test into the try/finally clause.
> Using Java 7 try-with-resources would be cleaner, but it would make it harder 
> to switch to aborting TCP channels if that were felt to be needed here.
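A shape sketch of that fix (hypothetical helper; assumes the stream is 
single-use for the ranged read, so closing on every path is the desired 
behaviour):

{code}
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public final class RangedRead {
  /** Read exactly len bytes, closing the stream even on the failure path. */
  static void readFully(InputStream in, long bytesAvailable,
      byte[] buf, int off, int len) throws IOException {
    try {
      // the range check now lives inside try/finally, so an EOFException
      // no longer leaks the stream handle
      if (len > bytesAvailable) {
        throw new EOFException("requested " + len + " bytes, only "
            + bytesAvailable + " available");
      }
      int read = 0;
      while (read < len) {
        int n = in.read(buf, off + read, len - read);
        if (n < 0) {
          throw new EOFException("premature end of stream");
        }
        read += n;
      }
    } finally {
      in.close();
    }
  }
}
{code}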



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-10254) DfsClient undervalidates args for PositionedReadable operations

2016-04-03 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-10254:
-

 Summary: DfsClient undervalidates args for PositionedReadable 
operations
 Key: HDFS-10254
 URL: https://issues.apache.org/jira/browse/HDFS-10254
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.8.0
Reporter: Steve Loughran


HDFS can do stricter checking of the inputs:

# raise an exception on negative offset of destination buffer
# explicitly raise an EOF exception if the file position is negative

Optionally: short-circuit read/readfully operations if the byte range is 0.
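A sketch of the sort of checks being suggested (hypothetical helper, not the 
committed patch):

{code}
import java.io.EOFException;

public final class PositionedReadArgs {
  static void validate(long position, byte[] buffer, int offset, int length)
      throws EOFException {
    if (position < 0) {
      // explicit EOF rather than an obscure failure further down the stack
      throw new EOFException("negative position: " + position);
    }
    if (offset < 0 || length < 0 || length > buffer.length - offset) {
      throw new IndexOutOfBoundsException("offset=" + offset
          + " length=" + length + " buffer.length=" + buffer.length);
    }
    // length == 0 is valid; callers can short-circuit it as a no-op read
  }
}
{code}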



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9732) DelegationTokenIdentifier.toString() to include superclass .toString() data

2016-02-01 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-9732:


 Summary: DelegationTokenIdentifier.toString() to include 
superclass .toString() data
 Key: HDFS-9732
 URL: https://issues.apache.org/jira/browse/HDFS-9732
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.2
Reporter: Steve Loughran


HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostic info: owner, 
sequence number. But its superclass, {{AbstractDelegationTokenIdentifier}}, 
contains a lot more information, including token issue and expiry times.

Because {{DelegationTokenIdentifier.toString()}} doesn't include this data,
information that is potentially useful for Kerberos diagnostics is lost.
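A minimal sketch of the pattern being asked for (hypothetical class and field 
names): a subclass {{toString()}} that chains to {{super.toString()}} so the 
issue/expiry data survives:

{code}
class BaseTokenIdentifier {
  long issueDate = 1454284800000L;
  long maxDate = issueDate + 7 * 24 * 3600 * 1000L;

  @Override
  public String toString() {
    return "issueDate=" + issueDate + ", maxDate=" + maxDate;
  }
}

class HdfsTokenIdentifier extends BaseTokenIdentifier {
  String owner = "alice";
  int sequenceNumber = 42;

  @Override
  public String toString() {
    // chain to the superclass so its diagnostics are not lost
    return "HDFS token " + sequenceNumber + " for " + owner
        + " (" + super.toString() + ")";
  }
}
{code}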



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9708) FSNamesystem.initAuditLoggers() doesn't trim classnames

2016-01-26 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-9708:


 Summary: FSNamesystem.initAuditLoggers() doesn't trim classnames
 Key: HDFS-9708
 URL: https://issues.apache.org/jira/browse/HDFS-9708
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs
Affects Versions: 2.8.0
Reporter: Steve Loughran


The {{FSNamesystem.initAuditLoggers()}} method reads a list of audit loggers 
from a call to {{conf.getStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY)}}.

What it doesn't do is trim each entry, so if there's a space or newline in the
list, the classname is invalid and won't load, and HDFS won't come out to play.
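A minimal sketch of the missing step (hypothetical helper, not the committed 
fix):

{code}
import java.util.ArrayList;
import java.util.List;

public final class AuditLoggerNames {
  /** Trim each configured classname so stray whitespace can't break loading. */
  static List<Class<?>> load(String commaSeparated)
      throws ClassNotFoundException {
    List<Class<?>> classes = new ArrayList<Class<?>>();
    for (String name : commaSeparated.split(",")) {
      String trimmed = name.trim(); // the step initAuditLoggers() skips
      if (!trimmed.isEmpty()) {
        classes.add(Class.forName(trimmed));
      }
    }
    return classes;
  }
}
{code}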



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9592) clean up temp dirs in hadoop-project-dist/pom.xml

2015-12-22 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-9592:


 Summary: clean up temp dirs in  hadoop-project-dist/pom.xml
 Key: HDFS-9592
 URL: https://issues.apache.org/jira/browse/HDFS-9592
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: build
Affects Versions: 2.8.0
Reporter: Steve Loughran


Andrew Wang noted in HDFS-9263 that there are various tmp dir definitions in 
{{hadoop-project-dist/pom.xml}} which are creating data in the wrong place: 
clean them up



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9514) TestDistributedFileSystem.testDFSClientPeerWriteTimeout failing; exception being swallowed

2015-12-07 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-9514:


 Summary: TestDistributedFileSystem.testDFSClientPeerWriteTimeout 
failing; exception being swallowed
 Key: HDFS-9514
 URL: https://issues.apache.org/jira/browse/HDFS-9514
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client, test
Affects Versions: 3.0.0
 Environment: jenkins
Reporter: Steve Loughran


{{TestDistributedFileSystem.testDFSClientPeerWriteTimeout}} is failing, with the 
wrong exception being raised; the reporter isn't using the {{GenericTestUtils}} 
code and so loses the details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HDFS-9423) Fix intermittent failure of TestEditLogTailer

2015-11-23 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HDFS-9423:
--

Backport to branch 2 breaks the build.

{code}
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 54.827 s
[INFO] Finished at: 2015-11-23T12:19:41+00:00
[INFO] Final Memory: 111M/899M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-hdfs: Compilation failure: Compilation 
failure:
[ERROR] 
/Users/stevel/Projects/Hortonworks/Projects/hadoop-trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java:[63,30]
 cannot find symbol
[ERROR] symbol:   variable DFS_HA_TAILEDITS_ALL_NAMESNODES_RETRY_KEY
[ERROR] location: class org.apache.hadoop.hdfs.DFSConfigKeys
[ERROR] 
/Users/stevel/Projects/Hortonworks/Projects/hadoop-trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java:[124,30]
 cannot find symbol
[ERROR] symbol:   variable DFS_HA_TAILEDITS_ALL_NAMESNODES_RETRY_KEY
[ERROR] location: class org.apache.hadoop.hdfs.DFSConfigKeys
[ERROR] -> [Help 1]
{code}

Reverting the patch in branch-2; if we need it in there, it'll need an expanded 
patch.

PS: now that Yetus can test branch-2 patches too, we shouldn't be having this 
problem.

> Fix intermittent failure of TestEditLogTailer
> -
>
> Key: HDFS-9423
> URL: https://issues.apache.org/jira/browse/HDFS-9423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9423.001.patch
>
>
> TestEditLogTailer sometimes fails because EditLogTailer can exhaust its 
> retries before one of the NameNodes becomes active. The maximum retry count 
> should be increased so that tests can use a short tailing period.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-815) FileContext tests fail on Windows

2015-10-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-815.
-
   Resolution: Cannot Reproduce
Fix Version/s: 2.7.2

These don't appear to happen any more, so closing as cannot reproduce.

> FileContext tests fail on Windows
> -
>
> Key: HDFS-815
> URL: https://issues.apache.org/jira/browse/HDFS-815
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.21.0
> Environment: Windows
>Reporter: Konstantin Shvachko
> Fix For: 2.7.2
>
>
> The following FileContext-related tests are failing on Windows because of 
> incorrect use of the "test.build.data" system property for setting HDFS 
> paths, which end up containing "C:" as a path component, which HDFS does not 
> support.
> {code}
> org.apache.hadoop.fs.TestFcHdfsCreateMkdir
> org.apache.hadoop.fs.TestFcHdfsPermission
> org.apache.hadoop.fs.TestHDFSFileContextMainOperations
> org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9263) tests are using /test/build/data; breaking Jenkins

2015-10-19 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-9263:


 Summary: tests are using /test/build/data; breaking Jenkins
 Key: HDFS-9263
 URL: https://issues.apache.org/jira/browse/HDFS-9263
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
 Environment: Jenkins
Reporter: Steve Loughran
Priority: Blocker


Some of the HDFS tests are using the path {{test/build/data}} to store files, 
leaking files which fail the new post-build RAT checks on Jenkins (and dirtying 
all development systems with paths which {{mvn clean}} will miss).

Fix them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-14 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-9241:


 Summary: HDFS clients can't construct HdfsConfiguration instances
 Key: HDFS-9241
 URL: https://issues.apache.org/jira/browse/HDFS-9241
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Steve Loughran


The changes for the HDFS client classpath make instantiating 
{{HdfsConfiguration}} from the client impossible; it now only lives server-side. 
This breaks any app which creates one.

I know people will look at the {{@Private}} tag and say "don't do that then", 
but it's worth considering precisely why I, at least, do this: it's the only 
way to guarantee that the hdfs-default and hdfs-site resources get on the 
classpath, including all the security settings. It's precisely the use case 
which {{HdfsConfigurationLoader.init()}} offers internally to the HDFS code.

What am I meant to do now? 
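For what it's worth, a workaround sketch (assumes hdfs-default.xml and 
hdfs-site.xml are on the classpath) that forces the HDFS resources into every 
{{Configuration}} the way {{new HdfsConfiguration()}} used to:

{code}
import org.apache.hadoop.conf.Configuration;

public final class HdfsConfWorkaround {
  static {
    // register the HDFS resources as defaults for every Configuration
    // instance created afterwards
    Configuration.addDefaultResource("hdfs-default.xml");
    Configuration.addDefaultResource("hdfs-site.xml");
  }

  public static Configuration create() {
    return new Configuration();
  }
}
{code}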



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HDFS-9215) Suppress the RAT warnings in hdfs-native-client module

2015-10-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HDFS-9215:
--

We're still seeing RAT failures related to tree.h such as in HADOOP-11515

> Suppress the RAT warnings in hdfs-native-client module
> --
>
> Key: HDFS-9215
> URL: https://issues.apache.org/jira/browse/HDFS-9215
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9215.000.patch, HDFS-9215.001.patch, 
> HDFS-9215.002.patch, HDFS-9215.003.patch
>
>
> HDFS-9170 moves the native client implementation to the hdfs-native-client 
> module. This is a follow-up jira to suppress the RAT warning that was 
> suppressed in the original hadoop-hdfs module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9069) TestNameNodeMetricsLogger failing -port in use

2015-09-13 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-9069:


 Summary: TestNameNodeMetricsLogger failing -port in use
 Key: HDFS-9069
 URL: https://issues.apache.org/jira/browse/HDFS-9069
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
 Environment: jenkins
Reporter: Steve Loughran
Priority: Critical


{{TestNameNodeMetricsLogger}} is failing with port in use: it may pick a random 
port, but it doesn't check for it being free, and on a busy Jenkins server it is 
clearly clashing with other tests.
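The usual fix for this class of flakiness is to ask the kernel for a free 
ephemeral port rather than guessing; a sketch:

{code}
import java.io.IOException;
import java.net.ServerSocket;

public final class FreePort {
  /** Returns a port that was free at the moment of the call. */
  static int get() throws IOException {
    ServerSocket s = new ServerSocket(0); // 0 = kernel picks a free port
    try {
      return s.getLocalPort();
    } finally {
      s.close(); // a race window remains between close() and reuse
    }
  }
}
{code}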



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8705) BlockStoragePolicySuite uses equalsIgnoreCase for name lookup, won't work in all locales

2015-07-01 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-8705:


 Summary: BlockStoragePolicySuite uses equalsIgnoreCase for name 
lookup, won't work in all locales
 Key: HDFS-8705
 URL: https://issues.apache.org/jira/browse/HDFS-8705
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.8.0
Reporter: Steve Loughran
Priority: Minor


Looking at {{BlockStoragePolicySuite.getPolicy(name)}}: it is using 
{{equalsIgnoreCase()}} to find a policy which matches a name.

This will not work in all locales. It must use 
{{toLowerCase(Locale.ENGLISH).equals(name)}}.
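A small demo of the locale-sensitive case mapping this is guarding against (the 
Turkish dotless-i trap; note that the policy name LAZY_PERSIST contains an I):

{code}
import java.util.Locale;

public class LocaleCaseDemo {
  public static void main(String[] args) {
    String name = "LAZY_PERSIST";
    System.out.println(name.toLowerCase(Locale.ENGLISH)); // lazy_persist
    System.out.println(name.toLowerCase(new Locale("tr", "TR")));
    // prints "lazy_persıst" with a dotless i, so a naive
    // toLowerCase().equals("lazy_persist") comparison fails
  }
}
{code}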



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8451) DFSClient probe for encryption testing interprets empty URI property for enabled

2015-05-21 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-8451:


 Summary: DFSClient probe for encryption testing interprets empty 
URI property for enabled
 Key: HDFS-8451
 URL: https://issues.apache.org/jira/browse/HDFS-8451
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: encryption
Affects Versions: 2.7.1
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Blocker


HDFS-7931 added a check in DFSClient for encryption, 
{{isHDFSEncryptionEnabled()}}, looking for the property 
{{dfs.encryption.key.provider.uri}}.

This probe returns true even if the property is empty.

If there is an empty provider.uri property, you get an NPE when a YARN client 
tries to set up the tokens to deploy an AM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8318) Unable to move files on Hdfs

2015-05-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-8318.
--
Resolution: Invalid

Sorry, this is not the place for support issues; try the mailing lists and/or 
your vendor's support channels.

see: http://wiki.apache.org/hadoop/InvalidJiraIssues

 Unable to move files on Hdfs
 

 Key: HDFS-8318
 URL: https://issues.apache.org/jira/browse/HDFS-8318
 Project: Hadoop HDFS
  Issue Type: Test
  Components: HDFS
 Environment: cdh5.2.1
Reporter: ankush
  Labels: hdfs, hive

 Hi,
 While moving data on HDFS we get the error below.
 Please advise on this.
 Moving data to: 
 hdfs://nameservice1/tmp/hive-srv-hdp-edh-d/hive_2015-05-04_10-02-39_841_5305383954203911235-1/-ext-1
 Failed with exception Unable to move 
 sourcehdfs://nameservice1/tmp/hive-srv-hdp-edh-d/hive_2015-05-04_10-02-39_841_5305383954203911\
 235-1/-ext-10002 to destination 
 hdfs://nameservice1/tmp/hive-srv-hdp-edh-d/hive_2015-05-04_10-02-39_841_5305383954203911235-1/-ext-\
 1
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.MoveTask
 MapReduce Jobs Launched:
 Stage-Stage-1: Map: 1 Cumulative CPU: 5.83 sec HDFS Read: 553081 HDFS Write: 
 489704 SUCCESS
 Total MapReduce CPU Time Spent: 5 seconds 830 msec
 Error (1). Execution Failed.
 2015-05-04 10:03:13 ERROR (1) in run_hive
 Thanks,
 Ankush



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HDFS-8134) Using OpenJDK on HDFS

2015-04-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HDFS-8134:
--

 Using OpenJDK on HDFS
 -

 Key: HDFS-8134
 URL: https://issues.apache.org/jira/browse/HDFS-8134
 Project: Hadoop HDFS
  Issue Type: Task
  Components: benchmarks, performance
 Environment: CentOS7, OpenJDK8 update 40, Oracle JDK8 update 40
Reporter: Yingqi Lu
Assignee: Yingqi Lu

 Dear All,
 We would like to start the effort of certifying OpenJDK with HDFS. The effort 
 includes compiling the HDFS source code with OpenJDK and reporting any issues, 
 then completing a performance study and comparing all the results with Oracle 
 JDK. The workload we will start with is DFSIOe, which is part of the HiBench 
 suite. We can surely add more workloads such as Teragen etc. into our testing 
 environment if there is any interest from this community. This is our first 
 time working with this community. Please do let us know your feedback and 
 comments. If you all like the idea and this is the right place to start the 
 effort, we will be sending out the data soon!
 Thanks,
 Yingqi



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8043) NPE in MiniDFSCluster teardown

2015-04-02 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-8043:


 Summary: NPE in MiniDFSCluster teardown
 Key: HDFS-8043
 URL: https://issues.apache.org/jira/browse/HDFS-8043
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
 Environment: jenkins
Reporter: Steve Loughran


NPE surfacing in {{MiniDFSCluster.shutdown}} during test teardown 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8044) NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec

2015-04-02 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-8044:


 Summary: NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
 Key: HDFS-8044
 URL: https://issues.apache.org/jira/browse/HDFS-8044
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
 Environment: ASF Jenkins
Reporter: Steve Loughran


NPE surfacing in {{TestCryptoStreamsWithOpensslAesCtrCryptoCodec}} on Jenkins.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7724) no suitable constructor found while building Hadoop 2.6.0

2015-02-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-7724.
--
Resolution: Cannot Reproduce

 no suitable constructor found while building Hadoop 2.6.0
 ---

 Key: HDFS-7724
 URL: https://issues.apache.org/jira/browse/HDFS-7724
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
 Environment: Windows 7
 Java 1.8.0_25
Reporter: Venkatasubramaniam Ramakrishnan
 Attachments: 0202.txt


 I'm getting the following error while building Hadoop 2.6.0. My objective is 
 to compile Hadoop, and run Pig on it.
 [INFO] BUILD FAILURE
 [INFO] 
 
 [INFO] Total time: 01:25 min
 [INFO] Finished at: 2015-02-02T14:38:51+05:30
 [INFO] Final Memory: 49M/117M
 [INFO] 
 
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile 
 (default-compile) on project hadoop-auth-examples: Compilation failure
 [ERROR] 
 D:\h\hadoop-2.6.0-src\hadoop-common-project\hadoop-auth-examples\src\main\java\org\apache\hadoop\security\authentication\examples\WhoClient.java:[36,31]
  error: no suitable constructor found for AuthenticatedURL(no arguments)
 [ERROR] - [Help 1]
 org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
 goal org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile 
 (default-compile) on project hadoop-auth-examples: Compilation failure
 D:\h\hadoop-2.6.0-src\hadoop-common-project\hadoop-auth-examples\src\main\java\org\apache\hadoop\security\authentication\examples\WhoClient.java:[36,31]
  error: no suitable constructor found for AuthenticatedURL(no arguments)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
   at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
   at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
   at 
 org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
   at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:120)
   at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:347)
   at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:154)
   at org.apache.maven.cli.MavenCli.execute(MavenCli.java:582)
   at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214)
   at org.apache.maven.cli.MavenCli.main(MavenCli.java:158)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:483)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
 Caused by: org.apache.maven.plugin.CompilationFailureException: Compilation 
 failure
 D:\h\hadoop-2.6.0-src\hadoop-common-project\hadoop-auth-examples\src\main\java\org\apache\hadoop\security\authentication\examples\WhoClient.java:[36,31]
  error: no suitable constructor found for AuthenticatedURL(no arguments)
   at 
 org.apache.maven.plugin.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:729)
   at org.apache.maven.plugin.CompilerMojo.execute(CompilerMojo.java:128)
   at 
 org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
   ... 19 more
 [ERROR] 
 [ERROR] 
 [ERROR] For more information about the errors and possible solutions, please 
 read the following articles:
 [ERROR] [Help 1] 
 http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
 [ERROR] 
 [ERROR] After correcting the problems, you can resume the build with the 
 command
 [ERROR]   mvn goals -rf :hadoop-auth-examples



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-6903) Crc32 checksum errors in Big-Endian Architecture

2015-01-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-6903.
--
Resolution: Duplicate

 Crc32 checksum errors in Big-Endian Architecture
 

 Key: HDFS-6903
 URL: https://issues.apache.org/jira/browse/HDFS-6903
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 2.4.1, 2.6.0
 Environment: PowerPC RHEL 7  6.5 ( ppc64 - Big-Endian )
Reporter: Ayappan
Priority: Blocker

 Native Crc32 checksum calculation is not handled on big-endian 
 architectures. In this case, the platform is ppc64. Due to this, several 
 test cases in the HDFS module fail.
 Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
 Tests run: 3, Failures: 0, Errors: 2, Skipped: 1, Time elapsed: 13.274 sec 
  FAILURE! - in org.apache.hadoop.hdfs.TestAppendDifferentChecksum
 testAlgoSwitchRandomized(org.apache.hadoop.hdfs.TestAppendDifferentChecksum)  
 Time elapsed: 7.141 sec   ERROR!
 java.io.IOException: p=/testAlgoSwitchRandomized, length=28691, i=12288
 at org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(Native 
 Method)
 at 
 org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:57)
 at 
 org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:291)
 at 
 org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:202)
 at 
 org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:137)
 at 
 org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:682)
 at 
 org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:738)
 at 
 org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:795)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:836)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:644)
 at java.io.FilterInputStream.read(FilterInputStream.java:83)
 at 
 org.apache.hadoop.hdfs.AppendTestUtil.check(AppendTestUtil.java:129)
 at 
 org.apache.hadoop.hdfs.TestAppendDifferentChecksum.testAlgoSwitchRandomized(TestAppendDifferentChecksum.java:130)
 testSwitchAlgorithms(org.apache.hadoop.hdfs.TestAppendDifferentChecksum)  
 Time elapsed: 1.394 sec   ERROR!
 java.io.IOException: p=/testSwitchAlgorithms, length=3000, i=0
 at org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(Native 
 Method)
 at 
 org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:57)
 at 
 org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:291)
 at 
 org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:202)
 at 
 org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:137)
 at 
 org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:682)
 at 
 org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:738)
 at 
 org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:795)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:836)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:644)
 at java.io.FilterInputStream.read(FilterInputStream.java:83)
 at 
 org.apache.hadoop.hdfs.AppendTestUtil.check(AppendTestUtil.java:129)
 at 
 org.apache.hadoop.hdfs.TestAppendDifferentChecksum.testSwitchAlgorithms(TestAppendDifferentChecksum.java:94)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-42) NetUtils.createSocketAddr NPEs if dfs.datanode.ipc.address is not set for a data node

2015-01-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-42?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-42.

   Resolution: Duplicate
Fix Version/s: 0.21.0

 NetUtils.createSocketAddr NPEs if dfs.datanode.ipc.address is not set for a 
 data node
 -

 Key: HDFS-42
 URL: https://issues.apache.org/jira/browse/HDFS-42
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.20.1, 0.20.2, 0.21.0, 0.22.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor
  Labels: newbie
 Fix For: 0.21.0


 DataNode.startDatanode assumes that a configuration always returns a non-null 
 dfs.datanode.ipc.address value, as the result is passed straight down to 
 NetUtils.createSocketAddr
 InetSocketAddress ipcAddr = NetUtils.createSocketAddr(
 conf.get("dfs.datanode.ipc.address"));
 which triggers an NPE
 Caused by: java.lang.NullPointerException
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:130)
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:119)
 at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:353)
 at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:185)
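A sketch of the defensive pattern (hypothetical guard, not the committed fix): 
fail fast with a clear message instead of handing null to {{createSocketAddr}}:

{code}
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.net.NetUtils;

public final class IpcAddressGuard {
  static InetSocketAddress ipcAddress(Configuration conf) {
    String addr = conf.get("dfs.datanode.ipc.address");
    if (addr == null) {
      // a clear failure beats an NPE deep inside createSocketAddr
      throw new IllegalArgumentException(
          "Configuration lacks dfs.datanode.ipc.address");
    }
    return NetUtils.createSocketAddr(addr);
  }
}
{code}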



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

