[jira] [Updated] (HDFS-14308) DFSStripedInputStream curStripeBuf is not freed by unbuffer()

2019-09-17 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14308:
---
Component/s: ec

> DFSStripedInputStream curStripeBuf is not freed by unbuffer()
> -
>
> Key: HDFS-14308
> URL: https://issues.apache.org/jira/browse/HDFS-14308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.0.0
>Reporter: Joe McDonnell
>Priority: Major
> Attachments: ec_heap_dump.png
>
>
> Some users of HDFS cache opened HDFS file handles to avoid repeated 
> roundtrips to the NameNode. For example, Impala caches up to 20,000 HDFS file 
> handles by default. Recent tests on erasure coded files show that the open 
> file handles can consume a large amount of memory when not in use.
> For example, here is output from Impala's JMX endpoint when 608 file handles 
> are cached
> {noformat}
> {
> "name": "java.nio:type=BufferPool,name=direct",
> "modelerType": "sun.management.ManagementFactoryHelper$1",
> "Name": "direct",
> "TotalCapacity": 1921048960,
> "MemoryUsed": 1921048961,
> "Count": 633,
> "ObjectName": "java.nio:type=BufferPool,name=direct"
> },{noformat}
> This shows direct buffer memory usage of 3MB per DFSStripedInputStream. 
> Attached is output from Eclipse MAT showing that the direct buffers come from 
> DFSStripedInputStream objects. Both Impala and HBase call unbuffer() when a 
> file handle is being cached and potentially unused for significant chunks of 
> time, yet this shows that the memory remains in use.
> To support caching file handles on erasure coded files, DFSStripedInputStream 
> should avoid holding buffers after the unbuffer() call. See HDFS-7694. 
> "unbuffer()" is intended to move an input stream to a lower memory state to 
> support these caching use cases. In particular, the curStripeBuf seems to be 
> allocated from the BUFFER_POOL on a resetCurStripeBuffer(true) call. It is 
> not freed until close().
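For illustration only, here is a minimal, self-contained sketch of the behaviour the
reporter asks for: return the stripe buffer to the shared pool on unbuffer() instead of
holding it until close(). This is not the committed HDFS-14308 patch; the names merely
echo DFSStripedInputStream, and the holder class itself is hypothetical.

{code:java}
import java.nio.ByteBuffer;
import org.apache.hadoop.io.ByteBufferPool;
import org.apache.hadoop.io.ElasticByteBufferPool;

// Hypothetical holder used only to illustrate the requested buffer lifecycle.
class StripeBufferHolder {
  private static final ByteBufferPool BUFFER_POOL = new ElasticByteBufferPool();

  private final int stripeSize;   // cellSize * dataBlkNum in the real stream
  private ByteBuffer curStripeBuf;

  StripeBufferHolder(int stripeSize) {
    this.stripeSize = stripeSize;
  }

  // Allocated lazily on read, analogous to resetCurStripeBuffer(true).
  ByteBuffer stripeBuffer() {
    if (curStripeBuf == null) {
      curStripeBuf = BUFFER_POOL.getBuffer(true /* direct */, stripeSize);
    }
    return curStripeBuf;
  }

  // The behaviour requested by this JIRA: release the direct buffer while the
  // file handle sits idle in a cache, instead of waiting for close().
  void unbuffer() {
    if (curStripeBuf != null) {
      BUFFER_POOL.putBuffer(curStripeBuf);
      curStripeBuf = null;
    }
  }
}
{code}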



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14844) Make buffer of BlockReaderRemote#newBlockReader#BufferedOutputStream configurable

2019-09-17 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14844:
---
Attachment: HDFS-14844.005.patch

> Make buffer of BlockReaderRemote#newBlockReader#BufferedOutputStream  
> configurable
> --
>
> Key: HDFS-14844
> URL: https://issues.apache.org/jira/browse/HDFS-14844
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14844.001.patch, HDFS-14844.002.patch, 
> HDFS-14844.003.patch, HDFS-14844.004.patch, HDFS-14844.005.patch
>
>
> See HDFS-14820 for details.
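The quoted description is terse, so for context: the change makes the size of the
BufferedOutputStream created in BlockReaderRemote#newBlockReader configurable rather
than hard-coded. A minimal sketch of that general pattern follows; the configuration
key, default value, and wrapper class here are hypothetical, not the names used by the
actual patch.

{code:java}
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;

class ConfigurableBufferExample {
  // Hypothetical names, for illustration only.
  static final String BUFFER_SIZE_KEY = "dfs.client.block.reader.remote.buffer.size";
  static final int BUFFER_SIZE_DEFAULT = 512;

  static DataOutputStream wrap(OutputStream rawPeerOut, Configuration conf) {
    // Read the buffer size from configuration instead of using a constant, so
    // deployments can trade per-connection memory against write batching.
    int bufSize = conf.getInt(BUFFER_SIZE_KEY, BUFFER_SIZE_DEFAULT);
    return new DataOutputStream(new BufferedOutputStream(rawPeerOut, bufSize));
  }
}
{code}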



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14844) Make buffer of BlockReaderRemote#newBlockReader#BufferedOutputStream configurable

2019-09-17 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16932013#comment-16932013
 ] 

Lisheng Sun commented on HDFS-14844:


[~elgoiri] I agree with your suggestion. I also confirmed that all of the failed UTs 
pass locally, so they are unrelated to this patch.
Fixed the indentation (too many spaces) in BlockReaderRemote#401-402 and uploaded the 
v005 patch. Thank you a lot. [~elgoiri]

> Make buffer of BlockReaderRemote#newBlockReader#BufferedOutputStream  
> configurable
> --
>
> Key: HDFS-14844
> URL: https://issues.apache.org/jira/browse/HDFS-14844
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14844.001.patch, HDFS-14844.002.patch, 
> HDFS-14844.003.patch, HDFS-14844.004.patch, HDFS-14844.005.patch
>
>
> See HDFS-14820 for details.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2020) Remove mTLS from Ozone GRPC

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2020?focusedWorklogId=314129=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314129
 ]

ASF GitHub Bot logged work on HDDS-2020:


Author: ASF GitHub Bot
Created on: 18/Sep/19 03:23
Start Date: 18/Sep/19 03:23
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #1369: HDDS-2020. Remove 
mTLS from Ozone GRPC. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/1369#issuecomment-532502002
 
 
   /retest
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314129)
Time Spent: 2h 10m  (was: 2h)

> Remove mTLS from Ozone GRPC
> ---
>
> Key: HDDS-2020
> URL: https://issues.apache.org/jira/browse/HDDS-2020
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Generic GRPC supports mTLS for mutual authentication. However, Ozone has a built-in 
> block token mechanism for the server to authenticate the client. We only need 
> TLS for the client to authenticate the server and for wire encryption. 
> Removing the mTLS support also simplifies the GRPC server/client configuration.
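For readers unfamiliar with the distinction drawn above, the grpc-java sketch below
(not Ozone's actual client code; the certificate paths are placeholders) contrasts the
two modes: one-way TLS, where only the client verifies the server, and mTLS, where the
client must also present its own certificate, which is the part this issue removes.

{code:java}
import java.io.File;
import io.grpc.ManagedChannel;
import io.grpc.netty.GrpcSslContexts;
import io.grpc.netty.NettyChannelBuilder;
import io.netty.handler.ssl.SslContextBuilder;

class TlsModesSketch {
  // One-way TLS: the client only needs the CA certificate to authenticate the
  // server, and the connection is still encrypted on the wire.
  static ManagedChannel oneWayTls(String host, int port) throws Exception {
    return NettyChannelBuilder.forAddress(host, port)
        .sslContext(GrpcSslContexts.forClient()
            .trustManager(new File("/path/to/ca.crt"))
            .build())
        .build();
  }

  // mTLS: the client additionally presents its own certificate and key so the
  // server can authenticate it -- redundant in Ozone because block tokens
  // already authenticate the client.
  static ManagedChannel mutualTls(String host, int port) throws Exception {
    SslContextBuilder ssl = GrpcSslContexts.forClient()
        .trustManager(new File("/path/to/ca.crt"))
        .keyManager(new File("/path/to/client.crt"), new File("/path/to/client.key"));
    return NettyChannelBuilder.forAddress(host, port)
        .sslContext(ssl.build())
        .build();
  }
}
{code}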



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14308) DFSStripedInputStream curStripeBuf is not freed by unbuffer()

2019-09-17 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16932008#comment-16932008
 ] 

Zhao Yi Ming commented on HDFS-14308:
-

We hit a direct buffer memory OOM when using HBase bulk load against an HDFS EC 
folder. After reading some code, I think there is a potential risk in 
ElasticByteBufferPool: as the code below shows, the tree is keyed by buffer capacity, 
and on the HDFS client side DFSStripedInputStream allocates direct buffers of 
cellSize * dataBlkNum bytes. If there are many different cellSize values, the pool can 
accumulate enough direct buffers to cause the OOM.
{code:java}
// code placeholder
  public synchronized void putBuffer(ByteBuffer buffer) {
    buffer.clear();
    TreeMap<Key, ByteBuffer> tree = getBufferTree(buffer.isDirect());
    while (true) {
      Key key = new Key(buffer.capacity(), System.nanoTime());
      if (!tree.containsKey(key)) {
        tree.put(key, buffer);
        return;
      }
      // Buffers are indexed by (capacity, time).
      // If our key is not unique on the first try, we try again, since the
      // time will be different.  Since we use nanoseconds, it's pretty
      // unlikely that we'll loop even once, unless the system clock has a
      // poor granularity.
    }
  }
{code}
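For completeness, the lookup side of the pool matters too. The sketch below is a
paraphrase from memory of ElasticByteBufferPool#getBuffer and may differ from the
exact Hadoop source; the point is that a pooled buffer is reused only when one of
equal or larger capacity already exists, while putBuffer() keeps every returned buffer
indefinitely, keyed by its exact capacity.
{code:java}
// Paraphrased for illustration -- consult the actual Hadoop source for the
// authoritative implementation.
public synchronized ByteBuffer getBuffer(boolean direct, int length) {
  TreeMap<Key, ByteBuffer> tree = getBufferTree(direct);
  // Look for a pooled buffer with capacity >= length.
  Map.Entry<Key, ByteBuffer> entry = tree.ceilingEntry(new Key(length, 0));
  if (entry == null) {
    // Nothing large enough is pooled: allocate a brand-new buffer.
    return direct ? ByteBuffer.allocateDirect(length) : ByteBuffer.allocate(length);
  }
  tree.remove(entry.getKey());
  return entry.getValue();
}
{code}
Because the pool is unbounded, a client that cycles through many distinct
cellSize * dataBlkNum capacities ends up with roughly one retained direct buffer per
distinct size, which is what the test below demonstrates.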
 

 

I wrote a simple test, shown below, that reproduces the problem.

Set the following JVM arguments first, then run the test; it will hit the OOM:

-Xmx64m
-Xms64m
-Xmn32m
-XX:+UseConcMarkSweepGC
-XX:+PrintGCDetails
-XX:MaxDirectMemorySize=10M

 
{code:java}
import java.nio.ByteBuffer;
import org.apache.hadoop.io.ByteBufferPool;
import org.apache.hadoop.io.ElasticByteBufferPool;
import org.junit.Test;

public class TestEBBP {

  private static final ByteBufferPool BUFFER_POOL = new ElasticByteBufferPool();

  @Test
  public void testOOM() {
    // Request 100 distinct capacities; each buffer is returned to the pool but
    // never reused, so the pool keeps 100 direct buffers alive.
    for (int i = 0; i < 100; i++) {
      ByteBuffer buffer = BUFFER_POOL.getBuffer(true, 1024 * 6 * i);
      BUFFER_POOL.putBuffer(buffer);
    }
    System.out.println(((ElasticByteBufferPool) BUFFER_POOL).size(true));
  }
}
{code}
 

I am NOT entirely sure this is the root cause of this issue, but I am writing it up 
here FYI.

 

 

> DFSStripedInputStream curStripeBuf is not freed by unbuffer()
> -
>
> Key: HDFS-14308
> URL: https://issues.apache.org/jira/browse/HDFS-14308
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Joe McDonnell
>Priority: Major
> Attachments: ec_heap_dump.png
>
>
> Some users of HDFS cache opened HDFS file handles to avoid repeated 
> roundtrips to the NameNode. For example, Impala caches up to 20,000 HDFS file 
> handles by default. Recent tests on erasure coded files show that the open 
> file handles can consume a large amount of memory when not in use.
> For example, here is output from Impala's JMX endpoint when 608 file handles 
> are cached
> {noformat}
> {
> "name": "java.nio:type=BufferPool,name=direct",
> "modelerType": "sun.management.ManagementFactoryHelper$1",
> "Name": "direct",
> "TotalCapacity": 1921048960,
> "MemoryUsed": 1921048961,
> "Count": 633,
> "ObjectName": "java.nio:type=BufferPool,name=direct"
> },{noformat}
> This shows direct buffer memory usage of 3MB per DFSStripedInputStream. 
> Attached is output from Eclipse MAT showing that the direct buffers come from 
> DFSStripedInputStream objects. Both Impala and HBase call unbuffer() when a 
> file handle is being cached and potentially unused for significant chunks of 
> time, yet this shows that the memory remains in use.
> To support caching file handles on erasure coded files, DFSStripedInputStream 
> should avoid holding buffers after the unbuffer() call. See HDFS-7694. 
> "unbuffer()" is intended to move an input stream to a lower memory state to 
> support these caching use cases. In particular, the curStripeBuf seems to be 
> allocated from the BUFFER_POOL on a resetCurStripeBuffer(true) call. It is 
> not freed until close().



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14768) In some cases, erasure blocks are corruption when they are reconstruct.

2019-09-17 Thread guojh (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931985#comment-16931985
 ] 

guojh commented on HDFS-14768:
--

[~zhaoyim] I am confused too. [~surendrasingh] Could you help us?

> In some cases, erasure blocks are corruption  when they are reconstruct.
> 
>
> Key: HDFS-14768
> URL: https://issues.apache.org/jira/browse/HDFS-14768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, erasure-coding, hdfs, namenode
>Affects Versions: 3.0.2
>Reporter: guojh
>Assignee: guojh
>Priority: Major
>  Labels: patch
> Fix For: 3.3.0
>
> Attachments: 1568275810244.jpg, 1568276338275.jpg, 1568771471942.jpg, 
> HDFS-14768.000.patch, HDFS-14768.001.patch, HDFS-14768.002.patch, 
> HDFS-14768.jpg, guojh_UT_after_deomission.txt, 
> guojh_UT_before_deomission.txt, zhaoyiming_UT_after_deomission.txt, 
> zhaoyiming_UT_beofre_deomission.txt
>
>
> The policy is RS-6-3-1024K and the version is Hadoop 3.0.2.
> Suppose a file's block indices are [0,1,2,3,4,5,6,7,8]. We decommission 
> indices [3,4] and increase the index-6 datanode's 
> pendingReplicationWithoutTargets so that it exceeds 
> replicationStreamsHardLimit (we set 14). Then, after the method 
> chooseSourceDatanodes of BlockManager, the liveBlockIndices is 
> [0,1,2,3,4,5,7,8], and the block counters are Live: 7, Decommission: 2.
> In the method scheduleReconstruction of BlockManager, additionalReplRequired 
> is 9 - 7 = 2. After the Namenode chooses two target datanodes, it assigns an 
> erasure-coding reconstruction task to the target datanode.
> When the datanode gets the task, it builds targetIndices from liveBlockIndices 
> and the target length; the code is below.
> {code:java}
> // code placeholder
> targetIndices = new short[targets.length];
> private void initTargetIndices() {
>   BitSet bitset = reconstructor.getLiveBitSet();
>   int m = 0;
>   hasValidTargets = false;
>   for (int i = 0; i < dataBlkNum + parityBlkNum; i++) {
>     if (!bitset.get(i)) {
>       if (reconstructor.getBlockLen(i) > 0) {
>         if (m < targets.length) {
>           targetIndices[m++] = (short) i;
>           hasValidTargets = true;
>         }
>       }
>     }
>   }
> }
> {code}
> targetIndices[0] = 6, and targetIndices[1] stays 0 from its initial value.
> The StripedReader always creates readers from the first six block indices, 
> i.e. [0,1,2,3,4,5].
> Using the indices [0,1,2,3,4,5] to build target indices [6,0] triggers the ISA-L 
> bug: the data of block index 6 is corrupted (all zeros).
> I wrote a unit test that reproduces this reliably.
> {code:java}
> // code placeholder
> private int replicationStreamsHardLimit = 
> DFSConfigKeys.DFS_NAMENODE_REPLICATION_STREAMS_HARD_LIMIT_DEFAULT;
> numDNs = dataBlocks + parityBlocks + 10;
> @Test(timeout = 24)
> public void testFileDecommission() throws Exception {
>   LOG.info("Starting test testFileDecommission");
>   final Path ecFile = new Path(ecDir, "testFileDecommission");
>   int writeBytes = cellSize * dataBlocks;
>   writeStripedFile(dfs, ecFile, writeBytes);
>   Assert.assertEquals(0, bm.numOfUnderReplicatedBlocks());
>   FileChecksum fileChecksum1 = dfs.getFileChecksum(ecFile, writeBytes);
>   final INodeFile fileNode = cluster.getNamesystem().getFSDirectory()
>   .getINode4Write(ecFile.toString()).asFile();
>   LocatedBlocks locatedBlocks =
>   StripedFileTestUtil.getLocatedBlocks(ecFile, dfs);
>   LocatedBlock lb = dfs.getClient().getLocatedBlocks(ecFile.toString(), 0)
>   .get(0);
>   DatanodeInfo[] dnLocs = lb.getLocations();
>   LocatedStripedBlock lastBlock =
>   (LocatedStripedBlock)locatedBlocks.getLastLocatedBlock();
>   DatanodeInfo[] storageInfos = lastBlock.getLocations();
>   //
>   DatanodeDescriptor datanodeDescriptor = 
> cluster.getNameNode().getNamesystem()
>   
> .getBlockManager().getDatanodeManager().getDatanode(storageInfos[6].getDatanodeUuid());
>   BlockInfo firstBlock = fileNode.getBlocks()[0];
>   DatanodeStorageInfo[] dStorageInfos = bm.getStorages(firstBlock);
>   // the first heartbeat will consume 3 replica tasks
>   for (int i = 0; i <= replicationStreamsHardLimit + 3; i++) {
> BlockManagerTestUtil.addBlockToBeReplicated(datanodeDescriptor, new 
> Block(i),
> new DatanodeStorageInfo[]{dStorageInfos[0]});
>   }
>   assertEquals(dataBlocks + parityBlocks, dnLocs.length);
>   int[] decommNodeIndex = {3, 4};
>   final List decommisionNodes = new ArrayList();
>   // add the node which will be decommissioning
>   decommisionNodes.add(dnLocs[decommNodeIndex[0]]);
>   decommisionNodes.add(dnLocs[decommNodeIndex[1]]);
>   decommissionNode(0, decommisionNodes, AdminStates.DECOMMISSIONED);
>   assertEquals(decommisionNodes.size(), fsn.getNumDecomLiveDataNodes());
>   

[jira] [Commented] (HDFS-14461) RBF: Fix intermittently failing kerberos related unit test

2019-09-17 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931984#comment-16931984
 ] 

Íñigo Goiri commented on HDFS-14461:


As [~eyang] mentioned in HDFS-14609, this JIRA interacts with that one.
Not sure how to break the deadlock here.
Can we test both patches together here?

> RBF: Fix intermittently failing kerberos related unit test
> --
>
> Key: HDFS-14461
> URL: https://issues.apache.org/jira/browse/HDFS-14461
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14461.001.patch, HDFS-14461.002.patch
>
>
> TestRouterHttpDelegationToken#testGetDelegationToken fails intermittently. It 
> may be due to some race condition before using the keytab that's created for 
> testing.
>  
> {code:java}
>  Failed
> org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken.testGetDelegationToken
>  Failing for the past 1 build (Since 
> [!https://builds.apache.org/static/1e9ab9cc/images/16x16/red.png! 
> #26721|https://builds.apache.org/job/PreCommit-HDFS-Build/26721/] )
>  [Took 89 
> ms.|https://builds.apache.org/job/PreCommit-HDFS-Build/26721/testReport/org.apache.hadoop.hdfs.server.federation.security/TestRouterHttpDelegationToken/testGetDelegationToken/history]
>   
>  Error Message
> org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  javax.security.auth.login.LoginException: Integrity check on decrypted field 
> failed (31) - PREAUTH_FAILED
> h3. Stacktrace
> org.apache.hadoop.service.ServiceStateException: 
> org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  javax.security.auth.login.LoginException: Integrity check on decrypted field 
> failed (31) - PREAUTH_FAILED at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
>  at org.apache.hadoop.service.AbstractService.init(AbstractService.java:173) 
> at 
> org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken.setup(TestRouterHttpDelegationToken.java:99)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) 
> Caused by: org.apache.hadoop.security.KerberosAuthException: failure to 
> login: for principal: router/localh...@example.com from keytab 
> 

[jira] [Issue Comment Deleted] (HDFS-14768) In some cases, erasure blocks are corruption when they are reconstruct.

2019-09-17 Thread guojh (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guojh updated HDFS-14768:
-
Comment: was deleted

(was: [~zhaoyim] I am confused too. If you add a breakpoint in the parity check, 
as in the following screenshot, you can see that the parity block is wrong, but if 
you add the breakpoint somewhere else, the UT may pass; maybe the block is 
still changing, but I can't figure it out. [~surendrasingh] can you help me? Thanks!  
!1568771471942.jpg!)

> In some cases, erasure blocks are corruption  when they are reconstruct.
> 
>
> Key: HDFS-14768
> URL: https://issues.apache.org/jira/browse/HDFS-14768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, erasure-coding, hdfs, namenode
>Affects Versions: 3.0.2
>Reporter: guojh
>Assignee: guojh
>Priority: Major
>  Labels: patch
> Fix For: 3.3.0
>
> Attachments: 1568275810244.jpg, 1568276338275.jpg, 1568771471942.jpg, 
> HDFS-14768.000.patch, HDFS-14768.001.patch, HDFS-14768.002.patch, 
> HDFS-14768.jpg, guojh_UT_after_deomission.txt, 
> guojh_UT_before_deomission.txt, zhaoyiming_UT_after_deomission.txt, 
> zhaoyiming_UT_beofre_deomission.txt
>
>
> The policy is RS-6-3-1024K and the version is Hadoop 3.0.2.
> Suppose a file's block indices are [0,1,2,3,4,5,6,7,8]. We decommission 
> indices [3,4] and increase the index-6 datanode's 
> pendingReplicationWithoutTargets so that it exceeds 
> replicationStreamsHardLimit (we set 14). Then, after the method 
> chooseSourceDatanodes of BlockManager, the liveBlockIndices is 
> [0,1,2,3,4,5,7,8], and the block counters are Live: 7, Decommission: 2.
> In the method scheduleReconstruction of BlockManager, additionalReplRequired 
> is 9 - 7 = 2. After the Namenode chooses two target datanodes, it assigns an 
> erasure-coding reconstruction task to the target datanode.
> When the datanode gets the task, it builds targetIndices from liveBlockIndices 
> and the target length; the code is below.
> {code:java}
> // code placeholder
> targetIndices = new short[targets.length];
> private void initTargetIndices() {
>   BitSet bitset = reconstructor.getLiveBitSet();
>   int m = 0;
>   hasValidTargets = false;
>   for (int i = 0; i < dataBlkNum + parityBlkNum; i++) {
>     if (!bitset.get(i)) {
>       if (reconstructor.getBlockLen(i) > 0) {
>         if (m < targets.length) {
>           targetIndices[m++] = (short) i;
>           hasValidTargets = true;
>         }
>       }
>     }
>   }
> }
> {code}
> targetIndices[0] = 6, and targetIndices[1] stays 0 from its initial value.
> The StripedReader always creates readers from the first six block indices, 
> i.e. [0,1,2,3,4,5].
> Using the indices [0,1,2,3,4,5] to build target indices [6,0] triggers the ISA-L 
> bug: the data of block index 6 is corrupted (all zeros).
> I wrote a unit test that reproduces this reliably.
> {code:java}
> // code placeholder
> private int replicationStreamsHardLimit = 
> DFSConfigKeys.DFS_NAMENODE_REPLICATION_STREAMS_HARD_LIMIT_DEFAULT;
> numDNs = dataBlocks + parityBlocks + 10;
> @Test(timeout = 24)
> public void testFileDecommission() throws Exception {
>   LOG.info("Starting test testFileDecommission");
>   final Path ecFile = new Path(ecDir, "testFileDecommission");
>   int writeBytes = cellSize * dataBlocks;
>   writeStripedFile(dfs, ecFile, writeBytes);
>   Assert.assertEquals(0, bm.numOfUnderReplicatedBlocks());
>   FileChecksum fileChecksum1 = dfs.getFileChecksum(ecFile, writeBytes);
>   final INodeFile fileNode = cluster.getNamesystem().getFSDirectory()
>   .getINode4Write(ecFile.toString()).asFile();
>   LocatedBlocks locatedBlocks =
>   StripedFileTestUtil.getLocatedBlocks(ecFile, dfs);
>   LocatedBlock lb = dfs.getClient().getLocatedBlocks(ecFile.toString(), 0)
>   .get(0);
>   DatanodeInfo[] dnLocs = lb.getLocations();
>   LocatedStripedBlock lastBlock =
>   (LocatedStripedBlock)locatedBlocks.getLastLocatedBlock();
>   DatanodeInfo[] storageInfos = lastBlock.getLocations();
>   //
>   DatanodeDescriptor datanodeDescriptor = 
> cluster.getNameNode().getNamesystem()
>   
> .getBlockManager().getDatanodeManager().getDatanode(storageInfos[6].getDatanodeUuid());
>   BlockInfo firstBlock = fileNode.getBlocks()[0];
>   DatanodeStorageInfo[] dStorageInfos = bm.getStorages(firstBlock);
>   // the first heartbeat will consume 3 replica tasks
>   for (int i = 0; i <= replicationStreamsHardLimit + 3; i++) {
> BlockManagerTestUtil.addBlockToBeReplicated(datanodeDescriptor, new 
> Block(i),
> new DatanodeStorageInfo[]{dStorageInfos[0]});
>   }
>   assertEquals(dataBlocks + parityBlocks, dnLocs.length);
>   int[] decommNodeIndex = {3, 4};
>   final List decommisionNodes = new ArrayList();
>   // add the node which will be decommissioning
>   decommisionNodes.add(dnLocs[decommNodeIndex[0]]);
>   

[jira] [Commented] (HDFS-14768) In some cases, erasure blocks are corruption when they are reconstruct.

2019-09-17 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931967#comment-16931967
 ] 

Hadoop QA commented on HDFS-14768:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HDFS-14768 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14768 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27896/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> In some cases, erasure blocks are corruption  when they are reconstruct.
> 
>
> Key: HDFS-14768
> URL: https://issues.apache.org/jira/browse/HDFS-14768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, erasure-coding, hdfs, namenode
>Affects Versions: 3.0.2
>Reporter: guojh
>Assignee: guojh
>Priority: Major
>  Labels: patch
> Fix For: 3.3.0
>
> Attachments: 1568275810244.jpg, 1568276338275.jpg, 1568771471942.jpg, 
> HDFS-14768.000.patch, HDFS-14768.001.patch, HDFS-14768.002.patch, 
> HDFS-14768.jpg, guojh_UT_after_deomission.txt, 
> guojh_UT_before_deomission.txt, zhaoyiming_UT_after_deomission.txt, 
> zhaoyiming_UT_beofre_deomission.txt
>
>
> The policy is RS-6-3-1024K and the version is Hadoop 3.0.2.
> Suppose a file's block indices are [0,1,2,3,4,5,6,7,8]. We decommission 
> indices [3,4] and increase the index-6 datanode's 
> pendingReplicationWithoutTargets so that it exceeds 
> replicationStreamsHardLimit (we set 14). Then, after the method 
> chooseSourceDatanodes of BlockManager, the liveBlockIndices is 
> [0,1,2,3,4,5,7,8], and the block counters are Live: 7, Decommission: 2.
> In the method scheduleReconstruction of BlockManager, additionalReplRequired 
> is 9 - 7 = 2. After the Namenode chooses two target datanodes, it assigns an 
> erasure-coding reconstruction task to the target datanode.
> When the datanode gets the task, it builds targetIndices from liveBlockIndices 
> and the target length; the code is below.
> {code:java}
> // code placeholder
> targetIndices = new short[targets.length];
> private void initTargetIndices() {
>   BitSet bitset = reconstructor.getLiveBitSet();
>   int m = 0;
>   hasValidTargets = false;
>   for (int i = 0; i < dataBlkNum + parityBlkNum; i++) {
>     if (!bitset.get(i)) {
>       if (reconstructor.getBlockLen(i) > 0) {
>         if (m < targets.length) {
>           targetIndices[m++] = (short) i;
>           hasValidTargets = true;
>         }
>       }
>     }
>   }
> }
> {code}
> targetIndices[0] = 6, and targetIndices[1] stays 0 from its initial value.
> The StripedReader always creates readers from the first six block indices, 
> i.e. [0,1,2,3,4,5].
> Using the indices [0,1,2,3,4,5] to build target indices [6,0] triggers the ISA-L 
> bug: the data of block index 6 is corrupted (all zeros).
> I wrote a unit test that reproduces this reliably.
> {code:java}
> // code placeholder
> private int replicationStreamsHardLimit = 
> DFSConfigKeys.DFS_NAMENODE_REPLICATION_STREAMS_HARD_LIMIT_DEFAULT;
> numDNs = dataBlocks + parityBlocks + 10;
> @Test(timeout = 24)
> public void testFileDecommission() throws Exception {
>   LOG.info("Starting test testFileDecommission");
>   final Path ecFile = new Path(ecDir, "testFileDecommission");
>   int writeBytes = cellSize * dataBlocks;
>   writeStripedFile(dfs, ecFile, writeBytes);
>   Assert.assertEquals(0, bm.numOfUnderReplicatedBlocks());
>   FileChecksum fileChecksum1 = dfs.getFileChecksum(ecFile, writeBytes);
>   final INodeFile fileNode = cluster.getNamesystem().getFSDirectory()
>   .getINode4Write(ecFile.toString()).asFile();
>   LocatedBlocks locatedBlocks =
>   StripedFileTestUtil.getLocatedBlocks(ecFile, dfs);
>   LocatedBlock lb = dfs.getClient().getLocatedBlocks(ecFile.toString(), 0)
>   .get(0);
>   DatanodeInfo[] dnLocs = lb.getLocations();
>   LocatedStripedBlock lastBlock =
>   (LocatedStripedBlock)locatedBlocks.getLastLocatedBlock();
>   DatanodeInfo[] storageInfos = lastBlock.getLocations();
>   //
>   DatanodeDescriptor datanodeDescriptor = 
> cluster.getNameNode().getNamesystem()
>   
> .getBlockManager().getDatanodeManager().getDatanode(storageInfos[6].getDatanodeUuid());
>   BlockInfo firstBlock = fileNode.getBlocks()[0];
>   DatanodeStorageInfo[] dStorageInfos = bm.getStorages(firstBlock);
>   // the first heartbeat will consume 3 replica tasks
>   for (int i = 0; i <= replicationStreamsHardLimit + 3; i++) {
>  

[jira] [Commented] (HDFS-14768) In some cases, erasure blocks are corruption when they are reconstruct.

2019-09-17 Thread guojh (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931965#comment-16931965
 ] 

guojh commented on HDFS-14768:
--

[~zhaoyim] I am confused too. If you add a breakpoint in the parity check, as in 
the following screenshot, you can see that the parity block is wrong, but if you 
add the breakpoint somewhere else, the UT may pass; maybe the block is still 
changing, but I can't figure it out. [~surendrasingh] can you help me? Thanks!  
!1568771471942.jpg!

> In some cases, erasure blocks are corruption  when they are reconstruct.
> 
>
> Key: HDFS-14768
> URL: https://issues.apache.org/jira/browse/HDFS-14768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, erasure-coding, hdfs, namenode
>Affects Versions: 3.0.2
>Reporter: guojh
>Assignee: guojh
>Priority: Major
>  Labels: patch
> Fix For: 3.3.0
>
> Attachments: 1568275810244.jpg, 1568276338275.jpg, 1568771471942.jpg, 
> HDFS-14768.000.patch, HDFS-14768.001.patch, HDFS-14768.002.patch, 
> HDFS-14768.jpg, guojh_UT_after_deomission.txt, 
> guojh_UT_before_deomission.txt, zhaoyiming_UT_after_deomission.txt, 
> zhaoyiming_UT_beofre_deomission.txt
>
>
> The policy is RS-6-3-1024K and the version is Hadoop 3.0.2.
> Suppose a file's block indices are [0,1,2,3,4,5,6,7,8]. We decommission 
> indices [3,4] and increase the index-6 datanode's 
> pendingReplicationWithoutTargets so that it exceeds 
> replicationStreamsHardLimit (we set 14). Then, after the method 
> chooseSourceDatanodes of BlockManager, the liveBlockIndices is 
> [0,1,2,3,4,5,7,8], and the block counters are Live: 7, Decommission: 2.
> In the method scheduleReconstruction of BlockManager, additionalReplRequired 
> is 9 - 7 = 2. After the Namenode chooses two target datanodes, it assigns an 
> erasure-coding reconstruction task to the target datanode.
> When the datanode gets the task, it builds targetIndices from liveBlockIndices 
> and the target length; the code is below.
> {code:java}
> // code placeholder
> targetIndices = new short[targets.length];
> private void initTargetIndices() {
>   BitSet bitset = reconstructor.getLiveBitSet();
>   int m = 0;
>   hasValidTargets = false;
>   for (int i = 0; i < dataBlkNum + parityBlkNum; i++) {
>     if (!bitset.get(i)) {
>       if (reconstructor.getBlockLen(i) > 0) {
>         if (m < targets.length) {
>           targetIndices[m++] = (short) i;
>           hasValidTargets = true;
>         }
>       }
>     }
>   }
> }
> {code}
> targetIndices[0] = 6, and targetIndices[1] stays 0 from its initial value.
> The StripedReader always creates readers from the first six block indices, 
> i.e. [0,1,2,3,4,5].
> Using the indices [0,1,2,3,4,5] to build target indices [6,0] triggers the ISA-L 
> bug: the data of block index 6 is corrupted (all zeros).
> I wrote a unit test that reproduces this reliably.
> {code:java}
> // code placeholder
> private int replicationStreamsHardLimit = 
> DFSConfigKeys.DFS_NAMENODE_REPLICATION_STREAMS_HARD_LIMIT_DEFAULT;
> numDNs = dataBlocks + parityBlocks + 10;
> @Test(timeout = 24)
> public void testFileDecommission() throws Exception {
>   LOG.info("Starting test testFileDecommission");
>   final Path ecFile = new Path(ecDir, "testFileDecommission");
>   int writeBytes = cellSize * dataBlocks;
>   writeStripedFile(dfs, ecFile, writeBytes);
>   Assert.assertEquals(0, bm.numOfUnderReplicatedBlocks());
>   FileChecksum fileChecksum1 = dfs.getFileChecksum(ecFile, writeBytes);
>   final INodeFile fileNode = cluster.getNamesystem().getFSDirectory()
>   .getINode4Write(ecFile.toString()).asFile();
>   LocatedBlocks locatedBlocks =
>   StripedFileTestUtil.getLocatedBlocks(ecFile, dfs);
>   LocatedBlock lb = dfs.getClient().getLocatedBlocks(ecFile.toString(), 0)
>   .get(0);
>   DatanodeInfo[] dnLocs = lb.getLocations();
>   LocatedStripedBlock lastBlock =
>   (LocatedStripedBlock)locatedBlocks.getLastLocatedBlock();
>   DatanodeInfo[] storageInfos = lastBlock.getLocations();
>   //
>   DatanodeDescriptor datanodeDescriptor = 
> cluster.getNameNode().getNamesystem()
>   
> .getBlockManager().getDatanodeManager().getDatanode(storageInfos[6].getDatanodeUuid());
>   BlockInfo firstBlock = fileNode.getBlocks()[0];
>   DatanodeStorageInfo[] dStorageInfos = bm.getStorages(firstBlock);
>   // the first heartbeat will consume 3 replica tasks
>   for (int i = 0; i <= replicationStreamsHardLimit + 3; i++) {
> BlockManagerTestUtil.addBlockToBeReplicated(datanodeDescriptor, new 
> Block(i),
> new DatanodeStorageInfo[]{dStorageInfos[0]});
>   }
>   assertEquals(dataBlocks + parityBlocks, dnLocs.length);
>   int[] decommNodeIndex = {3, 4};
>   final List decommisionNodes = new ArrayList();
>   // add the node which will be decommissioning
>   

[jira] [Updated] (HDFS-14768) In some cases, erasure blocks are corruption when they are reconstruct.

2019-09-17 Thread guojh (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guojh updated HDFS-14768:
-
Attachment: 1568771471942.jpg

> In some cases, erasure blocks are corruption  when they are reconstruct.
> 
>
> Key: HDFS-14768
> URL: https://issues.apache.org/jira/browse/HDFS-14768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, erasure-coding, hdfs, namenode
>Affects Versions: 3.0.2
>Reporter: guojh
>Assignee: guojh
>Priority: Major
>  Labels: patch
> Fix For: 3.3.0
>
> Attachments: 1568275810244.jpg, 1568276338275.jpg, 1568771471942.jpg, 
> HDFS-14768.000.patch, HDFS-14768.001.patch, HDFS-14768.002.patch, 
> HDFS-14768.jpg, guojh_UT_after_deomission.txt, 
> guojh_UT_before_deomission.txt, zhaoyiming_UT_after_deomission.txt, 
> zhaoyiming_UT_beofre_deomission.txt
>
>
> The policy is RS-6-3-1024K and the version is Hadoop 3.0.2.
> Suppose a file's block indices are [0,1,2,3,4,5,6,7,8]. We decommission 
> indices [3,4] and increase the index-6 datanode's 
> pendingReplicationWithoutTargets so that it exceeds 
> replicationStreamsHardLimit (we set 14). Then, after the method 
> chooseSourceDatanodes of BlockManager, the liveBlockIndices is 
> [0,1,2,3,4,5,7,8], and the block counters are Live: 7, Decommission: 2.
> In the method scheduleReconstruction of BlockManager, additionalReplRequired 
> is 9 - 7 = 2. After the Namenode chooses two target datanodes, it assigns an 
> erasure-coding reconstruction task to the target datanode.
> When the datanode gets the task, it builds targetIndices from liveBlockIndices 
> and the target length; the code is below.
> {code:java}
> // code placeholder
> targetIndices = new short[targets.length];
> private void initTargetIndices() {
>   BitSet bitset = reconstructor.getLiveBitSet();
>   int m = 0;
>   hasValidTargets = false;
>   for (int i = 0; i < dataBlkNum + parityBlkNum; i++) {
>     if (!bitset.get(i)) {
>       if (reconstructor.getBlockLen(i) > 0) {
>         if (m < targets.length) {
>           targetIndices[m++] = (short) i;
>           hasValidTargets = true;
>         }
>       }
>     }
>   }
> }
> {code}
> targetIndices[0] = 6, and targetIndices[1] stays 0 from its initial value.
> The StripedReader always creates readers from the first six block indices, 
> i.e. [0,1,2,3,4,5].
> Using the indices [0,1,2,3,4,5] to build target indices [6,0] triggers the ISA-L 
> bug: the data of block index 6 is corrupted (all zeros).
> I wrote a unit test that reproduces this reliably.
> {code:java}
> // code placeholder
> private int replicationStreamsHardLimit = 
> DFSConfigKeys.DFS_NAMENODE_REPLICATION_STREAMS_HARD_LIMIT_DEFAULT;
> numDNs = dataBlocks + parityBlocks + 10;
> @Test(timeout = 24)
> public void testFileDecommission() throws Exception {
>   LOG.info("Starting test testFileDecommission");
>   final Path ecFile = new Path(ecDir, "testFileDecommission");
>   int writeBytes = cellSize * dataBlocks;
>   writeStripedFile(dfs, ecFile, writeBytes);
>   Assert.assertEquals(0, bm.numOfUnderReplicatedBlocks());
>   FileChecksum fileChecksum1 = dfs.getFileChecksum(ecFile, writeBytes);
>   final INodeFile fileNode = cluster.getNamesystem().getFSDirectory()
>   .getINode4Write(ecFile.toString()).asFile();
>   LocatedBlocks locatedBlocks =
>   StripedFileTestUtil.getLocatedBlocks(ecFile, dfs);
>   LocatedBlock lb = dfs.getClient().getLocatedBlocks(ecFile.toString(), 0)
>   .get(0);
>   DatanodeInfo[] dnLocs = lb.getLocations();
>   LocatedStripedBlock lastBlock =
>   (LocatedStripedBlock)locatedBlocks.getLastLocatedBlock();
>   DatanodeInfo[] storageInfos = lastBlock.getLocations();
>   //
>   DatanodeDescriptor datanodeDescriptor = 
> cluster.getNameNode().getNamesystem()
>   
> .getBlockManager().getDatanodeManager().getDatanode(storageInfos[6].getDatanodeUuid());
>   BlockInfo firstBlock = fileNode.getBlocks()[0];
>   DatanodeStorageInfo[] dStorageInfos = bm.getStorages(firstBlock);
>   // the first heartbeat will consume 3 replica tasks
>   for (int i = 0; i <= replicationStreamsHardLimit + 3; i++) {
> BlockManagerTestUtil.addBlockToBeReplicated(datanodeDescriptor, new 
> Block(i),
> new DatanodeStorageInfo[]{dStorageInfos[0]});
>   }
>   assertEquals(dataBlocks + parityBlocks, dnLocs.length);
>   int[] decommNodeIndex = {3, 4};
>   final List decommisionNodes = new ArrayList();
>   // add the node which will be decommissioning
>   decommisionNodes.add(dnLocs[decommNodeIndex[0]]);
>   decommisionNodes.add(dnLocs[decommNodeIndex[1]]);
>   decommissionNode(0, decommisionNodes, AdminStates.DECOMMISSIONED);
>   assertEquals(decommisionNodes.size(), fsn.getNumDecomLiveDataNodes());
>   bm.getDatanodeManager().removeDatanode(datanodeDescriptor);
>   //assertNull(checkFile(dfs, ecFile, 9, decommisionNodes, numDNs));

[jira] [Work logged] (HDDS-2137) HddsClientUtils and OzoneUtils have duplicate verifyResourceName()

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2137?focusedWorklogId=314094=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314094
 ]

ASF GitHub Bot logged work on HDDS-2137:


Author: ASF GitHub Bot
Created on: 18/Sep/19 00:55
Start Date: 18/Sep/19 00:55
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1455: 
HDDS-2137 : OzoneUtils to verify resourceName using HddsClientUtils
URL: https://github.com/apache/hadoop/pull/1455#discussion_r325443331
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/HddsClientUtils.java
 ##
 @@ -126,75 +126,63 @@ public static long formatDateTime(String date) throws 
ParseException {
 .toInstant().toEpochMilli();
   }
 
-
-
   /**
* verifies that bucket name / volume name is a valid DNS name.
*
* @param resName Bucket or volume Name to be validated
*
* @throws IllegalArgumentException
*/
-  public static void verifyResourceName(String resName)
-  throws IllegalArgumentException {
-
+  public static void verifyResourceName(String resName) throws 
IllegalArgumentException {
 if (resName == null) {
   throw new IllegalArgumentException("Bucket or Volume name is null");
 }
 
-if ((resName.length() < OzoneConsts.OZONE_MIN_BUCKET_NAME_LENGTH) ||
-(resName.length() > OzoneConsts.OZONE_MAX_BUCKET_NAME_LENGTH)) {
+if (resName.length() < OzoneConsts.OZONE_MIN_BUCKET_NAME_LENGTH ||
+resName.length() > OzoneConsts.OZONE_MAX_BUCKET_NAME_LENGTH) {
   throw new IllegalArgumentException(
-  "Bucket or Volume length is illegal, " +
-  "valid length is 3-63 characters");
+  "Bucket or Volume length is illegal, valid length is 3-63 
characters");
 }
 
-if ((resName.charAt(0) == '.') || (resName.charAt(0) == '-')) {
+if (resName.charAt(0) == '.' || resName.charAt(0) == '-') {
   throw new IllegalArgumentException(
   "Bucket or Volume name cannot start with a period or dash");
 }
 
 if ((resName.charAt(resName.length() - 1) == '.') ||
 
 Review comment:
   `if (resName.charAt(resName.length() - 1) == '-' || 
resName.charAt(resName.length() - 1) == '-') {`
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314094)
Time Spent: 1h 50m  (was: 1h 40m)

> HddsClientUtils and OzoneUtils have duplicate verifyResourceName()
> --
>
> Key: HDDS-2137
> URL: https://issues.apache.org/jira/browse/HDDS-2137
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> HddsClientUtils and OzoneUtils can share the method to verify resource name 
> that verifies if the bucket/volume name is a valid DNS name.
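A minimal sketch of the deduplication the title describes, assuming OzoneUtils simply
delegates to HddsClientUtils (the committed change may differ in detail):

{code:java}
import org.apache.hadoop.hdds.scm.client.HddsClientUtils;

public final class OzoneUtilsSketch {
  private OzoneUtilsSketch() { }

  // Keep the public entry point but route it to HddsClientUtils so the
  // bucket/volume DNS-name validation has a single source of truth.
  public static void verifyResourceName(String resName)
      throws IllegalArgumentException {
    HddsClientUtils.verifyResourceName(resName);
  }
}
{code}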



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2137) HddsClientUtils and OzoneUtils have duplicate verifyResourceName()

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2137?focusedWorklogId=314095=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314095
 ]

ASF GitHub Bot logged work on HDDS-2137:


Author: ASF GitHub Bot
Created on: 18/Sep/19 00:55
Start Date: 18/Sep/19 00:55
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1455: 
HDDS-2137 : OzoneUtils to verify resourceName using HddsClientUtils
URL: https://github.com/apache/hadoop/pull/1455#discussion_r325443331
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/HddsClientUtils.java
 ##
 @@ -126,75 +126,63 @@ public static long formatDateTime(String date) throws 
ParseException {
 .toInstant().toEpochMilli();
   }
 
-
-
   /**
* verifies that bucket name / volume name is a valid DNS name.
*
* @param resName Bucket or volume Name to be validated
*
* @throws IllegalArgumentException
*/
-  public static void verifyResourceName(String resName)
-  throws IllegalArgumentException {
-
+  public static void verifyResourceName(String resName) throws 
IllegalArgumentException {
 if (resName == null) {
   throw new IllegalArgumentException("Bucket or Volume name is null");
 }
 
-if ((resName.length() < OzoneConsts.OZONE_MIN_BUCKET_NAME_LENGTH) ||
-(resName.length() > OzoneConsts.OZONE_MAX_BUCKET_NAME_LENGTH)) {
+if (resName.length() < OzoneConsts.OZONE_MIN_BUCKET_NAME_LENGTH ||
+resName.length() > OzoneConsts.OZONE_MAX_BUCKET_NAME_LENGTH) {
   throw new IllegalArgumentException(
-  "Bucket or Volume length is illegal, " +
-  "valid length is 3-63 characters");
+  "Bucket or Volume length is illegal, valid length is 3-63 
characters");
 }
 
-if ((resName.charAt(0) == '.') || (resName.charAt(0) == '-')) {
+if (resName.charAt(0) == '.' || resName.charAt(0) == '-') {
   throw new IllegalArgumentException(
   "Bucket or Volume name cannot start with a period or dash");
 }
 
 if ((resName.charAt(resName.length() - 1) == '.') ||
 
 Review comment:
   `if (resName.charAt(resName.length() - 1) == '-' || 
resName.charAt(resName.length() - 1) == '-') {`
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314095)
Time Spent: 2h  (was: 1h 50m)

> HddsClientUtils and OzoneUtils have duplicate verifyResourceName()
> --
>
> Key: HDDS-2137
> URL: https://issues.apache.org/jira/browse/HDDS-2137
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> HddsClientUtils and OzoneUtils can share the method to verify resource name 
> that verifies if the bucket/volume name is a valid DNS name.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2137) HddsClientUtils and OzoneUtils have duplicate verifyResourceName()

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2137?focusedWorklogId=314093=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314093
 ]

ASF GitHub Bot logged work on HDDS-2137:


Author: ASF GitHub Bot
Created on: 18/Sep/19 00:54
Start Date: 18/Sep/19 00:54
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1455: 
HDDS-2137 : OzoneUtils to verify resourceName using HddsClientUtils
URL: https://github.com/apache/hadoop/pull/1455#discussion_r325443145
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/HddsClientUtils.java
 ##
 @@ -126,75 +126,63 @@ public static long formatDateTime(String date) throws 
ParseException {
 .toInstant().toEpochMilli();
   }
 
-
-
   /**
* verifies that bucket name / volume name is a valid DNS name.
*
* @param resName Bucket or volume Name to be validated
*
* @throws IllegalArgumentException
*/
-  public static void verifyResourceName(String resName)
-  throws IllegalArgumentException {
-
+  public static void verifyResourceName(String resName) throws 
IllegalArgumentException {
 if (resName == null) {
   throw new IllegalArgumentException("Bucket or Volume name is null");
 }
 
-if ((resName.length() < OzoneConsts.OZONE_MIN_BUCKET_NAME_LENGTH) ||
-(resName.length() > OzoneConsts.OZONE_MAX_BUCKET_NAME_LENGTH)) {
+if (resName.length() < OzoneConsts.OZONE_MIN_BUCKET_NAME_LENGTH ||
+resName.length() > OzoneConsts.OZONE_MAX_BUCKET_NAME_LENGTH) {
   throw new IllegalArgumentException(
-  "Bucket or Volume length is illegal, " +
-  "valid length is 3-63 characters");
+  "Bucket or Volume length is illegal, valid length is 3-63 
characters");
 }
 
-if ((resName.charAt(0) == '.') || (resName.charAt(0) == '-')) {
+if (resName.charAt(0) == '.' || resName.charAt(0) == '-') {
   throw new IllegalArgumentException(
   "Bucket or Volume name cannot start with a period or dash");
 }
 
 if ((resName.charAt(resName.length() - 1) == '.') ||
 
 Review comment:
   I see the braces are removed everywhere, to be consistent can we remove from 
here too?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314093)
Time Spent: 1h 40m  (was: 1.5h)

> HddsClientUtils and OzoneUtils have duplicate verifyResourceName()
> --
>
> Key: HDDS-2137
> URL: https://issues.apache.org/jira/browse/HDDS-2137
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> HddsClientUtils and OzoneUtils can share the method to verify resource name 
> that verifies if the bucket/volume name is a valid DNS name.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14609) RBF: Security should use common AuthenticationFilter

2019-09-17 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931942#comment-16931942
 ] 

Eric Yang commented on HDFS-14609:
--

It would be great to have a follow-up for HDFS-14461.  I have reservations about 
giving a +1 because I don't have full visibility into whether both patches would be 
doing the right thing together.  Tentatively +1.

> RBF: Security should use common AuthenticationFilter
> 
>
> Key: HDFS-14609
> URL: https://issues.apache.org/jira/browse/HDFS-14609
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: CR Hota
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14609.001.patch, HDFS-14609.002.patch, 
> HDFS-14609.003.patch, HDFS-14609.004.patch, HDFS-14609.005.patch, 
> HDFS-14609.006.patch
>
>
> We worked on router based federation security as part of HDFS-13532. We kept 
> it compatible with the way namenode works. However with HADOOP-16314 and 
> HDFS-16354 in trunk, the auth filters seem to have been changed, causing tests to 
> fail.
> Changes are needed appropriately in RBF, mainly fixing broken tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2144) MR job failing on secure Ozone cluster

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2144?focusedWorklogId=314092=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314092
 ]

ASF GitHub Bot logged work on HDDS-2144:


Author: ASF GitHub Bot
Created on: 18/Sep/19 00:43
Start Date: 18/Sep/19 00:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1466: HDDS-2144. MR 
job failing on secure Ozone cluster.
URL: https://github.com/apache/hadoop/pull/1466#issuecomment-532469685
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 77 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 29 | hadoop-ozone in trunk failed. |
   | -1 | compile | 20 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 60 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 910 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 189 | trunk passed |
   | 0 | spotbugs | 218 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 26 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 35 | hadoop-ozone in the patch failed. |
   | -1 | compile | 24 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 60 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 784 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | the patch passed |
   | -1 | findbugs | 23 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 294 | hadoop-hdds in the patch passed. |
   | -1 | unit | 25 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 3570 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1466/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1466 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux cbbfbcbc7c91 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cf6e42 |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1466/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1466/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1466/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1466/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1466/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1466/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1466/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1466/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1466/1/testReport/ |
   | Max. process+thread count | 429 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common U: hadoop-ozone/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1466/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:

[jira] [Resolved] (HDDS-2139) Update BeanUtils and Jackson Databind dependency versions

2019-09-17 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2139.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

Thank You [~hanishakoneru] for the contribution.

I have committed this to the trunk.

> Update BeanUtils and Jackson Databind dependency versions
> -
>
> Key: HDDS-2139
> URL: https://issues.apache.org/jira/browse/HDDS-2139
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The following Ozone dependencies have known security vulnerabilities. We 
> should update them to newer/ latest versions.
>  * Apache Common BeanUtils version 1.9.3
>  * Fasterxml Jackson version 2.9.5



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2139) Update BeanUtils and Jackson Databind dependency versions

2019-09-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931938#comment-16931938
 ] 

Hudson commented on HDDS-2139:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17319 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17319/])
HDDS-2139. Update BeanUtils and Jackson Databind dependency versions (bharat: 
rev 0dbfc4d9f29d50b42e1b7d87c8351822fab99b5e)
* (edit) pom.ozone.xml


> Update BeanUtils and Jackson Databind dependency versions
> -
>
> Key: HDDS-2139
> URL: https://issues.apache.org/jira/browse/HDDS-2139
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The following Ozone dependencies have known security vulnerabilities. We 
> should update them to newer/latest versions.
>  * Apache Common BeanUtils version 1.9.3
>  * Fasterxml Jackson version 2.9.5



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2139) Update BeanUtils and Jackson Databind dependency versions

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2139?focusedWorklogId=314091=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314091
 ]

ASF GitHub Bot logged work on HDDS-2139:


Author: ASF GitHub Bot
Created on: 18/Sep/19 00:39
Start Date: 18/Sep/19 00:39
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1456: HDDS-2139. 
Update BeanUtils and Jackson Databind dependency versions.
URL: https://github.com/apache/hadoop/pull/1456#issuecomment-532468971
 
 
   +1 LGTM.
   I have committed this to trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314091)
Time Spent: 40m  (was: 0.5h)

> Update BeanUtils and Jackson Databind dependency versions
> -
>
> Key: HDDS-2139
> URL: https://issues.apache.org/jira/browse/HDDS-2139
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The following Ozone dependencies have known security vulnerabilities. We 
> should update them to newer/latest versions.
>  * Apache Common BeanUtils version 1.9.3
>  * Fasterxml Jackson version 2.9.5



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2139) Update BeanUtils and Jackson Databind dependency versions

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2139?focusedWorklogId=314090=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314090
 ]

ASF GitHub Bot logged work on HDDS-2139:


Author: ASF GitHub Bot
Created on: 18/Sep/19 00:39
Start Date: 18/Sep/19 00:39
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1456: 
HDDS-2139. Update BeanUtils and Jackson Databind dependency versions.
URL: https://github.com/apache/hadoop/pull/1456
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314090)
Time Spent: 0.5h  (was: 20m)

> Update BeanUtils and Jackson Databind dependency versions
> -
>
> Key: HDDS-2139
> URL: https://issues.apache.org/jira/browse/HDDS-2139
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The following Ozone dependencies have known security vulnerabilities. We 
> should update them to newer/latest versions.
>  * Apache Common BeanUtils version 1.9.3
>  * Fasterxml Jackson version 2.9.5



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2057) Incorrect Default OM Port in Ozone FS URI Error Message

2019-09-17 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2057:
-
Labels:   (was: pull-request-available)

> Incorrect Default OM Port in Ozone FS URI Error Message
> ---
>
> Key: HDDS-2057
> URL: https://issues.apache.org/jira/browse/HDDS-2057
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Minor
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> The error message displayed from BasicOzoneFilesystem.initialize specifies 
> 5678 as the OM port. This is not the default port.
> "Ozone file system URL " +
>  "should be one of the following formats: " +
>  "o3fs://bucket.volume/key OR " +
>  "o3fs://bucket.volume.om-host.example.com/key OR " +
>  "o3fs://bucket.volume.om-host.example.com:5678/key";
>  
> This should be fixed to pull the default value from the configuration 
> parameter, instead of a hard-coded value.
>  
>  
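As a rough sketch of the suggested fix, the usage string could interpolate the configured OM port instead of the literal 5678. The config key name "ozone.om.address" and the default port 9862 below are assumptions (an actual patch should use the OMConfigKeys constants), and the helper class is hypothetical:

{code:java}
// Minimal sketch, not the actual BasicOzoneFileSystem code: build the usage
// message from the configured/default OM port rather than a hard-coded 5678.
// The key name "ozone.om.address" and the default 9862 are assumptions here.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.net.NetUtils;

public final class OzoneFsUsageMessageSketch {
  private OzoneFsUsageMessageSketch() { }

  static String usage(Configuration conf) {
    // Fall back to the default OM port when no port is configured explicitly.
    int omPort = NetUtils.createSocketAddr(
        conf.getTrimmed("ozone.om.address", "0.0.0.0:9862"), 9862).getPort();
    return "Ozone file system URL should be one of the following formats: "
        + "o3fs://bucket.volume/key OR "
        + "o3fs://bucket.volume.om-host.example.com/key OR "
        + "o3fs://bucket.volume.om-host.example.com:" + omPort + "/key";
  }
}
{code}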



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2136) OM block allocation metric not paired with its failures

2019-09-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931935#comment-16931935
 ] 

Hudson commented on HDDS-2136:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17318 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17318/])
HDDS-2136. OM block allocation metric not paired with its failures (bharat: rev 
b88b6826c9a31554233b6ca69dc065a001253f30)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMMetrics.java


> OM block allocation metric not paired with its failures
> ---
>
> Key: HDDS-2136
> URL: https://issues.apache.org/jira/browse/HDDS-2136
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: allocation_failures.png, allocations.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Block allocation count and block allocation failure count are shown in 
> separate graphs.
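For readers unfamiliar with the metrics code, the idea is simply to register the failure counter alongside the success counter in the same metrics source so dashboards can plot them together. A minimal sketch using the Hadoop metrics2 API, with illustrative names rather than the actual OMMetrics members:

{code:java}
// Illustrative sketch only: a metrics source that keeps the block-allocation
// counter next to its failure counter so the two series stay paired.
// Field and method names are assumptions, not the real OMMetrics members.
// The @Metric fields are instantiated when this source is registered with
// DefaultMetricsSystem.
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

@Metrics(about = "Ozone Manager metrics (sketch)", context = "ozone")
public class OmBlockAllocationMetricsSketch {
  @Metric private MutableCounterLong numBlockAllocations;
  @Metric private MutableCounterLong numBlockAllocationFails;

  public void incNumBlockAllocations() {
    numBlockAllocations.incr();
  }

  public void incNumBlockAllocationFails() {
    numBlockAllocationFails.incr();
  }
}
{code}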



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2136) OM block allocation metric not paired with its failures

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2136?focusedWorklogId=314086=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314086
 ]

ASF GitHub Bot logged work on HDDS-2136:


Author: ASF GitHub Bot
Created on: 18/Sep/19 00:31
Start Date: 18/Sep/19 00:31
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1460: HDDS-2136. OM 
block allocation metric not paired with its failures
URL: https://github.com/apache/hadoop/pull/1460#issuecomment-532466882
 
 
   Thank You @adoroszlai for the contribution.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314086)
Time Spent: 50m  (was: 40m)

> OM block allocation metric not paired with its failures
> ---
>
> Key: HDDS-2136
> URL: https://issues.apache.org/jira/browse/HDDS-2136
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: allocation_failures.png, allocations.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Block allocation count and block allocation failure count are shown in 
> separate graphs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2136) OM block allocation metric not paired with its failures

2019-09-17 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2136:
-
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> OM block allocation metric not paired with its failures
> ---
>
> Key: HDDS-2136
> URL: https://issues.apache.org/jira/browse/HDDS-2136
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: allocation_failures.png, allocations.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Block allocation count and block allocation failure count are shown in 
> separate graphs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2136) OM block allocation metric not paired with its failures

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2136?focusedWorklogId=314085=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314085
 ]

ASF GitHub Bot logged work on HDDS-2136:


Author: ASF GitHub Bot
Created on: 18/Sep/19 00:31
Start Date: 18/Sep/19 00:31
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1460: 
HDDS-2136. OM block allocation metric not paired with its failures
URL: https://github.com/apache/hadoop/pull/1460
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314085)
Time Spent: 40m  (was: 0.5h)

> OM block allocation metric not paired with its failures
> ---
>
> Key: HDDS-2136
> URL: https://issues.apache.org/jira/browse/HDDS-2136
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: allocation_failures.png, allocations.png
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Block allocation count and block allocation failure count are shown in 
> separate graphs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2142) OM metrics mismatch (abort multipart request)

2019-09-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931933#comment-16931933
 ] 

Hudson commented on HDDS-2142:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17317 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17317/])
HDDS-2142. OM metrics mismatch (abort multipart request) (#1461) (bharat: rev 
a9ba2b6710e808821f13c8557fba501c8c236093)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadAbortRequest.java


> OM metrics mismatch (abort multipart request)
> -
>
> Key: HDDS-2142
> URL: https://issues.apache.org/jira/browse/HDDS-2142
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: abort_multipart-new.png, abort_multipart.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> AbortMultipartUpload failure count can be higher than request count.
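A mismatch like this usually means the request counter is only incremented on some code paths. A minimal, self-contained sketch of the usual pattern, with hypothetical names (not the actual S3MultipartUploadAbortRequest code):

{code:java}
// Illustrative pattern only: count every request before doing the work, so the
// failure counter can never exceed the request counter. Names are hypothetical.
import java.util.concurrent.atomic.AtomicLong;

public class AbortMultipartMetricsSketch {
  private final AtomicLong numAbortRequests = new AtomicLong();
  private final AtomicLong numAbortFails = new AtomicLong();

  public void abort(Runnable doAbort) {
    numAbortRequests.incrementAndGet();     // incremented on every request
    try {
      doAbort.run();
    } catch (RuntimeException e) {
      numAbortFails.incrementAndGet();      // failures remain a subset of requests
      throw e;
    }
  }
}
{code}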



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2142) OM metrics mismatch (abort multipart request)

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2142?focusedWorklogId=314082=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314082
 ]

ASF GitHub Bot logged work on HDDS-2142:


Author: ASF GitHub Bot
Created on: 18/Sep/19 00:27
Start Date: 18/Sep/19 00:27
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1461: HDDS-2142. OM 
metrics mismatch (abort multipart request)
URL: https://github.com/apache/hadoop/pull/1461#issuecomment-532465789
 
 
   Thank You @adoroszlai for the contribution and @arp7 for the review.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314082)
Time Spent: 50m  (was: 40m)

> OM metrics mismatch (abort multipart request)
> -
>
> Key: HDDS-2142
> URL: https://issues.apache.org/jira/browse/HDDS-2142
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: abort_multipart-new.png, abort_multipart.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> AbortMultipartUpload failure count can be higher than request count.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2142) OM metrics mismatch (abort multipart request)

2019-09-17 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2142:
-
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> OM metrics mismatch (abort multipart request)
> -
>
> Key: HDDS-2142
> URL: https://issues.apache.org/jira/browse/HDDS-2142
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: abort_multipart-new.png, abort_multipart.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> AbortMultipartUpload failure count can be higher than request count.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2142) OM metrics mismatch (abort multipart request)

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2142?focusedWorklogId=314081=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314081
 ]

ASF GitHub Bot logged work on HDDS-2142:


Author: ASF GitHub Bot
Created on: 18/Sep/19 00:27
Start Date: 18/Sep/19 00:27
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1461: 
HDDS-2142. OM metrics mismatch (abort multipart request)
URL: https://github.com/apache/hadoop/pull/1461
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314081)
Time Spent: 40m  (was: 0.5h)

> OM metrics mismatch (abort multipart request)
> -
>
> Key: HDDS-2142
> URL: https://issues.apache.org/jira/browse/HDDS-2142
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: abort_multipart-new.png, abort_multipart.png
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> AbortMultipartUpload failure count can be higher than request count.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2143) Rename classes under package org.apache.hadoop.utils

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2143?focusedWorklogId=314071=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314071
 ]

ASF GitHub Bot logged work on HDDS-2143:


Author: ASF GitHub Bot
Created on: 18/Sep/19 00:17
Start Date: 18/Sep/19 00:17
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1465: HDDS-2143. 
Rename classes under package org.apache.hadoop.utils.
URL: https://github.com/apache/hadoop/pull/1465#issuecomment-532463617
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 9 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 61 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 72 | Maven dependency ordering for branch |
   | -1 | mvninstall | 33 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 154 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 993 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 150 | trunk passed |
   | 0 | spotbugs | 175 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 23 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | -1 | mvninstall | 30 | hadoop-ozone in the patch failed. |
   | -1 | compile | 21 | hadoop-ozone in the patch failed. |
   | -1 | javac | 50 | hadoop-hdds generated 6 new + 21 unchanged - 6 fixed = 
27 total (was 27) |
   | -1 | javac | 21 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 57 | hadoop-hdds: The patch generated 19 new + 1708 
unchanged - 13 fixed = 1727 total (was 1721) |
   | -0 | checkstyle | 78 | hadoop-ozone: The patch generated 75 new + 2851 
unchanged - 75 fixed = 2926 total (was 2926) |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 4 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 716 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 74 | hadoop-hdds generated 1 new + 15 unchanged - 1 fixed = 
16 total (was 16) |
   | -1 | javadoc | 50 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 22 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 257 | hadoop-hdds in the patch passed. |
   | -1 | unit | 24 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3554 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1465 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ce9423a26897 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cf6e42 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/diff-compile-javac-hadoop-hdds.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/whitespace-eol.txt
 |
   | javadoc | 

[jira] [Updated] (HDDS-2144) MR job failing on secure Ozone cluster

2019-09-17 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2144:
-
Target Version/s: 0.4.1, 0.5.0
  Status: Patch Available  (was: Open)

> MR job failing on secure Ozone cluster
> --
>
> Key: HDDS-2144
> URL: https://issues.apache.org/jira/browse/HDDS-2144
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Failing with the error below:
> Caused by: Client cannot authenticate via:[TOKEN, KERBEROS]
> org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
> via:[TOKEN, KERBEROS]
> at 
> org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:173)
> at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:390)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:617)
> at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:411)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:804)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:800)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:800)
> at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:411)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1572)
> at org.apache.hadoop.ipc.Client.call(Client.java:1403)
> at org.apache.hadoop.ipc.Client.call(Client.java:1367)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy79.submitRequest(Unknown Source)
> at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy79.submitRequest(Unknown Source)
> at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.submitRequest(OzoneManagerProtocolClientSideTranslatorPB.java:332)
> at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceList(OzoneManagerProtocolClientSideTranslatorPB.java:1163)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66)
> at com.sun.proxy.$Proxy80.getServiceList(Unknown Source)
> at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.getScmAddressForClient(RpcClient.java:248)
> at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:167)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:256)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:239)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:203)
> at 
> org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.<init>(BasicOzoneClientAdapterImpl.java:161)
> at 
> org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl.<init>(OzoneClientAdapterImpl.java:50)
> at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.createAdapter(OzoneFileSystem.java:102)
> at 
> org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:155)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
> at 

[jira] [Work logged] (HDDS-2144) MR job failing on secure Ozone cluster

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2144?focusedWorklogId=314057=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314057
 ]

ASF GitHub Bot logged work on HDDS-2144:


Author: ASF GitHub Bot
Created on: 17/Sep/19 23:42
Start Date: 17/Sep/19 23:42
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1466: 
HDDS-2144. MR job failing on secure Ozone cluster.
URL: https://github.com/apache/hadoop/pull/1466
 
 
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314057)
Remaining Estimate: 0h
Time Spent: 10m

> MR job failing on secure Ozone cluster
> --
>
> Key: HDDS-2144
> URL: https://issues.apache.org/jira/browse/HDDS-2144
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Failing with the error below:
> Caused by: Client cannot authenticate via:[TOKEN, KERBEROS]
> org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
> via:[TOKEN, KERBEROS]
> at 
> org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:173)
> at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:390)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:617)
> at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:411)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:804)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:800)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:800)
> at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:411)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1572)
> at org.apache.hadoop.ipc.Client.call(Client.java:1403)
> at org.apache.hadoop.ipc.Client.call(Client.java:1367)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy79.submitRequest(Unknown Source)
> at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy79.submitRequest(Unknown Source)
> at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.submitRequest(OzoneManagerProtocolClientSideTranslatorPB.java:332)
> at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceList(OzoneManagerProtocolClientSideTranslatorPB.java:1163)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66)
> at com.sun.proxy.$Proxy80.getServiceList(Unknown Source)
> at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.getScmAddressForClient(RpcClient.java:248)
> at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:167)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:256)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:239)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:203)
> at 
> 

[jira] [Updated] (HDDS-2144) MR job failing on secure Ozone cluster

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2144:
-
Labels: pull-request-available  (was: )

> MR job failing on secure Ozone cluster
> --
>
> Key: HDDS-2144
> URL: https://issues.apache.org/jira/browse/HDDS-2144
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
>
> Failing with the error below:
> Caused by: Client cannot authenticate via:[TOKEN, KERBEROS]
> org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
> via:[TOKEN, KERBEROS]
> at 
> org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:173)
> at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:390)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:617)
> at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:411)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:804)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:800)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:800)
> at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:411)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1572)
> at org.apache.hadoop.ipc.Client.call(Client.java:1403)
> at org.apache.hadoop.ipc.Client.call(Client.java:1367)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy79.submitRequest(Unknown Source)
> at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy79.submitRequest(Unknown Source)
> at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.submitRequest(OzoneManagerProtocolClientSideTranslatorPB.java:332)
> at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceList(OzoneManagerProtocolClientSideTranslatorPB.java:1163)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66)
> at com.sun.proxy.$Proxy80.getServiceList(Unknown Source)
> at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.getScmAddressForClient(RpcClient.java:248)
> at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:167)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:256)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:239)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:203)
> at 
> org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.<init>(BasicOzoneClientAdapterImpl.java:161)
> at 
> org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl.<init>(OzoneClientAdapterImpl.java:50)
> at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.createAdapter(OzoneFileSystem.java:102)
> at 
> org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:155)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:268)
> at 

[jira] [Updated] (HDDS-2144) MR job failing on secure Ozone cluster

2019-09-17 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2144:
-
Priority: Blocker  (was: Major)

> MR job failing on secure Ozone cluster
> --
>
> Key: HDDS-2144
> URL: https://issues.apache.org/jira/browse/HDDS-2144
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Blocker
>
> Failing with the error below:
> Caused by: Client cannot authenticate via:[TOKEN, KERBEROS]
> org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
> via:[TOKEN, KERBEROS]
> at 
> org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:173)
> at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:390)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:617)
> at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:411)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:804)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:800)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:800)
> at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:411)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1572)
> at org.apache.hadoop.ipc.Client.call(Client.java:1403)
> at org.apache.hadoop.ipc.Client.call(Client.java:1367)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy79.submitRequest(Unknown Source)
> at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy79.submitRequest(Unknown Source)
> at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.submitRequest(OzoneManagerProtocolClientSideTranslatorPB.java:332)
> at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceList(OzoneManagerProtocolClientSideTranslatorPB.java:1163)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66)
> at com.sun.proxy.$Proxy80.getServiceList(Unknown Source)
> at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.getScmAddressForClient(RpcClient.java:248)
> at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:167)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:256)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:239)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:203)
> at 
> org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.<init>(BasicOzoneClientAdapterImpl.java:161)
> at 
> org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl.<init>(OzoneClientAdapterImpl.java:50)
> at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.createAdapter(OzoneFileSystem.java:102)
> at 
> org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:155)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:268)
> at 

[jira] [Created] (HDDS-2144) MR job failing on secure Ozone cluster

2019-09-17 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2144:


 Summary: MR job failing on secure Ozone cluster
 Key: HDDS-2144
 URL: https://issues.apache.org/jira/browse/HDDS-2144
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


Failing with the error below:
Caused by: Client cannot authenticate via:[TOKEN, KERBEROS]
org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
via:[TOKEN, KERBEROS]
at 
org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:173)
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:390)
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:617)
at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:411)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:804)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:800)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:800)
at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:411)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1572)
at org.apache.hadoop.ipc.Client.call(Client.java:1403)
at org.apache.hadoop.ipc.Client.call(Client.java:1367)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy79.submitRequest(Unknown Source)
at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy79.submitRequest(Unknown Source)
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.submitRequest(OzoneManagerProtocolClientSideTranslatorPB.java:332)
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceList(OzoneManagerProtocolClientSideTranslatorPB.java:1163)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66)
at com.sun.proxy.$Proxy80.getServiceList(Unknown Source)
at 
org.apache.hadoop.ozone.client.rpc.RpcClient.getScmAddressForClient(RpcClient.java:248)
at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:167)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:256)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:239)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:203)
at 
org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.<init>(BasicOzoneClientAdapterImpl.java:161)
at 
org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl.<init>(OzoneClientAdapterImpl.java:50)
at 
org.apache.hadoop.fs.ozone.OzoneFileSystem.createAdapter(OzoneFileSystem.java:102)
at 
org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:155)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:268)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:67)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:414)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:411)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at 
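For context only, and not the fix for this issue: the failing code path relies on the standard Hadoop security mechanisms, where the submitter either authenticates via Kerberos or ships delegation tokens that downstream containers can use via TOKEN auth. A generic sketch of collecting filesystem delegation tokens for a job, with placeholder principal, keytab, and paths:

{code:java}
// Generic Hadoop-security sketch, not the HDDS-2144 fix: log in from a keytab
// and collect delegation tokens for the job's filesystems so that containers
// can authenticate via TOKEN. Principal, keytab, and URI are placeholders.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;

public class SecureJobClientSketch {
  public static Credentials collectTokens(Configuration conf) throws Exception {
    UserGroupInformation.setConfiguration(conf);
    UserGroupInformation.loginUserFromKeytab(
        "user/host@EXAMPLE.COM", "/etc/security/keytabs/user.keytab");
    Credentials creds = new Credentials();
    FileSystem fs = new Path("o3fs://bucket.volume/").getFileSystem(conf);
    // addDelegationTokens stores the obtained tokens in creds for the job.
    fs.addDelegationTokens("yarn", creds);
    return creds;
  }
}
{code}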

[jira] [Work logged] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-730?focusedWorklogId=314050=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314050
 ]

ASF GitHub Bot logged work on HDDS-730:
---

Author: ASF GitHub Bot
Created on: 17/Sep/19 23:15
Start Date: 17/Sep/19 23:15
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1464: HDDS-730. Ozone 
fs cli prints hadoop fs in usage.
URL: https://github.com/apache/hadoop/pull/1464#issuecomment-532437413
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for branch |
   | -1 | mvninstall | 31 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 53 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 794 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | trunk passed |
   | 0 | spotbugs | 173 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 32 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | -1 | mvninstall | 35 | hadoop-ozone in the patch failed. |
   | -1 | compile | 27 | hadoop-ozone in the patch failed. |
   | -1 | javac | 27 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 32 | hadoop-ozone: The patch generated 30 new + 0 
unchanged - 0 fixed = 30 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 33 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 669 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | the patch passed |
   | -1 | findbugs | 29 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 256 | hadoop-hdds in the patch failed. |
   | -1 | unit | 30 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 3310 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1464 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs 
compile javac javadoc mvninstall shadedclient findbugs checkstyle |
   | uname | Linux 67d77efce9db 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f580a87 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 

[jira] [Work logged] (HDDS-2121) Create a shaded ozone filesystem (client) jar

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2121?focusedWorklogId=314049=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314049
 ]

ASF GitHub Bot logged work on HDDS-2121:


Author: ASF GitHub Bot
Created on: 17/Sep/19 23:11
Start Date: 17/Sep/19 23:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1452: HDDS-2121. 
Create a shaded ozone filesystem (client) jar
URL: https://github.com/apache/hadoop/pull/1452#issuecomment-532436493
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 2872 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 38 | hadoop-ozone in trunk failed. |
   | -1 | compile | 24 | hadoop-ozone in trunk failed. |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1210 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 168 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 30 | hadoop-ozone in the patch failed. |
   | -1 | compile | 22 | hadoop-ozone in the patch failed. |
   | -1 | javac | 22 | hadoop-ozone in the patch failed. |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 727 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 152 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 270 | hadoop-hdds in the patch passed. |
   | -1 | unit | 25 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 5807 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1452 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 629d1f742a64 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f580a87 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/5/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/5/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/5/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/5/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/5/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/5/testReport/ |
   | Max. process+thread count | 467 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozonefs-lib-current U: 
hadoop-ozone/ozonefs-lib-current |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/5/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314049)
Time Spent: 1h 20m  (was: 1h 10m)

> Create a shaded ozone filesystem (client) jar
> -
>
> Key: HDDS-2121
> URL: https://issues.apache.org/jira/browse/HDDS-2121
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Arpit Agarwal
>

[jira] [Commented] (HDFS-14846) libhdfs tests are failing on trunk due to jni usage bugs

2019-09-17 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931909#comment-16931909
 ] 

Anu Engineer commented on HDFS-14846:
-

Thank you for your contribution. I have committed this patch to the trunk 
branch.

> libhdfs tests are failing on trunk due to jni usage bugs
> 
>
> Key: HDFS-14846
> URL: https://issues.apache.org/jira/browse/HDFS-14846
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> While working on HDFS-14564, I noticed that the libhdfs tests are failing on 
> trunk (both on Hadoop QA and locally). I did some digging and found out that 
> the {{-Xcheck:jni}} flag is causing a bunch of crashes. I haven't been able 
> to pinpoint what caused this regression, but my best guess is that an upgrade 
> in the JDK we use in Hadoop QA started causing these failures. I looked back 
> at some old JIRAs and it looks like the tests work on Java 1.8.0_212, but 
> Hadoop QA is running 1.8.0_222 (as is my local env) (I couldn't confirm this 
> theory because I'm having trouble getting Java 1.8.0_212 installed next to 
> 1.8.0_222 on my Ubuntu machine) (even after re-winding the commit history 
> back to a known good commit where the libhdfs passed, the tests still fail, 
> so I don't think a code change caused the regressions).
> The failures are a bunch of "FATAL ERROR in native method: Bad global or 
> local ref passed to JNI" errors. After doing some debugging, it looks like 
> {{-Xcheck:jni}} now errors out if any code tries to pass a local ref to 
> {{DeleteLocalRef}} twice (previously it looked like it didn't complain) (we 
> have some checks to avoid this, but it looks like they don't work as 
> expected).
> There are a few places in the libhdfs code where this pattern causes a crash, 
> as well as one place in {{JniBasedUnixGroupsMapping}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14846) libhdfs tests are failing on trunk due to jni usage bugs

2019-09-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931908#comment-16931908
 ] 

Hudson commented on HDFS-14846:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17316 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17316/])
HDFS-14846: libhdfs tests are failing on trunk due to jni usage bugs 
(aengineer: rev 3cf6e4272f192c69a161307ad9d35142c5a845c5)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c
* (edit) 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsMapping.c
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/native_mini_dfs.c


> libhdfs tests are failing on trunk due to jni usage bugs
> 
>
> Key: HDFS-14846
> URL: https://issues.apache.org/jira/browse/HDFS-14846
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> While working on HDFS-14564, I noticed that the libhdfs tests are failing on 
> trunk (both on Hadoop QA and locally). I did some digging and found out that 
> the {{-Xcheck:jni}} flag is causing a bunch of crashes. I haven't been able 
> to pinpoint what caused this regression, but my best guess is that an upgrade 
> in the JDK we use in Hadoop QA started causing these failures. I looked back 
> at some old JIRAs and it looks like the tests work on Java 1.8.0_212, but 
> Hadoop QA is running 1.8.0_222 (as is my local env) (I couldn't confirm this 
> theory because I'm having trouble getting Java 1.8.0_212 installed next to 
> 1.8.0_222 on my Ubuntu machine) (even after re-winding the commit history 
> back to a known good commit where the libhdfs passed, the tests still fail, 
> so I don't think a code change caused the regressions).
> The failures are a bunch of "FATAL ERROR in native method: Bad global or 
> local ref passed to JNI" errors. After doing some debugging, it looks like 
> {{-Xcheck:jni}} now errors out if any code tries to pass a local ref to 
> {{DeleteLocalRef}} twice (previously it looked like it didn't complain) (we 
> have some checks to avoid this, but it looks like they don't work as 
> expected).
> There are a few places in the libhdfs code where this pattern causes a crash, 
> as well as one place in {{JniBasedUnixGroupsMapping}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=314048=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314048
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 17/Sep/19 23:02
Start Date: 17/Sep/19 23:02
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1277: 
HDDS-1054. List Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r325421790
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java
 ##
 @@ -822,6 +822,7 @@ private MultipartInfoInitiateResponse 
initiateMultiPartUpload(
 .setBucketName(keyArgs.getBucketName())
 .setKeyName(keyArgs.getKeyName())
 .setType(keyArgs.getType())
+.setFactor(keyArgs.getFactor())
 
 Review comment:
   Minor: We don't need this, as in line 827 it is already being done.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314048)
Time Spent: 10h 20m  (was: 10h 10m)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10h 20m
>  Remaining Estimate: 0h
>
> This Jira is to implement listing of the in-progress multipart uploads in a 
> bucket in Ozone.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14609) RBF: Security should use common AuthenticationFilter

2019-09-17 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931888#comment-16931888
 ] 

Íñigo Goiri commented on HDFS-14609:


+1 on  [^HDFS-14609.006.patch].
I think [~crh] already gave his official blessing.
[~tasanuma] and [~eyang] seems supportive but let's get an official +1 to make 
sure we are all in the same page.
(I cannot wait for this failing unit test to go away, the flaky part is still 
there but it's not as annoying and HDFS-14461 should take care of it).


> RBF: Security should use common AuthenticationFilter
> 
>
> Key: HDFS-14609
> URL: https://issues.apache.org/jira/browse/HDFS-14609
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: CR Hota
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14609.001.patch, HDFS-14609.002.patch, 
> HDFS-14609.003.patch, HDFS-14609.004.patch, HDFS-14609.005.patch, 
> HDFS-14609.006.patch
>
>
> We worked on router based federation security as part of HDFS-13532. We kept 
> it compatible with the way the namenode works. However, with HADOOP-16314 and 
> HDFS-16354 in trunk, the auth filters seem to have been changed, causing tests 
> to fail.
> Changes are needed appropriately in RBF, mainly fixing broken tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14609) RBF: Security should use common AuthenticationFilter

2019-09-17 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931888#comment-16931888
 ] 

Íñigo Goiri edited comment on HDFS-14609 at 9/17/19 10:54 PM:
--

+1 on  [^HDFS-14609.006.patch].
I think [~crh] already gave his official blessing.
[~tasanuma] and [~eyang] seem supportive but let's get an official +1 to make 
sure we are all in the same page.
(I cannot wait for this failing unit test to go away, the flaky part is still 
there but it's not as annoying and HDFS-14461 should take care of it).



was (Author: elgoiri):
+1 on  [^HDFS-14609.006.patch].
I think [~crh] already gave his official blessing.
[~tasanuma] and [~eyang] seems supportive but let's get an official +1 to make 
sure we are all in the same page.
(I cannot wait for this failing unit test to go away, the flaky part is still 
there but it's not as annoying and HDFS-14461 should take care of it).


> RBF: Security should use common AuthenticationFilter
> 
>
> Key: HDFS-14609
> URL: https://issues.apache.org/jira/browse/HDFS-14609
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: CR Hota
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14609.001.patch, HDFS-14609.002.patch, 
> HDFS-14609.003.patch, HDFS-14609.004.patch, HDFS-14609.005.patch, 
> HDFS-14609.006.patch
>
>
> We worked on router based federation security as part of HDFS-13532. We kept 
> it compatible with the way the namenode works. However, with HADOOP-16314 and 
> HDFS-16354 in trunk, the auth filters seem to have been changed, causing tests 
> to fail.
> Changes are needed appropriately in RBF, mainly fixing broken tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14795) Add Throttler for writing block

2019-09-17 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931887#comment-16931887
 ] 

Íñigo Goiri commented on HDFS-14795:


Thanks [~leosun08] for the work.
Committed to trunk.

> Add Throttler for writing block
> ---
>
> Key: HDFS-14795
> URL: https://issues.apache.org/jira/browse/HDFS-14795
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14795.001.patch, HDFS-14795.002.patch, 
> HDFS-14795.003.patch, HDFS-14795.004.patch, HDFS-14795.005.patch, 
> HDFS-14795.006.patch, HDFS-14795.007.patch, HDFS-14795.008.patch, 
> HDFS-14795.009.patch, HDFS-14795.010.patch, HDFS-14795.011.patch, 
> HDFS-14795.012.patch
>
>
> DataXceiver#writeBlock
> {code:java}
> blockReceiver.receiveBlock(mirrorOut, mirrorIn, replyOut,
> mirrorAddr, null, targets, false);
> {code}
> As the code above shows, DataXceiver#writeBlock does not apply a throttler.
> I think it is necessary to throttle block writes by adding a throttler in the 
> PIPELINE_SETUP_APPEND_RECOVERY or PIPELINE_SETUP_STREAMING_RECOVERY stages.
> The default throttler value is still null.
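
To make the proposal concrete, here is a minimal, self-contained sketch of the idea, not the committed patch: a long-run average rate cap applied in the block-receive loop, with a null throttler preserving today's behavior. Class and method names are illustrative only.

{code:java}
// Sketch only: throttle the rate at which block bytes are written.
// A null throttler keeps the current (unthrottled) behavior.
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class SimpleWriteThrottler {
  private final long bytesPerSecond;
  private final long startMs = System.currentTimeMillis();
  private long totalBytes;

  public SimpleWriteThrottler(long bytesPerSecond) {
    this.bytesPerSecond = bytesPerSecond;
  }

  /** Blocks the caller long enough to keep the average rate under the cap. */
  public synchronized void throttle(long numBytes) throws IOException {
    totalBytes += numBytes;
    long elapsedMs = System.currentTimeMillis() - startMs;
    long expectedMs = totalBytes * 1000L / bytesPerSecond;
    long waitMs = expectedMs - elapsedMs;
    if (waitMs > 0) {
      try {
        Thread.sleep(waitMs);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new IOException("Interrupted while throttling block write", e);
      }
    }
  }

  /** Copies one "block", throttling after every buffer if a throttler is set. */
  public static void copyBlock(InputStream in, OutputStream out,
      SimpleWriteThrottler throttler) throws IOException {
    byte[] buf = new byte[64 * 1024];
    int n;
    while ((n = in.read(buf)) > 0) {
      out.write(buf, 0, n);
      if (throttler != null) {
        throttler.throttle(n);
      }
    }
  }
}
{code}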



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14795) Add Throttler for writing block

2019-09-17 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14795:
---
Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Add Throttler for writing block
> ---
>
> Key: HDFS-14795
> URL: https://issues.apache.org/jira/browse/HDFS-14795
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14795.001.patch, HDFS-14795.002.patch, 
> HDFS-14795.003.patch, HDFS-14795.004.patch, HDFS-14795.005.patch, 
> HDFS-14795.006.patch, HDFS-14795.007.patch, HDFS-14795.008.patch, 
> HDFS-14795.009.patch, HDFS-14795.010.patch, HDFS-14795.011.patch, 
> HDFS-14795.012.patch
>
>
> DataXceiver#writeBlock
> {code:java}
> blockReceiver.receiveBlock(mirrorOut, mirrorIn, replyOut,
> mirrorAddr, null, targets, false);
> {code}
> As the code above shows, DataXceiver#writeBlock does not apply a throttler.
> I think it is necessary to throttle block writes by adding a throttler in the 
> PIPELINE_SETUP_APPEND_RECOVERY or PIPELINE_SETUP_STREAMING_RECOVERY stages.
> The default throttler value is still null.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2143) Rename classes under package org.apache.hadoop.utils

2019-09-17 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2143:
-
Affects Version/s: 0.5.0
   Status: Patch Available  (was: Open)

> Rename classes under package org.apache.hadoop.utils
> 
>
> Key: HDDS-2143
> URL: https://issues.apache.org/jira/browse/HDDS-2143
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Rename classes under package org.apache.hadoop.utils -> 
> org.apache.hadoop.hdds.utils in hadoop-hdds-common
>  
> Now, with the current package name, we might collide with Hadoop classes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2143) Rename classes under package org.apache.hadoop.utils

2019-09-17 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2143:
-
Target Version/s: 0.5.0

> Rename classes under package org.apache.hadoop.utils
> 
>
> Key: HDDS-2143
> URL: https://issues.apache.org/jira/browse/HDDS-2143
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Rename classes under package org.apache.hadoop.utils -> 
> org.apache.hadoop.hdds.utils in hadoop-hdds-common
>  
> Now, with the current package name, we might collide with Hadoop classes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2143) Rename classes under package org.apache.hadoop.utils

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2143:
-
Labels: pull-request-available  (was: )

> Rename classes under package org.apache.hadoop.utils
> 
>
> Key: HDDS-2143
> URL: https://issues.apache.org/jira/browse/HDDS-2143
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Rename classes under package org.apache.hadoop.utils -> 
> org.apache.hadoop.hdds.utils in hadoop-hdds-common
>  
> Now, with the current package name, we might collide with Hadoop classes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2143) Rename classes under package org.apache.hadoop.utils

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2143?focusedWorklogId=314044=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314044
 ]

ASF GitHub Bot logged work on HDDS-2143:


Author: ASF GitHub Bot
Created on: 17/Sep/19 22:48
Start Date: 17/Sep/19 22:48
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1465: 
HDDS-2143. Rename classes under package org.apache.hadoop.utils.
URL: https://github.com/apache/hadoop/pull/1465
 
 
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314044)
Remaining Estimate: 0h
Time Spent: 10m

> Rename classes under package org.apache.hadoop.utils
> 
>
> Key: HDDS-2143
> URL: https://issues.apache.org/jira/browse/HDDS-2143
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Rename classes under package org.apache.hadoop.utils -> 
> org.apache.hadoop.hdds.utils in hadoop-hdds-common
>  
> Now, with the current package name, we might collide with Hadoop classes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14795) Add Throttler for writing block

2019-09-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931886#comment-16931886
 ] 

Hudson commented on HDFS-14795:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17315 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17315/])
HDFS-14795. Add Throttler for writing block. Contributed by Lisheng Sun. 
(inigoiri: rev f580a87079bb47bf92d254677745d067b6bc8fde)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestTransferRbw.java


> Add Throttler for writing block
> ---
>
> Key: HDFS-14795
> URL: https://issues.apache.org/jira/browse/HDFS-14795
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14795.001.patch, HDFS-14795.002.patch, 
> HDFS-14795.003.patch, HDFS-14795.004.patch, HDFS-14795.005.patch, 
> HDFS-14795.006.patch, HDFS-14795.007.patch, HDFS-14795.008.patch, 
> HDFS-14795.009.patch, HDFS-14795.010.patch, HDFS-14795.011.patch, 
> HDFS-14795.012.patch
>
>
> DataXceiver#writeBlock
> {code:java}
> blockReceiver.receiveBlock(mirrorOut, mirrorIn, replyOut,
> mirrorAddr, null, targets, false);
> {code}
> As the code above shows, DataXceiver#writeBlock does not apply a throttler.
> I think it is necessary to throttle block writes by adding a throttler in the 
> PIPELINE_SETUP_APPEND_RECOVERY or PIPELINE_SETUP_STREAMING_RECOVERY stages.
> The default throttler value is still null.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2143) Rename classes under package org.apache.hadoop.utils

2019-09-17 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-2143:


Assignee: Bharat Viswanadham

> Rename classes under package org.apache.hadoop.utils
> 
>
> Key: HDDS-2143
> URL: https://issues.apache.org/jira/browse/HDDS-2143
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Rename classes under package org.apache.hadoop.utils -> 
> org.apache.hadoop.hdds.utils in hadoop-hdds-common
>  
> Now, with the current package name, we might collide with Hadoop classes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2143) Rename classes under package org.apache.hadoop.utils

2019-09-17 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2143:
-
Description: 
Rename classes under package org.apache.hadoop.utils -> 
org.apache.hadoop.hdds.utils in hadoop-hdds-common

 

Now, with the current package name, we might collide with Hadoop classes.

> Rename classes under package org.apache.hadoop.utils
> 
>
> Key: HDDS-2143
> URL: https://issues.apache.org/jira/browse/HDDS-2143
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
> Environment: Rename classes under package org.apache.hadoop.utils -> 
> org.apache.hadoop.hdds.utils in hadoop-hdds-common
>  
> Now, with the current package name, we might collide with Hadoop classes.
>Reporter: Bharat Viswanadham
>Priority: Major
>
> Rename classes under package org.apache.hadoop.utils -> 
> org.apache.hadoop.hdds.utils in hadoop-hdds-common
>  
> Now, with the current package name, we might collide with Hadoop classes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2143) Rename classes under package org.apache.hadoop.utils

2019-09-17 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2143:
-
Environment: (was: Rename classes under package org.apache.hadoop.utils 
-> org.apache.hadoop.hdds.utils in hadoop-hdds-common

 

Now, with the current package name, we might collide with Hadoop classes.)

> Rename classes under package org.apache.hadoop.utils
> 
>
> Key: HDDS-2143
> URL: https://issues.apache.org/jira/browse/HDDS-2143
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
>
> Rename classes under package org.apache.hadoop.utils -> 
> org.apache.hadoop.hdds.utils in hadoop-hdds-common
>  
> Now, with the current package name, we might collide with Hadoop classes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2143) Rename classes under package org.apache.hadoop.utils

2019-09-17 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2143:


 Summary: Rename classes under package org.apache.hadoop.utils
 Key: HDDS-2143
 URL: https://issues.apache.org/jira/browse/HDDS-2143
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
 Environment: Rename classes under package org.apache.hadoop.utils -> 
org.apache.hadoop.hdds.utils in hadoop-hdds-common

 

Now, with the current package name, we might collide with Hadoop classes.
Reporter: Bharat Viswanadham






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2121) Create a shaded ozone filesystem (client) jar

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2121?focusedWorklogId=314038=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314038
 ]

ASF GitHub Bot logged work on HDDS-2121:


Author: ASF GitHub Bot
Created on: 17/Sep/19 22:25
Start Date: 17/Sep/19 22:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1452: HDDS-2121. 
Create a shaded ozone filesystem (client) jar
URL: https://github.com/apache/hadoop/pull/1452#issuecomment-532426055
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 51 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 34 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-ozone in trunk failed. |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1150 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 35 | hadoop-ozone in the patch failed. |
   | -1 | compile | 24 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-ozone in the patch failed. |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 799 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 250 | hadoop-hdds in the patch failed. |
   | -1 | unit | 29 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
   | | | 3069 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1452 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 48f13474670c 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / eefe9bc |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/4/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/4/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/4/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/4/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/4/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/4/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/4/testReport/ |
   | Max. process+thread count | 553 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozonefs-lib-current U: 
hadoop-ozone/ozonefs-lib-current |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/4/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314038)
Time Spent: 1h 10m  (was: 1h)

> Create a shaded ozone filesystem (client) jar
> 

[jira] [Commented] (HDDS-1933) Datanode should use hostname in place of ip addresses to allow DN's to work when ipaddress change

2019-09-17 Thread Siddharth Wagle (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931834#comment-16931834
 ] 

Siddharth Wagle commented on HDDS-1933:
---

We can get away with still preserving the ipAddress in the yaml and also in the 
DatanodeDetails object; I think the only change needed to fix this is here:

https://github.com/apache/hadoop/blob/eefe9bc85ccdabc2b7303969934dbce98f2b31b5/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java#L263

Currently, we rely on the flag _dfs.datanode.use.datanode.hostname_ to decide 
whether to use the hostname or the ipAddress when mapping to DatanodeDetails.UUID. 
Since the UUID is already the system identifier, no downstream change should be 
necessary. [~msingh]/[~elek] thoughts?
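
A rough sketch of what that single decision point could look like; the class and method names below are made up for illustration, only the configuration key is the real HDFS one.

{code:java}
// Illustration only: choose the name a datanode is tracked by, based on the
// existing dfs.datanode.use.datanode.hostname flag, while DatanodeDetails can
// keep carrying both the ipAddress and the hostName.
import org.apache.hadoop.conf.Configuration;

public class DnAddressChoice {
  /** Real HDFS key; default is false (track datanodes by IP address). */
  static final String USE_HOSTNAME_KEY = "dfs.datanode.use.datanode.hostname";

  private final boolean useHostname;

  public DnAddressChoice(Configuration conf) {
    this.useHostname = conf.getBoolean(USE_HOSTNAME_KEY, false);
  }

  /**
   * Returns the identifier used when mapping a registration/heartbeat to the
   * datanode UUID: the hostname when the flag is set (stable across IP
   * changes, e.g. pod restarts), otherwise the IP address as today.
   */
  public String networkName(String ipAddress, String hostName) {
    return useHostname ? hostName : ipAddress;
  }
}
{code}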

> Datanode should use hostname in place of ip addresses to allow DN's to work 
> when ipaddress change
> -
>
> Key: HDDS-1933
> URL: https://issues.apache.org/jira/browse/HDDS-1933
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Priority: Blocker
>
> This was noticed by [~elek] while deploying Ozone on Kubernetes based 
> environment.
> When the datanode ip address change on restart, the Datanode details cease to 
> be correct for the datanode. and this prevents the cluster from functioning 
> after a restart.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14795) Add Throttler for writing block

2019-09-17 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931833#comment-16931833
 ] 

Íñigo Goiri commented on HDFS-14795:


+1 on [^HDFS-14795.012.patch].

> Add Throttler for writing block
> ---
>
> Key: HDFS-14795
> URL: https://issues.apache.org/jira/browse/HDFS-14795
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14795.001.patch, HDFS-14795.002.patch, 
> HDFS-14795.003.patch, HDFS-14795.004.patch, HDFS-14795.005.patch, 
> HDFS-14795.006.patch, HDFS-14795.007.patch, HDFS-14795.008.patch, 
> HDFS-14795.009.patch, HDFS-14795.010.patch, HDFS-14795.011.patch, 
> HDFS-14795.012.patch
>
>
> DataXceiver#writeBlock
> {code:java}
> blockReceiver.receiveBlock(mirrorOut, mirrorIn, replyOut,
> mirrorAddr, null, targets, false);
> {code}
> As the code above shows, DataXceiver#writeBlock does not apply a throttler.
> I think it is necessary to throttle block writes by adding a throttler in the 
> PIPELINE_SETUP_APPEND_RECOVERY or PIPELINE_SETUP_STREAMING_RECOVERY stages.
> The default throttler value is still null.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6524) Choosing datanode retries times considering with block replica number

2019-09-17 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-6524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931831#comment-16931831
 ] 

Íñigo Goiri commented on HDFS-6524:
---

Can we have some more coverage for this?
We should cover the old behavior and the new one.

> Choosing datanode  retries times considering with block replica number
> --
>
> Key: HDFS-6524
> URL: https://issues.apache.org/jira/browse/HDFS-6524
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha1
>Reporter: Liang Xie
>Assignee: Lisheng Sun
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6524.001.patch, HDFS-6524.002.patch, 
> HDFS-6524.003.patch, HDFS-6524.004.patch, HDFS-6524.005(2).patch, 
> HDFS-6524.005.patch, HDFS-6524.txt
>
>
> Currently chooseDataNode() retries based on the setting 
> dfsClientConf.maxBlockAcquireFailures, which is 3 by default 
> (DFS_CLIENT_MAX_BLOCK_ACQUIRE_FAILURES_DEFAULT = 3). It would be better to have 
> another option that also considers the block replication factor, e.g. a cluster 
> configured with only two block replicas, or a Reed-Solomon encoding solution 
> with a single replica. It helps to reduce long-tail latency.
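
A minimal sketch of the idea, with names and the exact policy as assumptions rather than the attached patch: bound the retry count by the block's replica count so low-replication or EC data fails over faster.

{code:java}
// Illustration only: pick how many datanode-acquire retries to allow for a
// block, taking its replica count into account instead of always using the
// fixed dfs.client.max.block.acquire.failures (default 3).
public final class BlockRetryPolicySketch {
  private BlockRetryPolicySketch() {
  }

  /**
   * @param maxBlockAcquireFailures the configured client-wide limit (default 3)
   * @param blockReplication        number of replicas the block actually has
   * @return retries to allow before giving up on the block
   */
  public static int maxRetries(int maxBlockAcquireFailures,
      int blockReplication) {
    // Never retry more often than there are replicas to try, but keep at
    // least one retry so a transient failure can still recover.
    return Math.max(1, Math.min(maxBlockAcquireFailures, blockReplication));
  }

  public static void main(String[] args) {
    System.out.println(maxRetries(3, 3)); // 3: a 3-replica file keeps today's behavior
    System.out.println(maxRetries(3, 2)); // 2: a 2-replica file gives up sooner
    System.out.println(maxRetries(3, 1)); // 1: a single-copy block fails over fastest
  }
}
{code}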



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14833) RBF: Router Update Doesn't Sync Quota

2019-09-17 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931828#comment-16931828
 ] 

Íñigo Goiri commented on HDFS-14833:


* In RouterAdminServer#updateMountTableEntry(), I would move {{MountTable 
mountTable = request.getEntry();}} to the very beginning and use that value in 
the {{getMountPoint()}}. Probably call it {{updateEntry}} like in the other 
method.
* I like the {{oldEntry}} naming. Easier to read now.
* Extract RouterAdminServer#348 a little. Too many chain getters make it hard 
to read.
* If we are going to merge ifs, at some point we should have a helper function for 
{{nsQuota != HdfsConstants.QUOTA_DONT_SET || ssQuota != 
HdfsConstants.QUOTA_DONT_SET}} (see the sketch right after this list). Otherwise 
we could separate the {{if (router.isQuotaEnabled())}}.
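
A possible shape for that helper, as a sketch only; where it lives and what it is called is up to the patch author.

{code:java}
// Sketch: name the condition once so the merged ifs in RouterAdminServer read
// as "is a quota actually being set by this update?".
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

public final class QuotaUpdateCheck {
  private QuotaUpdateCheck() {
  }

  /** True if either the namespace or the storage-space quota is being set. */
  public static boolean isQuotaSet(long nsQuota, long ssQuota) {
    return nsQuota != HdfsConstants.QUOTA_DONT_SET
        || ssQuota != HdfsConstants.QUOTA_DONT_SET;
  }
}
{code}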

> RBF: Router Update Doesn't Sync Quota
> -
>
> Key: HDFS-14833
> URL: https://issues.apache.org/jira/browse/HDFS-14833
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14833-01.patch, HDFS-14833-02.patch, 
> HDFS-14833-03.patch, HDFS-14833-04.patch
>
>
> HDFS-14777 added a check to avoid the RPC call; it checks whether the quota is 
> changing in the present state. 
> But it ignores the case where the locations are changed: if a location is 
> changed, the new destination should be synchronized with the mount entry 
> quota. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2121) Create a shaded ozone filesystem (client) jar

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2121?focusedWorklogId=313935=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313935
 ]

ASF GitHub Bot logged work on HDDS-2121:


Author: ASF GitHub Bot
Created on: 17/Sep/19 21:46
Start Date: 17/Sep/19 21:46
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1452: 
HDDS-2121. Create a shaded ozone filesystem (client) jar
URL: https://github.com/apache/hadoop/pull/1452#discussion_r325400148
 
 

 ##
 File path: hadoop-ozone/ozonefs-lib-current/pom.xml
 ##
 @@ -83,6 +63,78 @@
   true
 
   
+  
+org.apache.maven.plugins
+maven-shade-plugin
+
+  
+package
+
+  shade
+
+
+  
+
+  classworlds:classworlds
 
 Review comment:
   Do we need this?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313935)
Time Spent: 1h  (was: 50m)

> Create a shaded ozone filesystem (client) jar
> -
>
> Key: HDDS-2121
> URL: https://issues.apache.org/jira/browse/HDDS-2121
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Arpit Agarwal
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We need a shaded Ozonefs jar that does not include Hadoop ecosystem 
> components (Hadoop, HDFS, Ratis, Zookeeper).
> A common expected use case for Ozone is Hadoop clients (3.2.0 and later) 
> wanting to access Ozone via the Ozone Filesystem interface. For these 
> clients, we want to add Ozone file system jar to the classpath, however we 
> want to use Hadoop ecosystem dependencies that are `provided` and already 
> expected to be in the client classpath.
> Note that this is different from the legacy jar which bundles a shaded Hadoop 
> 3.2.0.
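
For context, a hedged sketch of the client-side use case the shaded jar targets: a plain Hadoop 3.2+ application that only adds the ozonefs jar to its classpath and goes through the standard FileSystem API. The o3fs URI layout and the fs.o3fs.impl class name below are assumptions, not taken from this JIRA; check the Ozone docs for the exact values.

{code:java}
// Sketch of a Hadoop client reading through the Ozone filesystem interface.
// Assumes the shaded ozonefs jar is on the classpath; the scheme, impl class
// and URI layout below are assumed.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OzoneFsClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Only Ozone classes should come from the shaded jar; Hadoop/HDFS classes
    // stay 'provided' by the client environment.
    conf.set("fs.o3fs.impl", "org.apache.hadoop.fs.ozone.OzoneFileSystem");

    // Assumed layout: o3fs://<bucket>.<volume>.<om-host>:<om-port>/<key>
    URI uri = URI.create("o3fs://bucket1.vol1.om-host:9862/");
    try (FileSystem fs = FileSystem.get(uri, conf)) {
      for (FileStatus status : fs.listStatus(new Path("/"))) {
        System.out.println(status.getPath() + "\t" + status.getLen());
      }
    }
  }
}
{code}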



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14844) Make buffer of BlockReaderRemote#newBlockReader#BufferedOutputStream configurable

2019-09-17 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931827#comment-16931827
 ] 

Íñigo Goiri commented on HDFS-14844:


The failed unit tests seem unrelated and I think this works.
The only issue is the lack of test coverage.
Not sure what we can add easily to make sure the parameter kicks in.

A minor comment: we should fix the indentation (too many spaces) in 
BlockReaderRemote#401-402.
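
Since the patch is essentially plumbing a size through, a sketch of the idea; the configuration key and default below are placeholders, not the names introduced by the patch.

{code:java}
// Illustration only: size the BufferedOutputStream used for the block-read
// request from configuration instead of a hard-coded constant. The key name
// and default here are assumed, not the ones defined by HDFS-14844.
import java.io.BufferedOutputStream;
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;

public class ConfigurableBufferSketch {
  static final String BUFFER_SIZE_KEY =
      "dfs.client.block.reader.remote.buffer.size";   // placeholder name
  static final int BUFFER_SIZE_DEFAULT = 512;          // placeholder default

  /** Wraps the raw stream, or returns it unwrapped when the size is <= 0. */
  public static OutputStream wrap(OutputStream raw, Configuration conf) {
    int size = conf.getInt(BUFFER_SIZE_KEY, BUFFER_SIZE_DEFAULT);
    return size > 0 ? new BufferedOutputStream(raw, size) : raw;
  }
}
{code}

A unit test could assert that the returned stream is (or is not) buffered for a couple of configured values, which is roughly the coverage gap mentioned above.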

> Make buffer of BlockReaderRemote#newBlockReader#BufferedOutputStream  
> configurable
> --
>
> Key: HDFS-14844
> URL: https://issues.apache.org/jira/browse/HDFS-14844
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14844.001.patch, HDFS-14844.002.patch, 
> HDFS-14844.003.patch, HDFS-14844.004.patch
>
>
> details for HDFS-14820



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2081) Fix TestRatisPipelineProvider#testCreatePipelinesDnExclude

2019-09-17 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-2081:
-

Assignee: Aravindan Vijayan

> Fix TestRatisPipelineProvider#testCreatePipelinesDnExclude
> --
>
> Key: HDDS-2081
> URL: https://issues.apache.org/jira/browse/HDDS-2081
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Aravindan Vijayan
>Priority: Major
>
> {code:java}
> ---
> Test set: org.apache.hadoop.hdds.scm.pipeline.TestRatisPipelineProvider
> ---
> Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.374 s <<< 
> FAILURE! - in org.apache.hadoop.hdds.scm.pipeline.TestRatisPipelineProvider
> testCreatePipelinesDnExclude(org.apache.hadoop.hdds.scm.pipeline.TestRatisPipelineProvider)
>   Time elapsed: 0.044 s  <<< ERROR!
> org.apache.hadoop.hdds.scm.pipeline.InsufficientDatanodesException: Cannot 
> create pipeline of factor 3 using 2 nodes.
>   at 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.create(RatisPipelineProvider.java:151)
>   at 
> org.apache.hadoop.hdds.scm.pipeline.TestRatisPipelineProvider.testCreatePipelinesDnExclude(TestRatisPipelineProvider.java:182)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-730?focusedWorklogId=313906=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313906
 ]

ASF GitHub Bot logged work on HDDS-730:
---

Author: ASF GitHub Bot
Created on: 17/Sep/19 21:00
Start Date: 17/Sep/19 21:00
Worklog Time Spent: 10m 
  Work Description: cxorm commented on pull request #1464: HDDS-730. Ozone 
fs cli prints hadoop fs in usage.
URL: https://github.com/apache/hadoop/pull/1464
 
 
   Create OzoneFsShell that extends hadoop FsShell
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313906)
Time Spent: 40m  (was: 0.5h)

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: fscmd.png, fswith_nonexsitcmd.png, 
> image-2018-10-24-17-15-39-097.png
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The ozone fs CLI help/usage page contains "Usage: hadoop fs [ generic options ]". 
> I believe the usage should be updated.
> Check line 3 of the screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread YiSheng Lien (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931814#comment-16931814
 ] 

YiSheng Lien commented on HDDS-730:
---

Thanks [~elek] for the idea :)

I created an OzoneFsShell that extends FsShell, and it works.

The attachments are a demo from my computer.
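
A minimal sketch of that approach; whether {{getUsagePrefix()}} is the right hook in FsShell is an assumption here and should be checked against the pull request.

{code:java}
// Sketch: subclass FsShell so the usage text says "ozone fs" instead of
// "hadoop fs". Overriding getUsagePrefix() is assumed to be the hook; verify
// against the actual FsShell/OzoneFsShell code in the PR.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FsShell;
import org.apache.hadoop.util.ToolRunner;

public class OzoneFsShellSketch extends FsShell {

  public OzoneFsShellSketch(Configuration conf) {
    super(conf);
  }

  @Override
  protected String getUsagePrefix() {
    return "Usage: ozone fs [generic options]";
  }

  public static void main(String[] args) throws Exception {
    // Same launch pattern as "hadoop fs", just with the Ozone-flavoured shell.
    int res = ToolRunner.run(new OzoneFsShellSketch(new Configuration()), args);
    System.exit(res);
  }
}
{code}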

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: fscmd.png, fswith_nonexsitcmd.png, 
> image-2018-10-24-17-15-39-097.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The ozone fs CLI help/usage page contains "Usage: hadoop fs [ generic options ]". 
> I believe the usage should be updated.
> Check line 3 of the screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien updated HDDS-730:
--
Attachment: fswith_nonexsitcmd.png
fscmd.png

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: fscmd.png, fswith_nonexsitcmd.png, 
> image-2018-10-24-17-15-39-097.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The ozone fs CLI help/usage page contains "Usage: hadoop fs [ generic options ]". 
> I believe the usage should be updated.
> Check line 3 of the screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien updated HDDS-730:
--
Comment: was deleted

(was: Attachments are demo !Screenshot from 2019-09-17 17-48-30.png! on my 
machine.)

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: image-2018-10-24-17-15-39-097.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The ozone fs CLI help/usage page contains "Usage: hadoop fs [ generic options ]". 
> I believe the usage should be updated.
> Check line 3 of the screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien updated HDDS-730:
--
Comment: was deleted

(was: Attachments are demo on my computer.

!ozone-cli-fs.png!)

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: image-2018-10-24-17-15-39-097.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The ozone fs CLI help/usage page contains "Usage: hadoop fs [ generic options ]". 
> I believe the usage should be updated.
> Check line 3 of the screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien updated HDDS-730:
--
Attachment: (was: ozone-cli-fs.png)

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: image-2018-10-24-17-15-39-097.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The ozone fs CLI help/usage page contains "Usage: hadoop fs [ generic options ]". 
> I believe the usage should be updated.
> Check line 3 of the screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien updated HDDS-730:
--
Attachment: (was: ozone-cli-fs-withnonexist.png)

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: image-2018-10-24-17-15-39-097.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The ozone fs CLI help/usage page contains "Usage: hadoop fs [ generic options ]". 
> I believe the usage should be updated.
> Check line 3 of the screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14675) Increase Balancer Defaults Further

2019-09-17 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14675:
---
Release Note: 
Increase default bandwidth limit for rebalancing per DataNode 
(dfs.datanode.balance.bandwidthPerSec) from 10MB/s to 100MB/s.

Increase default maximum threads of DataNode balancer 
(dfs.datanode.balance.max.concurrent.moves) from 50 to 100.

> Increase Balancer Defaults Further
> --
>
> Key: HDFS-14675
> URL: https://issues.apache.org/jira/browse/HDFS-14675
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14675.001.patch
>
>
> HDFS-10297 increased the balancer defaults to 50 for 
> dfs.datanode.balance.max.concurrent.moves and to 10MB/s for 
> dfs.datanode.balance.bandwidthPerSec.
> We have found that these settings often have to be increased further as users 
> find the balancer operates too slowly with 50 and 10MB/s. We often recommend 
> moving concurrent moves to between 200 and 300 and setting the bandwidth to 
> 100 or even 1000MB/s, and these settings seem to work well in practice.
> I would like to suggest we increase the balancer defaults further. I would 
> suggest 100 for concurrent moves and 100MB/s for the bandwidth, but I would 
> like to know what others think on this topic too.
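
For anyone wanting these values ahead of the new defaults, a hedged example. These settings normally go into hdfs-site.xml on the DataNodes; the snippet just shows the keys and values programmatically (property names as in the release note above, bandwidth in bytes per second).

{code:java}
// Illustration: set the proposed defaults explicitly (100 MB/s and 100 moves).
import org.apache.hadoop.conf.Configuration;

public class BalancerTuningExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setLong("dfs.datanode.balance.bandwidthPerSec", 100L * 1024 * 1024);
    conf.setInt("dfs.datanode.balance.max.concurrent.moves", 100);

    System.out.println("bandwidthPerSec      = "
        + conf.getLong("dfs.datanode.balance.bandwidthPerSec", 0));
    System.out.println("max.concurrent.moves = "
        + conf.getInt("dfs.datanode.balance.max.concurrent.moves", 0));
  }
}
{code}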



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14846) libhdfs tests are failing on trunk due to jni usage bugs

2019-09-17 Thread Sahil Takiar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HDFS-14846:

Description: 
While working on HDFS-14564, I noticed that the libhdfs tests are failing on 
trunk (both on Hadoop QA and locally). I did some digging and found out that 
the {{-Xcheck:jni}} flag is causing a bunch of crashes. I haven't been able to 
pinpoint what caused this regression, but my best guess is that an upgrade in 
the JDK we use in Hadoop QA started causing these failures. I looked back at 
some old JIRAs and it looks like the tests work on Java 1.8.0_212, but Hadoop 
QA is running 1.8.0_222 (as is my local env) (I couldn't confirm this theory 
because I'm having trouble getting Java 1.8.0_212 installed next to 1.8.0_222 
on my Ubuntu machine) (even after re-winding the commit history back to a known 
good commit where the libhdfs passed, the tests still fail, so I don't think a 
code change caused the regressions).

The failures are a bunch of "FATAL ERROR in native method: Bad global or local 
ref passed to JNI" errors. After doing some debugging, it looks like 
{{-Xcheck:jni}} now errors out if any code tries to pass a local ref to 
{{DeleteLocalRef}} twice (previously it looked like it didn't complain) (we 
have some checks to avoid this, but it looks like they don't work as expected).

There are a few places in the libhdfs code where this pattern causes a crash, 
as well as one place in {{JniBasedUnixGroupsMapping}}.

  was:
While working on HDFS-14564, I noticed that the libhdfs tests are failing on 
trunk (both on hadoop-yetus and locally). I dig some digging and found out that 
the {{-Xcheck:jni}} flag is causing a bunch of crashes. I haven't been able to 
pinpoint what caused this regression, but my best guess is that an upgrade in 
the JDK we use in hadoop-yetus started causing these failures. I looked back at 
some old JIRAs and it looks like the tests work on Java 1.8.0_212, but yetus is 
running 1.8.0_222 (as is my local env) (I couldn't confirm this theory because 
I'm having trouble getting install 1.8.0_212 next to 1.8.0_222 on my Ubuntu 
machine) (even after re-winding the commit history back to a known good commit 
where the libhdfs passed, the tests still fail, so I don't think a code change 
caused the regressions).

The failures are a bunch of "FATAL ERROR in native method: Bad global or local 
ref passed to JNI" errors. After doing some debugging, it looks like 
{{-Xcheck:jni}} now errors out if any code tries to pass a local ref to 
{{DeleteLocalRef}} twice (previously it looked like it didn't complain) (we 
have some checks to avoid this, but it looks like they don't work as expected).

There are a few places in the libhdfs code where this pattern causes a crash, 
as well as one place in {{JniBasedUnixGroupsMapping}}.


> libhdfs tests are failing on trunk due to jni usage bugs
> 
>
> Key: HDFS-14846
> URL: https://issues.apache.org/jira/browse/HDFS-14846
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> While working on HDFS-14564, I noticed that the libhdfs tests are failing on 
> trunk (both on Hadoop QA and locally). I did some digging and found out that 
> the {{-Xcheck:jni}} flag is causing a bunch of crashes. I haven't been able 
> to pinpoint what caused this regression, but my best guess is that an upgrade 
> in the JDK we use in Hadoop QA started causing these failures. I looked back 
> at some old JIRAs and it looks like the tests work on Java 1.8.0_212, but 
> Hadoop QA is running 1.8.0_222 (as is my local env) (I couldn't confirm this 
> theory because I'm having trouble getting Java 1.8.0_212 installed next to 
> 1.8.0_222 on my Ubuntu machine) (even after re-winding the commit history 
> back to a known good commit where the libhdfs passed, the tests still fail, 
> so I don't think a code change caused the regressions).
> The failures are a bunch of "FATAL ERROR in native method: Bad global or 
> local ref passed to JNI" errors. After doing some debugging, it looks like 
> {{-Xcheck:jni}} now errors out if any code tries to pass a local ref to 
> {{DeleteLocalRef}} twice (previously it looked like it didn't complain) (we 
> have some checks to avoid this, but it looks like they don't work as 
> expected).
> There are a few places in the libhdfs code where this pattern causes a crash, 
> as well as one place in {{JniBasedUnixGroupsMapping}}.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14655) [SBN Read] Namenode crashes if one of The JN is down

2019-09-17 Thread Konstantin Shvachko (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931797#comment-16931797
 ] 

Konstantin Shvachko commented on HDFS-14655:


Hey guys, would it be possible to add a test case? We clearly didn't capture it in 
testing.

> [SBN Read] Namenode crashes if one of The JN is down
> 
>
> Key: HDFS-14655
> URL: https://issues.apache.org/jira/browse/HDFS-14655
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Harshakiran Reddy
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HDFS-14655-01.patch, HDFS-14655-02.patch, 
> HDFS-14655-03.patch, HDFS-14655-04.patch, HDFS-14655.poc.patch
>
>
> {noformat}
> 2019-07-04 17:35:54,064 | INFO  | Logger channel (from parallel executor) to 
> XXX/XXX | Retrying connect to server: XXX/XXX. Already tried 
> 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
> sleepTime=1000 MILLISECONDS) | Client.java:975
> 2019-07-04 17:35:54,087 | FATAL | Edit log tailer | Unknown error encountered 
> while tailing edits. Shutting down standby NN. | EditLogTailer.java:474
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:717)
>   at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
>   at 
> com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:440)
>   at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.getJournaledEdits(IPCLoggerChannel.java:565)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournaledEdits(AsyncLoggerSet.java:272)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectRpcInputStreams(QuorumJournalManager.java:533)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:508)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:275)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1681)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1714)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:307)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:483)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> 2019-07-04 17:35:54,112 | INFO  | Edit log tailer | Exiting with status 1: 
> java.lang.OutOfMemoryError: unable to create new native thread | 
> ExitUtil.java:210
> {noformat}
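
As a rough illustration only (this is not the QJM code), the failure mode above is the classic one of an unbounded thread pool fed by calls that never complete while a JournalNode is unreachable: every queued call pins a thread, so thread creation itself eventually fails.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class UnboundedExecutorOomSketch {
  public static void main(String[] args) {
    // A cached pool creates a new thread for every task that cannot reuse an
    // idle one; if every task blocks, the thread count grows without bound
    // until "unable to create new native thread" is thrown.
    ExecutorService pool = Executors.newCachedThreadPool();
    while (true) {
      pool.submit(() -> {
        try {
          Thread.sleep(Long.MAX_VALUE); // stands in for an RPC that never returns
        } catch (InterruptedException ignored) {
          Thread.currentThread().interrupt();
        }
      });
    }
  }
}
{code}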



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-17 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931795#comment-16931795
 ] 

Eric Yang commented on HDFS-14845:
--

[~Prabhu Joseph] Thank you for patch 002.

{quote}But most of the testcases related to HttpFSServerWebServer (eg: 
TestHttpFSServer) requires more changes as they did not use HttpServer2 and so 
the filter initializers are not called, instead it uses a Test Jetty Server 
with HttpFSServerWebApp which are failing as the filter won't have any configs.

Please let me know if we can handle this in a separate improvement Jira.{quote}

All HttpFS unit tests are passing on my system.  Which test requires a separate 
ticket?

{quote}Have changed the HttpFSAuthenticationFilter$getConfiguration to honor 
the hadoop.http.authentication configs which will be overridden by 
httpfs.authentication configs.{quote}

Patch 2 works with the following configuration:

{code}
<property>
  <name>hadoop.http.authentication.type</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.http.authentication.kerberos.principal</name>
  <value>HTTP/host-1.example@example.com</value>
</property>
<property>
  <name>hadoop.http.authentication.kerberos.keytab</name>
  <value>/etc/security/keytabs/spnego.service.keytab</value>
</property>
<property>
  <name>hadoop.http.filter.initializers</name>
  <value>org.apache.hadoop.security.authentication.server.ProxyUserAuthenticationFilterInitializer,org.apache.hadoop.security.HttpCrossOriginFilterInitializer</value>
</property>
<property>
  <name>httpfs.authentication.type</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.authentication.type</name>
  <value>kerberos</value>
</property>
<property>
  <name>httpfs.hadoop.authentication.type</name>
  <value>kerberos</value>
</property>
<property>
  <name>httpfs.authentication.kerberos.principal</name>
  <value>HTTP/host-1.example@example.com</value>
</property>
<property>
  <name>httpfs.authentication.kerberos.keytab</name>
  <value>/etc/security/keytabs/spnego.service.keytab</value>
</property>
<property>
  <name>httpfs.hadoop.authentication.kerberos.principal</name>
  <value>nn/host-1.example@example.com</value>
</property>
<property>
  <name>httpfs.hadoop.authentication.kerberos.keytab</name>
  <value>/etc/security/keytabs/hdfs.service.keytab</value>
</property>
{code}

It doesn't work when the configuration skips httpfs.hadoop.authentication.type, 
httpfs.authentication.kerberos.keytab and 
httpfs.hadoop.authentication.kerberos.principal.  The httpfs server doesn't start 
when these configs are missing.  I think some logic to map the configuration is 
missing in patch 002.

> Request is a replay (34) error in httpfs
> 
>
> Key: HDFS-14845
> URL: https://issues.apache.org/jira/browse/HDFS-14845
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.3.0
> Environment: Kerberos and ZKDelgationTokenSecretManager enabled in 
> HttpFS
>Reporter: Akira Ajisaka
>Assignee: Prabhu Joseph
>Priority: Critical
> Attachments: HDFS-14845-001.patch, HDFS-14845-002.patch
>
>
> We are facing "Request is a replay (34)" error when accessing to HDFS via 
> httpfs on trunk.
> {noformat}
> % curl -i --negotiate -u : "https://:4443/webhdfs/v1/?op=liststatus"
> HTTP/1.1 401 Authentication required
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> WWW-Authenticate: Negotiate
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 271
> HTTP/1.1 403 GSSException: Failure unspecified at GSS-API level (Mechanism 
> level: Request is a replay (34))
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> (snip)
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 413
> 
> 
> 
> Error 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> 
> HTTP ERROR 403
> Problem accessing /webhdfs/v1/. Reason:
> GSSException: Failure unspecified at GSS-API level (Mechanism level: 
> Request is a replay (34))
> 
> 
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2020) Remove mTLS from Ozone GRPC

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2020?focusedWorklogId=313887=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313887
 ]

ASF GitHub Bot logged work on HDDS-2020:


Author: ASF GitHub Bot
Created on: 17/Sep/19 20:20
Start Date: 17/Sep/19 20:20
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1369: HDDS-2020. 
Remove mTLS from Ozone GRPC. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/1369#issuecomment-532385397
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 13 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | -1 | mvninstall | 30 | hadoop-ozone in trunk failed. |
   | -1 | compile | 20 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 97 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 836 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | trunk passed |
   | 0 | spotbugs | 175 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 26 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | -1 | mvninstall | 31 | hadoop-ozone in the patch failed. |
   | -1 | compile | 24 | hadoop-ozone in the patch failed. |
   | -1 | cc | 24 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 47 | hadoop-hdds: The patch generated 64 new + 908 
unchanged - 46 fixed = 972 total (was 954) |
   | -0 | checkstyle | 50 | hadoop-ozone: The patch generated 77 new + 973 
unchanged - 64 fixed = 1050 total (was 1037) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 680 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 87 | hadoop-ozone generated 3 new + 252 unchanged - 2 fixed 
= 255 total (was 254) |
   | -1 | findbugs | 27 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 229 | hadoop-hdds in the patch passed. |
   | -1 | unit | 28 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | |  | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1369 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml cc |
   | uname | Linux 30df383d112f 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / eefe9bc |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 

[jira] [Work logged] (HDDS-2032) Ozone client should retry writes in case of any ratis/stateMachine exceptions

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2032?focusedWorklogId=313864=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313864
 ]

ASF GitHub Bot logged work on HDDS-2032:


Author: ASF GitHub Bot
Created on: 17/Sep/19 19:15
Start Date: 17/Sep/19 19:15
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1420: HDDS-2032. Ozone 
client should retry writes in case of any ratis/stateMachine exceptions.
URL: https://github.com/apache/hadoop/pull/1420#issuecomment-532362125
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 78 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for branch |
   | -1 | mvninstall | 28 | hadoop-ozone in trunk failed. |
   | -1 | compile | 19 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 51 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 943 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | trunk passed |
   | 0 | spotbugs | 173 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 23 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | -1 | mvninstall | 30 | hadoop-ozone in the patch failed. |
   | -1 | compile | 21 | hadoop-ozone in the patch failed. |
   | -1 | javac | 21 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 25 | hadoop-hdds: The patch generated 2 new + 40 
unchanged - 3 fixed = 42 total (was 43) |
   | -0 | checkstyle | 27 | hadoop-ozone: The patch generated 2 new + 144 
unchanged - 2 fixed = 146 total (was 146) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 728 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 66 | hadoop-hdds in the patch passed. |
   | +1 | javadoc | 83 | hadoop-ozone generated 0 new + 253 unchanged - 2 fixed 
= 253 total (was 255) |
   | -1 | findbugs | 23 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 263 | hadoop-hdds in the patch passed. |
   | -1 | unit | 25 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 3399 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1420 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ea6521a1b3ea 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / eefe9bc |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 

[jira] [Commented] (HDFS-14850) Optimize FileSystemAccessService#getFileSystemConfiguration

2019-09-17 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931746#comment-16931746
 ] 

Hadoop QA commented on HDFS-14850:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-httpfs: The 
patch generated 1 new + 48 unchanged - 0 fixed = 49 total (was 48) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
57s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:39e82acc485 |
| JIRA Issue | HDFS-14850 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12980528/HDFS-14850.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e577ee4625cc 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / eefe9bc |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27895/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27895/testReport/ |
| Max. process+thread count | 615 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 

[jira] [Resolved] (HDDS-1593) Improve logging for failures during pipeline creation and usage.

2019-09-17 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle resolved HDDS-1593.
---
Resolution: Done

> Improve logging for failures during pipeline creation and usage.
> 
>
> Key: HDDS-1593
> URL: https://issues.apache.org/jira/browse/HDDS-1593
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
>
> When pipeline creation fails, the pipeline ID along with all the nodes 
> in the pipeline should be printed. The node for which pipeline creation 
> failed should also be printed.
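
For illustration, a minimal sketch of the kind of log line being asked for; the method and parameter names here are hypothetical, not the actual SCM pipeline code.

{code}
import java.util.Arrays;
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PipelineCreationLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(PipelineCreationLoggingSketch.class);

  // Log the pipeline id, every node in the pipeline, and the node that failed.
  static void logCreationFailure(String pipelineId, List<String> nodes,
      String failedNode, Exception cause) {
    LOG.error("Pipeline creation failed: pipelineId={}, nodes={}, failedNode={}",
        pipelineId, nodes, failedNode, cause);
  }

  public static void main(String[] args) {
    logCreationFailure("pipeline-1", Arrays.asList("dn1", "dn2", "dn3"), "dn3",
        new IllegalStateException("datanode not reachable"));
  }
}
{code}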



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2116) Create SCMPipelineAllocationManager as background thread for pipeline creation

2019-09-17 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-2116:
--
Summary: Create SCMPipelineAllocationManager as background thread for 
pipeline creation  (was: Create SCMAllocationManager as background thread for 
pipeline creation)

> Create SCMPipelineAllocationManager as background thread for pipeline creation
> --
>
> Key: HDDS-2116
> URL: https://issues.apache.org/jira/browse/HDDS-2116
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Li Cheng
>Priority: Major
>
> SCMAllocationManager can be leveraged to get a candidate set of datanodes 
> based on placement policies. It should also make the pipeline creation process 
> async and multi-threaded. This should be done when we encounter a 
> performance bottleneck in pipeline creation.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-09-17 Thread Uma Maheswara Rao G (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-14355:
---
Priority: Major  (was: Minor)

> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -
>
> Key: HDFS-14355
> URL: https://issues.apache.org/jira/browse/HDFS-14355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, 
> HDFS-14355.002.patch, HDFS-14355.003.patch, HDFS-14355.004.patch, 
> HDFS-14355.005.patch, HDFS-14355.006.patch, HDFS-14355.007.patch, 
> HDFS-14355.008.patch, HDFS-14355.009.patch
>
>
> This task is to implement caching to persistent memory using a pure 
> {{java.nio.MappedByteBuffer}}, which could be useful when native support 
> isn't available or convenient in some environments or platforms.
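
For illustration, a minimal sketch of the pure-Java mapping this task refers to, assuming the cache directory sits on a persistent-memory (e.g. DAX-mounted) filesystem; the path and size below are placeholders.

{code}
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;

public class PmemMappedByteBufferSketch {
  static MappedByteBuffer mapCacheRegion(String cacheFile, long length)
      throws Exception {
    try (RandomAccessFile raf = new RandomAccessFile(cacheFile, "rw");
         FileChannel channel = raf.getChannel()) {
      // The mapping remains valid after the channel is closed.
      return channel.map(FileChannel.MapMode.READ_WRITE, 0, length);
    }
  }

  public static void main(String[] args) throws Exception {
    MappedByteBuffer buf = mapCacheRegion("/mnt/pmem0/dn-cache/blk_0001", 4096);
    buf.put("example block bytes".getBytes(StandardCharsets.UTF_8));
    buf.force(); // flush the written range to the backing device
  }
}
{code}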



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-09-17 Thread Uma Maheswara Rao G (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-14355:
---
Priority: Minor  (was: Major)

> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -
>
> Key: HDFS-14355
> URL: https://issues.apache.org/jira/browse/HDFS-14355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, 
> HDFS-14355.002.patch, HDFS-14355.003.patch, HDFS-14355.004.patch, 
> HDFS-14355.005.patch, HDFS-14355.006.patch, HDFS-14355.007.patch, 
> HDFS-14355.008.patch, HDFS-14355.009.patch
>
>
> This task is to implement caching to persistent memory using a pure 
> {{java.nio.MappedByteBuffer}}, which could be useful when native support 
> isn't available or convenient in some environments or platforms.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14818) Check native pmdk lib by 'hadoop checknative' command

2019-09-17 Thread Uma Maheswara Rao G (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-14818:
---
Priority: Minor  (was: Major)

> Check native pmdk lib by 'hadoop checknative' command
> -
>
> Key: HDFS-14818
> URL: https://issues.apache.org/jira/browse/HDFS-14818
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: native
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Minor
> Attachments: HDFS-14818.000.patch, HDFS-14818.001.patch, 
> check_native_after_building_with_PMDK.png, 
> check_native_after_building_with_PMDK_using_NAME_instead_of_REALPATH.png, 
> check_native_after_building_without_PMDK.png
>
>
> Currently, the 'hadoop checknative' command supports checking native libs such 
> as zlib, snappy, openssl and ISA-L. It's necessary to include the pmdk lib in 
> this check as well.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2116) Create SCMAllocationManager as background thread for pipeline creation

2019-09-17 Thread Siddharth Wagle (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931689#comment-16931689
 ] 

Siddharth Wagle commented on HDDS-2116:
---

Hi [~timmylicheng], we can certainly punt it if there is no need for it. But the 
intention was to encapsulate the instantiation of pipeline allocation 
strategies in the AllocationManager based on different configurations. The 
reason it might not be needed is if we are already capturing this behavior 
somewhere else. It is ok to leave this open; I suspect we will need it.
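
To make the idea concrete, a rough sketch of what such a background manager could look like; every class, interface and parameter name below is hypothetical, not the actual SCM API.

{code}
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScmPipelineAllocationManagerSketch {
  interface PlacementPolicy {
    List<String> chooseDatanodes(int count);
  }

  interface PipelineCreator {
    void createPipeline(List<String> datanodes);
  }

  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();
  private final PlacementPolicy policy;
  private final PipelineCreator creator;

  ScmPipelineAllocationManagerSketch(PlacementPolicy policy,
      PipelineCreator creator) {
    this.policy = policy;
    this.creator = creator;
  }

  // Periodically pick a candidate set from the configured placement policy and
  // create pipelines in the background, off the client/heartbeat path.
  void start(long intervalSeconds) {
    scheduler.scheduleWithFixedDelay(
        () -> creator.createPipeline(policy.chooseDatanodes(3)),
        0, intervalSeconds, TimeUnit.SECONDS);
  }

  void stop() {
    scheduler.shutdownNow();
  }
}
{code}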

> Create SCMAllocationManager as background thread for pipeline creation
> --
>
> Key: HDDS-2116
> URL: https://issues.apache.org/jira/browse/HDDS-2116
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Li Cheng
>Priority: Major
>
> SCMAllocationManager can be leveraged to get a candidate set of datanodes 
> based on placement policies. It should also make the pipeline creation process 
> async and multi-threaded. This should be done when we encounter a 
> performance bottleneck in pipeline creation.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14655) [SBN Read] Namenode crashes if one of The JN is down

2019-09-17 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931684#comment-16931684
 ] 

Ayush Saxena commented on HDFS-14655:
-

[~xkrogen] [~vagarychen] [~shv] any further comments...

> [SBN Read] Namenode crashes if one of The JN is down
> 
>
> Key: HDFS-14655
> URL: https://issues.apache.org/jira/browse/HDFS-14655
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Harshakiran Reddy
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HDFS-14655-01.patch, HDFS-14655-02.patch, 
> HDFS-14655-03.patch, HDFS-14655-04.patch, HDFS-14655.poc.patch
>
>
> {noformat}
> 2019-07-04 17:35:54,064 | INFO  | Logger channel (from parallel executor) to 
> XXX/XXX | Retrying connect to server: XXX/XXX. Already tried 
> 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
> sleepTime=1000 MILLISECONDS) | Client.java:975
> 2019-07-04 17:35:54,087 | FATAL | Edit log tailer | Unknown error encountered 
> while tailing edits. Shutting down standby NN. | EditLogTailer.java:474
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:717)
>   at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
>   at 
> com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:440)
>   at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.getJournaledEdits(IPCLoggerChannel.java:565)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournaledEdits(AsyncLoggerSet.java:272)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectRpcInputStreams(QuorumJournalManager.java:533)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:508)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:275)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1681)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1714)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:307)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:483)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> 2019-07-04 17:35:54,112 | INFO  | Edit log tailer | Exiting with status 1: 
> java.lang.OutOfMemoryError: unable to create new native thread | 
> ExitUtil.java:210
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14833) RBF: Router Update Doesn't Sync Quota

2019-09-17 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931682#comment-16931682
 ] 

Ayush Saxena commented on HDFS-14833:
-

[~elgoiri] can you help review? :)

> RBF: Router Update Doesn't Sync Quota
> -
>
> Key: HDFS-14833
> URL: https://issues.apache.org/jira/browse/HDFS-14833
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14833-01.patch, HDFS-14833-02.patch, 
> HDFS-14833-03.patch, HDFS-14833-04.patch
>
>
> HDFS-14777 added a check to prevent an RPC call: it checks whether, in the 
> present state, the quota is changing. 
> But it ignores whether the locations are changed. If the location is 
> changed, the new destination should be synchronized with the mount entry 
> quota. 
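
For illustration, a minimal sketch of the intended check; the types and field names are hypothetical, not the actual Router mount-table API.

{code}
import java.util.Arrays;
import java.util.List;
import java.util.Objects;

public class MountEntryQuotaSyncSketch {
  static final class Entry {
    final long nsQuota;
    final List<String> destinations;

    Entry(long nsQuota, List<String> destinations) {
      this.nsQuota = nsQuota;
      this.destinations = destinations;
    }
  }

  // Skip the quota-sync RPC only when neither the quota nor the destination
  // locations changed; a changed location needs the mount entry quota pushed
  // to the new destination even if the quota value itself is unchanged.
  static boolean needsQuotaSync(Entry oldEntry, Entry newEntry) {
    boolean quotaChanged = oldEntry.nsQuota != newEntry.nsQuota;
    boolean locationsChanged =
        !Objects.equals(oldEntry.destinations, newEntry.destinations);
    return quotaChanged || locationsChanged;
  }

  public static void main(String[] args) {
    Entry before = new Entry(100, Arrays.asList("/ns0/data"));
    Entry after = new Entry(100, Arrays.asList("/ns1/data"));
    System.out.println("sync needed: " + needsQuotaSync(before, after)); // true
  }
}
{code}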



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2120) Remove hadoop classes from ozonefs-current jar

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2120?focusedWorklogId=313812=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313812
 ]

ASF GitHub Bot logged work on HDDS-2120:


Author: ASF GitHub Bot
Created on: 17/Sep/19 17:18
Start Date: 17/Sep/19 17:18
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #1434: HDDS-2120. Remove hadoop 
classes from ozonefs-current jar
URL: https://github.com/apache/hadoop/pull/1434#issuecomment-532316448
 
 
   It looks like this failed compilation in Jenkins... does the Jenkins job 
need to be updated to use the separated pom compilation command?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313812)
Time Spent: 1h 50m  (was: 1h 40m)

> Remove hadoop classes from ozonefs-current jar
> --
>
> Key: HDDS-2120
> URL: https://issues.apache.org/jira/browse/HDDS-2120
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> We have two kinds of ozone file system jars: current and legacy. The current jar 
> is designed to work only with exactly the same hadoop version that is used for 
> compilation (3.2 as of now).
> But as of now the hadoop classes are included in the current jar, which is not 
> necessary, as the jar is expected to be used in an environment where the 
> hadoop classes (exactly the same hadoop classes) are already there. They can 
> be excluded.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2141) Missing total number of operations

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2141?focusedWorklogId=313811=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313811
 ]

ASF GitHub Bot logged work on HDDS-2141:


Author: ASF GitHub Bot
Created on: 17/Sep/19 17:18
Start Date: 17/Sep/19 17:18
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1462: HDDS-2141. 
Missing total number of operations
URL: https://github.com/apache/hadoop/pull/1462#issuecomment-532316424
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 164 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 35 | hadoop-ozone in trunk failed. |
   | -1 | compile | 25 | hadoop-ozone in trunk failed. |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1088 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 206 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 41 | hadoop-ozone in the patch failed. |
   | -1 | jshint | 106 | The patch generated 1393 new + 2737 unchanged - 0 
fixed = 4130 total (was 2737) |
   | -1 | compile | 26 | hadoop-ozone in the patch failed. |
   | -1 | javac | 26 | hadoop-ozone in the patch failed. |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 802 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 265 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 227 | hadoop-hdds in the patch failed. |
   | -1 | unit | 108 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 114 | The patch does not generate ASF License warnings. |
   | | | 3663 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.lock.TestLockManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1462/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1462 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient jshint |
   | uname | Linux 03d06927b330 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c474e24 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1462/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1462/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1462/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | jshint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1462/1/artifact/out/diff-patch-jshint.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1462/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1462/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1462/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1462/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1462/1/testReport/ |
   | Max. process+thread count | 306 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1462/1/console |
   | versions | git=2.7.4 maven=3.3.9 jshint=2.10.2 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313811)
Time Spent: 

[jira] [Work logged] (HDDS-2141) Missing total number of operations

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2141?focusedWorklogId=313810=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313810
 ]

ASF GitHub Bot logged work on HDDS-2141:


Author: ASF GitHub Bot
Created on: 17/Sep/19 17:17
Start Date: 17/Sep/19 17:17
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1462: 
HDDS-2141. Missing total number of operations
URL: https://github.com/apache/hadoop/pull/1462#discussion_r325288674
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/resources/webapps/ozoneManager/ozoneManager.js
 ##
 @@ -87,6 +88,7 @@
 if (name == "Ops") {
 groupedMetrics.nums[type].ops = 
metrics[key]
 } else {
+groupedMetrics.nums[type].total += 
metrics[key]
 
 Review comment:
   jshint:84:W033:Missing semicolon.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313810)
Time Spent: 0.5h  (was: 20m)

> Missing total number of operations
> --
>
> Key: HDDS-2141
> URL: https://issues.apache.org/jira/browse/HDDS-2141
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Attachments: missing_total.png, total-new.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Total number of operations is missing from some metrics graphs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1949) Missing or error-prone test cleanup

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1949?focusedWorklogId=313796=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313796
 ]

ASF GitHub Bot logged work on HDDS-1949:


Author: ASF GitHub Bot
Created on: 17/Sep/19 16:57
Start Date: 17/Sep/19 16:57
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #1365: HDDS-1949. Missing or 
error-prone test cleanup
URL: https://github.com/apache/hadoop/pull/1365#issuecomment-532308400
 
 
Hi @adoroszlai , I am +1 on the patch. Can you please resolve the conflicts 
so we can get a new test run?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313796)
Time Spent: 1h 20m  (was: 1h 10m)

> Missing or error-prone test cleanup
> ---
>
> Key: HDDS-1949
> URL: https://issues.apache.org/jira/browse/HDDS-1949
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Some integration tests do not clean up after themselves.  Some only clean up 
> if the test is successful.
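
For illustration, the cleanup pattern being asked for in a generic, hedged form (JUnit 4, as the integration tests use); the resource here is a stand-in, not an actual mini cluster.

{code}
import static org.junit.Assert.assertEquals;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class CleanupPatternSketchTest {
  private AutoCloseable cluster; // stands in for e.g. a mini cluster

  @Before
  public void setUp() {
    cluster = () -> System.out.println("released test resources");
  }

  // Cleanup belongs here rather than at the end of the test method: @After
  // runs whether the assertions pass or fail, so a failing test cannot leak
  // clusters, directories or threads into later tests.
  @After
  public void tearDown() throws Exception {
    if (cluster != null) {
      cluster.close();
    }
  }

  @Test
  public void testSomething() {
    assertEquals(2, 1 + 1);
  }
}
{code}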



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=313792=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313792
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 17/Sep/19 16:50
Start Date: 17/Sep/19 16:50
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#issuecomment-532305637
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 48 | Maven dependency ordering for branch |
   | -1 | mvninstall | 33 | hadoop-ozone in trunk failed. |
   | -1 | compile | 23 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 124 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1003 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 184 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 26 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | -1 | mvninstall | 35 | hadoop-ozone in the patch failed. |
   | -1 | compile | 27 | hadoop-ozone in the patch failed. |
   | -1 | cc | 27 | hadoop-ozone in the patch failed. |
   | -1 | javac | 27 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 31 | hadoop-hdds: The patch generated 5 new + 9 
unchanged - 1 fixed = 14 total (was 10) |
   | -0 | checkstyle | 97 | hadoop-ozone: The patch generated 448 new + 2401 
unchanged - 92 fixed = 2849 total (was 2493) |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 765 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 91 | hadoop-ozone generated 9 new + 249 unchanged - 7 fixed 
= 258 total (was 256) |
   | -1 | findbugs | 28 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 247 | hadoop-hdds in the patch failed. |
   | -1 | unit | 31 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
   | | | 3706 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1277 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 4392a4c17837 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c474e24 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | whitespace | 

[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=313791=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313791
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 17/Sep/19 16:50
Start Date: 17/Sep/19 16:50
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1277: 
HDDS-1054. List Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r325276989
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmMultipartUploadListParts.java
 ##
 @@ -39,11 +45,15 @@
  // A list can be truncated if the number of parts exceeds the limit
  // returned in the MaxParts element.
   private boolean truncated;
+
   private final List partInfoList = new ArrayList<>();
 
   public OmMultipartUploadListParts(HddsProtos.ReplicationType type,
+  HddsProtos.ReplicationFactor factor,
   int nextMarker, boolean truncate) {
 this.replicationType = type;
+this.replicationFactor = factor;
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313791)
Time Spent: 10h  (was: 9h 50m)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10h
>  Remaining Estimate: 0h
>
> This Jira is to implement, in Ozone, listing of in-progress multipart uploads 
> in a bucket.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-730?focusedWorklogId=313790&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313790
 ]

ASF GitHub Bot logged work on HDDS-730:
---

Author: ASF GitHub Bot
Created on: 17/Sep/19 16:49
Start Date: 17/Sep/19 16:49
Worklog Time Spent: 10m 
  Work Description: cxorm commented on pull request #1459: HDDS-730. Ozone 
fs cli prints hadoop fs in usage.
URL: https://github.com/apache/hadoop/pull/1459
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313790)
Time Spent: 0.5h  (was: 20m)

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: image-2018-10-24-17-15-39-097.png, 
> ozone-cli-fs-withnonexist.png, ozone-cli-fs.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The ozone fs CLI help/usage page contains "Usage: hadoop fs [generic options]". 
> I believe the usage string should be updated.
> See line 3 of the attached screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6524) Choosing datanode retries times considering with block replica number

2019-09-17 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-6524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931647#comment-16931647
 ] 

Hadoop QA commented on HDFS-6524:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
57s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}152m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:39e82acc485 |
| JIRA Issue | HDFS-6524 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12980507/HDFS-6524.005%282%29.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6c1487bb7a5b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7f90731 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Updated] (HDFS-14771) Backport HDFS-14617 to branch-2 (Improve fsimage load time by writing sub-sections to the fsimage index)

2019-09-17 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14771:
---
Fix Version/s: 2.10.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Pushed to branch-2. Thanks!

> Backport HDFS-14617 to branch-2 (Improve fsimage load time by writing 
> sub-sections to the fsimage index)
> 
>
> Key: HDFS-14771
> URL: https://issues.apache.org/jira/browse/HDFS-14771
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
>  Labels: release-blocker
> Fix For: 2.10.0
>
> Attachments: HDFS-14771.branch-2.001.patch, 
> HDFS-14771.branch-2.002.patch, HDFS-14771.branch-2.003.patch
>
>
> This JIRA aims to backport HDFS-14617 to branch-2: improve fsimage load time 
> by writing sub-sections to the fsimage index.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2142) OM metrics mismatch (abort multipart request)

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2142?focusedWorklogId=313785&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313785
 ]

ASF GitHub Bot logged work on HDDS-2142:


Author: ASF GitHub Bot
Created on: 17/Sep/19 16:27
Start Date: 17/Sep/19 16:27
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1461: HDDS-2142. OM 
metrics mismatch (abort multipart request)
URL: https://github.com/apache/hadoop/pull/1461#issuecomment-532297169
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 160 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 33 | hadoop-ozone in trunk failed. |
   | -1 | compile | 23 | hadoop-ozone in trunk failed. |
   | -0 | checkstyle | 38 | The patch fails to run checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1041 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 90 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 182 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 30 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 35 | hadoop-ozone in the patch failed. |
   | -1 | compile | 26 | hadoop-ozone in the patch failed. |
   | -1 | javac | 26 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 39 | The patch fails to run checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 808 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 97 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 26 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 344 | hadoop-hdds in the patch failed. |
   | -1 | unit | 31 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 3879 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1461 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ec2052e77e5a 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c474e24 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1461/out/maven-branch-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1461/out/maven-patch-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out/patch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 

[jira] [Work logged] (HDDS-2032) Ozone client should retry writes in case of any ratis/stateMachine exceptions

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2032?focusedWorklogId=313783&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313783
 ]

ASF GitHub Bot logged work on HDDS-2032:


Author: ASF GitHub Bot
Created on: 17/Sep/19 16:23
Start Date: 17/Sep/19 16:23
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #1420: HDDS-2032. 
Ozone client should retry writes in case of any ratis/stateMachine exceptions.
URL: https://github.com/apache/hadoop/pull/1420
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313783)
Time Spent: 1h 20m  (was: 1h 10m)

> Ozone client should retry writes in case of any ratis/stateMachine exceptions
> -
>
> Key: HDDS-2032
> URL: https://issues.apache.org/jira/browse/HDDS-2032
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently, the Ozone client retries writes on a different pipeline or 
> container only for certain specific exceptions. If it sees an exception such 
> as DISK_FULL, CONTAINER_UNHEALTHY, or any corruption, it simply aborts the 
> write. In general, every such exception on the client should be retriable in 
> the Ozone client, and for certain specific exceptions the client should take 
> a more targeted action, such as excluding the affected containers or 
> pipelines while retrying, or informing SCM of a corrupt replica.
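
A hedged sketch of the kind of classification this implies is below; the class, enum, and helper predicates are hypothetical illustrations, not the Ozone client's actual retry-policy API:

{code:java}
// Illustrative only -- not Ozone client code. Sketches the idea of mapping
// each failure to a retry action instead of aborting the write outright.
public class WriteRetryPolicySketch {

  enum RetryAction { RETRY_SAME_PIPELINE, RETRY_EXCLUDING_CONTAINER, RETRY_EXCLUDING_PIPELINE }

  RetryAction classify(Throwable cause) {
    // Container-level failures (e.g. DISK_FULL, CONTAINER_UNHEALTHY, corruption):
    // retry elsewhere, exclude the bad container, and possibly report the
    // corrupt replica to SCM.
    if (isContainerFailure(cause)) {           // hypothetical helper
      return RetryAction.RETRY_EXCLUDING_CONTAINER;
    }
    // Ratis / state-machine failures: retry on a different pipeline.
    if (isRatisOrStateMachineFailure(cause)) { // hypothetical helper
      return RetryAction.RETRY_EXCLUDING_PIPELINE;
    }
    // Everything else is still retried, bounded by the client's retry limit.
    return RetryAction.RETRY_SAME_PIPELINE;
  }

  private boolean isContainerFailure(Throwable t) {
    return false; // placeholder; real logic would inspect the exception or result code
  }

  private boolean isRatisOrStateMachineFailure(Throwable t) {
    return false; // placeholder
  }
}
{code}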



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2032) Ozone client should retry writes in case of any ratis/stateMachine exceptions

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2032?focusedWorklogId=313780&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313780
 ]

ASF GitHub Bot logged work on HDDS-2032:


Author: ASF GitHub Bot
Created on: 17/Sep/19 16:20
Start Date: 17/Sep/19 16:20
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #1420: HDDS-2032. 
Ozone client should retry writes in case of any ratis/stateMachine exceptions.
URL: https://github.com/apache/hadoop/pull/1420
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313780)
Time Spent: 1h 10m  (was: 1h)

> Ozone client should retry writes in case of any ratis/stateMachine exceptions
> -
>
> Key: HDDS-2032
> URL: https://issues.apache.org/jira/browse/HDDS-2032
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Currently, the Ozone client retries writes on a different pipeline or 
> container only for certain specific exceptions. If it sees an exception such 
> as DISK_FULL, CONTAINER_UNHEALTHY, or any corruption, it simply aborts the 
> write. In general, every such exception on the client should be retriable in 
> the Ozone client, and for certain specific exceptions the client should take 
> a more targeted action, such as excluding the affected containers or 
> pipelines while retrying, or informing SCM of a corrupt replica.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2132) TestKeyValueContainer is failing

2019-09-17 Thread Shashikant Banerjee (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931609#comment-16931609
 ] 

Shashikant Banerjee commented on HDDS-2132:
---

Until a putBlock command gets executed on a container in a DataNode, RocksDB 
won't have an entry for the BCSID. Until then, the BCSID is 0 in memory for 
the container.
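
Read as a code-level note, that suggests the metadata-parsing path should tolerate a missing BCSID key; a rough, assumed sketch (not the committed fix, and the store handle, key name, and setter below are assumptions) could look like:

{code:java}
// Hypothetical sketch only -- not the actual HDDS-2132 change.
byte[] bcsIdBytes = metadataStore.get(blockCommitSequenceIdKey);
// Before the first putBlock the key is absent, so fall back to 0 rather
// than failing a checkNotNull() on the missing value.
long bcsId = (bcsIdBytes == null) ? 0L : Longs.fromByteArray(bcsIdBytes);
kvContainerData.updateBlockCommitSequenceId(bcsId);
{code}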

> TestKeyValueContainer is failing
> 
>
> Key: HDDS-2132
> URL: https://issues.apache.org/jira/browse/HDDS-2132
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Nanda kumar
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {{TestKeyValueContainer}} is failing with the following exception 
> {noformat}
> [ERROR] 
> testContainerImportExport(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainer)
>   Time elapsed: 0.173 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil.parseKVContainerData(KeyValueContainerUtil.java:201)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.importContainerData(KeyValueContainer.java:500)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainer.testContainerImportExport(TestKeyValueContainer.java:235)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14850) Optimize FileSystemAccessService#getFileSystemConfiguration

2019-09-17 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14850:
---
Component/s: performance

> Optimize FileSystemAccessService#getFileSystemConfiguration
> ---
>
> Key: HDFS-14850
> URL: https://issues.apache.org/jira/browse/HDFS-14850
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, performance
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14850.001.patch, HDFS-14850.002.patch
>
>
> {code:java}
>  @Override
>   public Configuration getFileSystemConfiguration() {
> Configuration conf = new Configuration(true);
> ConfigurationUtils.copy(serviceHadoopConf, conf);
> conf.setBoolean(FILE_SYSTEM_SERVICE_CREATED, true);
> // Force-clear server-side umask to make HttpFS match WebHDFS behavior
> conf.set(FsPermission.UMASK_LABEL, "000");
> return conf;
>   }
> {code}
> As the code above shows, every call to 
> FileSystemAccessService#getFileSystemConfiguration currently constructs a new 
> Configuration object. That is unnecessary and affects performance. It should 
> be enough to build the Configuration once in FileSystemAccessService#init and 
> have FileSystemAccessService#getFileSystemConfiguration return that cached 
> instance.
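
A minimal sketch of that idea follows, reusing the setup from the quoted method; the field and helper names are illustrative assumptions, not taken from the attached patches:

{code:java}
// Hypothetical sketch only -- field and helper names are assumptions,
// not the actual HDFS-14850 patch.
private Configuration cachedFsConf;

// Invoked once from FileSystemAccessService#init(): build the template
// Configuration a single time instead of on every call.
private void initFileSystemConfiguration() {
  Configuration conf = new Configuration(true);
  ConfigurationUtils.copy(serviceHadoopConf, conf);
  conf.setBoolean(FILE_SYSTEM_SERVICE_CREATED, true);
  // Force-clear server-side umask to make HttpFS match WebHDFS behavior
  conf.set(FsPermission.UMASK_LABEL, "000");
  this.cachedFsConf = conf;
}

@Override
public Configuration getFileSystemConfiguration() {
  // Return the pre-built instance; if callers may mutate the returned object,
  // handing back a defensive copy would be the safer variant.
  return cachedFsConf;
}
{code}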



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14850) Optimize FileSystemAccessService#getFileSystemConfiguration

2019-09-17 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14850:
---
Component/s: httpfs

> Optimize FileSystemAccessService#getFileSystemConfiguration
> ---
>
> Key: HDFS-14850
> URL: https://issues.apache.org/jira/browse/HDFS-14850
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14850.001.patch, HDFS-14850.002.patch
>
>
> {code:java}
>  @Override
>   public Configuration getFileSystemConfiguration() {
> Configuration conf = new Configuration(true);
> ConfigurationUtils.copy(serviceHadoopConf, conf);
> conf.setBoolean(FILE_SYSTEM_SERVICE_CREATED, true);
> // Force-clear server-side umask to make HttpFS match WebHDFS behavior
> conf.set(FsPermission.UMASK_LABEL, "000");
> return conf;
>   }
> {code}
> As the code above shows, every call to 
> FileSystemAccessService#getFileSystemConfiguration currently constructs a new 
> Configuration object. That is unnecessary and affects performance. It should 
> be enough to build the Configuration once in FileSystemAccessService#init and 
> have FileSystemAccessService#getFileSystemConfiguration return that cached 
> instance.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14850) Optimize FileSystemAccessService#getFileSystemConfiguration

2019-09-17 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14850:
---
Attachment: HDFS-14850.002.patch

> Optimize FileSystemAccessService#getFileSystemConfiguration
> ---
>
> Key: HDFS-14850
> URL: https://issues.apache.org/jira/browse/HDFS-14850
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14850.001.patch, HDFS-14850.002.patch
>
>
> {code:java}
>  @Override
>   public Configuration getFileSystemConfiguration() {
> Configuration conf = new Configuration(true);
> ConfigurationUtils.copy(serviceHadoopConf, conf);
> conf.setBoolean(FILE_SYSTEM_SERVICE_CREATED, true);
> // Force-clear server-side umask to make HttpFS match WebHDFS behavior
> conf.set(FsPermission.UMASK_LABEL, "000");
> return conf;
>   }
> {code}
> As the code above shows, every call to 
> FileSystemAccessService#getFileSystemConfiguration currently constructs a new 
> Configuration object. That is unnecessary and affects performance. It should 
> be enough to build the Configuration once in FileSystemAccessService#init and 
> have FileSystemAccessService#getFileSystemConfiguration return that cached 
> instance.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2032) Ozone client should retry writes in case of any ratis/stateMachine exceptions

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2032?focusedWorklogId=313754&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313754
 ]

ASF GitHub Bot logged work on HDDS-2032:


Author: ASF GitHub Bot
Created on: 17/Sep/19 15:40
Start Date: 17/Sep/19 15:40
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on issue #1420: HDDS-2032. Ozone 
client should retry writes in case of any ratis/stateMachine exceptions.
URL: https://github.com/apache/hadoop/pull/1420#issuecomment-532278299
 
 
   Thanks for working on this @bshashikant. There are some conflicts with this 
patch; can you please rebase?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313754)
Time Spent: 1h  (was: 50m)

> Ozone client should retry writes in case of any ratis/stateMachine exceptions
> -
>
> Key: HDDS-2032
> URL: https://issues.apache.org/jira/browse/HDDS-2032
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Currently, the Ozone client retries writes on a different pipeline or 
> container only for certain specific exceptions. If it sees an exception such 
> as DISK_FULL, CONTAINER_UNHEALTHY, or any corruption, it simply aborts the 
> write. In general, every such exception on the client should be retriable in 
> the Ozone client, and for certain specific exceptions the client should take 
> a more targeted action, such as excluding the affected containers or 
> pipelines while retrying, or informing SCM of a corrupt replica.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


