[jira] [Created] (HDDS-889) Support uploading a part file in ozone

2018-11-30 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-889:
---

 Summary: Support uploading a part file in ozone
 Key: HDDS-889
 URL: https://issues.apache.org/jira/browse/HDDS-889
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


This Jira is to track the work required to create a client API for uploading a 
part file during a multipart upload. 

This Jira makes the following changes (a sketch of a possible client-side 
signature follows below):
 # Add a new API createMultipartKey in ClientProtocol.java which creates a 
multipart key.
 # On the OM end, add a new API commitMultipartUploadPart, similar to the 
existing key commit, which commits the part key of a multipart upload.
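
A minimal sketch of what the new client-side API could look like (the method 
name createMultipartKey comes from this Jira; the parameter list and the 
OzoneOutputStream return type are assumptions for illustration):

{code:java}
public interface ClientProtocol {

  /**
   * Hypothetical signature sketch, not the committed API.
   * Opens an output stream for writing one part of a multipart upload;
   * closing the stream would commit the part via commitMultipartUploadPart
   * on the OM end.
   *
   * @param volumeName volume the key belongs to
   * @param bucketName bucket the key belongs to
   * @param keyName    key being uploaded
   * @param size       size of this part in bytes
   * @param partNumber 1-based part number within the upload
   * @param uploadID   ID returned when the multipart upload was initiated
   */
  OzoneOutputStream createMultipartKey(String volumeName, String bucketName,
      String keyName, long size, int partNumber, String uploadID)
      throws IOException;
}
{code}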






[jira] [Created] (HDFS-14121) Log message about the old hosts file format is misleading

2018-11-30 Thread Daniel Templeton (JIRA)
Daniel Templeton created HDFS-14121:
---

 Summary: Log message about the old hosts file format is misleading
 Key: HDFS-14121
 URL: https://issues.apache.org/jira/browse/HDFS-14121
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Daniel Templeton


In {{CombinedHostsFileReader.readFile()}} we have the following:

{code}
LOG.warn("{} has invalid JSON format." +
    "Try the old format without top-level token defined.", hostsFile);
{code}

That message is trying to say that we tried parsing the hosts file as a 
well-formed JSON file and failed, so we're going to try again assuming that 
it's in the old badly-formed format.  What it actually says is that the hosts 
file is bad and that the admin should try switching to the old format.  Those 
are two very different things.

While we're in there, we should refactor the logging so that instead of 
reporting that we're going to try using a different parser (who the heck 
cares?), we report that we had to use the old parser to successfully parse 
the hosts file.
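
A minimal sketch of reworded messages (the exact wording is a suggestion, not 
a committed patch):

{code}
// On the failed JSON parse: say what the reader is about to do, not what
// the admin should do.
LOG.warn("{} could not be parsed as well-formed JSON; "
    + "retrying with the legacy hosts file format.", hostsFile);

// After the legacy parser succeeds: report what actually happened.
LOG.warn("{} was parsed using the legacy hosts file format; "
    + "consider migrating it to well-formed JSON.", hostsFile);
{code}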






[jira] [Created] (HDFS-14120) ORFPP should also clone DT for the virtual IP

2018-11-30 Thread Chen Liang (JIRA)
Chen Liang created HDFS-14120:
-

 Summary: ORFPP should also clone DT for the virtual IP
 Key: HDFS-14120
 URL: https://issues.apache.org/jira/browse/HDFS-14120
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-12943
Reporter: Chen Liang
Assignee: Chen Liang


Currently, with HDFS-14017, ORFPP handles delegation tokens the same way as 
ConfiguredFailoverProxyProvider: given the delegation token associated with a 
name service ID, it clones the DTs for all the corresponding physical 
addresses. But ORFPP requires more work than CFPP in the sense that it also 
leverages a VIP address for failover, meaning that in addition to cloning the 
DT for the physical addresses, ORFPP also needs to clone the DT for the VIP 
address, which was missed in HDFS-14017.
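
A minimal sketch of the idea, assuming hypothetical resolvePhysicalAddresses, 
getVipAddress, and cloneDelegationTokenForAddress helpers (the real patch 
would hook into the proxy provider's existing token-cloning path):

{code:java}
// Clone the DT for every resolved physical address *and* for the virtual
// IP, so that failover through the VIP can also authenticate.
for (InetSocketAddress physicalAddr : resolvePhysicalAddresses(nameServiceId)) {
  cloneDelegationTokenForAddress(ugi, nameServiceId, physicalAddr);
}
// The step missed in HDFS-14017: also clone the DT for the VIP address.
cloneDelegationTokenForAddress(ugi, nameServiceId, getVipAddress(nameServiceId));
{code}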






[jira] [Created] (HDFS-14119) GreedyPlanner Parameter Logging

2018-11-30 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HDFS-14119:
--

 Summary: GreedyPlanner Parameter Logging
 Key: HDFS-14119
 URL: https://issues.apache.org/jira/browse/HDFS-14119
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Affects Versions: 3.3.0
Reporter: BELUGA BEHR
 Attachments: HDFS-14119.1.patch

1. Do not use {{String.format()}} in conjunction with SLF4J.  Superfluous.
{code:java}
String message = String
    .format("Compute Plan for Node : %s:%d took %d ms ",
        node.getDataNodeName(), node.getDataNodePort(),
        endTime - startTime);
LOG.info(message);
{code}
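
The parameterized alternative lets SLF4J format lazily:

{code:java}
// The message is only built if INFO logging is enabled.
LOG.info("Compute Plan for Node : {}:{} took {} ms",
    node.getDataNodeName(), node.getDataNodePort(), endTime - startTime);
{code}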

2. Do not call an explicit {{toString()}} on an object passed as an SLF4J 
parameter.  Otherwise the string is created and then thrown away when the 
logger is not set to debug level.  Just pass the object itself and the 
framework will call {{toString()}} if needed.
{code}
LOG.debug("Step : {} ",  nextStep.toString());
{code}
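
The fix is simply:

{code}
LOG.debug("Step : {}", nextStep);
{code}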






[jira] [Created] (HDDS-888) Fix TestOzoneRpcClient test

2018-11-30 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-888:
---

 Summary: Fix TestOzoneRpcClient test
 Key: HDDS-888
 URL: https://issues.apache.org/jira/browse/HDDS-888
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


testPutAndGetKeyWithDnRestart is failing with the error below.

{code:java}
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: ContainerID 21 does not exist
	at org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:496)
	at org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.getBlock(ContainerProtocolCalls.java:109)
	at org.apache.hadoop.ozone.client.io.ChunkGroupInputStream.getFromOmKeyInfo(ChunkGroupInputStream.java:300)
	at org.apache.hadoop.ozone.client.rpc.RpcClient.getKey(RpcClient.java:539)
	at org.apache.hadoop.ozone.web.client.TestKeys.runTestPutAndGetKeyWithDnRestart(TestKeys.java:358)
	at org.apache.hadoop.ozone.web.client.TestKeys.testPutAndGetKeyWithDnRestart(TestKeys.java:337)
{code}






[jira] [Created] (HDFS-14118) RBF: Use DNS to help resolve routers

2018-11-30 Thread Fengnan Li (JIRA)
Fengnan Li created HDFS-14118:
-

 Summary: RBF: Use DNS to help resolve routers
 Key: HDFS-14118
 URL: https://issues.apache.org/jira/browse/HDFS-14118
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Fengnan Li


Clients need to know about routers to talk to the HDFS cluster (obviously), 
and any update to the set of routers (adding/removing) currently forces a 
change on every client, which is a painful process.

DNS can be used here to resolve a single domain name, known to clients, to 
the list of routers in the current config. However, DNS alone cannot restrict 
resolution to only the healthy routers based on certain health thresholds.

There are a few ways this can be solved. One way is a separate script that 
regularly checks the status of each router and updates the DNS records when a 
router fails the health thresholds; security would need to be carefully 
considered for this approach. Another way is to have the clients do the 
normal connecting/failover after they get the list of routers, which requires 
changing the current failover proxy provider.
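
A minimal sketch of the client-side resolution step, assuming a hypothetical 
logical router domain name and RPC port:

{code:java}
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.List;

public class RouterDnsResolver {
  /** Resolves one logical name to all router addresses currently in DNS.
      The domain name and port below are illustrative assumptions. */
  public static List<InetSocketAddress> resolveRouters()
      throws UnknownHostException {
    List<InetSocketAddress> routers = new ArrayList<>();
    for (InetAddress host : InetAddress.getAllByName("routers.example.com")) {
      routers.add(new InetSocketAddress(host, 8888));
    }
    // Clients would then connect/failover across this list as usual.
    return routers;
  }
}
{code}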






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-11-30 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/973/

[Nov 29, 2018 1:47:57 AM] (ajay) HDDS-642. Add chill mode exit condition for pipeline availability.
[Nov 29, 2018 3:12:17 PM] (stevel) HDFS-13713. Add specification of Multipart Upload API to FS
[Nov 29, 2018 4:13:34 PM] (bibinchundatt) YARN-8948. PlacementRule interface should be for all YarnSchedulers.
[Nov 29, 2018 4:32:59 PM] (bibinchundatt) YARN-9069. Fix SchedulerInfo#getSchedulerType for custom schedulers.
[Nov 29, 2018 4:35:20 PM] (ajay) HDDS-808. Simplify OMAction and DNAction classes used for AuditLogging.
[Nov 29, 2018 4:36:39 PM] (mackrorysd) HADOOP-14927. ITestS3GuardTool failures in testDestroyNoBucket().
[Nov 29, 2018 4:50:08 PM] (shashikant) HDDS-850. ReadStateMachineData hits OverlappingFileLockException in
[Nov 29, 2018 5:52:11 PM] (stevel) HADOOP-15959. Revert "HADOOP-12751. While using kerberos Hadoop
[Nov 29, 2018 6:48:27 PM] (brahma) HDFS-14095. EC: Track Erasure Coding commands in DFS statistics.
[Nov 29, 2018 7:37:36 PM] (xyao) HDDS-877. Ensure correct surefire version for Ozone test. Contributed by
[Nov 29, 2018 9:55:21 PM] (szetszwo) HDFS-14112. Avoid recursive call to external authorizer for
[Nov 29, 2018 10:56:07 PM] (wangda) YARN-9010. Fix the incorrect trailing slash deletion in constructor




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests:

   hadoop.crypto.key.kms.server.TestKMS
   hadoop.registry.secure.TestSecureLogins
   hadoop.hdfs.TestReconstructStripedFile
   hadoop.hdfs.web.TestWebHdfsTimeouts
   hadoop.yarn.applications.distributedshell.TestDistributedShell

   cc:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/973/artifact/out/diff-compile-cc-root.txt  [4.0K]

   javac:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/973/artifact/out/diff-compile-javac-root.txt  [336K]

   checkstyle:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/973/artifact/out/diff-checkstyle-root.txt  [17M]

   hadolint:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/973/artifact/out/diff-patch-hadolint.txt  [4.0K]

   pathlen:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/973/artifact/out/pathlen.txt  [12K]

   pylint:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/973/artifact/out/diff-patch-pylint.txt  [40K]

   shellcheck:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/973/artifact/out/diff-patch-shellcheck.txt  [68K]

   shelldocs:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/973/artifact/out/diff-patch-shelldocs.txt  [12K]

   whitespace:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/973/artifact/out/whitespace-eol.txt  [9.3M]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/973/artifact/out/whitespace-tabs.txt  [1.1M]

   findbugs:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/973/artifact/out/branch-findbugs-hadoop-hdds_client.txt  [8.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/973/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt  [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/973/artifact/out/branch-findbugs-hadoop-hdds_framework.txt  [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/973/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt  [8.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/973/artifact/out/branch-findbugs-hadoop-hdds_tools.txt  [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/973/artifact/out/branch-findbugs-hadoop-ozone_client.txt  [8.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/973/artifact/out/branch-findbugs-hadoop-ozone_common.txt  [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/973/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt  [8.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/973/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt  [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/973/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt  [12K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/973/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt  [4.0K]

[jira] [Created] (HDDS-887) Add StatemachineContext info to Dispatcher from containerStateMachine

2018-11-30 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-887:


 Summary: Add StatemachineContext info to Dispatcher from 
containerStateMachine
 Key: HDDS-887
 URL: https://issues.apache.org/jira/browse/HDDS-887
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Datanode
Affects Versions: 0.4.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.4.0


As part of transactions like writeChunk, readChunk, putBlock, etc., some 
transaction-specific info must be set for executing the transactions on the 
HddsDispatcher. Right now, all this protocol-specific info is added to the 
ContainerCommandRequestProto object, which is visible to the client. This 
Jira aims to move the protocol-specific info into a context object that is 
passed to the dispatcher, removing it from ContainerCommandRequestProto so it 
is no longer visible to the client.
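
A minimal sketch of the shape of the change (the class and field names below 
are illustrative assumptions, not the committed design):

{code:java}
// Hypothetical context object carrying transaction info that the client
// never needs to see.
public final class DispatcherContext {
  private final long term;      // e.g. the Ratis term of the transaction
  private final long logIndex;  // e.g. the Ratis log index of the transaction

  public DispatcherContext(long term, long logIndex) {
    this.term = term;
    this.logIndex = logIndex;
  }

  public long getTerm() { return term; }
  public long getLogIndex() { return logIndex; }
}

// The dispatcher would then accept the context alongside the client-visible
// proto instead of reading this info out of ContainerCommandRequestProto:
//   ContainerCommandResponseProto dispatch(
//       ContainerCommandRequestProto msg, DispatcherContext context);
{code}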






[jira] [Created] (HDDS-886) Unnecessary buffer copy in HddsDispatcher#dispatch

2018-11-30 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-886:


 Summary: Unnecessary buffer copy in HddsDispatcher#dispatch
 Key: HDDS-886
 URL: https://issues.apache.org/jira/browse/HDDS-886
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Reporter: Lokesh Jain
Assignee: Lokesh Jain
 Fix For: 0.4.0


In HddsDispatcher#dispatch, the precondition not-null check converts the 
container command to a String object. This happens even for a write-chunk 
command, which means we copy the chunk data into a string.
{code:java}
public ContainerCommandResponseProto dispatch(
    ContainerCommandRequestProto msg) {
  LOG.trace("Command {}, trace ID: {} ", msg.getCmdType().toString(),
      msg.getTraceID());
  Preconditions.checkNotNull(msg.toString());

{code}
The precondition needs to check only the msg object itself.
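
A sketch of the fix:

{code:java}
// Check the message object itself; do not serialize it to a String, which
// would copy the chunk data for write-chunk commands.
Preconditions.checkNotNull(msg);
{code}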


