[jira] [Created] (HDFS-13861) Illegal Router Admin Command Leads to Printing Usage For All Commands

2018-08-23 Thread Ayush Saxena (JIRA)
Ayush Saxena created HDFS-13861:
---

 Summary: Illegal Router Admin Command Leads to Printing 
Usage For All Commands 
 Key: HDFS-13861
 URL: https://issues.apache.org/jira/browse/HDFS-13861
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ayush Saxena
Assignee: Ayush Saxena


When an illegal argument is passed to any router admin command, the usage 
for all admin commands is printed. The output should be specific to the 
command used and print the usage for that command only.
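A minimal sketch of the intended behavior (the command names, messages, and 
the printFullUsage helper are illustrative, not the actual RouterAdmin code):

{code:java}
// Sketch: print usage only for the command that was invoked; fall back to
// the full usage only when the command itself is unknown.
private static void printUsage(String cmd) {
  switch (cmd) {
    case "-safemode":
      System.err.println("Usage: hdfs dfsrouteradmin -safemode enter | leave | get");
      break;
    case "-nameservice":
      System.err.println("Usage: hdfs dfsrouteradmin -nameservice enable | disable <nameservice>");
      break;
    default:
      printFullUsage(); // hypothetical helper printing usage for all commands
  }
}
{code}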






[jira] [Resolved] (HDFS-13828) DataNode breaching Xceiver Count

2018-08-23 Thread Amithsha (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amithsha resolved HDFS-13828.
-
Resolution: Not A Problem

> DataNode breaching Xceiver Count
> 
>
> Key: HDFS-13828
> URL: https://issues.apache.org/jira/browse/HDFS-13828
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Amithsha
>Priority: Critical
>
> We were observing the xceiver count limit of 4096 being breached on a 
> particular set of 5-8 nodes in a 900-node cluster.
> We stopped the datanode services on those nodes so that the data would be 
> replicated across the cluster. After that, we observed the same issue on a 
> new set of nodes.
> Q1: Why does this happen on a particular set of nodes? After a node is 
> decommissioned, its data should be replicated across the cluster, so why 
> does the issue reappear on a different set of nodes?
> Assumptions:
> Reading a particular block/data on those nodes might be the cause, but then 
> the issue should have been mitigated by the decommission; why was it not? 
> We suspect that, since the MR jobs are triggered from Hive, the query might 
> be referring to the same block multiple times in different stages, creating 
> this issue.
> From the thread dump:
> The datanode thread dump shows that, of the 4090+ xceiver threads created 
> on that node, nearly 4000 belonged to the same AppId, spread across 
> multiple mappers, all in the "no operation" state.
>  
> Any suggestions on this?






[jira] [Created] (HDDS-372) There are two buffer copies in ChunkOutputStream

2018-08-23 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDDS-372:


 Summary: There are two buffer copies in ChunkOutputStream
 Key: HDDS-372
 URL: https://issues.apache.org/jira/browse/HDDS-372
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Client
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


Currently, there are two buffer copies in ChunkOutputStream:
# from byte[] to ByteBuffer, and
# from ByteBuffer to ByteString.

We should eliminate the ByteBuffer in the middle.

For zero-copy I/O, we should support WritableByteChannel instead of 
OutputStream; that will not be done in this JIRA.
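A minimal sketch of the single-copy idea (assuming protobuf's ByteString 
API; this is not the actual ChunkOutputStream code):

{code:java}
import com.google.protobuf.ByteString;

class SingleCopyExample {
  // Before: byte[] -> ByteBuffer -> ByteString (two copies).
  // After:  byte[] -> ByteString (one copy).
  // ByteString.copyFrom(byte[], offset, length) copies the caller's bytes
  // directly into the ByteString with no intermediate ByteBuffer.
  static ByteString toByteString(byte[] data, int off, int len) {
    return ByteString.copyFrom(data, off, len);
  }
}
{code}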






[jira] [Created] (HDDS-371) Add RetriableException class in Ozone

2018-08-23 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-371:


 Summary: Add RetriableException class in Ozone
 Key: HDDS-371
 URL: https://issues.apache.org/jira/browse/HDDS-371
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.3.0


Certain exceptions thrown by a server can occur because the server is 
temporarily in a state where the request cannot be processed. The Ozone 
client may retry such a request; if the service is up, the server may be 
able to process the retried request. This JIRA aims to introduce the notion 
of a RetriableException in Ozone.
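A minimal sketch of what the class could look like, modeled on Hadoop's 
existing org.apache.hadoop.ipc.RetriableException (the package below is an 
assumption, not the committed code):

{code:java}
package org.apache.hadoop.hdds; // assumed package, for illustration only

import java.io.IOException;

/**
 * Signals a transient server-side condition; the client may safely
 * retry the request, typically after a back-off.
 */
public class RetriableException extends IOException {
  public RetriableException(String msg) {
    super(msg);
  }

  public RetriableException(Throwable t) {
    super(t);
  }
}
{code}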






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-08-23 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/877/

[Aug 22, 2018 3:43:40 AM] (yqlin) HDFS-13821. RBF: Add 
dfs.federation.router.mount-table.cache.enable so
[Aug 22, 2018 5:04:15 PM] (hanishakoneru) HDDS-265. Move 
numPendingDeletionBlocks and deleteTransactionId from
[Aug 22, 2018 5:54:10 PM] (xyao) HDDS-350. ContainerMapping#flushContainerInfo 
doesn't set containerId.
[Aug 22, 2018 9:48:22 PM] (aengineer) HDDS-342. Add example byteman script to 
print out hadoop rpc traffic.
[Aug 23, 2018 1:55:14 AM] (aengineer) HDDS-356. Support ColumnFamily based 
RockDBStore and TableStore.




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
 
   Unread field:FSBasedSubmarineStorageImpl.java:[line 39] 
   Found reliance on default encoding in 
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters,
 TaskType, Component):in 
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters,
 TaskType, Component): new java.io.FileWriter(File) At 
YarnServiceJobSubmitter.java:[line 192] 
   
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters,
 TaskType, Component) may fail to clean up java.io.Writer on checked exception 
Obligation to clean up resource created at YarnServiceJobSubmitter.java:to 
clean up java.io.Writer on checked exception Obligation to clean up resource 
created at YarnServiceJobSubmitter.java:[line 192] is not discharged 
   
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String,
 int, String) concatenates strings using + in a loop At 
YarnServiceUtils.java:using + in a loop At YarnServiceUtils.java:[line 72] 

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestMRTimelineEventHandling 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/877/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/877/artifact/out/diff-compile-javac-root.txt
  [328K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/877/artifact/out/diff-checkstyle-root.txt
  [17M]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/877/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/877/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/877/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/877/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/877/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/877/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/877/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/877/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/877/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [68K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/877/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [60K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/877/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/877/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [60K]
   

Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-08-23 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/567/

[Aug 22, 2018 11:18:55 PM] (aw) YETUS-660. checkstyle should report when it 
fails to execute
[Aug 22, 2018 11:19:40 PM] (aw) YETUS-611. xml test should specfically say 
which files are broken
[Aug 22, 2018 11:25:05 PM] (aw) YETUS-668. EOL 0.4.0 and 0.5.0
[Aug 22, 2018 5:04:15 PM] (hanishakoneru) HDDS-265. Move 
numPendingDeletionBlocks and deleteTransactionId from
[Aug 22, 2018 5:54:10 PM] (xyao) HDDS-350. ContainerMapping#flushContainerInfo 
doesn't set containerId.
[Aug 22, 2018 9:48:22 PM] (aengineer) HDDS-342. Add example byteman script to 
print out hadoop rpc traffic.
[Aug 23, 2018 1:55:14 AM] (aengineer) HDDS-356. Support ColumnFamily based 
RockDBStore and TableStore.
[Aug 23, 2018 4:35:43 AM] (sunilg) YARN-8015. Support all types of placement 
constraint support for


ERROR: File 'out/email-report.txt' does not exist


[jira] [Created] (HDFS-13860) Space character in the filePath is encoded as "+" while creating files in WebHDFS

2018-08-23 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDFS-13860:
--

 Summary: Space character in the filePath is encoded as "+" while 
creating files in WebHDFS 
 Key: HDFS-13860
 URL: https://issues.apache.org/jira/browse/HDFS-13860
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Shashikant Banerjee


$ ./hdfs dfs -mkdir webhdfs://127.0.0.1/tmp1/"file 1"

2018-08-23 15:16:08,258 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable

HW15685:bin sbanerjee$ ./hdfs dfs -ls webhdfs://127.0.0.1/tmp1

2018-08-23 15:16:21,244 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable

Found 1 items

drwxr-xr-x   - sbanerjee hadoop          0 2018-08-23 15:16 
webhdfs://127.0.0.1/tmp1/file+1
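The suspected cause (an assumption, not traced through the actual WebHDFS 
code path) is that the path is encoded with the 
application/x-www-form-urlencoded scheme, which maps a space to "+", 
whereas a URI path component should be percent-encoded as "%20". A minimal 
demonstration:

{code:java}
import java.net.URI;
import java.net.URLEncoder;

public class SpaceEncodingDemo {
  public static void main(String[] args) throws Exception {
    // Form encoding: a space becomes '+', which is wrong inside a path.
    System.out.println(URLEncoder.encode("file 1", "UTF-8")); // file+1

    // Percent-encoding of a path component: a space becomes %20.
    System.out.println(
        new URI(null, null, "/tmp1/file 1", null).getRawPath()); // /tmp1/file%201
  }
}
{code}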






[jira] [Created] (HDFS-13859) Add update replicaInfo's volume in LocalReplica#updateWithReplica

2018-08-23 Thread liaoyuxiangqin (JIRA)
liaoyuxiangqin created HDFS-13859:
-

 Summary: Add update replicaInfo's volume  in 
LocalReplica#updateWithReplica
 Key: HDFS-13859
 URL: https://issues.apache.org/jira/browse/HDFS-13859
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.2.0
Reporter: liaoyuxiangqin
Assignee: liaoyuxiangqin


    When DirectoryScanner uses the diff ScanInfo to check and update the 
in-memory block, I found that LocalReplica#updateWithReplica only updates 
the disk file path, not the replicaInfo's volume. The in-memory block's 
volume may differ from the disk file's location before the directory scan, 
so the volume needs to be updated at the same time, keeping the 
replicaInfo's in-memory state consistent with the disk storage. The 
relevant code is as follows:
{code:java}
  public void updateWithReplica(StorageLocation replicaLocation) {
    // for local replicas, the replica location is assumed to be a file.
    File diskFile = null;
    try {
      diskFile = new File(replicaLocation.getUri());
    } catch (IllegalArgumentException e) {
      diskFile = null;
    }

    if (null == diskFile) {
      setDirInternal(null);
    } else {
      setDirInternal(diskFile.getParentFile());
    }
  }
{code}
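A hypothetical sketch of the direction this could take (the FsVolumeSpi 
parameter and setVolumeInternal are illustrative names, not existing 
LocalReplica API; the actual patch may differ):

{code:java}
// Hypothetical sketch only: the extra parameter and setVolumeInternal(...)
// stand in for however the real patch looks up and records the volume of
// the new location.
public void updateWithReplica(StorageLocation replicaLocation,
    FsVolumeSpi newVolume) {
  File diskFile = null;
  try {
    diskFile = new File(replicaLocation.getUri());
  } catch (IllegalArgumentException e) {
    diskFile = null;
  }
  setDirInternal(diskFile == null ? null : diskFile.getParentFile());
  // also refresh the volume so the in-memory replica matches disk storage
  setVolumeInternal(newVolume);
}
{code}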
Thanks all!






[jira] [Created] (HDFS-13858) RBF: dfsrouteradmin safemode command is accepting any valid/invalid second argument. Add check to have single valid argument to safemode command

2018-08-23 Thread Soumyapn (JIRA)
Soumyapn created HDFS-13858:
---

 Summary: RBF: dfsrouteradmin safemode command is accepting any 
valid/invalid second argument. Add check to have single valid argument to 
safemode command
 Key: HDFS-13858
 URL: https://issues.apache.org/jira/browse/HDFS-13858
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: federation
Reporter: Soumyapn


*Scenario:*

Current behaviour of the dfsrouteradmin -safemode command: only the first 
argument needs to be valid. Whatever value is given as the second argument, 
the command is successful.

 

*Examples:*

hdfs dfsrouteradmin -safemode enter leave

hdfs dfsrouteradmin -safemode leave enter

hdfs dfsrouteradmin -safemode get jashfuesfhsk

hdfs dfsrouteradmin -safemode leave leave

 

With the above examples, the command succeeds as long as the first argument 
is valid.

 

*Expected:*

Add a check so that the safemode command accepts exactly one valid 
argument, as sketched below.
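A minimal sketch of the suggested check (the method and argument handling 
are illustrative, not the actual RouterAdmin code):

{code:java}
// Sketch: a -safemode invocation is valid only if it carries exactly one
// argument and that argument is a known action.
static boolean isValidSafemodeArgs(String[] args) {
  return args.length == 1
      && ("enter".equals(args[0])
          || "leave".equals(args[0])
          || "get".equals(args[0]));
}
{code}

With such a check, "hdfs dfsrouteradmin -safemode enter leave" would fail 
fast instead of silently ignoring the extra argument.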

 

 






[jira] [Created] (HDFS-13857) RBF: Choose to enable the default nameservice to write files.

2018-08-23 Thread yanghuafeng (JIRA)
yanghuafeng created HDFS-13857:
--

 Summary: RBF: Choose to enable the default nameservice to write 
files.
 Key: HDFS-13857
 URL: https://issues.apache.org/jira/browse/HDFS-13857
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: federation, hdfs
Affects Versions: 2.9.1, 3.1.0, 3.0.0
Reporter: yanghuafeng
Assignee: yanghuafeng


The default nameservice provides some default properties for the namenode 
protocol, and when a path cannot be resolved, a location in the default 
nameservice is returned. As a cluster administrator, I need all files to be 
written to locations that come from a MountTableEntry; if there is no 
corresponding location, an error should be returned. It is undesirable for 
files to end up in some unknown location. We should provide a specific 
parameter that controls whether the default nameservice may be used to 
store files.
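A minimal sketch of the proposed behavior (the flag and helper names are 
illustrative assumptions, not actual Router resolver code):

{code:java}
// Sketch: fall back to the default nameservice only when explicitly
// enabled; otherwise surface an error instead of writing the file to an
// unexpected location.
PathLocation getDestinationForPath(String path) throws IOException {
  PathLocation location = mountTable.getDestinationForPath(path);
  if (location == null) {
    if (!defaultNameServiceEnabled) { // hypothetical config switch
      throw new IOException("Cannot find a mount table entry for " + path
          + " and the default nameservice is disabled");
    }
    location = defaultLocation(path); // hypothetical fallback helper
  }
  return location;
}
{code}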






[jira] [Created] (HDFS-13856) RBF: RouterAdmin should support -refresh.

2018-08-23 Thread yanghuafeng (JIRA)
yanghuafeng created HDFS-13856:
--

 Summary: RBF: RouterAdmin should support -refresh.
 Key: HDFS-13856
 URL: https://issues.apache.org/jira/browse/HDFS-13856
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: federation, hdfs
Affects Versions: 2.9.1, 3.1.0, 3.0.0
Reporter: yanghuafeng
Assignee: yanghuafeng


Like the NameNode, the Router should support refreshing policies 
individually. For example, we have implemented simple password 
authentication per RPC connection, where the password dictionary can be 
refreshed through the generic refresh policy. We would like to support this 
in RouterAdminServer as well.
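A minimal sketch of how a refreshable component plugs into Hadoop's generic 
refresh mechanism, which is what RouterAdminServer would need to expose 
(the "password-dict" identifier is illustrative):

{code:java}
import org.apache.hadoop.ipc.RefreshHandler;
import org.apache.hadoop.ipc.RefreshRegistry;
import org.apache.hadoop.ipc.RefreshResponse;

// Sketch: the component registers itself under an identifier; an admin
// command of the form "-refresh <host:port> password-dict" would then be
// routed to handleRefresh().
public class PasswordDictRefresher implements RefreshHandler {

  public void start() {
    RefreshRegistry.defaultRegistry().register("password-dict", this);
  }

  @Override
  public RefreshResponse handleRefresh(String identifier, String[] args) {
    // reload the password dictionary here
    return RefreshResponse.successResponse();
  }
}
{code}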






[jira] [Created] (HDFS-13855) RBF: Router WebUI cannot display capacity and dn exactly when nameservice all in Federation Cluster.

2018-08-23 Thread yanghuafeng (JIRA)
yanghuafeng created HDFS-13855:
--

 Summary: RBF: Router WebUI cannot display capacity and dn exactly 
when nameservice all in Federation Cluster.
 Key: HDFS-13855
 URL: https://issues.apache.org/jira/browse/HDFS-13855
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: federation, hdfs
Affects Versions: 2.9.1, 3.1.0, 3.0.0
Reporter: yanghuafeng
Assignee: yanghuafeng


Currently FederationMetrics aggregates capacity and datanode counts across 
the different nameservices. But this aggregation is not correct when all 
the nameservices belong to the same federated cluster and therefore share 
the same datanodes; in that case the information of only one nameservice 
should be displayed.


