[jira] [Updated] (HDFS-13131) Modifying testcase testEnableAndDisableErasureCodingPolicy

2018-02-10 Thread chencan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDFS-13131:
---
Attachment: HDFS-13131.patch

> Modifying testcase testEnableAndDisableErasureCodingPolicy
> --
>
> Key: HDFS-13131
> URL: https://issues.apache.org/jira/browse/HDFS-13131
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
>Priority: Minor
> Attachments: HDFS-13131.patch
>
>
> In the test case testEnableAndDisableErasureCodingPolicy in 
> TestDistributedFileSystem.java, when enabling or disabling an 
> ErasureCodingPolicy, we should query enabledPoliciesByName rather than 
> policiesByName to check whether the policy has been enabled or disabled 
> successfully.






[jira] [Updated] (HDFS-13131) Modifying testcase testEnableAndDisableErasureCodingPolicy

2018-02-10 Thread chencan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDFS-13131:
---
Status: Patch Available  (was: Open)

> Modifying testcase testEnableAndDisableErasureCodingPolicy
> --
>
> Key: HDFS-13131
> URL: https://issues.apache.org/jira/browse/HDFS-13131
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
>Priority: Minor
> Attachments: HDFS-13131.patch
>
>
> In the test case testEnableAndDisableErasureCodingPolicy in 
> TestDistributedFileSystem.java, when enabling or disabling an 
> ErasureCodingPolicy, we should query enabledPoliciesByName rather than 
> policiesByName to check whether the policy has been enabled or disabled 
> successfully.






[jira] [Created] (HDFS-13131) Modifying testcase testEnableAndDisableErasureCodingPolicy

2018-02-10 Thread chencan (JIRA)
chencan created HDFS-13131:
--

 Summary: Modifying testcase testEnableAndDisableErasureCodingPolicy
 Key: HDFS-13131
 URL: https://issues.apache.org/jira/browse/HDFS-13131
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: chencan


In the test case testEnableAndDisableErasureCodingPolicy in 
TestDistributedFileSystem.java, when enabling or disabling an 
ErasureCodingPolicy, we should query enabledPoliciesByName rather than 
policiesByName to check whether the policy has been enabled or disabled 
successfully.
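For reference, a minimal sketch of the intended check (a hedged illustration: 
accessor names such as getErasureCodingPolicyManager() and 
getEnabledPolicyByName() are assumptions about the test's surroundings, not 
quotes from the patch):

{code:java}
// Verify against the *enabled* map, not the full registry: policiesByName
// also contains disabled policies, so querying it cannot tell whether the
// enable/disable actually took effect.
fs.enableErasureCodingPolicy("RS-6-3-1024k");
assertNotNull("policy should appear in enabledPoliciesByName",
    cluster.getNamesystem().getErasureCodingPolicyManager()
        .getEnabledPolicyByName("RS-6-3-1024k"));

fs.disableErasureCodingPolicy("RS-6-3-1024k");
assertNull("policy should be gone from enabledPoliciesByName",
    cluster.getNamesystem().getErasureCodingPolicyManager()
        .getEnabledPolicyByName("RS-6-3-1024k"));
{code}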






[jira] [Commented] (HDFS-13130) Log object instance get incorrectly in SlowDiskTracker

2018-02-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16359773#comment-16359773
 ] 

Hudson commented on HDFS-13130:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13641 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13641/])
HDFS-13130. Log object instance get incorrectly in SlowDiskTracker. (yqlin: rev 
25fbec67d1c01cc3531b51d9e2ec03e5c3591a7e)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SlowDiskTracker.java


> Log object instance get incorrectly in SlowDiskTracker
> --
>
> Key: HDFS-13130
> URL: https://issues.apache.org/jira/browse/HDFS-13130
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: HDFS-13130.patch
>
>
> In class org.apache.hadoop.hdfs.server.blockmanagement.*SlowDiskTracker*, the 
> LOG is incorrectly bound to *SlowPeerTracker*.class.
> {code:java}
> public class SlowDiskTracker {
>  public static final Logger LOG =
>  LoggerFactory.getLogger(SlowPeerTracker.class);{code}
>  
>  
>  
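The committed fix is the one-line change implied by the summary: pass the 
enclosing class to the logger factory.

{code:java}
public class SlowDiskTracker {
  public static final Logger LOG =
      LoggerFactory.getLogger(SlowDiskTracker.class);
{code}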






[jira] [Updated] (HDFS-13130) Log object instance get incorrectly in SlowDiskTracker

2018-02-10 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13130:
-
Affects Version/s: 3.0.0

> Log object instance get incorrectly in SlowDiskTracker
> --
>
> Key: HDFS-13130
> URL: https://issues.apache.org/jira/browse/HDFS-13130
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: HDFS-13130.patch
>
>
> In class org.apache.hadoop.hdfs.server.blockmanagement.*SlowDiskTracker*, the 
> LOG is incorrectly bound to *SlowPeerTracker*.class.
> {code:java}
> public class SlowDiskTracker {
>  public static final Logger LOG =
>  LoggerFactory.getLogger(SlowPeerTracker.class);{code}
>  
>  
>  






[jira] [Updated] (HDFS-13130) Log object instance get incorrectly in SlowDiskTracker

2018-02-10 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13130:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 3.1.0
Target Version/s: 3.1.0
  Status: Resolved  (was: Patch Available)

Thanks [~jiangjianfei] for the contribution. Committed this to trunk.

> Log object instance get incorrectly in SlowDiskTracker
> --
>
> Key: HDFS-13130
> URL: https://issues.apache.org/jira/browse/HDFS-13130
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: HDFS-13130.patch
>
>
> In class org.apache.hadoop.hdfs.server.blockmanagement.*SlowDiskTracker*, the 
> LOG is incorrectly bound to *SlowPeerTracker*.class.
> {code:java}
> public class SlowDiskTracker {
>  public static final Logger LOG =
>  LoggerFactory.getLogger(SlowPeerTracker.class);{code}
>  
>  
>  






[jira] [Commented] (HDFS-8693) refreshNamenodes does not support adding a new standby to a running DN

2018-02-10 Thread lindongdong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16359766#comment-16359766
 ] 

lindongdong commented on HDFS-8693:
---

I hit some errors related to this patch.

If the cluster has 3 nodes (A, B, C) and the NameNodes are on A and B:

when we remove B and install a new standby NameNode (SNN) on C, all DNs fail 
to register with the new SNN. The error looks like the following:
{code:java}
2018-02-09 19:49:02,728 | WARN | DataNode: 
[[[DISK]file:/_1/b-b_2/bb_3/b_4/b-5/B-2/B-3/B-4/bbb-b/hadoop/data1/dn/]]
 heartbeating to 189-219-255-103/189.219.255.103:25006 | Problem connecting to 
server: 189-219-255-103/189.219.255.103:25006 | BPServiceActor.java:197
2018-02-09 19:49:07,731 | WARN | DataNode: 
[[[DISK]file:/_1/b-b_2/bb_3/b_4/b-5/B-2/B-3/B-4/bbb-b/hadoop/data1/dn/]]
 heartbeating to 189-219-255-103/189.219.255.103:25006 | Exception encountered 
while connecting to the server : javax.security.sasl.SaslException: GSS 
initiate failed [Caused by GSSException: No valid credentials provided 
(Mechanism level: Failed to find any Kerberos tgt)] | Client.java:726
{code}

> refreshNamenodes does not support adding a new standby to a running DN
> --
>
> Key: HDFS-8693
> URL: https://issues.apache.org/jira/browse/HDFS-8693
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, ha
>Affects Versions: 2.6.0
>Reporter: Jian Fang
>Assignee: Ajith S
>Priority: Critical
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4
>
> Attachments: HDFS-8693.02.patch, HDFS-8693.03.patch, HDFS-8693.1.patch
>
>
> I tried to run the following command on a Hadoop 2.6.0 cluster with HA 
> support 
> $ hdfs dfsadmin -refreshNamenodes datanode-host:port
> to refresh the name nodes on the data nodes after I replaced one name node 
> with a new one, so that I don't need to restart the data nodes. However, I 
> got the following error:
> refreshNamenodes: HA does not currently support adding a new standby to a 
> running DN. Please do a rolling restart of DNs to reconfigure the list of NNs.
> I checked the 2.6.0 code and the error was thrown by the following code 
> snippet, which led me to this JIRA.
> {code:java}
> void refreshNNList(ArrayList<InetSocketAddress> addrs) throws IOException {
>   Set<InetSocketAddress> oldAddrs = Sets.newHashSet();
>   for (BPServiceActor actor : bpServices) {
>     oldAddrs.add(actor.getNNSocketAddress());
>   }
>   Set<InetSocketAddress> newAddrs = Sets.newHashSet(addrs);
>   if (!Sets.symmetricDifference(oldAddrs, newAddrs).isEmpty()) {
>     // Keep things simple for now -- we can implement this at a later date.
>     throw new IOException("HA does not currently support adding a new "
>         + "standby to a running DN. Please do a rolling restart of DNs to "
>         + "reconfigure the list of NNs.");
>   }
> }
> {code}
> Looks like the refreshNamenodes command is an incomplete feature. 
> Unfortunately, picking up a new name node on a replacement instance is 
> critical for auto provisioning a hadoop cluster with HDFS HA support. Without 
> this support, the HA feature cannot really be used. I also observed that the 
> new standby name node on the replacement instance can get stuck in safe mode 
> because no data nodes check in with it. Even with a rolling restart, it may 
> take quite some time to restart all data nodes on a big cluster, for example 
> with 4000 data nodes, let alone that restarting DNs is far too intrusive and 
> not a preferable operation in production. It also increases the chance of a 
> double failure, because the standby name node is not really ready for a 
> failover in case the current active name node fails.
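For clarity, the gate in the snippet above is Guava's symmetricDifference: any 
NN address that is added or removed makes the difference non-empty, so the DN 
refuses the refresh. A small self-contained illustration (host names made up):

{code:java}
import com.google.common.collect.Sets;
import java.util.Set;

public class SymmetricDiffDemo {
  public static void main(String[] args) {
    Set<String> oldAddrs = Sets.newHashSet("nnA:8020", "nnB:8020");
    Set<String> newAddrs = Sets.newHashSet("nnA:8020", "nnC:8020");
    // Contains nnB:8020 and nnC:8020 -- non-empty, so refreshNNList() throws.
    System.out.println(Sets.symmetricDifference(oldAddrs, newAddrs));
  }
}
{code}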






[jira] [Commented] (HDFS-13117) Proposal to support writing replications to HDFS asynchronously

2018-02-10 Thread xuchuanyin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16359764#comment-16359764
 ] 

xuchuanyin commented on HDFS-13117:
---

[~jojochuang] Actually I've tested it in a 3-node cluster. Copying a file from 
local disk to HDFS with replication 3 takes about *300ms*, while changing the 
HDFS file from 1 replica to 3 replicas costs about *10ms* or less. (Neither 
figure includes the time to write the local disk or to write the first replica 
to HDFS.)

 

Besides, skipping the write to local disk saves about *33%* of the disk write 
I/O.
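A sketch of the two-step pattern being measured here (standard FileSystem 
calls; path, data and buffer size are placeholders):

{code:java}
// Write with a single replica first -- the client blocks on one replica only:
FSDataOutputStream out = fs.create(path, true, 4096, (short) 1, blockSize);
out.write(data);
out.close();
// Then raise the replication factor; the NameNode schedules the extra
// replicas in the background -- roughly the ~10ms call measured above:
fs.setReplication(path, (short) 3);
{code}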

> Proposal to support writing replications to HDFS asynchronously
> ---
>
> Key: HDFS-13117
> URL: https://issues.apache.org/jira/browse/HDFS-13117
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: xuchuanyin
>Priority: Major
>
> My initial question was as below:
> ```
> I've learned that when we write data to HDFS using the interfaces provided by 
> HDFS, such as 'FileSystem.create', our client will block until all the blocks 
> and their replicas are done. This causes an efficiency problem if we use 
> HDFS as our final data storage. Many of my colleagues write the data to 
> local disk in the main thread and copy it to HDFS in another thread. 
> Obviously, this increases the disk I/O.
>  
>    So, is there a way to optimize this usage? I don't want to increase the 
> disk I/O, nor do I want to be blocked while the extra replicas are being 
> written.
>   How about writing to HDFS with only one replica in the main thread and 
> setting the actual replication factor in another thread? Or is there a 
> better way to do this?
> ```
>  
> So my proposal here is to support writing extra replicas to HDFS 
> asynchronously. The user can set a minimum replication factor as the 
> acceptable number of replicas (< the default or expected replication). When 
> writing to HDFS, the user is only blocked until the minimum replication has 
> been reached, and HDFS continues to complete the extra replicas in the 
> background. Since HDFS periodically checks the integrity of all replicas 
> anyway, we can also leave this work to HDFS itself.
>  
> There are two ways to provide the interfaces:
> 1. Create a series of interfaces by adding an `acceptableReplication` 
> parameter to the current ones, as below (see the usage sketch after this 
> list):
> ```
> Before:
> FSDataOutputStream create(Path f,
>   boolean overwrite,
>   int bufferSize,
>   short replication,
>   long blockSize
> ) throws IOException
>  
> After:
> FSDataOutputStream create(Path f,
>   boolean overwrite,
>   int bufferSize,
>   short replication,
>   short acceptableReplication, // minimum number of replicas to finish 
> before returning
>   long blockSize
> ) throws IOException
> ```
>  
> 2. Add `acceptableReplication` and `asynchronous` to the runtime (or 
> default) configuration, so the user does not have to change any interface 
> and still benefits from this feature.
>  
> What do you think about this?
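A hypothetical usage of option 1, for illustration only (the overload does not 
exist today; it is exactly the signature proposed above):

{code:java}
short replication = 3;            // desired final number of replicas
short acceptableReplication = 1;  // block only until this many are durable
FSDataOutputStream out = fs.create(path, true, 4096,
    replication, acceptableReplication, blockSize);
// create() would return once one replica is written; the remaining two
// would complete asynchronously in the background.
{code}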






[jira] [Updated] (HDFS-13130) Log object instance get incorrectly in SlowDiskTracker

2018-02-10 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13130:
-
Summary: Log object instance get incorrectly in SlowDiskTracker  (was: Log 
error: Incorrect class given in LoggerFactory.getLogger)

> Log object instance get incorrectly in SlowDiskTracker
> --
>
> Key: HDFS-13130
> URL: https://issues.apache.org/jira/browse/HDFS-13130
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Minor
> Attachments: HDFS-13130.patch
>
>
> In class org.apache.hadoop.hdfs.server.blockmanagement.*SlowDiskTracker*, the 
> LOG is incorrectly bound to *SlowPeerTracker*.class.
> {code:java}
> public class SlowDiskTracker {
>  public static final Logger LOG =
>  LoggerFactory.getLogger(SlowPeerTracker.class);{code}
>  
>  
>  






[jira] [Commented] (HDFS-13130) Log error: Incorrect class given in LoggerFactory.getLogger

2018-02-10 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16359762#comment-16359762
 ] 

Yiqun Lin commented on HDFS-13130:
--

Failed UTs are not related.
+1, will commit shortly.

> Log error: Incorrect class given in LoggerFactory.getLogger
> ---
>
> Key: HDFS-13130
> URL: https://issues.apache.org/jira/browse/HDFS-13130
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Minor
> Attachments: HDFS-13130.patch
>
>
> In class org.apache.hadoop.hdfs.server.blockmanagement.*SlowDiskTracker*, the 
> LOG is incorrectly bound to *SlowPeerTracker*.class.
> {code:java}
> public class SlowDiskTracker {
>  public static final Logger LOG =
>  LoggerFactory.getLogger(SlowPeerTracker.class);{code}
>  
>  
>  






[jira] [Issue Comment Deleted] (HDFS-12571) Ozone: remove spaces from the beginning of the hdfs script

2018-02-10 Thread Apache Spark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark updated HDFS-12571:

Comment: was deleted

(was: I still hit this issue in hadoop-3.0.0 with docker on centos7

 
{code:java}
/software/hadoop-3.0.0/libexec/hadoop-functions.sh: line 398: syntax error near 
unexpected token `<'
/software/hadoop-3.0.0/libexec/hadoop-functions.sh: line 398: `  done < <(for 
text in "${input[@]}"; do'
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 70: 
hadoop_deprecate_envvar: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 87: hadoop_bootstrap: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 104: hadoop_parse_args: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 105: shift: : numeric 
argument required
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 110: hadoop_find_confdir: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 111: 
hadoop_exec_hadoopenv: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 112: 
hadoop_import_shellprofiles: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 113: 
hadoop_exec_userfuncs: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 119: 
hadoop_exec_user_hadoopenv: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 120: 
hadoop_verify_confdir: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 122: 
hadoop_deprecate_envvar: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 123: 
hadoop_deprecate_envvar: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 124: 
hadoop_deprecate_envvar: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 129: hadoop_os_tricks: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 131: hadoop_java_setup: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 133: hadoop_basic_init: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 140: 
hadoop_shellprofiles_init: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 143: 
hadoop_add_javalibpath: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 144: 
hadoop_add_javalibpath: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 146: 
hadoop_shellprofiles_nativelib: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 152: 
hadoop_add_common_to_classpath: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 153: 
hadoop_shellprofiles_classpath: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 157: 
hadoop_exec_hadooprc: command not found
{code}
 )

> Ozone: remove spaces from the beginning of the hdfs script  
> 
>
> Key: HDFS-12571
> URL: https://issues.apache.org/jira/browse/HDFS-12571
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Critical
>  Labels: ozoneMerge
> Fix For: HDFS-7240
>
> Attachments: HDFS-12571-HDFS-7240.001.patch
>
>
> It seems that during one of the previous merges some unnecessary spaces were 
> added to the hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs file.
> After a dist build I cannot start the server with the hdfs command:
> {code}
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-functions.sh: line 398: 
> syntax error near unexpected token `<'
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-functions.sh: line 398: `  
> done < <(for text in "${input[@]}"; do'
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-config.sh: line 70: 
> hadoop_deprecate_envvar: command not found
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-config.sh: line 87: 
> hadoop_bootstrap: command not found
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-config.sh: line 104: 
> hadoop_parse_args: command not found
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-config.sh: line 105: shift: 
> : numeric argument required
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-config.sh: line 110: 
> hadoop_find_confdir: command not found
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-config.sh: line 111: 
> hadoop_exec_hadoopenv: command not found
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-config.sh: line 112: 
> hadoop_import_shellprofiles: command not found
> {code}
> See the space here:
> https://github.com/apache/hadoop/blob/d0bd0f623338dbb558d0dee5e747001d825d92c5/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
> Or see the latest version at:
> https://github.com/apache/hadoop/blob/HDFS-7240/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
> 

[jira] [Comment Edited] (HDFS-12571) Ozone: remove spaces from the beginning of the hdfs script

2018-02-10 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16359694#comment-16359694
 ] 

Apache Spark edited comment on HDFS-12571 at 2/11/18 1:22 AM:
--

I still hit this issue in hadoop-3.0.0 with docker on centos7

 
{code:java}
/software/hadoop-3.0.0/libexec/hadoop-functions.sh: line 398: syntax error near 
unexpected token `<'
/software/hadoop-3.0.0/libexec/hadoop-functions.sh: line 398: `  done < <(for 
text in "${input[@]}"; do'
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 70: 
hadoop_deprecate_envvar: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 87: hadoop_bootstrap: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 104: hadoop_parse_args: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 105: shift: : numeric 
argument required
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 110: hadoop_find_confdir: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 111: 
hadoop_exec_hadoopenv: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 112: 
hadoop_import_shellprofiles: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 113: 
hadoop_exec_userfuncs: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 119: 
hadoop_exec_user_hadoopenv: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 120: 
hadoop_verify_confdir: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 122: 
hadoop_deprecate_envvar: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 123: 
hadoop_deprecate_envvar: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 124: 
hadoop_deprecate_envvar: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 129: hadoop_os_tricks: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 131: hadoop_java_setup: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 133: hadoop_basic_init: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 140: 
hadoop_shellprofiles_init: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 143: 
hadoop_add_javalibpath: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 144: 
hadoop_add_javalibpath: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 146: 
hadoop_shellprofiles_nativelib: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 152: 
hadoop_add_common_to_classpath: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 153: 
hadoop_shellprofiles_classpath: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 157: 
hadoop_exec_hadooprc: command not found
{code}
 


was (Author: apachespark):
I still meet this issue in hadoop-3.0.0 with centos7

 
{code:java}
/software/hadoop-3.0.0/libexec/hadoop-functions.sh: line 398: syntax error near 
unexpected token `<'
/software/hadoop-3.0.0/libexec/hadoop-functions.sh: line 398: `  done < <(for 
text in "${input[@]}"; do'
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 70: 
hadoop_deprecate_envvar: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 87: hadoop_bootstrap: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 104: hadoop_parse_args: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 105: shift: : numeric 
argument required
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 110: hadoop_find_confdir: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 111: 
hadoop_exec_hadoopenv: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 112: 
hadoop_import_shellprofiles: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 113: 
hadoop_exec_userfuncs: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 119: 
hadoop_exec_user_hadoopenv: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 120: 
hadoop_verify_confdir: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 122: 
hadoop_deprecate_envvar: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 123: 
hadoop_deprecate_envvar: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 124: 
hadoop_deprecate_envvar: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 129: hadoop_os_tricks: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 131: hadoop_java_setup: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 133: hadoop_basic_init: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 140: 
hadoop_shellprofiles_init: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 143: 

[jira] [Commented] (HDFS-12571) Ozone: remove spaces from the beginning of the hdfs script

2018-02-10 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16359694#comment-16359694
 ] 

Apache Spark commented on HDFS-12571:
-

I still hit this issue in hadoop-3.0.0 on centos7

 
{code:java}
/software/hadoop-3.0.0/libexec/hadoop-functions.sh: line 398: syntax error near 
unexpected token `<'
/software/hadoop-3.0.0/libexec/hadoop-functions.sh: line 398: `  done < <(for 
text in "${input[@]}"; do'
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 70: 
hadoop_deprecate_envvar: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 87: hadoop_bootstrap: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 104: hadoop_parse_args: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 105: shift: : numeric 
argument required
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 110: hadoop_find_confdir: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 111: 
hadoop_exec_hadoopenv: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 112: 
hadoop_import_shellprofiles: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 113: 
hadoop_exec_userfuncs: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 119: 
hadoop_exec_user_hadoopenv: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 120: 
hadoop_verify_confdir: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 122: 
hadoop_deprecate_envvar: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 123: 
hadoop_deprecate_envvar: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 124: 
hadoop_deprecate_envvar: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 129: hadoop_os_tricks: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 131: hadoop_java_setup: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 133: hadoop_basic_init: 
command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 140: 
hadoop_shellprofiles_init: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 143: 
hadoop_add_javalibpath: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 144: 
hadoop_add_javalibpath: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 146: 
hadoop_shellprofiles_nativelib: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 152: 
hadoop_add_common_to_classpath: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 153: 
hadoop_shellprofiles_classpath: command not found
/software/hadoop-3.0.0/libexec/hadoop-config.sh: line 157: 
hadoop_exec_hadooprc: command not found
{code}
 

> Ozone: remove spaces from the beginning of the hdfs script  
> 
>
> Key: HDFS-12571
> URL: https://issues.apache.org/jira/browse/HDFS-12571
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Critical
>  Labels: ozoneMerge
> Fix For: HDFS-7240
>
> Attachments: HDFS-12571-HDFS-7240.001.patch
>
>
> It seems that during one of the previous merges some unnecessary spaces were 
> added to the hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs file.
> After a dist build I cannot start the server with the hdfs command:
> {code}
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-functions.sh: line 398: 
> syntax error near unexpected token `<'
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-functions.sh: line 398: `  
> done < <(for text in "${input[@]}"; do'
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-config.sh: line 70: 
> hadoop_deprecate_envvar: command not found
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-config.sh: line 87: 
> hadoop_bootstrap: command not found
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-config.sh: line 104: 
> hadoop_parse_args: command not found
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-config.sh: line 105: shift: 
> : numeric argument required
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-config.sh: line 110: 
> hadoop_find_confdir: command not found
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-config.sh: line 111: 
> hadoop_exec_hadoopenv: command not found
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-config.sh: line 112: 
> hadoop_import_shellprofiles: command not found
> {code}
> See the space here:
> https://github.com/apache/hadoop/blob/d0bd0f623338dbb558d0dee5e747001d825d92c5/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
> Or see the latest version at:
> https://github.com/apache/hadoop/blob/HDFS-7240/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs

[jira] [Updated] (HDFS-13130) Log error: Incorrect class given in LoggerFactory.getLogger

2018-02-10 Thread Jianfei Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianfei Jiang updated HDFS-13130:
-
Description: 
In class org.apache.hadoop.hdfs.server.blockmanagement.SlowDiskTracker, the LOG 
is incorrectly bound to *SlowPeerTracker*.class.
{code:java}
public class SlowDiskTracker {
 public static final Logger LOG =
 LoggerFactory.getLogger(SlowPeerTracker.class);{code}
 

 

 

  was:
In class org.apache.hadoop.hdfs.server.blockmanagement.SlowDiskTracker, the LOG 
is targeted to SlowPeerTracker.class incorrectly**

*public class SlowDiskTracker {
public static final Logger LOG =
LoggerFactory.getLogger(SlowPeerTracker.class);*

 

 

 


> Log error: Incorrect class given in LoggerFactory.getLogger
> ---
>
> Key: HDFS-13130
> URL: https://issues.apache.org/jira/browse/HDFS-13130
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Minor
> Attachments: HDFS-13130.patch
>
>
> In class org.apache.hadoop.hdfs.server.blockmanagement.SlowDiskTracker, the 
> LOG is incorrectly bound to *SlowPeerTracker*.class.
> {code:java}
> public class SlowDiskTracker {
>  public static final Logger LOG =
>  LoggerFactory.getLogger(SlowPeerTracker.class);{code}
>  
>  
>  






[jira] [Updated] (HDFS-13130) Log error: Incorrect class given in LoggerFactory.getLogger

2018-02-10 Thread Jianfei Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianfei Jiang updated HDFS-13130:
-
Description: 
In class org.apache.hadoop.hdfs.server.blockmanagement.*SlowDiskTracker*, the 
LOG is incorrectly bound to *SlowPeerTracker*.class.
{code:java}
public class SlowDiskTracker {
 public static final Logger LOG =
 LoggerFactory.getLogger(SlowPeerTracker.class);{code}
 

 

 

  was:
In class org.apache.hadoop.hdfs.server.blockmanagement.SlowDiskTracker, the LOG 
is targeted to *SlowPeerTracker*.class incorrectly.
{code:java}
public class SlowDiskTracker {
 public static final Logger LOG =
 LoggerFactory.getLogger(SlowPeerTracker.class);{code}
 

 

 


> Log error: Incorrect class given in LoggerFactory.getLogger
> ---
>
> Key: HDFS-13130
> URL: https://issues.apache.org/jira/browse/HDFS-13130
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Minor
> Attachments: HDFS-13130.patch
>
>
> In class org.apache.hadoop.hdfs.server.blockmanagement.*SlowDiskTracker*, the 
> LOG is incorrectly bound to *SlowPeerTracker*.class.
> {code:java}
> public class SlowDiskTracker {
>  public static final Logger LOG =
>  LoggerFactory.getLogger(SlowPeerTracker.class);{code}
>  
>  
>  






[jira] [Commented] (HDFS-13022) Block Storage: Kubernetes dynamic persistent volume provisioner

2018-02-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16359632#comment-16359632
 ] 

genericqa commented on HDFS-13022:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
6s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-minicluster {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
9s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
44s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
24s{color} | {color:green} root: The patch generated 0 new + 1 unchanged - 1 
fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-minicluster {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
7s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
59s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-minicluster in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}227m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || 

[jira] [Commented] (HDFS-13052) WebHDFS: Add support for snapshot diff

2018-02-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16359597#comment-16359597
 ] 

genericqa commented on HDFS-13052:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-hdfs-project: The patch generated 5 new + 
235 unchanged - 1 fixed = 240 total (was 236) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}169m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDecommission |
|   | hadoop.hdfs.TestErasureCodingPolicies |
|   | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.TestDFSStripedOutputStream |
|   | hadoop.hdfs.TestErasureCodingExerciseAPIs |
|   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13052 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-13130) Log error: Incorrect class given in LoggerFactory.getLogger

2018-02-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16359574#comment-16359574
 ] 

genericqa commented on HDFS-13130:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}168m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}259m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.TestSnapshotCommands |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSPermission |
|   | hadoop.hdfs.server.blockmanagement.TestSlowDiskTracker |
|   | hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality |
|   | hadoop.hdfs.TestWriteConfigurationToDFS |
|   | hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.federation.router.TestRouterQuota |
|   | hadoop.hdfs.TestDFSRollback |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
|   | hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.TestErasureCodingExerciseAPIs |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.TestSetrepIncreasing |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead |
|   

[jira] [Commented] (HDFS-13052) WebHDFS: Add support for snapshot diff

2018-02-10 Thread Lokesh Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16359483#comment-16359483
 ] 

Lokesh Jain commented on HDFS-13052:


[~xyao] Thanks for pointing it out! v5 patch addresses your comments.

> WebHDFS: Add support for snapshot diff
> --
>
> Key: HDFS-13052
> URL: https://issues.apache.org/jira/browse/HDFS-13052
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDFS-13052.001.patch, HDFS-13052.002.patch, 
> HDFS-13052.003.patch, HDFS-13052.004.patch, HDFS-13052.005.patch
>
>
> This Jira aims to implement the snapshot diff operation for the webHdfs 
> filesystem.
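A hedged usage sketch of what this adds (assuming the WebHdfs method mirrors 
DistributedFileSystem#getSnapshotDiffReport; the exact signature may differ in 
the final patch):

{code:java}
// Diff two snapshots of a snapshottable directory over webhdfs://
FileSystem webFs = FileSystem.get(
    URI.create("webhdfs://namenode:9870"), new Configuration());
SnapshotDiffReport report = ((WebHdfsFileSystem) webFs)
    .getSnapshotDiffReport(new Path("/snapshottable/dir"), "s1", "s2");
System.out.println(report);
{code}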






[jira] [Updated] (HDFS-13052) WebHDFS: Add support for snapshot diff

2018-02-10 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDFS-13052:
---
Attachment: HDFS-13052.005.patch

> WebHDFS: Add support for snapshot diff
> --
>
> Key: HDFS-13052
> URL: https://issues.apache.org/jira/browse/HDFS-13052
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDFS-13052.001.patch, HDFS-13052.002.patch, 
> HDFS-13052.003.patch, HDFS-13052.004.patch, HDFS-13052.005.patch
>
>
> This Jira aims to implement the snapshot diff operation for the webHdfs 
> filesystem.






[jira] [Commented] (HDFS-13022) Block Storage: Kubernetes dynamic persistent volume provisioner

2018-02-10 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16359471#comment-16359471
 ] 

Elek, Marton commented on HDFS-13022:
-

I uploaded a new version with a fixed maven dependency: the shadedclient check 
should also pass now (I excluded the new dependency from minicluster). The 
previous test failures were unrelated; the next run will hopefully prove this 
as well (no code has been changed).

> Block Storage: Kubernetes dynamic persistent volume provisioner
> ---
>
> Key: HDFS-13022
> URL: https://issues.apache.org/jira/browse/HDFS-13022
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDFS-13022-HDFS-7240.001.patch, 
> HDFS-13022-HDFS-7240.002.patch, HDFS-13022-HDFS-7240.003.patch, 
> HDFS-13022-HDFS-7240.004.patch, HDFS-13022-HDFS-7240.005.patch, 
> HDFS-13022-HDFS-7240.006.patch, HDFS-13022-HDFS-7240.007.patch
>
>
> With HDFS-13017 and HDFS-13018 the cblock/jscsi server could be used in a 
> kubernetes cluster as the backend for iscsi persistent volumes.
> Unfortunately we need to create all the required cblocks manually with 'hdfs 
> cblock -c user volume...' for all the Persistent Volumes.
>  
> But it could be handled with a simple optional component. An additional 
> service could listen on the kubernetes event stream. In case of a new 
> PersistentVolumeClaim (where the storageClassName is cblock), the cblock 
> server could create the cblock in advance AND the persistent volume could 
> be created as well.
>  
> The code is very simple, and this additional component could be optional in 
> the cblock server.
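A minimal sketch of the watcher idea (assuming the fabric8 kubernetes-client 
API; the actual patch may use a different client or wiring):

{code:java}
KubernetesClient client = new DefaultKubernetesClient();
client.persistentVolumeClaims().inAnyNamespace()
    .watch(new Watcher<PersistentVolumeClaim>() {
      @Override
      public void eventReceived(Action action, PersistentVolumeClaim pvc) {
        if (action == Action.ADDED
            && "cblock".equals(pvc.getSpec().getStorageClassName())) {
          // 1) create the cblock volume in advance,
          // 2) then create a matching PersistentVolume for the claim
        }
      }
      @Override
      public void onClose(KubernetesClientException cause) {
        // watch closed; a real service would re-establish it
      }
    });
{code}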






[jira] [Updated] (HDFS-13022) Block Storage: Kubernetes dynamic persistent volume provisioner

2018-02-10 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-13022:

Attachment: HDFS-13022-HDFS-7240.007.patch

> Block Storage: Kubernetes dynamic persistent volume provisioner
> ---
>
> Key: HDFS-13022
> URL: https://issues.apache.org/jira/browse/HDFS-13022
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDFS-13022-HDFS-7240.001.patch, 
> HDFS-13022-HDFS-7240.002.patch, HDFS-13022-HDFS-7240.003.patch, 
> HDFS-13022-HDFS-7240.004.patch, HDFS-13022-HDFS-7240.005.patch, 
> HDFS-13022-HDFS-7240.006.patch, HDFS-13022-HDFS-7240.007.patch
>
>
> With HDFS-13017 and HDFS-13018 the cblock/jscsi server could be used in a 
> kubernetes cluster as the backend for iscsi persistent volumes.
> Unfortunately we need to create all the required cblocks manually with 'hdfs 
> cblock -c user volume...' for all the Persistent Volumes.
>  
> But it could be handled with a simple optional component. An additional 
> service could listen on the kubernetes event stream. In case of a new 
> PersistentVolumeClaim (where the storageClassName is cblock), the cblock 
> server could create the cblock in advance AND the persistent volume could 
> be created as well.
>  
> The code is very simple, and this additional component could be optional in 
> the cblock server.






[jira] [Updated] (HDFS-13130) Log error: Incorrect class given in LoggerFactory.getLogger

2018-02-10 Thread Jianfei Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianfei Jiang updated HDFS-13130:
-
Attachment: HDFS-13130.patch

> Log error: Incorrect class given in LoggerFactory.getLogger
> ---
>
> Key: HDFS-13130
> URL: https://issues.apache.org/jira/browse/HDFS-13130
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Minor
> Attachments: HDFS-13130.patch
>
>
> In class org.apache.hadoop.hdfs.server.blockmanagement.SlowDiskTracker, the 
> LOG is incorrectly bound to SlowPeerTracker.class.
> {code:java}
> public class SlowDiskTracker {
>  public static final Logger LOG =
>  LoggerFactory.getLogger(SlowPeerTracker.class);{code}
>  
>  
>  






[jira] [Updated] (HDFS-13130) Log error: Incorrect class given in LoggerFactory.getLogger

2018-02-10 Thread Jianfei Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianfei Jiang updated HDFS-13130:
-
Status: Patch Available  (was: Open)

> Log error: Incorrect class given in LoggerFactory.getLogger
> ---
>
> Key: HDFS-13130
> URL: https://issues.apache.org/jira/browse/HDFS-13130
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Minor
> Attachments: HDFS-13130.patch
>
>
> In class org.apache.hadoop.hdfs.server.blockmanagement.SlowDiskTracker, the 
> LOG is incorrectly bound to SlowPeerTracker.class.
> {code:java}
> public class SlowDiskTracker {
>  public static final Logger LOG =
>  LoggerFactory.getLogger(SlowPeerTracker.class);{code}
>  
>  
>  






[jira] [Created] (HDFS-13130) Log error: Incorrect class given in LoggerFactory.getLogger

2018-02-10 Thread Jianfei Jiang (JIRA)
Jianfei Jiang created HDFS-13130:


 Summary: Log error: Incorrect class given in 
LoggerFactory.getLogger
 Key: HDFS-13130
 URL: https://issues.apache.org/jira/browse/HDFS-13130
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jianfei Jiang
Assignee: Jianfei Jiang


In class org.apache.hadoop.hdfs.server.blockmanagement.SlowDiskTracker, the LOG 
is incorrectly bound to SlowPeerTracker.class.

{code:java}
public class SlowDiskTracker {
 public static final Logger LOG =
 LoggerFactory.getLogger(SlowPeerTracker.class);{code}

 

 

 






[jira] [Updated] (HDFS-13126) Backport [HDFS-7959] to branch-2.7 to re-enable HTTP request logging for WebHDFS

2018-02-10 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-13126:

Fix Version/s: 2.7.6

> Backport [HDFS-7959] to branch-2.7 to re-enable HTTP request logging for 
> WebHDFS
> 
>
> Key: HDFS-13126
> URL: https://issues.apache.org/jira/browse/HDFS-13126
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, webhdfs
>Affects Versions: 2.7.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 2.7.6
>
> Attachments: HDFS-13126-branch-2.7.000.patch
>
>
> Due to HDFS-7279, starting in 2.7.0, the DataNode HTTP Request logs no longer 
> include WebHDFS requests, because the HTTP request logging is done internally 
> by {{HttpServer2}}, which is no longer used (replaced by Netty). This was fixed 
> in HDFS-7959 but never added to branch-2.7, where the original breakage occurred.


