[jira] [Comment Edited] (HDFS-11851) getGlobalJNIEnv() may deadlock if exception is thrown

2017-09-18 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16171148#comment-16171148
 ] 

John Zhuge edited comment on HDFS-11851 at 9/19/17 5:52 AM:


Got a similar SEGV. Details in 
https://gist.github.com/jzhuge/23f6e058e1035528c86a383ad80dbb3f.

hs_err_pid14147.log: 
https://gist.github.com/jzhuge/106d87f10ec1bf6671b65cd8fad55ff8


was (Author: jzhuge):
Got a similar SEGV. Details in 
https://gist.github.com/jzhuge/23f6e058e1035528c86a383ad80dbb3f.

> getGlobalJNIEnv() may deadlock if exception is thrown
> -
>
> Key: HDFS-11851
> URL: https://issues.apache.org/jira/browse/HDFS-11851
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 3.0.0-alpha4
>Reporter: Henry Robinson
>Assignee: Sailesh Mukil
>Priority: Blocker
> Fix For: 3.0.0-alpha4
>
> Attachments: HDFS-11851.000.patch, HDFS-11851.001.patch, 
> HDFS-11851.002.patch, HDFS-11851.003.patch, HDFS-11851.004.patch, 
> HDFS-11851.005.patch
>
>
> HDFS-11529 introduced a deadlock into {{getGlobalJNIEnv()}} if an exception 
> is thrown. {{getGlobalJNIEnv()}} holds {{jvmMutex}}, but 
> {{printExceptionAndFree()}} will eventually try to acquire that lock in 
> {{setTLSExceptionStrings()}}.
> The exception might get caught from {{loadFileSystems}}:
> {code}
> jthr = invokeMethod(env, NULL, STATIC, NULL,
>  "org/apache/hadoop/fs/FileSystem",
>  "loadFileSystems", "()V");
> if (jthr) {
> printExceptionAndFree(env, jthr, PRINT_EXC_ALL, 
> "loadFileSystems");
> }
> }
> {code}
> and here's the relevant parts of the stack trace from where I call this API 
> in Impala, which uses {{libhdfs}}:
> {code}
> #0  __lll_lock_wait () at 
> ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
> #1  0x74a8d657 in _L_lock_909 () from 
> /lib/x86_64-linux-gnu/libpthread.so.0
> #2  0x74a8d480 in __GI___pthread_mutex_lock (mutex=0x47ce960 
> ) at ../nptl/pthread_mutex_lock.c:79
> #3  0x02f06056 in mutexLock (m=) at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/mutexes.c:28
> #4  0x02efe817 in setTLSExceptionStrings (rootCause=0x0, 
> stackTrace=0x0) at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c:581
> #5  0x02f065d7 in printExceptionAndFreeV (env=0x513c1e8, 
> exc=0x508a8c0, noPrintFlags=, fmt=0x34349cf "loadFileSystems", 
> ap=0x7fffb660)
> at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/exception.c:183
> #6  0x02f0683d in printExceptionAndFree (env=, 
> exc=, noPrintFlags=, fmt=)
> at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/exception.c:213
> #7  0x02eff60f in getGlobalJNIEnv () at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c:463
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11851) getGlobalJNIEnv() may deadlock if exception is thrown

2017-09-18 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16171148#comment-16171148
 ] 

John Zhuge commented on HDFS-11851:
---

Got a similar SEGV. Details in 
https://gist.github.com/jzhuge/23f6e058e1035528c86a383ad80dbb3f.

> getGlobalJNIEnv() may deadlock if exception is thrown
> -
>
> Key: HDFS-11851
> URL: https://issues.apache.org/jira/browse/HDFS-11851
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 3.0.0-alpha4
>Reporter: Henry Robinson
>Assignee: Sailesh Mukil
>Priority: Blocker
> Fix For: 3.0.0-alpha4
>
> Attachments: HDFS-11851.000.patch, HDFS-11851.001.patch, 
> HDFS-11851.002.patch, HDFS-11851.003.patch, HDFS-11851.004.patch, 
> HDFS-11851.005.patch
>
>
> HDFS-11529 introduced a deadlock into {{getGlobalJNIEnv()}} if an exception 
> is thrown. {{getGlobalJNIEnv()}} holds {{jvmMutex}}, but 
> {{printExceptionAndFree()}} will eventually try to acquire that lock in 
> {{setTLSExceptionStrings()}}.
> The exception might get caught from {{loadFileSystems}}:
> {code}
> jthr = invokeMethod(env, NULL, STATIC, NULL,
>  "org/apache/hadoop/fs/FileSystem",
>  "loadFileSystems", "()V");
> if (jthr) {
> printExceptionAndFree(env, jthr, PRINT_EXC_ALL, 
> "loadFileSystems");
> }
> }
> {code}
> and here's the relevant parts of the stack trace from where I call this API 
> in Impala, which uses {{libhdfs}}:
> {code}
> #0  __lll_lock_wait () at 
> ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
> #1  0x74a8d657 in _L_lock_909 () from 
> /lib/x86_64-linux-gnu/libpthread.so.0
> #2  0x74a8d480 in __GI___pthread_mutex_lock (mutex=0x47ce960 
> ) at ../nptl/pthread_mutex_lock.c:79
> #3  0x02f06056 in mutexLock (m=) at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/mutexes.c:28
> #4  0x02efe817 in setTLSExceptionStrings (rootCause=0x0, 
> stackTrace=0x0) at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c:581
> #5  0x02f065d7 in printExceptionAndFreeV (env=0x513c1e8, 
> exc=0x508a8c0, noPrintFlags=, fmt=0x34349cf "loadFileSystems", 
> ap=0x7fffb660)
> at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/exception.c:183
> #6  0x02f0683d in printExceptionAndFree (env=, 
> exc=, noPrintFlags=, fmt=)
> at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/exception.c:213
> #7  0x02eff60f in getGlobalJNIEnv () at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c:463
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11851) getGlobalJNIEnv() may deadlock if exception is thrown

2017-09-18 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16171134#comment-16171134
 ] 

John Zhuge edited comment on HDFS-11851 at 9/19/17 5:21 AM:


You are right because 
"/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p2550.2807/lib/hadoop/libexec/../../hadoop/.//*"
 should point to hadoop-common jar:
{noformat}
# ls 
/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p2550.2807/lib/hadoop/libexec/../../hadoop/.//hadoop-common*jar
/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p2550.2807/lib/hadoop/libexec/../../hadoop/.//hadoop-common-2.6.0-cdh5.12.1.jar
/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p2550.2807/lib/hadoop/libexec/../../hadoop/.//hadoop-common-2.6.0-cdh5.12.1-tests.jar
/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p2550.2807/lib/hadoop/libexec/../../hadoop/.//hadoop-common.jar
/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p2550.2807/lib/hadoop/libexec/../../hadoop/.//hadoop-common-tests.jar
{noformat}

How did you hit the exception? Please set "ulimit -c unlimited" before 
reproducing the issue in order to generate a core dump. Upload the core dump or 
run "gdb  " and then "bt" to get the stack trace.


was (Author: jzhuge):
You are right because 
"/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p2550.2807/lib/hadoop/libexec/../../hadoop/.//*"
 should point to hadoop-common jar:
{noformat}
# ls 
/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p2550.2807/lib/hadoop/libexec/../../hadoop/.//hadoop-common*jar
/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p2550.2807/lib/hadoop/libexec/../../hadoop/.//hadoop-common-2.6.0-cdh5.12.1.jar
/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p2550.2807/lib/hadoop/libexec/../../hadoop/.//hadoop-common-2.6.0-cdh5.12.1-tests.jar
/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p2550.2807/lib/hadoop/libexec/../../hadoop/.//hadoop-common.jar
/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p2550.2807/lib/hadoop/libexec/../../hadoop/.//hadoop-common-tests.jar
{noformat}

How did you hit the exception? Please set "ulimit -c unlimited" before 
reproducing the issue with a core dump. Upload the core dump or run "gdb  
" and then "bt" to get the stack trace.

> getGlobalJNIEnv() may deadlock if exception is thrown
> -
>
> Key: HDFS-11851
> URL: https://issues.apache.org/jira/browse/HDFS-11851
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 3.0.0-alpha4
>Reporter: Henry Robinson
>Assignee: Sailesh Mukil
>Priority: Blocker
> Fix For: 3.0.0-alpha4
>
> Attachments: HDFS-11851.000.patch, HDFS-11851.001.patch, 
> HDFS-11851.002.patch, HDFS-11851.003.patch, HDFS-11851.004.patch, 
> HDFS-11851.005.patch
>
>
> HDFS-11529 introduced a deadlock into {{getGlobalJNIEnv()}} if an exception 
> is thrown. {{getGlobalJNIEnv()}} holds {{jvmMutex}}, but 
> {{printExceptionAndFree()}} will eventually try to acquire that lock in 
> {{setTLSExceptionStrings()}}.
> The exception might get caught from {{loadFileSystems}}:
> {code}
> jthr = invokeMethod(env, NULL, STATIC, NULL,
>  "org/apache/hadoop/fs/FileSystem",
>  "loadFileSystems", "()V");
> if (jthr) {
> printExceptionAndFree(env, jthr, PRINT_EXC_ALL, 
> "loadFileSystems");
> }
> }
> {code}
> and here's the relevant parts of the stack trace from where I call this API 
> in Impala, which uses {{libhdfs}}:
> {code}
> #0  __lll_lock_wait () at 
> ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
> #1  0x74a8d657 in _L_lock_909 () from 
> /lib/x86_64-linux-gnu/libpthread.so.0
> #2  0x74a8d480 in __GI___pthread_mutex_lock (mutex=0x47ce960 
> ) at ../nptl/pthread_mutex_lock.c:79
> #3  0x02f06056 in mutexLock (m=) at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/mutexes.c:28
> #4  0x02efe817 in setTLSExceptionStrings (rootCause=0x0, 
> stackTrace=0x0) at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c:581
> #5  0x02f065d7 in printExceptionAndFreeV (env=0x513c1e8, 
> exc=0x508a8c0, noPrintFlags=, fmt=0x34349cf "loadFileSystems", 
> ap=0x7fffb660)
> at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/exception.c:183
> #6  0x02f0683d in printExceptionAndFree (env=, 
> exc=, noPrintFlags=, fmt=)
> at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/exception.c:213
> #7  0x02eff60f in getGlobalJNIEnv () at 
> 

[jira] [Commented] (HDFS-11851) getGlobalJNIEnv() may deadlock if exception is thrown

2017-09-18 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16171134#comment-16171134
 ] 

John Zhuge commented on HDFS-11851:
---

You are right because 
"/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p2550.2807/lib/hadoop/libexec/../../hadoop/.//*"
 should point to hadoop-common jar:
{noformat}
# ls 
/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p2550.2807/lib/hadoop/libexec/../../hadoop/.//hadoop-common*jar
/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p2550.2807/lib/hadoop/libexec/../../hadoop/.//hadoop-common-2.6.0-cdh5.12.1.jar
/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p2550.2807/lib/hadoop/libexec/../../hadoop/.//hadoop-common-2.6.0-cdh5.12.1-tests.jar
/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p2550.2807/lib/hadoop/libexec/../../hadoop/.//hadoop-common.jar
/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p2550.2807/lib/hadoop/libexec/../../hadoop/.//hadoop-common-tests.jar
{noformat}

How did you hit the exception? Please set "ulimit -c unlimited" before 
reproducing the issue with a core dump. Upload the core dump or run "gdb  
" and then "bt" to get the stack trace.

> getGlobalJNIEnv() may deadlock if exception is thrown
> -
>
> Key: HDFS-11851
> URL: https://issues.apache.org/jira/browse/HDFS-11851
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 3.0.0-alpha4
>Reporter: Henry Robinson
>Assignee: Sailesh Mukil
>Priority: Blocker
> Fix For: 3.0.0-alpha4
>
> Attachments: HDFS-11851.000.patch, HDFS-11851.001.patch, 
> HDFS-11851.002.patch, HDFS-11851.003.patch, HDFS-11851.004.patch, 
> HDFS-11851.005.patch
>
>
> HDFS-11529 introduced a deadlock into {{getGlobalJNIEnv()}} if an exception 
> is thrown. {{getGlobalJNIEnv()}} holds {{jvmMutex}}, but 
> {{printExceptionAndFree()}} will eventually try to acquire that lock in 
> {{setTLSExceptionStrings()}}.
> The exception might get caught from {{loadFileSystems}}:
> {code}
> jthr = invokeMethod(env, NULL, STATIC, NULL,
>  "org/apache/hadoop/fs/FileSystem",
>  "loadFileSystems", "()V");
> if (jthr) {
> printExceptionAndFree(env, jthr, PRINT_EXC_ALL, 
> "loadFileSystems");
> }
> }
> {code}
> and here's the relevant parts of the stack trace from where I call this API 
> in Impala, which uses {{libhdfs}}:
> {code}
> #0  __lll_lock_wait () at 
> ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
> #1  0x74a8d657 in _L_lock_909 () from 
> /lib/x86_64-linux-gnu/libpthread.so.0
> #2  0x74a8d480 in __GI___pthread_mutex_lock (mutex=0x47ce960 
> ) at ../nptl/pthread_mutex_lock.c:79
> #3  0x02f06056 in mutexLock (m=) at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/mutexes.c:28
> #4  0x02efe817 in setTLSExceptionStrings (rootCause=0x0, 
> stackTrace=0x0) at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c:581
> #5  0x02f065d7 in printExceptionAndFreeV (env=0x513c1e8, 
> exc=0x508a8c0, noPrintFlags=, fmt=0x34349cf "loadFileSystems", 
> ap=0x7fffb660)
> at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/exception.c:183
> #6  0x02f0683d in printExceptionAndFree (env=, 
> exc=, noPrintFlags=, fmt=)
> at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/exception.c:213
> #7  0x02eff60f in getGlobalJNIEnv () at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c:463
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11851) getGlobalJNIEnv() may deadlock if exception is thrown

2017-09-18 Thread Ruslan Dautkhanov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16171130#comment-16171130
 ] 

Ruslan Dautkhanov commented on HDFS-11851:
--

[~jzhuge], yes I did {{export CLASSPATH=`hadoop classpath`}} before so 
{{CLASSPATH}} does have all required jars including {{hadoop-common.jar}}
I think some elements of that hs_err..log file are misleading as it shows 
internal events including when the classes were discovered, 
for example,
{noformat}
Event: 0.067 Thread 0x00e0 Exception  
(0x000580169ab0) thrown at 
[/HUDSON3/workspace/8-2-build-linux-amd64/jdk8u141/9370/hotspot/src/share/vm/classfile/systemDictionary.cpp,
 line 199]
Event: 0.067 Thread 0x00e0 Exception  (0x00058016c1f0) thrown 
at 
[/HUDSON3/workspace/8-2-build-linux-amd64/jdk8u141/9370/hotspot/src/share/vm/classfile/systemDictionary.cpp,
 line 199]
Event: 0.068 Thread 0x00e0 Exception  (0x00058016e708) thrown 
at 
[/HUDSON3/workspace/8-2-build-linux-amd64/jdk8u141/9370/hotspot/src/share/vm/classfile/systemDictionary.cpp,
 line 199]
{noformat}

but then right down below you see 
{noformat}
Event: 0.066 loading class java/io/FileNotFoundException
Event: 0.066 loading class java/io/IOException
Event: 0.066 loading class java/io/IOException done
Event: 0.066 loading class java/io/FileNotFoundException done
Event: 0.066 loading class java/security/PrivilegedActionException
Event: 0.066 loading class java/security/PrivilegedActionException done
Event: 0.067 loading class org/apache/commons/lang/exception/ExceptionUtils
Event: 0.067 loading class org/apache/commons/lang/exception/ExceptionUtils done
Event: 0.067 loading class org/apache/commons/lang/exception/ExceptionUtils
Event: 0.067 loading class org/apache/commons/lang/exception/ExceptionUtils done
{noformat}

Notice {{org/apache/commons/lang/exception/ExceptionUtils}} for example list 
twice in that "Internal exceptions (10 events)" section, but then shown as 
loaded fine in "Events (10 events)" section. So I think "Internal exceptions" 
is misleading? 



> getGlobalJNIEnv() may deadlock if exception is thrown
> -
>
> Key: HDFS-11851
> URL: https://issues.apache.org/jira/browse/HDFS-11851
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 3.0.0-alpha4
>Reporter: Henry Robinson
>Assignee: Sailesh Mukil
>Priority: Blocker
> Fix For: 3.0.0-alpha4
>
> Attachments: HDFS-11851.000.patch, HDFS-11851.001.patch, 
> HDFS-11851.002.patch, HDFS-11851.003.patch, HDFS-11851.004.patch, 
> HDFS-11851.005.patch
>
>
> HDFS-11529 introduced a deadlock into {{getGlobalJNIEnv()}} if an exception 
> is thrown. {{getGlobalJNIEnv()}} holds {{jvmMutex}}, but 
> {{printExceptionAndFree()}} will eventually try to acquire that lock in 
> {{setTLSExceptionStrings()}}.
> The exception might get caught from {{loadFileSystems}}:
> {code}
> jthr = invokeMethod(env, NULL, STATIC, NULL,
>  "org/apache/hadoop/fs/FileSystem",
>  "loadFileSystems", "()V");
> if (jthr) {
> printExceptionAndFree(env, jthr, PRINT_EXC_ALL, 
> "loadFileSystems");
> }
> }
> {code}
> and here's the relevant parts of the stack trace from where I call this API 
> in Impala, which uses {{libhdfs}}:
> {code}
> #0  __lll_lock_wait () at 
> ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
> #1  0x74a8d657 in _L_lock_909 () from 
> /lib/x86_64-linux-gnu/libpthread.so.0
> #2  0x74a8d480 in __GI___pthread_mutex_lock (mutex=0x47ce960 
> ) at ../nptl/pthread_mutex_lock.c:79
> #3  0x02f06056 in mutexLock (m=) at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/mutexes.c:28
> #4  0x02efe817 in setTLSExceptionStrings (rootCause=0x0, 
> stackTrace=0x0) at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c:581
> #5  0x02f065d7 in printExceptionAndFreeV (env=0x513c1e8, 
> exc=0x508a8c0, noPrintFlags=, fmt=0x34349cf "loadFileSystems", 
> ap=0x7fffb660)
> at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/exception.c:183
> #6  0x02f0683d in printExceptionAndFree (env=, 
> exc=, noPrintFlags=, fmt=)
> at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/exception.c:213
> #7  0x02eff60f in getGlobalJNIEnv () at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c:463
> {code}



--
This message was sent by 

[jira] [Commented] (HDFS-11799) Introduce a config to allow setting up write pipeline with fewer nodes than replication factor

2017-09-18 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16171128#comment-16171128
 ] 

Yongjun Zhang commented on HDFS-11799:
--

Many thanks [~brahmareddy]. +1 pending jenkins.


> Introduce a config to allow setting up write pipeline with fewer nodes than 
> replication factor
> --
>
> Key: HDFS-11799
> URL: https://issues.apache.org/jira/browse/HDFS-11799
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-11799-002.patch, HDFS-11799-003.patch, 
> HDFS-11799-004.patch, HDFS-11799-005.patch, HDFS-11799-006.patch, 
> HDFS-11799-007.patch, HDFS-11799-008.patch, HDFS-11799-009.patch, 
> HDFS-11799.patch
>
>
> During pipeline recovery, if not enough DNs can be found, if 
> dfs.client.block.write.replace-datanode-on-failure.best-effort
> is enabled, we let the pipeline to continue, even if there is a single DN.
> Similarly, when we create the write pipeline initially, if for some reason we 
> can't find enough DNs, we can have a similar config to enable writing with a 
> single DN.
> More study will be done.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11799) Introduce a config to allow setting up write pipeline with fewer nodes than replication factor

2017-09-18 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11799:

Attachment: HDFS-11799-009.patch

Uploaded the patch to fix the checkstyle,I ignored this,yongjun thanks for 
reminding.

> Introduce a config to allow setting up write pipeline with fewer nodes than 
> replication factor
> --
>
> Key: HDFS-11799
> URL: https://issues.apache.org/jira/browse/HDFS-11799
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-11799-002.patch, HDFS-11799-003.patch, 
> HDFS-11799-004.patch, HDFS-11799-005.patch, HDFS-11799-006.patch, 
> HDFS-11799-007.patch, HDFS-11799-008.patch, HDFS-11799-009.patch, 
> HDFS-11799.patch
>
>
> During pipeline recovery, if not enough DNs can be found, if 
> dfs.client.block.write.replace-datanode-on-failure.best-effort
> is enabled, we let the pipeline to continue, even if there is a single DN.
> Similarly, when we create the write pipeline initially, if for some reason we 
> can't find enough DNs, we can have a similar config to enable writing with a 
> single DN.
> More study will be done.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12466) Ozone: KSM: Add log message to print host and port used by KSM during startup

2017-09-18 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16171102#comment-16171102
 ] 

Weiwei Yang commented on HDFS-12466:


BTW, I am reducing this JIRA's priority since this is mostly a log improvement. 
Let me know if you have a different opinion. Thanks.

> Ozone: KSM: Add log message to print host and port used by KSM during startup
> -
>
> Key: HDFS-12466
> URL: https://issues.apache.org/jira/browse/HDFS-12466
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
>Priority: Minor
> Attachments: HDFS-12466-HDFS-7240.000.patch
>
>
> This jira is to add a log message during KSM startup which will log the host 
> and port used by KSM server.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11799) Introduce a config to allow setting up write pipeline with fewer nodes than replication factor

2017-09-18 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16171105#comment-16171105
 ] 

Brahma Reddy Battula commented on HDFS-11799:
-

Test Failures are unrelated. [~yzhangal] can you take look now..?

> Introduce a config to allow setting up write pipeline with fewer nodes than 
> replication factor
> --
>
> Key: HDFS-11799
> URL: https://issues.apache.org/jira/browse/HDFS-11799
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-11799-002.patch, HDFS-11799-003.patch, 
> HDFS-11799-004.patch, HDFS-11799-005.patch, HDFS-11799-006.patch, 
> HDFS-11799-007.patch, HDFS-11799-008.patch, HDFS-11799.patch
>
>
> During pipeline recovery, if not enough DNs can be found, if 
> dfs.client.block.write.replace-datanode-on-failure.best-effort
> is enabled, we let the pipeline to continue, even if there is a single DN.
> Similarly, when we create the write pipeline initially, if for some reason we 
> can't find enough DNs, we can have a similar config to enable writing with a 
> single DN.
> More study will be done.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12466) Ozone: KSM: Add log message to print host and port used by KSM during startup

2017-09-18 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12466:
---
Priority: Minor  (was: Major)

> Ozone: KSM: Add log message to print host and port used by KSM during startup
> -
>
> Key: HDFS-12466
> URL: https://issues.apache.org/jira/browse/HDFS-12466
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
>Priority: Minor
> Attachments: HDFS-12466-HDFS-7240.000.patch
>
>
> This jira is to add a log message during KSM startup which will log the host 
> and port used by KSM server.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12466) Ozone: KSM: Add log message to print host and port used by KSM during startup

2017-09-18 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16171098#comment-16171098
 ] 

Weiwei Yang edited comment on HDFS-12466 at 9/19/17 4:26 AM:
-

Hi [~anu]/[~nandakumar131]

Not sure if there is a good place to add such log. Usually when a client gets a 
connection problem, the error message should have the info about what address 
it failed to connect. When it is 0.0.0.0 and connect from a remote host, it's 
easy to figure out this is a configuration problem. Do you mean right now the 
error message is not clear to detect such problem? How it is like now?

Thanks


was (Author: cheersyang):
Not sure if there is a good place to add such log. Usually when a client gets a 
connection problem, the error message should have the info about what address 
it failed to connect. When it is 0.0.0.0 and connect from a remote host, it's 
easy to figure out this is a configuration problem. Do you mean right now the 
error message is not clear to detect such problem? How it is like now?

> Ozone: KSM: Add log message to print host and port used by KSM during startup
> -
>
> Key: HDFS-12466
> URL: https://issues.apache.org/jira/browse/HDFS-12466
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12466-HDFS-7240.000.patch
>
>
> This jira is to add a log message during KSM startup which will log the host 
> and port used by KSM server.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12466) Ozone: KSM: Add log message to print host and port used by KSM during startup

2017-09-18 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16171098#comment-16171098
 ] 

Weiwei Yang commented on HDFS-12466:


Not sure if there is a good place to add such log. Usually when a client gets a 
connection problem, the error message should have the info about what address 
it failed to connect. When it is 0.0.0.0 and connect from a remote host, it's 
easy to figure out this is a configuration problem. Do you mean right now the 
error message is not clear to detect such problem? How it is like now?

> Ozone: KSM: Add log message to print host and port used by KSM during startup
> -
>
> Key: HDFS-12466
> URL: https://issues.apache.org/jira/browse/HDFS-12466
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12466-HDFS-7240.000.patch
>
>
> This jira is to add a log message during KSM startup which will log the host 
> and port used by KSM server.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12480) TestNameNodeMetrics#testTransactionAndCheckpointMetrics Fails in trunk

2017-09-18 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16171083#comment-16171083
 ] 

Brahma Reddy Battula commented on HDFS-12480:
-

+1, Re-triggered the jenkins.

> TestNameNodeMetrics#testTransactionAndCheckpointMetrics Fails in trunk
> --
>
> Key: HDFS-12480
> URL: https://issues.apache.org/jira/browse/HDFS-12480
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Hanisha Koneru
> Attachments: HDFS-12480.001.patch
>
>
> {noformat}
> java.lang.AssertionError: Bad value for metric LastWrittenTransactionId 
> expected:<3> but was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.test.MetricsAsserts.assertGauge(MetricsAsserts.java:189)
>   at 
> org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics.testTransactionAndCheckpointMetrics(TestNameNodeMetrics.java:854)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11851) getGlobalJNIEnv() may deadlock if exception is thrown

2017-09-18 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16171071#comment-16171071
 ] 

John Zhuge commented on HDFS-11851:
---

This NoClassDefFoundError indicates misconfigured classpath for hadoop-common 
jar:
{noformat}
Event: 0.067 Thread 0x00e0 Exception  
(0x000580169ab0) thrown at 
[/HUDSON3/workspace/8-2-build-linux-amd64/jdk8u141/9370/hotspot/src/share/vm/classfile/systemDictionary.cpp,
 line 199]
{noformat}
Could you please double check {{CLASSPATH}} to make sure hadoop-common jar can 
be found?


> getGlobalJNIEnv() may deadlock if exception is thrown
> -
>
> Key: HDFS-11851
> URL: https://issues.apache.org/jira/browse/HDFS-11851
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 3.0.0-alpha4
>Reporter: Henry Robinson
>Assignee: Sailesh Mukil
>Priority: Blocker
> Fix For: 3.0.0-alpha4
>
> Attachments: HDFS-11851.000.patch, HDFS-11851.001.patch, 
> HDFS-11851.002.patch, HDFS-11851.003.patch, HDFS-11851.004.patch, 
> HDFS-11851.005.patch
>
>
> HDFS-11529 introduced a deadlock into {{getGlobalJNIEnv()}} if an exception 
> is thrown. {{getGlobalJNIEnv()}} holds {{jvmMutex}}, but 
> {{printExceptionAndFree()}} will eventually try to acquire that lock in 
> {{setTLSExceptionStrings()}}.
> The exception might get caught from {{loadFileSystems}}:
> {code}
> jthr = invokeMethod(env, NULL, STATIC, NULL,
>  "org/apache/hadoop/fs/FileSystem",
>  "loadFileSystems", "()V");
> if (jthr) {
> printExceptionAndFree(env, jthr, PRINT_EXC_ALL, 
> "loadFileSystems");
> }
> }
> {code}
> and here's the relevant parts of the stack trace from where I call this API 
> in Impala, which uses {{libhdfs}}:
> {code}
> #0  __lll_lock_wait () at 
> ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
> #1  0x74a8d657 in _L_lock_909 () from 
> /lib/x86_64-linux-gnu/libpthread.so.0
> #2  0x74a8d480 in __GI___pthread_mutex_lock (mutex=0x47ce960 
> ) at ../nptl/pthread_mutex_lock.c:79
> #3  0x02f06056 in mutexLock (m=) at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/mutexes.c:28
> #4  0x02efe817 in setTLSExceptionStrings (rootCause=0x0, 
> stackTrace=0x0) at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c:581
> #5  0x02f065d7 in printExceptionAndFreeV (env=0x513c1e8, 
> exc=0x508a8c0, noPrintFlags=, fmt=0x34349cf "loadFileSystems", 
> ap=0x7fffb660)
> at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/exception.c:183
> #6  0x02f0683d in printExceptionAndFree (env=, 
> exc=, noPrintFlags=, fmt=)
> at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/exception.c:213
> #7  0x02eff60f in getGlobalJNIEnv () at 
> /data/2/jenkins/workspace/impala-hadoop-dependency/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c:463
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12454) Ozone : the sample ozone-site.xml in OzoneGettingStarted does not work

2017-09-18 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16171064#comment-16171064
 ] 

Weiwei Yang commented on HDFS-12454:


Hi [~vagarychen]

Thanks for addressing these problems and working on the fix, it looks good to 
me. Please see my review comments

*OzoneGettingStarted.md*

# Can we revise the description of {{ozone.metadata.dirs}}, currently it only 
says this dir is used for container metadata by datanode, but it actually is 
also used by KSM and SCM services.
# line 140: since datanode.id now is not required, can we remove this line?

*OzoneUtils.java*

# line 346 - 347: can we instead of using hard coded path delimiter, use 
{{Paths.get(...)}} to resolve paths? It is supposed to work for both windows 
and linux better.
# line 339 and 341, this can be replaced by {{Strings.isNullOrEmpty()}}

Thanks

> Ozone : the sample ozone-site.xml in OzoneGettingStarted does not work
> --
>
> Key: HDFS-12454
> URL: https://issues.apache.org/jira/browse/HDFS-12454
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Blocker
>  Labels: ozoneMerge
> Attachments: HDFS-12454-HDFS-7240.001.patch, 
> HDFS-12454-HDFS-7240.002.patch, HDFS-12454-HDFS-7240.003.patch
>
>
> In OzoneGettingStarted.md there is a sample ozone-site.xml file. But there 
> are a few issues with it.
> 1.
> {code}
> 
>   ozone.scm.block.client.address
>   scm.hadoop.apache.org
> 
>  
> ozone.ksm.address
> ksm.hadoop.apache.org
>   
> {code}
> The value should be an address instead.
> 2.
> {{datanode.ObjectStoreHandler.(ObjectStoreHandler.java:103)}} requires 
> {{ozone.scm.client.address}} to be set, which is missing from this sample 
> file. Missing this config will seem to cause failure on starting datanode.
> 3.
> {code}
> 
>   ozone.scm.names
>   scm.hadoop.apache.org
> 
> {code}
> This value did not make much sense to, I found the comment in 
> {{ScmConfigKeys}} that says
> {code}
> // ozone.scm.names key is a set of DNS | DNS:PORT | IP Address | IP:PORT.
> // Written as a comma separated string. e.g. scm1, scm2:8020, 7.7.7.7:
> {code}
> So maybe we should write something like scm1 as value here.
> 4. I'm not entirely sure about this, but 
> [here|https://wiki.apache.org/hadoop/Ozone#Configuration] it says 
> {code}
> 
> ozone.handler.type
> local
>   
> {code}
> is also part of minimum setting, do we need to add this [~anu]?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12466) Ozone: KSM: Add log message to print host and port used by KSM during startup

2017-09-18 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170997#comment-16170997
 ] 

Anu Engineer edited comment on HDFS-12466 at 9/19/17 2:07 AM:
--

[~nandakumar131]  What we need to do is warn any client attempting to connect 
to KSM but the address matches the default 0.0.0.0 address. With a single node 
deployment, the code will work since clients will be able to find the KSM even 
if this address is not explicitly configured. 

However, if the data node or the client is on a different machine then it stops 
working. So the same config file that works on a single node deployment will 
not work on when we have more than one machine. I think we should log that make 
it very easy for a user to find what the issue is. 

[~cheersyang], [~xyao], [~msingh] comments?



was (Author: anu):
[~nandakumar131] Yes, please let us warn if the KSM is not setup and we are 
going to use the default 0.0.0.0 address.

> Ozone: KSM: Add log message to print host and port used by KSM during startup
> -
>
> Key: HDFS-12466
> URL: https://issues.apache.org/jira/browse/HDFS-12466
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12466-HDFS-7240.000.patch
>
>
> This jira is to add a log message during KSM startup which will log the host 
> and port used by KSM server.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12466) Ozone: KSM: Add log message to print host and port used by KSM during startup

2017-09-18 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170997#comment-16170997
 ] 

Anu Engineer commented on HDFS-12466:
-

[~nandakumar131] Yes, please let us warn if the KSM is not setup and we are 
going to use the default 0.0.0.0 address.

> Ozone: KSM: Add log message to print host and port used by KSM during startup
> -
>
> Key: HDFS-12466
> URL: https://issues.apache.org/jira/browse/HDFS-12466
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12466-HDFS-7240.000.patch
>
>
> This jira is to add a log message during KSM startup which will log the host 
> and port used by KSM server.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12454) Ozone : the sample ozone-site.xml in OzoneGettingStarted does not work

2017-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170946#comment-16170946
 ] 

Hadoop QA commented on HDFS-12454:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
19s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}163m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | hadoop.hdfs.TestFileAppendRestart |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.ozone.container.common.TestDatanodeStateMachine |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | 

[jira] [Commented] (HDFS-11035) Better documentation for maintenace mode and upgrade domain

2017-09-18 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170944#comment-16170944
 ] 

Ming Ma commented on HDFS-11035:


Thanks [~ctrezzo]. I will commit it to trunk, branch-3.0 and branch-2 by EOD 
tomorrow in case [~jojochuang] [~manojg] [~eddyxu] have any additional comments.

> Better documentation for maintenace mode and upgrade domain
> ---
>
> Key: HDFS-11035
> URL: https://issues.apache.org/jira/browse/HDFS-11035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, documentation
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>Assignee: Ming Ma
> Attachments: HDFS-11035-2.patch, HDFS-11035.patch
>
>
> HDFS-7541 added upgrade domain and HDFS-7877 added maintenance mode. Existing 
> documentation about these two features are scarce and the implementation have 
> evolved from the original design doc. Looking at code and Javadoc and I still 
> don't quite get how I can get datanodes into maintenance mode/ set up a 
> upgrade domain.
> File this jira to propose that we write an up-to-date description of these 
> two features.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11035) Better documentation for maintenace mode and upgrade domain

2017-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170931#comment-16170931
 ] 

Hadoop QA commented on HDFS-11035:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 13m 
55s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-11035 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12887772/HDFS-11035-2.patch |
| Optional Tests |  asflicense  mvnsite  xml  |
| uname | Linux e28a02ac314a 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 56ef527 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21204/artifact/patchprocess/branch-mvninstall-root.txt
 |
| modules | C: hadoop-project hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21204/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Better documentation for maintenace mode and upgrade domain
> ---
>
> Key: HDFS-11035
> URL: https://issues.apache.org/jira/browse/HDFS-11035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, documentation
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>Assignee: Ming Ma
> Attachments: HDFS-11035-2.patch, HDFS-11035.patch
>
>
> HDFS-7541 added upgrade domain and HDFS-7877 added maintenance mode. Existing 
> documentation about these two features are scarce and the implementation have 
> evolved from the original design doc. Looking at code and Javadoc and I still 
> don't quite get how I can get datanodes into maintenance mode/ set up a 
> upgrade domain.
> File this jira to propose that we write an up-to-date description of these 
> two features.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11035) Better documentation for maintenace mode and upgrade domain

2017-09-18 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170926#comment-16170926
 ] 

Chris Trezzo commented on HDFS-11035:
-

+1 HDFS-11035-2.patch looks good to me.

I looked into the above failure and it seems unrelated. Apache Hadoop Client 
Packaging Invariants seems to be complaining about all of the duplicate shaded 
classes.

> Better documentation for maintenace mode and upgrade domain
> ---
>
> Key: HDFS-11035
> URL: https://issues.apache.org/jira/browse/HDFS-11035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, documentation
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>Assignee: Ming Ma
> Attachments: HDFS-11035-2.patch, HDFS-11035.patch
>
>
> HDFS-7541 added upgrade domain and HDFS-7877 added maintenance mode. Existing 
> documentation about these two features are scarce and the implementation have 
> evolved from the original design doc. Looking at code and Javadoc and I still 
> don't quite get how I can get datanodes into maintenance mode/ set up a 
> upgrade domain.
> File this jira to propose that we write an up-to-date description of these 
> two features.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11035) Better documentation for maintenace mode and upgrade domain

2017-09-18 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170926#comment-16170926
 ] 

Chris Trezzo edited comment on HDFS-11035 at 9/19/17 12:05 AM:
---

+1 HDFS-11035-2.patch looks good to me. Thanks [~mingma]!

I looked into the above failure and it seems unrelated. Apache Hadoop Client 
Packaging Invariants seems to be complaining about all of the duplicate shaded 
classes.


was (Author: ctrezzo):
+1 HDFS-11035-2.patch looks good to me.

I looked into the above failure and it seems unrelated. Apache Hadoop Client 
Packaging Invariants seems to be complaining about all of the duplicate shaded 
classes.

> Better documentation for maintenace mode and upgrade domain
> ---
>
> Key: HDFS-11035
> URL: https://issues.apache.org/jira/browse/HDFS-11035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, documentation
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>Assignee: Ming Ma
> Attachments: HDFS-11035-2.patch, HDFS-11035.patch
>
>
> HDFS-7541 added upgrade domain and HDFS-7877 added maintenance mode. Existing 
> documentation about these two features are scarce and the implementation have 
> evolved from the original design doc. Looking at code and Javadoc and I still 
> don't quite get how I can get datanodes into maintenance mode/ set up a 
> upgrade domain.
> File this jira to propose that we write an up-to-date description of these 
> two features.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12486) GetConf to get journalnodeslist

2017-09-18 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDFS-12486:
-

Assignee: Bharat Viswanadham

> GetConf to get journalnodeslist
> ---
>
> Key: HDFS-12486
> URL: https://issues.apache.org/jira/browse/HDFS-12486
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> GetConf command to list journal nodes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12486) GetConf to get journalnodeslist

2017-09-18 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDFS-12486:
-

 Summary: GetConf to get journalnodeslist
 Key: HDFS-12486
 URL: https://issues.apache.org/jira/browse/HDFS-12486
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Bharat Viswanadham


GetConf command to list journal nodes.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12480) TestNameNodeMetrics#testTransactionAndCheckpointMetrics Fails in trunk

2017-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170921#comment-16170921
 ] 

Hadoop QA commented on HDFS-12480:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}299m 
22s{color} | {color:red} Docker failed to build yetus/hadoop:tp-19353. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12480 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12887717/HDFS-12480.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21198/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestNameNodeMetrics#testTransactionAndCheckpointMetrics Fails in trunk
> --
>
> Key: HDFS-12480
> URL: https://issues.apache.org/jira/browse/HDFS-12480
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Hanisha Koneru
> Attachments: HDFS-12480.001.patch
>
>
> {noformat}
> java.lang.AssertionError: Bad value for metric LastWrittenTransactionId 
> expected:<3> but was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.test.MetricsAsserts.assertGauge(MetricsAsserts.java:189)
>   at 
> org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics.testTransactionAndCheckpointMetrics(TestNameNodeMetrics.java:854)
> {noformat}






[jira] [Commented] (HDFS-11035) Better documentation for maintenance mode and upgrade domain

2017-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170879#comment-16170879
 ] 

Hadoop QA commented on HDFS-11035:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 19m  
1s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-11035 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12887772/HDFS-11035-2.patch |
| Optional Tests |  asflicense  mvnsite  xml  |
| uname | Linux a899e11dfd64 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3cf3540 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21203/artifact/patchprocess/branch-mvninstall-root.txt
 |
| modules | C: hadoop-project hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21203/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Better documentation for maintenance mode and upgrade domain
> ---
>
> Key: HDFS-11035
> URL: https://issues.apache.org/jira/browse/HDFS-11035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, documentation
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>Assignee: Ming Ma
> Attachments: HDFS-11035-2.patch, HDFS-11035.patch
>
>
> HDFS-7541 added upgrade domain and HDFS-7877 added maintenance mode. Existing 
> documentation about these two features is scarce, and the implementations have 
> evolved from the original design docs. Looking at the code and Javadoc, I still 
> don't quite get how to put datanodes into maintenance mode or set up an 
> upgrade domain.
> Filing this jira to propose that we write an up-to-date description of these 
> two features.






[jira] [Commented] (HDFS-12017) Ozone: Container: Move IPC port to 98xx range

2017-09-18 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170872#comment-16170872
 ] 

Xiaoyu Yao commented on HDFS-12017:
---

Thanks [~nandakumar131] for working on this. The patch looks good to me. I just 
have a few questions:

1. We have two duplicated references, OzoneConfigKeys#DFS_CONTAINER_IPC_PORT and 
ScmConfigKeys#DFS_CONTAINER_IPC_PORT, to the same key "dfs.container.ipc". Can 
we consolidate them into one? I think we should keep 
OzoneConfigKeys#DFS_CONTAINER_IPC_PORT and remove the usage of 
ScmConfigKeys#DFS_CONTAINER_IPC_PORT if possible.

2. I notice that we also have the Ratis IPC port at 50012; should we change it 
to the 98xx range as well?
{code}
  public static final String DFS_CONTAINER_RATIS_IPC_PORT =
  "dfs.container.ratis.ipc";
  public static final int DFS_CONTAINER_RATIS_IPC_PORT_DEFAULT = 50012;
{code}
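A minimal sketch of what the consolidation could look like, assuming both keys 
collapse into {{OzoneConfigKeys}} and the defaults move into the 98xx range; the 
concrete port numbers below are placeholders, not values settled by this jira:

{code}
// Hypothetical sketch: single definition in OzoneConfigKeys; ScmConfigKeys
// callers would reference these constants instead of keeping their own copy.
public final class OzoneConfigKeys {
  public static final String DFS_CONTAINER_IPC_PORT =
      "dfs.container.ipc";
  public static final int DFS_CONTAINER_IPC_PORT_DEFAULT = 9859;        // was 50011

  public static final String DFS_CONTAINER_RATIS_IPC_PORT =
      "dfs.container.ratis.ipc";
  public static final int DFS_CONTAINER_RATIS_IPC_PORT_DEFAULT = 9858;  // was 50012

  private OzoneConfigKeys() { }  // constants holder, never instantiated
}
{code}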

> Ozone: Container: Move IPC port to 98xx range
> -
>
> Key: HDFS-12017
> URL: https://issues.apache.org/jira/browse/HDFS-12017
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Nandakumar
>  Labels: ozoneMerge
> Attachments: HDFS-12017-HDFS-7240.000.patch
>
>
> We use port 50011 -- this is the old style of choosing ports. Hadoop 3 has 
> already moved to port ranges under 98xx. In fact, all of KSM/SCM and CBlock 
> are using the 98xx range. We should move 50011 to a port under 98xx.






[jira] [Commented] (HDFS-12371) "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX

2017-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170871#comment-16170871
 ] 

Hadoop QA commented on HDFS-12371:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 57s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}127m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestReplication |
|   | hadoop.hdfs.TestReconstructStripedFile |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12371 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12887747/HDFS-12371.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9f0b7d369651 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 29dd551 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21200/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21200/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21200/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> "BlockVerificationFailures" and "BlocksVerified" show up as 0 in 

[jira] [Commented] (HDFS-12340) Ozone: C/C++ implementation of ozone client using curl

2017-09-18 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170863#comment-16170863
 ] 

Anu Engineer commented on HDFS-12340:
-

Postponing this patch. Removing it from the patch queue and setting it back to 
Open so it drops off the filters.

> Ozone: C/C++ implementation of ozone client using curl
> --
>
> Key: HDFS-12340
> URL: https://issues.apache.org/jira/browse/HDFS-12340
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>  Labels: OzonePostMerge
> Fix For: HDFS-7240
>
> Attachments: HDFS-12340-HDFS-7240.001.patch, 
> HDFS-12340-HDFS-7240.002.patch, main.C, ozoneClient.C, ozoneClient.h
>
>
> This jira introduces an implementation of the ozone client in C/C++ using the 
> curl library.
> All of these calls will use the HTTP protocol and require libcurl. The libcurl 
> APIs are referenced here:
> https://curl.haxx.se/libcurl/
> Additional details will be posted along with the patches.






[jira] [Updated] (HDFS-12340) Ozone: C/C++ implementation of ozone client using curl

2017-09-18 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12340:

Status: Open  (was: Patch Available)

> Ozone: C/C++ implementation of ozone client using curl
> --
>
> Key: HDFS-12340
> URL: https://issues.apache.org/jira/browse/HDFS-12340
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>  Labels: OzonePostMerge
> Fix For: HDFS-7240
>
> Attachments: HDFS-12340-HDFS-7240.001.patch, 
> HDFS-12340-HDFS-7240.002.patch, main.C, ozoneClient.C, ozoneClient.h
>
>
> This jira introduces an implementation of the ozone client in C/C++ using the 
> curl library.
> All of these calls will use the HTTP protocol and require libcurl. The libcurl 
> APIs are referenced here:
> https://curl.haxx.se/libcurl/
> Additional details will be posted along with the patches.






[jira] [Commented] (HDFS-12480) TestNameNodeMetrics#testTransactionAndCheckpointMetrics Fails in trunk

2017-09-18 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170862#comment-16170862
 ] 

Chen Liang commented on HDFS-12480:
---

Thanks [~hkoneru] for taking care of this. I tested the patch locally with 10 
runs, and the test passed every time. The patch LGTM, pending Jenkins.
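For context, the usual fix for this kind of off-by-one flake is to assert the 
metric relative to a value read at the start of the test rather than against a 
hard-coded count. A minimal sketch, assuming the test can run a known workload 
between the two reads; {{NN_METRICS}} is the metrics source name the test 
already uses, and the workload call is a placeholder, not the actual patch:

{code}
import static org.apache.hadoop.test.MetricsAsserts.assertGauge;
import static org.apache.hadoop.test.MetricsAsserts.getLongGauge;
import static org.apache.hadoop.test.MetricsAsserts.getMetrics;

// Read the gauge before the workload so earlier transactions in the same
// test run cannot shift the expectation.
long base = getLongGauge("LastWrittenTransactionId", getMetrics(NN_METRICS));

doOneTransaction();  // placeholder: an operation known to write one edit

assertGauge("LastWrittenTransactionId", base + 1, getMetrics(NN_METRICS));
{code}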

> TestNameNodeMetrics#testTransactionAndCheckpointMetrics Fails in trunk
> --
>
> Key: HDFS-12480
> URL: https://issues.apache.org/jira/browse/HDFS-12480
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Hanisha Koneru
> Attachments: HDFS-12480.001.patch
>
>
> {noformat}
> java.lang.AssertionError: Bad value for metric LastWrittenTransactionId 
> expected:<3> but was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.test.MetricsAsserts.assertGauge(MetricsAsserts.java:189)
>   at 
> org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics.testTransactionAndCheckpointMetrics(TestNameNodeMetrics.java:854)
> {noformat}






[jira] [Updated] (HDFS-12484) hdfs dfs -expunge requires superuser permission after 2.8

2017-09-18 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12484:
---
Hadoop Flags: Incompatible change

> hdfs dfs -expunge requires superuser permission after 2.8
> -
>
> Key: HDFS-12484
> URL: https://issues.apache.org/jira/browse/HDFS-12484
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> Hadoop 2.8 added a feature to support trash inside encryption zones.
> However, it breaks the existing -expunge semantics, because now a user must 
> have superuser permission in order to -expunge. The reason is that -expunge 
> gets all encryption zone paths using DFSClient#listEncryptionZones, which 
> requires superuser permission.
> Not sure what the best way to address this is, so filing this jira to invite 
> comments.






[jira] [Updated] (HDFS-12484) hdfs dfs -expunge requires superuser permission after 2.8

2017-09-18 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12484:
---
Description: 
Hadoop 2.8 added a feature to support trash inside encryption zones, which is a 
great feature to have.

However, it breaks the existing -expunge semantics, because now a user must have 
superuser permission in order to -expunge. The reason is that -expunge gets all 
encryption zone paths using DFSClient#listEncryptionZones, which requires 
superuser permission.

Not sure what the best way to address this is, so filing this jira to invite 
comments.

  was:
Hadoop 2.8 added a feature to support trash inside encryption zones.

However, it breaks the existing -expunge semantics, because now a user must have 
superuser permission in order to -expunge. The reason is that -expunge gets all 
encryption zone paths using DFSClient#listEncryptionZones, which requires 
superuser permission.

Not sure what the best way to address this is, so filing this jira to invite 
comments.


> hdfs dfs -expunge requires superuser permission after 2.8
> -
>
> Key: HDFS-12484
> URL: https://issues.apache.org/jira/browse/HDFS-12484
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> Hadoop 2.8 added a feature to support trash inside encryption zones, which is 
> a great feature to have.
> However, it breaks the existing -expunge semantics, because now a user must 
> have superuser permission in order to -expunge. The reason is that -expunge 
> gets all encryption zone paths using DFSClient#listEncryptionZones, which 
> requires superuser permission.
> Not sure what the best way to address this is, so filing this jira to invite 
> comments.






[jira] [Updated] (HDFS-12485) expunge may not remove trash from encryption zone

2017-09-18 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12485:
---
Description: 
This is related to HDFS-12484, but it turns out that even with superuser 
permission, -expunge may not remove trash either.

If I log into Linux as root and then log in as the superuser h...@example.com:
{noformat}
[root@nightly511-1 ~]# hdfs dfs -rm /scale/b
17/09/18 15:21:32 INFO fs.TrashPolicyDefault: Moved: 'hdfs://ns1/scale/b' to 
trash at: hdfs://ns1/scale/.Trash/hdfs/Current/scale/b
[root@nightly511-1 ~]# hdfs dfs -expunge
17/09/18 15:21:59 INFO fs.TrashPolicyDefault: 
TrashPolicyDefault#deleteCheckpoint for trashRoot: hdfs://ns1/user/hdfs/.Trash
17/09/18 15:21:59 INFO fs.TrashPolicyDefault: 
TrashPolicyDefault#deleteCheckpoint for trashRoot: hdfs://ns1/user/hdfs/.Trash
17/09/18 15:21:59 INFO fs.TrashPolicyDefault: Deleted trash checkpoint: 
/user/hdfs/.Trash/170918143916
17/09/18 15:21:59 INFO fs.TrashPolicyDefault: 
TrashPolicyDefault#createCheckpoint for trashRoot: hdfs://ns1/user/hdfs/.Trash
[root@nightly511-1 ~]# hdfs dfs -ls hdfs://ns1/scale/.Trash/hdfs/Current/scale/b
-rw-r--r--   3 hdfs systest  0 2017-09-18 15:21 
hdfs://ns1/scale/.Trash/hdfs/Current/scale/b
{noformat}

expunge does not remove trash under /scale, because it does not know that I am 
the 'hdfs' user.

{code:title=DistributedFileSystem#getTrashRoots}
Path ezTrashRoot = new Path(it.next().getPath(),
FileSystem.TRASH_PREFIX);
if (!exists(ezTrashRoot)) {
  continue;
}
if (allUsers) {
  for (FileStatus candidate : listStatus(ezTrashRoot)) {
if (exists(candidate.getPath())) {
  ret.add(candidate);
}
  }
} else {
  Path userTrash = new Path(ezTrashRoot, System.getProperty(
  "user.name")); --> bug
  try {
ret.add(getFileStatus(userTrash));
  } catch (FileNotFoundException ignored) {
  }
}
{code}

It should use the UGI user name rather than the system login user name.

  was:
If I log into Linux as root and then log in as the superuser h...@example.com:
{noformat}
[root@nightly511-1 ~]# hdfs dfs -rm /scale/b
17/09/18 15:21:32 INFO fs.TrashPolicyDefault: Moved: 'hdfs://ns1/scale/b' to 
trash at: hdfs://ns1/scale/.Trash/hdfs/Current/scale/b
[root@nightly511-1 ~]# hdfs dfs -expunge
17/09/18 15:21:59 INFO fs.TrashPolicyDefault: 
TrashPolicyDefault#deleteCheckpoint for trashRoot: hdfs://ns1/user/hdfs/.Trash
17/09/18 15:21:59 INFO fs.TrashPolicyDefault: 
TrashPolicyDefault#deleteCheckpoint for trashRoot: hdfs://ns1/user/hdfs/.Trash
17/09/18 15:21:59 INFO fs.TrashPolicyDefault: Deleted trash checkpoint: 
/user/hdfs/.Trash/170918143916
17/09/18 15:21:59 INFO fs.TrashPolicyDefault: 
TrashPolicyDefault#createCheckpoint for trashRoot: hdfs://ns1/user/hdfs/.Trash
[root@nightly511-1 ~]# hdfs dfs -ls hdfs://ns1/scale/.Trash/hdfs/Current/scale/b
-rw-r--r--   3 hdfs systest  0 2017-09-18 15:21 
hdfs://ns1/scale/.Trash/hdfs/Current/scale/b
{noformat}

expunge does not remove trash under /scale, because it does not know that I am 
the 'hdfs' user.

{code:title=DistributedFileSystem#getTrashRoots}
Path ezTrashRoot = new Path(it.next().getPath(),
FileSystem.TRASH_PREFIX);
if (!exists(ezTrashRoot)) {
  continue;
}
if (allUsers) {
  for (FileStatus candidate : listStatus(ezTrashRoot)) {
if (exists(candidate.getPath())) {
  ret.add(candidate);
}
  }
} else {
  Path userTrash = new Path(ezTrashRoot, System.getProperty(
  "user.name")); --> bug
  try {
ret.add(getFileStatus(userTrash));
  } catch (FileNotFoundException ignored) {
  }
}
{code}

It should use the UGI user name rather than the system login user name.


> expunge may not remove trash from encryption zone
> -
>
> Key: HDFS-12485
> URL: https://issues.apache.org/jira/browse/HDFS-12485
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> This is related to HDFS-12484, but turns out that even if I have super user 
> permission, -expunge may not remove trash either.
> If I log into Linux as root, and then login as the superuser h...@example.com
> {noformat}
> [root@nightly511-1 ~]# hdfs dfs -rm /scale/b
> 17/09/18 15:21:32 INFO fs.TrashPolicyDefault: Moved: 'hdfs://ns1/scale/b' to 
> trash at: hdfs://ns1/scale/.Trash/hdfs/Current/scale/b
> [root@nightly511-1 ~]# hdfs dfs -expunge
> 17/09/18 15:21:59 INFO fs.TrashPolicyDefault: 
> TrashPolicyDefault#deleteCheckpoint for trashRoot: hdfs://ns1/user/hdfs/.Trash
> 17/09/18 15:21:59 INFO 

[jira] [Updated] (HDFS-12485) expunge may not remove trash from encryption zone

2017-09-18 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12485:
---
Summary: expunge may not remove trash from encryption zone  (was: expunge 
may not remove trash from non-home directory encryption zone)

> expunge may not remove trash from encryption zone
> -
>
> Key: HDFS-12485
> URL: https://issues.apache.org/jira/browse/HDFS-12485
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> If I log into Linux as root and then log in as the superuser h...@example.com:
> {noformat}
> [root@nightly511-1 ~]# hdfs dfs -rm /scale/b
> 17/09/18 15:21:32 INFO fs.TrashPolicyDefault: Moved: 'hdfs://ns1/scale/b' to 
> trash at: hdfs://ns1/scale/.Trash/hdfs/Current/scale/b
> [root@nightly511-1 ~]# hdfs dfs -expunge
> 17/09/18 15:21:59 INFO fs.TrashPolicyDefault: 
> TrashPolicyDefault#deleteCheckpoint for trashRoot: hdfs://ns1/user/hdfs/.Trash
> 17/09/18 15:21:59 INFO fs.TrashPolicyDefault: 
> TrashPolicyDefault#deleteCheckpoint for trashRoot: hdfs://ns1/user/hdfs/.Trash
> 17/09/18 15:21:59 INFO fs.TrashPolicyDefault: Deleted trash checkpoint: 
> /user/hdfs/.Trash/170918143916
> 17/09/18 15:21:59 INFO fs.TrashPolicyDefault: 
> TrashPolicyDefault#createCheckpoint for trashRoot: hdfs://ns1/user/hdfs/.Trash
> [root@nightly511-1 ~]# hdfs dfs -ls 
> hdfs://ns1/scale/.Trash/hdfs/Current/scale/b
> -rw-r--r--   3 hdfs systest  0 2017-09-18 15:21 
> hdfs://ns1/scale/.Trash/hdfs/Current/scale/b
> {noformat}
> expunge does not remove trash under /scale, because it does not know that I am 
> the 'hdfs' user.
> {code:title=DistributedFileSystem#getTrashRoots}
> Path ezTrashRoot = new Path(it.next().getPath(),
> FileSystem.TRASH_PREFIX);
> if (!exists(ezTrashRoot)) {
>   continue;
> }
> if (allUsers) {
>   for (FileStatus candidate : listStatus(ezTrashRoot)) {
> if (exists(candidate.getPath())) {
>   ret.add(candidate);
> }
>   }
> } else {
>   Path userTrash = new Path(ezTrashRoot, System.getProperty(
>   "user.name")); --> bug
>   try {
> ret.add(getFileStatus(userTrash));
>   } catch (FileNotFoundException ignored) {
>   }
> }
> {code}
> It should use the UGI user name rather than the system login user name.






[jira] [Created] (HDFS-12485) expunge may not remove trash from non-home directory encryption zone

2017-09-18 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-12485:
--

 Summary: expunge may not remove trash from non-home directory 
encryption zone
 Key: HDFS-12485
 URL: https://issues.apache.org/jira/browse/HDFS-12485
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0-alpha1, 2.8.0
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


If I log into Linux as root and then log in as the superuser h...@example.com:
{noformat}
[root@nightly511-1 ~]# hdfs dfs -rm /scale/b
17/09/18 15:21:32 INFO fs.TrashPolicyDefault: Moved: 'hdfs://ns1/scale/b' to 
trash at: hdfs://ns1/scale/.Trash/hdfs/Current/scale/b
[root@nightly511-1 ~]# hdfs dfs -expunge
17/09/18 15:21:59 INFO fs.TrashPolicyDefault: 
TrashPolicyDefault#deleteCheckpoint for trashRoot: hdfs://ns1/user/hdfs/.Trash
17/09/18 15:21:59 INFO fs.TrashPolicyDefault: 
TrashPolicyDefault#deleteCheckpoint for trashRoot: hdfs://ns1/user/hdfs/.Trash
17/09/18 15:21:59 INFO fs.TrashPolicyDefault: Deleted trash checkpoint: 
/user/hdfs/.Trash/170918143916
17/09/18 15:21:59 INFO fs.TrashPolicyDefault: 
TrashPolicyDefault#createCheckpoint for trashRoot: hdfs://ns1/user/hdfs/.Trash
[root@nightly511-1 ~]# hdfs dfs -ls hdfs://ns1/scale/.Trash/hdfs/Current/scale/b
-rw-r--r--   3 hdfs systest  0 2017-09-18 15:21 
hdfs://ns1/scale/.Trash/hdfs/Current/scale/b
{noformat}

expunge does not remove trash under /scale, because it does not know that I am 
the 'hdfs' user.

{code:title=DistributedFileSystem#getTrashRoots}
Path ezTrashRoot = new Path(it.next().getPath(),
FileSystem.TRASH_PREFIX);
if (!exists(ezTrashRoot)) {
  continue;
}
if (allUsers) {
  for (FileStatus candidate : listStatus(ezTrashRoot)) {
if (exists(candidate.getPath())) {
  ret.add(candidate);
}
  }
} else {
  Path userTrash = new Path(ezTrashRoot, System.getProperty(
  "user.name")); --> bug
  try {
ret.add(getFileStatus(userTrash));
  } catch (FileNotFoundException ignored) {
  }
}
{code}

It should use the UGI user name rather than the system login user name.
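A minimal sketch of that change in {{DistributedFileSystem#getTrashRoots}}, with 
only the user-name lookup swapped out:

{code}
// Resolve the user via UGI so Kerberos/proxy identities are honored instead
// of the OS login user returned by System.getProperty("user.name").
// (uses org.apache.hadoop.security.UserGroupInformation)
String userName = UserGroupInformation.getCurrentUser().getShortUserName();
Path userTrash = new Path(ezTrashRoot, userName);
try {
  ret.add(getFileStatus(userTrash));
} catch (FileNotFoundException ignored) {
}
{code}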






[jira] [Updated] (HDFS-11035) Better documentation for maintenance mode and upgrade domain

2017-09-18 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-11035:
---
Attachment: HDFS-11035-2.patch

The new patch adds the new docs to site.xml and fixes a couple of nits. Thanks 
[~ctrezzo] for the review.

> Better documentation for maintenance mode and upgrade domain
> ---
>
> Key: HDFS-11035
> URL: https://issues.apache.org/jira/browse/HDFS-11035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, documentation
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>Assignee: Ming Ma
> Attachments: HDFS-11035-2.patch, HDFS-11035.patch
>
>
> HDFS-7541 added upgrade domain and HDFS-7877 added maintenance mode. Existing 
> documentation about these two features is scarce, and the implementations have 
> evolved from the original design docs. Looking at the code and Javadoc, I still 
> don't quite get how to put datanodes into maintenance mode or set up an 
> upgrade domain.
> Filing this jira to propose that we write an up-to-date description of these 
> two features.






[jira] [Created] (HDFS-12484) hdfs dfs -expunge requires superuser permission after 2.8

2017-09-18 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-12484:
--

 Summary: hdfs dfs -expunge requires superuser permission after 2.8
 Key: HDFS-12484
 URL: https://issues.apache.org/jira/browse/HDFS-12484
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.0-alpha1, 2.8.0
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


Hadoop 2.8 added a feature to support trash inside encryption zones.

However, it breaks the existing -expunge semantics, because now a user must have 
superuser permission in order to -expunge. The reason is that -expunge gets all 
encryption zone paths using DFSClient#listEncryptionZones, which requires 
superuser permission.

Not sure what the best way to address this is, so filing this jira to invite 
comments.
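One possible direction, sketched under the assumption that a non-superuser 
should fall back to expunging only their own home trash; {{emptyTrashUnder}} is 
an illustrative placeholder for the existing expunge logic, not a real API:

{code}
// (uses org.apache.hadoop.security.AccessControlException, java.util.Collections)
Collection<FileStatus> trashRoots;
try {
  // May call DFSClient#listEncryptionZones underneath, which needs superuser.
  trashRoots = fs.getTrashRoots(false /* current user only */);
} catch (AccessControlException ace) {
  // Non-superuser: degrade to the home-directory trash instead of failing.
  trashRoots = Collections.singletonList(
      fs.getFileStatus(fs.getTrashRoot(new Path("/"))));
}
for (FileStatus root : trashRoots) {
  emptyTrashUnder(root.getPath());  // placeholder for the existing expunge logic
}
{code}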






[jira] [Updated] (HDFS-12385) Ozone: OzoneClient: Refactoring OzoneClient API

2017-09-18 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12385:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: HDFS-7240
Target Version/s: HDFS-7240
  Status: Resolved  (was: Patch Available)

Thanks [~nandakumar131] for the contribution. I've committed the patch to the 
feature branch.

> Ozone: OzoneClient: Refactoring OzoneClient API
> ---
>
> Key: HDFS-12385
> URL: https://issues.apache.org/jira/browse/HDFS-12385
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
>  Labels: ozoneMerge
> Fix For: HDFS-7240
>
> Attachments: HDFS-12385-HDFS-7240.000.patch, 
> HDFS-12385-HDFS-7240.001.patch, OzoneClient.pdf
>
>
> This jira is for refactoring the {{OzoneClient}} API. [^OzoneClient.pdf] gives 
> an idea of how the API will look.






[jira] [Commented] (HDFS-12385) Ozone: OzoneClient: Refactoring OzoneClient API

2017-09-18 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170807#comment-16170807
 ] 

Xiaoyu Yao commented on HDFS-12385:
---

Thanks [~nandakumar131] for the update. Patch v2 looks good to me, +1. I will 
commit it shortly.

> Ozone: OzoneClient: Refactoring OzoneClient API
> ---
>
> Key: HDFS-12385
> URL: https://issues.apache.org/jira/browse/HDFS-12385
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
>  Labels: ozoneMerge
> Attachments: HDFS-12385-HDFS-7240.000.patch, 
> HDFS-12385-HDFS-7240.001.patch, OzoneClient.pdf
>
>
> This jira is for refactoring the {{OzoneClient}} API. [^OzoneClient.pdf] gives 
> an idea of how the API will look.






[jira] [Comment Edited] (HDFS-12385) Ozone: OzoneClient: Refactoring OzoneClient API

2017-09-18 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170807#comment-16170807
 ] 

Xiaoyu Yao edited comment on HDFS-12385 at 9/18/17 10:13 PM:
-

Thanks [~nandakumar131] for the update. Patch v1 looks good to me, +1. I will 
commit it shortly.


was (Author: xyao):
Thanks [~nandakumar131] for the update. Patch v2 looks good to me, +1. I will 
commit it shortly.

> Ozone: OzoneClient: Refactoring OzoneClient API
> ---
>
> Key: HDFS-12385
> URL: https://issues.apache.org/jira/browse/HDFS-12385
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
>  Labels: ozoneMerge
> Attachments: HDFS-12385-HDFS-7240.000.patch, 
> HDFS-12385-HDFS-7240.001.patch, OzoneClient.pdf
>
>
> This jira is for refactoring the {{OzoneClient}} API. [^OzoneClient.pdf] gives 
> an idea of how the API will look.






[jira] [Updated] (HDFS-12481) Ozone: Corona: Support for variable key length in offline mode

2017-09-18 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12481:

  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: HDFS-7240
Target Version/s: HDFS-7240
  Status: Resolved  (was: Patch Available)

[~nandakumar131] Thank you for the contribution. I have committed the patch to 
the feature branch.

> Ozone: Corona: Support for variable key length in offline mode
> --
>
> Key: HDFS-12481
> URL: https://issues.apache.org/jira/browse/HDFS-12481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-12481-HDFS-7240.000.patch
>
>
> This jira adds support in corona for taking the key length from the user and 
> generating random data of that length to write into ozone.






[jira] [Commented] (HDFS-12481) Ozone: Corona: Support for variable key length in offline mode

2017-09-18 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170778#comment-16170778
 ] 

Anu Engineer commented on HDFS-12481:
-

I will commit this shortly.

> Ozone: Corona: Support for variable key length in offline mode
> --
>
> Key: HDFS-12481
> URL: https://issues.apache.org/jira/browse/HDFS-12481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12481-HDFS-7240.000.patch
>
>
> This jira adds support in corona for taking the key length from the user and 
> generating random data of that length to write into ozone.






[jira] [Commented] (HDFS-12273) Federation UI

2017-09-18 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170772#comment-16170772
 ] 

Ravi Prakash commented on HDFS-12273:
-

I eyeballed the much smaller patch and it seems fine to me. Let's please file 
the JIRA anyway, and we can pore over this and the trunk code together just 
once.

> Federation UI
> -
>
> Key: HDFS-12273
> URL: https://issues.apache.org/jira/browse/HDFS-12273
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Fix For: HDFS-10467
>
> Attachments: federationUI-1.png, federationUI-2.png, 
> federationUI-3.png, HDFS-12273-HDFS-10467-000.patch, 
> HDFS-12273-HDFS-10467-001.patch, HDFS-12273-HDFS-10467-002.patch, 
> HDFS-12273-HDFS-10467-003.patch, HDFS-12273-HDFS-10467-004.patch
>
>
> Add the Web UI to the Router to expose the status of the federated cluster. 
> It includes the federation metrics.






[jira] [Commented] (HDFS-12481) Ozone: Corona: Support for variable key length in offline mode

2017-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170771#comment-16170771
 ] 

Hadoop QA commented on HDFS-12481:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestListFilesInFileContext |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12481 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12887721/HDFS-12481-HDFS-7240.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 84180ae56abf 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 1caf637 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Updated] (HDFS-12454) Ozone : the sample ozone-site.xml in OzoneGettingStarted does not work

2017-09-18 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12454:
--
Attachment: HDFS-12454-HDFS-7240.003.patch

v003 patch to rebase.

> Ozone : the sample ozone-site.xml in OzoneGettingStarted does not work
> --
>
> Key: HDFS-12454
> URL: https://issues.apache.org/jira/browse/HDFS-12454
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Blocker
>  Labels: ozoneMerge
> Attachments: HDFS-12454-HDFS-7240.001.patch, 
> HDFS-12454-HDFS-7240.002.patch, HDFS-12454-HDFS-7240.003.patch
>
>
> In OzoneGettingStarted.md there is a sample ozone-site.xml file. But there 
> are a few issues with it.
> 1.
> {code}
> <property>
>   <name>ozone.scm.block.client.address</name>
>   <value>scm.hadoop.apache.org</value>
> </property>
> <property>
>   <name>ozone.ksm.address</name>
>   <value>ksm.hadoop.apache.org</value>
> </property>
> {code}
> The value should be an address instead.
> 2.
> {{datanode.ObjectStoreHandler.(ObjectStoreHandler.java:103)}} requires 
> {{ozone.scm.client.address}} to be set, which is missing from this sample 
> file. Missing this config seems to cause a failure when starting the datanode.
> 3.
> {code}
> <property>
>   <name>ozone.scm.names</name>
>   <value>scm.hadoop.apache.org</value>
> </property>
> {code}
> This value did not make much sense to me; I found this comment in 
> {{ScmConfigKeys}}:
> {code}
> // ozone.scm.names key is a set of DNS | DNS:PORT | IP Address | IP:PORT.
> // Written as a comma separated string. e.g. scm1, scm2:8020, 7.7.7.7:
> {code}
> So maybe we should write something like scm1 as the value here.
> 4. I'm not entirely sure about this, but 
> [here|https://wiki.apache.org/hadoop/Ozone#Configuration] it says 
> {code}
> <property>
>   <name>ozone.handler.type</name>
>   <value>local</value>
> </property>
> {code}
> is also part of the minimum settings; do we need to add this, [~anu]?
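Putting the four points together, a minimal working ozone-site.xml might look 
like the following; the single-node 127.0.0.1 values and the {{local}} handler 
type are illustrative assumptions, not settled by this jira:

{code}
<configuration>
  <property>
    <name>ozone.scm.names</name>
    <value>127.0.0.1</value>
  </property>
  <property>
    <name>ozone.scm.block.client.address</name>
    <value>127.0.0.1</value>
  </property>
  <property>
    <name>ozone.scm.client.address</name>
    <value>127.0.0.1</value>
  </property>
  <property>
    <name>ozone.ksm.address</name>
    <value>127.0.0.1</value>
  </property>
  <property>
    <name>ozone.handler.type</name>
    <value>local</value>
  </property>
</configuration>
{code}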






[jira] [Commented] (HDFS-12454) Ozone : the sample ozone-site.xml in OzoneGettingStarted does not work

2017-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170757#comment-16170757
 ] 

Hadoop QA commented on HDFS-12454:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 10s{color} 
| {color:red} HDFS-12454 does not apply to HDFS-7240. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12454 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12887755/HDFS-12454-HDFS-7240.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21201/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone : the sample ozone-site.xml in OzoneGettingStarted does not work
> --
>
> Key: HDFS-12454
> URL: https://issues.apache.org/jira/browse/HDFS-12454
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Blocker
>  Labels: ozoneMerge
> Attachments: HDFS-12454-HDFS-7240.001.patch, 
> HDFS-12454-HDFS-7240.002.patch
>
>
> In OzoneGettingStarted.md there is a sample ozone-site.xml file. But there 
> are a few issues with it.
> 1.
> {code}
> <property>
>   <name>ozone.scm.block.client.address</name>
>   <value>scm.hadoop.apache.org</value>
> </property>
> <property>
>   <name>ozone.ksm.address</name>
>   <value>ksm.hadoop.apache.org</value>
> </property>
> {code}
> The value should be an address instead.
> 2.
> {{datanode.ObjectStoreHandler.(ObjectStoreHandler.java:103)}} requires 
> {{ozone.scm.client.address}} to be set, which is missing from this sample 
> file. Missing this config seems to cause a failure when starting the datanode.
> 3.
> {code}
> <property>
>   <name>ozone.scm.names</name>
>   <value>scm.hadoop.apache.org</value>
> </property>
> {code}
> This value did not make much sense to me; I found this comment in 
> {{ScmConfigKeys}}:
> {code}
> // ozone.scm.names key is a set of DNS | DNS:PORT | IP Address | IP:PORT.
> // Written as a comma separated string. e.g. scm1, scm2:8020, 7.7.7.7:
> {code}
> So maybe we should write something like scm1 as the value here.
> 4. I'm not entirely sure about this, but 
> [here|https://wiki.apache.org/hadoop/Ozone#Configuration] it says 
> {code}
> <property>
>   <name>ozone.handler.type</name>
>   <value>local</value>
> </property>
> {code}
> is also part of the minimum settings; do we need to add this, [~anu]?






[jira] [Commented] (HDFS-12473) Change hosts JSON file format

2017-09-18 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170746#comment-16170746
 ] 

Ming Ma commented on HDFS-12473:


Ah, got it. So the assumption that "backward compatibility isn't an issue as 
long as the feature hasn't been officially released" isn't true all the time. 
While it is generally better to keep the code clean without unnecessary 
handling, for this specific issue it seems OK to include backward compatibility 
for an unreleased feature, given that it doesn't complicate the code much. Can 
you check if 4.patch is ready for commit?

> Change hosts JSON file format
> -
>
> Key: HDFS-12473
> URL: https://issues.apache.org/jira/browse/HDFS-12473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-12473-2.patch, HDFS-12473-3.patch, 
> HDFS-12473-4.patch, HDFS-12473-5.patch, HDFS-12473.patch
>
>
> The existing host JSON file format doesn't have a top-level token.
> {noformat}
>   {"hostName": "host1"}
>   {"hostName": "host2", "upgradeDomain": "ud0"}
>   {"hostName": "host3", "adminState": "DECOMMISSIONED"}
>   {"hostName": "host4", "upgradeDomain": "ud2", "adminState": 
> "DECOMMISSIONED"}
>   {"hostName": "host5", "port": 8090}
>   {"hostName": "host6", "adminState": "IN_MAINTENANCE"}
>   {"hostName": "host7", "adminState": "IN_MAINTENANCE", 
> "maintenanceExpireTimeInMS": "112233"}
> {noformat}
> Instead, to conform with the JSON standard it should be like
> {noformat}
> [
>   {"hostName": "host1"},
>   {"hostName": "host2", "upgradeDomain": "ud0"},
>   {"hostName": "host3", "adminState": "DECOMMISSIONED"},
>   {"hostName": "host4", "upgradeDomain": "ud2", "adminState": 
> "DECOMMISSIONED"},
>   {"hostName": "host5", "port": 8090},
>   {"hostName": "host6", "adminState": "IN_MAINTENANCE"},
>   {"hostName": "host7", "adminState": "IN_MAINTENANCE", 
> "maintenanceExpireTimeInMS": "112233"}
> ]
> {noformat}
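For illustration, the array form lets a stock JSON reader parse the whole file 
in one call. A minimal sketch using Jackson; the file name is a placeholder, and 
{{DatanodeAdminProperties}} is the existing class these entries map to:

{code}
import java.io.File;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.hadoop.hdfs.protocol.DatanodeAdminProperties;

public class ReadHostsFile {
  public static void main(String[] args) throws Exception {
    // A top-level JSON array deserializes directly into a Java array, which
    // the old one-object-per-line format could not do with a standard reader.
    ObjectMapper mapper = new ObjectMapper();
    DatanodeAdminProperties[] hosts = mapper.readValue(
        new File("dfs.hosts.json"), DatanodeAdminProperties[].class);
    for (DatanodeAdminProperties host : hosts) {
      System.out.println(host.getHostName() + " -> " + host.getAdminState());
    }
  }
}
{code}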






[jira] [Commented] (HDFS-12420) Disable Namenode format for prod clusters when data already exists

2017-09-18 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170697#comment-16170697
 ] 

Ajay Kumar commented on HDFS-12420:
---

The test cases below are failing irrespective of the patch; the rest of them 
pass locally.
TestNamenodeRetryCache
TestRetryCacheWithHA
TestNameNodeMetrics

> Disable Namenode format for prod clusters when data already exists
> --
>
> Key: HDFS-12420
> URL: https://issues.apache.org/jira/browse/HDFS-12420
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch, 
> HDFS-12420.03.patch, HDFS-12420.04.patch, HDFS-12420.05.patch, 
> HDFS-12420.06.patch, HDFS-12420.07.patch
>
>
> Disable NameNode format to avoid accidentally formatting the NameNode in a 
> production cluster. If someone really wants to delete the complete fsImage, 
> they can first delete the metadata dir and then run {code} hdfs namenode 
> -format{code} manually.
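A minimal sketch of the kind of guard being proposed, assuming it runs at the 
top of the format path; the directory walk and the message wording are 
illustrative, not the actual patch:

{code}
// Hypothetical sketch: refuse to format when any configured namespace
// directory already contains metadata, forcing the manual-delete path.
for (URI dirUri : FSNamesystem.getNamespaceDirs(conf)) {
  File current = new File(new File(dirUri.getPath()), "current");
  String[] contents = current.list();
  if (contents != null && contents.length > 0) {
    throw new IOException("NameNode metadata already exists in " + current
        + "; refusing to format. Delete the metadata directory first if you"
        + " really intend to reformat this cluster.");
  }
}
{code}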






[jira] [Created] (HDFS-12482) Provide a configuration to adjust the weight of EC recovery tasks to adjust the speed of recovery

2017-09-18 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-12482:


 Summary: Provide a configuration to adjust the weight of EC 
recovery tasks to adjust the speed of recovery
 Key: HDFS-12482
 URL: https://issues.apache.org/jira/browse/HDFS-12482
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: erasure-coding
Affects Versions: 3.0.0-alpha4
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor


The relative speed of EC recovery compared to 3x replica recovery is a function 
of the EC codec, the number of sources, NIC speed, CPU speed, etc.

Currently EC recovery has a fixed {{xmitsInProgress}} of {{max(# of sources, # 
of targets)}}, compared to {{1}} for 3x replica recovery, and the NN uses 
{{xmitsInProgress}} to decide how many recovery tasks to schedule to a DataNode. 
Thus we can add a coefficient for users to tune the weight of EC recovery tasks.
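A minimal sketch of such a coefficient, assuming it is read from DataNode 
configuration; the key name and default below are placeholders, not values this 
jira has settled on:

{code}
// Hypothetical sketch: scale the xmits charged per EC reconstruction task.
public static final String DFS_DN_EC_RECONSTRUCTION_XMITS_WEIGHT_KEY =
    "dfs.datanode.ec.reconstruction.xmits.weight";   // placeholder key name
public static final float DFS_DN_EC_RECONSTRUCTION_XMITS_WEIGHT_DEFAULT = 0.5f;

// In the reconstruction task, replacing the fixed max(# sources, # targets):
float weight = conf.getFloat(DFS_DN_EC_RECONSTRUCTION_XMITS_WEIGHT_KEY,
    DFS_DN_EC_RECONSTRUCTION_XMITS_WEIGHT_DEFAULT);
int xmits = Math.max(1, (int) (weight * Math.max(numSources, numTargets)));
{code}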






[jira] [Created] (HDFS-12483) Provide a configuration to adjust the weight of EC recovery tasks to adjust the speed of recovery

2017-09-18 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-12483:


 Summary: Provide a configuration to adjust the weight of EC 
recovery tasks to adjust the speed of recovery
 Key: HDFS-12483
 URL: https://issues.apache.org/jira/browse/HDFS-12483
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: erasure-coding
Affects Versions: 3.0.0-alpha4
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor


The relative speed of EC recovery compared to 3x replica recovery is a function 
of the EC codec, the number of sources, NIC speed, CPU speed, etc.

Currently EC recovery has a fixed {{xmitsInProgress}} of {{max(# of sources, # 
of targets)}}, compared to {{1}} for 3x replica recovery, and the NN uses 
{{xmitsInProgress}} to decide how many recovery tasks to schedule to a DataNode. 
Thus we can add a coefficient for users to tune the weight of EC recovery tasks.






[jira] [Updated] (HDFS-12371) "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX

2017-09-18 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-12371:
--
Attachment: HDFS-12371.002.patch

Thanks [~kihwal] for reviewing the patch. I have updated the patch so that it 
does not increment the blockVerificationFailure count if the block is null.
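A minimal sketch of that guard, with method and metric names in the style of the 
DataNode scanner but illustrative rather than taken from the patch:

{code}
// A null block is skipped entirely rather than counted as a failure.
if (block == null) {
  LOG.warn("Skipping verification: block is null");
  return;
}
try {
  verifyBlock(block);                       // placeholder for the scan itself
  metrics.incrBlocksVerified();             // feeds BlocksVerified
} catch (IOException e) {
  metrics.incrBlockVerificationFailures();  // feeds BlockVerificationFailures
}
{code}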

> "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX
> -
>
> Key: HDFS-12371
> URL: https://issues.apache.org/jira/browse/HDFS-12371
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Sai Nukavarapu
>Assignee: Hanisha Koneru
> Attachments: HDFS-12371.001.patch, HDFS-12371.002.patch
>
>
> "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX
> Looking at the code, I see the description below:
> {noformat}
> `BlockVerificationFailures` | Total number of verifications failures | 
> `BlocksVerified` | Total number of blocks verified | 
> {noformat}






[jira] [Commented] (HDFS-12473) Change hosts JSON file format

2017-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170652#comment-16170652
 ] 

Hadoop QA commented on HDFS-12473:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 2 
unchanged - 5 fixed = 2 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
10s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication |
|   | hadoop.hdfs.TestWriteReadStripedFile |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12473 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12887716/HDFS-12473-5.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ca20639166d8 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 29dd551 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21197/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| 

[jira] [Updated] (HDFS-12323) NameNode terminates after full GC thinking QJM unresponsive if full GC is much longer than timeout

2017-09-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12323:
---
Fix Version/s: 3.0.0-beta1

> NameNode terminates after full GC thinking QJM unresponsive if full GC is 
> much longer than timeout
> --
>
> Key: HDFS-12323
> URL: https://issues.apache.org/jira/browse/HDFS-12323
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, qjm
>Affects Versions: 2.7.4
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.3, 2.7.5, 3.1.0
>
> Attachments: HDFS-12323.000.patch, HDFS-12323.001.patch, 
> HDFS-12323.002.patch, HDFS-12323.003.patch, HDFS-12323.004.patch
>
>
> HDFS-10733 attempted to fix the issue where the Namenode process would 
> terminate itself if it had a GC pause which lasted longer than the QJM 
> timeout, since it would think that the QJM had taken too long to respond. 
> However, it only bumps up the timeout expiration by one timeout length, so if 
> the GC pause was e.g. 2x the length of the timeout, a TimeoutException will 
> be thrown and the NN will still terminate itself.
> Thanks to [~yangjiandan] for noting this issue as a comment on HDFS-10733; we 
> have also seen this issue on a real cluster even after HDFS-10733 is applied.
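
A toy illustration of the arithmetic in the report (not the actual QJM code; 
the timeout and pause values are made up):
{code}
public class QjmTimeoutSketch {
  public static void main(String[] args) {
    long timeoutMs = 20_000;            // assumed QJM write timeout
    long start = 0;
    long deadline = start + timeoutMs;  // original expiration
    long gcPauseMs = 45_000;            // a full GC of 2.25x the timeout
    long now = start + gcPauseMs;       // first check after the pause
    deadline += timeoutMs;              // the single bump from HDFS-10733
    // Still expired, so the NN terminates itself despite the bump.
    System.out.println("expired after bump? " + (now > deadline)); // true
  }
}
{code}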






[jira] [Commented] (HDFS-12395) Support erasure coding policy operations in namenode edit log

2017-09-18 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170628#comment-16170628
 ] 

Kihwal Lee commented on HDFS-12395:
---

bq.  since we don't need to support upgrade from alpha->beta
In that case, it should be fine. I was thinking it needed to be bumped up since 
we already had a release with EC.

> Support erasure coding policy operations in namenode edit log
> -
>
> Key: HDFS-12395
> URL: https://issues.apache.org/jira/browse/HDFS-12395
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-beta1
>
> Attachments: editsStored, HDFS-12395.001.patch, HDFS-12395.002.patch, 
> HDFS-12395.003.patch, HDFS-12395.004.patch
>
>
> Support add, remove, disable, enable erasure coding policy operation in edit 
> log. 






[jira] [Commented] (HDFS-12395) Support erasure coding policy operations in namenode edit log

2017-09-18 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170622#comment-16170622
 ] 

Andrew Wang commented on HDFS-12395:


[~kihwal] EC added LV -64 when the branch was merged. Since we've been in alpha 
up to now, the thinking was that we can piggyback on the existing EC layout 
version since we don't need to support upgrade from alpha->beta.
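
For readers unfamiliar with layout versions, a simplified illustration of the 
"piggyback" idea; this mirrors the shape of {{NameNodeLayoutVersion.Feature}} 
but is not the actual Hadoop enum:
{code}
enum FeatureSketch {
  // LV -64 was added when the EC branch merged; the new EC edit-log ops
  // reuse this entry instead of introducing a -65.
  ERASURE_CODING(-64);

  final int layoutVersion;
  FeatureSketch(int layoutVersion) { this.layoutVersion = layoutVersion; }
}
{code}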

> Support erasure coding policy operations in namenode edit log
> -
>
> Key: HDFS-12395
> URL: https://issues.apache.org/jira/browse/HDFS-12395
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-beta1
>
> Attachments: editsStored, HDFS-12395.001.patch, HDFS-12395.002.patch, 
> HDFS-12395.003.patch, HDFS-12395.004.patch
>
>
> Support add, remove, disable, enable erasure coding policy operation in edit 
> log. 






[jira] [Comment Edited] (HDFS-12395) Support erasure coding policy operations in namenode edit log

2017-09-18 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170622#comment-16170622
 ] 

Andrew Wang edited comment on HDFS-12395 at 9/18/17 8:22 PM:
-

[~kihwal] EC added LV \-64 when the branch was merged. Since we've been in 
alpha up to now, the thinking was that we can piggyback on the existing EC 
layout version since we don't need to support upgrade from alpha->beta.


was (Author: andrew.wang):
[~kihwal] EC added LV -64 when the branch was merged. Since we've been in alpha 
up to now, the thinking was that we can piggyback on the existing EC layout 
version since we don't need to support upgrade from alpha->beta.

> Support erasure coding policy operations in namenode edit log
> -
>
> Key: HDFS-12395
> URL: https://issues.apache.org/jira/browse/HDFS-12395
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-beta1
>
> Attachments: editsStored, HDFS-12395.001.patch, HDFS-12395.002.patch, 
> HDFS-12395.003.patch, HDFS-12395.004.patch
>
>
> Support add, remove, disable, enable erasure coding policy operation in edit 
> log. 






[jira] [Assigned] (HDFS-12110) libhdfs++: Rebase 8707 branch onto an up to date version of trunk

2017-09-18 Thread Deepak Majeti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Majeti reassigned HDFS-12110:


Assignee: Deepak Majeti  (was: James Clampffer)

> libhdfs++: Rebase 8707 branch onto an up to date version of trunk
> -
>
> Key: HDFS-12110
> URL: https://issues.apache.org/jira/browse/HDFS-12110
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Deepak Majeti
>
> It's been way too long since this has been done and it's time to start 
> knocking down blockers for merging into trunk.  Can most likely just 
> copy/paste the libhdfs++ directory into a newer version of master.  Want to 
> track it in a jira since it's likely to cause conflicts when pulling the 
> updated branch for the first time.






[jira] [Commented] (HDFS-12395) Support erasure coding policy operations in namenode edit log

2017-09-18 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170598#comment-16170598
 ] 

Kihwal Lee commented on HDFS-12395:
---

You need to increment the NN layout version.  

> Support erasure coding policy operations in namenode edit log
> -
>
> Key: HDFS-12395
> URL: https://issues.apache.org/jira/browse/HDFS-12395
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-beta1
>
> Attachments: editsStored, HDFS-12395.001.patch, HDFS-12395.002.patch, 
> HDFS-12395.003.patch, HDFS-12395.004.patch
>
>
> Support add, remove, disable, enable erasure coding policy operation in edit 
> log. 






[jira] [Commented] (HDFS-12371) "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX

2017-09-18 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170581#comment-16170581
 ] 

Kihwal Lee commented on HDFS-12371:
---

Test failures are not related. They either pass when run locally or are known 
issues; e.g., HDFS-12480 is addressing TestNameNodeMetrics.

The block verification failure count was intended for actual checksum errors. I 
don't think we need to increment the failure count here.  Other than that, the 
patch looks good.
{code}
if (block == null) {
  metrics.incrBlockVerificationFailures();
  return -1; // block not found.
}
{code}
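
A minimal sketch of the suggested direction (illustrative, not the committed 
patch): treat a missing block as "not found" without counting it as a 
checksum failure.
{code}
if (block == null) {
  return -1; // block not found; not a verification failure
}
{code}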

> "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX
> -
>
> Key: HDFS-12371
> URL: https://issues.apache.org/jira/browse/HDFS-12371
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Sai Nukavarapu
>Assignee: Hanisha Koneru
> Attachments: HDFS-12371.001.patch
>
>
> "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX
> Looking at the code, i see below description.
> {noformat}
> `BlockVerificationFailures` | Total number of verifications failures | 
> `BlocksVerified` | Total number of blocks verified | 
> {noformat}






[jira] [Commented] (HDFS-12481) Ozone: Corona: Support for variable key length in offline mode

2017-09-18 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170541#comment-16170541
 ] 

Anu Engineer commented on HDFS-12481:
-

+1, pending Jenkins. I have tested this in a cluster and verified this is 
working.

> Ozone: Corona: Support for variable key length in offline mode
> --
>
> Key: HDFS-12481
> URL: https://issues.apache.org/jira/browse/HDFS-12481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12481-HDFS-7240.000.patch
>
>
> This jira adds support in corona for taking the key length from the user and 
> generating random data of that length to write into ozone.
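
A toy sketch of the behavior described, assuming only that the key size is 
user-supplied; names are illustrative, not corona's actual options:
{code}
import java.util.Random;

public class RandomKeyPayloadSketch {
  static byte[] payloadOf(int keySizeBytes) {
    byte[] data = new byte[keySizeBytes];
    new Random().nextBytes(data); // random payload of the requested length
    return data;
  }

  public static void main(String[] args) {
    System.out.println(payloadOf(10240).length); // e.g. a 10 KB value
  }
}
{code}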






[jira] [Commented] (HDFS-12320) Add quantiles for transactions batched in Journal sync

2017-09-18 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170536#comment-16170536
 ] 

Hanisha Koneru commented on HDFS-12320:
---

The test failures look unrelated and pass locally for me.

> Add  quantiles for transactions batched in Journal sync
> ---
>
> Key: HDFS-12320
> URL: https://issues.apache.org/jira/browse/HDFS-12320
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: metrics, namenode
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-12320.001.patch
>
>
> We currently track the overall count of the transactions which were batched 
> during journal sync through the metric _TransactionsBatchedInSync_. It would 
> be useful to have a quantile to measure the transactions batched together 
> over a period. This would give a better understanding of the distribution of 
> the batching. 
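
A hedged sketch of how such a quantile is usually wired with Hadoop metrics2 
{{MutableQuantiles}}; the metric names and intervals below are illustrative:
{code}
import org.apache.hadoop.metrics2.lib.MetricsRegistry;
import org.apache.hadoop.metrics2.lib.MutableQuantiles;

public class JournalSyncQuantilesSketch {
  private final MetricsRegistry registry = new MetricsRegistry("NameNodeActivity");
  private final MutableQuantiles[] txnsBatchedQuantiles;

  JournalSyncQuantilesSketch(int[] intervalsSecs) {
    txnsBatchedQuantiles = new MutableQuantiles[intervalsSecs.length];
    for (int i = 0; i < intervalsSecs.length; i++) {
      // One sampler per rollover interval, e.g. 60s, 300s, 3600s.
      txnsBatchedQuantiles[i] = registry.newQuantiles(
          "transactionsBatchedInSync" + intervalsSecs[i] + "s",
          "Transactions batched in journal sync", "ops", "count",
          intervalsSecs[i]);
    }
  }

  void recordBatch(long numBatched) {
    for (MutableQuantiles q : txnsBatchedQuantiles) {
      q.add(numBatched); // sample one sync's batch size
    }
  }
}
{code}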






[jira] [Updated] (HDFS-11873) Ozone: Object store handler supports reusing http client for multiple requests.

2017-09-18 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11873:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

Thanks all for the discussion and reviews. I've committed the fix to the feature 
branch. 

> Ozone: Object store handler supports reusing http client for multiple 
> requests.
> ---
>
> Key: HDFS-11873
> URL: https://issues.apache.org/jira/browse/HDFS-11873
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Xiaoyu Yao
>Priority: Critical
>  Labels: ozoneMerge
> Fix For: HDFS-7240
>
> Attachments: HDFS-11873-HDFS-7240.001.patch, 
> HDFS-11873-HDFS-7240.002.patch, HDFS-11873-HDFS-7240.003.patch, 
> HDFS-11873-HDFS-7240.004.patch, HDFS-11873-HDFS-7240.testcase.patch
>
>
> This issue was found when I worked on HDFS-11846. Instead of creating a new 
> http client instance per request, I tried to reuse {{CloseableHttpClient}} in 
> the {{OzoneClient}} class with a {{PoolingHttpClientConnectionManager}}. However, 
> every second request from the http client hangs and never gets dispatched to 
> {{ObjectStoreJerseyContainer}}. There seems to be something wrong in the netty 
> pipeline. This jira aims to 1) fix the problem on the server side and 2) pool 
> the client http clients to reduce resource overhead.
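
For context, a minimal standalone sketch of the client-side pooling pattern 
the description refers to (Apache HttpClient 4.x; the URL and pool size are 
made up):
{code}
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class PooledClientSketch {
  public static void main(String[] args) throws Exception {
    PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
    cm.setMaxTotal(20); // reuse connections instead of one client per request
    try (CloseableHttpClient client =
        HttpClients.custom().setConnectionManager(cm).build()) {
      // Before the fix, the second request on the same client would hang.
      for (int i = 0; i < 2; i++) {
        try (CloseableHttpResponse resp =
            client.execute(new HttpGet("http://localhost:9864/"))) {
          System.out.println("request " + i + " -> " + resp.getStatusLine());
        }
      }
    }
  }
}
{code}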






[jira] [Updated] (HDFS-11873) Ozone: Object store handler support reuse http client for multiple requests.

2017-09-18 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11873:
--
Summary: Ozone: Object store handler support reuse http client for multiple 
requests.  (was: Ozone: Object store handler cannot serve multiple requests 
from single http client)

> Ozone: Object store handler support reuse http client for multiple requests.
> 
>
> Key: HDFS-11873
> URL: https://issues.apache.org/jira/browse/HDFS-11873
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Xiaoyu Yao
>Priority: Critical
>  Labels: ozoneMerge
> Attachments: HDFS-11873-HDFS-7240.001.patch, 
> HDFS-11873-HDFS-7240.002.patch, HDFS-11873-HDFS-7240.003.patch, 
> HDFS-11873-HDFS-7240.004.patch, HDFS-11873-HDFS-7240.testcase.patch
>
>
> This issue was found when I worked on HDFS-11846. Instead of creating a new 
> http client instance per request, I tried to reuse {{CloseableHttpClient}} in 
> the {{OzoneClient}} class with a {{PoolingHttpClientConnectionManager}}. However, 
> every second request from the http client hangs and never gets dispatched to 
> {{ObjectStoreJerseyContainer}}. There seems to be something wrong in the netty 
> pipeline. This jira aims to 1) fix the problem on the server side and 2) pool 
> the client http clients to reduce resource overhead.






[jira] [Commented] (HDFS-11799) Introduce a config to allow setting up write pipeline with fewer nodes than replication factor

2017-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170508#comment-16170508
 ] 

Hadoop QA commented on HDFS-11799:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-hdfs-project: The patch generated 7 new + 
156 unchanged - 0 fixed = 163 total (was 156) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
12s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.TestParallelShortCircuitReadNoChecksum |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-11799 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12887692/HDFS-11799-008.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 7e86f3f0216e 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HDFS-11873) Ozone: Object store handler supports reusing http client for multiple requests.

2017-09-18 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11873:
--
Summary: Ozone: Object store handler supports reusing http client for 
multiple requests.  (was: Ozone: Object store handler support reuse http client 
for multiple requests.)

> Ozone: Object store handler supports reusing http client for multiple 
> requests.
> ---
>
> Key: HDFS-11873
> URL: https://issues.apache.org/jira/browse/HDFS-11873
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Xiaoyu Yao
>Priority: Critical
>  Labels: ozoneMerge
> Attachments: HDFS-11873-HDFS-7240.001.patch, 
> HDFS-11873-HDFS-7240.002.patch, HDFS-11873-HDFS-7240.003.patch, 
> HDFS-11873-HDFS-7240.004.patch, HDFS-11873-HDFS-7240.testcase.patch
>
>
> This issue was found when I worked on HDFS-11846. Instead of creating a new 
> http client instance per request, I tried to reuse {{CloseableHttpClient}} in 
> the {{OzoneClient}} class with a {{PoolingHttpClientConnectionManager}}. However, 
> every second request from the http client hangs and never gets dispatched to 
> {{ObjectStoreJerseyContainer}}. There seems to be something wrong in the netty 
> pipeline. This jira aims to 1) fix the problem on the server side and 2) pool 
> the client http clients to reduce resource overhead.






[jira] [Commented] (HDFS-11873) Ozone: Object store handler cannot serve multiple requests from single http client

2017-09-18 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170502#comment-16170502
 ] 

Xiaoyu Yao commented on HDFS-11873:
---

Thanks [~anu] and [~cheersyang] for the review. I will commit the patch soon.

> Ozone: Object store handler cannot serve multiple requests from single http 
> client
> --
>
> Key: HDFS-11873
> URL: https://issues.apache.org/jira/browse/HDFS-11873
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Xiaoyu Yao
>Priority: Critical
>  Labels: ozoneMerge
> Attachments: HDFS-11873-HDFS-7240.001.patch, 
> HDFS-11873-HDFS-7240.002.patch, HDFS-11873-HDFS-7240.003.patch, 
> HDFS-11873-HDFS-7240.004.patch, HDFS-11873-HDFS-7240.testcase.patch
>
>
> This issue was found when I worked on HDFS-11846. Instead of creating a new 
> http client instance per request, I tried to reuse {{CloseableHttpClient}} in 
> the {{OzoneClient}} class with a {{PoolingHttpClientConnectionManager}}. However, 
> every second request from the http client hangs and never gets dispatched to 
> {{ObjectStoreJerseyContainer}}. There seems to be something wrong in the netty 
> pipeline. This jira aims to 1) fix the problem on the server side and 2) pool 
> the client http clients to reduce resource overhead.






[jira] [Updated] (HDFS-12481) Ozone: Corona: Support for variable key length in offline mode

2017-09-18 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-12481:
--
Status: Patch Available  (was: Open)

> Ozone: Corona: Support for variable key length in offline mode
> --
>
> Key: HDFS-12481
> URL: https://issues.apache.org/jira/browse/HDFS-12481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12481-HDFS-7240.000.patch
>
>
> This jira adds support in corona for taking the key length from the user and 
> generating random data of that length to write into ozone.






[jira] [Commented] (HDFS-11873) Ozone: Object store handler cannot serve multiple requests from single http client

2017-09-18 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170494#comment-16170494
 ] 

Anu Engineer commented on HDFS-11873:
-

+1. [~cheersyang], thanks for finding this issue and reporting it. [~xyao], 
thanks for fixing it. I know it was a very complicated issue; I appreciate all 
the effort that went into this fix.

> Ozone: Object store handler cannot serve multiple requests from single http 
> client
> --
>
> Key: HDFS-11873
> URL: https://issues.apache.org/jira/browse/HDFS-11873
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Xiaoyu Yao
>Priority: Critical
>  Labels: ozoneMerge
> Attachments: HDFS-11873-HDFS-7240.001.patch, 
> HDFS-11873-HDFS-7240.002.patch, HDFS-11873-HDFS-7240.003.patch, 
> HDFS-11873-HDFS-7240.004.patch, HDFS-11873-HDFS-7240.testcase.patch
>
>
> This issue was found when I worked on HDFS-11846. Instead of creating a new 
> http client instance per request, I tried to reuse {{CloseableHttpClient}} in 
> the {{OzoneClient}} class with a {{PoolingHttpClientConnectionManager}}. However, 
> every second request from the http client hangs and never gets dispatched to 
> {{ObjectStoreJerseyContainer}}. There seems to be something wrong in the netty 
> pipeline. This jira aims to 1) fix the problem on the server side and 2) pool 
> the client http clients to reduce resource overhead.






[jira] [Commented] (HDFS-12473) Change hosts JSON file format

2017-09-18 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170493#comment-16170493
 ] 

Manoj Govindassamy commented on HDFS-12473:
---

[~mingma],
  Thanks for the patch revisions. One of our minor releases recently shipped 
with support for Maintenance state, and it uses the older hosts JSON format just 
like upstream. Internally we had a question about the non-standard JSON format 
in the hosts file, but we didn't want to diverge from the upstream code. Now 
that we are fixing the hosts file JSON format, upgrade handling for people like 
us would be very helpful. Do you think otherwise, given that upstream releases 
don't have this issue?



> Change hosts JSON file format
> -
>
> Key: HDFS-12473
> URL: https://issues.apache.org/jira/browse/HDFS-12473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-12473-2.patch, HDFS-12473-3.patch, 
> HDFS-12473-4.patch, HDFS-12473-5.patch, HDFS-12473.patch
>
>
> The existing host JSON file format doesn't have a top-level token.
> {noformat}
>   {"hostName": "host1"}
>   {"hostName": "host2", "upgradeDomain": "ud0"}
>   {"hostName": "host3", "adminState": "DECOMMISSIONED"}
>   {"hostName": "host4", "upgradeDomain": "ud2", "adminState": 
> "DECOMMISSIONED"}
>   {"hostName": "host5", "port": 8090}
>   {"hostName": "host6", "adminState": "IN_MAINTENANCE"}
>   {"hostName": "host7", "adminState": "IN_MAINTENANCE", 
> "maintenanceExpireTimeInMS": "112233"}
> {noformat}
> Instead, to conform with the JSON standard it should be like
> {noformat}
> [
>   {"hostName": "host1"},
>   {"hostName": "host2", "upgradeDomain": "ud0"},
>   {"hostName": "host3", "adminState": "DECOMMISSIONED"},
>   {"hostName": "host4", "upgradeDomain": "ud2", "adminState": 
> "DECOMMISSIONED"},
>   {"hostName": "host5", "port": 8090},
>   {"hostName": "host6", "adminState": "IN_MAINTENANCE"},
>   {"hostName": "host7", "adminState": "IN_MAINTENANCE", 
> "maintenanceExpireTimeInMS": "112233"}
> ]
> {noformat}
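
A minimal sketch of why the array form is easier to consume, assuming Jackson 
(not the actual Hadoop parser; the file name is made up): a top-level JSON 
array parses in one call, whereas the old concatenated-object form needs 
special streaming handling.
{code}
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.File;
import java.util.List;
import java.util.Map;

public class HostsFileSketch {
  public static void main(String[] args) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    // One call reads the whole array of host entries.
    List<Map<String, Object>> hosts = mapper.readValue(
        new File("dfs.hosts.json"),
        new TypeReference<List<Map<String, Object>>>() {});
    for (Map<String, Object> h : hosts) {
      System.out.println(h.get("hostName") + " -> " + h.get("adminState"));
    }
  }
}
{code}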






[jira] [Updated] (HDFS-12481) Ozone: Corona: Support for variable key length in offline mode

2017-09-18 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-12481:
--
Attachment: HDFS-12481-HDFS-7240.000.patch

> Ozone: Corona: Support for variable key length in offline mode
> --
>
> Key: HDFS-12481
> URL: https://issues.apache.org/jira/browse/HDFS-12481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12481-HDFS-7240.000.patch
>
>
> This jira adds support in corona for taking the key length from the user and 
> generating random data of that length to write into ozone.






[jira] [Created] (HDFS-12481) Ozone: Corona: Support for variable key length in offline mode

2017-09-18 Thread Nandakumar (JIRA)
Nandakumar created HDFS-12481:
-

 Summary: Ozone: Corona: Support for variable key length in offline 
mode
 Key: HDFS-12481
 URL: https://issues.apache.org/jira/browse/HDFS-12481
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Nandakumar
Assignee: Nandakumar


This jira adds support in corona for taking the key length from the user and 
generating random data of that length to write into ozone.






[jira] [Updated] (HDFS-12480) TestNameNodeMetrics#testTransactionAndCheckpointMetrics Fails in trunk

2017-09-18 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-12480:
--
Status: Patch Available  (was: Open)

> TestNameNodeMetrics#testTransactionAndCheckpointMetrics Fails in trunk
> --
>
> Key: HDFS-12480
> URL: https://issues.apache.org/jira/browse/HDFS-12480
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Hanisha Koneru
> Attachments: HDFS-12480.001.patch
>
>
> {noformat}
> java.lang.AssertionError: Bad value for metric LastWrittenTransactionId 
> expected:<3> but was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.test.MetricsAsserts.assertGauge(MetricsAsserts.java:189)
>   at 
> org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics.testTransactionAndCheckpointMetrics(TestNameNodeMetrics.java:854)
> {noformat}






[jira] [Updated] (HDFS-12480) TestNameNodeMetrics#testTransactionAndCheckpointMetrics Fails in trunk

2017-09-18 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-12480:
--
Attachment: HDFS-12480.001.patch

Thanks for reporting this, [~brahmareddy].

HDFS-12395 added erasure coding policy operations to the NN edit log, and 
HDFS-12414 made the following change in the _TestNamenodeMetrics#setUp()_ method:
{code} 
fs.enableErasureCodingPolicy(EC_POLICY.getName());
{code}

This adds the EC operation to the edit log, incrementing the transaction id by 
one. That extra transaction was not being accounted for in 
_testTransactionAndCheckpointMetrics_. 
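
The shape of the fix, illustratively (the constants below are for illustration, 
not the committed patch): the expected transaction count must include the extra 
edit-log op written in {{setUp()}}.
{code}
long expectedLastWrittenTxId = 3 + 1; // +1 for the enableErasureCodingPolicy op
assertGauge("LastWrittenTransactionId", expectedLastWrittenTxId,
    getMetrics(NN_METRICS));
{code}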

> TestNameNodeMetrics#testTransactionAndCheckpointMetrics Fails in trunk
> --
>
> Key: HDFS-12480
> URL: https://issues.apache.org/jira/browse/HDFS-12480
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Hanisha Koneru
> Attachments: HDFS-12480.001.patch
>
>
> {noformat}
> java.lang.AssertionError: Bad value for metric LastWrittenTransactionId 
> expected:<3> but was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.test.MetricsAsserts.assertGauge(MetricsAsserts.java:189)
>   at 
> org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics.testTransactionAndCheckpointMetrics(TestNameNodeMetrics.java:854)
> {noformat}






[jira] [Commented] (HDFS-12480) TestNameNodeMetrics#testTransactionAndCheckpointMetrics Fails in trunk

2017-09-18 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170452#comment-16170452
 ] 

Hanisha Koneru commented on HDFS-12480:
---

Uploaded a patch to fix this.

> TestNameNodeMetrics#testTransactionAndCheckpointMetrics Fails in trunk
> --
>
> Key: HDFS-12480
> URL: https://issues.apache.org/jira/browse/HDFS-12480
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Hanisha Koneru
> Attachments: HDFS-12480.001.patch
>
>
> {noformat}
> java.lang.AssertionError: Bad value for metric LastWrittenTransactionId 
> expected:<3> but was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.test.MetricsAsserts.assertGauge(MetricsAsserts.java:189)
>   at 
> org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics.testTransactionAndCheckpointMetrics(TestNameNodeMetrics.java:854)
> {noformat}






[jira] [Updated] (HDFS-12473) Change hosts JSON file format

2017-09-18 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-12473:
---
Attachment: HDFS-12473-5.patch

From discussion with [~zhz], 2.8.2 hasn't been released yet. Thus we don't 
need to deal with the backward compatibility issue of the old JSON format being 
used in HDFS-7541, assuming we can get the patch into 2.8.2 and branch-3.0. 
[~manojg], here is the latest patch.

> Change hosts JSON file format
> -
>
> Key: HDFS-12473
> URL: https://issues.apache.org/jira/browse/HDFS-12473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-12473-2.patch, HDFS-12473-3.patch, 
> HDFS-12473-4.patch, HDFS-12473-5.patch, HDFS-12473.patch
>
>
> The existing host JSON file format doesn't have a top-level token.
> {noformat}
>   {"hostName": "host1"}
>   {"hostName": "host2", "upgradeDomain": "ud0"}
>   {"hostName": "host3", "adminState": "DECOMMISSIONED"}
>   {"hostName": "host4", "upgradeDomain": "ud2", "adminState": 
> "DECOMMISSIONED"}
>   {"hostName": "host5", "port": 8090}
>   {"hostName": "host6", "adminState": "IN_MAINTENANCE"}
>   {"hostName": "host7", "adminState": "IN_MAINTENANCE", 
> "maintenanceExpireTimeInMS": "112233"}
> {noformat}
> Instead, to conform with the JSON standard it should be like
> {noformat}
> [
>   {"hostName": "host1"},
>   {"hostName": "host2", "upgradeDomain": "ud0"},
>   {"hostName": "host3", "adminState": "DECOMMISSIONED"},
>   {"hostName": "host4", "upgradeDomain": "ud2", "adminState": 
> "DECOMMISSIONED"},
>   {"hostName": "host5", "port": 8090},
>   {"hostName": "host6", "adminState": "IN_MAINTENANCE"},
>   {"hostName": "host7", "adminState": "IN_MAINTENANCE", 
> "maintenanceExpireTimeInMS": "112233"}
> ]
> {noformat}






[jira] [Commented] (HDFS-12268) Ozone: Add metrics for pending storage container requests

2017-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170430#comment-16170430
 ] 

Hadoop QA commented on HDFS-12268:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
1s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
41s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}142m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}187m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestAuditLogs |
|   | hadoop.hdfs.TestLeaseRecovery |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 |
|   | hadoop.ozone.container.common.impl.TestContainerPersistence |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | 
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 |
|   | hadoop.hdfs.TestEncryptedTransfer |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
|   | org.apache.hadoop.cblock.TestLocalBlockCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12268 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-11294) libhdfs++: Segfault in HA failover if DNS lookup for both Namenodes fails

2017-09-18 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170426#comment-16170426
 ] 

James Clampffer commented on HDFS-11294:


New tests were added; test4tests just doesn't notice.  Trying to sort out the 
test failure now.  Haven't been able to reproduce it on my own docker instance 
yet.

> libhdfs++: Segfault in HA failover if DNS lookup for both Namenodes fails
> -
>
> Key: HDFS-11294
> URL: https://issues.apache.org/jira/browse/HDFS-11294
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-11294.HDFS-8707.001.patch, 
> HDFS-11294.HDFS-8707.002.patch, HDFS-8707.HDFS-11294.000.patch
>
>
> Hit while doing more manual testing on HDFS-11028.
> The HANamenodeTracker takes an asio endpoint to figure out what endpoint on 
> the other node to try next during a failover.  This is done by passing the 
> element at index 0 from endpoints (a std::vector).
> When DNS fails the endpoints vector for that node will be empty so the 
> iterator returned by endpoints\[0\] is just a null pointer that gets 
> dereferenced and causes a segfault.






[jira] [Assigned] (HDFS-12480) TestNameNodeMetrics#testTransactionAndCheckpointMetrics Fails in trunk

2017-09-18 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru reassigned HDFS-12480:
-

Assignee: Hanisha Koneru  (was: Bharat Viswanadham)

> TestNameNodeMetrics#testTransactionAndCheckpointMetrics Fails in trunk
> --
>
> Key: HDFS-12480
> URL: https://issues.apache.org/jira/browse/HDFS-12480
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Hanisha Koneru
>
> {noformat}
> java.lang.AssertionError: Bad value for metric LastWrittenTransactionId 
> expected:<3> but was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.test.MetricsAsserts.assertGauge(MetricsAsserts.java:189)
>   at 
> org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics.testTransactionAndCheckpointMetrics(TestNameNodeMetrics.java:854)
> {noformat}






[jira] [Assigned] (HDFS-12480) TestNameNodeMetrics#testTransactionAndCheckpointMetrics Fails in trunk

2017-09-18 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDFS-12480:
-

Assignee: Bharat Viswanadham  (was: Hanisha Koneru)

> TestNameNodeMetrics#testTransactionAndCheckpointMetrics Fails in trunk
> --
>
> Key: HDFS-12480
> URL: https://issues.apache.org/jira/browse/HDFS-12480
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Bharat Viswanadham
>
> {noformat}
> java.lang.AssertionError: Bad value for metric LastWrittenTransactionId 
> expected:<3> but was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.test.MetricsAsserts.assertGauge(MetricsAsserts.java:189)
>   at 
> org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics.testTransactionAndCheckpointMetrics(TestNameNodeMetrics.java:854)
> {noformat}






[jira] [Assigned] (HDFS-12480) TestNameNodeMetrics#testTransactionAndCheckpointMetrics Fails in trunk

2017-09-18 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru reassigned HDFS-12480:
-

Assignee: Hanisha Koneru

> TestNameNodeMetrics#testTransactionAndCheckpointMetrics Fails in trunk
> --
>
> Key: HDFS-12480
> URL: https://issues.apache.org/jira/browse/HDFS-12480
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Hanisha Koneru
>
> {noformat}
> java.lang.AssertionError: Bad value for metric LastWrittenTransactionId 
> expected:<3> but was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.test.MetricsAsserts.assertGauge(MetricsAsserts.java:189)
>   at 
> org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics.testTransactionAndCheckpointMetrics(TestNameNodeMetrics.java:854)
> {noformat}






[jira] [Commented] (HDFS-12470) DiskBalancer: Some tests create plan files under system directory

2017-09-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170359#comment-16170359
 ] 

Hudson commented on HDFS-12470:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12897 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12897/])
HDFS-12470. DiskBalancer: Some tests create plan files under system (arp: rev 
a2dcba18531c6fa4b76325f5132773f12ddfc6d5)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/command/TestDiskBalancerCommand.java


> DiskBalancer: Some tests create plan files under system directory
> -
>
> Key: HDFS-12470
> URL: https://issues.apache.org/jira/browse/HDFS-12470
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: diskbalancer, test
>Reporter: Akira Ajisaka
>Assignee: Hanisha Koneru
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12470.001.patch
>
>
> When I ran HDFS tests, plan files were created under the system directory.
> {noformat}
> $ ls -R hadoop-hdfs-project/hadoop-hdfs/system
> diskbalancer
> hadoop-hdfs-project/hadoop-hdfs/system/diskbalancer:
> 2017-Sep-15-19-37-34
> hadoop-hdfs-project/hadoop-hdfs/system/diskbalancer/2017-Sep-15-19-37-34:
> a87654a9-54c7-4693-8dd9-c9c7021dc340.before.json 
> a87654a9-54c7-4693-8dd9-c9c7021dc340.plan.json
> {noformat}
> All the files created by tests should be under the target directory, so that 
> they are ignored by git.
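
A hedged sketch of the usual pattern for this kind of fix: root test output 
under Maven's {{target}} directory via the standard {{test.build.data}} 
property. The config key below is illustrative, not necessarily the one the 
patch touches.
{code}
// Test-side fragment: keep DiskBalancer plan files under target/, which git
// ignores. "dfs.disk.balancer.plan.dir" is an illustrative key name.
String planDir = new java.io.File(
    System.getProperty("test.build.data", "target/test-data"),
    "diskbalancer").getAbsolutePath();
conf.set("dfs.disk.balancer.plan.dir", planDir);
{code}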






[jira] [Commented] (HDFS-12470) DiskBalancer: Some tests create plan files under system directory

2017-09-18 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170358#comment-16170358
 ] 

Hanisha Koneru commented on HDFS-12470:
---

Thank you [~arpitagarwal] for reviewing and committing the patch.

> DiskBalancer: Some tests create plan files under system directory
> -
>
> Key: HDFS-12470
> URL: https://issues.apache.org/jira/browse/HDFS-12470
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: diskbalancer, test
>Reporter: Akira Ajisaka
>Assignee: Hanisha Koneru
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12470.001.patch
>
>
> When I ran HDFS tests, plan files were created under the system directory.
> {noformat}
> $ ls -R hadoop-hdfs-project/hadoop-hdfs/system
> diskbalancer
> hadoop-hdfs-project/hadoop-hdfs/system/diskbalancer:
> 2017-Sep-15-19-37-34
> hadoop-hdfs-project/hadoop-hdfs/system/diskbalancer/2017-Sep-15-19-37-34:
> a87654a9-54c7-4693-8dd9-c9c7021dc340.before.json 
> a87654a9-54c7-4693-8dd9-c9c7021dc340.plan.json
> {noformat}
> All the files created by tests should be under the target directory, so that 
> they are ignored by git.






[jira] [Comment Edited] (HDFS-12375) Fail to start/stop journalnodes using start-dfs.sh/stop-dfs.sh.

2017-09-18 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16169155#comment-16169155
 ] 

Bharat Viswanadham edited comment on HDFS-12375 at 9/18/17 5:14 PM:


One more question from your configuration:

  <property>
    <name>dfs.nameservices</name>
    <value>ns1,ns2</value>
  </property>

Here you have two values; this is for federation, right, where we give 
nameserviceIds?

Whereas in HA we provide a logical name for our service, so it should have 
only one value, right? Like:

  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>

so that we access our cluster like hdfs://mycluster/.

Let me know if I am missing something here.



was (Author: bharatviswa):
One more question from your configuration 

  <property>
    <name>dfs.nameservices</name>
    <value>ns1,ns2</value>
  </property>

Here you having two values, this is for federation right, where we give 
nameserviceIds.

Where as in HA we provide, logical name for our service, it should have one 
value only right?
like 

  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>

so that we acces sour cluster like hdfs://mycluster//

Let me know if I am missing something here?


> Fail to start/stop journalnodes using start-dfs.sh/stop-dfs.sh.
> ---
>
> Key: HDFS-12375
> URL: https://issues.apache.org/jira/browse/HDFS-12375
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, scripts
>Affects Versions: 3.0.0-beta1
>Reporter: Wenxin He
>Assignee: Bharat Viswanadham
> Attachments: hdfs-site.xml
>
>
> When 'dfs.namenode.checkpoint.edits.dir' is suffixed with the corresponding 
> NameServiceID, we cannot start/stop journalnodes using 
> start-dfs.sh/stop-dfs.sh.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12480) TestNameNodeMetrics#testTransactionAndCheckpointMetrics Fails in trunk

2017-09-18 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-12480:
---

 Summary: TestNameNodeMetrics#testTransactionAndCheckpointMetrics 
Fails in trunk
 Key: HDFS-12480
 URL: https://issues.apache.org/jira/browse/HDFS-12480
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Brahma Reddy Battula


{noformat}
java.lang.AssertionError: Bad value for metric LastWrittenTransactionId 
expected:<3> but was:<4>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at 
org.apache.hadoop.test.MetricsAsserts.assertGauge(MetricsAsserts.java:189)
at 
org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics.testTransactionAndCheckpointMetrics(TestNameNodeMetrics.java:854)
{noformat}
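
For what it's worth, a hedged sketch of a less brittle assertion: compare against the value observed before the operations instead of a hard-coded constant (assumes the {{MetricsAsserts}} helpers and the usual {{NameNodeActivity}} record name; the delta of 1 is illustrative):
{code}
import static org.apache.hadoop.test.MetricsAsserts.getLongGauge;
import static org.apache.hadoop.test.MetricsAsserts.getMetrics;
import static org.junit.Assert.assertEquals;

long before = getLongGauge("LastWrittenTransactionId",
    getMetrics("NameNodeActivity"));
// ... perform exactly one transaction ...
long after = getLongGauge("LastWrittenTransactionId",
    getMetrics("NameNodeActivity"));
assertEquals(1L, after - before);  // assert the delta, not an absolute value
{code}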



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12470) DiskBalancer: Some tests create plan files under system directory

2017-09-18 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-12470:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

I've committed this. Thanks for the contribution [~hanishakoneru].

> DiskBalancer: Some tests create plan files under system directory
> -
>
> Key: HDFS-12470
> URL: https://issues.apache.org/jira/browse/HDFS-12470
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: diskbalancer, test
>Reporter: Akira Ajisaka
>Assignee: Hanisha Koneru
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12470.001.patch
>
>
> When I ran HDFS tests, plan files were created under the system directory.
> {noformat}
> $ ls -R hadoop-hdfs-project/hadoop-hdfs/system
> diskbalancer
> hadoop-hdfs-project/hadoop-hdfs/system/diskbalancer:
> 2017-Sep-15-19-37-34
> hadoop-hdfs-project/hadoop-hdfs/system/diskbalancer/2017-Sep-15-19-37-34:
> a87654a9-54c7-4693-8dd9-c9c7021dc340.before.json 
> a87654a9-54c7-4693-8dd9-c9c7021dc340.plan.json
> {noformat}
> All files created by tests should be under the target directory, so that 
> they are ignored by git.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12371) "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX

2017-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170287#comment-16170287
 ] 

Hadoop QA commented on HDFS-12371:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.TestDFSShellGenericOptions |
|   | hadoop.hdfs.TestParallelShortCircuitReadNoChecksum |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12371 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12887457/HDFS-12371.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 745b94c3c220 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0f9af24 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21194/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21194/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21194/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX
> 

[jira] [Commented] (HDFS-12472) Add JUNIT timeout to TestBlockStatsMXBean

2017-09-18 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170268#comment-16170268
 ] 

Bharat Viswanadham commented on HDFS-12472:
---

Thank you [~arpitagarwal] for committing the patch.

> Add JUNIT timeout to TestBlockStatsMXBean 
> --
>
> Key: HDFS-12472
> URL: https://issues.apache.org/jira/browse/HDFS-12472
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lei (Eddy) Xu
>Assignee: Bharat Viswanadham
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HDFS-12472.00.patch, HDFS-12472.01.patch
>
>
> Add a JUnit timeout to {{TestBlockStatsMXBean}} so that it shows up in the 
> test failure report if a timeout occurs.
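
For illustration, a minimal JUnit 4 sketch of the idea (the 300-second value is illustrative, not necessarily what the patch uses):
{code}
import org.junit.Rule;
import org.junit.rules.Timeout;

public class TestBlockStatsMXBean {
  // Class-wide timeout: a hung test is reported as a failure in the
  // test report instead of stalling the whole run.
  @Rule
  public Timeout globalTimeout = Timeout.seconds(300);

  // ... test methods ...
}
{code}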



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12478) [WRITE] Command line tools for managing Provided Storage Backup mounts

2017-09-18 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170264#comment-16170264
 ] 

Allen Wittenauer commented on HDFS-12478:
-

bq. hdfs attach -remove

Using two conflicting verbs on the command line is extremely confusing. 'hdfs 
attach' should probably be 'hdfs psb' or something similar.

> [WRITE] Command line tools for managing Provided Storage Backup mounts
> --
>
> Key: HDFS-12478
> URL: https://issues.apache.org/jira/browse/HDFS-12478
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
>
> This is a task for implementing the command line interface for attaching a 
> PROVIDED storage backup system (see HDFS-9806, HDFS-12090).
> # The administrator should be able to mount a PROVIDED storage volume from 
> the command line. 
> {code}hdfs attach -create [-name <name>] <path (local)> <path (external)>{code}
> # Whitelist of users who are able to manage mounts (create, attach, detach).
> # Be able to interrogate the status of the attached storage (last time a 
> snapshot was taken, files being backed up).
> # The administrator should be able to remove an attached PROVIDED storage 
> volume from the command line. This simply means that the synchronization 
> process no longer runs. If the administrator has configured the setup to no 
> longer keep local copies of the data, the blocks in the subtree are no 
> longer accessible, since the external file store system is inaccessible.
> {code}hdfs attach -remove <name> [-force | -flush]{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11799) Introduce a config to allow setting up write pipeline with fewer nodes than replication factor

2017-09-18 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11799:

Attachment: HDFS-11799-008.patch

Uploaded a new patch to address the above comment.

> Introduce a config to allow setting up write pipeline with fewer nodes than 
> replication factor
> --
>
> Key: HDFS-11799
> URL: https://issues.apache.org/jira/browse/HDFS-11799
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-11799-002.patch, HDFS-11799-003.patch, 
> HDFS-11799-004.patch, HDFS-11799-005.patch, HDFS-11799-006.patch, 
> HDFS-11799-007.patch, HDFS-11799-008.patch, HDFS-11799.patch
>
>
> During pipeline recovery, if not enough DNs can be found and 
> dfs.client.block.write.replace-datanode-on-failure.best-effort
> is enabled, we let the pipeline continue, even if only a single DN remains.
> Similarly, when we create the write pipeline initially, if for some reason we 
> cannot find enough DNs, a similar config could enable writing with a 
> single DN.
> More study will be done.
>  
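
For illustration, a hedged sketch of how a client might opt in (the best-effort key exists today; the min-replication key is the one under review here, its exact name assumed from the patch snippet quoted in the review comment below):
{code}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Existing knob: let pipeline *recovery* continue best-effort even
// when replacement DNs cannot be found.
conf.setBoolean(
    "dfs.client.block.write.replace-datanode-on-failure.best-effort",
    true);
// Proposed knob: apply the same idea to the *initial* pipeline,
// accepting as few as one DN.
conf.setInt(
    "dfs.client.block.write.replace-datanode-on-failure.min-replication",
    1);
{code}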



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11799) Introduce a config to allow setting up write pipeline with fewer nodes than replication factor

2017-09-18 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170210#comment-16170210
 ] 

Yongjun Zhang commented on HDFS-11799:
--

Thanks for the updated patch, [~brahmareddy]. +1 pending the following nit and 
the jenkins test:
{code}
String MIN_REPLICATION = PREFIX + "min-replication";
short  REPLICATION_DEFAULT = 0;
{code}
The config and default-value variable names are not consistent. I suggest 
renaming the default to {{MIN_REPLICATION_DEFAULT}}.
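
That is, a sketch of the suggested consistent naming:
{code}
String MIN_REPLICATION = PREFIX + "min-replication";
short  MIN_REPLICATION_DEFAULT = 0;
{code}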


> Introduce a config to allow setting up write pipeline with fewer nodes than 
> replication factor
> --
>
> Key: HDFS-11799
> URL: https://issues.apache.org/jira/browse/HDFS-11799
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-11799-002.patch, HDFS-11799-003.patch, 
> HDFS-11799-004.patch, HDFS-11799-005.patch, HDFS-11799-006.patch, 
> HDFS-11799-007.patch, HDFS-11799.patch
>
>
> During pipeline recovery, if not enough DNs can be found and 
> dfs.client.block.write.replace-datanode-on-failure.best-effort
> is enabled, we let the pipeline continue, even if only a single DN remains.
> Similarly, when we create the write pipeline initially, if for some reason we 
> cannot find enough DNs, a similar config could enable writing with a 
> single DN.
> More study will be done.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12268) Ozone: Add metrics for pending storage container requests

2017-09-18 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12268:
-
Attachment: HDFS-12268-HDFS-7240.009.patch

Attached a new patch to fix the checkstyle and findbugs warnings.

> Ozone: Add metrics for pending storage container requests
> -
>
> Key: HDFS-12268
> URL: https://issues.apache.org/jira/browse/HDFS-12268
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: ozoneMerge
> Attachments: HDFS-12268-HDFS-7240.001.patch, 
> HDFS-12268-HDFS-7240.002.patch, HDFS-12268-HDFS-7240.003.patch, 
> HDFS-12268-HDFS-7240.004.patch, HDFS-12268-HDFS-7240.005.patch, 
> HDFS-12268-HDFS-7240.006.patch, HDFS-12268-HDFS-7240.007.patch, 
> HDFS-12268-HDFS-7240.008.patch, HDFS-12268-HDFS-7240.009.patch
>
>
>  Since the storage container async interface was added in HDFS-11580, we 
> need to keep an eye on the queue depth of pending container requests. This 
> can help us detect performance problems earlier.
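
For illustration, a hedged sketch of the idea (class and metric names are illustrative, not necessarily the patch's): bump a gauge when an async container request is submitted and drop it when the response callback fires, so the gauge reads as the current queue depth.
{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableGaugeLong;

@Metrics(about = "Pending container request metrics", context = "dfs")
public class PendingContainerMetricsSketch {
  @Metric private MutableGaugeLong pendingOps;

  public void requestSubmitted()  { pendingOps.incr(); }
  public void responseReceived()  { pendingOps.decr(); }

  public long queueDepth() { return pendingOps.value(); }
}
{code}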



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12371) "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX

2017-09-18 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-12371:
--
Status: Patch Available  (was: Open)

> "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX
> -
>
> Key: HDFS-12371
> URL: https://issues.apache.org/jira/browse/HDFS-12371
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Sai Nukavarapu
>Assignee: Hanisha Koneru
> Attachments: HDFS-12371.001.patch
>
>
> "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX
> Looking at the code, I see the description below.
> {noformat}
> `BlockVerificationFailures` | Total number of verification failures | 
> `BlocksVerified` | Total number of blocks verified | 
> {noformat}
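
For reference, a hedged sketch of reading the counters in-process (e.g. from a {{MiniDFSCluster}} test); the ObjectName below is the usual pattern for DataNode metrics, so verify it against your deployment:
{code}
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class DnJmxProbe {
  public static void main(String[] args) throws Exception {
    MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
    ObjectName dn =
        new ObjectName("Hadoop:service=DataNode,name=DataNodeMetrics");
    System.out.println("BlocksVerified = "
        + mbs.getAttribute(dn, "BlocksVerified"));
    System.out.println("BlockVerificationFailures = "
        + mbs.getAttribute(dn, "BlockVerificationFailures"));
  }
}
{code}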



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12084) Scheduled Count will not decrement when file is deleted before all IBR's received

2017-09-18 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170082#comment-16170082
 ] 

Kihwal Lee commented on HDFS-12084:
---

Sorry, I will get to it today.

> Scheduled Count will not decrement when file is deleted before all IBR's 
> received
> -
>
> Key: HDFS-12084
> URL: https://issues.apache.org/jira/browse/HDFS-12084
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-12084-001.patch, HDFS-12084-002.patch, 
> HDFS-12084-003.patch, HDFS-12084-branch-2.patch
>
>
> When small files are created and deleted frequently and the DNs do not 
> report the blocks to the NN before the deletion, the scheduled count keeps 
> incrementing and is never decremented, because the blocks are already deleted.
> *Note*: the count is rolled every 20 minutes, but within those 20 minutes it 
> can grow large when there are many operations.
> This is observed more often when batch IBR is enabled with committed allowed=1.
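
For illustration, a toy model of the leak (not Hadoop code; names are illustrative):
{code}
import java.util.concurrent.atomic.AtomicLong;

class ScheduledCounterSketch {
  private final AtomicLong scheduled = new AtomicLong();

  // Incremented when a replica write is scheduled on a DN.
  void onReplicaScheduled() { scheduled.incrementAndGet(); }

  // Decremented only when the IBR for that block arrives. If the
  // file is deleted before the IBR, this is never called, so the
  // count leaks until the periodic (~20 min) roll resets it.
  void onIncrementalBlockReport() { scheduled.decrementAndGet(); }

  long value() { return scheduled.get(); }
}
{code}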



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12268) Ozone: Add metrics for pending storage container requests

2017-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16169910#comment-16169910
 ] 

Hadoop QA commented on HDFS-12268:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
30s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-hdfs-project: The patch generated 3 new + 
0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
15s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}153m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Should org.apache.hadoop.scm.XceiverClientHandler$ResponseFuture be a 
_static_ inner class?  At XceiverClientHandler.java:inner class?  At 
XceiverClientHandler.java:[lines 172-175] |
| Failed junit tests | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|  

[jira] [Commented] (HDFS-12395) Support erasure coding policy operations in namenode edit log

2017-09-18 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16169867#comment-16169867
 ] 

Brahma Reddy Battula commented on HDFS-12395:
-

Thanks for taking care of this. The test cases could have been fixed in a 
separate jira, since HDFS-12460 states "Make addErasureCodingPolicy an 
idempotent operation". Anyway, it's committed; should be fine, I feel.

> Support erasure coding policy operations in namenode edit log
> -
>
> Key: HDFS-12395
> URL: https://issues.apache.org/jira/browse/HDFS-12395
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-beta1
>
> Attachments: editsStored, HDFS-12395.001.patch, HDFS-12395.002.patch, 
> HDFS-12395.003.patch, HDFS-12395.004.patch
>
>
> Support add, remove, disable, enable erasure coding policy operation in edit 
> log. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12205) Ozone: List Key on an empty ozone bucket fails with command failed error

2017-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16169862#comment-16169862
 ] 

Hadoop QA commented on HDFS-12205:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
32s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12205 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12887622/HDFS-12205-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cc58572ceba1 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 8dbd035 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21191/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-12479) Some misuses of lock in DFSStripedOutputStream

2017-09-18 Thread Huafeng Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16169861#comment-16169861
 ] 

Huafeng Wang commented on HDFS-12479:
-

Hi [~drankye], can you help review this patch? Thanks!

> Some misuses of lock in DFSStripedOutputStream
> --
>
> Key: HDFS-12479
> URL: https://issues.apache.org/jira/browse/HDFS-12479
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Huafeng Wang
>Assignee: Huafeng Wang
>Priority: Minor
> Attachments: HDFS-12479.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12460) Make addErasureCodingPolicy an idempotent operation

2017-09-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16169840#comment-16169840
 ] 

Hudson commented on HDFS-12460:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12895 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12895/])
HDFS-12460. Make addErasureCodingPolicy an idempotent operation. (kai.zheng: 
rev 0f9af246e89e4ad3c4d7ff2c1d7ec9b397494a03)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicies.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingPolicyManager.java


> Make addErasureCodingPolicy an idempotent operation
> ---
>
> Key: HDFS-12460
> URL: https://issues.apache.org/jira/browse/HDFS-12460
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: SammiChen
>Assignee: SammiChen
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12460.001.patch
>
>
> Make addErasureCodingPolicy an idempotent operation to guarantee that, after 
> an HA switch, the addErasureCodingPolicy edit log entry can be applied smoothly.
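
For illustration, a hedged sketch of what idempotency means here (illustrative code, not the committed change): re-applying the same add during edit-log replay returns the existing equivalent policy instead of failing.
{code}
import java.util.Map;
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;

class PolicyStoreSketch {
  synchronized ErasureCodingPolicy addPolicy(
      ErasureCodingPolicy p, Map<String, ErasureCodingPolicy> byName) {
    ErasureCodingPolicy existing = byName.get(p.getName());
    if (existing != null && existing.equals(p)) {
      return existing;  // replaying the same op is a no-op
    }
    if (existing != null) {
      throw new IllegalArgumentException(
          "A different policy with name " + p.getName() + " exists");
    }
    byName.put(p.getName(), p);
    return p;
  }
}
{code}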



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12395) Support erasure coding policy operations in namenode edit log

2017-09-18 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16169832#comment-16169832
 ] 

Kai Zheng commented on HDFS-12395:
--

Thanks Brahma for raising this. I just got HDFS-12460 in. [~Sammi], please help 
double-check with the latest trunk whether the two failures are fixed. Thanks.

> Support erasure coding policy operations in namenode edit log
> -
>
> Key: HDFS-12395
> URL: https://issues.apache.org/jira/browse/HDFS-12395
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-beta1
>
> Attachments: editsStored, HDFS-12395.001.patch, HDFS-12395.002.patch, 
> HDFS-12395.003.patch, HDFS-12395.004.patch
>
>
> Support add, remove, disable, enable erasure coding policy operation in edit 
> log. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


