[jira] [Commented] (HDFS-16569) Consider attaching block location info from client when closing a completed file

2023-01-12 Thread Doris Gu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17675889#comment-17675889
 ] 

Doris Gu commented on HDFS-16569:
-

> {{dfs.namenode.file.close.num-committed-allowed}} is not helpful when 
> most files are smaller than the block size.

@[~yuanbo] I think {{dfs.namenode.file.close.num-committed-allowed}} is more 
helpful when most files are smaller than the block size. Can you explain 
why?

> Consider attaching block location info from client when closing a completed 
> file
> 
>
> Key: HDFS-16569
> URL: https://issues.apache.org/jira/browse/HDFS-16569
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yuanbo Liu
>Priority: Major
>
> When a file is finished, the client cannot close it until the DataNodes report 
> RECEIVED_BLOCK via incremental block reports (IBRs) or the client times out. We 
> can always see this kind of log in the NameNode:
> {code:java}
> is COMMITTED but not COMPLETE(numNodes= 0 <  minimum = 1) in file{code}
> Since the client already has the locations of the last block, it is not necessary 
> to rely on IBRs from the DataNodes when closing the file.
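
For readers following the discussion, a minimal sketch of the close loop in question: today the client keeps asking the NameNode to complete the file until the last block is reported COMPLETE (which depends on the DataNodes' IBRs), while {{dfs.namenode.file.close.num-committed-allowed}} lets the NameNode accept the close while some blocks are still COMMITTED. The stub interface, retry budget and sleep below are illustrative assumptions, not the actual DFSOutputStream code.
{code:java}
import java.io.IOException;

/** Hypothetical stand-in for the NameNode RPC the client calls when closing a file. */
interface NameNodeStub {
  boolean complete(String src, String clientName) throws IOException;
}

class CloseLoopSketch {
  static void close(NameNodeStub namenode, String src, String clientName)
      throws IOException, InterruptedException {
    for (int retry = 0; retry < 5; retry++) {   // illustrative retry budget
      if (namenode.complete(src, clientName)) {
        return;                                 // last block reported COMPLETE
      }
      Thread.sleep(400);                        // wait for incremental block reports
    }
    throw new IOException("Unable to close " + src + ": last block not COMPLETE");
  }
}
{code}
The proposal here is to let the client attach the block locations it already knows to the close call, so the loop above would not have to wait for the DataNodes' IBRs.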



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15097) Purge log in KMS and HttpFS

2020-01-12 Thread Doris Gu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-15097:

Status: Patch Available  (was: Open)

> Purge log in KMS and HttpFS
> ---
>
> Key: HDFS-15097
> URL: https://issues.apache.org/jira/browse/HDFS-15097
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, kms
>Affects Versions: 3.1.3, 3.2.1, 3.0.3, 3.3.0
>Reporter: Doris Gu
>Assignee: Doris Gu
>Priority: Minor
> Attachments: HDFS-15097.001.patch
>
>
> KMS and HttpFS use ConfigurationWithLogging instead of Configuration, which 
> logs every configuration access. This behavior is better suited to development use.
> {code:java}
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 
> 'false') 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 'false')
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15097) Purge log in KMS and HttpFS

2020-01-12 Thread Doris Gu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-15097:

Attachment: HDFS-15097.001.patch

> Purge log in KMS and HttpFS
> ---
>
> Key: HDFS-15097
> URL: https://issues.apache.org/jira/browse/HDFS-15097
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, kms
>Affects Versions: 3.0.3, 3.3.0, 3.2.1, 3.1.3
>Reporter: Doris Gu
>Assignee: Doris Gu
>Priority: Minor
> Attachments: HDFS-15097.001.patch
>
>
> KMS and HttpFS use ConfigurationWithLogging instead of Configuration, which 
> logs every configuration access. This behavior is better suited to development use.
> {code:java}
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 
> 'false') 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 'false')
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15097) Purge log in KMS and HttpFS

2020-01-12 Thread Doris Gu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-15097:

Attachment: (was: HDFS-15097.001.patch)

> Purge log in KMS and HttpFS
> ---
>
> Key: HDFS-15097
> URL: https://issues.apache.org/jira/browse/HDFS-15097
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, kms
>Affects Versions: 3.0.3, 3.3.0, 3.2.1, 3.1.3
>Reporter: Doris Gu
>Assignee: Doris Gu
>Priority: Minor
> Attachments: HDFS-15097.001.patch
>
>
> KMS and HttpFS use ConfigurationWithLogging instead of Configuration, which 
> logs every configuration access. This behavior is better suited to development use.
> {code:java}
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 
> 'false') 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 'false')
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15097) Purge log in KMS and HttpFS

2020-01-12 Thread Doris Gu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-15097:

Status: Open  (was: Patch Available)

> Purge log in KMS and HttpFS
> ---
>
> Key: HDFS-15097
> URL: https://issues.apache.org/jira/browse/HDFS-15097
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, kms
>Affects Versions: 3.1.3, 3.2.1, 3.0.3, 3.3.0
>Reporter: Doris Gu
>Assignee: Doris Gu
>Priority: Minor
> Attachments: HDFS-15097.001.patch
>
>
> KMS and HttpFS use ConfigurationWithLogging instead of Configuration, which 
> logs every configuration access. This behavior is better suited to development use.
> {code:java}
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 
> 'false') 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 'false')
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15097) Purge log in KMS and HttpFS

2020-01-09 Thread Doris Gu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-15097:

Status: Patch Available  (was: Open)

> Purge log in KMS and HttpFS
> ---
>
> Key: HDFS-15097
> URL: https://issues.apache.org/jira/browse/HDFS-15097
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, kms
>Affects Versions: 3.1.3, 3.2.1, 3.0.3, 3.3.0
>Reporter: Doris Gu
>Assignee: Doris Gu
>Priority: Minor
> Attachments: HDFS-15097.001.patch
>
>
> KMS and HttpFS use ConfigurationWithLogging instead of Configuration, which 
> logs every configuration access. This behavior is better suited to development use.
> {code:java}
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 
> 'false') 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 'false')
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15097) Purge log in KMS and HttpFS

2020-01-09 Thread Doris Gu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-15097:

Status: Open  (was: Patch Available)

> Purge log in KMS and HttpFS
> ---
>
> Key: HDFS-15097
> URL: https://issues.apache.org/jira/browse/HDFS-15097
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, kms
>Affects Versions: 3.1.3, 3.2.1, 3.0.3, 3.3.0
>Reporter: Doris Gu
>Assignee: Doris Gu
>Priority: Minor
> Attachments: HDFS-15097.001.patch
>
>
> KMS and HttpFS use ConfigurationWithLogging instead of Configuration, which 
> logs every configuration access. This behavior is better suited to development use.
> {code:java}
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 
> 'false') 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 'false')
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15097) Purge log in KMS and HttpFS

2020-01-09 Thread Doris Gu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-15097:

Status: Patch Available  (was: Open)

> Purge log in KMS and HttpFS
> ---
>
> Key: HDFS-15097
> URL: https://issues.apache.org/jira/browse/HDFS-15097
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, kms
>Affects Versions: 3.1.3, 3.2.1, 3.0.3, 3.3.0
>Reporter: Doris Gu
>Assignee: Doris Gu
>Priority: Minor
> Attachments: HDFS-15097.001.patch
>
>
> KMS and HttpFS use ConfigurationWithLogging instead of Configuration, which 
> logs every configuration access. This behavior is better suited to development use.
> {code:java}
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 
> 'false') 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 'false')
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15097) Purge log in KMS and HttpFS

2020-01-09 Thread Doris Gu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011600#comment-17011600
 ] 

Doris Gu commented on HDFS-15097:
-

Thanks, [~weichiu].  By the way, since there are no other uses of the 
ConfigurationWithLogging class, would it be appropriate to delete it?
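
For context, a rough sketch of why every lookup shows up in the log: ConfigurationWithLogging wraps Configuration and logs each get. This is a simplified illustration rather than the actual Hadoop source; the purge proposed here is essentially to hand the servers a plain Configuration so routine lookups stop emitting INFO lines like the ones quoted below.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Simplified illustration of the logging wrapper (not the real class body). */
class LoggingConfigurationSketch extends Configuration {
  private static final Logger LOG =
      LoggerFactory.getLogger(LoggingConfigurationSketch.class);

  @Override
  public String get(String name, String defaultValue) {
    String value = super.get(name, defaultValue);
    // Every read of a property produces an INFO line such as the ones quoted below.
    LOG.info("Got {} = '{}' (default '{}')", name, value, defaultValue);
    return value;
  }
}
{code}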

 

> Purge log in KMS and HttpFS
> ---
>
> Key: HDFS-15097
> URL: https://issues.apache.org/jira/browse/HDFS-15097
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, kms
>Affects Versions: 3.0.3, 3.3.0, 3.2.1, 3.1.3
>Reporter: Doris Gu
>Assignee: Doris Gu
>Priority: Minor
> Attachments: HDFS-15097.001.patch
>
>
> KMS and HttpFS use ConfigurationWithLogging instead of Configuration, which 
> logs every configuration access. This behavior is better suited to development use.
> {code:java}
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 
> 'false') 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 'false')
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15097) Purge log in KMS and HttpFS

2020-01-07 Thread Doris Gu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17009560#comment-17009560
 ] 

Doris Gu commented on HDFS-15097:
-

[~jzhuge], [~xiaochen], could you please take a look? Thanks in advance!

> Purge log in KMS and HttpFS
> ---
>
> Key: HDFS-15097
> URL: https://issues.apache.org/jira/browse/HDFS-15097
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, kms
>Affects Versions: 3.0.3, 3.3.0, 3.2.1, 3.1.3
>Reporter: Doris Gu
>Assignee: Doris Gu
>Priority: Minor
> Attachments: HDFS-15097.001.patch
>
>
> KMS and HttpFS use ConfigurationWithLogging instead of Configuration, which 
> logs every configuration access. This behavior is better suited to development use.
> {code:java}
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 
> 'false') 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 'false')
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15097) Purge log in KMS and HttpFS

2020-01-07 Thread Doris Gu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-15097:

Attachment: HDFS-15097.001.patch

> Purge log in KMS and HttpFS
> ---
>
> Key: HDFS-15097
> URL: https://issues.apache.org/jira/browse/HDFS-15097
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, kms
>Affects Versions: 3.0.3, 3.3.0, 3.2.1, 3.1.3
>Reporter: Doris Gu
>Assignee: Doris Gu
>Priority: Minor
> Attachments: HDFS-15097.001.patch
>
>
> KMS and HttpFS use ConfigurationWithLogging instead of Configuration, which 
> logs every configuration access. This behavior is better suited to development use.
> {code:java}
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 
> 'false') 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 'false')
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15097) Purge log in KMS and HttpFS

2020-01-07 Thread Doris Gu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-15097:

Description: 
KMS and HttpFS use ConfigurationWithLogging instead of Configuration, which 
logs every configuration access. This behavior is better suited to development use.
{code:java}
// 2020-01-07 16:52:00,456 INFO 
org.apache.hadoop.conf.ConfigurationWithLogging: Got 
hadoop.security.instrumentation.requires.admin = 'false' 2020-01-07 
16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: Got 
hadoop.security.instrumentation.requires.admin = 'false' (default 'false') 
2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
Got hadoop.security.instrumentation.requires.admin = 'false' 2020-01-07 
16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: Got 
hadoop.security.instrumentation.requires.admin = 'false' (default 'false')
{code}
 

> Purge log in KMS and HttpFS
> ---
>
> Key: HDFS-15097
> URL: https://issues.apache.org/jira/browse/HDFS-15097
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, kms
>Affects Versions: 3.0.3, 3.3.0, 3.2.1, 3.1.3
>Reporter: Doris Gu
>Assignee: Doris Gu
>Priority: Minor
>
> KMS and HttpFS use ConfigurationWithLogging instead of Configuration, which 
> logs every configuration access. This behavior is better suited to development use.
> {code:java}
> // 2020-01-07 16:52:00,456 INFO 
> org.apache.hadoop.conf.ConfigurationWithLogging: Got 
> hadoop.security.instrumentation.requires.admin = 'false' 2020-01-07 
> 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: Got 
> hadoop.security.instrumentation.requires.admin = 'false' (default 'false') 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 2020-01-07 
> 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: Got 
> hadoop.security.instrumentation.requires.admin = 'false' (default 'false')
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15097) Purge log in KMS and HttpFS

2020-01-07 Thread Doris Gu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-15097:

Description: 
KMS and HttpFS use ConfigurationWithLogging instead of Configuration, which 
logs every configuration access. This behavior is better suited to development use.
{code:java}
2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
Got hadoop.security.instrumentation.requires.admin = 'false' 
2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
Got hadoop.security.instrumentation.requires.admin = 'false' (default 'false') 
2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
Got hadoop.security.instrumentation.requires.admin = 'false' 
2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
Got hadoop.security.instrumentation.requires.admin = 'false' (default 'false')
{code}
 

  was:
KMS and HttpFS use ConfigurationWithLogging instead of Configuration, which 
logs every configuration access. This behavior is better suited to development use.
{code:java}
// 2020-01-07 16:52:00,456 INFO 
org.apache.hadoop.conf.ConfigurationWithLogging: Got 
hadoop.security.instrumentation.requires.admin = 'false' 2020-01-07 
16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: Got 
hadoop.security.instrumentation.requires.admin = 'false' (default 'false') 
2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
Got hadoop.security.instrumentation.requires.admin = 'false' 2020-01-07 
16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: Got 
hadoop.security.instrumentation.requires.admin = 'false' (default 'false')
{code}
 


> Purge log in KMS and HttpFS
> ---
>
> Key: HDFS-15097
> URL: https://issues.apache.org/jira/browse/HDFS-15097
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, kms
>Affects Versions: 3.0.3, 3.3.0, 3.2.1, 3.1.3
>Reporter: Doris Gu
>Assignee: Doris Gu
>Priority: Minor
>
> KMS and HttpFS use ConfigurationWithLogging instead of Configuration, which 
> logs every configuration access. This behavior is better suited to development use.
> {code:java}
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 
> 'false') 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 'false')
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15097) Purge log in KMS and HttpFS

2020-01-07 Thread Doris Gu (Jira)
Doris Gu created HDFS-15097:
---

 Summary: Purge log in KMS and HttpFS
 Key: HDFS-15097
 URL: https://issues.apache.org/jira/browse/HDFS-15097
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: httpfs, kms
Affects Versions: 3.1.3, 3.2.1, 3.0.3, 3.3.0
Reporter: Doris Gu
Assignee: Doris Gu






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-11753) Multiple JournalNode Daemons Coexist

2018-07-15 Thread Doris Gu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-11753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu resolved HDFS-11753.
-
Resolution: Duplicate

> Multiple JournalNode Daemons Coexist
> 
>
> Key: HDFS-11753
> URL: https://issues.apache.org/jira/browse/HDFS-11753
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node
>Affects Versions: 3.0.0-alpha2, 2.8.1, 2.8.2
>Reporter: Doris Gu
>Assignee: Doris Gu
>Priority: Major
> Fix For: 3.0.0-beta1, 2.9.0
>
> Attachments: HDFS-11753.001.patch
>
>
> Add an exception catch and termination. If I start the journalnode multiple times, I 
> get multiple journalnode daemons that don't work because they are configured with the same port.
> {quote}[hdfs@localhost ~]$ jps
> *10107 JournalNode*
> *46023 JournalNode*
> 57944 NameNode
> 46539 Jps
> 57651 DFSZKFailoverController
> 57909 DataNode
> *57739 JournalNode*
> *45721 JournalNode*{quote}
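
For reference, a minimal sketch of the fix idea described above (this is not the attached patch; the DaemonStarter interface is a hypothetical stand-in for JournalNode#start()): if the port is already bound by an earlier JournalNode, terminate cleanly instead of leaving a half-started daemon behind.
{code:java}
import java.io.IOException;
import java.net.BindException;
import org.apache.hadoop.util.ExitUtil;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class JournalNodeStartSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(JournalNodeStartSketch.class);

  /** Hypothetical stand-in for the daemon start-up call. */
  interface DaemonStarter {
    void start() throws IOException;
  }

  static void startOrDie(DaemonStarter journalNode) {
    try {
      journalNode.start();
    } catch (BindException e) {
      // Another JournalNode already owns the port: exit instead of lingering.
      LOG.error("Port already in use; another JournalNode is likely running", e);
      ExitUtil.terminate(1, e);
    } catch (IOException e) {
      ExitUtil.terminate(1, e);
    }
  }
}
{code}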



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11753) Multiple JournalNode Daemons Coexist

2018-07-15 Thread Doris Gu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-11753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11753:

Status: Open  (was: Patch Available)

> Multiple JournalNode Daemons Coexist
> 
>
> Key: HDFS-11753
> URL: https://issues.apache.org/jira/browse/HDFS-11753
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node
>Affects Versions: 2.8.2, 2.8.1, 3.0.0-alpha2
>Reporter: Doris Gu
>Assignee: Doris Gu
>Priority: Major
> Fix For: 3.0.0-beta1, 2.9.0
>
> Attachments: HDFS-11753.001.patch
>
>
> Add an exception catch and termination. If I start the journalnode multiple times, I 
> get multiple journalnode daemons that don't work because they are configured with the same port.
> {quote}[hdfs@localhost ~]$ jps
> *10107 JournalNode*
> *46023 JournalNode*
> 57944 NameNode
> 46539 Jps
> 57651 DFSZKFailoverController
> 57909 DataNode
> *57739 JournalNode*
> *45721 JournalNode*{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11753) Multiple JournalNode Daemons Coexist

2018-07-15 Thread Doris Gu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-11753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11753:

Fix Version/s: 2.9.0
   3.0.0-beta1

> Multiple JournalNode Daemons Coexist
> 
>
> Key: HDFS-11753
> URL: https://issues.apache.org/jira/browse/HDFS-11753
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node
>Affects Versions: 3.0.0-alpha2, 2.8.1, 2.8.2
>Reporter: Doris Gu
>Assignee: Doris Gu
>Priority: Major
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HDFS-11753.001.patch
>
>
> Add an exception catch and termination. If I start the journalnode multiple times, I 
> get multiple journalnode daemons that don't work because they are configured with the same port.
> {quote}[hdfs@localhost ~]$ jps
> *10107 JournalNode*
> *46023 JournalNode*
> 57944 NameNode
> 46539 Jps
> 57651 DFSZKFailoverController
> 57909 DataNode
> *57739 JournalNode*
> *45721 JournalNode*{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11753) Multiple JournalNode Daemons Coexist

2018-06-26 Thread Doris Gu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523463#comment-16523463
 ] 

Doris Gu commented on HDFS-11753:
-

[~brahmareddy] Sorry for the late reply, and thank you. I have confirmed this is the 
same as HDFS-12407; could you please assign it to me so I can mark it as a duplicate?

> Multiple JournalNode Daemons Coexist
> 
>
> Key: HDFS-11753
> URL: https://issues.apache.org/jira/browse/HDFS-11753
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node
>Affects Versions: 3.0.0-alpha2, 2.8.1, 2.8.2
>Reporter: Doris Gu
>Priority: Major
> Attachments: HDFS-11753.001.patch
>
>
> Add an exception catch and termination. If I start the journalnode multiple times, I 
> get multiple journalnode daemons that don't work because they are configured with the same port.
> {quote}[hdfs@localhost ~]$ jps
> *10107 JournalNode*
> *46023 JournalNode*
> 57944 NameNode
> 46539 Jps
> 57651 DFSZKFailoverController
> 57909 DataNode
> *57739 JournalNode*
> *45721 JournalNode*{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11806) hdfs journalnode command should support meaningful --help

2018-01-05 Thread Doris Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312808#comment-16312808
 ] 

Doris Gu commented on HDFS-11806:
-

I ran the JUnit tests on my local machine; they passed.

> hdfs journalnode command should support meaningful --help
> -
>
> Key: HDFS-11806
> URL: https://issues.apache.org/jira/browse/HDFS-11806
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha2, 3.1.0
>Reporter: Doris Gu
> Attachments: HDFS-11806.001.patch, HDFS-11806.002.patch
>
>
> Most (sub)commands support -help or -h options for detailed help, while hdfs 
> journalnode does not. What's worse, when you use "hdfs journalnode -h" to get 
> some help, the journalnode daemon simply starts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11806) hdfs journalnode command should support meaningful --help

2018-01-05 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11806:

Affects Version/s: 2.9.0

> hdfs journalnode command should support meaningful --help
> -
>
> Key: HDFS-11806
> URL: https://issues.apache.org/jira/browse/HDFS-11806
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha2, 3.1.0
>Reporter: Doris Gu
> Attachments: HDFS-11806.001.patch, HDFS-11806.002.patch
>
>
> Most (sub)commands support -help or -h options for detailed help, while hdfs 
> journalnode does not. What's worse, when you use "hdfs journalnode -h" to get 
> some help, the journalnode daemon simply starts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11806) hdfs journalnode command should support meaningful --help

2018-01-05 Thread Doris Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312702#comment-16312702
 ] 

Doris Gu commented on HDFS-11806:
-

Note that journalnode does not parse generic options so far, so I set 
{{printGenericCommandUsage}} to {{false}} in patch 002. The modification looks 
like this:
*Before*
{quote}[root@localhost ~]# hdfs journalnode -h
2018-01-05 08:03:32,384 INFO server.JournalNode: STARTUP_MSG: 
/
STARTUP_MSG: Starting JournalNode
STARTUP_MSG:   host = localhost/172.17.0.19
STARTUP_MSG:   args = [-h]
STARTUP_MSG:   version = 3.1.0-SNAPSHOT
...{quote}

*After*
{quote}[root@localhost ~]# hdfs journalnode -h
Usage: hdfs journalnode 
JournalNode is a necessary role in a typical HA cluster with QJM. It 
allows a NameNode to read and write edit logs stored on it's local disk. Note 
that you should run an odd number of JNs, (i.e. 3, 5, 7, etc.), since edit log 
modifications must be written to a majority of JNs.

[root@localhost ~]# {quote}

[~sureshms] [~bharatviswa], could you please review this issue? Thanks very much!
Tip: due to [https://issues.apache.org/jira/browse/HADOOP-14818], you need to test 
'-h' when no journalnode daemon is already running, or you'll get a 
{{journalnode is running as process ...}} error.
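
For reviewers, a minimal sketch of the behavior the patch aims for (the method and the usage string below are illustrative assumptions, not the patch itself): detect -h/-help/--help, print the usage text, and return before the daemon is started.
{code:java}
class JournalNodeHelpSketch {
  private static final String USAGE =
      "Usage: hdfs journalnode\n"
      + "    JournalNode is a necessary role in a typical HA cluster with QJM.";

  /** Returns true if a help flag was found and usage was printed. */
  static boolean printUsageIfHelpRequested(String[] args) {
    for (String arg : args) {
      if (arg.equals("-h") || arg.equals("-help") || arg.equals("--help")) {
        System.out.println(USAGE);
        // printGenericCommandUsage(System.out) is intentionally skipped, since
        // the journalnode command does not parse generic options yet (see above).
        return true;
      }
    }
    return false;   // no help flag: proceed to start the daemon as usual
  }
}
{code}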

> hdfs journalnode command should support meaningful --help
> -
>
> Key: HDFS-11806
> URL: https://issues.apache.org/jira/browse/HDFS-11806
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 3.0.0-alpha2, 3.1.0
>Reporter: Doris Gu
> Attachments: HDFS-11806.001.patch, HDFS-11806.002.patch
>
>
> Most (sub)commands support -help or -h options for detailed help, while hdfs 
> journalnode does not. What's worse, when you use "hdfs journalnode -h" to get 
> some help, the journalnode daemon simply starts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11806) hdfs journalnode command should support meaningful --help

2018-01-04 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11806:

Status: Open  (was: Patch Available)

> hdfs journalnode command should support meaningful --help
> -
>
> Key: HDFS-11806
> URL: https://issues.apache.org/jira/browse/HDFS-11806
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2, 2.8.0, 3.1.0
>Reporter: Doris Gu
> Attachments: HDFS-11806.001.patch, HDFS-11806.002.patch
>
>
> Most (sub)commands support -help or -h options for detailed help, while hdfs 
> journalnode does not. What's worse, when you use "hdfs journalnode -h" to get 
> some help, the journalnode daemon simply starts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11806) hdfs journalnode command should support meaningful --help

2018-01-04 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11806:

Status: Patch Available  (was: Open)

> hdfs journalnode command should support meaningful --help
> -
>
> Key: HDFS-11806
> URL: https://issues.apache.org/jira/browse/HDFS-11806
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2, 2.8.0, 3.1.0
>Reporter: Doris Gu
> Attachments: HDFS-11806.001.patch, HDFS-11806.002.patch
>
>
> Most (sub)commands support -help or -h options for detailed help, while hdfs 
> journalnode does not. What's worse, when you use "hdfs journalnode -h" to get 
> some help, the journalnode daemon simply starts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11806) hdfs journalnode command should support meaningful --help

2018-01-04 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11806:

Attachment: HDFS-11806.002.patch

> hdfs journalnode command should support meaningful --help
> -
>
> Key: HDFS-11806
> URL: https://issues.apache.org/jira/browse/HDFS-11806
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 3.0.0-alpha2, 3.1.0
>Reporter: Doris Gu
> Attachments: HDFS-11806.001.patch, HDFS-11806.002.patch
>
>
> Most (sub)commands support -help or -h options for detailed help, while hdfs 
> journalnode does not. What's worse, when you use "hdfs journalnode -h" to get 
> some help, the journalnode daemon simply starts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11806) hdfs journalnode command should support meaningful --help

2018-01-04 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11806:

Affects Version/s: 3.1.0

> hdfs journalnode command should support meaningful --help
> -
>
> Key: HDFS-11806
> URL: https://issues.apache.org/jira/browse/HDFS-11806
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 3.0.0-alpha2, 3.1.0
>Reporter: Doris Gu
> Attachments: HDFS-11806.001.patch
>
>
> Most (sub)commands support -help or -h options for detailed help, while hdfs 
> journalnode does not. What's worse, when you use "hdfs journalnode -h" to get 
> some help, the journalnode daemon simply starts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12561) FavoredNodes should shuffle if more than replicas and make writer first if present at BlockPlacementPolicyDefault#chooseTarget

2017-09-28 Thread Doris Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16185376#comment-16185376
 ] 

Doris Gu commented on HDFS-12561:
-

I reported the same issue last year; please check:
https://issues.apache.org/jira/browse/HDFS-11104 

> FavoredNodes should shuffle if more than replicas and make writer first if 
> present at BlockPlacementPolicyDefault#chooseTarget
> --
>
> Key: HDFS-12561
> URL: https://issues.apache.org/jira/browse/HDFS-12561
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Reporter: DENG FEI
> Attachments: HDFS-12561-trunk-001.patch
>
>
> As the default behavior, chooseTarget picks from the front of the favored-nodes 
> list, which may lead to imbalance (IO/storage).
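
As a rough illustration of what the summary proposes (a sketch under assumptions, not an actual patch: real code would work with DatanodeDescriptor objects rather than plain strings): shuffle the favored nodes so the head of the list is not always chosen, and keep the writer's node first when it is present.
{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class FavoredNodesSketch {
  static List<String> orderFavoredNodes(List<String> favoredNodes, String writer) {
    List<String> shuffled = new ArrayList<>(favoredNodes);
    Collections.shuffle(shuffled);          // avoid always picking the front of the list
    if (writer != null && shuffled.remove(writer)) {
      shuffled.add(0, writer);              // writer-local replica still goes first
    }
    return shuffled;
  }
}
{code}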



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12561) FavoredNodes should shuffle if more than replicas and make writer first if present at BlockPlacementPolicyDefault#chooseTarget

2017-09-28 Thread Doris Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16185223#comment-16185223
 ] 

Doris Gu commented on HDFS-12561:
-

I'm very interested in this; please assign it to me.

> FavoredNodes should shuffle if more than replicas and make writer first if 
> present at BlockPlacementPolicyDefault#chooseTarget
> --
>
> Key: HDFS-12561
> URL: https://issues.apache.org/jira/browse/HDFS-12561
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Reporter: DENG FEI
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2139) Fast copy for HDFS.

2017-08-21 Thread Doris Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136200#comment-16136200
 ] 

Doris Gu commented on HDFS-2139:


[~mopishv0] Could you please share the plan for this issue? If there is none, I would 
be glad to learn about any practical experience with this tool or its test results.
By the way, one more question: you mentioned that "hard-links at the HDFS file level 
won't work when copying files between two namespaces (with the same datanodes) in 
federation"; could you please explain this in more detail?
Thanks very much!

> Fast copy for HDFS.
> ---
>
> Key: HDFS-2139
> URL: https://issues.apache.org/jira/browse/HDFS-2139
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Pritam Damania
>Assignee: Rituraj
> Attachments: HDFS-2139-For-2.7.1.patch, HDFS-2139.patch, 
> HDFS-2139.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> There is a need to perform fast file copy on HDFS. The fast copy mechanism 
> for a file works as follows:
> 1) Query metadata for all blocks of the source file.
> 2) For each block 'b' of the file, find out its datanode locations.
> 3) For each block of the file, add an empty block to the namesystem for 
> the destination file.
> 4) For each location of the block, instruct the datanode to make a local 
> copy of that block.
> 5) Once each datanode has copied over its respective blocks, it reports 
> to the namenode.
> 6) Wait for all blocks to be copied and exit.
> This would speed up the copying process considerably by removing 
> top-of-rack data transfers.
> Note: an extra improvement would be to instruct the datanode to create a 
> hard link to the block file if we are copying a block on the same datanode.
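
An outline of that flow in code, with hypothetical stubs (this is not one of the attached patches; every interface and method name below is an assumption made for illustration):
{code:java}
import java.util.List;

/** Hypothetical stand-ins for the metadata and datanode operations listed above. */
interface FastCopyStubs {
  List<String> blocksOf(String src);                 // step 1: blocks of the source file
  List<String> locationsOf(String blockId);          // step 2: datanodes holding a block
  String addEmptyBlock(String dst);                  // step 3: empty block for the destination
  void copyBlockLocally(String datanode, String srcBlock, String dstBlock);  // step 4
  void waitUntilAllReported(String dst);             // steps 5-6: wait for datanode reports
}

class FastCopySketch {
  static void fastCopy(FastCopyStubs hdfs, String src, String dst) {
    for (String srcBlock : hdfs.blocksOf(src)) {
      String dstBlock = hdfs.addEmptyBlock(dst);
      for (String datanode : hdfs.locationsOf(srcBlock)) {
        // Local copy (or hard link) on the datanode: no top-of-rack transfer needed.
        hdfs.copyBlockLocally(datanode, srcBlock, dstBlock);
      }
    }
    hdfs.waitUntilAllReported(dst);
  }
}
{code}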



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11753) Multiple JournalNode Daemons Coexist

2017-05-11 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11753:

Summary: Multiple JournalNode Daemons Coexist  (was: Make Some Enhancements 
about JournalNode Daemon )

> Multiple JournalNode Daemons Coexist
> 
>
> Key: HDFS-11753
> URL: https://issues.apache.org/jira/browse/HDFS-11753
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node
>Affects Versions: 3.0.0-alpha2, 2.8.1, 2.8.2
>Reporter: Doris Gu
> Attachments: HDFS-11753.001.patch
>
>
> Add an exception catch and termination. If I start the journalnode multiple times, I 
> get multiple journalnode daemons that don't work because they are configured with the same port.
> {quote}[hdfs@localhost ~]$ jps
> *10107 JournalNode*
> *46023 JournalNode*
> 57944 NameNode
> 46539 Jps
> 57651 DFSZKFailoverController
> 57909 DataNode
> *57739 JournalNode*
> *45721 JournalNode*{quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11753) Make Some Enhancements about JournalNode Daemon

2017-05-11 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11753:

Description: 
Add an exception catch and termination. If I start the journalnode multiple times, I 
get multiple journalnode daemons that don't work because they are configured with the same port.
{quote}[hdfs@localhost ~]$ jps
*10107 JournalNode*
*46023 JournalNode*
57944 NameNode
46539 Jps
57651 DFSZKFailoverController
57909 DataNode
*57739 JournalNode*
*45721 JournalNode*{quote}

  was:
1. Add support for -h. Right now, if I use *hdfs journalnode -h*, it simply starts 
the journalnode daemon, but generally speaking I just want to look at the usage.

2. Add an exception catch and termination. If I start the journalnode with different 
pid directories configured, I get several journalnode daemons that don't work because 
they are configured with the same port.
{quote}[hdfs@localhost ~]$ jps
*10107 JournalNode*
*46023 JournalNode*
57944 NameNode
46539 Jps
57651 DFSZKFailoverController
57909 DataNode
*57739 JournalNode*
*45721 JournalNode*{quote}


> Make Some Enhancements about JournalNode Daemon 
> 
>
> Key: HDFS-11753
> URL: https://issues.apache.org/jira/browse/HDFS-11753
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node
>Affects Versions: 3.0.0-alpha2, 2.8.1, 2.8.2
>Reporter: Doris Gu
> Attachments: HDFS-11753.001.patch
>
>
> Add an exception catch and termination. If I start the journalnode multiple times, I 
> get multiple journalnode daemons that don't work because they are configured with the same port.
> {quote}[hdfs@localhost ~]$ jps
> *10107 JournalNode*
> *46023 JournalNode*
> 57944 NameNode
> 46539 Jps
> 57651 DFSZKFailoverController
> 57909 DataNode
> *57739 JournalNode*
> *45721 JournalNode*{quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11753) Make Some Enhancements about JournalNode Daemon

2017-05-11 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11753:

Affects Version/s: 2.8.2
   2.8.1

> Make Some Enhancements about JournalNode Daemon 
> 
>
> Key: HDFS-11753
> URL: https://issues.apache.org/jira/browse/HDFS-11753
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: journal-node
>Affects Versions: 3.0.0-alpha2, 2.8.1, 2.8.2
>Reporter: Doris Gu
> Attachments: HDFS-11753.001.patch
>
>
> 1. Add support for -h. Right now, if I use *hdfs journalnode -h*, it simply starts 
> the journalnode daemon, but generally speaking I just want to look at the usage.
> 2. Add an exception catch and termination. If I start the journalnode with different 
> pid directories configured, I get several journalnode daemons that don't work 
> because they are configured with the same port.
> {quote}[hdfs@localhost ~]$ jps
> *10107 JournalNode*
> *46023 JournalNode*
> 57944 NameNode
> 46539 Jps
> 57651 DFSZKFailoverController
> 57909 DataNode
> *57739 JournalNode*
> *45721 JournalNode*{quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11753) Make Some Enhancements about JournalNode Daemon

2017-05-11 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11753:

Issue Type: Bug  (was: Improvement)

> Make Some Enhancements about JournalNode Daemon 
> 
>
> Key: HDFS-11753
> URL: https://issues.apache.org/jira/browse/HDFS-11753
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node
>Affects Versions: 3.0.0-alpha2, 2.8.1, 2.8.2
>Reporter: Doris Gu
> Attachments: HDFS-11753.001.patch
>
>
> 1. Add support for -h. Right now, if I use *hdfs journalnode -h*, it simply starts 
> the journalnode daemon, but generally speaking I just want to look at the usage.
> 2. Add an exception catch and termination. If I start the journalnode with different 
> pid directories configured, I get several journalnode daemons that don't work 
> because they are configured with the same port.
> {quote}[hdfs@localhost ~]$ jps
> *10107 JournalNode*
> *46023 JournalNode*
> 57944 NameNode
> 46539 Jps
> 57651 DFSZKFailoverController
> 57909 DataNode
> *57739 JournalNode*
> *45721 JournalNode*{quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11753) Make Some Enhancements about JournalNode Daemon

2017-05-11 Thread Doris Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007576#comment-16007576
 ] 

Doris Gu commented on HDFS-11753:
-

I ran more tests these days, and the conclusion about multiple JournalNode daemons is:

*1. Branch-2 (e.g. 2.8.1, 2.7.3, 2.6.0) does have the bug*
{code:title=A. First a normal environment.|borderStyle=solid}
hdfs@localhost:~> jps
46453 DFSZKFailoverController
5119 Jps
4311 JournalNode
46859 NameNode
46888 DataNode
{code}

{code:title=B. Start the jn once more: it hangs, while nn or dn don't have this problem.|borderStyle=solid}
hdfs@localhost:~> hdfs journalnode
2017-05-12 10:32:17,291 INFO 
org.apache.hadoop.hdfs.qjournal.server.JournalNode: STARTUP_MSG: 
/
STARTUP_MSG: Starting JournalNode
STARTUP_MSG:   host = localhost/127.0.0.1
STARTUP_MSG:   args = []
..
2017-05-12 10:32:18,571 INFO org.apache.hadoop.http.HttpServer2: 
HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: 0.0.0.0:8480
at 
org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:919)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:856)
at 
org.apache.hadoop.hdfs.qjournal.server.JournalNodeHttpServer.start(JournalNodeHttpServer.java:69)
at 
org.apache.hadoop.hdfs.qjournal.server.JournalNode.start(JournalNode.java:163)
at 
org.apache.hadoop.hdfs.qjournal.server.JournalNode.run(JournalNode.java:137)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at 
org.apache.hadoop.hdfs.qjournal.server.JournalNode.main(JournalNode.java:310)
Caused by: java.net.BindException: address in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at 
org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:914)
... 7 more
Exception in thread "main" java.net.BindException: Port in use: 0.0.0.0:8480
at 
org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:919)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:856)
at 
org.apache.hadoop.hdfs.qjournal.server.JournalNodeHttpServer.start(JournalNodeHttpServer.java:69)
at 
org.apache.hadoop.hdfs.qjournal.server.JournalNode.start(JournalNode.java:163)
at 
org.apache.hadoop.hdfs.qjournal.server.JournalNode.run(JournalNode.java:137)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at 
org.apache.hadoop.hdfs.qjournal.server.JournalNode.main(JournalNode.java:310)
Caused by: java.net.BindException: address in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at 
org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:914)
... 7 more

{code}

{code:title=C. Get multiple jn daemons.|borderStyle=solid}
hdfs@localhost:~> jps
45930 JournalNode
46453 DFSZKFailoverController
4311 JournalNode
46305 Jps
46859 NameNode
46888 DataNode
{code}

{code:title=Appendix. Abnormal jn thread dump.|borderStyle=solid}
hdfs@localhost:~> jstack 45930
2017-05-12 10:42:52
Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.55-b03 mixed mode):

"Attach Listener" daemon prio=10 tid=0x7f87e478b800 nid=0x110a3 waiting on 
condition [0x]
   java.lang.Thread.State: RUNNABLE

"DestroyJavaVM" prio=10 tid=0x7f87e400f000 nid=0xb392 waiting on condition 
[0x]
   java.lang.Thread.State: RUNNABLE

"pool-1-thread-1" prio=10 tid=0x7f87e4a2e000 nid=0xb3ae waiting on 
condition [0x7f87d9a92000]
   java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0xef60b7a8> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
at 

[jira] [Updated] (HDFS-11806) hdfs journalnode command should support meaningful --help

2017-05-11 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11806:

Affects Version/s: 2.8.0

> hdfs journalnode command should support meaningful --help
> -
>
> Key: HDFS-11806
> URL: https://issues.apache.org/jira/browse/HDFS-11806
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Doris Gu
> Attachments: HDFS-11806.001.patch
>
>
> Most (sub)commands support -help or -h options for detailed help, while hdfs 
> journalnode does not. What's worse, when you use "hdfs journalnode -h" to get 
> some help, the journalnode daemon simply starts.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11806) hdfs journalnode command should support meaningful --help

2017-05-11 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11806:

Status: Patch Available  (was: Open)

> hdfs journalnode command should support meaningful --help
> -
>
> Key: HDFS-11806
> URL: https://issues.apache.org/jira/browse/HDFS-11806
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
> Attachments: HDFS-11806.001.patch
>
>
> Most (sub)commands support -help or -h options for detailed help, while hdfs 
> journalnode does not. What's worse, when you use "hdfs journalnode -h" to get 
> some help, the journalnode daemon simply starts.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11806) hdfs journalnode command should support meaningful --help

2017-05-11 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11806:

Attachment: HDFS-11806.001.patch

> hdfs journalnode command should support meaningful --help
> -
>
> Key: HDFS-11806
> URL: https://issues.apache.org/jira/browse/HDFS-11806
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
> Attachments: HDFS-11806.001.patch
>
>
> Most (sub)commands support -help or -h options for detailed help, while hdfs 
> journalnode does not. What's worse, when you use "hdfs journalnode -h" to get 
> some help, the journalnode daemon simply starts.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11806) hdfs journalnode command should support meaningful --help

2017-05-11 Thread Doris Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006100#comment-16006100
 ] 

Doris Gu edited comment on HDFS-11806 at 5/11/17 8:41 AM:
--

 Since "hdfs journalnode" only has general options, I add a brief statement 
about journalnode followed the style of secondary namenode.
{quote}[root@localhost ~]# hdfs journalnode -h
Usage: hdfs journalnode 
JournalNode is a necessary role in a typical HA cluster with QJM. It 
allows a NameNode to read and write edit logs stored on it's local disk. Note 
that you should run an odd number of JNs, (i.e. 3, 5, 7, etc.), since edit log 
modifications must be written to a majority of JNs.

Generic options supported are
-conf  specify an application configuration file
-D 

[jira] [Commented] (HDFS-11806) hdfs journalnode command should support meaningful --help

2017-05-11 Thread Doris Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006100#comment-16006100
 ] 

Doris Gu commented on HDFS-11806:
-

 Since "hdfs journalnode" only has general options, I add a brief statement 
about journalnode followed the style of secondary namenode.
{quote}[root@localhost ~]# hdfs journalnode -h
Usage: hdfs journalnode 
JournalNode is a necessary role in a typical HA cluster with QJM. It 
allows a NameNode to read and write edit logs stored on it's local disk. Note 
that you should run an odd number of JNs, (i.e. 3, 5, 7, etc.), since edit log 
modifications must be written to a majority of JNs.

Generic options supported are
-conf  specify an application configuration file
-D 

[jira] [Commented] (HDFS-11806) hdfs journalnode command should support meaningful --help

2017-05-11 Thread Doris Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006086#comment-16006086
 ] 

Doris Gu commented on HDFS-11806:
-

{quote}hdfs@localhost:~/version/hadoop-3.0.0-alpha2/bin$ ./hdfs journalnode -h
2017-05-11 16:30:26,128 INFO server.JournalNode: STARTUP_MSG: 
/
STARTUP_MSG: Starting JournalNode
STARTUP_MSG:   user = hdfs
STARTUP_MSG:   host = localhost/127.0.0.1
STARTUP_MSG:   args = [-h]
STARTUP_MSG:   version = 3.0.0-alpha2{quote}
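
For illustration only, a minimal sketch of the kind of early help check that would avoid this; the class name, usage text, and option spellings are assumptions, not the actual patch:
{code:java}
// Sketch only: check for a help flag before any daemon initialization.
// The class name, usage text, and option spellings are illustrative.
public class JournalNodeHelpCheckSketch {
  private static final String USAGE = "Usage: hdfs journalnode";

  public static void main(String[] args) {
    for (String arg : args) {
      if ("-h".equals(arg) || "-help".equals(arg) || "--help".equals(arg)) {
        // Print the usage and return before the daemon banner or servers start.
        System.out.println(USAGE);
        return;
      }
    }
    // ... continue with the normal JournalNode startup path here ...
  }
}
{code}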

> hdfs journalnode command should support meaningful --help
> -
>
> Key: HDFS-11806
> URL: https://issues.apache.org/jira/browse/HDFS-11806
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
>
> Most (sub) commands support the -help or -h option for detailed help, while hdfs 
> journalnode does not. What's worse, when you use "hdfs journalnode -h" to get 
> some help, the journalnode daemon simply starts.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11806) hdfs journalnode command should support meaningful --help

2017-05-11 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11806:

Description: Most (sub) commands support -help or -h options for detailed 
help while hdfs journalnode does not. What's worse, when you use "hdfs 
journalnode -h" to get some help, journalnode daemon starts straightly.  (was: 
Most (sub) commands support -help or -h options for detailed help while hdfs 
journalnode does not. What's worse, when you use "hdfs journalnode -h" to
get some help, journalnode daemon starts straightly.)

> hdfs journalnode command should support meaningful --help
> -
>
> Key: HDFS-11806
> URL: https://issues.apache.org/jira/browse/HDFS-11806
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
>
> Most (sub) commands support the -help or -h option for detailed help, while hdfs 
> journalnode does not. What's worse, when you use "hdfs journalnode -h" to get 
> some help, the journalnode daemon simply starts.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11806) hdfs journalnode command should support meaningful --help

2017-05-11 Thread Doris Gu (JIRA)
Doris Gu created HDFS-11806:
---

 Summary: hdfs journalnode command should support meaningful --help
 Key: HDFS-11806
 URL: https://issues.apache.org/jira/browse/HDFS-11806
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0-alpha2
Reporter: Doris Gu


Most (sub) commands support the -help or -h option for detailed help, while hdfs 
journalnode does not. What's worse, when you use "hdfs journalnode -h" to
get some help, the journalnode daemon simply starts.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11753) Make Some Enhancements about JournalNode Daemon

2017-05-08 Thread Doris Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16001008#comment-16001008
 ] 

Doris Gu commented on HDFS-11753:
-

Some explanation about the QA results; suggestions are welcome.
1. test4tests
The patch is not that complicated and can be verified by the simple steps below.
1.1 Added -h support for *hdfs journalnode*. Since it only has generic options, I 
added a brief statement about the journalnode, following the style of the 
secondary namenode.
1.2 I ran *hdfs journalnode* with different config files, i.e. started the 
journalnode several times on one node, and got several journalnode daemons. 
Details will be pasted soon.
2. checkstyle
Very meaningful; I plan to make a new patch soon, but I haven't yet seen how the 
warnings relate to my patch. Any opinions?

> Make Some Enhancements about JournalNode Daemon 
> 
>
> Key: HDFS-11753
> URL: https://issues.apache.org/jira/browse/HDFS-11753
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: journal-node
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
> Attachments: HDFS-11753.001.patch
>
>
> 1. Add -h support. Right now, if I use *hdfs journalnode -h*, the journalnode 
> daemon simply starts. But generally speaking, I just want to look at the 
> usage.
> 2. Add exception catching and termination. If I start the journalnode with 
> different PID directories configured, I get several journalnode daemons that 
> don't work, because they are all configured with the same port.
> {quote}[hdfs@localhost ~]$ jps
> *10107 JournalNode*
> *46023 JournalNode*
> 57944 NameNode
> 46539 Jps
> 57651 DFSZKFailoverController
> 57909 DataNode
> *57739 JournalNode*
> *45721 JournalNode*{quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11753) Make Some Enhancements about JournalNode Daemon

2017-05-07 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11753:

Status: Patch Available  (was: Open)

> Make Some Enhancements about JournalNode Daemon 
> 
>
> Key: HDFS-11753
> URL: https://issues.apache.org/jira/browse/HDFS-11753
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: journal-node
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
> Attachments: HDFS-11753.001.patch
>
>
> 1. Add -h support. Right now, if I use *hdfs journalnode -h*, the journalnode 
> daemon simply starts. But generally speaking, I just want to look at the 
> usage.
> 2. Add exception catching and termination. If I start the journalnode with 
> different PID directories configured, I get several journalnode daemons that 
> don't work, because they are all configured with the same port.
> {quote}[hdfs@localhost ~]$ jps
> *10107 JournalNode*
> *46023 JournalNode*
> 57944 NameNode
> 46539 Jps
> 57651 DFSZKFailoverController
> 57909 DataNode
> *57739 JournalNode*
> *45721 JournalNode*{quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11751) DFSZKFailoverController daemon exits with wrong status code

2017-05-07 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11751:

Status: Patch Available  (was: Open)

> DFSZKFailoverController daemon exits with wrong status code
> ---
>
> Key: HDFS-11751
> URL: https://issues.apache.org/jira/browse/HDFS-11751
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
> Attachments: HDFS-11751.001.patch
>
>
> 1. Use *hdfs zkfc* to start a zkfc daemon.
> 2. The zkfc fails to start, but the exit status still reports success.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11751) DFSZKFailoverController daemon exits with wrong status code

2017-05-07 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11751:

Hadoop Flags: Incompatible change

> DFSZKFailoverController daemon exits with wrong status code
> ---
>
> Key: HDFS-11751
> URL: https://issues.apache.org/jira/browse/HDFS-11751
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
> Attachments: HDFS-11751.001.patch
>
>
> 1. Use *hdfs zkfc* to start a zkfc daemon.
> 2. The zkfc fails to start, but the exit status still reports success.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11751) DFSZKFailoverController daemon exits with wrong status code

2017-05-07 Thread Doris Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16000160#comment-16000160
 ] 

Doris Gu commented on HDFS-11751:
-

Paste some log:
{quote}2017-05-04 16:06:24,720 FATAL 
org.apache.hadoop.hdfs.tools.DFSZKFailoverController: Got a fatal error, 
exiting now
java.net.BindException: Problem binding to [localhost:8019] 
java.net.BindException: address already in use; For more details see:  
http://wiki.apache.org/hadoop/BindException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
at org.apache.hadoop.ipc.Server.bind(Server.java:431)
at org.apache.hadoop.ipc.Server$Listener.(Server.java:580)
at org.apache.hadoop.ipc.Server.(Server.java:2221)
at org.apache.hadoop.ipc.RPC$Server.(RPC.java:951)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:534)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
at org.apache.hadoop.ha.ZKFCRpcServer.(ZKFCRpcServer.java:58)
at 
org.apache.hadoop.ha.ZKFailoverController.initRPC(ZKFailoverController.java:341)
at 
org.apache.hadoop.hdfs.tools.DFSZKFailoverController.initRPC(DFSZKFailoverController.java:152)
at 
org.apache.hadoop.ha.ZKFailoverController.doRun(ZKFailoverController.java:249)
at 
org.apache.hadoop.ha.ZKFailoverController.access$000(ZKFailoverController.java:72)
at 
org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:187)
at 
org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:183)
at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
at 
org.apache.hadoop.ha.ZKFailoverController.run(ZKFailoverController.java:183)
at 
org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:181)
Caused by: java.net.BindException: address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.apache.hadoop.ipc.Server.bind(Server.java:414)
... 16 more
[hdfs@localhost ~]$ echo $?
0{quote}


> DFSZKFailoverController daemon exits with wrong status code
> ---
>
> Key: HDFS-11751
> URL: https://issues.apache.org/jira/browse/HDFS-11751
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
> Attachments: HDFS-11751.001.patch
>
>
> 1. Use *hdfs zkfc* to start a zkfc daemon.
> 2. The zkfc fails to start, but the exit status still reports success.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11751) DFSZKFailoverController daemon exits with wrong status code

2017-05-07 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11751:

Description: 
1.use *hdfs zkfc* to start a zkfc daemon;
2.zkfc failed to start, but we got the successful code.

  was:
1.use *hdfs zkfc* to start a zkfc daemon;
2.zkfc failed to start for some reason, but we got the successful code:
{quote}2017-05-04 16:06:24,720 FATAL 
org.apache.hadoop.hdfs.tools.DFSZKFailoverController: Got a fatal error, 
exiting now
java.net.BindException: Problem binding to [localhost:8019] 
java.net.BindException: address already in use; For more details see:  
http://wiki.apache.org/hadoop/BindException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
at org.apache.hadoop.ipc.Server.bind(Server.java:431)
at org.apache.hadoop.ipc.Server$Listener.(Server.java:580)
at org.apache.hadoop.ipc.Server.(Server.java:2221)
at org.apache.hadoop.ipc.RPC$Server.(RPC.java:951)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:534)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
at org.apache.hadoop.ha.ZKFCRpcServer.(ZKFCRpcServer.java:58)
at 
org.apache.hadoop.ha.ZKFailoverController.initRPC(ZKFailoverController.java:341)
at 
org.apache.hadoop.hdfs.tools.DFSZKFailoverController.initRPC(DFSZKFailoverController.java:152)
at 
org.apache.hadoop.ha.ZKFailoverController.doRun(ZKFailoverController.java:249)
at 
org.apache.hadoop.ha.ZKFailoverController.access$000(ZKFailoverController.java:72)
at 
org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:187)
at 
org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:183)
at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
at 
org.apache.hadoop.ha.ZKFailoverController.run(ZKFailoverController.java:183)
at 
org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:181)
Caused by: java.net.BindException: address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.apache.hadoop.ipc.Server.bind(Server.java:414)
... 16 more
[hdfs@localhost ~]$ echo $?
0{quote}



> DFSZKFailoverController daemon exits with wrong status code
> ---
>
> Key: HDFS-11751
> URL: https://issues.apache.org/jira/browse/HDFS-11751
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
> Attachments: HDFS-11751.001.patch
>
>
> 1.use *hdfs zkfc* to start a zkfc daemon;
> 2.zkfc failed to start, but we got the successful code.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11753) Make Some Enhancements about JournalNode Daemon

2017-05-05 Thread Doris Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15997847#comment-15997847
 ] 

Doris Gu commented on HDFS-11753:
-

Add relationship with https://issues.apache.org/jira/browse/HDFS-3723

> Make Some Enhancements about JournalNode Daemon 
> 
>
> Key: HDFS-11753
> URL: https://issues.apache.org/jira/browse/HDFS-11753
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: journal-node
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
> Attachments: HDFS-11753.001.patch
>
>
> 1. Add -h support. Right now, if I use *hdfs journalnode -h*, the journalnode 
> daemon simply starts. But generally speaking, I just want to look at the 
> usage.
> 2. Add exception catching and termination. If I start the journalnode with 
> different PID directories configured, I get several journalnode daemons that 
> don't work, because they are all configured with the same port.
> {quote}[hdfs@localhost ~]$ jps
> *10107 JournalNode*
> *46023 JournalNode*
> 57944 NameNode
> 46539 Jps
> 57651 DFSZKFailoverController
> 57909 DataNode
> *57739 JournalNode*
> *45721 JournalNode*{quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11753) Make Some Enhancements about JournalNode Daemon

2017-05-04 Thread Doris Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15996428#comment-15996428
 ] 

Doris Gu commented on HDFS-11753:
-

Added some enhancements to the journalnode; please check, thanks in advance.

> Make Some Enhancements about JournalNode Daemon 
> 
>
> Key: HDFS-11753
> URL: https://issues.apache.org/jira/browse/HDFS-11753
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: journal-node
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
> Attachments: HDFS-11753.001.patch
>
>
> 1. Add -h support. Right now, if I use *hdfs journalnode -h*, the journalnode 
> daemon simply starts. But generally speaking, I just want to look at the 
> usage.
> 2. Add exception catching and termination. If I start the journalnode with 
> different PID directories configured, I get several journalnode daemons that 
> don't work, because they are all configured with the same port.
> {quote}[hdfs@localhost ~]$ jps
> *10107 JournalNode*
> *46023 JournalNode*
> 57944 NameNode
> 46539 Jps
> 57651 DFSZKFailoverController
> 57909 DataNode
> *57739 JournalNode*
> *45721 JournalNode*{quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11753) Make Some Enhancements about JournalNode Daemon

2017-05-04 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11753:

Description: 
1.Add support -h. Right now, if I use *hdfs journalnode -h* , I straightly 
start journalnode daemon. But generally speaking, I just want to look at the 
usage.

2.Add exception catch and termination. If I start journalnode with different 
directions stored pids, I get servel journalnode daemons that don't work for I 
config the same port.
{quote}[hdfs@localhost ~]$ jps
*10107 JournalNode*
*46023 JournalNode*
57944 NameNode
46539 Jps
57651 DFSZKFailoverController
57909 DataNode
*57739 JournalNode*
*45721 JournalNode*{quote}

  was:
1.Add support -h. Right now, if I use *hdfs journalnode -h* , I straightly 
start journalnode daemon. But generally speaking, I just want to look at the 
usage.

2.Add exception catch and termination. If I start journalnode with different 
directions stored pids, I get servel journalnode daemons that don't work for I 
config the same port.


> Make Some Enhancements about JournalNode Daemon 
> 
>
> Key: HDFS-11753
> URL: https://issues.apache.org/jira/browse/HDFS-11753
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: journal-node
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
> Attachments: HDFS-11753.001.patch
>
>
> 1.Add support -h. Right now, if I use *hdfs journalnode -h* , I straightly 
> start journalnode daemon. But generally speaking, I just want to look at the 
> usage.
> 2.Add exception catch and termination. If I start journalnode with different 
> directions stored pids, I get servel journalnode daemons that don't work for 
> I config the same port.
> {quote}[hdfs@localhost ~]$ jps
> *10107 JournalNode*
> *46023 JournalNode*
> 57944 NameNode
> 46539 Jps
> 57651 DFSZKFailoverController
> 57909 DataNode
> *57739 JournalNode*
> *45721 JournalNode*{quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11753) Make Some Enhancements about JournalNode Daemon

2017-05-04 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11753:

Description: 
1.Add support -h. Right now, if I use *hdfs journalnode -h* , I straightly 
start journalnode daemon. But generally speaking, I just want to look at the 
usage.

2.Add exception catch and termination. If I start journalnode with different 
directions stored pids, I get servel journalnode daemons that don't work for I 
config the same port.

  was:
1.Add support -h. Right now, if I use *hdfs journalnode -h* , I straightly 
start journalnode daemon. But generally speaking, I just want to look at the 
usage.

2.Add exception catch and termination. If I start journalnode with different 
directions stored pids, I get servel journalnode daemons which I don't want for 
I config the same port.


> Make Some Enhancements about JournalNode Daemon 
> 
>
> Key: HDFS-11753
> URL: https://issues.apache.org/jira/browse/HDFS-11753
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: journal-node
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
> Attachments: HDFS-11753.001.patch
>
>
> 1.Add support -h. Right now, if I use *hdfs journalnode -h* , I straightly 
> start journalnode daemon. But generally speaking, I just want to look at the 
> usage.
> 2.Add exception catch and termination. If I start journalnode with different 
> directions stored pids, I get servel journalnode daemons that don't work for 
> I config the same port.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11753) Make Some Enhancements about JournalNode Daemon

2017-05-04 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11753:

Attachment: HDFS-11753.001.patch

> Make Some Enhancements about JournalNode Daemon 
> 
>
> Key: HDFS-11753
> URL: https://issues.apache.org/jira/browse/HDFS-11753
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: journal-node
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
> Attachments: HDFS-11753.001.patch
>
>
> 1. Add -h support. Right now, if I use *hdfs journalnode -h*, the journalnode 
> daemon simply starts. But generally speaking, I just want to look at the 
> usage.
> 2. Add exception catching and termination. If I start the journalnode with 
> different PID directories configured, I get several journalnode daemons that 
> I don't want, because they are all configured with the same port.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11753) Make Some Enhancements about JournalNode Daemon

2017-05-04 Thread Doris Gu (JIRA)
Doris Gu created HDFS-11753:
---

 Summary: Make Some Enhancements about JournalNode Daemon 
 Key: HDFS-11753
 URL: https://issues.apache.org/jira/browse/HDFS-11753
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: journal-node
Affects Versions: 3.0.0-alpha2
Reporter: Doris Gu


1. Add -h support. Right now, if I use *hdfs journalnode -h*, the journalnode 
daemon simply starts. But generally speaking, I just want to look at the 
usage.

2. Add exception catching and termination. If I start the journalnode with 
different PID directories configured, I get several journalnode daemons that 
I don't want, because they are all configured with the same port.
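
As a rough illustration of the second point (the server start methods and the exit value are placeholders, not the actual patch), the startup path could catch the bind failure and terminate instead of leaving a half-started daemon behind:
{code:java}
// Sketch only: startHttpServer()/startRpcServer() are placeholders for whatever
// the real startup path does, and the exit code value is illustrative.
import java.io.IOException;
import java.net.BindException;

public class JournalNodeStartupSketch {
  public static void main(String[] args) {
    try {
      startHttpServer();
      startRpcServer();
    } catch (BindException e) {
      // Another daemon already owns the configured port: log the error and
      // terminate instead of leaving a half-started process behind.
      System.err.println("JournalNode failed to bind: " + e.getMessage());
      System.exit(1);
    } catch (IOException e) {
      System.err.println("JournalNode failed to start: " + e.getMessage());
      System.exit(1);
    }
  }

  private static void startHttpServer() throws IOException { /* placeholder */ }
  private static void startRpcServer() throws IOException { /* placeholder */ }
}
{code}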



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11751) DFSZKFailoverController daemon exits with wrong status code

2017-05-04 Thread Doris Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15996343#comment-15996343
 ] 

Doris Gu commented on HDFS-11751:
-

Modified the retCode of zkfc so a failed start exits non-zero; please check, thanks in advance!
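
A minimal sketch of the idea (method names and the exit value are illustrative, not the actual change): a fatal error during startup must map to a non-zero process exit status.
{code:java}
// Sketch only: runZkfc() stands in for the real controller run; the point is
// that a fatal startup error must surface as a non-zero process exit status.
public class ZkfcExitCodeSketch {
  public static void main(String[] args) {
    int rc;
    try {
      rc = runZkfc(args);   // placeholder for the real controller's run()
    } catch (Exception e) {
      System.err.println("Got a fatal error, exiting now: " + e);
      rc = 1;               // never report success after a fatal error
    }
    System.exit(rc);
  }

  private static int runZkfc(String[] args) throws Exception {
    return 0; // placeholder
  }
}
{code}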

> DFSZKFailoverController daemon exits with wrong status code
> ---
>
> Key: HDFS-11751
> URL: https://issues.apache.org/jira/browse/HDFS-11751
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
> Attachments: HDFS-11751.001.patch
>
>
> 1. Use *hdfs zkfc* to start a zkfc daemon.
> 2. The zkfc fails to start for some reason, but the exit status still reports success:
> {quote}2017-05-04 16:06:24,720 FATAL 
> org.apache.hadoop.hdfs.tools.DFSZKFailoverController: Got a fatal error, 
> exiting now
> java.net.BindException: Problem binding to [localhost:8019] 
> java.net.BindException: address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
> at org.apache.hadoop.ipc.Server.bind(Server.java:431)
> at org.apache.hadoop.ipc.Server$Listener.(Server.java:580)
> at org.apache.hadoop.ipc.Server.(Server.java:2221)
> at org.apache.hadoop.ipc.RPC$Server.(RPC.java:951)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:534)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
> at org.apache.hadoop.ha.ZKFCRpcServer.(ZKFCRpcServer.java:58)
> at 
> org.apache.hadoop.ha.ZKFailoverController.initRPC(ZKFailoverController.java:341)
> at 
> org.apache.hadoop.hdfs.tools.DFSZKFailoverController.initRPC(DFSZKFailoverController.java:152)
> at 
> org.apache.hadoop.ha.ZKFailoverController.doRun(ZKFailoverController.java:249)
> at 
> org.apache.hadoop.ha.ZKFailoverController.access$000(ZKFailoverController.java:72)
> at 
> org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:187)
> at 
> org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:183)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
> at 
> org.apache.hadoop.ha.ZKFailoverController.run(ZKFailoverController.java:183)
> at 
> org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:181)
> Caused by: java.net.BindException: address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:444)
> at sun.nio.ch.Net.bind(Net.java:436)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at org.apache.hadoop.ipc.Server.bind(Server.java:414)
> ... 16 more
> [hdfs@localhost ~]$ echo $?
> 0{quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11751) DFSZKFailoverController daemon exits with wrong status code

2017-05-04 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11751:

Attachment: HDFS-11751.001.patch

> DFSZKFailoverController daemon exits with wrong status code
> ---
>
> Key: HDFS-11751
> URL: https://issues.apache.org/jira/browse/HDFS-11751
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
> Attachments: HDFS-11751.001.patch
>
>
> 1. Use *hdfs zkfc* to start a zkfc daemon.
> 2. The zkfc fails to start for some reason, but the exit status still reports success:
> {quote}2017-05-04 16:06:24,720 FATAL 
> org.apache.hadoop.hdfs.tools.DFSZKFailoverController: Got a fatal error, 
> exiting now
> java.net.BindException: Problem binding to [localhost:8019] 
> java.net.BindException: address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
> at org.apache.hadoop.ipc.Server.bind(Server.java:431)
> at org.apache.hadoop.ipc.Server$Listener.(Server.java:580)
> at org.apache.hadoop.ipc.Server.(Server.java:2221)
> at org.apache.hadoop.ipc.RPC$Server.(RPC.java:951)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:534)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
> at org.apache.hadoop.ha.ZKFCRpcServer.(ZKFCRpcServer.java:58)
> at 
> org.apache.hadoop.ha.ZKFailoverController.initRPC(ZKFailoverController.java:341)
> at 
> org.apache.hadoop.hdfs.tools.DFSZKFailoverController.initRPC(DFSZKFailoverController.java:152)
> at 
> org.apache.hadoop.ha.ZKFailoverController.doRun(ZKFailoverController.java:249)
> at 
> org.apache.hadoop.ha.ZKFailoverController.access$000(ZKFailoverController.java:72)
> at 
> org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:187)
> at 
> org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:183)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
> at 
> org.apache.hadoop.ha.ZKFailoverController.run(ZKFailoverController.java:183)
> at 
> org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:181)
> Caused by: java.net.BindException: address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:444)
> at sun.nio.ch.Net.bind(Net.java:436)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at org.apache.hadoop.ipc.Server.bind(Server.java:414)
> ... 16 more
> [hdfs@localhost ~]$ echo $?
> 0{quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11751) DFSZKFailoverController daemon exits with wrong status code

2017-05-04 Thread Doris Gu (JIRA)
Doris Gu created HDFS-11751:
---

 Summary: DFSZKFailoverController daemon exits with wrong status 
code
 Key: HDFS-11751
 URL: https://issues.apache.org/jira/browse/HDFS-11751
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: auto-failover
Affects Versions: 3.0.0-alpha2
Reporter: Doris Gu


1. Use *hdfs zkfc* to start a zkfc daemon.
2. The zkfc fails to start for some reason, but the exit status still reports success:
{quote}2017-05-04 16:06:24,720 FATAL 
org.apache.hadoop.hdfs.tools.DFSZKFailoverController: Got a fatal error, 
exiting now
java.net.BindException: Problem binding to [localhost:8019] 
java.net.BindException: address already in use; For more details see:  
http://wiki.apache.org/hadoop/BindException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
at org.apache.hadoop.ipc.Server.bind(Server.java:431)
at org.apache.hadoop.ipc.Server$Listener.(Server.java:580)
at org.apache.hadoop.ipc.Server.(Server.java:2221)
at org.apache.hadoop.ipc.RPC$Server.(RPC.java:951)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:534)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
at org.apache.hadoop.ha.ZKFCRpcServer.(ZKFCRpcServer.java:58)
at 
org.apache.hadoop.ha.ZKFailoverController.initRPC(ZKFailoverController.java:341)
at 
org.apache.hadoop.hdfs.tools.DFSZKFailoverController.initRPC(DFSZKFailoverController.java:152)
at 
org.apache.hadoop.ha.ZKFailoverController.doRun(ZKFailoverController.java:249)
at 
org.apache.hadoop.ha.ZKFailoverController.access$000(ZKFailoverController.java:72)
at 
org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:187)
at 
org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:183)
at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
at 
org.apache.hadoop.ha.ZKFailoverController.run(ZKFailoverController.java:183)
at 
org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:181)
Caused by: java.net.BindException: address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.apache.hadoop.ipc.Server.bind(Server.java:414)
... 16 more
[hdfs@localhost ~]$ echo $?
0{quote}




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11587) Spelling errors in the Java source

2017-03-28 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11587:

Attachment: HDFS-11587.001.patch

> Spelling errors in the Java source
> --
>
> Key: HDFS-11587
> URL: https://issues.apache.org/jira/browse/HDFS-11587
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Doris Gu
>Priority: Minor
> Attachments: HDFS-11587.001.patch
>
>
> Found some spelling errors.
> Examples are:
> seperated -> separated 
> seperator -> separator



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11587) Spelling errors in the Java source

2017-03-28 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11587:

Attachment: (was: HDFS-11587.001.patch)

> Spelling errors in the Java source
> --
>
> Key: HDFS-11587
> URL: https://issues.apache.org/jira/browse/HDFS-11587
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Doris Gu
>Priority: Minor
>
> Found some spelling errors.
> Examples are:
> seperated -> separated 
> seperator -> separator



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11587) Spelling errors in the Java source

2017-03-28 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11587:

Status: Open  (was: Patch Available)

> Spelling errors in the Java source
> --
>
> Key: HDFS-11587
> URL: https://issues.apache.org/jira/browse/HDFS-11587
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Doris Gu
>Priority: Minor
> Attachments: HDFS-11587.001.patch
>
>
> Found some spelling errors.
> Examples are:
> seperated -> separated 
> seperator -> separator



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11587) Spelling errors in the Java source

2017-03-28 Thread Doris Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944712#comment-15944712
 ] 

Doris Gu commented on HDFS-11587:
-

Fix some spelling errors, please check! Thanks.

> Spelling errors in the Java source
> --
>
> Key: HDFS-11587
> URL: https://issues.apache.org/jira/browse/HDFS-11587
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Doris Gu
>Priority: Minor
> Attachments: HDFS-11587.001.patch
>
>
> Found some spelling errors.
> Examples are:
> seperated -> separated 
> seperator -> separator



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11587) Spelling errors in the Java source

2017-03-28 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11587:

Status: Patch Available  (was: Open)

> Spelling errors in the Java source
> --
>
> Key: HDFS-11587
> URL: https://issues.apache.org/jira/browse/HDFS-11587
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Doris Gu
>Priority: Minor
> Attachments: HDFS-11587.001.patch
>
>
> Found some spelling errors.
> Examples are:
> seperated -> separated 
> seperator -> separator



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11587) Spelling errors in the Java source

2017-03-28 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11587:

Attachment: HDFS-11587.001.patch

> Spelling errors in the Java source
> --
>
> Key: HDFS-11587
> URL: https://issues.apache.org/jira/browse/HDFS-11587
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Doris Gu
>Priority: Minor
> Attachments: HDFS-11587.001.patch
>
>
> Found some spelling errors.
> Examples are:
> seperated -> separated 
> seperator -> separator



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11587) Spelling errors in the Java source

2017-03-28 Thread Doris Gu (JIRA)
Doris Gu created HDFS-11587:
---

 Summary: Spelling errors in the Java source
 Key: HDFS-11587
 URL: https://issues.apache.org/jira/browse/HDFS-11587
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Doris Gu
Priority: Minor


Found some spelling errors.
Examples are:
seperated -> separated 
seperator -> separator




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2017-02-10 Thread Doris Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861265#comment-15861265
 ] 

Doris Gu commented on HDFS-7285:


Thanks a lot, Andrew. I see. Given this situation, do you have any idea about 
the release time of a stable 3.0.0?

> Erasure Coding Support inside HDFS
> --
>
> Key: HDFS-7285
> URL: https://issues.apache.org/jira/browse/HDFS-7285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Weihua Jiang
>Assignee: Zhe Zhang
> Fix For: 3.0.0-alpha1
>
> Attachments: Compare-consolidated-20150824.diff, 
> Consolidated-20150707.patch, Consolidated-20150806.patch, 
> Consolidated-20150810.patch, ECAnalyzer.py, ECParser.py, 
> fsimage-analysis-20150105.pdf, HDFS-7285-Consolidated-20150911.patch, 
> HDFS-7285-initial-PoC.patch, HDFS-7285-merge-consolidated-01.patch, 
> HDFS-7285-merge-consolidated-trunk-01.patch, 
> HDFS-7285-merge-consolidated.trunk.03.patch, 
> HDFS-7285-merge-consolidated.trunk.04.patch, HDFS-bistriped.patch, 
> HDFS-EC-merge-consolidated-01.patch, HDFS-EC-Merge-PoC-20150624.patch, 
> HDFSErasureCodingDesign-20141028.pdf, HDFSErasureCodingDesign-20141217.pdf, 
> HDFSErasureCodingDesign-20150204.pdf, HDFSErasureCodingDesign-20150206.pdf, 
> HDFSErasureCodingPhaseITestPlan.pdf, 
> HDFSErasureCodingSystemTestPlan-20150824.pdf, 
> HDFSErasureCodingSystemTestReport-20150826.pdf
>
>
> Erasure Coding (EC) can greatly reduce the storage overhead without sacrificing 
> data reliability, compared to the existing HDFS 3-replica approach. For 
> example, if we use a 10+4 Reed Solomon coding, we can allow loss of 4 blocks, 
> with storage overhead only being 40%. This makes EC a quite attractive 
> alternative for big data storage, particularly for cold data. 
> Facebook had a related open source project called HDFS-RAID. It used to be 
> one of the contrib packages in HDFS but was removed in Hadoop 2.0 for 
> maintenance reasons. The drawbacks are: 1) it is on top of HDFS and depends 
> on MapReduce to do encoding and decoding tasks; 2) it can only be used for 
> cold files that are intended not to be appended anymore; 3) the pure Java EC 
> coding implementation is extremely slow in practical use. Due to these, it 
> might not be a good idea to just bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design to build EC into HDFS that 
> gets rid of any external dependencies, makes it self-contained and 
> independently maintained. This design lays the EC feature on top of the storage 
> type support and is designed to be compatible with existing HDFS features like 
> caching, snapshots, encryption, high availability, etc. This design will also 
> support different EC coding schemes, implementations and policies for 
> different deployment scenarios. By utilizing advanced libraries (e.g. Intel 
> ISA-L library), an implementation can greatly improve the performance of EC 
> encoding/decoding and makes the EC solution even more attractive. We will 
> post the design document soon. 
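
As a quick illustration of the overhead arithmetic quoted above (a toy calculation, not project code):
{code:java}
// Toy calculation of the overhead numbers quoted above; illustration only.
public class EcOverheadExample {
  public static void main(String[] args) {
    int dataBlocks = 10;   // k data blocks in an RS(10,4) scheme
    int parityBlocks = 4;  // m parity blocks; up to m lost blocks are tolerated

    double ecOverhead = 100.0 * parityBlocks / dataBlocks;  // 4/10 = 40%
    double replicaOverhead = 100.0 * (3 - 1);               // 2 extra copies = 200%

    System.out.printf("RS(%d,%d): tolerates %d lost blocks, storage overhead %.0f%%%n",
        dataBlocks, parityBlocks, parityBlocks, ecOverhead);
    System.out.printf("3-replica: tolerates 2 lost replicas, storage overhead %.0f%%%n",
        replicaOverhead);
  }
}
{code}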



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2017-02-09 Thread Doris Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859433#comment-15859433
 ] 

Doris Gu commented on HDFS-7285:


Hi there,

I planned to apply this patch (Consolidated-20150810.patch). Could anyone 
please tell me which branch this patch can be applied to?
My goal is Hadoop 2.7.3; is it possible to make it work there?

Looking forward to the answers.

Best wishes!



> Erasure Coding Support inside HDFS
> --
>
> Key: HDFS-7285
> URL: https://issues.apache.org/jira/browse/HDFS-7285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Weihua Jiang
>Assignee: Zhe Zhang
> Fix For: 3.0.0-alpha1
>
> Attachments: Compare-consolidated-20150824.diff, 
> Consolidated-20150707.patch, Consolidated-20150806.patch, 
> Consolidated-20150810.patch, ECAnalyzer.py, ECParser.py, 
> fsimage-analysis-20150105.pdf, HDFS-7285-Consolidated-20150911.patch, 
> HDFS-7285-initial-PoC.patch, HDFS-7285-merge-consolidated-01.patch, 
> HDFS-7285-merge-consolidated-trunk-01.patch, 
> HDFS-7285-merge-consolidated.trunk.03.patch, 
> HDFS-7285-merge-consolidated.trunk.04.patch, HDFS-bistriped.patch, 
> HDFS-EC-merge-consolidated-01.patch, HDFS-EC-Merge-PoC-20150624.patch, 
> HDFSErasureCodingDesign-20141028.pdf, HDFSErasureCodingDesign-20141217.pdf, 
> HDFSErasureCodingDesign-20150204.pdf, HDFSErasureCodingDesign-20150206.pdf, 
> HDFSErasureCodingPhaseITestPlan.pdf, 
> HDFSErasureCodingSystemTestPlan-20150824.pdf, 
> HDFSErasureCodingSystemTestReport-20150826.pdf
>
>
> Erasure Coding (EC) can greatly reduce the storage overhead without sacrificing 
> data reliability, compared to the existing HDFS 3-replica approach. For 
> example, if we use a 10+4 Reed Solomon coding, we can allow loss of 4 blocks, 
> with storage overhead only being 40%. This makes EC a quite attractive 
> alternative for big data storage, particularly for cold data. 
> Facebook had a related open source project called HDFS-RAID. It used to be 
> one of the contrib packages in HDFS but was removed in Hadoop 2.0 for 
> maintenance reasons. The drawbacks are: 1) it is on top of HDFS and depends 
> on MapReduce to do encoding and decoding tasks; 2) it can only be used for 
> cold files that are intended not to be appended anymore; 3) the pure Java EC 
> coding implementation is extremely slow in practical use. Due to these, it 
> might not be a good idea to just bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design to build EC into HDFS that 
> gets rid of any external dependencies, makes it self-contained and 
> independently maintained. This design lays the EC feature on top of the storage 
> type support and is designed to be compatible with existing HDFS features like 
> caching, snapshots, encryption, high availability, etc. This design will also 
> support different EC coding schemes, implementations and policies for 
> different deployment scenarios. By utilizing advanced libraries (e.g. Intel 
> ISA-L library), an implementation can greatly improve the performance of EC 
> encoding/decoding and makes the EC solution even more attractive. We will 
> post the design document soon. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11104) BlockPlacementPolicyDefault choose favoredNodes in turn which may cause imbalance

2017-01-19 Thread Doris Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831086#comment-15831086
 ] 

Doris Gu commented on HDFS-11104:
-

I've already been working on this; is it possible to assign it to me?

> BlockPlacementPolicyDefault choose favoredNodes in turn which may cause 
> imbalance
> -
>
> Key: HDFS-11104
> URL: https://issues.apache.org/jira/browse/HDFS-11104
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Doris Gu
>Assignee: Ajith S
> Attachments: HDFS-11104.patch
>
>
> If the client passes favoredNodes when it writes files into HDFS, chooseTarget 
> in BlockPlacementPolicyDefault tries the favored nodes strictly in order:
> {quote}
> DatanodeStorageInfo[] chooseTarget(String src,
>   int numOfReplicas,
>   Node writer,
>   Set excludedNodes,
>   long blocksize,
>   List favoredNodes,
>   BlockStoragePolicy storagePolicy) {
> try {
> ...
>*for (int i = 0; i < favoredNodes.size() && results.size() < 
> numOfReplicas; i++)* {
> DatanodeDescriptor favoredNode = favoredNodes.get(i);
> // Choose a single node which is local to favoredNode.
> // 'results' is updated within chooseLocalNode
> final DatanodeStorageInfo target = chooseLocalStorage(favoredNode,
> favoriteAndExcludedNodes, blocksize, maxNodesPerRack,
> results, avoidStaleNodes, storageTypes, false);
>   ...
> {quote}
> Why not shuffle the list here? It would make block placement more balanced, 
> save the cost the balancer would otherwise pay, and make the cluster more stable.
> {quote}
> for (DatanodeDescriptor favoredNode : 
> DFSUtil.shuffle(favoredNodes.toArray(new 
> DatanodeDescriptor[favoredNodes.size()]))) 
> {quote}
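
For illustration, a self-contained sketch of the shuffle suggested above; the node names are made up and a plain java.util shuffle stands in for the DFSUtil helper:
{code:java}
// Self-contained sketch of the shuffle idea above, using string node names and
// java.util.Collections.shuffle instead of the real DatanodeDescriptor/DFSUtil.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class FavoredNodeShuffleSketch {
  public static void main(String[] args) {
    List<String> favoredNodes =
        new ArrayList<>(Arrays.asList("dn1", "dn2", "dn3", "dn4"));

    // Shuffle a copy so each write starts from a different favored node,
    // spreading first-replica placement across the whole favored set.
    List<String> shuffled = new ArrayList<>(favoredNodes);
    Collections.shuffle(shuffled);

    for (String favoredNode : shuffled) {
      System.out.println("try favored node: " + favoredNode);
      // ... chooseLocalStorage(favoredNode, ...) would be called here in the
      // real placement loop ...
    }
  }
}
{code}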



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11104) BlockPlacementPolicyDefault choose favoredNodes in turn which may cause imbalance

2017-01-19 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11104:

Attachment: HDFS-11104.patch

> BlockPlacementPolicyDefault choose favoredNodes in turn which may cause 
> imbalance
> -
>
> Key: HDFS-11104
> URL: https://issues.apache.org/jira/browse/HDFS-11104
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Doris Gu
>Assignee: Ajith S
> Attachments: HDFS-11104.patch
>
>
> If the client passes favoredNodes when it writes files into HDFS, chooseTarget 
> in BlockPlacementPolicyDefault tries the favored nodes strictly in order:
> {quote}
> DatanodeStorageInfo[] chooseTarget(String src,
>   int numOfReplicas,
>   Node writer,
>   Set excludedNodes,
>   long blocksize,
>   List favoredNodes,
>   BlockStoragePolicy storagePolicy) {
> try {
> ...
>*for (int i = 0; i < favoredNodes.size() && results.size() < 
> numOfReplicas; i++)* {
> DatanodeDescriptor favoredNode = favoredNodes.get(i);
> // Choose a single node which is local to favoredNode.
> // 'results' is updated within chooseLocalNode
> final DatanodeStorageInfo target = chooseLocalStorage(favoredNode,
> favoriteAndExcludedNodes, blocksize, maxNodesPerRack,
> results, avoidStaleNodes, storageTypes, false);
>   ...
> {quote}
> Why not shuffle the list here? It would make block placement more balanced, 
> save the cost the balancer would otherwise pay, and make the cluster more stable.
> {quote}
> for (DatanodeDescriptor favoredNode : 
> DFSUtil.shuffle(favoredNodes.toArray(new 
> DatanodeDescriptor[favoredNodes.size()]))) 
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11104) BlockPlacementPolicyDefault choose favoredNodes in turn which may cause imbalance

2017-01-19 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11104:

Affects Version/s: 3.0.0-alpha1

> BlockPlacementPolicyDefault choose favoredNodes in turn which may cause 
> imbalance
> -
>
> Key: HDFS-11104
> URL: https://issues.apache.org/jira/browse/HDFS-11104
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Doris Gu
>Assignee: Ajith S
>
> If the client passes favoredNodes when it writes files into HDFS, chooseTarget 
> in BlockPlacementPolicyDefault tries the favored nodes strictly in order:
> {quote}
> DatanodeStorageInfo[] chooseTarget(String src,
>   int numOfReplicas,
>   Node writer,
>   Set excludedNodes,
>   long blocksize,
>   List favoredNodes,
>   BlockStoragePolicy storagePolicy) {
> try {
> ...
>*for (int i = 0; i < favoredNodes.size() && results.size() < 
> numOfReplicas; i++)* {
> DatanodeDescriptor favoredNode = favoredNodes.get(i);
> // Choose a single node which is local to favoredNode.
> // 'results' is updated within chooseLocalNode
> final DatanodeStorageInfo target = chooseLocalStorage(favoredNode,
> favoriteAndExcludedNodes, blocksize, maxNodesPerRack,
> results, avoidStaleNodes, storageTypes, false);
>   ...
> {quote}
> Why not shuffle the list here? It would make block placement more balanced, 
> save the cost the balancer would otherwise pay, and make the cluster more stable.
> {quote}
> for (DatanodeDescriptor favoredNode : 
> DFSUtil.shuffle(favoredNodes.toArray(new 
> DatanodeDescriptor[favoredNodes.size()]))) 
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11104) BlockPlacementPolicyDefault choose favoredNodes in turn which may cause imbalance

2016-11-04 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11104:

Description: 
if client transfer favoredNodes when it writes files into hdfs,chooseTarget in 
BlockPlacementPolicyDefault prior chooseTarget in turn:
{quote}
DatanodeStorageInfo[] chooseTarget(String src,
  int numOfReplicas,
  Node writer,
  Set excludedNodes,
  long blocksize,
  List favoredNodes,
  BlockStoragePolicy storagePolicy) {
try {
...

   *for (int i = 0; i < favoredNodes.size() && results.size() < numOfReplicas; 
i++)* {
DatanodeDescriptor favoredNode = favoredNodes.get(i);
// Choose a single node which is local to favoredNode.
// 'results' is updated within chooseLocalNode
final DatanodeStorageInfo target = chooseLocalStorage(favoredNode,
favoriteAndExcludedNodes, blocksize, maxNodesPerRack,
results, avoidStaleNodes, storageTypes, false);
  ...
{quote}
why not shuffle it here? Make block more balanced, save the cost balancer will 
pay and make cluster more stable.
{quote}
for (DatanodeDescriptor favoredNode : DFSUtil.shuffle(favoredNodes.toArray(new 
DatanodeDescriptor[favoredNodes.size()]))) 
{quote}

  was:
if client transfer favoredNodes when it writes files into hdfs,chooseTarget in 
BlockPlacementPolicyDefault prior chooseTarget in turn:
{quote}
DatanodeStorageInfo[] chooseTarget(String src,
  int numOfReplicas,
  Node writer,
  Set excludedNodes,
  long blocksize,
  List favoredNodes,
  BlockStoragePolicy storagePolicy) {
try {
...

   *for (int i = 0; i < favoredNodes.size() && results.size() < numOfReplicas; 
i++)* {
DatanodeDescriptor favoredNode = favoredNodes.get(i);
// Choose a single node which is local to favoredNode.
// 'results' is updated within chooseLocalNode
final DatanodeStorageInfo target = chooseLocalStorage(favoredNode,
favoriteAndExcludedNodes, blocksize, maxNodesPerRack,
results, avoidStaleNodes, storageTypes, false);
  ...
{quote}
why not shuffle it?
{quote}
 *for (DatanodeDescriptor favoredNode : 
DFSUtil.shuffle(favoredNodes.toArray(new 
DatanodeDescriptor[favoredNodes.size()]))) *
{quote}


> BlockPlacementPolicyDefault choose favoredNodes in turn which may cause 
> imbalance
> -
>
> Key: HDFS-11104
> URL: https://issues.apache.org/jira/browse/HDFS-11104
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Doris Gu
>
> if client transfer favoredNodes when it writes files into hdfs,chooseTarget 
> in BlockPlacementPolicyDefault prior chooseTarget in turn:
> {quote}
> DatanodeStorageInfo[] chooseTarget(String src,
>   int numOfReplicas,
>   Node writer,
>   Set excludedNodes,
>   long blocksize,
>   List favoredNodes,
>   BlockStoragePolicy storagePolicy) {
> try {
> ...
>*for (int i = 0; i < favoredNodes.size() && results.size() < 
> numOfReplicas; i++)* {
> DatanodeDescriptor favoredNode = favoredNodes.get(i);
> // Choose a single node which is local to favoredNode.
> // 'results' is updated within chooseLocalNode
> final DatanodeStorageInfo target = chooseLocalStorage(favoredNode,
> favoriteAndExcludedNodes, blocksize, maxNodesPerRack,
> results, avoidStaleNodes, storageTypes, false);
>   ...
> {quote}
> why not shuffle it here? Make block more balanced, save the cost balancer 
> will pay and make cluster more stable.
> {quote}
> for (DatanodeDescriptor favoredNode : 
> DFSUtil.shuffle(favoredNodes.toArray(new 
> DatanodeDescriptor[favoredNodes.size()]))) 
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11104) BlockPlacementPolicyDefault chooses favoredNodes in turn, which may cause imbalance

2016-11-04 Thread Doris Gu (JIRA)
Doris Gu created HDFS-11104:
---

 Summary: BlockPlacementPolicyDefault chooses favoredNodes in turn, 
which may cause imbalance
 Key: HDFS-11104
 URL: https://issues.apache.org/jira/browse/HDFS-11104
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Doris Gu


If the client passes favoredNodes when it writes files into HDFS, chooseTarget in 
BlockPlacementPolicyDefault first chooses targets from the favored nodes in turn:
{quote}
DatanodeStorageInfo[] chooseTarget(String src,
  int numOfReplicas,
  Node writer,
  Set<Node> excludedNodes,
  long blocksize,
  List<DatanodeDescriptor> favoredNodes,
  BlockStoragePolicy storagePolicy) {
try {
...

   *for (int i = 0; i < favoredNodes.size() && results.size() < numOfReplicas; 
i++)* {
DatanodeDescriptor favoredNode = favoredNodes.get(i);
// Choose a single node which is local to favoredNode.
// 'results' is updated within chooseLocalNode
final DatanodeStorageInfo target = chooseLocalStorage(favoredNode,
favoriteAndExcludedNodes, blocksize, maxNodesPerRack,
results, avoidStaleNodes, storageTypes, false);
  ...
{quote}
why not shuffle it?
{quote}
 *for (DatanodeDescriptor favoredNode : 
DFSUtil.shuffle(favoredNodes.toArray(new 
DatanodeDescriptor[favoredNodes.size()]))) *
{quote}
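
Below is a rough, self-contained illustration of the imbalance (a toy simulation rather 
than Hadoop code; the node names, replica count and file count are made-up assumptions): 
taking the favored nodes in list order always loads the same leading nodes, while 
shuffling first spreads the blocks roughly evenly.
{code:java}
import java.util.*;

// Toy model: 1000 files, 2 replicas each, 4 favored nodes per file.
public class FavoredNodeShuffleDemo {
  public static void main(String[] args) {
    List<String> favoredNodes = Arrays.asList("dn1", "dn2", "dn3", "dn4");
    int replicas = 2;
    Map<String, Integer> inOrder = new TreeMap<>();
    Map<String, Integer> shuffled = new TreeMap<>();
    Random rand = new Random(42);
    for (int file = 0; file < 1000; file++) {
      // Current behaviour: take the favored nodes in turn.
      for (int i = 0; i < replicas; i++) {
        inOrder.merge(favoredNodes.get(i), 1, Integer::sum);
      }
      // Proposed behaviour: shuffle first, then take the first 'replicas' nodes.
      List<String> copy = new ArrayList<>(favoredNodes);
      Collections.shuffle(copy, rand);
      for (int i = 0; i < replicas; i++) {
        shuffled.merge(copy.get(i), 1, Integer::sum);
      }
    }
    System.out.println("in order: " + inOrder);   // only dn1 and dn2 receive blocks
    System.out.println("shuffled: " + shuffled);  // roughly 500 blocks per node
  }
}
{code}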



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10604) What about this? Group DNs and add DN groups (named regions) to the HDFS model; use a region instead of a single DN when saving files.

2016-07-11 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-10604:

Description: 
The biggest difference this feature brings is that *blocks belonging to the 
same file are saved in the same region (DN group).*
So the process will be:
1.Config DN groups, for example
bq.Region1:dn1,dn2,dn3
bq.Region2:dn4,dn5,dn6
bq.Region3:dn7,dn8,dn9,dn10

2.When a client uploads a file, first check whether the file already has any existing blocks:
bq.i) Yes: assign new blocks to the DN group that the existing blocks belong to.
bq.ii) No: assign new blocks to a DN group chosen by a certain policy 
to avoid imbalance.

3.Other related processes, including append, balancer, etc., also need to be modified 
accordingly.

The benefit we hope for is that when some DNs go down at the same time, the number of 
affected files (those missing all replicas) is small.
But we are wondering whether this is worth doing, or whether there are problems we 
haven't noticed.

  was:
The biggest difference this feature will bring is *strong* making blocks belong 
to the same file to save in the same region(DN group).*strong*
So the process will be:
1.Config DN groups, for example
bq.Region1:dn1,dn2,dn3
bq.Region2:dn4,dn5,dn6
bq.Region3:dn7,dn8,dn9,dn10

2.Client uploads a file, first analyze whether this file has any existed blocks:
bq.i)Yes:assign new blocks to the DN group where the existed blocks belong to.
bq.ii)No:assign new blocks to a DN group which is chosen by some certain policy 
to avoid imbalance.

3.Other related processes,including append,balancer etc. also need to modify as 
well.   

The benefit we wish is when some DNs are down at the same time, the number of 
affected files(miss all replicas) is small.
But we are wondering if this is worth doing or not, or if there are problems we 
haven't noticed.


> What about this? Group DNs and add DN groups (named regions) to the HDFS model; use 
> a region instead of a single DN when saving files.
> 
>
> Key: HDFS-10604
> URL: https://issues.apache.org/jira/browse/HDFS-10604
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: Doris Gu
>
> The biggest difference this feature brings is that *blocks belonging to 
> the same file are saved in the same region (DN group).*
> So the process will be:
> 1.Config DN groups, for example
> bq.Region1:dn1,dn2,dn3
> bq.Region2:dn4,dn5,dn6
> bq.Region3:dn7,dn8,dn9,dn10
> 2.When a client uploads a file, first check whether the file already has any existing 
> blocks:
> bq.i) Yes: assign new blocks to the DN group that the existing blocks belong to.
> bq.ii) No: assign new blocks to a DN group chosen by a certain policy 
> to avoid imbalance.
> 3.Other related processes, including append, balancer, etc., also need to be modified 
> accordingly.
> The benefit we hope for is that when some DNs go down at the same time, the number of 
> affected files (those missing all replicas) is small.
> But we are wondering whether this is worth doing, or whether there are problems 
> we haven't noticed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10604) What about this? Group DNs and add DN groups (named regions) to the HDFS model; use a region instead of a single DN when saving files.

2016-07-11 Thread Doris Gu (JIRA)
Doris Gu created HDFS-10604:
---

 Summary: What about this? Group DNs and add DN groups (named regions) 
to the HDFS model; use a region instead of a single DN when saving files.
 Key: HDFS-10604
 URL: https://issues.apache.org/jira/browse/HDFS-10604
 Project: Hadoop HDFS
  Issue Type: Wish
Reporter: Doris Gu


The biggest difference this feature brings is that *blocks belonging to the 
same file are saved in the same region (DN group).*
So the process will be:
1.Config DN groups, for example
bq.Region1:dn1,dn2,dn3
bq.Region2:dn4,dn5,dn6
bq.Region3:dn7,dn8,dn9,dn10

2.When a client uploads a file, first check whether the file already has any existing blocks:
bq.i) Yes: assign new blocks to the DN group that the existing blocks belong to.
bq.ii) No: assign new blocks to a DN group chosen by a certain policy 
to avoid imbalance.

3.Other related processes, including append, balancer, etc., also need to be modified 
accordingly.

The benefit we hope for is that when some DNs go down at the same time, the number of 
affected files (those missing all replicas) is small.
But we are wondering whether this is worth doing, or whether there are problems we 
haven't noticed.
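
To make the proposal a bit more concrete, here is a purely hypothetical sketch (this 
feature does not exist in HDFS; the class, the region config format and the 
least-loaded policy below are illustrative assumptions only) of how region config 
could be parsed and a region chosen for a file's blocks:
{code:java}
import java.util.*;

// Hypothetical sketch only: parse "Region1:dn1,dn2,dn3"-style groups and pick the
// region for a file's next block -- reuse the region of its existing blocks,
// otherwise pick the least-loaded region to avoid imbalance.
public class RegionChooserSketch {
  private final Map<String, List<String>> regions = new LinkedHashMap<>();
  private final Map<String, String> fileToRegion = new HashMap<>();
  private final Map<String, Integer> regionLoad = new HashMap<>();

  RegionChooserSketch(List<String> regionConf) {
    for (String entry : regionConf) {
      String[] parts = entry.split(":");
      regions.put(parts[0], Arrays.asList(parts[1].split(",")));
      regionLoad.put(parts[0], 0);
    }
  }

  /** Returns the DNs of the region chosen for the file's next block. */
  List<String> chooseRegion(String file) {
    String region = fileToRegion.get(file);
    if (region == null) {
      // No existing blocks: pick the least-loaded region (a stand-in policy).
      region = Collections.min(regionLoad.entrySet(),
          Map.Entry.comparingByValue()).getKey();
      fileToRegion.put(file, region);
    }
    regionLoad.merge(region, 1, Integer::sum);
    return regions.get(region);
  }

  public static void main(String[] args) {
    RegionChooserSketch chooser = new RegionChooserSketch(Arrays.asList(
        "Region1:dn1,dn2,dn3", "Region2:dn4,dn5,dn6", "Region3:dn7,dn8,dn9,dn10"));
    System.out.println(chooser.chooseRegion("/data/a"));  // new file: least-loaded region
    System.out.println(chooser.chooseRegion("/data/a"));  // same file: same region again
  }
}
{code}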



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-08-06 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: HDFS-7601.patch

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HDFS-7601.patch


 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.
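
 For reference, a small standalone check (not part of any patch here) shows why the two 
 spellings end up as separate entries: java.net.URI keeps the bare trailing "/" as a 
 different path, so the two URIs do not compare equal.
{code:java}
import java.net.URI;

public class TrailingSlashCheck {
  public static void main(String[] args) {
    URI a = URI.create("hdfs://haCluster");
    URI b = URI.create("hdfs://haCluster/");
    System.out.println(a.equals(b));                                       // false
    System.out.println("'" + a.getPath() + "' vs '" + b.getPath() + "'");  // '' vs '/'
  }
}
{code}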



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-08-06 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: (was: HDFS-7601.patch)

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HDFS-7601.patch


 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-08-06 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: HDFS-7601.patch

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HDFS-7601.patch


 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-08-06 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: (was: HDFS-7601.1.patch)

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HDFS-7601.patch


 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-08-06 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: (was: HDFS-7601.2.patch)

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HDFS-7601.patch


 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-08-06 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: HDFS-7601.2.patch
HDFS-7601.1.patch

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HDFS-7601.1.patch, HDFS-7601.2.patch


 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-08-06 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: (was: HDFS-7601.patch)

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HDFS-7601.1.patch, HDFS-7601.2.patch


 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-08-06 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: HDFS-7601.patch

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HDFS-7601.patch


 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-08-06 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: (was: 0001-for-hdfs-7601.patch)

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor
  Labels: BB2015-05-TBR

 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-08-03 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: 0001-for-hdfs-7601.patch

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: 0001-for-hdfs-7601.patch


 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-08-03 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: (was: 0001-for-hdfs-7601.patch)

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor
  Labels: BB2015-05-TBR

 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-05-21 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: 0001-for-hdfs-7601.patch

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: 0001-for-hdfs-7601.patch


 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-05-21 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: (was: 7601-test.patch)

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: 0001-for-hdfs-7601.patch


 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-04-27 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: 7601-test.patch

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor
 Attachments: 7601-test.patch


 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-04-26 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: (was: 0001-for-hdfs-7601.patch)

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor

 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-03-26 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: (was: 0001-for-hdfs-7601.patch)

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor

 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-03-26 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: 0001-for-hdfs-7601.patch

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor
 Attachments: 0001-for-hdfs-7601.patch


 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-03-09 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: (was: hadoop-2.6.0-src.patch)

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor

 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-03-09 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: 0001-for-hdfs-7601.patch

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor
 Attachments: 0001-for-hdfs-7601.patch


 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-03-05 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: hadoop-2.6.0-src.patch

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor
 Attachments: hadoop-2.6.0-src.patch


 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-03-05 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: (was: hadoop-hdfs-project.patch)

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor
 Attachments: hadoop-2.6.0-src.patch


 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-03-04 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: hadoop-hdfs-project.patch

This patch includes:
1.

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor
  Labels: patch
 Attachments: hadoop-hdfs-project.patch


 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-03-04 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Status: Patch Available  (was: Open)

This patch includes:
1. getNameServiceUris (built from config) in DFSUtil.java adds logic to ignore a 
trailing "/" at the end of URIs.

2. Add a corresponding test to make sure that an HA URI and the default URI, whose 
only difference is a trailing "/", do not result in multiple entries being 
returned.

Frankly speaking, this problem comes from my own imperfect config. I use 
hdfs://haCluster/ for fs.defaultFS in core-site.xml. I could easily avoid 
this problem by changing the value to hdfs://haCluster, but someone else may 
use a similar value, so I propose this patch.
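
A minimal sketch of the idea, assuming the fix amounts to stripping a bare trailing 
"/" before collecting the URIs (the helper below is illustrative only, not the actual 
HDFS-7601 patch):
{code:java}
import java.net.URI;
import java.net.URISyntaxException;
import java.util.HashSet;
import java.util.Set;

public class TrailingSlashNormalizer {
  // Drop a bare "/" path so hdfs://haCluster and hdfs://haCluster/ collapse
  // to a single entry when collected into a set.
  static URI normalize(URI uri) throws URISyntaxException {
    if ("/".equals(uri.getPath())) {
      return new URI(uri.getScheme(), uri.getAuthority(), null, null, null);
    }
    return uri;
  }

  public static void main(String[] args) throws URISyntaxException {
    Set<URI> uris = new HashSet<>();
    uris.add(normalize(new URI("hdfs://haCluster")));
    uris.add(normalize(new URI("hdfs://haCluster/")));
    System.out.println(uris.size());  // 1, instead of 2 without normalization
  }
}
{code}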

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.6.0, 2.3.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor
  Labels: patch

 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-03-04 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: hadoop-hdfs-project.patch

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor
  Labels: patch
 Attachments: hadoop-hdfs-project.patch


 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-03-04 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Labels:   (was: patch)

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor
 Attachments: hadoop-hdfs-project.patch


 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-03-04 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu reassigned HDFS-7601:
--

Assignee: Doris Gu

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor

 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-03-04 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-7601 started by Doris Gu.
--
 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor

 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-03-04 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Attachment: (was: hadoop-hdfs-project.patch)

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0, 2.6.0
Reporter: Doris Gu
Assignee: Doris Gu
Priority: Minor
  Labels: patch

 Some operations, for example balance, parse configuration (from 
 core-site.xml, hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs that end with and without "/" as two different 
 URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 URIs, which are actually the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

