[jira] [Updated] (HDFS-9084) Pagination, sorting and filtering of files/directories in the HDFS Web UI

2016-02-23 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-9084:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2, and branch-2.8. Thanks [~raviprak] for your 
contribution and thanks [~wheat9] for the comment.

> Pagination, sorting and filtering of files/directories in the HDFS Web UI
> -
>
> Key: HDFS-9084
> URL: https://issues.apache.org/jira/browse/HDFS-9084
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.4.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Fix For: 2.8.0
>
> Attachments: HDFS-9084.01.patch, HDFS-9084.02.patch, 
> HDFS-9084.03.patch, Screenshot HD-9084.jpg
>
>
> We should paginate directories with a large number of children. 
> Simultaneously, we can allow searching, sorting and filtering in those columns



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9084) Pagination, sorting and filtering of files/directories in the HDFS Web UI

2016-02-23 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15158668#comment-15158668
 ] 

Tsuyoshi Ozawa commented on HDFS-9084:
--

+1, checking this in.

> Pagination, sorting and filtering of files/directories in the HDFS Web UI
> -
>
> Key: HDFS-9084
> URL: https://issues.apache.org/jira/browse/HDFS-9084
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.4.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HDFS-9084.01.patch, HDFS-9084.02.patch, 
> HDFS-9084.03.patch, Screenshot HD-9084.jpg
>
>
> We should paginate directories with a large number of children. 
> Simultaneously, we can allow searching, sorting and filtering in those columns



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-9084) Pagination, sorting and filtering of files/directories in the HDFS Web UI

2016-02-19 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15155434#comment-15155434
 ] 

Tsuyoshi Ozawa edited comment on HDFS-9084 at 2/20/16 6:48 AM:
---

[~raviprak] I found mysterious behaviour: if the number of directories is 
large (20 - 30), the pull-down is not shown, though it works with a middle 
size (under 10). Maybe we can address it in another JIRA.

> My suggestion is to create a configuration value for the upper limit of the 
> entries to be rendered, like Google's search results.

Sorry, please ignore that comment; it's already done in Ravi's patch. 

Anyway, my point is server-side filtering, as Haohui mentioned. It can be 
done in another JIRA too.


was (Author: ozawa):
[~raviprak] I found mysterious behaviour: if the number of directories is 
large (20 - 30), the pull-down is not shown, though it works with a middle 
size. Maybe we can address it in another JIRA.

> My suggestion is to create a configuration value for the upper limit of the 
> entries to be rendered, like Google's search results.

Sorry, please ignore that comment; it's already done in Ravi's patch. 

Anyway, my point is server-side filtering, as Haohui mentioned. It can be 
done in another JIRA too.

> Pagination, sorting and filtering of files/directories in the HDFS Web UI
> -
>
> Key: HDFS-9084
> URL: https://issues.apache.org/jira/browse/HDFS-9084
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.4.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HDFS-9084.01.patch, HDFS-9084.02.patch, Screenshot 
> HD-9084.jpg
>
>
> We should paginate directories with a large number of children. 
> Simultaneously, we can allow searching, sorting and filtering in those columns



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9084) Pagination, sorting and filtering of files/directories in the HDFS Web UI

2016-02-19 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15155434#comment-15155434
 ] 

Tsuyoshi Ozawa commented on HDFS-9084:
--

[~raviprak] I found mysterious behaviour: if the number of directories is 
large (20 - 30), the pull-down is not shown, though it works with a middle 
size. Maybe we can address it in another JIRA.

> My suggestion is to create a configuration value for the upper limit of the 
> entries to be rendered, like Google's search results.

Sorry, please ignore that comment; it's already done in Ravi's patch. 

Anyway, my point is server-side filtering, as Haohui mentioned. It can be 
done in another JIRA too.

> Pagination, sorting and filtering of files/directories in the HDFS Web UI
> -
>
> Key: HDFS-9084
> URL: https://issues.apache.org/jira/browse/HDFS-9084
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.4.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HDFS-9084.01.patch, HDFS-9084.02.patch, Screenshot 
> HD-9084.jpg
>
>
> We should paginate directories with a large number of children. 
> Simultaneously, we can allow searching, sorting and filtering in those columns



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-9084) Pagination, sorting and filtering of files/directories in the HDFS Web UI

2016-02-19 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15153958#comment-15153958
 ] 

Tsuyoshi Ozawa edited comment on HDFS-9084 at 2/19/16 9:13 AM:
---

[~raviprak] thank you for the very neat feature! It works well with small 
numbers of files and directories. I'm also okay with postponing the trimming 
down of columns to another JIRA.

I think, however, we have one problem. If the number of directories 
increases, the Web UI slows down because the client-side rendering takes CPU 
time. I ran an experiment as follows:

{code}
$ cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 16 | head -n 15 > lists.txt
$ cat lists.txt | hdfs dfs -mkdir 
{code}

This can happen when Hive creates lots of partitions. My suggestion is to 
create a configuration value for the upper limit of the entries to be 
rendered, like Google's search results. What do you think?
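
A minimal sketch of reading such a limit, assuming a hypothetical property 
name ("dfs.webui.dir.max-entries-rendered" is invented for illustration, not 
an existing HDFS key):

{code}
import org.apache.hadoop.conf.Configuration;

public class RenderLimitSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Hypothetical key: caps how many directory entries the UI would render.
    int maxEntries = conf.getInt("dfs.webui.dir.max-entries-rendered", 1000);
    System.out.println("Render at most " + maxEntries + " entries");
  }
}
{code}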


was (Author: ozawa):
[~raviprak] thank you for the very neat feature! It works well with small 
numbers of files and directories. I'm also okay with postponing the trimming 
down of columns to another JIRA.

I think, however, we have one problem. If the number of directories 
increases, the Web UI slows down because the client-side rendering takes CPU 
time. I ran an experiment as follows:

{code}
$ cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 16 | head -n 15 > lists.txt
$ cat lists.txt | hdfs dfs -mkdir 
{code}

This can happen when Hive creates lots of partitions. My suggestion is to 
create an upper limit on the entries to be rendered, like Google's search 
results. What do you think?

> Pagination, sorting and filtering of files/directories in the HDFS Web UI
> -
>
> Key: HDFS-9084
> URL: https://issues.apache.org/jira/browse/HDFS-9084
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.4.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HDFS-9084.01.patch, HDFS-9084.02.patch
>
>
> We should paginate directories with a large number of children. 
> Simultaneously, we can allow searching, sorting and filtering in those columns



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-9084) Pagination, sorting and filtering of files/directories in the HDFS Web UI

2016-02-19 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15153958#comment-15153958
 ] 

Tsuyoshi Ozawa edited comment on HDFS-9084 at 2/19/16 9:10 AM:
---

[~raviprak] thank you for the very neat feature! It works well with small 
numbers of files and directories. I'm also okay with postponing the trimming 
down of columns to another JIRA.

I think, however, we have one problem. If the number of directories 
increases, the Web UI slows down because the client-side rendering takes CPU 
time. I ran an experiment as follows:

{code}
$ cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 16 | head -n 15 > lists.txt
$ cat lists.txt | hdfs dfs -mkdir 
{code}

This can happen when Hive creates lots of partitions. My suggestion is to 
create an upper limit on the entries to be rendered, like Google's search 
results. What do you think?


was (Author: ozawa):
[~raviprak] thank you for the very neat feature! It works well with small 
files or directories. I'm also okay with postponing the trimming down of 
columns to another JIRA.

I think, however, we have one problem. If the number of directories 
increases, the Web UI slows down because the client-side rendering takes CPU 
time. I ran an experiment as follows:

{code}
$ cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 16 | head -n 15 > lists.txt
$ cat lists.txt | hdfs dfs -mkdir 
{code}

This can happen when Hive creates lots of partitions. My suggestion is to 
create an upper limit on the entries to be rendered, like Google's search 
results. What do you think?

> Pagination, sorting and filtering of files/directories in the HDFS Web UI
> -
>
> Key: HDFS-9084
> URL: https://issues.apache.org/jira/browse/HDFS-9084
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.4.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HDFS-9084.01.patch, HDFS-9084.02.patch
>
>
> We should paginate directories with a large number of children. 
> Simultaneously, we can allow searching, sorting and filtering in those columns



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9084) Pagination, sorting and filtering of files/directories in the HDFS Web UI

2016-02-19 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15153958#comment-15153958
 ] 

Tsuyoshi Ozawa commented on HDFS-9084:
--

[~raviprak] thank you for the very neat feature! It works well with small 
files or directories. I'm also okay with postponing the trimming down of 
columns to another JIRA.

I think, however, we have one problem. If the number of directories 
increases, the Web UI slows down because the client-side rendering takes CPU 
time. I ran an experiment as follows:

{code}
$ cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 16 | head -n 15 > lists.txt
$ cat lists.txt | hdfs dfs -mkdir 
{code}

This can happen when Hive creates lots of partitions. My suggestion is to 
create an upper limit on the entries to be rendered, like Google's search 
results. What do you think?

> Pagination, sorting and filtering of files/directories in the HDFS Web UI
> -
>
> Key: HDFS-9084
> URL: https://issues.apache.org/jira/browse/HDFS-9084
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.4.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HDFS-9084.01.patch, HDFS-9084.02.patch
>
>
> We should paginate directories with a large number of children. 
> Simultaneously, we can allow searching, sorting and filtering in those columns



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9451) TestFsPermission#testDeprecatedUmask is broken

2015-11-24 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024266#comment-15024266
 ] 

Tsuyoshi Ozawa commented on HDFS-9451:
--

Targeting this to 3.0.0 since HADOOP-12294 is only for 3.0.0.

> TestFsPermission#testDeprecatedUmask is broken
> --
>
> Key: HDFS-9451
> URL: https://issues.apache.org/jira/browse/HDFS-9451
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-9451.001.patch, HDFS-9451.002.patch
>
>
> I noticed this test has failed consistently since yesterday. The first 
> failed Jenkins job is 
> https://builds.apache.org/job/Hadoop-common-trunk-Java8/723/changes, and from 
> the change log:
> {noformat}
> Changes:
> [wheat9] HDFS-9402. Switch DataNode.LOG to use slf4j. Contributed by Walter 
> Su.
> [wheat9] HADOOP-11218. Add TLSv1.1,TLSv1.2 to KMS, HttpFS, SSLFactory.
> [wheat9] HADOOP-12467. Respect user-defined JAVA_LIBRARY_PATH in Windows 
> Hadoop
> [wheat9] HDFS-8914. Document HA support in the HDFS HdfsDesign.md. 
> Contributed by
> [wheat9] HDFS-9153. Pretty-format the output for DFSIO. Contributed by Kai 
> Zheng.
> [wheat9] HDFS-7796. Include X-editable for slick contenteditable fields in the
> [wheat9] HDFS-3302. Review and improve HDFS trash documentation. Contributed 
> by
> [wheat9] HADOOP-12294. Remove the support of the deprecated dfs.umask.
> {noformat}
> HADOOP-12294 looks to be the most likely cause.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9451) TestFsPermission#testDeprecatedUmask is broken

2015-11-24 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-9451:
-
Target Version/s: 3.0.0

> TestFsPermission#testDeprecatedUmask is broken
> --
>
> Key: HDFS-9451
> URL: https://issues.apache.org/jira/browse/HDFS-9451
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-9451.001.patch, HDFS-9451.002.patch
>
>
> I noticed this test has failed consistently since yesterday. The first 
> failed Jenkins job is 
> https://builds.apache.org/job/Hadoop-common-trunk-Java8/723/changes, and from 
> the change log:
> {noformat}
> Changes:
> [wheat9] HDFS-9402. Switch DataNode.LOG to use slf4j. Contributed by Walter 
> Su.
> [wheat9] HADOOP-11218. Add TLSv1.1,TLSv1.2 to KMS, HttpFS, SSLFactory.
> [wheat9] HADOOP-12467. Respect user-defined JAVA_LIBRARY_PATH in Windows 
> Hadoop
> [wheat9] HDFS-8914. Document HA support in the HDFS HdfsDesign.md. 
> Contributed by
> [wheat9] HDFS-9153. Pretty-format the output for DFSIO. Contributed by Kai 
> Zheng.
> [wheat9] HDFS-7796. Include X-editable for slick contenteditable fields in the
> [wheat9] HDFS-3302. Review and improve HDFS trash documentation. Contributed 
> by
> [wheat9] HADOOP-12294. Remove the support of the deprecated dfs.umask.
> {noformat}
> HADOOP-12294 looks to be the most likely cause.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9451) TestFsPermission#testDeprecatedUmask is broken

2015-11-24 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024273#comment-15024273
 ] 

Tsuyoshi Ozawa commented on HDFS-9451:
--

LGTM, pending Jenkins.

> TestFsPermission#testDeprecatedUmask is broken
> --
>
> Key: HDFS-9451
> URL: https://issues.apache.org/jira/browse/HDFS-9451
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-9451.001.patch, HDFS-9451.002.patch
>
>
> I noticed this test has failed consistently since yesterday. The first 
> failed Jenkins job is 
> https://builds.apache.org/job/Hadoop-common-trunk-Java8/723/changes, and from 
> the change log:
> {noformat}
> Changes:
> [wheat9] HDFS-9402. Switch DataNode.LOG to use slf4j. Contributed by Walter 
> Su.
> [wheat9] HADOOP-11218. Add TLSv1.1,TLSv1.2 to KMS, HttpFS, SSLFactory.
> [wheat9] HADOOP-12467. Respect user-defined JAVA_LIBRARY_PATH in Windows 
> Hadoop
> [wheat9] HDFS-8914. Document HA support in the HDFS HdfsDesign.md. 
> Contributed by
> [wheat9] HDFS-9153. Pretty-format the output for DFSIO. Contributed by Kai 
> Zheng.
> [wheat9] HDFS-7796. Include X-editable for slick contenteditable fields in the
> [wheat9] HDFS-3302. Review and improve HDFS trash documentation. Contributed 
> by
> [wheat9] HADOOP-12294. Remove the support of the deprecated dfs.umask.
> {noformat}
> HADOOP-12294 looks to be the most likely cause.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7240) Object store in HDFS

2015-11-23 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15022724#comment-15022724
 ] 

Tsuyoshi Ozawa commented on HDFS-7240:
--

Thank you for following up, Anu and Chris. 

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: Ozone-architecture-v1.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7240) Object store in HDFS

2015-11-22 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021179#comment-15021179
 ] 

Tsuyoshi Ozawa commented on HDFS-7240:
--

Two branches, HDFS-7240 and hdfs-7240, seem to have been created in the 
repository.

{quote}
$ git pull
From https://git-wip-us.apache.org/repos/asf/hadoop
 * [new branch]  HDFS-7240  -> origin/HDFS-7240
 * [new branch]  hdfs-7240  -> origin/hdfs-7240
{quote}

Is this intentional?

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: Ozone-architecture-v1.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9242) Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache

2015-11-01 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984419#comment-14984419
 ] 

Tsuyoshi Ozawa commented on HDFS-9242:
--

[~brahmareddy] [~liuml07] I think this is not a false positive. For more 
detail, please check this article: http://www.cs.umd.edu/~pugh/java/memoryModel/

A correct way to fix it is to make ugiCache volatile. Could you update it?
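
For reference, a minimal sketch of the volatile-based fix for the 
DC_DOUBLECHECK pattern; the class and field here are simplified stand-ins 
for DataNodeUGIProvider.ugiCache, not the actual patch:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class UgiCacheHolder {
  // Without 'volatile', a thread may see a non-null reference before the
  // writes made while constructing the cache become visible; that is what
  // the Findbugs DC_DOUBLECHECK warning is about.
  private static volatile Map<String, Object> ugiCache;

  public static Map<String, Object> getCache() {
    Map<String, Object> cache = ugiCache;  // single volatile read
    if (cache == null) {
      synchronized (UgiCacheHolder.class) {
        cache = ugiCache;
        if (cache == null) {
          cache = new ConcurrentHashMap<>();
          ugiCache = cache;                // safe publication
        }
      }
    }
    return cache;
  }
}
{code}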

> Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache 
> ---
>
> Key: HDFS-9242
> URL: https://issues.apache.org/jira/browse/HDFS-9242
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9242.patch
>
>
> This was introduced by HDFS-8855 and pre-patch warning can be found at 
> https://builds.apache.org/job/PreCommit-HDFS-Build/12975/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html#DC_DOUBLECHECK
> {code}
> Code: DC
> Warning: Possible doublecheck on 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
>  in new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> Bug type DC_DOUBLECHECK (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider
> In method new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> On field 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
> At DataNodeUGIProvider.java:[lines 49-51]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9242) Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache

2015-11-01 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984572#comment-14984572
 ] 

Tsuyoshi Ozawa commented on HDFS-9242:
--

[~wheat9] Agree with you.

> Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache 
> ---
>
> Key: HDFS-9242
> URL: https://issues.apache.org/jira/browse/HDFS-9242
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-9242.patch
>
>
> This was introduced by HDFS-8855 and pre-patch warning can be found at 
> https://builds.apache.org/job/PreCommit-HDFS-Build/12975/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html#DC_DOUBLECHECK
> {code}
> Code: DC
> Warning: Possible doublecheck on 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
>  in new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> Bug type DC_DOUBLECHECK (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider
> In method new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> On field 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
> At DataNodeUGIProvider.java:[lines 49-51]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9237) NPE at TestDataNodeVolumeFailureToleration#tearDown

2015-10-19 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-9237:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 2.8.0
  3.0.0
Target Version/s: 3.0.0, 2.8.0
  Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~brahmareddy] for your 
contribution and thanks [~liuml07] for your review.

> NPE at TestDataNodeVolumeFailureToleration#tearDown
> ---
>
> Key: HDFS-9237
> URL: https://issues.apache.org/jira/browse/HDFS-9237
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9237.patch
>
>
> {noformat}
> Stack Trace:
> java.lang.NullPointerException: null
> at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration.tearDown(TestDataNodeVolumeFailureToleration.java:79)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9237) NPE at TestDataNodeVolumeFailureToleration#tearDown

2015-10-19 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962898#comment-14962898
 ] 

Tsuyoshi Ozawa commented on HDFS-9237:
--

+1, checking this in.

> NPE at TestDataNodeVolumeFailureToleration#tearDown
> ---
>
> Key: HDFS-9237
> URL: https://issues.apache.org/jira/browse/HDFS-9237
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9237.patch
>
>
> {noformat}
> Stack Trace:
> java.lang.NullPointerException: null
> at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration.tearDown(TestDataNodeVolumeFailureToleration.java:79)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8802) dfs.checksum.type is not described in hdfs-default.xml

2015-10-07 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947852#comment-14947852
 ] 

Tsuyoshi Ozawa commented on HDFS-8802:
--

[~gururaj] thank you for the update. How about adding a description of the 
types we can choose as the checksum, in addition to the default value?
I think we can choose NULL, CRC32, or CRC32C as the checksum.
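
A small sketch of choosing one of those values from a client, assuming the 
standard Configuration API ("dfs.checksum.type" is the real key under 
discussion; the chosen value is illustrative):

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ChecksumTypeSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    conf.set("dfs.checksum.type", "CRC32C");  // or "NULL", "CRC32"
    FileSystem fs = FileSystem.get(conf);     // new writes use this checksum
    System.out.println("Checksum type: " + conf.get("dfs.checksum.type"));
    fs.close();
  }
}
{code}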

> dfs.checksum.type is not described in hdfs-default.xml
> --
>
> Key: HDFS-8802
> URL: https://issues.apache.org/jira/browse/HDFS-8802
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Tsuyoshi Ozawa
>Assignee: Gururaj Shetty
> Attachments: HDFS-8802.patch, HDFS-8802_01.patch, HDFS-8802_02.patch
>
>
> It's a good time to check other configurations in hdfs-default.xml here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8802) dfs.checksum.type is not described in hdfs-default.xml

2015-10-07 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-8802:
-
Status: Open  (was: Patch Available)

> dfs.checksum.type is not described in hdfs-default.xml
> --
>
> Key: HDFS-8802
> URL: https://issues.apache.org/jira/browse/HDFS-8802
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Tsuyoshi Ozawa
>Assignee: Gururaj Shetty
> Attachments: HDFS-8802.patch, HDFS-8802_01.patch, HDFS-8802_02.patch
>
>
> It's a good time to check other configurations in hdfs-default.xml here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9195) TestDelegationTokenForProxyUser.testWebHdfsDoAs fails on trunk

2015-10-04 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HDFS-9195:


 Summary: TestDelegationTokenForProxyUser.testWebHdfsDoAs fails on 
trunk
 Key: HDFS-9195
 URL: https://issues.apache.org/jira/browse/HDFS-9195
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa


{quote}
testWebHdfsDoAs(org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser)
  Time elapsed: 1.299 sec  <<< FAILURE!
org.junit.ComparisonFailure: expected:<...ocalhost:44528/user/[Proxy]User> but 
was:<...ocalhost:44528/user/[Real]User>
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser.testWebHdfsDoAs(TestDelegationTokenForProxyUser.java:163)
{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9196) TestWebHdfsContentLength fails on trunk

2015-10-04 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HDFS-9196:


 Summary: TestWebHdfsContentLength fails on trunk
 Key: HDFS-9196
 URL: https://issues.apache.org/jira/browse/HDFS-9196
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa



{quote}
Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 181.278 sec <<< 
FAILURE! - in org.apache.hadoop.hdfs.web.TestWebHdfsContentLength
testPutOp(org.apache.hadoop.hdfs.web.TestWebHdfsContentLength)  Time elapsed: 
60.05 sec  <<< FAILURE!
java.lang.AssertionError: expected:<0> but was:
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.hdfs.web.TestWebHdfsContentLength.testPutOp(TestWebHdfsContentLength.java:116)

testPutOpWithRedirect(org.apache.hadoop.hdfs.web.TestWebHdfsContentLength)  
Time elapsed: 0.01 sec  <<< FAILURE!
org.junit.ComparisonFailure: expected:<[chunked]> but was:<[0]>
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.hdfs.web.TestWebHdfsContentLength.testPutOpWithRedirect(TestWebHdfsContentLength.java:130)
{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-4101) ZKFC should implement zookeeper.recovery.retry like HBase to connect to ZooKeeper

2015-09-03 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-4101:
-
Status: Open  (was: Patch Available)

Cancelling this patch per the comments. It would be great if someone took 
this over.

> ZKFC should implement zookeeper.recovery.retry like HBase to connect to 
> ZooKeeper
> -
>
> Key: HDFS-4101
> URL: https://issues.apache.org/jira/browse/HDFS-4101
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: auto-failover, ha
>Affects Versions: 2.0.0-alpha, 3.0.0
> Environment: running CDH4.1.1
>Reporter: Damien Hardy
>Assignee: Damien Hardy
>Priority: Minor
>  Labels: BB2015-05-TBR, newbie
> Attachments: HDFS-4101-2.patch
>
>
> When ZKFC starts and ZooKeeper is not yet started, ZKFC fails and stops 
> immediately.
> Maybe ZKFC should allow some retries on the ZooKeeper service, as HBase does 
> with zookeeper.recovery.retry.
> This particularly happens when I start my whole cluster on VirtualBox, for 
> example (every component at nearly the same time); ZKFC is the only one that 
> fails and stops ... 
> All the others can wait for each other for some time, independently of the 
> start order (NameNode/DataNode/JournalNode/ZooKeeper/HBaseMaster/HBaseRS), so 
> the system can settle and become stable in a few seconds.
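
A hypothetical sketch of the retry behaviour being requested; the retry 
count, back-off, and connectToZooKeeper() stub are invented for illustration, 
not existing ZKFC code:

{code}
import java.io.IOException;

public class ZkRetrySketch {
  // Stand-in for the real connection logic; always fails in this sketch.
  static void connectToZooKeeper() throws IOException {
    throw new IOException("ZooKeeper not up yet");
  }

  public static void main(String[] args) throws Exception {
    int maxRetries = 3;  // would come from a zookeeper.recovery.retry-style key
    for (int attempt = 1; attempt <= maxRetries; attempt++) {
      try {
        connectToZooKeeper();
        break;                               // connected, stop retrying
      } catch (IOException e) {
        if (attempt == maxRetries) throw e;  // give up after the last attempt
        Thread.sleep(1000L * attempt);       // back off before the next try
      }
    }
  }
}
{code}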



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5165) FSNameSystem TotalFiles and FilesTotal metrics are the same

2015-09-03 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-5165:
-
Status: Open  (was: Patch Available)

Cancelling the patch since it's obsolete and cannot be applied.  [~ajisakaa] 
could you refresh it?

> FSNameSystem TotalFiles and FilesTotal metrics are the same
> ---
>
> Key: HDFS-5165
> URL: https://issues.apache.org/jira/browse/HDFS-5165
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.1.0-beta
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
>  Labels: BB2015-05-TBR, metrics, newbie
> Attachments: HDFS-5165.2.patch, HDFS-5165.patch
>
>
> Both FSNameSystem TotalFiles and FilesTotal metrics mean total files/dirs in 
> the cluster. One of these metrics should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5361) Change the unit of StartupProgress 'PercentComplete' to percentage

2015-09-03 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-5361:
-
Status: Open  (was: Patch Available)

Cancelling the patch since it cannot be applied. 

> Change the unit of StartupProgress 'PercentComplete' to percentage
> --
>
> Key: HDFS-5361
> URL: https://issues.apache.org/jira/browse/HDFS-5361
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.1.0-beta
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
>  Labels: BB2015-05-TBR, metrics, newbie
> Attachments: HDFS-5361.2.patch, HDFS-5361.3.patch, HDFS-5361.patch
>
>
> Now the unit of the 'PercentComplete' metric is a ratio (maximum is 1.0). 
> It's confusing for users because its name includes "percent".
> The metric should be multiplied by 100.
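
A one-line sketch of the proposed scaling (values illustrative):

{code}
public class PercentCompleteSketch {
  public static void main(String[] args) {
    float ratio = 0.42f;                     // value exposed today (max 1.0)
    float percentComplete = ratio * 100.0f;  // 42.0, matching the metric name
    System.out.println(percentComplete);
  }
}
{code}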



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8802) dfs.checksum.type is not described in hdfs-default.xml

2015-07-22 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14636914#comment-14636914
 ] 

Tsuyoshi Ozawa commented on HDFS-8802:
--

[~gururaj] Great. Do you mind creating a patch?

 dfs.checksum.type is not described in hdfs-default.xml
 --

 Key: HDFS-8802
 URL: https://issues.apache.org/jira/browse/HDFS-8802
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.1
Reporter: Tsuyoshi Ozawa
Assignee: Gururaj Shetty

 It's a good time to check other configurations in hdfs-default.xml here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8802) dfs.checksum.type is not described in hdfs-default.xml

2015-07-21 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-8802:
-
Description: It's a good time to check other configurations in 
hdfs-default.xml here.

 dfs.checksum.type is not described in hdfs-default.xml
 --

 Key: HDFS-8802
 URL: https://issues.apache.org/jira/browse/HDFS-8802
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.1
Reporter: Tsuyoshi Ozawa

 It's a good time to check other configurations in hdfs-default.xml here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8802) dfs.checksum.type is not described in hdfs-default.xml

2015-07-21 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HDFS-8802:


 Summary: dfs.checksum.type is not described in hdfs-default.xml
 Key: HDFS-8802
 URL: https://issues.apache.org/jira/browse/HDFS-8802
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.1
Reporter: Tsuyoshi Ozawa






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8639) Option for HTTP port of NameNode by MiniDFSClusterManager

2015-06-20 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-8639:
-
Status: Patch Available  (was: Open)

 Option for HTTP port of NameNode by MiniDFSClusterManager
 -

 Key: HDFS-8639
 URL: https://issues.apache.org/jira/browse/HDFS-8639
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 2.7.0
Reporter: Kai Sasaki
Assignee: Kai Sasaki
Priority: Trivial
 Attachments: HDFS-8639.00.patch


 Current {{MiniDFSClusterManager}} uses 0 as the default RPC port and HTTP 
 port. In system tests with {{MiniDFSCluster}}, the random HTTP port makes 
 debugging difficult. 
 We can add an option to configure the HTTP port for the NN web UI.
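
A sketch of the kind of fixed-port setup the option would enable, assuming 
MiniDFSCluster.Builder#nameNodeHttpPort; the port value is illustrative:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class FixedHttpPortSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .nameNodeHttpPort(50070)  // fixed NN web UI port, easier to debug
        .build();
    try {
      System.out.println("NN web UI: "
          + cluster.getNameNode().getHttpAddress());
    } finally {
      cluster.shutdown();
    }
  }
}
{code}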



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8490) Typo in trace enabled log in ExceptionHandler of WebHDFS

2015-05-31 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-8490:
-
Target Version/s: 2.8.0

 Typo in trace enabled log in ExceptionHandler of WebHDFS
 

 Key: HDFS-8490
 URL: https://issues.apache.org/jira/browse/HDFS-8490
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Reporter: Jakob Homan
Assignee: Archana T
Priority: Trivial
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HDFS-8490.patch


 /hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ExceptionHandler.java:
 {code}
 static DefaultFullHttpResponse exceptionCaught(Throwable cause) {
   Exception e = cause instanceof Exception ? (Exception) cause : new Exception(cause);
   if (LOG.isTraceEnabled()) {
     LOG.trace("GOT EXCEPITION", e);
   }
 {code}
 "EXCEPITION" is a typo.
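
The fix itself is a one-word spelling change, sketched here (the committed 
patch may differ in detail):

{code}
if (LOG.isTraceEnabled()) {
  LOG.trace("GOT EXCEPTION", e);  // spelling corrected
}
{code}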



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8490) Typo in trace enabled log in ExceptionHandler of WebHDFS

2015-05-31 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-8490:
-
   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~archanat] for your contribution!

 Typo in trace enabled log in ExceptionHandler of WebHDFS
 

 Key: HDFS-8490
 URL: https://issues.apache.org/jira/browse/HDFS-8490
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Reporter: Jakob Homan
Assignee: Archana T
Priority: Trivial
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HDFS-8490.patch


 /hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ExceptionHandler.java:
 {code}
 static DefaultFullHttpResponse exceptionCaught(Throwable cause) {
   Exception e = cause instanceof Exception ? (Exception) cause : new Exception(cause);
   if (LOG.isTraceEnabled()) {
     LOG.trace("GOT EXCEPITION", e);
   }
 {code}
 "EXCEPITION" is a typo.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8490) Typo in trace enabled log in ExceptionHandler of WebHDFS

2015-05-31 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-8490:
-
Summary: Typo in trace enabled log in ExceptionHandler of WebHDFS  (was: 
Typo in trace enabled log in WebHDFS exception handler)

 Typo in trace enabled log in ExceptionHandler of WebHDFS
 

 Key: HDFS-8490
 URL: https://issues.apache.org/jira/browse/HDFS-8490
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Reporter: Jakob Homan
Assignee: Archana T
Priority: Trivial
  Labels: newbie
 Attachments: HDFS-8490.patch


 /hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ExceptionHandler.java:
 {code}
 static DefaultFullHttpResponse exceptionCaught(Throwable cause) {
   Exception e = cause instanceof Exception ? (Exception) cause : new Exception(cause);
   if (LOG.isTraceEnabled()) {
     LOG.trace("GOT EXCEPITION", e);
   }
 {code}
 "EXCEPITION" is a typo.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8490) Typo in trace enabled log in WebHDFS exception handler

2015-05-28 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564187#comment-14564187
 ] 

Tsuyoshi Ozawa commented on HDFS-8490:
--

+1, pending Jenkins.

 Typo in trace enabled log in WebHDFS exception handler
 --

 Key: HDFS-8490
 URL: https://issues.apache.org/jira/browse/HDFS-8490
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Reporter: Jakob Homan
Assignee: Archana T
Priority: Trivial
  Labels: newbie
 Attachments: HDFS-8490.patch


 /hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ExceptionHandler.java:
 {code}
 static DefaultFullHttpResponse exceptionCaught(Throwable cause) {
   Exception e = cause instanceof Exception ? (Exception) cause : new Exception(cause);
   if (LOG.isTraceEnabled()) {
     LOG.trace("GOT EXCEPITION", e);
   }
 {code}
 "EXCEPITION" is a typo.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8207) Improper log mesage when blockreport interval comapred with initial delay

2015-05-08 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534150#comment-14534150
 ] 

Tsuyoshi Ozawa commented on HDFS-8207:
--

+1

 Improper log mesage when blockreport interval comapred with initial delay
 -

 Key: HDFS-8207
 URL: https://issues.apache.org/jira/browse/HDFS-8207
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HDFS-8207.patch, Hadoop-8253.patch, Hadoop-8253.patch


 The log message says that if initialDelay is more than blockReportInterval, 
 the initial delay is set to 0.
 But the actual check is greater than or equal, as in the following. It is 
 misleading; initially I thought that if they are equal, the initial BR 
 won't be set to zero.
 {code}
 if (initBRDelay >= blockReportInterval) {
   initBRDelay = 0;
   DataNode.LOG.info("dfs.blockreport.initialDelay is greater than " +
       "dfs.blockreport.intervalMsec." + " Setting initial delay to 0 msec:");
 }
 {code}
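
A sketch of a message that matches the '>=' comparison actually performed; 
the wording is illustrative, not the committed patch:

{code}
if (initBRDelay >= blockReportInterval) {
  initBRDelay = 0;
  DataNode.LOG.info("dfs.blockreport.initialDelay is greater than or equal "
      + "to dfs.blockreport.intervalMsec. Setting initial delay to 0 msec.");
}
{code}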



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8116) RollingWindowManager#getTopUserForMetric should check if LOG.isDebugEnabled() before LOG.debug()

2015-05-08 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534141#comment-14534141
 ] 

Tsuyoshi Ozawa commented on HDFS-8116:
--

I think we should run a performance test before and after this fix, as 
[~andrew.wang] mentioned.

 RollingWindowManager#getTopUserForMetric should check if LOG.isDebugEnabled() 
 before LOG.debug() 
 -

 Key: HDFS-8116
 URL: https://issues.apache.org/jira/browse/HDFS-8116
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.7.0
Reporter: Xiaoyu Yao
Assignee: Brahma Reddy Battula
Priority: Trivial
  Labels: BB2015-05-TBR
 Attachments: HDFS-8116.patch


 RollingWindowManager#getTopUserForMetric should check if LOG.isDebugEnabled() 
 before LOG.debug() 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8207) Improper log message when blockreport interval compared with initial delay

2015-05-08 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-8207:
-
  Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~brahmareddy] and 
[~ashish singhi] for your contributions.

 Improper log message when blockreport interval compared with initial delay
 --

 Key: HDFS-8207
 URL: https://issues.apache.org/jira/browse/HDFS-8207
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
Priority: Minor
  Labels: BB2015-05-TBR
 Fix For: 2.8.0

 Attachments: HDFS-8207.patch, Hadoop-8253.patch, Hadoop-8253.patch


 The log message says that if initialDelay is more than blockReportInterval, 
 the initial delay is set to 0.
 But the actual check is greater than or equal, as in the following. It is 
 misleading; initially I thought that if they are equal, the initial BR 
 won't be set to zero.
 {code}
 if (initBRDelay >= blockReportInterval) {
   initBRDelay = 0;
   DataNode.LOG.info("dfs.blockreport.initialDelay is greater than " +
       "dfs.blockreport.intervalMsec." + " Setting initial delay to 0 msec:");
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8207) Improper log message when blockreport interval compared with initial delay

2015-05-08 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-8207:
-
Summary: Improper log message when blockreport interval compared with 
initial delay  (was: Improper log mesage when blockreport interval comapred 
with initial delay)

 Improper log message when blockreport interval compared with initial delay
 --

 Key: HDFS-8207
 URL: https://issues.apache.org/jira/browse/HDFS-8207
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HDFS-8207.patch, Hadoop-8253.patch, Hadoop-8253.patch


 The log message says that if initialDelay is more than blockReportInterval, 
 the initial delay is set to 0.
 But the actual check is greater than or equal, as in the following. It is 
 misleading; initially I thought that if they are equal, the initial BR 
 won't be set to zero.
 {code}
 if (initBRDelay >= blockReportInterval) {
   initBRDelay = 0;
   DataNode.LOG.info("dfs.blockreport.initialDelay is greater than " +
       "dfs.blockreport.intervalMsec." + " Setting initial delay to 0 msec:");
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8207) Improper log message when blockreport interval compared with initial delay

2015-05-08 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-8207:
-
Issue Type: Improvement  (was: Bug)

 Improper log message when blockreport interval compared with initial delay
 --

 Key: HDFS-8207
 URL: https://issues.apache.org/jira/browse/HDFS-8207
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
Priority: Minor
  Labels: BB2015-05-TBR
 Fix For: 2.8.0

 Attachments: HDFS-8207.patch, Hadoop-8253.patch, Hadoop-8253.patch


 The log message says that if initialDelay is more than blockReportInterval, 
 the initial delay is set to 0.
 But the actual check is greater than or equal, as in the following. It is 
 misleading; initially I thought that if they are equal, the initial BR 
 won't be set to zero.
 {code}
 if (initBRDelay >= blockReportInterval) {
   initBRDelay = 0;
   DataNode.LOG.info("dfs.blockreport.initialDelay is greater than " +
       "dfs.blockreport.intervalMsec." + " Setting initial delay to 0 msec:");
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8043) NPE in MiniDFSCluster teardown

2015-04-19 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14501762#comment-14501762
 ] 

Tsuyoshi Ozawa commented on HDFS-8043:
--

+1

 NPE in MiniDFSCluster teardown
 --

 Key: HDFS-8043
 URL: https://issues.apache.org/jira/browse/HDFS-8043
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
 Environment: jenkins
Reporter: Steve Loughran
Assignee: Brahma Reddy Battula
 Attachments: HDFS-8043-002.patch, HDFS-8043-003.patch, HDFS-8043.patch


 NPE surfacing in {{MiniDFSCluster.shutdown}} during test teardown 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7863) Missing description of some methods and parameters in javadoc of FSDirDeleteOp

2015-04-19 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-7863:
-
Summary: Missing description of some methods and parameters in javadoc of 
FSDirDeleteOp  (was: Missing description of parameters in javadoc of 
FSDirDeleteOp.)

 Missing description of some methods and parameters in javadoc of FSDirDeleteOp
 --

 Key: HDFS-7863
 URL: https://issues.apache.org/jira/browse/HDFS-7863
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yongjun Zhang
Assignee: Brahma Reddy Battula
Priority: Minor
 Attachments: HDFS-7863-002.patch, HDFS-7863-003.patch, 
 HDFS-7863-004.patch, HDFS-7863-005.patch, HDFS-7863.patch


 HDFS-7573 did refactoring of delete() code. New parameter {{FSDirectory fsd}} 
 is added to resulted methods, but the javadoc is not updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8043) NPE in MiniDFSCluster teardown

2015-04-19 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-8043:
-
  Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~brahmareddy] for your 
contribution and thanks [~steve_l] for your review.

 NPE in MiniDFSCluster teardown
 --

 Key: HDFS-8043
 URL: https://issues.apache.org/jira/browse/HDFS-8043
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
 Environment: jenkins
Reporter: Steve Loughran
Assignee: Brahma Reddy Battula
 Fix For: 2.8.0

 Attachments: HDFS-8043-002.patch, HDFS-8043-003.patch, HDFS-8043.patch


 NPE surfacing in {{MiniDFSCluster.shutdown}} during test teardown 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7863) Missing description of parameters in javadoc of FSDirDeleteOp.

2015-04-19 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-7863:
-
Summary: Missing description of parameters in javadoc of FSDirDeleteOp.  
(was: Missing description of parameter fsd in javadoc )

 Missing description of parameters in javadoc of FSDirDeleteOp.
 --

 Key: HDFS-7863
 URL: https://issues.apache.org/jira/browse/HDFS-7863
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yongjun Zhang
Assignee: Brahma Reddy Battula
Priority: Minor
 Attachments: HDFS-7863-002.patch, HDFS-7863-003.patch, 
 HDFS-7863-004.patch, HDFS-7863-005.patch, HDFS-7863.patch


 HDFS-7573 did refactoring of delete() code. New parameter {{FSDirectory fsd}} 
 is added to resulted methods, but the javadoc is not updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7863) Missing description of some methods and parameters in javadoc of FSDirDeleteOp

2015-04-19 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-7863:
-
  Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~brahmareddy] for your 
contribution and thanks [~yzhangal] for reporting this.

 Missing description of some methods and parameters in javadoc of FSDirDeleteOp
 --

 Key: HDFS-7863
 URL: https://issues.apache.org/jira/browse/HDFS-7863
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yongjun Zhang
Assignee: Brahma Reddy Battula
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-7863-002.patch, HDFS-7863-003.patch, 
 HDFS-7863-004.patch, HDFS-7863-005.patch, HDFS-7863.patch


 HDFS-7573 did refactoring of delete() code. New parameter {{FSDirectory fsd}} 
 is added to resulted methods, but the javadoc is not updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7863) Missing description of some methods and parameters in javadoc of FSDirDeleteOp

2015-04-19 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14501756#comment-14501756
 ] 

Tsuyoshi Ozawa commented on HDFS-7863:
--

+1

 Missing description of some methods and parameters in javadoc of FSDirDeleteOp
 --

 Key: HDFS-7863
 URL: https://issues.apache.org/jira/browse/HDFS-7863
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yongjun Zhang
Assignee: Brahma Reddy Battula
Priority: Minor
 Attachments: HDFS-7863-002.patch, HDFS-7863-003.patch, 
 HDFS-7863-004.patch, HDFS-7863-005.patch, HDFS-7863.patch


 HDFS-7573 did refactoring of delete() code. New parameter {{FSDirectory fsd}} 
 is added to resulted methods, but the javadoc is not updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8043) NPE in MiniDFSCluster teardown

2015-04-17 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-8043:
-
Status: Open  (was: Patch Available)

[~brahmareddy] thank you for taking this issue. I think we should fix 
MiniDFSCluster itself, since the NullPointerException occurs in the 
MiniDFSCluster class rather than in the test class. IIUC, base_dir in 
MiniDFSCluster is null, but base_dir.delete() or base_dir.deleteOnExit() is 
called. Could you check it?

 NPE in MiniDFSCluster teardown
 --

 Key: HDFS-8043
 URL: https://issues.apache.org/jira/browse/HDFS-8043
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
 Environment: jenkins
Reporter: Steve Loughran
Assignee: Brahma Reddy Battula
 Attachments: HDFS-8043.patch


 NPE surfacing in {{MiniDFSCluster.shutdown}} during test teardown 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7863) Missing description of parameter fsd in javadoc

2015-04-17 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14499716#comment-14499716
 ] 

Tsuyoshi Ozawa commented on HDFS-7863:
--

[~brahmareddy] thank you for updating the patch.

{code}
+   * @param src The given path
{code}

Please use lower case in the sentence - in the patch, "The" should be "the". 
Also, we should write more concrete comments here. I think "path name to be 
deleted" looks better than "The given path". Other parts look good to me.

 Missing description of parameter fsd in javadoc 
 

 Key: HDFS-7863
 URL: https://issues.apache.org/jira/browse/HDFS-7863
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yongjun Zhang
Assignee: Brahma Reddy Battula
Priority: Minor
 Attachments: HDFS-7863-002.patch, HDFS-7863-003.patch, HDFS-7863.patch


 HDFS-7573 did refactoring of delete() code. New parameter {{FSDirectory fsd}} 
 is added to resulted methods, but the javadoc is not updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8043) NPE in MiniDFSCluster teardown

2015-04-17 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14500254#comment-14500254
 ] 

Tsuyoshi Ozawa commented on HDFS-8043:
--

[~brahmareddy] thanks for the update.
{code}
-if (deleteDfsDir) {
+if (deleteDfsDir && base_dir != null) {
 base_dir.delete();
 } else {
+  if (base_dir != null) {
 base_dir.deleteOnExit();
+  }
 }
{code}

The condition statements look complex. Could you fix them to avoid the 
nested if-else?
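
One way to simplify the guard, sketched with the fields discussed above 
(illustrative only, not the committed patch):

{code}
if (base_dir != null) {
  if (deleteDfsDir) {
    base_dir.delete();        // remove the cluster directory immediately
  } else {
    base_dir.deleteOnExit();  // defer removal to JVM exit
  }
}
{code}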

 NPE in MiniDFSCluster teardown
 --

 Key: HDFS-8043
 URL: https://issues.apache.org/jira/browse/HDFS-8043
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
 Environment: jenkins
Reporter: Steve Loughran
Assignee: Brahma Reddy Battula
 Attachments: HDFS-8043-002.patch, HDFS-8043.patch


 NPE surfacing in {{MiniDFSCluster.shutdown}} during test teardown 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8043) NPE in MiniDFSCluster teardown

2015-04-17 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-8043:
-
Status: Patch Available  (was: Open)

 NPE in MiniDFSCluster teardown
 --

 Key: HDFS-8043
 URL: https://issues.apache.org/jira/browse/HDFS-8043
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
 Environment: jenkins
Reporter: Steve Loughran
Assignee: Brahma Reddy Battula
 Attachments: HDFS-8043-002.patch, HDFS-8043.patch


 NPE surfacing in {{MiniDFSCluster.shutdown}} during test teardown 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7863) Missing description of parameter fsd in javadoc

2015-04-17 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14500282#comment-14500282
 ] 

Tsuyoshi Ozawa commented on HDFS-7863:
--

[~brahmareddy] thank you for the update. I found an empty @return 
documentation:

{code}
+   * @return
{code}

Could you add a description like "blocks collected from the deleted path" or 
something?
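
A sketch assembling the wording suggested in this review into one javadoc 
block (the summary line is illustrative):

{code}
/**
 * Delete the target directory and collect the blocks under it.
 *
 * @param fsn namespace
 * @param src path name to be deleted
 * @return blocks collected from the deleted path
 */
{code}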


 Missing description of parameter fsd in javadoc 
 

 Key: HDFS-7863
 URL: https://issues.apache.org/jira/browse/HDFS-7863
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yongjun Zhang
Assignee: Brahma Reddy Battula
Priority: Minor
 Attachments: HDFS-7863-002.patch, HDFS-7863-003.patch, 
 HDFS-7863-004.patch, HDFS-7863.patch


 HDFS-7573 refactored the delete() code. A new parameter {{FSDirectory fsd}} 
 was added to the resulting methods, but the javadoc was not updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5019) Cleanup imports in HDFS project

2015-03-31 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388238#comment-14388238
 ] 

Tsuyoshi Ozawa commented on HDFS-5019:
--

Hi [~djp], thank you for updating. Could you rebase the patch?

 Cleanup imports in HDFS project
 ---

 Key: HDFS-5019
 URL: https://issues.apache.org/jira/browse/HDFS-5019
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Junping Du
Assignee: Junping Du
Priority: Minor
 Attachments: HDFS-5019-v2.patch, HDFS-5019.patch


 There are some unused imports in the current code base which cause 
 unnecessary Java warnings. Also, imports should be ordered alphabetically, 
 and import x.x.* is not recommended.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7863) Missing description of parameter fsd in javadoc

2015-03-30 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386419#comment-14386419
 ] 

Tsuyoshi Ozawa commented on HDFS-7863:
--

[~brahmareddy] thank you for updating. Please merge the following comments 
into one comment.

{code}
   * For small directory or file the deletion is done in one shot.
   */
  /**
   * @param fsn namespace
{code}
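
Merged, it might look like this (a sketch based only on the lines shown above):
{code}
  /**
   * For small directory or file the deletion is done in one shot.
   *
   * @param fsn namespace
   */
{code}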

 Missing description of parameter fsd in javadoc 
 

 Key: HDFS-7863
 URL: https://issues.apache.org/jira/browse/HDFS-7863
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yongjun Zhang
Assignee: Brahma Reddy Battula
Priority: Minor
 Attachments: HDFS-7863-002.patch, HDFS-7863.patch


 HDFS-7573 refactored the delete() code. A new parameter {{FSDirectory fsd}} 
 was added to the resulting methods, but the javadoc was not updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7804) haadmin command usage #HDFSHighAvailabilityWithQJM.html

2015-03-27 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-7804:
-
  Resolution: Fixed
   Fix Version/s: 3.0.0
Target Version/s: 3.0.0
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

[~brahmareddy], this issue has already been fixed on branch-2 by HDFS-7668, so 
we can mark this as resolved for trunk only. Thanks for your work. Also, thanks 
Uma for your review.

 haadmin command usage #HDFSHighAvailabilityWithQJM.html
 ---

 Key: HDFS-7804
 URL: https://issues.apache.org/jira/browse/HDFS-7804
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Fix For: 3.0.0

 Attachments: HDFS-7804-002.patch, HDFS-7804-003.patch, 
 HDFS-7804-branch-2-002.patch, HDFS-7804.patch


  *Currently it's given as follows:* 
  *{color:red}Usage: DFSHAAdmin [-ns nameserviceId]{color}* 
 [-transitionToActive serviceId]
 [-transitionToStandby serviceId]
 [-failover [--forcefence] [--forceactive] serviceId serviceId]
 [-getServiceState serviceId]
 [-checkHealth serviceId]
 [-help command]
  *Expected:* 
  *{color:green}hdfs haadmin{color}* 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7863) Missing description of parameter fsd in javadoc

2015-03-27 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383871#comment-14383871
 ] 

Tsuyoshi Ozawa commented on HDFS-7863:
--

[~brahmareddy] thank you for taking this issue. How about also adding 
descriptions of the parameters of delete(FSNamesystem fsn, String src, boolean 
recursive, boolean logRetryCache) and deleteInternal(FSNamesystem fsn, String 
src, INodesInPath iip, boolean logRetryCache)?
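
A sketch of what the added descriptions could look like (the wording is 
illustrative, not taken from the patch):
{code}
  /**
   * Delete the target path.
   *
   * @param fsn namespace
   * @param src the path name to be deleted
   * @param recursive true to delete directory contents recursively
   * @param logRetryCache whether to record the RPC in the retry cache
   */
{code}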

 Missing description of parameter fsd in javadoc 
 

 Key: HDFS-7863
 URL: https://issues.apache.org/jira/browse/HDFS-7863
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yongjun Zhang
Assignee: Brahma Reddy Battula
Priority: Minor
 Attachments: HDFS-7863.patch


 HDFS-7573 refactored the delete() code. A new parameter {{FSDirectory fsd}} 
 was added to the resulting methods, but the javadoc was not updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7978) Add LOG.isDebugEnabled() guard for some LOG.debug(..)

2015-03-25 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14379389#comment-14379389
 ] 

Tsuyoshi Ozawa commented on HDFS-7978:
--

Makes sense. Thank you for the clarification!

 Add LOG.isDebugEnabled() guard for some LOG.debug(..)
 -

 Key: HDFS-7978
 URL: https://issues.apache.org/jira/browse/HDFS-7978
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Walter Su
Assignee: Walter Su
 Attachments: HDFS-7978.001.patch, HDFS-7978.002.patch


 {{isDebugEnabled()}} is optional. But when there are:
 1. lots of String concatenations
 2. complicated function calls
 in the arguments, {{LOG.debug(..)}} should be guarded with 
 {{LOG.isDebugEnabled()}} to avoid unnecessary argument evaluation and improve 
 performance.
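
 For example, a guarded call looks like this (an illustrative sketch; block 
 and replica are placeholder variables, not a call site from the patch):
 {code}
 // Without the guard, the String concatenation and toString() calls run even
 // when debug logging is disabled; the guard makes them conditional.
 if (LOG.isDebugEnabled()) {
   LOG.debug("Processing block " + block + ", replica state: "
       + replica.getState());
 }
 {code}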



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7978) Add LOG.isDebugEnabled() guard for some LOG.debug(..)

2015-03-24 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14378147#comment-14378147
 ] 

Tsuyoshi Ozawa commented on HDFS-7978:
--

[~andrew.wang] [~walter.k.su] it sounds reasonable to me to guard the 
concatenation with LOG.isDebugEnabled too. 

 Add LOG.isDebugEnabled() guard for some LOG.debug(..)
 -

 Key: HDFS-7978
 URL: https://issues.apache.org/jira/browse/HDFS-7978
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Walter Su
Assignee: Walter Su
 Attachments: HDFS-7978.001.patch


 {{isDebugEnabled()}} is optional. But when there are:
 1. lots of String concatenations
 2. complicated function calls
 in the arguments, {{LOG.debug(..)}} should be guarded with 
 {{LOG.isDebugEnabled()}} to avoid unnecessary argument evaluation and improve 
 performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HDFS-7971) mockito's version in hadoop-nfs’ pom.xml shouldn't be specified

2015-03-22 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa moved HADOOP-11735 to HDFS-7971:
---

Component/s: (was: nfs)
 nfs
Key: HDFS-7971  (was: HADOOP-11735)
Project: Hadoop HDFS  (was: Hadoop Common)

 mockito's version in hadoop-nfs’ pom.xml shouldn't be specified
 ---

 Key: HDFS-7971
 URL: https://issues.apache.org/jira/browse/HDFS-7971
 Project: Hadoop HDFS
  Issue Type: Task
  Components: nfs
Reporter: Kengo Seki
Assignee: Kengo Seki
Priority: Minor
 Attachments: HADOOP-11735.001.patch


 It should be removed because hadoop-nfs would otherwise be left behind when 
 the parent pom upgrades mockito.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7971) mockito's version in hadoop-nfs’ pom.xml shouldn't be specified

2015-03-22 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-7971:
-
Issue Type: Improvement  (was: Task)

 mockito's version in hadoop-nfs’ pom.xml shouldn't be specified
 ---

 Key: HDFS-7971
 URL: https://issues.apache.org/jira/browse/HDFS-7971
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Reporter: Kengo Seki
Assignee: Kengo Seki
Priority: Minor
 Attachments: HADOOP-11735.001.patch


 It should be removed because hadoop-nfs would otherwise be left behind when 
 the parent pom upgrades mockito.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6833) DirectoryScanner should not register a deleting block with memory of DataNode

2015-03-14 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361663#comment-14361663
 ] 

Tsuyoshi Ozawa commented on HDFS-6833:
--

Thanks [~yamashitasni] for working hard on this, and thanks [~szetszwo], 
[~yzhangal], [~cnauroth], [~sureshms], [~iwasakims] for your review!

 DirectoryScanner should not register a deleting block with memory of DataNode
 -

 Key: HDFS-6833
 URL: https://issues.apache.org/jira/browse/HDFS-6833
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.5.0, 2.5.1
Reporter: Shinichi Yamashita
Assignee: Shinichi Yamashita
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-6833-10.patch, HDFS-6833-11.patch, 
 HDFS-6833-12.patch, HDFS-6833-13.patch, HDFS-6833-14.patch, 
 HDFS-6833-15.patch, HDFS-6833-16.patch, HDFS-6833-6-2.patch, 
 HDFS-6833-6-3.patch, HDFS-6833-6.patch, HDFS-6833-7-2.patch, 
 HDFS-6833-7.patch, HDFS-6833.8.patch, HDFS-6833.9.patch, HDFS-6833.patch, 
 HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch


 When a block is deleted in DataNode, the following messages are usually 
 output.
 {code}
 2014-08-07 17:53:11,606 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:11,617 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 However, DirectoryScanner may run while DataNode is deleting the block in 
 the current implementation, and the following messages are output.
 {code}
 2014-08-07 17:53:30,519 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:31,426 INFO 
 org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
 BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata 
 files:0, missing block files:0, missing blocks in memory:1, mismatched 
 blocks:0
 2014-08-07 17:53:31,426 WARN 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
 missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
   getNumBytes() = 21230663
   getBytesOnDisk()  = 21230663
   getVisibleLength()= 21230663
   getVolume()   = /hadoop/data1/dfs/data/current
   getBlockFile()= 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
   unlinked  =false
 2014-08-07 17:53:31,531 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 The information of the block being deleted is re-registered in DataNode's 
 memory, so when DataNode sends a block report, NameNode receives wrong block 
 information.
 For example, when we execute recommission or change the replication factor, 
 NameNode may delete the right block as an excess replica because of this 
 problem, and Under-Replicated Blocks and Missing Blocks occur.
 When DataNode runs DirectoryScanner, it should not register a block that is 
 being deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7927) Fluentd unable to write events to MaprFS using httpfs

2015-03-13 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360751#comment-14360751
 ] 

Tsuyoshi Ozawa commented on HDFS-7927:
--

s/webhdfs/fluent-plugin-webhdfs/

 Fluentd unable to write events to MaprFS using httpfs
 -

 Key: HDFS-7927
 URL: https://issues.apache.org/jira/browse/HDFS-7927
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.1
 Environment: mapr 4.0.1
Reporter: Roman Slysh
 Fix For: 2.4.1

 Attachments: HDFS-7927.patch


 The issue is on the MaprFS file system. It can probably be reproduced on 
 HDFS, but we are not sure. 
 We have observed in the td-agent log that whenever the webhdfs plugin flushes 
 events, it calls append instead of creating the file on MaprFS via webhdfs. 
 We need to modify this plugin to create the file and then append data to it; 
 manually creating the file is not a solution, because many log events are 
 written to the filesystem and need to rotate on a timely basis.
 http://docs.fluentd.org/articles/http-to-hdfs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7927) Fluentd unable to write events to MaprFS using httpfs

2015-03-13 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360749#comment-14360749
 ] 

Tsuyoshi Ozawa commented on HDFS-7927:
--

[~rslysh] the fluent-plugin-webhdfs code looks like it does the same thing on 
the td-agent side: 
https://github.com/fluent/fluent-plugin-webhdfs/blob/master/lib/fluent/plugin/out_webhdfs.rb#L220

Doesn't this work correctly for you?

 Fluentd unable to write events to MaprFS using httpfs
 -

 Key: HDFS-7927
 URL: https://issues.apache.org/jira/browse/HDFS-7927
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.1
 Environment: mapr 4.0.1
Reporter: Roman Slysh
 Fix For: 2.4.1

 Attachments: HDFS-7927.patch


 The issue is on the MaprFS file system. It can probably be reproduced on 
 HDFS, but we are not sure. 
 We have observed in the td-agent log that whenever the webhdfs plugin flushes 
 events, it calls append instead of creating the file on MaprFS via webhdfs. 
 We need to modify this plugin to create the file and then append data to it; 
 manually creating the file is not a solution, because many log events are 
 written to the filesystem and need to rotate on a timely basis.
 http://docs.fluentd.org/articles/http-to-hdfs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6833) DirectoryScanner should not register a deleting block with memory of DataNode

2015-03-12 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357143#comment-14357143
 ] 

Tsuyoshi Ozawa commented on HDFS-6833:
--

[~szetszwo] could you commit this?

 DirectoryScanner should not register a deleting block with memory of DataNode
 -

 Key: HDFS-6833
 URL: https://issues.apache.org/jira/browse/HDFS-6833
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.5.0, 2.5.1
Reporter: Shinichi Yamashita
Assignee: Shinichi Yamashita
Priority: Critical
 Attachments: HDFS-6833-10.patch, HDFS-6833-11.patch, 
 HDFS-6833-12.patch, HDFS-6833-13.patch, HDFS-6833-14.patch, 
 HDFS-6833-15.patch, HDFS-6833-16.patch, HDFS-6833-6-2.patch, 
 HDFS-6833-6-3.patch, HDFS-6833-6.patch, HDFS-6833-7-2.patch, 
 HDFS-6833-7.patch, HDFS-6833.8.patch, HDFS-6833.9.patch, HDFS-6833.patch, 
 HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch


 When a block is deleted in DataNode, the following messages are usually 
 output.
 {code}
 2014-08-07 17:53:11,606 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:11,617 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 However, DirectoryScanner may run while DataNode is deleting the block in 
 the current implementation, and the following messages are output.
 {code}
 2014-08-07 17:53:30,519 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:31,426 INFO 
 org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
 BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata 
 files:0, missing block files:0, missing blocks in memory:1, mismatched 
 blocks:0
 2014-08-07 17:53:31,426 WARN 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
 missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
   getNumBytes() = 21230663
   getBytesOnDisk()  = 21230663
   getVisibleLength()= 21230663
   getVolume()   = /hadoop/data1/dfs/data/current
   getBlockFile()= 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
   unlinked  =false
 2014-08-07 17:53:31,531 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 The information of the block being deleted is re-registered in DataNode's 
 memory, so when DataNode sends a block report, NameNode receives wrong block 
 information.
 For example, when we execute recommission or change the replication factor, 
 NameNode may delete the right block as an excess replica because of this 
 problem, and Under-Replicated Blocks and Missing Blocks occur.
 When DataNode runs DirectoryScanner, it should not register a block that is 
 being deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7312) Update DistCp v1 to optionally not use tmp location (branch-1 only)

2015-03-01 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-7312:
-
Affects Version/s: (was: 2.5.1)
   1.2.1

 Update DistCp v1 to optionally not use tmp location (branch-1 only)
 ---

 Key: HDFS-7312
 URL: https://issues.apache.org/jira/browse/HDFS-7312
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 1.2.1
Reporter: Joseph Prosser
Assignee: Joseph Prosser
Priority: Minor
 Attachments: HDFS-7312.001.patch, HDFS-7312.002.patch, 
 HDFS-7312.003.patch, HDFS-7312.004.patch, HDFS-7312.005.patch, 
 HDFS-7312.006.patch, HDFS-7312.007.patch, HDFS-7312.patch

   Original Estimate: 72h
  Remaining Estimate: 72h

 DistCp v1 currently copies files to a tmp location and then renames that to 
 the specified destination.  This can cause performance issues on filesystems 
 such as S3.  A -skiptmp flag will be added to bypass this step and copy 
 directly to the destination.  This feature mirrors a similar one added to 
 HBase ExportSnapshot 
 [HBASE-9|https://issues.apache.org/jira/browse/HBASE-9]
 NOTE: This is a branch-1 change only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7312) Update DistCp v1 to optionally not use tmp location (branch-1 only)

2015-03-01 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-7312:
-
Target Version/s: 1.3.0  (was: 2.5.1)

 Update DistCp v1 to optionally not use tmp location (branch-1 only)
 ---

 Key: HDFS-7312
 URL: https://issues.apache.org/jira/browse/HDFS-7312
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 1.2.1
Reporter: Joseph Prosser
Assignee: Joseph Prosser
Priority: Minor
 Attachments: HDFS-7312.001.patch, HDFS-7312.002.patch, 
 HDFS-7312.003.patch, HDFS-7312.004.patch, HDFS-7312.005.patch, 
 HDFS-7312.006.patch, HDFS-7312.007.patch, HDFS-7312.patch

   Original Estimate: 72h
  Remaining Estimate: 72h

 DistCp v1 currently copies files to a tmp location and then renames that to 
 the specified destination.  This can cause performance issues on filesystems 
 such as S3.  A -skiptmp flag will be added to bypass this step and copy 
 directly to the destination.  This feature mirrors a similar one added to 
 HBase ExportSnapshot 
 [HBASE-9|https://issues.apache.org/jira/browse/HBASE-9]
 NOTE: This is a branch-1 change only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7008) xlator should be closed upon exit from DFSAdmin#genericRefresh()

2015-02-24 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HDFS-7008:
-
  Resolution: Fixed
   Fix Version/s: 2.7.0
Target Version/s: 2.7.0
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~ted_yu] for your report and 
review.

 xlator should be closed upon exit from DFSAdmin#genericRefresh()
 

 Key: HDFS-7008
 URL: https://issues.apache.org/jira/browse/HDFS-7008
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Tsuyoshi OZAWA
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7008.1.patch, HDFS-7008.2.patch


 {code}
 GenericRefreshProtocol xlator =
   new GenericRefreshProtocolClientSideTranslatorPB(proxy);
 // Refresh
 Collection<RefreshResponse> responses = xlator.refresh(identifier, args);
 {code}
 GenericRefreshProtocolClientSideTranslatorPB#close() should be called on 
 xlator before return.
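
 A minimal sketch of the suggested fix (assuming close() releases the 
 underlying RPC proxy; proxy, identifier, and args are placeholders taken from 
 the snippet above):
 {code}
 GenericRefreshProtocolClientSideTranslatorPB xlator =
     new GenericRefreshProtocolClientSideTranslatorPB(proxy);
 try {
   // Refresh
   Collection<RefreshResponse> responses = xlator.refresh(identifier, args);
   // ... print or otherwise handle the responses ...
 } finally {
   xlator.close();  // always release the translator before returning
 }
 {code}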



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7008) xlator should be closed upon exit from DFSAdmin#genericRefresh()

2015-02-23 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14334409#comment-14334409
 ] 

Tsuyoshi OZAWA commented on HDFS-7008:
--

The test failure looks unrelated to the patch - it passed locally. Committing 
this shortly.

 xlator should be closed upon exit from DFSAdmin#genericRefresh()
 

 Key: HDFS-7008
 URL: https://issues.apache.org/jira/browse/HDFS-7008
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Tsuyoshi OZAWA
Priority: Minor
 Attachments: HDFS-7008.1.patch, HDFS-7008.2.patch


 {code}
 GenericRefreshProtocol xlator =
   new GenericRefreshProtocolClientSideTranslatorPB(proxy);
 // Refresh
 Collection<RefreshResponse> responses = xlator.refresh(identifier, args);
 {code}
 GenericRefreshProtocolClientSideTranslatorPB#close() should be called on 
 xlator before return.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7008) xlator should be closed upon exit from DFSAdmin#genericRefresh()

2015-02-22 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HDFS-7008:
-
Attachment: HDFS-7008.2.patch

Refreshed the patch.

 xlator should be closed upon exit from DFSAdmin#genericRefresh()
 

 Key: HDFS-7008
 URL: https://issues.apache.org/jira/browse/HDFS-7008
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Tsuyoshi OZAWA
Priority: Minor
 Attachments: HDFS-7008.1.patch, HDFS-7008.2.patch


 {code}
 GenericRefreshProtocol xlator =
   new GenericRefreshProtocolClientSideTranslatorPB(proxy);
 // Refresh
 Collection<RefreshResponse> responses = xlator.refresh(identifier, args);
 {code}
 GenericRefreshProtocolClientSideTranslatorPB#close() should be called on 
 xlator before return.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7704) DN heartbeat to Active NN may be blocked and expire if connection to Standby NN continues to time out.

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318563#comment-14318563
 ] 

Tsuyoshi OZAWA commented on HDFS-7704:
--

As a temporary solution, I'll revert this commit shortly.

 DN heartbeat to Active NN may be blocked and expire if connection to Standby 
 NN continues to time out. 
 ---

 Key: HDFS-7704
 URL: https://issues.apache.org/jira/browse/HDFS-7704
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode
Affects Versions: 2.5.0
Reporter: Rushabh S Shah
Assignee: Rushabh S Shah
 Fix For: 2.7.0

 Attachments: HDFS-7704-v2.patch, HDFS-7704-v3.patch, 
 HDFS-7704-v4.patch, HDFS-7704-v5.patch, HDFS-7704.patch


 There are a couple of synchronous calls in BPOfferService (i.e. reportBadBlocks 
 and trySendErrorReport) which wait for both of the actor threads to 
 process these calls.
 These calls are made with the writeLock acquired.
 When reportBadBlocks() is blocked at the RPC layer due to an unreachable NN, 
 subsequent heartbeat response processing has to wait for the write lock. It 
 eventually gets through, but takes too long and blocks the next heartbeat.
 In our HA cluster setup, the standby namenode was taking a long time to 
 process the request.
 Requesting an improvement in the datanode to make the above calls 
 asynchronous, since these reports don't have any specific deadlines; a few 
 extra seconds of delay should be acceptable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7704) DN heartbeat to Active NN may be blocked and expire if connection to Standby NN continues to time out.

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318577#comment-14318577
 ] 

Tsuyoshi OZAWA commented on HDFS-7704:
--

[~kihwal] fixed this problem. Thank you for dealing with it!

 DN heartbeat to Active NN may be blocked and expire if connection to Standby 
 NN continues to time out. 
 ---

 Key: HDFS-7704
 URL: https://issues.apache.org/jira/browse/HDFS-7704
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode
Affects Versions: 2.5.0
Reporter: Rushabh S Shah
Assignee: Rushabh S Shah
 Fix For: 2.7.0

 Attachments: HDFS-7704-v2.patch, HDFS-7704-v3.patch, 
 HDFS-7704-v4.patch, HDFS-7704-v5.patch, HDFS-7704.patch


 There are a couple of synchronous calls in BPOfferService (i.e. reportBadBlocks 
 and trySendErrorReport) which wait for both of the actor threads to 
 process these calls.
 These calls are made with the writeLock acquired.
 When reportBadBlocks() is blocked at the RPC layer due to an unreachable NN, 
 subsequent heartbeat response processing has to wait for the write lock. It 
 eventually gets through, but takes too long and blocks the next heartbeat.
 In our HA cluster setup, the standby namenode was taking a long time to 
 process the request.
 Requesting an improvement in the datanode to make the above calls 
 asynchronous, since these reports don't have any specific deadlines; a few 
 extra seconds of delay should be acceptable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7709) Fix Findbug Warnings in httpfs

2015-02-05 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14306873#comment-14306873
 ] 

Tsuyoshi OZAWA commented on HDFS-7709:
--

Looks good to me. Pending Jenkins.

 Fix Findbug Warnings in httpfs
 --

 Key: HDFS-7709
 URL: https://issues.apache.org/jira/browse/HDFS-7709
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-7709.patch, HDFS-7709.patch


 There are many findbug warnings related to the warning types, 
 - DM_DEFAULT_ENCODING, 
 - RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE,
 - RCN_REDUNDANT_NULLCHECK_WOULD_HAVE_BEEN_A_NPE
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-httpfs.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-rumen.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-mapreduce-client-core.html
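
 For reference, a DM_DEFAULT_ENCODING warning is typically fixed by passing 
 the charset explicitly instead of relying on the platform default (an 
 illustrative sketch, not the actual patch):
 {code}
 import java.io.OutputStream;
 import java.io.OutputStreamWriter;
 import java.io.Writer;
 import java.nio.charset.StandardCharsets;

 class EncodingFix {
   // new OutputStreamWriter(out) uses the platform default charset, which
   // findbugs flags as DM_DEFAULT_ENCODING; naming the charset silences it.
   static Writer newWriter(OutputStream out) {
     return new OutputStreamWriter(out, StandardCharsets.UTF_8);
   }
 }
 {code}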



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7709) Fix Findbug Warnings in httpfs

2015-02-05 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307043#comment-14307043
 ] 

Tsuyoshi OZAWA commented on HDFS-7709:
--

+1

 Fix Findbug Warnings in httpfs
 --

 Key: HDFS-7709
 URL: https://issues.apache.org/jira/browse/HDFS-7709
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-7709.patch, HDFS-7709.patch, HDFS-7709.patch


 There are many findbug warnings related to the warning types, 
 - DM_DEFAULT_ENCODING, 
 - RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE,
 - RCN_REDUNDANT_NULLCHECK_WOULD_HAVE_BEEN_A_NPE
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-httpfs.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-rumen.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-mapreduce-client-core.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7709) Fix findbug warnings in httpfs

2015-02-05 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307054#comment-14307054
 ] 

Tsuyoshi OZAWA commented on HDFS-7709:
--

Committed this to trunk and branch-2. Thanks [~rakeshr] for your contribution 
and thanks [~cmccabe] for your comment.

 Fix findbug warnings in httpfs
 --

 Key: HDFS-7709
 URL: https://issues.apache.org/jira/browse/HDFS-7709
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-7709.patch, HDFS-7709.patch, HDFS-7709.patch


 There are many findbug warnings related to the warning types, 
 - DM_DEFAULT_ENCODING, 
 - RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE,
 - RCN_REDUNDANT_NULLCHECK_WOULD_HAVE_BEEN_A_NPE
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-httpfs.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-rumen.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-mapreduce-client-core.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7709) Fix findbug warnings in httpfs

2015-02-05 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HDFS-7709:
-
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

 Fix findbug warnings in httpfs
 --

 Key: HDFS-7709
 URL: https://issues.apache.org/jira/browse/HDFS-7709
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 2.7.0

 Attachments: HDFS-7709.patch, HDFS-7709.patch, HDFS-7709.patch


 There are many findbug warnings related to the warning types, 
 - DM_DEFAULT_ENCODING, 
 - RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE,
 - RCN_REDUNDANT_NULLCHECK_WOULD_HAVE_BEEN_A_NPE
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-httpfs.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-rumen.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-mapreduce-client-core.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7709) Fix findbug warnings in httpfs

2015-02-05 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HDFS-7709:
-
Summary: Fix findbug warnings in httpfs  (was: Fix Findbug Warnings in 
httpfs)

 Fix findbug warnings in httpfs
 --

 Key: HDFS-7709
 URL: https://issues.apache.org/jira/browse/HDFS-7709
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-7709.patch, HDFS-7709.patch, HDFS-7709.patch


 There are many findbug warnings related to the warning types, 
 - DM_DEFAULT_ENCODING, 
 - RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE,
 - RCN_REDUNDANT_NULLCHECK_WOULD_HAVE_BEEN_A_NPE
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-httpfs.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-rumen.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-mapreduce-client-core.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7709) Fix Findbug Warnings in httpfs

2015-02-05 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14306891#comment-14306891
 ] 

Tsuyoshi OZAWA commented on HDFS-7709:
--

JSONProvider should also be fixed, as findbugs mentioned. [~rakeshr] could you 
update the patch?

 Fix Findbug Warnings in httpfs
 --

 Key: HDFS-7709
 URL: https://issues.apache.org/jira/browse/HDFS-7709
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-7709.patch, HDFS-7709.patch


 There are many findbug warnings related to the warning types, 
 - DM_DEFAULT_ENCODING, 
 - RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE,
 - RCN_REDUNDANT_NULLCHECK_WOULD_HAVE_BEEN_A_NPE
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-httpfs.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-rumen.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-mapreduce-client-core.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7709) Fix findbug warnings in httpfs

2015-02-05 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HDFS-7709:
-
Affects Version/s: 2.6.0

 Fix findbug warnings in httpfs
 --

 Key: HDFS-7709
 URL: https://issues.apache.org/jira/browse/HDFS-7709
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 2.7.0

 Attachments: HDFS-7709.patch, HDFS-7709.patch, HDFS-7709.patch


 There are many findbug warnings related to the warning types, 
 - DM_DEFAULT_ENCODING, 
 - RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE,
 - RCN_REDUNDANT_NULLCHECK_WOULD_HAVE_BEEN_A_NPE
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-httpfs.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-rumen.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-mapreduce-client-core.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7376) Upgrade jsch lib to jsch-0.1.51 to avoid problems running on java7

2015-01-28 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HDFS-7376:
-
Assignee: Tsuyoshi OZAWA
  Status: Patch Available  (was: Open)

 Upgrade jsch lib to jsch-0.1.51 to avoid problems running on java7
 --

 Key: HDFS-7376
 URL: https://issues.apache.org/jira/browse/HDFS-7376
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Reporter: Johannes Zillmann
Assignee: Tsuyoshi OZAWA
 Attachments: HDFS-7376.1.patch


 We had an application sitting on top of Hadoop and ran into problems using 
 jsch once we switched to Java 7. We got this exception:
 {noformat}
  com.jcraft.jsch.JSchException: verify: false
   at com.jcraft.jsch.Session.connect(Session.java:330)
   at com.jcraft.jsch.Session.connect(Session.java:183)
 {noformat}
 Upgrading to jsch-0.1.51 from jsch-0.1.49 fixed the issue for us, but then it 
 conflicted with Hadoop's jsch version (we fixed this on our side by 
 jarjar'ing our jsch version).
 I think jsch was introduced by NameNode HA (HDFS-1623). So you should 
 check whether the ssh part works properly on Java 7, or preventively upgrade 
 the jsch lib to jsch-0.1.51!
 Some references to problems reported:
 - 
 http://sourceforge.net/p/jsch/mailman/jsch-users/thread/loom.20131009t211650-...@post.gmane.org/
 - https://issues.apache.org/bugzilla/show_bug.cgi?id=53437



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7376) Upgrade jsch lib to jsch-0.1.51 to avoid problems running on java7

2015-01-28 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HDFS-7376:
-
Attachment: HDFS-7376.1.patch

Attaching first patch.

 Upgrade jsch lib to jsch-0.1.51 to avoid problems running on java7
 --

 Key: HDFS-7376
 URL: https://issues.apache.org/jira/browse/HDFS-7376
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Reporter: Johannes Zillmann
 Attachments: HDFS-7376.1.patch


 We had an application sitting on top of Hadoop and ran into problems using 
 jsch once we switched to Java 7. We got this exception:
 {noformat}
  com.jcraft.jsch.JSchException: verify: false
   at com.jcraft.jsch.Session.connect(Session.java:330)
   at com.jcraft.jsch.Session.connect(Session.java:183)
 {noformat}
 Upgrading to jsch-0.1.51 from jsch-0.1.49 fixed the issue for us, but then it 
 conflicted with Hadoop's jsch version (we fixed this on our side by 
 jarjar'ing our jsch version).
 I think jsch was introduced by NameNode HA (HDFS-1623). So you should 
 check whether the ssh part works properly on Java 7, or preventively upgrade 
 the jsch lib to jsch-0.1.51!
 Some references to problems reported:
 - 
 http://sourceforge.net/p/jsch/mailman/jsch-users/thread/loom.20131009t211650-...@post.gmane.org/
 - https://issues.apache.org/bugzilla/show_bug.cgi?id=53437



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6833) DirectoryScanner should not register a deleting block with memory of DataNode

2015-01-16 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14280120#comment-14280120
 ] 

Tsuyoshi OZAWA commented on HDFS-6833:
--

[~szetszwo] [~cnauroth] sorry for the repeated ping, but could you take a look 
at the latest patch (HDFS-6833-14.patch)?

 DirectoryScanner should not register a deleting block with memory of DataNode
 -

 Key: HDFS-6833
 URL: https://issues.apache.org/jira/browse/HDFS-6833
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.5.0, 2.5.1
Reporter: Shinichi Yamashita
Assignee: Shinichi Yamashita
Priority: Critical
 Attachments: HDFS-6833-10.patch, HDFS-6833-11.patch, 
 HDFS-6833-12.patch, HDFS-6833-13.patch, HDFS-6833-14.patch, 
 HDFS-6833-6-2.patch, HDFS-6833-6-3.patch, HDFS-6833-6.patch, 
 HDFS-6833-7-2.patch, HDFS-6833-7.patch, HDFS-6833.8.patch, HDFS-6833.9.patch, 
 HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch, 
 HDFS-6833.patch


 When a block is deleted in DataNode, the following messages are usually 
 output.
 {code}
 2014-08-07 17:53:11,606 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:11,617 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 However, DirectoryScanner may run while DataNode is deleting the block in 
 the current implementation, and the following messages are output.
 {code}
 2014-08-07 17:53:30,519 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:31,426 INFO 
 org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
 BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata 
 files:0, missing block files:0, missing blocks in memory:1, mismatched 
 blocks:0
 2014-08-07 17:53:31,426 WARN 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
 missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
   getNumBytes() = 21230663
   getBytesOnDisk()  = 21230663
   getVisibleLength()= 21230663
   getVolume()   = /hadoop/data1/dfs/data/current
   getBlockFile()= 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
   unlinked  =false
 2014-08-07 17:53:31,531 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 The information of the block being deleted is re-registered in DataNode's 
 memory, so when DataNode sends a block report, NameNode receives wrong block 
 information.
 For example, when we execute recommission or change the replication factor, 
 NameNode may delete the right block as an excess replica because of this 
 problem, and Under-Replicated Blocks and Missing Blocks occur.
 When DataNode runs DirectoryScanner, it should not register a block that is 
 being deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6833) DirectoryScanner should not register a deleting block with memory of DataNode

2014-12-20 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14254995#comment-14254995
 ] 

Tsuyoshi OZAWA commented on HDFS-6833:
--

[~szetszwo] [~cnauroth] do you mind taking a look?

 DirectoryScanner should not register a deleting block with memory of DataNode
 -

 Key: HDFS-6833
 URL: https://issues.apache.org/jira/browse/HDFS-6833
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.5.0, 2.5.1
Reporter: Shinichi Yamashita
Assignee: Shinichi Yamashita
Priority: Critical
 Attachments: HDFS-6833-10.patch, HDFS-6833-11.patch, 
 HDFS-6833-12.patch, HDFS-6833-13.patch, HDFS-6833-6-2.patch, 
 HDFS-6833-6-3.patch, HDFS-6833-6.patch, HDFS-6833-7-2.patch, 
 HDFS-6833-7.patch, HDFS-6833.8.patch, HDFS-6833.9.patch, HDFS-6833.patch, 
 HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch


 When a block is deleted in DataNode, the following messages are usually 
 output.
 {code}
 2014-08-07 17:53:11,606 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:11,617 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 However, DirectoryScanner may run while DataNode is deleting the block in 
 the current implementation, and the following messages are output.
 {code}
 2014-08-07 17:53:30,519 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:31,426 INFO 
 org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
 BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata 
 files:0, missing block files:0, missing blocks in memory:1, mismatched 
 blocks:0
 2014-08-07 17:53:31,426 WARN 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
 missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
   getNumBytes() = 21230663
   getBytesOnDisk()  = 21230663
   getVisibleLength()= 21230663
   getVolume()   = /hadoop/data1/dfs/data/current
   getBlockFile()= 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
   unlinked  =false
 2014-08-07 17:53:31,531 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 The information of the block being deleted is re-registered in DataNode's 
 memory, so when DataNode sends a block report, NameNode receives wrong block 
 information.
 For example, when we execute recommission or change the replication factor, 
 NameNode may delete the right block as an excess replica because of this 
 problem, and Under-Replicated Blocks and Missing Blocks occur.
 When DataNode runs DirectoryScanner, it should not register a block that is 
 being deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6833) DirectoryScanner should not register a deleting block with memory of DataNode

2014-11-20 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14220411#comment-14220411
 ] 

Tsuyoshi OZAWA commented on HDFS-6833:
--

Yongjun, Suresh, Chris, Shinichi, Masatake, thanks for taking this JIRA and 
for your comments. Should we target this change for 2.6.0 or 2.6.1? What do 
you think?

 DirectoryScanner should not register a deleting block with memory of DataNode
 -

 Key: HDFS-6833
 URL: https://issues.apache.org/jira/browse/HDFS-6833
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.5.0, 2.5.1
Reporter: Shinichi Yamashita
Assignee: Shinichi Yamashita
Priority: Critical
 Attachments: HDFS-6833-6-2.patch, HDFS-6833-6-3.patch, 
 HDFS-6833-6.patch, HDFS-6833-7-2.patch, HDFS-6833-7.patch, HDFS-6833.8.patch, 
 HDFS-6833.9.patch, HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch, 
 HDFS-6833.patch, HDFS-6833.patch


 When a block is deleted in DataNode, the following messages are usually 
 output.
 {code}
 2014-08-07 17:53:11,606 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:11,617 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 However, DirectoryScanner may run while DataNode is deleting the block in 
 the current implementation, and the following messages are output.
 {code}
 2014-08-07 17:53:30,519 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:31,426 INFO 
 org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
 BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata 
 files:0, missing block files:0, missing blocks in memory:1, mismatched 
 blocks:0
 2014-08-07 17:53:31,426 WARN 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
 missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
   getNumBytes() = 21230663
   getBytesOnDisk()  = 21230663
   getVisibleLength()= 21230663
   getVolume()   = /hadoop/data1/dfs/data/current
   getBlockFile()= 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
   unlinked  =false
 2014-08-07 17:53:31,531 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 The information of the block being deleted is re-registered in DataNode's 
 memory, so when DataNode sends a block report, NameNode receives wrong block 
 information.
 For example, when we execute recommission or change the replication factor, 
 NameNode may delete the right block as an excess replica because of this 
 problem, and Under-Replicated Blocks and Missing Blocks occur.
 When DataNode runs DirectoryScanner, it should not register a block that is 
 being deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6833) DirectoryScanner should not register a deleting block with memory of DataNode

2014-11-20 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14220414#comment-14220414
 ] 

Tsuyoshi OZAWA commented on HDFS-6833:
--

Oops, I overlooked that 2.6.0 is now being released. Let's target this at 
branch-2 and 2.6.1. Thanks!

 DirectoryScanner should not register a deleting block with memory of DataNode
 -

 Key: HDFS-6833
 URL: https://issues.apache.org/jira/browse/HDFS-6833
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.5.0, 2.5.1
Reporter: Shinichi Yamashita
Assignee: Shinichi Yamashita
Priority: Critical
 Attachments: HDFS-6833-6-2.patch, HDFS-6833-6-3.patch, 
 HDFS-6833-6.patch, HDFS-6833-7-2.patch, HDFS-6833-7.patch, HDFS-6833.8.patch, 
 HDFS-6833.9.patch, HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch, 
 HDFS-6833.patch, HDFS-6833.patch


 When a block is deleted in DataNode, the following messages are usually 
 output.
 {code}
 2014-08-07 17:53:11,606 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:11,617 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 However, DirectoryScanner may run while DataNode is deleting the block in 
 the current implementation, and the following messages are output.
 {code}
 2014-08-07 17:53:30,519 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:31,426 INFO 
 org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
 BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata 
 files:0, missing block files:0, missing blocks in memory:1, mismatched 
 blocks:0
 2014-08-07 17:53:31,426 WARN 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
 missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
   getNumBytes() = 21230663
   getBytesOnDisk()  = 21230663
   getVisibleLength()= 21230663
   getVolume()   = /hadoop/data1/dfs/data/current
   getBlockFile()= 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
   unlinked  =false
 2014-08-07 17:53:31,531 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 The information of the block being deleted is re-registered in DataNode's 
 memory, so when DataNode sends a block report, NameNode receives wrong block 
 information.
 For example, when we execute recommission or change the replication factor, 
 NameNode may delete the right block as an excess replica because of this 
 problem, and Under-Replicated Blocks and Missing Blocks occur.
 When DataNode runs DirectoryScanner, it should not register a block that is 
 being deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6833) DirectoryScanner should not register a deleting block with memory of DataNode

2014-11-17 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HDFS-6833:
-
Affects Version/s: 2.5.1

 DirectoryScanner should not register a deleting block with memory of DataNode
 -

 Key: HDFS-6833
 URL: https://issues.apache.org/jira/browse/HDFS-6833
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.5.0, 2.5.1
Reporter: Shinichi Yamashita
Assignee: Shinichi Yamashita
Priority: Critical
 Attachments: HDFS-6833-6-2.patch, HDFS-6833-6-3.patch, 
 HDFS-6833-6.patch, HDFS-6833-7-2.patch, HDFS-6833-7.patch, HDFS-6833.patch, 
 HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch


 When a block is deleted in DataNode, the following messages are usually 
 output.
 {code}
 2014-08-07 17:53:11,606 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:11,617 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 However, in the current implementation DirectoryScanner may run while the 
 DataNode is deleting the block, and the following messages are output.
 {code}
 2014-08-07 17:53:30,519 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:31,426 INFO 
 org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
 BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata 
 files:0, missing block files:0, missing blocks in memory:1, mismatched 
 blocks:0
 2014-08-07 17:53:31,426 WARN 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
 missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
   getNumBytes() = 21230663
   getBytesOnDisk()  = 21230663
   getVisibleLength()= 21230663
   getVolume()   = /hadoop/data1/dfs/data/current
   getBlockFile()= 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
   unlinked  =false
 2014-08-07 17:53:31,531 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 The information for the block being deleted gets registered back into the 
 DataNode's memory.
 And when the DataNode then sends a block report, the NameNode receives wrong 
 block information.
 For example, when we recommission a node or change the replication factor, 
 the NameNode may delete a valid block as ExcessReplicate because of this 
 problem, and Under-Replicated Blocks and Missing Blocks occur.
 When the DataNode runs DirectoryScanner, it should not register a block that 
 is being deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6833) DirectoryScanner should not register a deleting block with memory of DataNode

2014-11-17 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HDFS-6833:
-
Attachment: HDFS-6833.8.patch

Refreshing Shinichi's patch for trunk.

 DirectoryScanner should not register a deleting block with memory of DataNode
 -

 Key: HDFS-6833
 URL: https://issues.apache.org/jira/browse/HDFS-6833
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.5.0, 2.5.1
Reporter: Shinichi Yamashita
Assignee: Shinichi Yamashita
Priority: Critical
 Attachments: HDFS-6833-6-2.patch, HDFS-6833-6-3.patch, 
 HDFS-6833-6.patch, HDFS-6833-7-2.patch, HDFS-6833-7.patch, HDFS-6833.8.patch, 
 HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch, 
 HDFS-6833.patch


 When a block is deleted in DataNode, the following messages are usually 
 output.
 {code}
 2014-08-07 17:53:11,606 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:11,617 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 However, in the current implementation DirectoryScanner may run while the 
 DataNode is deleting the block, and the following messages are output.
 {code}
 2014-08-07 17:53:30,519 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:31,426 INFO 
 org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
 BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata 
 files:0, missing block files:0, missing blocks in memory:1, mismatched 
 blocks:0
 2014-08-07 17:53:31,426 WARN 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
 missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
   getNumBytes() = 21230663
   getBytesOnDisk()  = 21230663
   getVisibleLength()= 21230663
   getVolume()   = /hadoop/data1/dfs/data/current
   getBlockFile()= 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
   unlinked  =false
 2014-08-07 17:53:31,531 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 The information for the block being deleted gets registered back into the 
 DataNode's memory.
 And when the DataNode then sends a block report, the NameNode receives wrong 
 block information.
 For example, when we recommission a node or change the replication factor, 
 the NameNode may delete a valid block as ExcessReplicate because of this 
 problem, and Under-Replicated Blocks and Missing Blocks occur.
 When the DataNode runs DirectoryScanner, it should not register a block that 
 is being deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6833) DirectoryScanner should not register a deleting block with memory of DataNode

2014-11-17 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214665#comment-14214665
 ] 

Tsuyoshi OZAWA commented on HDFS-6833:
--

[~ajisakaa] Shinichi is busy now, so I rebased the patch on his behalf. 

 DirectoryScanner should not register a deleting block with memory of DataNode
 -

 Key: HDFS-6833
 URL: https://issues.apache.org/jira/browse/HDFS-6833
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.5.0, 2.5.1
Reporter: Shinichi Yamashita
Assignee: Shinichi Yamashita
Priority: Critical
 Attachments: HDFS-6833-6-2.patch, HDFS-6833-6-3.patch, 
 HDFS-6833-6.patch, HDFS-6833-7-2.patch, HDFS-6833-7.patch, HDFS-6833.8.patch, 
 HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch, 
 HDFS-6833.patch


 When a block is deleted in DataNode, the following messages are usually 
 output.
 {code}
 2014-08-07 17:53:11,606 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:11,617 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 However, in the current implementation DirectoryScanner may run while the 
 DataNode is deleting the block, and the following messages are output.
 {code}
 2014-08-07 17:53:30,519 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:31,426 INFO 
 org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
 BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata 
 files:0, missing block files:0, missing blocks in memory:1, mismatched 
 blocks:0
 2014-08-07 17:53:31,426 WARN 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
 missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
   getNumBytes() = 21230663
   getBytesOnDisk()  = 21230663
   getVisibleLength()= 21230663
   getVolume()   = /hadoop/data1/dfs/data/current
   getBlockFile()= 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
   unlinked  =false
 2014-08-07 17:53:31,531 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 The information for the block being deleted gets registered back into the 
 DataNode's memory.
 And when the DataNode then sends a block report, the NameNode receives wrong 
 block information.
 For example, when we recommission a node or change the replication factor, 
 the NameNode may delete a valid block as ExcessReplicate because of this 
 problem, and Under-Replicated Blocks and Missing Blocks occur.
 When the DataNode runs DirectoryScanner, it should not register a block that 
 is being deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7227) Fix findbugs warning about NP_DEREFERENCE_OF_READLINE_VALUE in SpanReceiverHost

2014-10-19 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14176280#comment-14176280
 ] 

Tsuyoshi OZAWA commented on HDFS-7227:
--

Hi [~cmccabe], the [Java coding 
style|http://www.oracle.com/technetwork/java/javase/documentation/codeconventions-142311.html#449]
 says that we should not omit braces:

{code}
The if-else class of statements should have the following form:

if (condition) {
    statements;
}

Note: if statements always use braces, {}. Avoid the following error-prone form:

if (condition) //AVOID! THIS OMITS THE BRACES {}!
    statement;
{code}


 Fix findbugs warning about NP_DEREFERENCE_OF_READLINE_VALUE in 
 SpanReceiverHost
 ---

 Key: HDFS-7227
 URL: https://issues.apache.org/jira/browse/HDFS-7227
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-7227.001.patch, HDFS-7227.002.patch


 Fix findbugs warning about NP_DEREFERENCE_OF_READLINE_VALUE in 
 SpanReceiverHost



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HDFS-7161) Hashcode is logging while logging block report message

2014-09-27 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA moved HADOOP-8252 to HDFS-7161:
--

Affects Version/s: (was: 2.0.0-alpha)
   2.0.0-alpha
  Key: HDFS-7161  (was: HADOOP-8252)
  Project: Hadoop HDFS  (was: Hadoop Common)

 Hashcode is logging while logging block report message
 --

 Key: HDFS-7161
 URL: https://issues.apache.org/jira/browse/HDFS-7161
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Brahma Reddy Battula
Priority: Trivial
 Attachments: Hadoop-8252.patch


 Scenario:
 =========
 Start the NN and a DN.
 Check the log message in the DN. It comes out like the following:
 2012-03-13 14:34:36,008 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
 sent block report, processed 
 command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@43deff3
 Line 413 in BPServiceActor.java: LOG.info("sent block report, processed 
 command:" + cmd);
 Line 388 in BPServiceActor.java: cmd = 
 bpNamenode.blockReport(bpRegistration, bpos.getBlockPoolId(), report);
 It would be better to log a readable message instead of the hashcode.
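 A readable message just needs a toString() override on the command class; an 
 illustrative sketch (not the actual FinalizeCommand source):
 {code}
 // With a toString() override, the DN log prints a readable description
 // instead of the default ClassName@hashcode.
 public class FinalizeCommandExample {
   private final String blockPoolId;

   public FinalizeCommandExample(String blockPoolId) {
     this.blockPoolId = blockPoolId;
   }

   @Override
   public String toString() {
     return "FinalizeCommand(blockPoolId=" + blockPoolId + ")";
   }
 }
 {code}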



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7008) xlator should be closed upon exit from DFSAdmin#genericRefresh()

2014-09-05 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA reassigned HDFS-7008:


Assignee: Tsuyoshi OZAWA

 xlator should be closed upon exit from DFSAdmin#genericRefresh()
 

 Key: HDFS-7008
 URL: https://issues.apache.org/jira/browse/HDFS-7008
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Tsuyoshi OZAWA
Priority: Minor

 {code}
 GenericRefreshProtocol xlator =
   new GenericRefreshProtocolClientSideTranslatorPB(proxy);
 // Refresh
 Collection<RefreshResponse> responses = xlator.refresh(identifier, args);
 {code}
 GenericRefreshProtocolClientSideTranslatorPB#close() should be called on 
 xlator before return.
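 A minimal sketch of the shape of the fix (printResponses is a hypothetical 
 helper; the sketch assumes the translator exposes close(), as Hadoop 
 client-side translators generally do):
 {code}
 GenericRefreshProtocolClientSideTranslatorPB xlator =
     new GenericRefreshProtocolClientSideTranslatorPB(proxy);
 try {
   // Refresh
   Collection<RefreshResponse> responses = xlator.refresh(identifier, args);
   return printResponses(responses);  // hypothetical helper
 } finally {
   xlator.close();  // close the translator on every path out of the method
 }
 {code}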



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7008) xlator should be closed upon exit from DFSAdmin#genericRefresh()

2014-09-05 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HDFS-7008:
-
Attachment: HDFS-7008.1.patch

Thanks for your report, Ted. Attached a first patch to fix the problem.

 xlator should be closed upon exit from DFSAdmin#genericRefresh()
 

 Key: HDFS-7008
 URL: https://issues.apache.org/jira/browse/HDFS-7008
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Tsuyoshi OZAWA
Priority: Minor
 Attachments: HDFS-7008.1.patch


 {code}
 GenericRefreshProtocol xlator =
   new GenericRefreshProtocolClientSideTranslatorPB(proxy);
 // Refresh
 Collection<RefreshResponse> responses = xlator.refresh(identifier, args);
 {code}
 GenericRefreshProtocolClientSideTranslatorPB#close() should be called on 
 xlator before return.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7008) xlator should be closed upon exit from DFSAdmin#genericRefresh()

2014-09-05 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HDFS-7008:
-
Status: Patch Available  (was: Open)

 xlator should be closed upon exit from DFSAdmin#genericRefresh()
 

 Key: HDFS-7008
 URL: https://issues.apache.org/jira/browse/HDFS-7008
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Tsuyoshi OZAWA
Priority: Minor
 Attachments: HDFS-7008.1.patch


 {code}
 GenericRefreshProtocol xlator =
   new GenericRefreshProtocolClientSideTranslatorPB(proxy);
 // Refresh
 Collection<RefreshResponse> responses = xlator.refresh(identifier, args);
 {code}
 GenericRefreshProtocolClientSideTranslatorPB#close() should be called on 
 xlator before return.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-6980) TestWebHdfsFileSystemContract fails in trunk

2014-09-01 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA reassigned HDFS-6980:


Assignee: Tsuyoshi OZAWA

 TestWebHdfsFileSystemContract fails in trunk
 

 Key: HDFS-6980
 URL: https://issues.apache.org/jira/browse/HDFS-6980
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Akira AJISAKA
Assignee: Tsuyoshi OZAWA

 Many tests in TestWebHdfsFileSystemContract fail with a "too many open 
 files" error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6980) TestWebHdfsFileSystemContract fails in trunk

2014-09-01 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HDFS-6980:
-
Attachment: HDFS-6980.1.patch

Thanks for the report, Akira. I found that there are lots of file descriptor 
leaks in the test cases. Attached a patch to fix the problem.
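
A typical leak here is a stream opened in a test and never closed when an 
assertion throws; a sketch of the fix pattern (illustrative, not a specific 
hunk from the patch):
{code}
// Leaky: 'in' is never closed if the assertion fails.
//   FSDataInputStream in = fs.open(path);
//   assertEquals(expected, in.read());
// Fixed: close the stream on every path.
FSDataInputStream in = null;
try {
  in = fs.open(path);
  assertEquals(expected, in.read());
} finally {
  IOUtils.cleanup(null, in);
}
{code}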

 TestWebHdfsFileSystemContract fails in trunk
 

 Key: HDFS-6980
 URL: https://issues.apache.org/jira/browse/HDFS-6980
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Akira AJISAKA
Assignee: Tsuyoshi OZAWA
 Attachments: HDFS-6980.1.patch


 Many tests in TestWebHdfsFileSystemContract fail with a "too many open 
 files" error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6980) TestWebHdfsFileSystemContract fails in trunk

2014-09-01 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HDFS-6980:
-
Status: Patch Available  (was: Open)

 TestWebHdfsFileSystemContract fails in trunk
 

 Key: HDFS-6980
 URL: https://issues.apache.org/jira/browse/HDFS-6980
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Akira AJISAKA
Assignee: Tsuyoshi OZAWA
 Attachments: HDFS-6980.1.patch


 Many tests in TestWebHdfsFileSystemContract fail with a "too many open 
 files" error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6980) TestWebHdfsFileSystemContract fails in trunk

2014-09-01 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HDFS-6980:
-
Attachment: HDFS-6980.1-2.patch

 TestWebHdfsFileSystemContract fails in trunk
 

 Key: HDFS-6980
 URL: https://issues.apache.org/jira/browse/HDFS-6980
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Akira AJISAKA
Assignee: Tsuyoshi OZAWA
 Attachments: HDFS-6980.1-2.patch, HDFS-6980.1.patch


 Many tests in TestWebHdfsFileSystemContract fail with a "too many open 
 files" error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6980) TestWebHdfsFileSystemContract fails in trunk

2014-09-01 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14117913#comment-14117913
 ] 

Tsuyoshi OZAWA commented on HDFS-6980:
--

The test failure looks related to TestPipelinesFailover and not to this 
patch. To confirm this, resubmitting the same patch again.

 TestWebHdfsFileSystemContract fails in trunk
 

 Key: HDFS-6980
 URL: https://issues.apache.org/jira/browse/HDFS-6980
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Akira AJISAKA
Assignee: Tsuyoshi OZAWA
 Attachments: HDFS-6980.1-2.patch, HDFS-6980.1.patch


 Many tests in TestWebHdfsFileSystemContract fail with a "too many open 
 files" error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6943) Improve NN allocateBlock log to include replicas' datanode IPs

2014-08-28 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14114838#comment-14114838
 ] 

Tsuyoshi OZAWA commented on HDFS-6943:
--

Hi [~mingma], thank you for your contribution. Maybe we should add a test 
like TestDatanodeStorageInfo, similar to TestContainerId.

 Improve NN allocateBlock log to include replicas' datanode IPs
 --

 Key: HDFS-6943
 URL: https://issues.apache.org/jira/browse/HDFS-6943
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma
Assignee: Ming Ma
 Attachments: HDFS-6943.patch


 The Datanode storage ID used to be based on IP and port; it has changed to a 
 UUID. This makes debugging harder when we want to understand which DNs are 
 assigned when DFSClient calls addBlock. For example,
 {noformat}
 BLOCK* allocateBlock: /foo. BP-1980237412-xx.xx.xxx.xxx-1408142057773 
 blk_1227779764_154043834{blockUCState=UNDER_CONSTRUCTION, 
 primaryNodeIndex=-1, 
 replicas=[ReplicaUnderConstruction[[DISK]DS-9479727b-24c5-4068-8703-dfb9a41c056c:NORMAL|RBW],
  
 ReplicaUnderConstruction[[DISK]DS-abe7840c-1db8-4623-9da7-3aed6a28c4f4:NORMAL|RBW],
  
 ReplicaUnderConstruction[[DISK]DS-956023f4-56a0-4c30-a148-b78c61cf764b:NORMAL|RBW]]}
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6902) FileWriter should be closed in finally block in BlockReceiver#receiveBlock()

2014-08-27 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14113128#comment-14113128
 ] 

Tsuyoshi OZAWA commented on HDFS-6902:
--

Thanks for your review, Colin!

 FileWriter should be closed in finally block in BlockReceiver#receiveBlock()
 

 Key: HDFS-6902
 URL: https://issues.apache.org/jira/browse/HDFS-6902
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Tsuyoshi OZAWA
Priority: Minor
 Fix For: 2.6.0

 Attachments: HDFS-6902.1.patch, HDFS-6902.2.patch


 Here is code starting from line 828:
 {code}
 try {
   FileWriter out = new FileWriter(restartMeta);
   // write out the current time.
   out.write(Long.toString(Time.now() + restartBudget));
   out.flush();
   out.close();
 } catch (IOException ioe) {
 {code}
 If write() or flush() call throws IOException, out wouldn't be closed.
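 The fix pattern eventually adopted here is IOUtils#cleanup (see the update 
 below); a minimal sketch, with names mirroring the snippet above:
 {code}
 FileWriter out = null;
 try {
   out = new FileWriter(restartMeta);
   // write out the current time.
   out.write(Long.toString(Time.now() + restartBudget));
 } catch (IOException ioe) {
   // original error handling elided
 } finally {
   IOUtils.cleanup(null, out);  // closes out even when write() throws
 }
 {code}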



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6902) FileWriter should be closed in finally block in BlockReceiver#receiveBlock()

2014-08-25 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HDFS-6902:
-

Attachment: HDFS-6902.2.patch

[~cmccabe], thanks for your review. Updated the patch to use IOUtils#cleanup.

 FileWriter should be closed in finally block in BlockReceiver#receiveBlock()
 

 Key: HDFS-6902
 URL: https://issues.apache.org/jira/browse/HDFS-6902
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Tsuyoshi OZAWA
Priority: Minor
 Attachments: HDFS-6902.1.patch, HDFS-6902.2.patch


 Here is code starting from line 828:
 {code}
 try {
   FileWriter out = new FileWriter(restartMeta);
   // write out the current time.
   out.write(Long.toString(Time.now() + restartBudget));
   out.flush();
   out.close();
 } catch (IOException ioe) {
 {code}
 If write() or flush() call throws IOException, out wouldn't be closed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6902) FileWriter should be closed in finally block in BlockReceiver#receiveBlock()

2014-08-23 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HDFS-6902:
-

Assignee: Tsuyoshi OZAWA
  Status: Patch Available  (was: Open)

 FileWriter should be closed in finally block in BlockReceiver#receiveBlock()
 

 Key: HDFS-6902
 URL: https://issues.apache.org/jira/browse/HDFS-6902
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Tsuyoshi OZAWA
Priority: Minor
 Attachments: HDFS-6902.1.patch


 Here is code starting from line 828:
 {code}
 try {
   FileWriter out = new FileWriter(restartMeta);
   // write out the current time.
   out.write(Long.toString(Time.now() + restartBudget));
   out.flush();
   out.close();
 } catch (IOException ioe) {
 {code}
 If write() or flush() call throws IOException, out wouldn't be closed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6902) FileWriter should be closed in finally block in BlockReceiver#receiveBlock()

2014-08-23 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HDFS-6902:
-

Attachment: HDFS-6902.1.patch

 FileWriter should be closed in finally block in BlockReceiver#receiveBlock()
 

 Key: HDFS-6902
 URL: https://issues.apache.org/jira/browse/HDFS-6902
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor
 Attachments: HDFS-6902.1.patch


 Here is code starting from line 828:
 {code}
 try {
   FileWriter out = new FileWriter(restartMeta);
   // write out the current time.
   out.write(Long.toString(Time.now() + restartBudget));
   out.flush();
   out.close();
 } catch (IOException ioe) {
 {code}
 If write() or flush() call throws IOException, out wouldn't be closed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6833) DirectoryScanner should not register a deleting block with memory of DataNode

2014-08-19 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HDFS-6833:
-

Affects Version/s: 2.5.0

 DirectoryScanner should not register a deleting block with memory of DataNode
 -

 Key: HDFS-6833
 URL: https://issues.apache.org/jira/browse/HDFS-6833
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.5.0
Reporter: Shinichi Yamashita
Assignee: Shinichi Yamashita
 Attachments: HDFS-6833-6.patch, HDFS-6833-7-2.patch, 
 HDFS-6833-7.patch, HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch, 
 HDFS-6833.patch, HDFS-6833.patch


 When a block is deleted in DataNode, the following messages are usually 
 output.
 {code}
 2014-08-07 17:53:11,606 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:11,617 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 However, in the current implementation DirectoryScanner may run while the 
 DataNode is deleting the block, and the following messages are output.
 {code}
 2014-08-07 17:53:30,519 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:31,426 INFO 
 org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
 BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata 
 files:0, missing block files:0, missing blocks in memory:1, mismatched 
 blocks:0
 2014-08-07 17:53:31,426 WARN 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
 missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
   getNumBytes() = 21230663
   getBytesOnDisk()  = 21230663
   getVisibleLength()= 21230663
   getVolume()   = /hadoop/data1/dfs/data/current
   getBlockFile()= 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
   unlinked  =false
 2014-08-07 17:53:31,531 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 The information for the block being deleted gets registered back into the 
 DataNode's memory.
 And when the DataNode then sends a block report, the NameNode receives wrong 
 block information.
 For example, when we recommission a node or change the replication factor, 
 the NameNode may delete a valid block as ExcessReplicate because of this 
 problem, and Under-Replicated Blocks and Missing Blocks occur.
 When the DataNode runs DirectoryScanner, it should not register a block that 
 is being deleted.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6193) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-05-05 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13990007#comment-13990007
 ] 

Tsuyoshi OZAWA commented on HDFS-6193:
--

Let's wait for review by HDFS experts.

 HftpFileSystem open should throw FileNotFoundException for non-existing paths
 -

 Key: HDFS-6193
 URL: https://issues.apache.org/jira/browse/HDFS-6193
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6193-branch-2.4.0.v01.patch, 
 HDFS-6193-branch-2.4.v02.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handles 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server and therefore cannot discover FileNotFound; that is deferred until the 
 next read. This is counterintuitive and not how the local FS or HDFS work. In 
 POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths
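 To make the expected contract concrete, a sketch of a client-side check 
 (assuming an initialized FileSystem fs; not a test from the patch):
 {code}
 // open() of a non-existing path should fail eagerly with
 // FileNotFoundException, the way POSIX open() returns ENOENT.
 Path missing = new Path("/does/not/exist");
 try {
   FSDataInputStream in = fs.open(missing);
   in.close();
   fail("expected FileNotFoundException at open time");
 } catch (FileNotFoundException expected) {
   // desired: the error surfaces on open, not on the first read
 }
 {code}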



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5436) Move HsFtpFileSystem and HFtpFileSystem into org.apache.hdfs.web

2014-05-02 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988004#comment-13988004
 ] 

Tsuyoshi OZAWA commented on HDFS-5436:
--

[~wheat9], [~arpitagarwal] I found that the latest patch removes 
HftpFileSystem.java. It blocks HDFS-6193, which is a blocker for the 2.4.1 
release. Can you restore it? 

{code}
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java
deleted file mode 100644
{code}

 Move HsFtpFileSystem and HFtpFileSystem into org.apache.hdfs.web
 

 Key: HDFS-5436
 URL: https://issues.apache.org/jira/browse/HDFS-5436
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.3.0

 Attachments: HDFS-5436.000.patch, HDFS-5436.001.patch, 
 HDFS-5436.002.patch


 Currently HsftpFileSystem, HftpFileSystem and WebHdfsFileSystem reside in 
 different packages. This forces several methods in ByteInputStream and 
 URLConnectionFactory to be public.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

