[jira] [Resolved] (HDFS-2569) DN decommissioning quirks

2016-11-24 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HDFS-2569.
---
Resolution: Cannot Reproduce
  Assignee: (was: Harsh J)

Cannot quite reproduce this on current versions.

> DN decommissioning quirks
> -
>
> Key: HDFS-2569
> URL: https://issues.apache.org/jira/browse/HDFS-2569
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 0.23.0
>Reporter: Harsh J
>
> Decommissioning a node works slightly oddly in 0.23+:
> The steps I did:
> - Start HDFS via {{hdfs namenode}} and {{hdfs datanode}}. 1-node cluster.
> - Zero files/blocks, so I go ahead and exclude-add my DN and do {{hdfs 
> dfsadmin -refreshNodes}}
> - I see the following log in NN tails, which is fine:
> {code}
> 11/11/20 09:28:10 INFO util.HostsFileReader: Setting the includes file to 
> 11/11/20 09:28:10 INFO util.HostsFileReader: Setting the excludes file to 
> build/test/excludes
> 11/11/20 09:28:10 INFO util.HostsFileReader: Refreshing hosts 
> (include/exclude) list
> 11/11/20 09:28:10 INFO util.HostsFileReader: Adding 192.168.1.23 to the list 
> of hosts from build/test/excludes
> {code}
> - However, DN log tail gets no new messages. DN still runs.
> - The dfshealth.jsp page shows this table, which makes no sense -- why is 
> there 1 live and 1 dead?:
> |Live Nodes|1 (Decommissioned: 1)|
> |Dead Nodes|1 (Decommissioned: 0)|
> |Decommissioning Nodes|0|
> - The live nodes page shows this, meaning DN is still up and heartbeating but 
> is decommissioned:
> |Node|Last Contact|Admin State|
> |192.168.1.23|0|Decommissioned|
> - The dead nodes page shows this, and the link to the DN is broken because the 
> port is linked as -1. Also, showing 'false' for decommissioned makes no sense 
> when the live nodes page shows that it is already decommissioned:
> |Node|Decommissioned|
> |192.168.1.23|false|
> Investigating whether this is a quirk observed only when the DN had 0 blocks 
> on it in total.
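
For reference, a minimal sketch of the decommission sequence described above, 
assuming {{dfs.hosts.exclude}} already points at {{build/test/excludes}} (both 
the exclude path and the DN address are taken from the NN log quoted in the 
report):
{code}
# Add the DN to the exclude file referenced by dfs.hosts.exclude
echo 192.168.1.23 >> build/test/excludes

# Ask the NN to re-read the include/exclude lists and start decommissioning
hdfs dfsadmin -refreshNodes

# Watch the reported admin state of the node
hdfs dfsadmin -report
{code}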



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11175) TransparentEncryption.md should be up-to-date since uppercase key names are unsupported.

2016-11-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15694950#comment-15694950
 ] 

Hadoop QA commented on HDFS-11175:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11175 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12840468/HDFS-11175.001.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 90bd802d8e59 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 01665e4 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17662/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TransparentEncryption.md should be up-to-date since uppercase key names are 
> unsupported.
> 
>
> Key: HDFS-11175
> URL: https://issues.apache.org/jira/browse/HDFS-11175
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Yuanbo Liu
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-11175.001.patch
>
>
> After HADOOP-11311, key names have been restricted and uppercase key names are 
> not allowed. This section of {{TransparentEncryption.md}} should be modified.
> {quote}
> # As the normal user, create a new encryption key
> hadoop key create myKey
> # As the super user, create a new empty directory and make it an encryption 
> zone
> hadoop fs -mkdir /zone
> hdfs crypto -createZone -keyName myKey -path /zone
> # chown it to the normal user
> hadoop fs -chown myuser:myuser /zone
> # As the normal user, put a file in, read it out
> hadoop fs -put helloWorld /zone
> hadoop fs -cat /zone/helloWorld
> {quote}
> "myKey" is not allowed here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11175) TransparentEncryption.md should be up-to-date since uppercase key names are unsupported.

2016-11-24 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15694927#comment-15694927
 ] 

Yiqun Lin edited comment on HDFS-11175 at 11/25/16 5:57 AM:


It's a good catch, thanks [~yuanbo] for reporting this. One comment from me: 
I'm sure the {{myKey}} here has already been updated to {{mykey}}. The 
documentation in trunk:
{code}
# As the normal user, create a new encryption key
hadoop key create mykey

# As the super user, create a new empty directory and make it an encryption 
zone
hadoop fs -mkdir /zone
hdfs crypto -createZone -keyName mykey -path /zone
{code}
I think we can make a change in the introduction of the {{keyName}} param. 
Attaching a patch. Correct me if I am wrong. Thanks.


was (Author: linyiqun):
It's a good catch, thanks [~yuanbo] for reporting this. One comment from me: 
I'm sure here the {{mykey}} has been updated to the {{mykey}}.The documenttion 
in trunk:
{code}
# As the normal user, create a new encryption key
hadoop key create mykey

# As the super user, create a new empty directory and make it an encryption 
zone
hadoop fs -mkdir /zone
hdfs crypto -createZone -keyName mykey -path /zone
{code}
I think we can make a change in the introduction of param {{keyName}}. Attach a 
patch. Correct me if I am wrong. Thanks.

> TransparentEncryption.md should be up-to-date since uppercase key names are 
> unsupported.
> 
>
> Key: HDFS-11175
> URL: https://issues.apache.org/jira/browse/HDFS-11175
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Yuanbo Liu
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-11175.001.patch
>
>
> After HADOOP-11311, key names have been restricted and uppercase key names are 
> not allowed. This section of {{TransparentEncryption.md}} should be modified.
> {quote}
> # As the normal user, create a new encryption key
> hadoop key create myKey
> # As the super user, create a new empty directory and make it an encryption 
> zone
> hadoop fs -mkdir /zone
> hdfs crypto -createZone -keyName myKey -path /zone
> # chown it to the normal user
> hadoop fs -chown myuser:myuser /zone
> # As the normal user, put a file in, read it out
> hadoop fs -put helloWorld /zone
> hadoop fs -cat /zone/helloWorld
> {quote}
> "myKey" is not allowed here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11175) TransparentEncryption.md should be up-to-date since uppercase key names are unsupported.

2016-11-24 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11175:
-
Attachment: HDFS-11175.001.patch

> TransparentEncryption.md should be up-to-date since uppercase key names are 
> unsupported.
> 
>
> Key: HDFS-11175
> URL: https://issues.apache.org/jira/browse/HDFS-11175
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Yuanbo Liu
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-11175.001.patch
>
>
> After HADOOP-11311, key names have been restricted and uppercase key names are 
> not allowed. This section of {{TransparentEncryption.md}} should be modified.
> {quote}
> # As the normal user, create a new encryption key
> hadoop key create myKey
> # As the super user, create a new empty directory and make it an encryption 
> zone
> hadoop fs -mkdir /zone
> hdfs crypto -createZone -keyName myKey -path /zone
> # chown it to the normal user
> hadoop fs -chown myuser:myuser /zone
> # As the normal user, put a file in, read it out
> hadoop fs -put helloWorld /zone
> hadoop fs -cat /zone/helloWorld
> {quote}
> "myKey" is not allowed here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11175) TransparentEncryption.md should be up-to-date since uppercase key names are unsupported.

2016-11-24 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11175:
-
Status: Patch Available  (was: Open)

> TransparentEncryption.md should be up-to-date since uppercase key names are 
> unsupported.
> 
>
> Key: HDFS-11175
> URL: https://issues.apache.org/jira/browse/HDFS-11175
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Yuanbo Liu
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
>
> After HADOOP-11311, key names have been restricted and uppercase key names are 
> not allowed. This section of {{TransparentEncryption.md}} should be modified.
> {quote}
> # As the normal user, create a new encryption key
> hadoop key create myKey
> # As the super user, create a new empty directory and make it an encryption 
> zone
> hadoop fs -mkdir /zone
> hdfs crypto -createZone -keyName myKey -path /zone
> # chown it to the normal user
> hadoop fs -chown myuser:myuser /zone
> # As the normal user, put a file in, read it out
> hadoop fs -put helloWorld /zone
> hadoop fs -cat /zone/helloWorld
> {quote}
> "myKey" is not allowed here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11175) TransparentEncryption.md should be up-to-date since uppercase key names are unsupported.

2016-11-24 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15694927#comment-15694927
 ] 

Yiqun Lin commented on HDFS-11175:
--

It's a good catch, thanks [~yuanbo] for reporting this. One comment from me: 
I'm sure the {{myKey}} here has already been updated to {{mykey}}. The 
documentation in trunk:
{code}
# As the normal user, create a new encryption key
hadoop key create mykey

# As the super user, create a new empty directory and make it an encryption 
zone
hadoop fs -mkdir /zone
hdfs crypto -createZone -keyName mykey -path /zone
{code}
I think we can make a change in the introduction of the {{keyName}} param. 
Attaching a patch. Correct me if I am wrong. Thanks.

> TransparentEncryption.md should be up-to-date since uppercase key names are 
> unsupported.
> 
>
> Key: HDFS-11175
> URL: https://issues.apache.org/jira/browse/HDFS-11175
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Yuanbo Liu
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
>
> After HADOOP-11311, key names have been restricted and uppercase key names are 
> not allowed. This section of {{TransparentEncryption.md}} should be modified.
> {quote}
> # As the normal user, create a new encryption key
> hadoop key create myKey
> # As the super user, create a new empty directory and make it an encryption 
> zone
> hadoop fs -mkdir /zone
> hdfs crypto -createZone -keyName myKey -path /zone
> # chown it to the normal user
> hadoop fs -chown myuser:myuser /zone
> # As the normal user, put a file in, read it out
> hadoop fs -put helloWorld /zone
> hadoop fs -cat /zone/helloWorld
> {quote}
> "myKey" is not allowed here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-11175) TransparentEncryption.md should be up-to-date since uppercase key names are unsupported.

2016-11-24 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin reassigned HDFS-11175:


Assignee: Yiqun Lin

> TransparentEncryption.md should be up-to-date since uppercase key names are 
> unsupported.
> 
>
> Key: HDFS-11175
> URL: https://issues.apache.org/jira/browse/HDFS-11175
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Yuanbo Liu
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
>
> After HADOOP-11311, key names have been restricted and uppercase key names are 
> not allowed. This section of {{TransparentEncryption.md}} should be modified.
> {quote}
> # As the normal user, create a new encryption key
> hadoop key create myKey
> # As the super user, create a new empty directory and make it an encryption 
> zone
> hadoop fs -mkdir /zone
> hdfs crypto -createZone -keyName myKey -path /zone
> # chown it to the normal user
> hadoop fs -chown myuser:myuser /zone
> # As the normal user, put a file in, read it out
> hadoop fs -put helloWorld /zone
> hadoop fs -cat /zone/helloWorld
> {quote}
> "myKey" is not allowed here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11175) TransparentEncryption.md should be up-to-date since uppercase key names are unsupported.

2016-11-24 Thread Yuanbo Liu (JIRA)
Yuanbo Liu created HDFS-11175:
-

 Summary: TransparentEncryption.md should be up-to-date since 
uppercase key names are unsupported.
 Key: HDFS-11175
 URL: https://issues.apache.org/jira/browse/HDFS-11175
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Yuanbo Liu
Priority: Trivial


After HADOOP-11311, key names have been restricted and uppercase key names are 
not allowed. This section of {{TransparentEncryption.md}} should be modified.
{quote}
# As the normal user, create a new encryption key
hadoop key create myKey

# As the super user, create a new empty directory and make it an encryption zone
hadoop fs -mkdir /zone
hdfs crypto -createZone -keyName myKey -path /zone

# chown it to the normal user
hadoop fs -chown myuser:myuser /zone

# As the normal user, put a file in, read it out
hadoop fs -put helloWorld /zone
hadoop fs -cat /zone/helloWorld
{quote}
"myKey" is not allowed here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11174) Wrong HttpFS test command in doc

2016-11-24 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15694674#comment-15694674
 ] 

Yuanbo Liu commented on HDFS-11174:
---

Tested this command locally. I'm +1 (non-binding) for this patch.
[~jzhuge] Thanks for your work.

> Wrong HttpFS test command in doc
> 
>
> Key: HDFS-11174
> URL: https://issues.apache.org/jira/browse/HDFS-11174
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, httpfs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-11174.001.patch
>
>
> There are 2 errors in section {{Test HttpFS is working}} in 
> http://hadoop.apache.org/docs/r2.7.3/hadoop-hdfs-httpfs/ServerSetup.html
> {noformat}
> ~ $ curl -i "http://:14000?user.name=babu=homedir"
> HTTP/1.1 200 OK
> Content-Type: application/json
> Transfer-Encoding: chunked
> {"homeDir":"http:\/\/:14000\/user\/babu"}
> {noformat}
> # The URL path should be {{/webhdfs/v1}}.
> # The {{op}} should be {{gethomedirectory}}, not {{homedir}}.
> The curl command would produce this error:
> {noformat}
> $ curl 'http://localhost:14000/webhdfs/v1?op=homedir&user.name=hdfs' | json_pp
> {
>"RemoteException" : {
>   "message" : "java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.HOMEDIR",
>   "exception" : "QueryParamException",
>   "javaClassName" : 
> "com.sun.jersey.api.ParamException$QueryParamException"
>}
> }
> {noformat}
> The correct command should be:
> {code}
> $ curl 'http://localhost:14000/webhdfs/v1?op=gethomedirectory&user.name=hdfs' 
> | json_pp
> {
>"Path" : "/user/hdfs"
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11169) GetBlockLocations returns a block when offset > filesize and file only has 1 block

2016-11-24 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15694609#comment-15694609
 ] 

Yiqun Lin commented on HDFS-11169:
--

Some of the failed tests are related. I will post a new patch later to fix 
these, in case there are further comments from others.

> GetBlockLocations returns a block when offset > filesize and file only has 1 
> block
> --
>
> Key: HDFS-11169
> URL: https://issues.apache.org/jira/browse/HDFS-11169
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
> Environment: HDP 2.5, Ambari 2.4
>Reporter: David Tucker
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11169.001.patch
>
>
> Start with a fresh deployment of HDFS.
> 1. Create a file.
> 2. AddBlock the file with an offset larger than the file size.
> 3. Call GetBlockLocations.
> Expectation: 0 blocks are returned because the only added block is incomplete.
> Observation: 1 block is returned.
> This only seems to occur when 1 block is in play (i.e. if you write a block 
> and call AddBlock again, GetBlockLocations seems to behave as expected).
> This seems to be related to HDFS-513.
> I suspect the following line needs revision: 
> https://github.com/apache/hadoop/blob/4484b48498b2ab2a40a404c487c7a4e875df10dc/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L1062
> I believe it should be >= instead of >:
> if (nrBlocks >= 0 && curBlk == nrBlocks)   // offset >= end of file
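
As a hedged aid for checking the NN-reported block list of a file from the 
command line (not an RPC-level reproduction of the offset case above; the path 
is a placeholder):
{code}
# List the blocks the NN reports for a file, with their locations
hdfs fsck /path/to/file -files -blocks -locations
{code}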



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11171) Add localBytesRead and localReadTime for datanode metrics to calculate local read rate which is useful to compare .

2016-11-24 Thread 5feixiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15694601#comment-15694601
 ] 

5feixiang edited comment on HDFS-11171 at 11/25/16 2:07 AM:


The current BytesRead metric does not contain the local bytes read via short 
circuit. How can we get the local read bytes of short-circuit?


was (Author: 5feixiang):
Current BytesRead metrics does not container localbytes read like short circuit.

> Add localBytesRead and localReadTime for datanode metrics to calculate local 
> read rate which is useful to compare .
> ---
>
> Key: HDFS-11171
> URL: https://issues.apache.org/jira/browse/HDFS-11171
> Project: Hadoop HDFS
>  Issue Type: Wish
>  Components: datanode
>Reporter: 5feixiang
>  Labels: metrics
> Attachments: localReadMetrics.patch
>
>
> Current dfs context metrics only contain bytesRead, remoteBytesRead and 
> totalReadTime. We can find that bytesRead = remoteBytesRead + balancer remote 
> copyBytesRead + localBytesRead, so we can add localBytesRead and localReadTime 
> to calculate the local read rate, which is useful to compare the local read 
> rate between short-circuit reads and tcp-socket reads.
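
A hedged sketch of how the existing counters can already be pulled, assuming 
the default DataNode HTTP port (50075) and the standard {{DataNodeActivity}} 
JMX bean; local bytes would then be derivable as bytesRead - remoteBytesRead, 
ignoring balancer copies:
{code}
# Query DataNode metrics over the JMX JSON servlet (port and bean name are assumptions)
curl -s 'http://localhost:50075/jmx?qry=Hadoop:service=DataNode,name=DataNodeActivity*'

# Inspect BytesRead, RemoteBytesRead and TotalReadTime in the JSON output
{code}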



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11171) Add localBytesRead and localReadTime for datanode metrics to calculate local read rate which is useful to compare .

2016-11-24 Thread 5feixiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15694601#comment-15694601
 ] 

5feixiang commented on HDFS-11171:
--

The current BytesRead metric does not contain local bytes read, e.g. via short circuit.

> Add localBytesRead and localReadTime for datanode metrics to calculate local 
> read rate which is useful to compare .
> ---
>
> Key: HDFS-11171
> URL: https://issues.apache.org/jira/browse/HDFS-11171
> Project: Hadoop HDFS
>  Issue Type: Wish
>  Components: datanode
>Reporter: 5feixiang
>  Labels: metrics
> Attachments: localReadMetrics.patch
>
>
> Current dfs context metrics only contain bytesRead, remoteBytesRead and 
> totalReadTime. We can find that bytesRead = remoteBytesRead + balancer remote 
> copyBytesRead + localBytesRead, so we can add localBytesRead and localReadTime 
> to calculate the local read rate, which is useful to compare the local read 
> rate between short-circuit reads and tcp-socket reads.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11171) Add localBytesRead and localReadTime for datanode metrics to calculate local read rate which is useful to compare .

2016-11-24 Thread 5feixiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15694597#comment-15694597
 ] 

5feixiang commented on HDFS-11171:
--

Ok, thank you!

> Add localBytesRead and localReadTime for datanode metrics to calculate local 
> read rate which is useful to compare .
> ---
>
> Key: HDFS-11171
> URL: https://issues.apache.org/jira/browse/HDFS-11171
> Project: Hadoop HDFS
>  Issue Type: Wish
>  Components: datanode
>Reporter: 5feixiang
>  Labels: metrics
> Attachments: localReadMetrics.patch
>
>
> Current dfs context metrics only contain bytesRead, remoteBytesRead and 
> totalReadTime. We can find that bytesRead = remoteBytesRead + balancer remote 
> copyBytesRead + localBytesRead, so we can add localBytesRead and localReadTime 
> to calculate the local read rate, which is useful to compare the local read 
> rate between short-circuit reads and tcp-socket reads.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11171) Add localBytesRead and localReadTime for datanode metrics to calculate local read rate which is useful to compare .

2016-11-24 Thread 5feixiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

5feixiang updated HDFS-11171:
-
   Flags: Patch
Hadoop Flags: Reviewed

> Add localBytesRead and localReadTime for datanode metrics to calculate local 
> read rate which is useful to compare .
> ---
>
> Key: HDFS-11171
> URL: https://issues.apache.org/jira/browse/HDFS-11171
> Project: Hadoop HDFS
>  Issue Type: Wish
>  Components: datanode
>Reporter: 5feixiang
>  Labels: metrics
> Attachments: localReadMetrics.patch
>
>
> Current dfs context metrics only contain bytesRead, remoteBytesRead and 
> totalReadTime. We can find that bytesRead = remoteBytesRead + balancer remote 
> copyBytesRead + localBytesRead, so we can add localBytesRead and localReadTime 
> to calculate the local read rate, which is useful to compare the local read 
> rate between short-circuit reads and tcp-socket reads.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-11171) Add localBytesRead and localReadTime for datanode metrics to calculate local read rate which is useful to compare .

2016-11-24 Thread 5feixiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

5feixiang reopened HDFS-11171:
--

> Add localBytesRead and localReadTime for datanode metrics to calculate local 
> read rate which is useful to compare .
> ---
>
> Key: HDFS-11171
> URL: https://issues.apache.org/jira/browse/HDFS-11171
> Project: Hadoop HDFS
>  Issue Type: Wish
>  Components: datanode
>Reporter: 5feixiang
>  Labels: metrics
> Attachments: localReadMetrics.patch
>
>
> Current dfs context metrics only contain bytesRead, remoteBytesRead and 
> totalReadTime. We can find that bytesRead = remoteBytesRead + balancer remote 
> copyBytesRead + localBytesRead, so we can add localBytesRead and localReadTime 
> to calculate the local read rate, which is useful to compare the local read 
> rate between short-circuit reads and tcp-socket reads.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8630) WebHDFS : Support get/set/unset StoragePolicy

2016-11-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15694325#comment-15694325
 ] 

Hadoop QA commented on HDFS-8630:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-hdfs-project: The patch generated 42 new 
+ 713 unchanged - 5 fixed = 755 total (was 718) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
54s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 84m 
49s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
12s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-8630 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12840455/HDFS-8630.008.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0ba3b7969cef 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 01665e4 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17661/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17661/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs 

[jira] [Updated] (HDFS-11174) Wrong HttpFS test command in doc

2016-11-24 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11174:
--
Description: 
There are 2 errors in section {{Test HttpFS is working}} in 
http://hadoop.apache.org/docs/r2.7.3/hadoop-hdfs-httpfs/ServerSetup.html
{noformat}
~ $ curl -i "http://:14000?user.name=babu=homedir"
HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked

{"homeDir":"http:\/\/:14000\/user\/babu"}
{noformat}
# The URL path should be {{/webhdfs/v1}}.
# The {{op}} should be {{gethomedirectory}}, not {{homedir}}.

The curl command would produce this error:
{noformat}
$ curl 'http://localhost:14000/webhdfs/v1?op=homedir&user.name=hdfs' | json_pp
{
   "RemoteException" : {
  "message" : "java.lang.IllegalArgumentException: No enum constant 
org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.HOMEDIR",
  "exception" : "QueryParamException",
  "javaClassName" : "com.sun.jersey.api.ParamException$QueryParamException"
   }
}
{noformat}

The correct command should be:
{code}
$ curl 'http://localhost:14000/webhdfs/v1?op=gethomedirectory&user.name=hdfs' | 
json_pp
{
   "Path" : "/user/hdfs"
}
{code}

  was:
There are 2 errors in section {{Test HttpFS is working}} in 
http://hadoop.apache.org/docs/r2.7.3/hadoop-hdfs-httpfs/ServerSetup.html:
{noformat}
~ $ curl -i "http://:14000?user.name=babu=homedir"
HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked

{"homeDir":"http:\/\/:14000\/user\/babu"}
{noformat}
# The URL path should be {{/webhdfs/v1}}.
# The {{op}} should be {{gethomedirectory}}, not {{homedir}}.

The curl command would produce this error:
{noformat}
$ curl 'http://localhost:14000/webhdfs/v1?op=homedir&user.name=hdfs' | json_pp
{
   "RemoteException" : {
  "message" : "java.lang.IllegalArgumentException: No enum constant 
org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.HOMEDIR",
  "exception" : "QueryParamException",
  "javaClassName" : "com.sun.jersey.api.ParamException$QueryParamException"
   }
}
{noformat}

The correct command should be:
{code}
$ curl 'http://localhost:14000/webhdfs/v1?op=gethomedirectory&user.name=hdfs' | 
json_pp
{
   "Path" : "/user/hdfs"
}
{code}


> Wrong HttpFS test command in doc
> 
>
> Key: HDFS-11174
> URL: https://issues.apache.org/jira/browse/HDFS-11174
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, httpfs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-11174.001.patch
>
>
> There are 2 errors in section {{Test HttpFS is working}} in 
> http://hadoop.apache.org/docs/r2.7.3/hadoop-hdfs-httpfs/ServerSetup.html
> {noformat}
> ~ $ curl -i "http://:14000?user.name=babu=homedir"
> HTTP/1.1 200 OK
> Content-Type: application/json
> Transfer-Encoding: chunked
> {"homeDir":"http:\/\/:14000\/user\/babu"}
> {noformat}
> # The URL path should be {{/webhdfs/v1}}.
> # The {{op}} should be {{gethomedirectory}}, not {{homedir}}.
> The curl command would produce this error:
> {noformat}
> $ curl 'http://localhost:14000/webhdfs/v1?op=homedir&user.name=hdfs' | json_pp
> {
>"RemoteException" : {
>   "message" : "java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.HOMEDIR",
>   "exception" : "QueryParamException",
>   "javaClassName" : 
> "com.sun.jersey.api.ParamException$QueryParamException"
>}
> }
> {noformat}
> The correct command should be:
> {code}
> $ curl 'http://localhost:14000/webhdfs/v1?op=gethomedirectory&user.name=hdfs' 
> | json_pp
> {
>"Path" : "/user/hdfs"
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10176) WebHdfs LISTSTATUS does not offer any sorting

2016-11-24 Thread Lucas Lustosa Madureira (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15694182#comment-15694182
 ] 

Lucas Lustosa Madureira commented on HDFS-10176:


I've noticed the class FileSystem has been updated. Here is the [new 
patch|http://pastebin.com/raw/c9C56CE7] against the [latest 
commit|https://github.com/apache/hadoop/commit/01665e456de8d79000ce273dded5ea53aa62965a].

> WebHdfs LISTSTATUS does not offer any sorting
> -
>
> Key: HDFS-10176
> URL: https://issues.apache.org/jira/browse/HDFS-10176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.7.0
>Reporter: Romain Rigaux
>
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#List_a_Directory
> Sorting by times, names, or sizes would allow a Web client to offer a richer 
> experience to its users:
> {code}
> {
> "accessTime"  : 1320171722771,
> "blockSize"   : 33554432,
> "group"   : "supergroup",
> "length"  : 24930,
> "modificationTime": 1320171722771,
> "owner"   : "webuser",
> "pathSuffix"  : "a.patch",
> "permission"  : "644",
> "replication" : 1,
> "type": "FILE"
>   },
> {code}
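
Until the server offers sorting, a hedged client-side workaround, assuming the 
default NameNode HTTP port (50070) and {{jq}} on the client:
{code}
# Fetch the listing and sort it on the client by modification time
curl -s 'http://localhost:50070/webhdfs/v1/tmp?op=LISTSTATUS&user.name=hdfs' \
  | jq '.FileStatuses.FileStatus | sort_by(.modificationTime)'
{code}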



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-11173) Reschedule CompletedActionXCommand if the job is not completed

2016-11-24 Thread Peter Cseh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Cseh resolved HDFS-11173.
---
Resolution: Invalid

> Reschedule CompletedActionXCommand if the job is not completed
> --
>
> Key: HDFS-11173
> URL: https://issues.apache.org/jira/browse/HDFS-11173
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Peter Cseh
>
> We've encountered cases when the LauncherMapper stuck around after sending 
> out the notifications to Oozie. If the callback is processed before the 
> external job's status is updated to FINISHED, Oozie won't update the action's 
> status for 10 minutes.
> We could add a delayed check to 
> [CompletedActionXCommand|https://github.com/apache/oozie/blob/master/core/src/main/java/org/apache/oozie/command/wf/CompletedActionXCommand.java#L120
>  ] to avoid this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8630) WebHDFS : Support get/set/unset StoragePolicy

2016-11-24 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-8630:
-
Attachment: HDFS-8630.008.patch

Attached the updated patch.
Please review...

> WebHDFS : Support get/set/unset StoragePolicy 
> --
>
> Key: HDFS-8630
> URL: https://issues.apache.org/jira/browse/HDFS-8630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: webhdfs
>Reporter: nijel
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8630.001.patch, HDFS-8630.002.patch, 
> HDFS-8630.003.patch, HDFS-8630.004.patch, HDFS-8630.005.patch, 
> HDFS-8630.006.patch, HDFS-8630.007.patch, HDFS-8630.008.patch, HDFS-8630.patch
>
>
> Users can set and get the storage policy from the filesystem object. The same 
> operations can be allowed through the REST API.
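
For illustration, a hedged sketch of what the REST calls could look like, using 
the op names proposed by the patch (treat the exact names, parameters and HTTP 
methods as assumptions until the patch is committed):
{code}
# Set a storage policy on a path
curl -X PUT 'http://localhost:50070/webhdfs/v1/tmp/data?op=SETSTORAGEPOLICY&storagepolicy=COLD&user.name=hdfs'

# Read the effective policy back
curl 'http://localhost:50070/webhdfs/v1/tmp/data?op=GETSTORAGEPOLICY&user.name=hdfs'

# Unset it again
curl -X POST 'http://localhost:50070/webhdfs/v1/tmp/data?op=UNSETSTORAGEPOLICY&user.name=hdfs'
{code}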



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11174) Wrong HttpFS test command in doc

2016-11-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15694071#comment-15694071
 ] 

Hadoop QA commented on HDFS-11174:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  8m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11174 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12840453/HDFS-11174.001.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux e09c668de422 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 01665e4 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17660/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Wrong HttpFS test command in doc
> 
>
> Key: HDFS-11174
> URL: https://issues.apache.org/jira/browse/HDFS-11174
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, httpfs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-11174.001.patch
>
>
> There are 2 errors in section {{Test HttpFS is working}} in 
> http://hadoop.apache.org/docs/r2.7.3/hadoop-hdfs-httpfs/ServerSetup.html:
> {noformat}
> ~ $ curl -i "http://:14000?user.name=babu=homedir"
> HTTP/1.1 200 OK
> Content-Type: application/json
> Transfer-Encoding: chunked
> {"homeDir":"http:\/\/:14000\/user\/babu"}
> {noformat}
> # The URL path should be {{/webhdfs/v1}}.
> # The {{op}} should be {{gethomedirectory}}, not {{homedir}}.
> The curl command would produce this error:
> {noformat}
> $ curl 'http://localhost:14000/webhdfs/v1?op=homedir&user.name=hdfs' | json_pp
> {
>"RemoteException" : {
>   "message" : "java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.HOMEDIR",
>   "exception" : "QueryParamException",
>   "javaClassName" : 
> "com.sun.jersey.api.ParamException$QueryParamException"
>}
> }
> {noformat}
> The correct command should be:
> {code}
> $ curl 'http://localhost:14000/webhdfs/v1?op=gethomedirectory&user.name=hdfs' 
> | json_pp
> {
>"Path" : "/user/hdfs"
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11174) Wrong HttpFS test command in doc

2016-11-24 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11174:
--
Status: Patch Available  (was: In Progress)

> Wrong HttpFS test command in doc
> 
>
> Key: HDFS-11174
> URL: https://issues.apache.org/jira/browse/HDFS-11174
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, httpfs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-11174.001.patch
>
>
> There are 2 errors in section {{Test HttpFS is working}} in 
> http://hadoop.apache.org/docs/r2.7.3/hadoop-hdfs-httpfs/ServerSetup.html:
> {noformat}
> ~ $ curl -i "http://:14000?user.name=babu=homedir"
> HTTP/1.1 200 OK
> Content-Type: application/json
> Transfer-Encoding: chunked
> {"homeDir":"http:\/\/:14000\/user\/babu"}
> {noformat}
> # The URL path should be {{/webhdfs/v1}}.
> # The {{op}} should be {{gethomedirectory}}, not {{homedir}}.
> The curl command would produce this error:
> {noformat}
> $ curl 'http://localhost:14000/webhdfs/v1?op=homedir&user.name=hdfs' | json_pp
> {
>"RemoteException" : {
>   "message" : "java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.HOMEDIR",
>   "exception" : "QueryParamException",
>   "javaClassName" : 
> "com.sun.jersey.api.ParamException$QueryParamException"
>}
> }
> {noformat}
> The correct command should be:
> {code}
> $ curl 'http://localhost:14000/webhdfs/v1?op=gethomedirectory&user.name=hdfs' 
> | json_pp
> {
>"Path" : "/user/hdfs"
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11174) Wrong HttpFS test command in doc

2016-11-24 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11174:
--
Attachment: HDFS-11174.001.patch

Patch 001:
* Fix the HttpFS test command in {{ServerSetup.md.vm}}

> Wrong HttpFS test command in doc
> 
>
> Key: HDFS-11174
> URL: https://issues.apache.org/jira/browse/HDFS-11174
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, httpfs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-11174.001.patch
>
>
> There are 2 errors in section {{Test HttpFS is working}} in 
> http://hadoop.apache.org/docs/r2.7.3/hadoop-hdfs-httpfs/ServerSetup.html:
> {noformat}
> ~ $ curl -i "http://:14000?user.name=babu=homedir"
> HTTP/1.1 200 OK
> Content-Type: application/json
> Transfer-Encoding: chunked
> {"homeDir":"http:\/\/:14000\/user\/babu"}
> {noformat}
> # The URL path should be {{/webhdfs/v1}}.
> # The {{op}} should be {{gethomedirectory}}, not {{homedir}}.
> The curl command would produce this error:
> {noformat}
> $ curl 'http://localhost:14000/webhdfs/v1?op=homedir&user.name=hdfs' | json_pp
> {
>"RemoteException" : {
>   "message" : "java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.HOMEDIR",
>   "exception" : "QueryParamException",
>   "javaClassName" : 
> "com.sun.jersey.api.ParamException$QueryParamException"
>}
> }
> {noformat}
> The correct command should be:
> {code}
> $ curl 'http://localhost:14000/webhdfs/v1?op=gethomedirectory&user.name=hdfs' 
> | json_pp
> {
>"Path" : "/user/hdfs"
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-11174) Wrong HttpFS test command in doc

2016-11-24 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-11174 started by John Zhuge.
-
> Wrong HttpFS test command in doc
> 
>
> Key: HDFS-11174
> URL: https://issues.apache.org/jira/browse/HDFS-11174
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, httpfs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>
> There are 2 errors in section {{Test HttpFS is working}} in 
> http://hadoop.apache.org/docs/r2.7.3/hadoop-hdfs-httpfs/ServerSetup.html:
> {noformat}
> ~ $ curl -i "http://:14000?user.name=babu=homedir"
> HTTP/1.1 200 OK
> Content-Type: application/json
> Transfer-Encoding: chunked
> {"homeDir":"http:\/\/:14000\/user\/babu"}
> {noformat}
> # The URL path should be {{/webhdfs/v1}}.
> # The {{op}} should be {{gethomedirectory}}, not {{homedir}}.
> The curl command would produce this error:
> {noformat}
> $ curl 'http://localhost:14000/webhdfs/v1?op=homedir&user.name=hdfs' | json_pp
> {
>"RemoteException" : {
>   "message" : "java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.HOMEDIR",
>   "exception" : "QueryParamException",
>   "javaClassName" : 
> "com.sun.jersey.api.ParamException$QueryParamException"
>}
> }
> {noformat}
> The correct command should be:
> {code}
> $ curl 'http://localhost:14000/webhdfs/v1?op=gethomedirectory&user.name=hdfs' 
> | json_pp
> {
>"Path" : "/user/hdfs"
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11174) Wrong HttpFS test command in doc

2016-11-24 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11174:
--
Description: 
There are 2 errors in section {{Test HttpFS is working}} in 
http://hadoop.apache.org/docs/r2.7.3/hadoop-hdfs-httpfs/ServerSetup.html:
{noformat}
~ $ curl -i "http://:14000?user.name=babu=homedir"
HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked

{"homeDir":"http:\/\/:14000\/user\/babu"}
{noformat}
# The URL path should be {{/webhdfs/v1}}.
# The {{op}} should be {{gethomedirectory}}, not {{homedir}}.

The curl command would produce this error:
{noformat}
$ curl 'http://localhost:14000/webhdfs/v1?op=homedir&user.name=hdfs' | json_pp
{
   "RemoteException" : {
  "message" : "java.lang.IllegalArgumentException: No enum constant 
org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.HOMEDIR",
  "exception" : "QueryParamException",
  "javaClassName" : "com.sun.jersey.api.ParamException$QueryParamException"
   }
}
{noformat}

The correct command should be:
{code}
$ curl 'http://localhost:14000/webhdfs/v1?op=gethomedirectory&user.name=hdfs' | 
json_pp
{
   "Path" : "/user/hdfs"
}
{code}

  was:
There are 2 errors in section {{Test HttpFS is working}} in 
http://hadoop.apache.org/docs/r2.7.3/hadoop-hdfs-httpfs/ServerSetup.html:
{noformat}
~ $ curl -i "http://:14000?user.name=babu=homedir"
HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked

{"homeDir":"http:\/\/:14000\/user\/babu"}
{noformat}
# The URL path should be {{/webhdfs/v1}}.
# The {{op}} should be {{gethomedirectory}}, not {{homedir}}.

The curl command would produce this error:
{noformat}
$ curl 'http://localhost:14000/webhdfs/v1?op=homedir&user.name=hdfs'
{
   "RemoteException" : {
  "message" : "java.lang.IllegalArgumentException: No enum constant 
org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.HOMEDIR",
  "exception" : "QueryParamException",
  "javaClassName" : "com.sun.jersey.api.ParamException$QueryParamException"
   }
}
{noformat}

The correct command should be:
{code}
curl 'http://localhost:14000/webhdfs/v1?op=gethomedirectory&user.name=hdfs'
{
   "Path" : "/user/hdfs"
}
{code}


> Wrong HttpFS test command in doc
> 
>
> Key: HDFS-11174
> URL: https://issues.apache.org/jira/browse/HDFS-11174
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, httpfs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>
> There are 2 errors in section {{Test HttpFS is working}} in 
> http://hadoop.apache.org/docs/r2.7.3/hadoop-hdfs-httpfs/ServerSetup.html:
> {noformat}
> ~ $ curl -i "http://:14000?user.name=babu=homedir"
> HTTP/1.1 200 OK
> Content-Type: application/json
> Transfer-Encoding: chunked
> {"homeDir":"http:\/\/:14000\/user\/babu"}
> {noformat}
> # The URL path should be {{/webhdfs/v1}}.
> # The {{op}} should be {{gethomedirectory}}, not {{homedir}}.
> The curl command would produce this error:
> {noformat}
> $ curl 'http://localhost:14000/webhdfs/v1?op=homedir=hdfs' | json_pp
> {
>"RemoteException" : {
>   "message" : "java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.HOMEDIR",
>   "exception" : "QueryParamException",
>   "javaClassName" : 
> "com.sun.jersey.api.ParamException$QueryParamException"
>}
> }
> {noformat}
> The correct command should be:
> {code}
> $ curl 'http://localhost:14000/webhdfs/v1?op=gethomedirectory=hdfs' 
> | json_pp
> {
>"Path" : "/user/hdfs"
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11174) Wrong HttpFS test command in doc

2016-11-24 Thread John Zhuge (JIRA)
John Zhuge created HDFS-11174:
-

 Summary: Wrong HttpFS test command in doc
 Key: HDFS-11174
 URL: https://issues.apache.org/jira/browse/HDFS-11174
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation, httpfs
Affects Versions: 2.6.0
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


There are 2 errors in section {{Test HttpFS is working}} in 
http://hadoop.apache.org/docs/r2.7.3/hadoop-hdfs-httpfs/ServerSetup.html:
{noformat}
~ $ curl -i "http://:14000?user.name=babu=homedir"
HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked

{"homeDir":"http:\/\/:14000\/user\/babu"}
{noformat}
# The URL path should be {{/webhdfs/v1}}.
# The {{op}} should be {{gethomedirectory}}, not {{homedir}}.

The curl command would produce this error:
{noformat}
$ curl 'http://localhost:14000/webhdfs/v1?op=homedir&user.name=hdfs'
{
   "RemoteException" : {
  "message" : "java.lang.IllegalArgumentException: No enum constant 
org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.HOMEDIR",
  "exception" : "QueryParamException",
  "javaClassName" : "com.sun.jersey.api.ParamException$QueryParamException"
   }
}
{noformat}

The correct command should be:
{code}
curl 'http://localhost:14000/webhdfs/v1?op=gethomedirectory&user.name=hdfs'
{
   "Path" : "/user/hdfs"
}
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11173) Reschedule CompletedActionXCommand if the job is not completed

2016-11-24 Thread Peter Cseh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Cseh updated HDFS-11173:
--
Description: 
We've encountered cases when the LauncherMapper stuck around after sending out 
the notifications to Oozie. If the callback is processed before the external 
job's status is updated to FINISHED, Oozie won't update the action's status for 
10 minutes.

We could add a delayed check to 
[CompletedActionXCommand|https://github.com/apache/oozie/blob/master/core/src/main/java/org/apache/oozie/command/wf/CompletedActionXCommand.java#L120
 ] to avoid this.


  was:
We've encountered cases when the LauncherMapper stuck around after sending out 
the notifications to Oozie. If the callback is processed before the external 
job's status is updated to FINISHED, Oozie won't update the action's status for 
10 minutes.

We could add a delayed check to [CompletedAcitonXCommand | 
https://github.com/apache/oozie/blob/master/core/src/main/java/org/apache/oozie/command/wf/CompletedActionXCommand.java#L120
 ] to avoid this.


> Reschedule CompletedActionXCommand if the job is not completed
> --
>
> Key: HDFS-11173
> URL: https://issues.apache.org/jira/browse/HDFS-11173
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Peter Cseh
>
> We've encountered cases when the LauncherMapper stuck around after sending 
> out the notifications to Oozie. If the callback is processed before the 
> external job's status is updated to FINISHED, Oozie won't update the action's 
> status for 10 minutes.
> We could add a delayed check to 
> [CompletedActionXCommand|https://github.com/apache/oozie/blob/master/core/src/main/java/org/apache/oozie/command/wf/CompletedActionXCommand.java#L120
>  ] to avoid this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11173) Reschedule CompletedActionXCommand if the job is not completed

2016-11-24 Thread Peter Cseh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15693876#comment-15693876
 ] 

Peter Cseh commented on HDFS-11173:
---

Please move this issue to the OOZIE project. I mistakenly opened it under HDFS.

> Reschedule CompletedActionXCommand if the job is not completed
> --
>
> Key: HDFS-11173
> URL: https://issues.apache.org/jira/browse/HDFS-11173
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Peter Cseh
>
> We've encountered cases where the LauncherMapper sticks around after sending 
> out the notifications to Oozie. If the callback is processed before the 
> external job's status is updated to FINISHED, Oozie won't update the 
> action's status for 10 minutes.
> We could add a delayed check to [CompletedActionXCommand | 
> https://github.com/apache/oozie/blob/master/core/src/main/java/org/apache/oozie/command/wf/CompletedActionXCommand.java#L120
>  ] to avoid this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11173) Reschedule CompletedActionXCommand if the job is not completed

2016-11-24 Thread Peter Cseh (JIRA)
Peter Cseh created HDFS-11173:
-

 Summary: Reschedule CompletedActionXCommand if the job is not 
completed
 Key: HDFS-11173
 URL: https://issues.apache.org/jira/browse/HDFS-11173
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Peter Cseh


We've encountered cases where the LauncherMapper sticks around after sending 
out the notifications to Oozie. If the callback is processed before the 
external job's status is updated to FINISHED, Oozie won't update the action's 
status for 10 minutes.

We could add a delayed check to [CompletedActionXCommand | 
https://github.com/apache/oozie/blob/master/core/src/main/java/org/apache/oozie/command/wf/CompletedActionXCommand.java#L120
 ] to avoid this.
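
A generic sketch of the proposed delayed re-check; this is illustrative Java, 
not Oozie's actual XCommand/queueing API:
{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

// If the callback arrives while the external job is not FINISHED yet,
// re-queue the completion check after a short delay instead of waiting
// for the 10-minute status poll.
class CompletionRecheck {
  private final ScheduledExecutorService exec =
      Executors.newSingleThreadScheduledExecutor();

  void onCallback(BooleanSupplier externalJobFinished, Runnable completeAction,
                  long delaySeconds) {
    if (externalJobFinished.getAsBoolean()) {
      completeAction.run();
    } else {
      exec.schedule(
          () -> onCallback(externalJobFinished, completeAction, delaySeconds),
          delaySeconds, TimeUnit.SECONDS);
    }
  }
}
{code}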



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2016-11-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15693624#comment-15693624
 ] 

Hadoop QA commented on HDFS-11146:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 31s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 17 new + 501 unchanged - 0 fixed = 518 total (was 501) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}127m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.fs.TestSymlinkHdfsFileSystem |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11146 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12840419/HDFS-11146.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux aaeba891da31 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / eb0a483 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17659/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17659/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17659/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17659/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   

[jira] [Commented] (HDFS-10885) [SPS]: Mover tool should not be allowed to run when Storage Policy Satisfier is on

2016-11-24 Thread Wei Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15693623#comment-15693623
 ] 

Wei Zhou commented on HDFS-10885:
-

Thanks [~rakeshr] for the suggestions! I'll update the patch.

> [SPS]: Mover tool should not be allowed to run when Storage Policy Satisfier 
> is on
> --
>
> Key: HDFS-10885
> URL: https://issues.apache.org/jira/browse/HDFS-10885
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Wei Zhou
>Assignee: Wei Zhou
> Fix For: HDFS-10285
>
> Attachments: HDFS-10800-HDFS-10885-00.patch, 
> HDFS-10800-HDFS-10885-01.patch, HDFS-10800-HDFS-10885-02.patch, 
> HDFS-10885-HDFS-10285.03.patch, HDFS-10885-HDFS-10285.04.patch, 
> HDFS-10885-HDFS-10285.05.patch, HDFS-10885-HDFS-10285.06.patch, 
> HDFS-10885-HDFS-10285.07.patch
>
>
> These two cannot run at the same time; otherwise they would conflict and 
> fight with each other.
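
A minimal sketch of the kind of startup guard being discussed; all names here 
are hypothetical, not the HDFS-10885 patch:
{code}
// Refuse to start the Mover tool while SPS is active, so the two block
// movers never run concurrently and fight over the same replicas.
class MoverStartupGuard {
  interface NamenodeView {
    boolean isStoragePolicySatisfierRunning();
  }

  static void checkCanRunMover(NamenodeView nn) {
    if (nn.isStoragePolicySatisfierRunning()) {
      throw new IllegalStateException(
          "Storage Policy Satisfier is running; stop/disable it before "
              + "launching the Mover tool.");
    }
  }
}
{code}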



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9668) Optimize the locking in FsDatasetImpl

2016-11-24 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15693549#comment-15693549
 ] 

Jingcheng Du commented on HDFS-9668:


The test failures should not be related to the patch.

> Optimize the locking in FsDatasetImpl
> -
>
> Key: HDFS-9668
> URL: https://issues.apache.org/jira/browse/HDFS-9668
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HDFS-9668-1.patch, HDFS-9668-10.patch, 
> HDFS-9668-11.patch, HDFS-9668-12.patch, HDFS-9668-13.patch, 
> HDFS-9668-14.patch, HDFS-9668-14.patch, HDFS-9668-15.patch, 
> HDFS-9668-16.patch, HDFS-9668-17.patch, HDFS-9668-18.patch, 
> HDFS-9668-19.patch, HDFS-9668-19.patch, HDFS-9668-2.patch, 
> HDFS-9668-20.patch, HDFS-9668-21.patch, HDFS-9668-22.patch, 
> HDFS-9668-23.patch, HDFS-9668-23.patch, HDFS-9668-24.patch, 
> HDFS-9668-25.patch, HDFS-9668-3.patch, HDFS-9668-4.patch, HDFS-9668-5.patch, 
> HDFS-9668-6.patch, HDFS-9668-7.patch, HDFS-9668-8.patch, HDFS-9668-9.patch, 
> execution_time.png
>
>
> During an HBase test on tiered HDFS storage (the WAL is stored on 
> SSD/RAMDISK, and all other files are stored on HDD), we observed many 
> long-blocked (BLOCKED state) threads on FsDatasetImpl in the DataNode. The 
> following is part of the jstack result:
> {noformat}
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48521 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779272_40852]" - Thread 
> t@93336
>java.lang.Thread.State: BLOCKED
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:)
>   - waiting to lock <18324c9> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) owned by 
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" t@93335
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:183)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - None
>   
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" - Thread 
> t@93335
>java.lang.Thread.State: RUNNABLE
>   at java.io.UnixFileSystem.createFileExclusively(Native Method)
>   at java.io.File.createNewFile(File.java:1012)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:66)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:271)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:286)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1140)
>   - locked <18324c9> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:183)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - None
> {noformat}
> We measured the execution time of some operations in FsDatasetImpl during 
> the test. The results follow.
> !execution_time.png!
> The finalizeBlock, addBlock and createRbw operations on HDD under heavy 
> load take a really long time.
> This means that one slow finalizeBlock, addBlock or createRbw operation on 
> a slow storage can block all other such operations in the same DataNode, 
> especially in HBase when many WAL/flusher/compactor threads are configured.
> We need a 

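To make the direction concrete, here is a minimal sketch of splitting a 
dataset-wide monitor into per-volume locks; all names are illustrative, and 
this is not the actual HDFS-9668 patch:
{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Replace one dataset-wide monitor with per-volume locks so that a slow
// createRbw/finalizeBlock on an HDD volume no longer blocks writers on
// SSD/RAMDISK volumes.
class PerVolumeLocks {
  private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

  void runWithVolumeLock(String volume, Runnable op) {
    ReentrantLock lock =
        locks.computeIfAbsent(volume, v -> new ReentrantLock());
    lock.lock();
    try {
      op.run(); // e.g. the disk I/O part of createRbw for this volume
    } finally {
      lock.unlock();
    }
  }
}
{code}
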
[jira] [Commented] (HDFS-11169) GetBlockLocations returns a block when offset > filesize and file only has 1 block

2016-11-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15693430#comment-15693430
 ] 

Hadoop QA commented on HDFS-11169:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 122 unchanged - 1 fixed = 123 total (was 123) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m  3s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.fs.TestSymlinkHdfsFileSystem |
|   | hadoop.hdfs.server.namenode.TestAddBlockRetry |
|   | hadoop.hdfs.server.namenode.TestAddStripedBlocks |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
|   | org.apache.hadoop.hdfs.server.namenode.TestBlockUnderConstruction |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11169 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12840414/HDFS-11169.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 447bd0f373a9 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / eb0a483 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17658/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17658/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17658/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-10994) Support an XOR policy XOR-2-1-64k in HDFS

2016-11-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15693394#comment-15693394
 ] 

Hadoop QA commented on HDFS-10994:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
54s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
34s{color} | {color:green} root: The patch generated 0 new + 126 unchanged - 6 
fixed = 126 total (was 132) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
43s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}150m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestSymlinkHdfsFileSystem |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10994 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12840411/HDFS-10994-v5.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5dfd2a401efb 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e15c20e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17657/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Updated] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2016-11-24 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11146:

Status: Patch Available  (was: Open)

> Excess replicas will not be deleted until all storages's FBR received after 
> failover
> 
>
> Key: HDFS-11146
> URL: https://issues.apache.org/jira/browse/HDFS-11146
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-11146.patch
>
>
> Excess replicas will not be deleted until all storages' FBRs are received 
> after failover.
> Thinking the following solution can help.
>  *Solution:* 
> I think that after failover, since DNs are aware of the failover, they can 
> send another full block report (FBR) irrespective of the report interval. 
> Maybe some shuffle can be done, similar to the initial delay.
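
A rough sketch of the "shuffle, similar to initial delay" idea; the names are 
hypothetical, and this is not the attached patch:
{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

// On detecting a failover, each DN schedules one extra full block report
// at a random offset so the new active NN is not flooded all at once.
class FailoverFbrScheduler {
  private final ScheduledExecutorService exec =
      Executors.newSingleThreadScheduledExecutor();

  // maxDelayMs must be positive; the jitter spreads reports over that window.
  void onFailoverDetected(Runnable sendFullBlockReport, long maxDelayMs) {
    long jitterMs = ThreadLocalRandom.current().nextLong(maxDelayMs);
    exec.schedule(sendFullBlockReport, jitterMs, TimeUnit.MILLISECONDS);
  }
}
{code}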



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2016-11-24 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11146:

Attachment: HDFS-11146.patch

Uploading the draft patch.
We still need to limit the number of block reports requested at a time.

> Excess replicas will not be deleted until all storages's FBR received after 
> failover
> 
>
> Key: HDFS-11146
> URL: https://issues.apache.org/jira/browse/HDFS-11146
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-11146.patch
>
>
> Excess replicas will not be deleted until all storages' FBRs are received 
> after failover.
> Thinking the following solution can help.
>  *Solution:* 
> I think that after failover, since DNs are aware of the failover, they can 
> send another full block report (FBR) irrespective of the report interval. 
> Maybe some shuffle can be done, similar to the initial delay.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11169) GetBlockLocations returns a block when offset > filesize and file only has 1 block

2016-11-24 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11169:
-
Attachment: HDFS-11169.001.patch

> GetBlockLocations returns a block when offset > filesize and file only has 1 
> block
> --
>
> Key: HDFS-11169
> URL: https://issues.apache.org/jira/browse/HDFS-11169
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
> Environment: HDP 2.5, Ambari 2.4
>Reporter: David Tucker
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11169.001.patch
>
>
> Start with a fresh deployment of HDFS.
> 1. Create a file.
> 2. AddBlock the file with an offset larger than the file size.
> 3. Call GetBlockLocations.
> Expectation: 0 blocks are returned because the only added block is incomplete.
> Observation: 1 block is returned.
> This only seems to occur when 1 block is in play (i.e. if you write a block 
> and call AddBlock again, GetBlockLocations seems to behave as expected).
> This seems to be related to HDFS-513.
> I suspect the following line needs revision: 
> https://github.com/apache/hadoop/blob/4484b48498b2ab2a40a404c487c7a4e875df10dc/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L1062
> I believe it should be >= instead of >:
> if (nrBlocks >= 0 && curBlk == nrBlocks)   // offset >= end of file
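
To illustrate the off-by-one in the simplest case (simplified names; not the 
actual BlockManager code):
{code}
public class OffsetGuardDemo {
  // Proposed guard: with ">=", a file whose only block is still under
  // construction (nrBlocks == 0, curBlk == 0) is treated as "offset past
  // end of file", so the incomplete block is not returned.
  static boolean offsetPastEndOfFile(int nrBlocks, int curBlk) {
    return nrBlocks >= 0 && curBlk == nrBlocks;
  }

  public static void main(String[] args) {
    int nrBlocks = 0, curBlk = 0; // single block, still under construction
    System.out.println(nrBlocks > 0 && curBlk == nrBlocks);   // false: block leaks
    System.out.println(offsetPastEndOfFile(nrBlocks, curBlk)); // true: empty list
  }
}
{code}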



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11169) GetBlockLocations returns a block when offset > filesize and file only has 1 block

2016-11-24 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11169:
-
Status: Patch Available  (was: Open)

> GetBlockLocations returns a block when offset > filesize and file only has 1 
> block
> --
>
> Key: HDFS-11169
> URL: https://issues.apache.org/jira/browse/HDFS-11169
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
> Environment: HDP 2.5, Ambari 2.4
>Reporter: David Tucker
>Assignee: Yiqun Lin
>Priority: Minor
>
> Start with a fresh deployment of HDFS.
> 1. Create a file.
> 2. AddBlock the file with an offset larger than the file size.
> 3. Call GetBlockLocations.
> Expectation: 0 blocks are returned because the only added block is incomplete.
> Observation: 1 block is returned.
> This only seems to occur when 1 block is in play (i.e. if you write a block 
> and call AddBlock again, GetBlockLocations seems to behave as expected).
> This seems to be related to HDFS-513.
> I suspect the following line needs revision: 
> https://github.com/apache/hadoop/blob/4484b48498b2ab2a40a404c487c7a4e875df10dc/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L1062
> I believe it should be >= instead of >:
> if (nrBlocks >= 0 && curBlk == nrBlocks)   // offset >= end of file



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11169) GetBlockLocations returns a block when offset > filesize and file only has 1 block

2016-11-24 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15693222#comment-15693222
 ] 

Yiqun Lin commented on HDFS-11169:
--

I think this is a good find! Thanks David for reporting this; the proposal 
makes sense to me. I have tested it locally. Attaching an initial patch with 
the fix, plus a simple test to exercise the change. Kindly review. Thanks.

> GetBlockLocations returns a block when offset > filesize and file only has 1 
> block
> --
>
> Key: HDFS-11169
> URL: https://issues.apache.org/jira/browse/HDFS-11169
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
> Environment: HDP 2.5, Ambari 2.4
>Reporter: David Tucker
>Assignee: Yiqun Lin
>Priority: Minor
>
> Start with a fresh deployment of HDFS.
> 1. Create a file.
> 2. AddBlock the file with an offset larger than the file size.
> 3. Call GetBlockLocations.
> Expectation: 0 blocks are returned because the only added block is incomplete.
> Observation: 1 block is returned.
> This only seems to occur when 1 block is in play (i.e. if you write a block 
> and call AddBlock again, GetBlockLocations seems to behave as expected).
> This seems to be related to HDFS-513.
> I suspect the following line needs revision: 
> https://github.com/apache/hadoop/blob/4484b48498b2ab2a40a404c487c7a4e875df10dc/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L1062
> I believe it should be >= instead of >:
> if (nrBlocks >= 0 && curBlk == nrBlocks)   // offset >= end of file



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-11169) GetBlockLocations returns a block when offset > filesize and file only has 1 block

2016-11-24 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin reassigned HDFS-11169:


Assignee: Yiqun Lin

> GetBlockLocations returns a block when offset > filesize and file only has 1 
> block
> --
>
> Key: HDFS-11169
> URL: https://issues.apache.org/jira/browse/HDFS-11169
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
> Environment: HDP 2.5, Ambari 2.4
>Reporter: David Tucker
>Assignee: Yiqun Lin
>Priority: Minor
>
> Start with a fresh deployment of HDFS.
> 1. Create a file.
> 2. AddBlock the file with an offset larger than the file size.
> 3. Call GetBlockLocations.
> Expectation: 0 blocks are returned because the only added block is incomplete.
> Observation: 1 block is returned.
> This only seems to occur when 1 block is in play (i.e. if you write a block 
> and call AddBlock again, GetBlockLocations seems to behave as expected).
> This seems to be related to HDFS-513.
> I suspect the following line needs revision: 
> https://github.com/apache/hadoop/blob/4484b48498b2ab2a40a404c487c7a4e875df10dc/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L1062
> I believe it should be >= instead of >:
> if (nrBlocks >= 0 && curBlk == nrBlocks)   // offset >= end of file



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10994) Support an XOR policy XOR-2-1-64k in HDFS

2016-11-24 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-10994:
-
Attachment: HDFS-10994-v5.patch

Kai discussed the patch with me offline and gave me very good advice. I 
updated the patch per his suggestions. 

> Support an XOR policy XOR-2-1-64k in HDFS
> -
>
> Key: HDFS-10994
> URL: https://issues.apache.org/jira/browse/HDFS-10994
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-10994-v1.patch, HDFS-10994-v2.patch, 
> HDFS-10994-v3.patch, HDFS-10994-v4.patch, HDFS-10994-v5.patch
>
>
> So far, "hdfs erasurecode" command supports three policies, 
> RS-DEFAULT-3-2-64k, RS-DEFAULT-6-3-64k and RS-LEGACY-6-3-64k. This task is 
> going to add XOR-2-1-64k policy to this command.
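
For illustration, a sketch of applying the new policy to a directory 
programmatically. The exact API shape moved around across the 3.0 alpha 
releases, so treat the method signatures here as assumptions; the path is 
illustrative:
{code}
import java.util.Collection;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;

public class ApplyXorPolicy {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path dir = new Path("/ec/xor"); // illustrative directory
    DistributedFileSystem dfs =
        (DistributedFileSystem) dir.getFileSystem(conf);
    dfs.mkdirs(dir);
    // Look up XOR-2-1-64k among the policies known to the cluster
    // (return type assumed; it varies across 3.x releases).
    Collection<ErasureCodingPolicy> policies =
        dfs.getAllErasureCodingPolicies();
    for (ErasureCodingPolicy policy : policies) {
      if ("XOR-2-1-64k".equals(policy.getName())) {
        dfs.setErasureCodingPolicy(dir, policy); // directory-level policy
      }
    }
  }
}
{code}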



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11172) Support an erasure coding policy RS-DEFAULT-10-4-64k in HDFS

2016-11-24 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-11172:

Issue Type: Sub-task  (was: Task)
Parent: HDFS-8031

> Support an erasure coding policy RS-DEFAULT-10-4-64k in HDFS
> 
>
> Key: HDFS-11172
> URL: https://issues.apache.org/jira/browse/HDFS-11172
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: SammiChen
>Assignee: SammiChen
>
> So far, "hdfs erasurecode" command supports three policies, 
> RS-DEFAULT-3-2-64k, RS-DEFAULT-6-3-64k and RS-LEGACY-6-3-64k. This task is 
> going to add RS-DEFAULT-10-4-64k policy to this command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11172) Support an erasure coding policy RS-DEFAULT-10-4-64k in HDFS

2016-11-24 Thread SammiChen (JIRA)
SammiChen created HDFS-11172:


 Summary: Support an erasure coding policy RS-DEFAULT-10-4-64k in 
HDFS
 Key: HDFS-11172
 URL: https://issues.apache.org/jira/browse/HDFS-11172
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: SammiChen
Assignee: SammiChen


So far, "hdfs erasurecode" command supports three policies, RS-DEFAULT-3-2-64k, 
RS-DEFAULT-6-3-64k and RS-LEGACY-6-3-64k. This task is going to add 
RS-DEFAULT-10-4-64k policy to this command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11171) Add localBytesRead and localReadTime for datanode metrics to calculate local read rate which is useful to compare .

2016-11-24 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15692841#comment-15692841
 ] 

Yiqun Lin commented on HDFS-11171:
--

Hi [~5feixiang], we can resolve the JIRA only after the patch has been 
reviewed by a committer and committed to trunk. I think you should reopen the 
issue and use the button to set this JIRA's status to "Patch Available". 
Thanks.

> Add localBytesRead and localReadTime for datanode metrics to calculate local 
> read rate which is useful to compare .
> ---
>
> Key: HDFS-11171
> URL: https://issues.apache.org/jira/browse/HDFS-11171
> Project: Hadoop HDFS
>  Issue Type: Wish
>  Components: datanode
>Reporter: 5feixiang
>  Labels: metrics
> Attachments: localReadMetrics.patch
>
>
> The current dfs context metrics contain only bytesRead, remoteBytesRead and 
> totalReadTime. We can see that bytesRead = remoteBytesRead + (balancer) 
> copyBytesRead + localBytesRead, so we can add localBytesRead and 
> localReadTime to calculate the local read rate, which is useful for 
> comparing short-circuit reads against TCP-socket reads.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org