[jira] [Updated] (HDFS-14883) NPE when the second SNN is starting

2019-10-10 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14883:
-
Affects Version/s: 3.1.1

> NPE when the second SNN is starting
> ---
>
> Key: HDFS-14883
> URL: https://issues.apache.org/jira/browse/HDFS-14883
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: multi-sbnn
>
>  
> {{| WARN | qtp79782883-47 | /imagetransfer | ServletHandler.java:632
>  java.io.IOException: PutImage failed. java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.namenode.ImageServlet.validateRequest(ImageServlet.java:198)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ImageServlet.doPut(ImageServlet.java:485)
>  at javax.servlet.http.HttpServlet.service(HttpServlet.java:710)
>  at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>  at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
>  at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)}}
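The trace above shows validateRequest dereferencing something that is null when the second standby NameNode uploads a checkpoint image. As a hedged illustration only (not Hadoop's actual ImageServlet code; the registry map and method shape below are hypothetical stand-ins), the defensive pattern is to treat the lookup as nullable and fail with a descriptive IOException instead of a bare NullPointerException:

```java
// Hedged sketch: a lookup for a not-yet-registered node (e.g. a second
// standby NN) returns null and must be checked before use.
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class ValidateRequestSketch {
    // Maps a NameNode id to its registered transfer address (stand-in).
    static final Map<String, String> registry = new HashMap<>();

    // Returns the target address, or throws a descriptive IOException
    // when the requesting node is unknown, rather than surfacing an NPE.
    static String validateRequest(String nodeId) throws IOException {
        String addr = registry.get(nodeId);   // may be null
        if (addr == null) {
            throw new IOException("PutImage failed: unknown NameNode " + nodeId);
        }
        return addr.trim();                   // safe after the null check
    }

    public static void main(String[] args) throws IOException {
        registry.put("nn1", " host1:50070 ");
        System.out.println(validateRequest("nn1"));
        try {
            validateRequest("nn2");           // unregistered second standby
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```
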



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14883) NPE when the second SNN is starting

2019-09-30 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14883:
-
Description: 
 

{{| WARN | qtp79782883-47 | /imagetransfer | ServletHandler.java:632
 java.io.IOException: PutImage failed. java.lang.NullPointerException
 at 
org.apache.hadoop.hdfs.server.namenode.ImageServlet.validateRequest(ImageServlet.java:198)
 at 
org.apache.hadoop.hdfs.server.namenode.ImageServlet.doPut(ImageServlet.java:485)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:710)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
 at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
 at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)}}

  was:
 

{{2019-09-25 22:41:31,889 | WARN  | qtp79782883-47 | /imagetransfer | 
ServletHandler.java:632
java.io.IOException: PutImage failed. java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.namenode.ImageServlet.validateRequest(ImageServlet.java:198)
at 
org.apache.hadoop.hdfs.server.namenode.ImageServlet.doPut(ImageServlet.java:485)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:710)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)}}


> NPE when the second SNN is starting
> ---
>
> Key: HDFS-14883
> URL: https://issues.apache.org/jira/browse/HDFS-14883
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>
>  
> {{| WARN | qtp79782883-47 | /imagetransfer | ServletHandler.java:632
>  java.io.IOException: PutImage failed. java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.namenode.ImageServlet.validateRequest(ImageServlet.java:198)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ImageServlet.doPut(ImageServlet.java:485)
>  at javax.servlet.http.HttpServlet.service(HttpServlet.java:710)
>  at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>  at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
>  at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)}}






[jira] [Created] (HDFS-14883) NPE when the second SNN is starting

2019-09-30 Thread Ranith Sardar (Jira)
Ranith Sardar created HDFS-14883:


 Summary: NPE when the second SNN is starting
 Key: HDFS-14883
 URL: https://issues.apache.org/jira/browse/HDFS-14883
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ranith Sardar
Assignee: Ranith Sardar


 

{{2019-09-25 22:41:31,889 | WARN  | qtp79782883-47 | /imagetransfer | 
ServletHandler.java:632
java.io.IOException: PutImage failed. java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.namenode.ImageServlet.validateRequest(ImageServlet.java:198)
at 
org.apache.hadoop.hdfs.server.namenode.ImageServlet.doPut(ImageServlet.java:485)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:710)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)}}






[jira] [Issue Comment Deleted] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is not present

2019-09-25 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14853:
-
Comment: was deleted

(was: Sure [~xkrogen])

> NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode 
> is not present
> 
>
> Key: HDFS-14853
> URL: https://issues.apache.org/jira/browse/HDFS-14853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14853.001.patch, HDFS-14853.002.patch, 
> HDFS-14853.003.patch
>
>
>  
> {{org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:229)
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:77)}}
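The NPE arises because the excluded node is resolved against a topology it has already been removed from, so the lookup yields null. A minimal self-contained sketch of null-safe exclusion handling (simplified stand-in types, not the actual DFSNetworkTopology implementation):

```java
// Hedged sketch: tolerate an excluded node that was already deleted from
// the topology instead of dereferencing the null lookup result.
import java.util.HashMap;
import java.util.Map;

public class ChooseRandomSketch {
    static class Node {
        final String name;
        Node(String name) { this.name = name; }
    }

    // Simplified topology map; in HDFS this is the network topology tree.
    static final Map<String, Node> topology = new HashMap<>();

    // Picks any node other than excludedName. The excluded node may have
    // been removed already, in which case the lookup returns null and the
    // exclusion is simply a no-op.
    static Node chooseExcluding(String excludedName) {
        Node excluded = topology.get(excludedName);  // null if deleted
        for (Node n : topology.values()) {
            if (excluded == null || !n.name.equals(excluded.name)) {
                return n;
            }
        }
        return null;  // nothing available outside the exclusion
    }

    public static void main(String[] args) {
        topology.put("dn1", new Node("dn1"));
        // "dn2" was deleted from the topology; excluding it must not NPE.
        Node picked = chooseExcluding("dn2");
        System.out.println(picked.name);
    }
}
```
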






[jira] [Commented] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is not present

2019-09-25 Thread Ranith Sardar (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16937972#comment-16937972
 ] 

Ranith Sardar commented on HDFS-14853:
--

Sure [~xkrogen]

> NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode 
> is not present
> 
>
> Key: HDFS-14853
> URL: https://issues.apache.org/jira/browse/HDFS-14853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14853.001.patch, HDFS-14853.002.patch, 
> HDFS-14853.003.patch
>
>
>  
> {{org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:229)
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:77)}}






[jira] [Commented] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is deleted

2019-09-22 Thread Ranith Sardar (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16935226#comment-16935226
 ] 

Ranith Sardar commented on HDFS-14853:
--

Thanks [~ayushtkn]! The new patch handles the checkstyle warning.

> NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode 
> is deleted
> 
>
> Key: HDFS-14853
> URL: https://issues.apache.org/jira/browse/HDFS-14853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14853.001.patch, HDFS-14853.002.patch, 
> HDFS-14853.003.patch
>
>
>  
> {{org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:229)
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:77)}}






[jira] [Updated] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is deleted

2019-09-22 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14853:
-
Attachment: HDFS-14853.003.patch

> NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode 
> is deleted
> 
>
> Key: HDFS-14853
> URL: https://issues.apache.org/jira/browse/HDFS-14853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14853.001.patch, HDFS-14853.002.patch, 
> HDFS-14853.003.patch
>
>
>  
> {{org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:229)
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:77)}}






[jira] [Commented] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is deleted

2019-09-19 Thread Ranith Sardar (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933785#comment-16933785
 ] 

Ranith Sardar commented on HDFS-14853:
--

Thanks [~ayushtkn] for reviewing this patch. Yes, this UT looks pretty simple; thanks for the suggestion. I have uploaded a new patch.

[~John Smith], we should not import *; this is changed in the new patch.

The UT failure is not related to the patch.

 

> NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode 
> is deleted
> 
>
> Key: HDFS-14853
> URL: https://issues.apache.org/jira/browse/HDFS-14853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14853.001.patch, HDFS-14853.002.patch
>
>
>  
> {{org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:229)
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:77)}}






[jira] [Updated] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is deleted

2019-09-19 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14853:
-
Attachment: HDFS-14853.002.patch

> NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode 
> is deleted
> 
>
> Key: HDFS-14853
> URL: https://issues.apache.org/jira/browse/HDFS-14853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14853.001.patch, HDFS-14853.002.patch
>
>
>  
> {{org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:229)
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:77)}}






[jira] [Commented] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is deleted

2019-09-19 Thread Ranith Sardar (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933100#comment-16933100
 ] 

Ranith Sardar commented on HDFS-14853:
--

Attached the patch; please review.

> NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode 
> is deleted
> 
>
> Key: HDFS-14853
> URL: https://issues.apache.org/jira/browse/HDFS-14853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14853.001.patch
>
>
>  
> {{org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:229)
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:77)}}






[jira] [Updated] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is deleted

2019-09-19 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14853:
-
Status: Patch Available  (was: Open)

> NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode 
> is deleted
> 
>
> Key: HDFS-14853
> URL: https://issues.apache.org/jira/browse/HDFS-14853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14853.001.patch
>
>
>  
> {{org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:229)
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:77)}}






[jira] [Updated] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is deleted

2019-09-19 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14853:
-
Attachment: HDFS-14853.001.patch

> NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode 
> is deleted
> 
>
> Key: HDFS-14853
> URL: https://issues.apache.org/jira/browse/HDFS-14853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14853.001.patch
>
>
>  
> {{org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:229)
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:77)}}






[jira] [Created] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is deleted

2019-09-18 Thread Ranith Sardar (Jira)
Ranith Sardar created HDFS-14853:


 Summary: NPE in DFSNetworkTopology#chooseRandomWithStorageType() 
when the excludedNode is deleted
 Key: HDFS-14853
 URL: https://issues.apache.org/jira/browse/HDFS-14853
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ranith Sardar
Assignee: Ranith Sardar


 

{{org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
java.lang.NullPointerException
  at 
org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:229)
  at 
org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:77)}}






[jira] [Updated] (HDFS-14827) RBF: Shared DN should display all info's in Router DataNode UI

2019-09-06 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14827:
-
Summary: RBF: Shared DN should display all info's in Router DataNode UI  
(was: RBF: Shared DN should display all info's in Router DtaNode UI)

> RBF: Shared DN should display all info's in Router DataNode UI
> --
>
> Key: HDFS-14827
> URL: https://issues.apache.org/jira/browse/HDFS-14827
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14827) RBF: Shared DN should display all info's in Router DtaNode UI

2019-09-06 Thread Ranith Sardar (Jira)
Ranith Sardar created HDFS-14827:


 Summary: RBF: Shared DN should display all info's in Router 
DtaNode UI
 Key: HDFS-14827
 URL: https://issues.apache.org/jira/browse/HDFS-14827
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ranith Sardar
Assignee: Ranith Sardar









[jira] [Commented] (HDFS-14777) RBF: Set ReadOnly is failing for mount Table but actually readonly succeed to set

2019-09-06 Thread Ranith Sardar (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924189#comment-16924189
 ] 

Ranith Sardar commented on HDFS-14777:
--

 Thanks [~surendrasingh] [~elgoiri].

> RBF: Set ReadOnly is failing for mount Table but actually readonly succeed to 
> set
> -
>
> Key: HDFS-14777
> URL: https://issues.apache.org/jira/browse/HDFS-14777
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14777.001.patch, HDFS-14777.002.patch, 
> HDFS-14777.003.patch, HDFS-14777.004.patch
>
>
> # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> /opt/client # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> update: /test is in a read only mount point
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): /test is in a read only mount point
>  at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:1419)
>  at org.apache.hadoop.hdfs.server.federation.router.Quota.getQuotaRemoteLocations(Quota.java:217)
>  at org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:75)
>  at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:288)






[jira] [Updated] (HDFS-14777) RBF: Set ReadOnly is failing for mount Table but actually readonly succed to set

2019-09-02 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14777:
-
Attachment: HDFS-14777.004.patch

> RBF: Set ReadOnly is failing for mount Table but actually readonly succed to 
> set
> 
>
> Key: HDFS-14777
> URL: https://issues.apache.org/jira/browse/HDFS-14777
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14777.001.patch, HDFS-14777.002.patch, 
> HDFS-14777.003.patch, HDFS-14777.004.patch
>
>
> # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> /opt/client # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> update: /test is in a read only mount point
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): /test is in a read only mount point
>  at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:1419)
>  at org.apache.hadoop.hdfs.server.federation.router.Quota.getQuotaRemoteLocations(Quota.java:217)
>  at org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:75)
>  at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:288)






[jira] [Updated] (HDFS-14519) NameQuota is not update after concat operation, so namequota is wrong

2019-09-02 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14519:
-
Attachment: HDFS-14519.002.patch

> NameQuota is not update after concat operation, so namequota is wrong
> -
>
> Key: HDFS-14519
> URL: https://issues.apache.org/jira/browse/HDFS-14519
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14519.001.patch, HDFS-14519.002.patch
>
>







[jira] [Updated] (HDFS-14777) RBF: Set ReadOnly is failing for mount Table but actually readonly succed to set

2019-08-31 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14777:
-
Attachment: HDFS-14777.003.patch

> RBF: Set ReadOnly is failing for mount Table but actually readonly succed to 
> set
> 
>
> Key: HDFS-14777
> URL: https://issues.apache.org/jira/browse/HDFS-14777
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14777.001.patch, HDFS-14777.002.patch, 
> HDFS-14777.003.patch
>
>
> # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> /opt/client # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> update: /test is in a read only mount point
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): /test is in a read only mount point
>  at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:1419)
>  at org.apache.hadoop.hdfs.server.federation.router.Quota.getQuotaRemoteLocations(Quota.java:217)
>  at org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:75)
>  at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:288)






[jira] [Updated] (HDFS-14777) RBF: Set ReadOnly is failing for mount Table but actually readonly succed to set

2019-08-31 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14777:
-
Attachment: (was: HDFS-14777.003.patch)

> RBF: Set ReadOnly is failing for mount Table but actually readonly succed to 
> set
> 
>
> Key: HDFS-14777
> URL: https://issues.apache.org/jira/browse/HDFS-14777
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14777.001.patch, HDFS-14777.002.patch, 
> HDFS-14777.003.patch
>
>
> # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> /opt/client # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> update: /test is in a read only mount point
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): /test is in a read only mount point
>  at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:1419)
>  at org.apache.hadoop.hdfs.server.federation.router.Quota.getQuotaRemoteLocations(Quota.java:217)
>  at org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:75)
>  at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:288)






[jira] [Commented] (HDFS-14777) RBF: Set ReadOnly is failing for mount Table but actually readonly succed to set

2019-08-31 Thread Ranith Sardar (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16920087#comment-16920087
 ] 

Ranith Sardar commented on HDFS-14777:
--

Thanks, [~elgoiri], for reviewing.

The current patch uses a function (RouterAdminServer#isQuotaUpdated) to check whether the quota was actually updated.

It also fixes toString().
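The isQuotaUpdated idea can be sketched as follows. This is a hedged illustration with simplified stand-in types, not the real RouterAdminServer logic: quota synchronization (where the read-only guard lives) is skipped when the update does not change the quota, so toggling an unrelated flag on a read-only mount point no longer fails.

```java
// Hedged sketch of an isQuotaUpdated-style guard: only a real quota change
// triggers quota synchronization and its read-only check.
public class QuotaUpdateSketch {
    static class Quota {
        final long nsQuota;   // namespace (file count) quota
        final long ssQuota;   // storage space quota
        Quota(long ns, long ss) { nsQuota = ns; ssQuota = ss; }
    }

    // True only when the update actually changes quota values.
    static boolean isQuotaUpdated(Quota oldQ, Quota newQ) {
        return oldQ.nsQuota != newQ.nsQuota || oldQ.ssQuota != newQ.ssQuota;
    }

    static String updateMountEntry(Quota oldQ, Quota newQ, boolean readOnly) {
        if (isQuotaUpdated(oldQ, newQ)) {
            if (readOnly) {
                return "error: read only mount point";  // quota sync rejected
            }
            return "updated with quota sync";
        }
        return "updated without quota sync";  // e.g. a -readonly flag change
    }

    public static void main(String[] args) {
        Quota q = new Quota(100, 1L << 30);
        // Same quota on a read-only entry: must not trip the guard.
        System.out.println(updateMountEntry(q, q, true));
    }
}
```
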

> RBF: Set ReadOnly is failing for mount Table but actually readonly succed to 
> set
> 
>
> Key: HDFS-14777
> URL: https://issues.apache.org/jira/browse/HDFS-14777
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14777.001.patch, HDFS-14777.002.patch, 
> HDFS-14777.003.patch
>
>
> # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> /opt/client # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> update: /test is in a read only mount point
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): /test is in a read only mount point
>  at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:1419)
>  at org.apache.hadoop.hdfs.server.federation.router.Quota.getQuotaRemoteLocations(Quota.java:217)
>  at org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:75)
>  at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:288)






[jira] [Updated] (HDFS-14777) RBF: Set ReadOnly is failing for mount Table but actually readonly succed to set

2019-08-31 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14777:
-
Attachment: HDFS-14777.003.patch

> RBF: Set ReadOnly is failing for mount Table but actually readonly succed to 
> set
> 
>
> Key: HDFS-14777
> URL: https://issues.apache.org/jira/browse/HDFS-14777
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14777.001.patch, HDFS-14777.002.patch, 
> HDFS-14777.003.patch
>
>
> # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> /opt/client # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> update: /test is in a read only mount point
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): /test is in a read only mount point
>  at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:1419)
>  at org.apache.hadoop.hdfs.server.federation.router.Quota.getQuotaRemoteLocations(Quota.java:217)
>  at org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:75)
>  at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:288)






[jira] [Commented] (HDFS-14777) RBF: Set ReadOnly is failing for mount Table but actually readonly succed to set

2019-08-29 Thread Ranith Sardar (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918650#comment-16918650
 ] 

Ranith Sardar commented on HDFS-14777:
--

{{Thanks, [~surendrasingh], for reviewing the patch.}}
{{The current patch handles both comments, as well as the checkstyle and whitespace issues.}}
{quote}
{{3. This is not related to your patch, but try catch should catch expected 
exception not all. If you want to handle it in different patch then also it is 
fine.}}
{quote}
I will create a new Jira for that.

> RBF: Set ReadOnly is failing for the mount table but readonly is actually set
> 
>
> Key: HDFS-14777
> URL: https://issues.apache.org/jira/browse/HDFS-14777
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14777.001.patch, HDFS-14777.002.patch
>
>
> # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> /opt/client # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> update: /test is in a read only mount point
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): /test is in a read only mount point
>  at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:1419)
>  at org.apache.hadoop.hdfs.server.federation.router.Quota.getQuotaRemoteLocations(Quota.java:217)
>  at org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:75)
>  at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:288)
>  at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.updateMountTableEntry(RouterAdminServer.java:267)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14777) RBF: Set ReadOnly is failing for the mount table but readonly is actually set

2019-08-29 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14777:
-
Attachment: HDFS-14777.002.patch

> RBF: Set ReadOnly is failing for the mount table but readonly is actually set
> 
>
> Key: HDFS-14777
> URL: https://issues.apache.org/jira/browse/HDFS-14777
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14777.001.patch, HDFS-14777.002.patch
>
>
> # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> /opt/client # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> update: /test is in a read only mount point
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): /test is in a read only mount point
>  at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:1419)
>  at org.apache.hadoop.hdfs.server.federation.router.Quota.getQuotaRemoteLocations(Quota.java:217)
>  at org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:75)
>  at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:288)
>  at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.updateMountTableEntry(RouterAdminServer.java:267)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14777) RBF: Set ReadOnly is failing for the mount table but readonly is actually set

2019-08-27 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14777:
-
Assignee: Ranith Sardar
  Status: Patch Available  (was: Open)

> RBF: Set ReadOnly is failing for the mount table but readonly is actually set
> 
>
> Key: HDFS-14777
> URL: https://issues.apache.org/jira/browse/HDFS-14777
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14777.001.patch
>
>
> # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> /opt/client # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> update: /test is in a read only mount point
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): /test is in a read only mount point
>  at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:1419)
>  at org.apache.hadoop.hdfs.server.federation.router.Quota.getQuotaRemoteLocations(Quota.java:217)
>  at org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:75)
>  at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:288)
>  at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.updateMountTableEntry(RouterAdminServer.java:267)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14777) RBF: Set ReadOnly is failing for the mount table but readonly is actually set

2019-08-27 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14777:
-
Attachment: HDFS-14777.001.patch

> RBF: Set ReadOnly is failing for the mount table but readonly is actually set
> 
>
> Key: HDFS-14777
> URL: https://issues.apache.org/jira/browse/HDFS-14777
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14777.001.patch
>
>
> # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> /opt/client # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> update: /test is in a read only mount point
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): /test is in a read only mount point
>  at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:1419)
>  at org.apache.hadoop.hdfs.server.federation.router.Quota.getQuotaRemoteLocations(Quota.java:217)
>  at org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:75)
>  at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:288)
>  at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.updateMountTableEntry(RouterAdminServer.java:267)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14777) RBF: Set ReadOnly is failing for the mount table but readonly is actually set

2019-08-26 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14777:
-
Description: 
# hdfs dfsrouteradmin -update /test hacluster /test -readonly
/opt/client # hdfs dfsrouteradmin -update /test hacluster /test -readonly
update: /test is in a read only mount point
org.apache.hadoop.ipc.RemoteException(java.io.IOException): /test is in a read only mount point
 at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:1419)
 at org.apache.hadoop.hdfs.server.federation.router.Quota.getQuotaRemoteLocations(Quota.java:217)
 at org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:75)
 at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:288)
 at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.updateMountTableEntry(RouterAdminServer.java:267)

  (was:
/opt/client # hdfs dfsrouteradmin -update /test hacluster /test -readonly
/opt/client # hdfs dfsrouteradmin -update /test hacluster /test -readonly
update: /test is in a read only mount point
org.apache.hadoop.ipc.RemoteException(java.io.IOException): /test is in a read only mount point
 at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:1419)
 at org.apache.hadoop.hdfs.server.federation.router.Quota.getQuotaRemoteLocations(Quota.java:217)
 at org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:75)
 at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:288)
 at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.updateMountTableEntry(RouterAdminServer.java:267))

> RBF: Set ReadOnly is failing for the mount table but readonly is actually set
> 
>
> Key: HDFS-14777
> URL: https://issues.apache.org/jira/browse/HDFS-14777
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Priority: Major
>
> # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> /opt/client # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> update: /test is in a read only mount point
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): /test is in a read only mount point
>  at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:1419)
>  at org.apache.hadoop.hdfs.server.federation.router.Quota.getQuotaRemoteLocations(Quota.java:217)
>  at org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:75)
>  at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:288)
>  at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.updateMountTableEntry(RouterAdminServer.java:267)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14777) RBF: Set ReadOnly is failing for the mount table but readonly is actually set

2019-08-26 Thread Ranith Sardar (Jira)
Ranith Sardar created HDFS-14777:


 Summary: RBF: Set ReadOnly is failing for the mount table but readonly 
is actually set
 Key: HDFS-14777
 URL: https://issues.apache.org/jira/browse/HDFS-14777
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ranith Sardar


/opt/client # hdfs dfsrouteradmin -update /test hacluster /test -readonly
/opt/client # hdfs dfsrouteradmin -update /test hacluster /test -readonly
update: /test is in a read only mount point
org.apache.hadoop.ipc.RemoteException(java.io.IOException): /test is in a read only mount point
 at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:1419)
 at org.apache.hadoop.hdfs.server.federation.router.Quota.getQuotaRemoteLocations(Quota.java:217)
 at org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:75)
 at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:288)
 at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.updateMountTableEntry(RouterAdminServer.java:267)
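The transcript above suggests an ordering problem: the read-only flag is persisted to the mount table first, and the subsequent quota synchronization re-resolves the path, now sees a read-only mount, and throws, so the CLI reports failure even though the flag was set. A minimal toy model of that behaviour (not the actual RBF code; all class and method names here are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the reported symptom: the entry is updated before the quota
// sync, so the sync's read-only check rejects the very change it follows.
public class ReadOnlyUpdateSketch {
    static class MountEntry { boolean readOnly; }

    final Map<String, MountEntry> mountTable = new HashMap<>();

    // Returns what the CLI would report to the user.
    boolean updateReadOnly(String path) {
        MountEntry e = mountTable.computeIfAbsent(path, p -> new MountEntry());
        e.readOnly = true;              // change is persisted here...
        try {
            synchronizeQuota(path);     // ...but this step still throws
            return true;
        } catch (IllegalStateException ex) {
            return false;               // caller prints "update failed"
        }
    }

    void synchronizeQuota(String path) {
        if (mountTable.get(path).readOnly) {
            throw new IllegalStateException(path + " is in a read only mount point");
        }
    }

    boolean isReadOnly(String path) { return mountTable.get(path).readOnly; }

    public static void main(String[] args) {
        ReadOnlyUpdateSketch s = new ReadOnlyUpdateSketch();
        System.out.println("command reported success: " + s.updateReadOnly("/test"));
        System.out.println("/test readOnly: " + s.isReadOnly("/test"));
    }
}
```

In this sketch the command reports failure while the flag is already set, matching the issue title; a fix would either run the quota sync against the pre-update state or exempt the admin path from the read-only check.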



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14660) [SBN Read] ObserverNameNode should throw StandbyException for requests not from ObserverProxyProvider

2019-07-19 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16888569#comment-16888569
 ] 

Ranith Sardar commented on HDFS-14660:
--

We can also check the stateId using 
{{Server.getCurCall().get().getClientStateId()}}, which retrieves the client's 
stateId directly from the Server.
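A rough sketch of such a guard, assuming a client that never sets a state id leaves it at a default value (the threshold and all class names below are hypothetical stand-ins, not Hadoop's actual API; only the accessor mentioned above is from the source):

```java
// Illustrative guard only: StandbyException here is a local stand-in for
// org.apache.hadoop.ipc.StandbyException.
public class ObserverStateIdCheck {

    static class StandbyException extends RuntimeException {
        StandbyException(String msg) { super(msg); }
    }

    // Assumption: an unset client state id shows up as <= 0.
    static void checkClientStateId(long clientStateId) {
        if (clientStateId <= 0) {
            throw new StandbyException(
                "Request carries no client state id; the client is likely "
                + "not using ObserverReadProxyProvider");
        }
    }

    public static void main(String[] args) {
        try {
            checkClientStateId(0);   // unset id: observer rejects the call
        } catch (StandbyException e) {
            System.out.println("rejected: " + e.getMessage());
        }
        checkClientStateId(42);      // a set id passes the guard
        System.out.println("stateId 42 accepted");
    }
}
```

The real check would run inside the RPC handler on the observer, using the state id pulled from the current call as described in the comment above.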

> [SBN Read] ObserverNameNode should throw StandbyException for requests not 
> from ObserverProxyProvider
> -
>
> Key: HDFS-14660
> URL: https://issues.apache.org/jira/browse/HDFS-14660
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
>
> In an HDFS HA cluster with consistent reads enabled (HDFS-12943), clients 
> could be using either {{ObserverReadProxyProvider}}, 
> {{ConfiguredProxyProvider}}, or something else. Since an observer is just a 
> special type of SBN and we allow transitions between them, a client NOT using 
> {{ObserverReadProxyProvider}} will need to have 
> {{dfs.ha.namenodes.}} include all NameNodes in the cluster, and 
> therefore, it may send requests to an observer node.
> For this case, we should check whether the {{stateId}} in the incoming RPC 
> header is set or not, and throw a {{StandbyException}} when it is not. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14636) SBN: If the default proxy provider is configured, read requests still go only to the Observer namenode

2019-07-08 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar reassigned HDFS-14636:


Assignee: Ranith Sardar

> SBN: If the default proxy provider is configured, read requests still go 
> only to the Observer namenode
> -
>
> Key: HDFS-14636
> URL: https://issues.apache.org/jira/browse/HDFS-14636
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
>
> {noformat}
> In an Observer cluster, when the default proxy provider is configured 
> instead of "org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider", 
> read requests still go only to the Observer namenode.{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14368) RBF: In router UI under Namenode Information, Nameservice and Web address are not displayed properly

2019-06-26 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16873133#comment-16873133
 ] 

Ranith Sardar commented on HDFS-14368:
--

yes [~brahmareddy]

> RBF: In router UI under Namenode Information, Nameservice and Web address 
> are not displayed properly
> -
>
> Key: HDFS-14368
> URL: https://issues.apache.org/jira/browse/HDFS-14368
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14368-HDFS-13891.000.patch
>
>
> In the router UI under Namenode Information, the Nameservice and Web address 
> are not displayed properly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14537) Journaled Edits Cache is not cleared when formatting the JN

2019-06-06 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16857489#comment-16857489
 ] 

Ranith Sardar commented on HDFS-14537:
--

Thank you, [~brahmareddy] and [~xkrogen]. I have attached a new patch with all the changes.

> Journaled Edits Cache is not cleared when formatting the JN
> ---
>
> Key: HDFS-14537
> URL: https://issues.apache.org/jira/browse/HDFS-14537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14537.001.patch, HDFS-14537.002.patch
>
>
> {code:java}
> private final JournaledEditsCache cache;
> {code}
> When formatting the journal node, the cache value is not cleared. 
> {code:java}
> void format(NamespaceInfo nsInfo, boolean force) throws IOException {
> Preconditions.checkState(nsInfo.getNamespaceID() != 0,
> "can't format with uninitialized namespace info: %s",
> nsInfo);
> LOG.info("Formatting journal id : " + journalId + " with namespace info: 
> " +
> nsInfo + " and force: " + force);
> storage.format(nsInfo, force);
> refreshCachedData();
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14537) Journaled Edits Cache is not cleared when formatting the JN

2019-06-06 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14537:
-
Attachment: HDFS-14537.002.patch

> Journaled Edits Cache is not cleared when formatting the JN
> ---
>
> Key: HDFS-14537
> URL: https://issues.apache.org/jira/browse/HDFS-14537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14537.001.patch, HDFS-14537.002.patch
>
>
> {code:java}
> private final JournaledEditsCache cache;
> {code}
> When formatting the journal node, the cache value is not cleared. 
> {code:java}
> void format(NamespaceInfo nsInfo, boolean force) throws IOException {
> Preconditions.checkState(nsInfo.getNamespaceID() != 0,
> "can't format with uninitialized namespace info: %s",
> nsInfo);
> LOG.info("Formatting journal id : " + journalId + " with namespace info: 
> " +
> nsInfo + " and force: " + force);
> storage.format(nsInfo, force);
> refreshCachedData();
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14537) Journaled Edits Cache is not cleared when formatting the JN

2019-06-03 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16854989#comment-16854989
 ] 

Ranith Sardar commented on HDFS-14537:
--

Attached the basic patch.

> Journaled Edits Cache is not cleared when formatting the JN
> ---
>
> Key: HDFS-14537
> URL: https://issues.apache.org/jira/browse/HDFS-14537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14537.001.patch
>
>
> {code:java}
> private final JournaledEditsCache cache;
> {code}
> When formatting the journal node, the cache value is not cleared. 
> {code:java}
> void format(NamespaceInfo nsInfo, boolean force) throws IOException {
> Preconditions.checkState(nsInfo.getNamespaceID() != 0,
> "can't format with uninitialized namespace info: %s",
> nsInfo);
> LOG.info("Formatting journal id : " + journalId + " with namespace info: 
> " +
> nsInfo + " and force: " + force);
> storage.format(nsInfo, force);
> refreshCachedData();
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14537) Journaled Edits Cache is not cleared when formatting the JN

2019-06-03 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14537:
-
Attachment: HDFS-14537.001.patch

> Journaled Edits Cache is not cleared when formatting the JN
> ---
>
> Key: HDFS-14537
> URL: https://issues.apache.org/jira/browse/HDFS-14537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14537.001.patch
>
>
> {code:java}
> private final JournaledEditsCache cache;
> {code}
> When formatting the journal node, the cache value is not cleared. 
> {code:java}
> void format(NamespaceInfo nsInfo, boolean force) throws IOException {
> Preconditions.checkState(nsInfo.getNamespaceID() != 0,
> "can't format with uninitialized namespace info: %s",
> nsInfo);
> LOG.info("Formatting journal id : " + journalId + " with namespace info: 
> " +
> nsInfo + " and force: " + force);
> storage.format(nsInfo, force);
> refreshCachedData();
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14519) NameQuota is not updated after a concat operation, so the namequota is wrong

2019-06-03 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14519:
-
Status: Patch Available  (was: Open)

> NameQuota is not updated after a concat operation, so the namequota is wrong
> -
>
> Key: HDFS-14519
> URL: https://issues.apache.org/jira/browse/HDFS-14519
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14519.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14519) NameQuota is not updated after a concat operation, so the namequota is wrong

2019-06-03 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14519:
-
Attachment: HDFS-14519.001.patch

> NameQuota is not updated after a concat operation, so the namequota is wrong
> -
>
> Key: HDFS-14519
> URL: https://issues.apache.org/jira/browse/HDFS-14519
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14519.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14537) Journaled Edits Cache is not cleared when formatting the JN

2019-06-03 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14537:
-
Attachment: (was: HDFS-14537.001.patch)

> Journaled Edits Cache is not cleared when formatting the JN
> ---
>
> Key: HDFS-14537
> URL: https://issues.apache.org/jira/browse/HDFS-14537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>
> {code:java}
> private final JournaledEditsCache cache;
> {code}
> When formatting the journal node, the cache value is not cleared. 
> {code:java}
> void format(NamespaceInfo nsInfo, boolean force) throws IOException {
> Preconditions.checkState(nsInfo.getNamespaceID() != 0,
> "can't format with uninitialized namespace info: %s",
> nsInfo);
> LOG.info("Formatting journal id : " + journalId + " with namespace info: 
> " +
> nsInfo + " and force: " + force);
> storage.format(nsInfo, force);
> refreshCachedData();
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14537) Journaled Edits Cache is not cleared when formatting the JN

2019-06-03 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14537:
-
Attachment: HDFS-14537.001.patch

> Journaled Edits Cache is not cleared when formatting the JN
> ---
>
> Key: HDFS-14537
> URL: https://issues.apache.org/jira/browse/HDFS-14537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14537.001.patch
>
>
> {code:java}
> private final JournaledEditsCache cache;
> {code}
> When formatting the journal node, the cache value is not cleared. 
> {code:java}
> void format(NamespaceInfo nsInfo, boolean force) throws IOException {
> Preconditions.checkState(nsInfo.getNamespaceID() != 0,
> "can't format with uninitialized namespace info: %s",
> nsInfo);
> LOG.info("Formatting journal id : " + journalId + " with namespace info: 
> " +
> nsInfo + " and force: " + force);
> storage.format(nsInfo, force);
> refreshCachedData();
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14537) Journaled Edits Cache is not cleared when formatting the JN

2019-06-03 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14537:
-
Description: 
{code:java}
private final JournaledEditsCache cache;
{code}
When formatting the journal node, the cache value is not cleared. 
{code:java}
void format(NamespaceInfo nsInfo, boolean force) throws IOException {
  Preconditions.checkState(nsInfo.getNamespaceID() != 0,
      "can't format with uninitialized namespace info: %s", nsInfo);
  LOG.info("Formatting journal id : " + journalId + " with namespace info: " +
      nsInfo + " and force: " + force);
  storage.format(nsInfo, force);
  refreshCachedData();
}
{code}
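The essence of the fix is that formatting must reset the in-memory edits cache along with on-disk storage, otherwise stale cached edits survive the format. A minimal self-contained model of that invariant (this is not the JournalNode code; "EditsCache" and the field names are hypothetical stand-ins for JournaledEditsCache):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal model of the bug: a journal whose format() must also drop the
// in-memory cache, not just reset storage.
public class JournalFormatSketch {
    static class EditsCache {
        final List<byte[]> buffered = new ArrayList<>();
        void add(byte[] edit) { buffered.add(edit); }
        int size() { return buffered.size(); }
    }

    private EditsCache cache = new EditsCache();

    void logEdit(byte[] edit) { cache.add(edit); }

    void format() {
        // storage.format(...) would run here in the real JournalNode.
        cache = new EditsCache();   // the missing step: drop cached edits too
    }

    int cachedEdits() { return cache.size(); }

    public static void main(String[] args) {
        JournalFormatSketch j = new JournalFormatSketch();
        j.logEdit(new byte[]{1, 2, 3});
        System.out.println("cached before format: " + j.cachedEdits());
        j.format();
        System.out.println("cached after format: " + j.cachedEdits());
    }
}
```

Without the reset inside format(), a reader could be served edits from a namespace that no longer exists, which is the hazard the issue describes.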

> Journaled Edits Cache is not cleared when formatting the JN
> ---
>
> Key: HDFS-14537
> URL: https://issues.apache.org/jira/browse/HDFS-14537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>
> {code:java}
> private final JournaledEditsCache cache;
> {code}
> When formatting the journal node, the cache value is not cleared. 
> {code:java}
> void format(NamespaceInfo nsInfo, boolean force) throws IOException {
> Preconditions.checkState(nsInfo.getNamespaceID() != 0,
> "can't format with uninitialized namespace info: %s",
> nsInfo);
> LOG.info("Formatting journal id : " + journalId + " with namespace info: 
> " +
> nsInfo + " and force: " + force);
> storage.format(nsInfo, force);
> refreshCachedData();
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14537) Journaled Edits Cache is not cleared when formatting the JN

2019-06-03 Thread Ranith Sardar (JIRA)
Ranith Sardar created HDFS-14537:


 Summary: Journaled Edits Cache is not cleared when formatting the 
JN
 Key: HDFS-14537
 URL: https://issues.apache.org/jira/browse/HDFS-14537
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ranith Sardar
Assignee: Ranith Sardar






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14519) NameQuota is not updated after a concat operation, so the namequota is wrong

2019-05-29 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar reassigned HDFS-14519:


Assignee: Ranith Sardar

> NameQuota is not updated after a concat operation, so the namequota is wrong
> -
>
> Key: HDFS-14519
> URL: https://issues.apache.org/jira/browse/HDFS-14519
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14519) NameQuota is not updated after a concat operation, so the namequota is wrong

2019-05-29 Thread Ranith Sardar (JIRA)
Ranith Sardar created HDFS-14519:


 Summary: NameQuota is not updated after a concat operation, so the 
namequota is wrong
 Key: HDFS-14519
 URL: https://issues.apache.org/jira/browse/HDFS-14519
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ranith Sardar
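Since concat merges the blocks of the source files into the target and removes the source inodes, the directory's namespace usage should drop by the number of sources. A hypothetical illustration of that expected accounting (all names here are made up; this is not NameNode code):

```java
import java.util.ArrayList;
import java.util.List;

// Toy accounting model: concat(target, srcs...) removes the source files,
// so namespaceUsed must shrink by srcs.length -- the adjustment the bug
// report says is missing.
public class ConcatQuotaSketch {
    int namespaceUsed;
    final List<String> files = new ArrayList<>();

    void create(String name) {
        files.add(name);
        namespaceUsed++;
    }

    void concat(String target, String... srcs) {
        for (String src : srcs) {
            files.remove(src);      // source inodes disappear after concat
        }
        namespaceUsed -= srcs.length;   // keep the quota count consistent
    }
}
```

If the decrement is skipped, the reported name quota usage stays inflated by one entry per concatenated source file.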






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-05-23 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-13787:
-
Attachment: HDFS-13787-HDFS-13891.005.patch

> RBF: Add Snapshot related ClientProtocol APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13787-HDFS-13891.003.patch, 
> HDFS-13787-HDFS-13891.004.patch, HDFS-13787-HDFS-13891.005.patch, 
> HDFS-13787.001.patch, HDFS-13787.002.patch
>
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot, SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-05-23 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846896#comment-16846896
 ] 

Ranith Sardar commented on HDFS-13787:
--

[~elgoiri], I have added 3 more APIs. Please take a look; then you can update. 
Thank you.

> RBF: Add Snapshot related ClientProtocol APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13787-HDFS-13891.003.patch, 
> HDFS-13787-HDFS-13891.004.patch, HDFS-13787.001.patch, HDFS-13787.002.patch
>
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot, SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-05-23 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-13787:
-
Attachment: HDFS-13787-HDFS-13891.004.patch

> RBF: Add Snapshot related ClientProtocol APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13787-HDFS-13891.003.patch, 
> HDFS-13787-HDFS-13891.004.patch, HDFS-13787.001.patch, HDFS-13787.002.patch
>
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot, SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.






[jira] [Commented] (HDFS-14487) Missing Space in Client Error Message

2019-05-17 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841935#comment-16841935
 ] 

Ranith Sardar commented on HDFS-14487:
--

[~belugabehr], are you planning to provide a patch?

> Missing Space in Client Error Message
> -
>
> Key: HDFS-14487
> URL: https://issues.apache.org/jira/browse/HDFS-14487
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Priority: Minor
>  Labels: newbie, noob
>
> {code:java}
>   if (retries == 0) {
> throw new IOException("Unable to close file because the last 
> block"
> + last + " does not have enough number of replicas.");
>   }
> {code}
> Note the missing space after "last block".
> https://github.com/apache/hadoop/blob/f940ab242da80a22bae95509d5c282d7e2f7ecdb/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java#L968-L969
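The fix is a one-character change: add a space after "block" so the block name does not run into the preceding word. A standalone sketch (the variable name `last` follows the snippet above; the class name and sample block id are made up for illustration):

```java
// Minimal demo of the corrected message. "last" stands in for the
// ExtendedBlock that DFSOutputStream appends; its value here is illustrative.
public class MissingSpaceDemo {
  static String message(String last) {
    return "Unable to close file because the last block "  // note trailing space
        + last + " does not have enough number of replicas.";
  }

  public static void main(String[] args) {
    System.out.println(message("blk_1073741825_1001"));
  }
}
```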






[jira] [Comment Edited] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-04-24 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825377#comment-16825377
 ] 

Ranith Sardar edited comment on HDFS-13787 at 4/24/19 5:39 PM:
---

Thanks [~elgoiri] for the quick review. Updated the patch for the HDFS-13891 
branch; the rest will be handled in the next patch.


was (Author: ranith):
updated the patch for HDFS-13891, rest will handle in the next patch.

> RBF: Add Snapshot related ClientProtocol APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13787-HDFS-13891.003.patch, HDFS-13787.001.patch, 
> HDFS-13787.002.patch
>
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot, SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.






[jira] [Commented] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-04-24 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825377#comment-16825377
 ] 

Ranith Sardar commented on HDFS-13787:
--

Updated the patch for HDFS-13891; the rest will be handled in the next patch.

> RBF: Add Snapshot related ClientProtocol APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13787-HDFS-13891.003.patch, HDFS-13787.001.patch, 
> HDFS-13787.002.patch
>
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot, SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.






[jira] [Updated] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-04-24 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-13787:
-
Attachment: HDFS-13787-HDFS-13891.003.patch

> RBF: Add Snapshot related ClientProtocol APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13787-HDFS-13891.003.patch, HDFS-13787.001.patch, 
> HDFS-13787.002.patch
>
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot, SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.






[jira] [Commented] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-04-24 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825197#comment-16825197
 ] 

Ranith Sardar commented on HDFS-13787:
--

[~brahmareddy], sorry for the delay. Updated a new patch.

> RBF: Add Snapshot related ClientProtocol APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13787.001.patch, HDFS-13787.002.patch
>
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot, SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.






[jira] [Updated] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-04-24 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-13787:
-
Attachment: HDFS-13787.002.patch

> RBF: Add Snapshot related ClientProtocol APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13787.001.patch, HDFS-13787.002.patch
>
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot, SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.






[jira] [Comment Edited] (HDFS-14443) Throwing RemoteException in the time of Read Operation

2019-04-19 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821905#comment-16821905
 ] 

Ranith Sardar edited comment on HDFS-14443 at 4/19/19 1:05 PM:
---

When we perform a read operation, the observer node returns an "Operation 
category WRITE is not supported in state observer" error, even though we have 
not performed any write operation.

Every operation throws the same error.


was (Author: ranith):
When we are performing some read operation, it is giving "Operation category 
WRITE is not supported in state observer" error for observer node. Although, we 
have not performed any such write operation.

> Throwing RemoteException in the time of Read Operation
> --
>
> Key: HDFS-14443
> URL: https://issues.apache.org/jira/browse/HDFS-14443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ranith Sardar
>Priority: Major
>
> 2019-04-19 20:54:59,178 DEBUG 
> org.apache.hadoop.io.retry.RetryInvocationHandler: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category WRITE is not supported in state observer. Visit 
> [https://s.apache.org/sbnn-error]
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:98)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1990)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1443)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.msync(NameNodeRpcServer.java:1372)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.msync(ClientNamenodeProtocolServerSideTranslatorPB.java:1929)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:531)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:927)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:862)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2791)
>  , while invoking $Proxy5.getFileInfo over 
> [host-*-*-*-*/*.*.*.*:6*5,host-*-*-*-*/*.*.*.*:**,host-*-*-*-*/*.*.*.*:6**5]. 
> Trying to failover immediately.
>  
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category WRITE is not supported in state observer. Visit 
> [https://s.apache.org/sbnn-error]
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:98)






[jira] [Commented] (HDFS-14443) Throwing RemoteException in the time of Read Operation

2019-04-19 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821905#comment-16821905
 ] 

Ranith Sardar commented on HDFS-14443:
--

When we perform a read operation, the observer node returns an "Operation 
category WRITE is not supported in state observer" error, even though we have 
not performed any write operation.

> Throwing RemoteException in the time of Read Operation
> --
>
> Key: HDFS-14443
> URL: https://issues.apache.org/jira/browse/HDFS-14443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ranith Sardar
>Priority: Major
>
> 2019-04-19 20:54:59,178 DEBUG 
> org.apache.hadoop.io.retry.RetryInvocationHandler: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category WRITE is not supported in state observer. Visit 
> [https://s.apache.org/sbnn-error]
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:98)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1990)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1443)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.msync(NameNodeRpcServer.java:1372)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.msync(ClientNamenodeProtocolServerSideTranslatorPB.java:1929)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:531)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:927)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:862)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2791)
>  , while invoking $Proxy5.getFileInfo over 
> [host-*-*-*-*/*.*.*.*:6*5,host-*-*-*-*/*.*.*.*:**,host-*-*-*-*/*.*.*.*:6**5]. 
> Trying to failover immediately.
>  
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category WRITE is not supported in state observer. Visit 
> [https://s.apache.org/sbnn-error]
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:98)






[jira] [Updated] (HDFS-14443) Throwing RemoteException in the time of Read Operation

2019-04-19 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14443:
-
Description: 
2019-04-19 20:54:59,178 DEBUG 
org.apache.hadoop.io.retry.RetryInvocationHandler: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): 
Operation category WRITE is not supported in state observer. Visit 
[https://s.apache.org/sbnn-error]
 at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:98)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1990)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1443)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.msync(NameNodeRpcServer.java:1372)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.msync(ClientNamenodeProtocolServerSideTranslatorPB.java:1929)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:531)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:927)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:862)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2791)
 , while invoking $Proxy5.getFileInfo over 
[host-*-*-*-*/*.*.*.*:6*5,host-*-*-*-*/*.*.*.*:**,host-*-*-*-*/*.*.*.*:6**5]. 
Trying to failover immediately.
 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): 
Operation category WRITE is not supported in state observer. Visit 
[https://s.apache.org/sbnn-error]
 at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:98)

  was:
2019-04-19 20:54:59,178 DEBUG 
org.apache.hadoop.io.retry.RetryInvocationHandler: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): 
Operation category WRITE is not supported in state observer. Visit 
[https://s.apache.org/sbnn-error]
 at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:98)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1990)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1443)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.msync(NameNodeRpcServer.java:1372)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.msync(ClientNamenodeProtocolServerSideTranslatorPB.java:1929)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:531)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:927)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:862)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2791)
 , while invoking $Proxy5.getFileInfo over 
[host-*-*-*-*/*.*.*.*:6*5,host-*-*-*-*/*.*.*.*:**,host-*-*-*-*/*.*.*.*:6**5]. 
Trying to failover immediately.
 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): 
Operation category WRITE is not supported in state observer. Visit 
[https://s.apache.org/sbnn-error]
 at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:98)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1990)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1443)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.msync(NameNodeRpcServer.java:1372)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.msync(ClientNamenodeProtocolServerSideTranslatorPB.java:1929)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:531)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:927)
 at 

[jira] [Updated] (HDFS-14443) Throwing RemoteException in the time of Read Operation

2019-04-19 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14443:
-
Description: 
2019-04-19 20:54:59,178 DEBUG 
org.apache.hadoop.io.retry.RetryInvocationHandler: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): 
Operation category WRITE is not supported in state observer. Visit 
[https://s.apache.org/sbnn-error]
 at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:98)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1990)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1443)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.msync(NameNodeRpcServer.java:1372)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.msync(ClientNamenodeProtocolServerSideTranslatorPB.java:1929)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:531)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:927)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:862)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2791)
 , while invoking $Proxy5.getFileInfo over 
[host-*-*-*-*/*.*.*.*:6*5,host-*-*-*-*/*.*.*.*:**,host-*-*-*-*/*.*.*.*:6**5]. 
Trying to failover immediately.
 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): 
Operation category WRITE is not supported in state observer. Visit 
[https://s.apache.org/sbnn-error]
 at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:98)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1990)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1443)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.msync(NameNodeRpcServer.java:1372)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.msync(ClientNamenodeProtocolServerSideTranslatorPB.java:1929)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:531)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:927)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:862)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2791)

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
 at org.apache.hadoop.ipc.Client.call(Client.java:1498)
 at org.apache.hadoop.ipc.Client.call(Client.java:1397)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:234)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
 at com.sun.proxy.$Proxy16.msync(Unknown Source)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.msync(ClientNamenodeProtocolTranslatorPB.java:2000)
 at 
org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.initializeMsync(ObserverReadProxyProvider.java:283)
 at 
org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.access$500(ObserverReadProxyProvider.java:68)
 at 
org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider$ObserverReadInvocationHandler.invoke(ObserverReadProxyProvider.java:339)

  was:
2019-04-19 20:54:59,178 DEBUG 
org.apache.hadoop.io.retry.RetryInvocationHandler: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): 
Operation category WRITE is not supported in state observer. Visit 
https://s.apache.org/sbnn-error
 at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:98)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1990)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1443)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.msync(NameNodeRpcServer.java:1372)
 at 

[jira] [Created] (HDFS-14443) Throwing RemoteException in the time of Read Operation

2019-04-19 Thread Ranith Sardar (JIRA)
Ranith Sardar created HDFS-14443:


 Summary: Throwing RemoteException in the time of Read Operation
 Key: HDFS-14443
 URL: https://issues.apache.org/jira/browse/HDFS-14443
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ranith Sardar


2019-04-19 20:54:59,178 DEBUG 
org.apache.hadoop.io.retry.RetryInvocationHandler: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): 
Operation category WRITE is not supported in state observer. Visit 
https://s.apache.org/sbnn-error
 at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:98)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1990)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1443)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.msync(NameNodeRpcServer.java:1372)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.msync(ClientNamenodeProtocolServerSideTranslatorPB.java:1929)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:531)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:927)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:862)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2791)
, while invoking $Proxy5.getFileInfo over 
[host-*-*-*-*/*.*.*.*:6*5,host-*-*-*-*/*.*.*.*:**,host-*-*-*-*/*.*.*.*:6**5]. 
Trying to failover immediately.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): 
Operation category WRITE is not supported in state observer. Visit 
https://s.apache.org/sbnn-error
 at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:98)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1990)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1443)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.msync(NameNodeRpcServer.java:1372)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.msync(ClientNamenodeProtocolServerSideTranslatorPB.java:1929)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:531)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:927)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:862)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2791)

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
 at org.apache.hadoop.ipc.Client.call(Client.java:1498)
 at org.apache.hadoop.ipc.Client.call(Client.java:1397)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:234)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
 at com.sun.proxy.$Proxy16.msync(Unknown Source)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.msync(ClientNamenodeProtocolTranslatorPB.java:2000)
 at 
org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.initializeMsync(ObserverReadProxyProvider.java:283)
 at 
org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.access$500(ObserverReadProxyProvider.java:68)
 at 
org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider$ObserverReadInvocationHandler.invoke(ObserverReadProxyProvider.java:339)
 at com.sun.proxy.$Proxy5.getFileInfo(Unknown Source)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
 at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
 at 

[jira] [Updated] (HDFS-14368) RBF: In router UI under Namenode Information, Nameservice and Web address is not coming properly.

2019-03-13 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14368:
-
Status: Patch Available  (was: Open)

> RBF: In router UI under Namenode Information, Nameservice and Web address is 
> not coming properly.
> -
>
> Key: HDFS-14368
> URL: https://issues.apache.org/jira/browse/HDFS-14368
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14368-HDFS-13891.000.patch
>
>







[jira] [Updated] (HDFS-14368) RBF: In router UI under Namenode Information, Nameservice and Web address is not coming properly.

2019-03-13 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14368:
-
Attachment: HDFS-14368-HDFS-13891.000.patch

> RBF: In router UI under Namenode Information, Nameservice and Web address is 
> not coming properly.
> -
>
> Key: HDFS-14368
> URL: https://issues.apache.org/jira/browse/HDFS-14368
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14368-HDFS-13891.000.patch
>
>







[jira] [Created] (HDFS-14368) RBF: In router UI under Namenode Information, Nameservice and Web address is not coming properly.

2019-03-13 Thread Ranith Sardar (JIRA)
Ranith Sardar created HDFS-14368:


 Summary: RBF: In router UI under Namenode Information, Nameservice 
and Web address is not coming properly.
 Key: HDFS-14368
 URL: https://issues.apache.org/jira/browse/HDFS-14368
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ranith Sardar
Assignee: Ranith Sardar









[jira] [Updated] (HDFS-14254) RBF: Getfacl gives a wrong acl entries when the order of the mount table set to HASH_ALL or RANDOM

2019-03-04 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14254:
-
Attachment: HDFS-14254-HDFS-13891.003.patch

> RBF: Getfacl gives a wrong acl entries when the order of the mount table set 
> to HASH_ALL or RANDOM
> --
>
> Key: HDFS-14254
> URL: https://issues.apache.org/jira/browse/HDFS-14254
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shubham Dewan
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14254-HDFS-13891.000.patch, 
> HDFS-14254-HDFS-13891.001.patch, HDFS-14254-HDFS-13891.002.patch, 
> HDFS-14254-HDFS-13891.003.patch
>
>
> ACL entries are missing when Order is set to HASH_ALL or RANDOM






[jira] [Commented] (HDFS-14259) RBF: Fix safemode message for Router

2019-03-02 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782621#comment-16782621
 ] 

Ranith Sardar commented on HDFS-14259:
--

Thank you [~elgoiri] for committing the patch :)

> RBF: Fix safemode message for Router
> 
>
> Key: HDFS-14259
> URL: https://issues.apache.org/jira/browse/HDFS-14259
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14259-HDFS-13891.000.patch, 
> HDFS-14259-HDFS-13891.001.patch, HDFS-14259-HDFS-13891.002.patch
>
>
> Currently, the {{getSafemode()}} bean checks the state of the Router but 
> returns the error if the status is different than SAFEMODE:
> {code}
>   public String getSafemode() {
>     try {
>       if (!getRouter().isRouterState(RouterServiceState.SAFEMODE)) {
>         return "Safe mode is ON. " + this.getSafeModeTip();
>       }
>     } catch (IOException e) {
>       return "Failed to get safemode status. Please check router"
>           + "log for more detail.";
>     }
>     return "";
>   }
> {code}
> The condition should be reversed.
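For illustration, a minimal self-contained sketch of the reversed check. The Router wiring is stubbed out here: `SafemodeMessageSketch`, its nested enum, and the tip text are hypothetical stand-ins, not the actual HDFS-14259 patch.

```java
import java.io.IOException;

public class SafemodeMessageSketch {
    enum RouterServiceState { RUNNING, SAFEMODE }

    private final RouterServiceState state;

    SafemodeMessageSketch(RouterServiceState state) {
        this.state = state;
    }

    // Stub for Router#isRouterState; declared to throw IOException so the
    // catch block below mirrors the quoted code.
    boolean isRouterState(RouterServiceState s) throws IOException {
        return state == s;
    }

    String getSafeModeTip() {
        return "It was turned on manually.";
    }

    // Reversed condition: "Safe mode is ON" is reported only when the
    // state really IS SAFEMODE (the quoted snippet negated the check).
    public String getSafemode() {
        try {
            if (isRouterState(RouterServiceState.SAFEMODE)) {
                return "Safe mode is ON. " + getSafeModeTip();
            }
        } catch (IOException e) {
            return "Failed to get safemode status. Please check router"
                + " log for more detail.";
        }
        return "";
    }

    public static void main(String[] args) {
        // Prints the safe mode message, then an empty string.
        System.out.println(new SafemodeMessageSketch(RouterServiceState.SAFEMODE).getSafemode());
        System.out.println(new SafemodeMessageSketch(RouterServiceState.RUNNING).getSafemode());
    }
}
```

With the negation dropped, a router in RUNNING state returns the empty string, which is what the UI expects for "not in safe mode".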






[jira] [Updated] (HDFS-14259) RBF: Fix safemode message for Router

2019-03-01 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14259:
-
Attachment: HDFS-14259-HDFS-13891.002.patch

> RBF: Fix safemode message for Router
> 
>
> Key: HDFS-14259
> URL: https://issues.apache.org/jira/browse/HDFS-14259
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14259-HDFS-13891.000.patch, 
> HDFS-14259-HDFS-13891.001.patch, HDFS-14259-HDFS-13891.002.patch
>
>
> Currently, the {{getSafemode()}} bean checks the state of the Router but 
> returns the error if the status is different than SAFEMODE:
> {code}
>   public String getSafemode() {
>     try {
>       if (!getRouter().isRouterState(RouterServiceState.SAFEMODE)) {
>         return "Safe mode is ON. " + this.getSafeModeTip();
>       }
>     } catch (IOException e) {
>       return "Failed to get safemode status. Please check router"
>           + "log for more detail.";
>     }
>     return "";
>   }
> {code}
> The condition should be reversed.






[jira] [Commented] (HDFS-14254) RBF: Getfacl gives a wrong acl entries when the order of the mount table set to HASH_ALL or RANDOM

2019-02-20 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772901#comment-16772901
 ] 

Ranith Sardar commented on HDFS-14254:
--

Added a new patch for the UT failure.

> RBF: Getfacl gives a wrong acl entries when the order of the mount table set 
> to HASH_ALL or RANDOM
> --
>
> Key: HDFS-14254
> URL: https://issues.apache.org/jira/browse/HDFS-14254
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shubham Dewan
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14254-HDFS-13891.000.patch, 
> HDFS-14254-HDFS-13891.001.patch, HDFS-14254-HDFS-13891.002.patch
>
>
> ACL entries are missing when Order is set to HASH_ALL or RANDOM






[jira] [Updated] (HDFS-14254) RBF: Getfacl gives a wrong acl entries when the order of the mount table set to HASH_ALL or RANDOM

2019-02-20 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14254:
-
Attachment: HDFS-14254-HDFS-13891.002.patch

> RBF: Getfacl gives a wrong acl entries when the order of the mount table set 
> to HASH_ALL or RANDOM
> --
>
> Key: HDFS-14254
> URL: https://issues.apache.org/jira/browse/HDFS-14254
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shubham Dewan
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14254-HDFS-13891.000.patch, 
> HDFS-14254-HDFS-13891.001.patch, HDFS-14254-HDFS-13891.002.patch
>
>
> ACL entries are missing when Order is set to HASH_ALL or RANDOM






[jira] [Commented] (HDFS-14259) RBF: Fix safemode message for Router

2019-02-20 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772868#comment-16772868
 ] 

Ranith Sardar commented on HDFS-14259:
--

[~elgoiri], Attached the patch. Please review once.

> RBF: Fix safemode message for Router
> 
>
> Key: HDFS-14259
> URL: https://issues.apache.org/jira/browse/HDFS-14259
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14259-HDFS-13891.000.patch, 
> HDFS-14259-HDFS-13891.001.patch
>
>
> Currently, the {{getSafemode()}} bean checks the state of the Router but 
> returns the error if the status is different than SAFEMODE:
> {code}
>   public String getSafemode() {
>     try {
>       if (!getRouter().isRouterState(RouterServiceState.SAFEMODE)) {
>         return "Safe mode is ON. " + this.getSafeModeTip();
>       }
>     } catch (IOException e) {
>       return "Failed to get safemode status. Please check router"
>           + "log for more detail.";
>     }
>     return "";
>   }
> {code}
> The condition should be reversed.






[jira] [Updated] (HDFS-14259) RBF: Fix safemode message for Router

2019-02-20 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14259:
-
Attachment: HDFS-14259-HDFS-13891.001.patch

> RBF: Fix safemode message for Router
> 
>
> Key: HDFS-14259
> URL: https://issues.apache.org/jira/browse/HDFS-14259
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14259-HDFS-13891.000.patch, 
> HDFS-14259-HDFS-13891.001.patch
>
>
> Currently, the {{getSafemode()}} bean checks the state of the Router but 
> returns the error if the status is different than SAFEMODE:
> {code}
>   public String getSafemode() {
>     try {
>       if (!getRouter().isRouterState(RouterServiceState.SAFEMODE)) {
>         return "Safe mode is ON. " + this.getSafeModeTip();
>       }
>     } catch (IOException e) {
>       return "Failed to get safemode status. Please check router"
>           + "log for more detail.";
>     }
>     return "";
>   }
> {code}
> The condition should be reversed.






[jira] [Commented] (HDFS-14235) Handle ArrayIndexOutOfBoundsException in DataNodeDiskMetrics#slowDiskDetectionDaemon

2019-02-20 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772817#comment-16772817
 ] 

Ranith Sardar commented on HDFS-14235:
--

[~surendrasingh], Fixed the checkstyle. Please review it once.

> Handle ArrayIndexOutOfBoundsException in 
> DataNodeDiskMetrics#slowDiskDetectionDaemon 
> -
>
> Key: HDFS-14235
> URL: https://issues.apache.org/jira/browse/HDFS-14235
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14235.000.patch, HDFS-14235.001.patch, 
> HDFS-14235.002.patch, HDFS-14235.003.patch, NPE.png, exception.png
>
>
> The code below throws an exception because {{volumeIterator.next()}} is 
> called twice without checking hasNext().
> {code:java}
> while (volumeIterator.hasNext()) {
>   FsVolumeSpi volume = volumeIterator.next();
>   DataNodeVolumeMetrics metrics = volumeIterator.next().getMetrics();
>   String volumeName = volume.getBaseURI().getPath();
>   metadataOpStats.put(volumeName,
>   metrics.getMetadataOperationMean());
>   readIoStats.put(volumeName, metrics.getReadIoMean());
>   writeIoStats.put(volumeName, metrics.getWriteIoMean());
> }{code}
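A minimal sketch of the corrected loop, with the HDFS volume and metrics types replaced by stand-ins (`SlowDiskLoopSketch` and `Volume` are hypothetical names): `next()` is called exactly once per iteration and the result reused, so the iterator can never be advanced past its end.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

public class SlowDiskLoopSketch {
    // Stand-in for FsVolumeSpi plus its DataNodeVolumeMetrics.
    static final class Volume {
        final String path;
        final double metadataOpMean;
        Volume(String path, double metadataOpMean) {
            this.path = path;
            this.metadataOpMean = metadataOpMean;
        }
    }

    public static Map<String, Double> collect(List<Volume> volumes) {
        Map<String, Double> metadataOpStats = new HashMap<>();
        Iterator<Volume> it = volumes.iterator();
        while (it.hasNext()) {
            Volume volume = it.next();  // the ONLY next() call per pass
            metadataOpStats.put(volume.path, volume.metadataOpMean);
        }
        return metadataOpStats;
    }

    public static void main(String[] args) {
        List<Volume> volumes = Arrays.asList(
            new Volume("/data1", 1.5), new Volume("/data2", 2.5));
        // Every volume is visited once; the buggy double-next() version
        // would skip /data2 here and throw on an odd-sized volume list.
        System.out.println(collect(volumes).size());  // 2
    }
}
```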






[jira] [Updated] (HDFS-14235) Handle ArrayIndexOutOfBoundsException in DataNodeDiskMetrics#slowDiskDetectionDaemon

2019-02-19 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14235:
-
Attachment: HDFS-14235.003.patch

> Handle ArrayIndexOutOfBoundsException in 
> DataNodeDiskMetrics#slowDiskDetectionDaemon 
> -
>
> Key: HDFS-14235
> URL: https://issues.apache.org/jira/browse/HDFS-14235
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14235.000.patch, HDFS-14235.001.patch, 
> HDFS-14235.002.patch, HDFS-14235.003.patch, NPE.png, exception.png
>
>
> The code below throws an exception because {{volumeIterator.next()}} is 
> called twice without checking hasNext().
> {code:java}
> while (volumeIterator.hasNext()) {
>   FsVolumeSpi volume = volumeIterator.next();
>   DataNodeVolumeMetrics metrics = volumeIterator.next().getMetrics();
>   String volumeName = volume.getBaseURI().getPath();
>   metadataOpStats.put(volumeName,
>   metrics.getMetadataOperationMean());
>   readIoStats.put(volumeName, metrics.getReadIoMean());
>   writeIoStats.put(volumeName, metrics.getWriteIoMean());
> }{code}






[jira] [Updated] (HDFS-14254) RBF: Getfacl gives a wrong acl entries when the order of the mount table set to HASH_ALL or RANDOM

2019-02-19 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14254:
-
Attachment: HDFS-14254-HDFS-13891.001.patch

> RBF: Getfacl gives a wrong acl entries when the order of the mount table set 
> to HASH_ALL or RANDOM
> --
>
> Key: HDFS-14254
> URL: https://issues.apache.org/jira/browse/HDFS-14254
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shubham Dewan
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14254-HDFS-13891.000.patch, 
> HDFS-14254-HDFS-13891.001.patch
>
>
> ACL entries are missing when Order is set to HASH_ALL or RANDOM






[jira] [Updated] (HDFS-14235) Handle ArrayIndexOutOfBoundsException in DataNodeDiskMetrics#slowDiskDetectionDaemon

2019-02-19 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14235:
-
Attachment: HDFS-14235.002.patch

> Handle ArrayIndexOutOfBoundsException in 
> DataNodeDiskMetrics#slowDiskDetectionDaemon 
> -
>
> Key: HDFS-14235
> URL: https://issues.apache.org/jira/browse/HDFS-14235
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14235.000.patch, HDFS-14235.001.patch, 
> HDFS-14235.002.patch, NPE.png, exception.png
>
>
> The code below throws an exception because {{volumeIterator.next()}} is 
> called twice without checking hasNext().
> {code:java}
> while (volumeIterator.hasNext()) {
>   FsVolumeSpi volume = volumeIterator.next();
>   DataNodeVolumeMetrics metrics = volumeIterator.next().getMetrics();
>   String volumeName = volume.getBaseURI().getPath();
>   metadataOpStats.put(volumeName,
>   metrics.getMetadataOperationMean());
>   readIoStats.put(volumeName, metrics.getReadIoMean());
>   writeIoStats.put(volumeName, metrics.getWriteIoMean());
> }{code}






[jira] [Commented] (HDFS-14235) Handle ArrayIndexOutOfBoundsException in DataNodeDiskMetrics#slowDiskDetectionDaemon

2019-02-18 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771638#comment-16771638
 ] 

Ranith Sardar commented on HDFS-14235:
--

Thanks [~surendrasingh]. I will update very soon.

> Handle ArrayIndexOutOfBoundsException in 
> DataNodeDiskMetrics#slowDiskDetectionDaemon 
> -
>
> Key: HDFS-14235
> URL: https://issues.apache.org/jira/browse/HDFS-14235
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14235.000.patch, HDFS-14235.001.patch, NPE.png, 
> exception.png
>
>
> The code below throws an exception because {{volumeIterator.next()}} is 
> called twice without checking hasNext().
> {code:java}
> while (volumeIterator.hasNext()) {
>   FsVolumeSpi volume = volumeIterator.next();
>   DataNodeVolumeMetrics metrics = volumeIterator.next().getMetrics();
>   String volumeName = volume.getBaseURI().getPath();
>   metadataOpStats.put(volumeName,
>   metrics.getMetadataOperationMean());
>   readIoStats.put(volumeName, metrics.getReadIoMean());
>   writeIoStats.put(volumeName, metrics.getWriteIoMean());
> }{code}






[jira] [Commented] (HDFS-14235) Handle ArrayIndexOutOfBoundsException in DataNodeDiskMetrics#slowDiskDetectionDaemon

2019-02-18 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771409#comment-16771409
 ] 

Ranith Sardar commented on HDFS-14235:
--

Thanks [~surendrasingh] for reviewing the patch. I have updated it accordingly 
and added a new patch.

> Handle ArrayIndexOutOfBoundsException in 
> DataNodeDiskMetrics#slowDiskDetectionDaemon 
> -
>
> Key: HDFS-14235
> URL: https://issues.apache.org/jira/browse/HDFS-14235
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14235.000.patch, HDFS-14235.001.patch, NPE.png, 
> exception.png
>
>
> The code below throws an exception because {{volumeIterator.next()}} is 
> called twice without checking hasNext().
> {code:java}
> while (volumeIterator.hasNext()) {
>   FsVolumeSpi volume = volumeIterator.next();
>   DataNodeVolumeMetrics metrics = volumeIterator.next().getMetrics();
>   String volumeName = volume.getBaseURI().getPath();
>   metadataOpStats.put(volumeName,
>   metrics.getMetadataOperationMean());
>   readIoStats.put(volumeName, metrics.getReadIoMean());
>   writeIoStats.put(volumeName, metrics.getWriteIoMean());
> }{code}






[jira] [Updated] (HDFS-14235) Handle ArrayIndexOutOfBoundsException in DataNodeDiskMetrics#slowDiskDetectionDaemon

2019-02-18 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14235:
-
Attachment: HDFS-14235.001.patch

> Handle ArrayIndexOutOfBoundsException in 
> DataNodeDiskMetrics#slowDiskDetectionDaemon 
> -
>
> Key: HDFS-14235
> URL: https://issues.apache.org/jira/browse/HDFS-14235
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14235.000.patch, HDFS-14235.001.patch, NPE.png, 
> exception.png
>
>
> The code below throws an exception because {{volumeIterator.next()}} is 
> called twice without checking hasNext().
> {code:java}
> while (volumeIterator.hasNext()) {
>   FsVolumeSpi volume = volumeIterator.next();
>   DataNodeVolumeMetrics metrics = volumeIterator.next().getMetrics();
>   String volumeName = volume.getBaseURI().getPath();
>   metadataOpStats.put(volumeName,
>   metrics.getMetadataOperationMean());
>   readIoStats.put(volumeName, metrics.getReadIoMean());
>   writeIoStats.put(volumeName, metrics.getWriteIoMean());
> }{code}






[jira] [Commented] (HDFS-14259) RBF: Fix safemode message for Router

2019-02-15 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769577#comment-16769577
 ] 

Ranith Sardar commented on HDFS-14259:
--

[~elgoiri], I will update the patch very soon.

> RBF: Fix safemode message for Router
> 
>
> Key: HDFS-14259
> URL: https://issues.apache.org/jira/browse/HDFS-14259
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14259-HDFS-13891.000.patch
>
>
> Currently, the {{getSafemode()}} bean checks the state of the Router but 
> returns the error if the status is different than SAFEMODE:
> {code}
>   public String getSafemode() {
>     try {
>       if (!getRouter().isRouterState(RouterServiceState.SAFEMODE)) {
>         return "Safe mode is ON. " + this.getSafeModeTip();
>       }
>     } catch (IOException e) {
>       return "Failed to get safemode status. Please check router"
>           + "log for more detail.";
>     }
>     return "";
>   }
> {code}
> The condition should be reversed.






[jira] [Assigned] (HDFS-14265) In WEBHDFS Output Extra TATs are printing

2019-02-10 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar reassigned HDFS-14265:


Assignee: Ranith Sardar

> In WEBHDFS Output Extra TATs are printing
> -
>
> Key: HDFS-14265
> URL: https://issues.apache.org/jira/browse/HDFS-14265
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
>
> {noformat}
> bin> curl -i -k -X PUT --negotiate -u: 
> "http://NNIP:9864/webhdfs/v1/file1?op=CREATE=hacluster1==true=false;
> HTTP/1.1 100 Continue
> HTTP/1.1 403 Forbidden
> Content-Type: application/json; charset=utf-8
> Content-Length: 2110
> Connection: close
> {"RemoteException":{"exception":"AccessControlException","javaClassName":"org.apache.hadoop.security.AccessControlException","message":"Permission
>  denied: user=dr.who, access=WRITE, 
> inode=\"/\":securedn:supergroup:drwxr-xr-x\n\tat 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:255)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1904)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1888)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1847)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.resolvePathForStartFile(FSDirWriteFileOp.java:376)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2418)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2362)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:775)\n\tat
>  
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:490)\n\tat
>  
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat
>  
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)\n\tat
>  org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)\n\tat 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)\n\tat 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)\n\tat 
> java.security.AccessController.doPrivileged(Native Method)\n\tat 
> javax.security.auth.Subject.doAs(Subject.java:422)\n\tat 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)\n\tat
>  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)\n"}}
> /bin>
> {noformat}






[jira] [Comment Edited] (HDFS-14259) RBF: Fix safemode message for Router

2019-02-07 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16762563#comment-16762563
 ] 

Ranith Sardar edited comment on HDFS-14259 at 2/7/19 11:01 AM:
---

Added the patch with UT. Please review it once.


was (Author: ranith):
Added the patch with UT.

> RBF: Fix safemode message for Router
> 
>
> Key: HDFS-14259
> URL: https://issues.apache.org/jira/browse/HDFS-14259
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14259-HDFS-13891.000.patch
>
>
> Currently, the {{getSafemode()}} bean checks the state of the Router but 
> returns the error if the status is different than SAFEMODE:
> {code}
>   public String getSafemode() {
>     try {
>       if (!getRouter().isRouterState(RouterServiceState.SAFEMODE)) {
>         return "Safe mode is ON. " + this.getSafeModeTip();
>       }
>     } catch (IOException e) {
>       return "Failed to get safemode status. Please check router"
>           + "log for more detail.";
>     }
>     return "";
>   }
> {code}
> The condition should be reversed.






[jira] [Commented] (HDFS-14259) RBF: Fix safemode message for Router

2019-02-07 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16762563#comment-16762563
 ] 

Ranith Sardar commented on HDFS-14259:
--

Added the patch with UT.

> RBF: Fix safemode message for Router
> 
>
> Key: HDFS-14259
> URL: https://issues.apache.org/jira/browse/HDFS-14259
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14259-HDFS-13891.000.patch
>
>
> Currently, the {{getSafemode()}} bean checks the state of the Router but 
> returns the error if the status is different than SAFEMODE:
> {code}
>   public String getSafemode() {
>     try {
>       if (!getRouter().isRouterState(RouterServiceState.SAFEMODE)) {
>         return "Safe mode is ON. " + this.getSafeModeTip();
>       }
>     } catch (IOException e) {
>       return "Failed to get safemode status. Please check router"
>           + "log for more detail.";
>     }
>     return "";
>   }
> {code}
> The condition should be reversed.






[jira] [Updated] (HDFS-14259) RBF: Fix safemode message for Router

2019-02-07 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14259:
-
Assignee: Ranith Sardar
  Status: Patch Available  (was: Open)

> RBF: Fix safemode message for Router
> 
>
> Key: HDFS-14259
> URL: https://issues.apache.org/jira/browse/HDFS-14259
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14259-HDFS-13891.000.patch
>
>
> Currently, the {{getSafemode()}} bean checks the state of the Router but 
> returns the error if the status is different than SAFEMODE:
> {code}
>   public String getSafemode() {
>     try {
>       if (!getRouter().isRouterState(RouterServiceState.SAFEMODE)) {
>         return "Safe mode is ON. " + this.getSafeModeTip();
>       }
>     } catch (IOException e) {
>       return "Failed to get safemode status. Please check router"
>           + "log for more detail.";
>     }
>     return "";
>   }
> {code}
> The condition should be reversed.






[jira] [Updated] (HDFS-14259) RBF: Fix safemode message for Router

2019-02-07 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14259:
-
Attachment: HDFS-14259-HDFS-13891.000.patch

> RBF: Fix safemode message for Router
> 
>
> Key: HDFS-14259
> URL: https://issues.apache.org/jira/browse/HDFS-14259
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14259-HDFS-13891.000.patch
>
>
> Currently, the {{getSafemode()}} bean checks the state of the Router but 
> returns the error if the status is different than SAFEMODE:
> {code}
>   public String getSafemode() {
>     try {
>       if (!getRouter().isRouterState(RouterServiceState.SAFEMODE)) {
>         return "Safe mode is ON. " + this.getSafeModeTip();
>       }
>     } catch (IOException e) {
>       return "Failed to get safemode status. Please check router"
>           + "log for more detail.";
>     }
>     return "";
>   }
> {code}
> The condition should be reversed.






[jira] [Commented] (HDFS-14191) RBF: Remove hard coded router status from FederationMetrics.

2019-02-06 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16762379#comment-16762379
 ] 

Ranith Sardar commented on HDFS-14191:
--

Yes, the '!' should not be there in the if condition.

> RBF: Remove hard coded router status from FederationMetrics.
> 
>
> Key: HDFS-14191
> URL: https://issues.apache.org/jira/browse/HDFS-14191
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.1.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14191-HDFS-13891.002.patch, 
> HDFS-14191-HDFS-13891.003.patch, HDFS-14191.001.patch, 
> IMG_20190109_023713.jpg, image-2019-01-08-16-05-34-736.png, 
> image-2019-01-08-16-09-46-648.png
>
>
> Status value in "Router Information" and in Overview tab, is not matching for 
> "SAFEMODE" condition.






[jira] [Commented] (HDFS-14259) RBF: Fix safemode message for Router

2019-02-06 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16762369#comment-16762369
 ] 

Ranith Sardar commented on HDFS-14259:
--

Sure, I will provide the patch.

> RBF: Fix safemode message for Router
> 
>
> Key: HDFS-14259
> URL: https://issues.apache.org/jira/browse/HDFS-14259
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Major
>
> Currently, the {{getSafemode()}} bean checks the state of the Router but 
> returns the error if the status is different than SAFEMODE:
> {code}
>   public String getSafemode() {
>     try {
>       if (!getRouter().isRouterState(RouterServiceState.SAFEMODE)) {
>         return "Safe mode is ON. " + this.getSafeModeTip();
>       }
>     } catch (IOException e) {
>       return "Failed to get safemode status. Please check router"
>           + "log for more detail.";
>     }
>     return "";
>   }
> {code}
> The condition should be reversed.






[jira] [Commented] (HDFS-14240) blockReport test in NNThroughputBenchmark throws ArrayIndexOutOfBoundsException

2019-02-06 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16761669#comment-16761669
 ] 

Ranith Sardar commented on HDFS-14240:
--

[~shenyinjie], which command did you use?

> blockReport test in NNThroughputBenchmark throws 
> ArrayIndexOutOfBoundsException
> ---
>
> Key: HDFS-14240
> URL: https://issues.apache.org/jira/browse/HDFS-14240
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shen Yinjie
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: screenshot-1.png
>
>
> When I run a blockReport test with NNThroughputBenchmark, 
> BlockReportStats.addBlocks() throws ArrayIndexOutOfBoundsException.
> Digging into the code:
> {code:java}
> for (DatanodeInfo dnInfo : loc.getLocations()) {
>   int dnIdx = dnInfo.getXferPort() - 1;
>   datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock());
> }
> {code}
> The problem: the datanodes array's length is determined by the "-datanodes" 
> or "-threads" argument, but dnIdx is derived from dnInfo.getXferPort(), which is a random port.
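One hedged way to keep that lookup in-bounds (a sketch only, not the benchmark's actual fix; the class and method names below are illustrative) is to assign each distinct xfer port a dense index instead of assuming ports run 1..N:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch for the HDFS-14240 symptom: map arbitrary xfer ports to dense
// array indices, so the index can never exceed the number of datanodes
// actually registered. Names here are illustrative, not Hadoop's.
public class DatanodeIndexSketch {
    private final Map<Integer, Integer> portToIndex = new HashMap<>();
    private final List<String> datanodes = new ArrayList<>();

    // Returns a stable, dense index for the given xfer port,
    // registering the port on first sight.
    public int indexFor(int xferPort) {
        return portToIndex.computeIfAbsent(xferPort, p -> {
            datanodes.add("dn-" + p);
            return datanodes.size() - 1;
        });
    }
}
```

The same port always maps to the same slot, and the index stays within the bounds of the backing list regardless of which ports the OS hands out.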






[jira] [Updated] (HDFS-14254) RBF: Getfacl gives a wrong acl entries when the order of the mount table set to HASH_ALL or RANDOM

2019-02-05 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14254:
-
Status: Patch Available  (was: Open)

> RBF: Getfacl gives a wrong acl entries when the order of the mount table set 
> to HASH_ALL or RANDOM
> --
>
> Key: HDFS-14254
> URL: https://issues.apache.org/jira/browse/HDFS-14254
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shubham Dewan
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14254-HDFS-13891.000.patch
>
>
> ACL entries are missing when Order is set to HASH_ALL or RANDOM






[jira] [Commented] (HDFS-14254) RBF: Getfacl gives a wrong acl entries when the order of the mount table set to HASH_ALL or RANDOM

2019-02-05 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16761531#comment-16761531
 ] 

Ranith Sardar commented on HDFS-14254:
--

Added the initial patch.

> RBF: Getfacl gives a wrong acl entries when the order of the mount table set 
> to HASH_ALL or RANDOM
> --
>
> Key: HDFS-14254
> URL: https://issues.apache.org/jira/browse/HDFS-14254
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shubham Dewan
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14254-HDFS-13891.000.patch
>
>
> ACL entries are missing when Order is set to HASH_ALL or RANDOM






[jira] [Updated] (HDFS-14254) RBF: Getfacl gives a wrong acl entries when the order of the mount table set to HASH_ALL or RANDOM

2019-02-05 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14254:
-
Attachment: HDFS-14254-HDFS-13891.000.patch

> RBF: Getfacl gives a wrong acl entries when the order of the mount table set 
> to HASH_ALL or RANDOM
> --
>
> Key: HDFS-14254
> URL: https://issues.apache.org/jira/browse/HDFS-14254
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shubham Dewan
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14254-HDFS-13891.000.patch
>
>
> ACL entries are missing when Order is set to HASH_ALL or RANDOM






[jira] [Updated] (HDFS-14240) blockReport test in NNThroughputBenchmark throws ArrayIndexOutOfBoundsException

2019-02-05 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14240:
-
Description: 
When I run a blockReport test with NNThroughputBenchmark, 
BlockReportStats.addBlocks() throws ArrayIndexOutOfBoundsException.

Digging into the code:
{code:java}
for (DatanodeInfo dnInfo : loc.getLocations()) {
  int dnIdx = dnInfo.getXferPort() - 1;
  datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock());
}
{code}

The problem: the datanodes array's length is determined by the "-datanodes" 
or "-threads" argument, but dnIdx is derived from dnInfo.getXferPort(), which is a random port.

  was:
When I run a blockReport test with NNThroughputBenchmark, 
BlockReportStats.addBlocks() throws ArrayIndexOutOfBoundsException.

Digging into the code:
{code:java}
for (DatanodeInfo dnInfo : loc.getLocations()) {
  int dnIdx = dnInfo.getXferPort() - 1;
  datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock());
}
{code}

The problem: the datanodes array's length is determined by the "-datanodes" 
or "-threads" argument, but dnIdx is derived from dnInfo.getXferPort(), which is a random port.


> blockReport test in NNThroughputBenchmark throws 
> ArrayIndexOutOfBoundsException
> ---
>
> Key: HDFS-14240
> URL: https://issues.apache.org/jira/browse/HDFS-14240
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shen Yinjie
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: screenshot-1.png
>
>
> When I run a blockReport test with NNThroughputBenchmark, 
> BlockReportStats.addBlocks() throws ArrayIndexOutOfBoundsException.
> Digging into the code:
> {code:java}
> for (DatanodeInfo dnInfo : loc.getLocations()) {
>   int dnIdx = dnInfo.getXferPort() - 1;
>   datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock());
> }
> {code}
> The problem: the datanodes array's length is determined by the "-datanodes" 
> or "-threads" argument, but dnIdx is derived from dnInfo.getXferPort(), which is a random port.






[jira] [Commented] (HDFS-14225) RBF : MiniRouterDFSCluster should configure the failover proxy provider for namespace

2019-02-04 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760470#comment-16760470
 ] 

Ranith Sardar commented on HDFS-14225:
--

Thanks [~surendrasingh] and [~elgoiri]. :)

> RBF : MiniRouterDFSCluster should configure the failover proxy provider for 
> namespace
> -
>
> Key: HDFS-14225
> URL: https://issues.apache.org/jira/browse/HDFS-14225
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Minor
> Fix For: HDFS-13891
>
> Attachments: HDFS-14225-HDFS-13891.000.patch
>
>
> Getting UnknownHostException in UT.
> {noformat}
> org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): 
> java.net.UnknownHostException: ns0
> {noformat}






[jira] [Commented] (HDFS-14202) "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as per set value.

2019-02-04 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760166#comment-16760166
 ] 

Ranith Sardar commented on HDFS-14202:
--

[~elgoiri], please check the latest patch. I have updated it according to your 
comments.

> "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as 
> per set value.
> ---
>
> Key: HDFS-14202
> URL: https://issues.apache.org/jira/browse/HDFS-14202
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Affects Versions: 3.0.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14202.001.patch, HDFS-14202.002.patch, 
> HDFS-14202.003.patch, HDFS-14202.004.patch, HDFS-14202.005.patch
>
>







[jira] [Commented] (HDFS-14202) "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as per set value.

2019-02-04 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760170#comment-16760170
 ] 

Ranith Sardar commented on HDFS-14202:
--

Thank you [~elgoiri] :)

> "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as 
> per set value.
> ---
>
> Key: HDFS-14202
> URL: https://issues.apache.org/jira/browse/HDFS-14202
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Affects Versions: 3.0.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14202.001.patch, HDFS-14202.002.patch, 
> HDFS-14202.003.patch, HDFS-14202.004.patch, HDFS-14202.005.patch
>
>







[jira] [Commented] (HDFS-14254) Getfacl gives a wrong acl entries when the order of the mount table set to HASH_ALL or RANDOM

2019-02-04 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759696#comment-16759696
 ] 

Ranith Sardar commented on HDFS-14254:
--

would like to work on it.

> Getfacl gives a wrong acl entries when the order of the mount table set to 
> HASH_ALL or RANDOM
> -
>
> Key: HDFS-14254
> URL: https://issues.apache.org/jira/browse/HDFS-14254
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shubham Dewan
>Priority: Major
>
> ACL entries are missing when Order is set to HASH_ALL or RANDOM






[jira] [Updated] (HDFS-14202) "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as per set value.

2019-02-03 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14202:
-
Attachment: HDFS-14202.005.patch

> "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as 
> per set value.
> ---
>
> Key: HDFS-14202
> URL: https://issues.apache.org/jira/browse/HDFS-14202
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Affects Versions: 3.0.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14202.001.patch, HDFS-14202.002.patch, 
> HDFS-14202.003.patch, HDFS-14202.004.patch, HDFS-14202.005.patch
>
>







[jira] [Commented] (HDFS-14202) "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as per set value.

2019-02-03 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759534#comment-16759534
 ] 

Ranith Sardar commented on HDFS-14202:
--

[~elgoiri], thanks for reviewing the patch.

I have updated the patch with the changes. As the throughput is 10 MB/sec and 
we are trying to move 20 MB, it should take 2 sec (2000 ms) in total. 
I have made the corresponding changes in the UT.

> "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as 
> per set value.
> ---
>
> Key: HDFS-14202
> URL: https://issues.apache.org/jira/browse/HDFS-14202
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Affects Versions: 3.0.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14202.001.patch, HDFS-14202.002.patch, 
> HDFS-14202.003.patch, HDFS-14202.004.patch, HDFS-14202.005.patch
>
>







[jira] [Commented] (HDFS-14202) "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as per set value.

2019-01-31 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757989#comment-16757989
 ] 

Ranith Sardar commented on HDFS-14202:
--

{quote}For the assert, can also check for a particular number instead of <= 8000
{quote}
 As we are mocking computeDelay and move a fixed amount of data with the 
default bandwidth of 10 MB/s, it returns a fixed time by default; here, it 
is 8000 ms. 
{quote}Can we also clarify 21936966
{quote}
In the UT, I have already mentioned that we are trying to move 20 MB (20*1024*1024 bytes); 
21936966 bytes is the approximate figure that results.
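The arithmetic behind those numbers can be sketched as follows (an illustrative helper only, not DiskBalancer's actual code; the class and method names are hypothetical):

```java
// Illustrative helper for the disk-balancer throughput arithmetic above:
// expected copy time in ms = bytes / (throughput in MB/s), scaled to ms.
public class ThroughputDelaySketch {
    public static long expectedCopyMillis(long bytes, long throughputMBperSec) {
        long bytesPerSec = throughputMBperSec * 1024L * 1024L;
        return (bytes * 1000L) / bytesPerSec; // integer floor; fine for a sketch
    }
}
```

For example, 20 MB at the default 10 MB/s bandwidth works out to 2000 ms, which is the figure discussed in the review comments.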

> "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as 
> per set value.
> ---
>
> Key: HDFS-14202
> URL: https://issues.apache.org/jira/browse/HDFS-14202
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Affects Versions: 3.0.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14202.001.patch, HDFS-14202.002.patch, 
> HDFS-14202.003.patch, HDFS-14202.004.patch
>
>







[jira] [Commented] (HDFS-14202) "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as per set value.

2019-01-30 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756360#comment-16756360
 ] 

Ranith Sardar commented on HDFS-14202:
--

[~elgoiri], I have updated the patch. please check once.

> "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as 
> per set value.
> ---
>
> Key: HDFS-14202
> URL: https://issues.apache.org/jira/browse/HDFS-14202
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Affects Versions: 3.0.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14202.001.patch, HDFS-14202.002.patch, 
> HDFS-14202.003.patch, HDFS-14202.004.patch
>
>







[jira] [Updated] (HDFS-14202) "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as per set value.

2019-01-30 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14202:
-
Attachment: HDFS-14202.004.patch

> "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as 
> per set value.
> ---
>
> Key: HDFS-14202
> URL: https://issues.apache.org/jira/browse/HDFS-14202
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Affects Versions: 3.0.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14202.001.patch, HDFS-14202.002.patch, 
> HDFS-14202.003.patch, HDFS-14202.004.patch
>
>







[jira] [Commented] (HDFS-14240) blockReport test in NNThroughputBenchmark throws ArrayIndexOutOfBoundsException

2019-01-28 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754671#comment-16754671
 ] 

Ranith Sardar commented on HDFS-14240:
--

would like to work on this issue.

> blockReport test in NNThroughputBenchmark throws 
> ArrayIndexOutOfBoundsException
> ---
>
> Key: HDFS-14240
> URL: https://issues.apache.org/jira/browse/HDFS-14240
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shen Yinjie
>Priority: Major
> Attachments: screenshot-1.png
>
>
> When I run a blockReport test with NNThroughputBenchmark, 
> BlockReportStats.addBlocks() throws ArrayIndexOutOfBoundsException.
> Digging into the code:
> {code:java}
> for (DatanodeInfo dnInfo : loc.getLocations()) {
>   int dnIdx = dnInfo.getXferPort() - 1;
>   datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock());
> }
> {code}
> The problem: the datanodes array's length is determined by the "-datanodes" 
> or "-threads" argument, but dnIdx is derived from dnInfo.getXferPort(), which is a random port.






[jira] [Commented] (HDFS-14196) ArrayIndexOutOfBoundsException in JN metrics makes JN out of sync

2019-01-28 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754641#comment-16754641
 ] 

Ranith Sardar commented on HDFS-14196:
--

[~ste...@apache.org], any suggestions on how we can handle this 
ArrayIndexOutOfBoundsException? Thank you.

> ArrayIndexOutOfBoundsException in JN metrics makes JN out of sync
> -
>
> Key: HDFS-14196
> URL: https://issues.apache.org/jira/browse/HDFS-14196
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14196.001.patch
>
>
> {{2018-11-26 21:55:39,100 | WARN | IPC Server handler 4 on 25012 | IPC Server 
> handler 4 on 25012, call 
> org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol.journal from 
> 192.100.2.4:41622 Call#785140293 Retry#0 | Server.java:2334 
> java.lang.ArrayIndexOutOfBoundsException: 500 at 
> org.apache.hadoop.metrics2.util.SampleQuantiles.insert(SampleQuantiles.java:114)
>  at 
> org.apache.hadoop.metrics2.lib.MutableQuantiles.add(MutableQuantiles.java:130)
>  at 
> org.apache.hadoop.hdfs.qjournal.server.JournalMetrics.addSync(JournalMetrics.java:120)
>  at org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:400) 
> at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:153)
>  at 
> org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:158)
>  at 
> org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:2542}}






[jira] [Updated] (HDFS-14235) Handle ArrayIndexOutOfBoundsException in DataNodeDiskMetrics#slowDiskDetectionDaemon

2019-01-28 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14235:
-
Status: Patch Available  (was: Open)

> Handle ArrayIndexOutOfBoundsException in 
> DataNodeDiskMetrics#slowDiskDetectionDaemon 
> -
>
> Key: HDFS-14235
> URL: https://issues.apache.org/jira/browse/HDFS-14235
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14235.000.patch, NPE.png, exception.png
>
>
> The code below throws an exception because {{volumeIterator.next()}} is called 
> twice per iteration without checking hasNext().
> {code:java}
> while (volumeIterator.hasNext()) {
>   FsVolumeSpi volume = volumeIterator.next();
>   DataNodeVolumeMetrics metrics = volumeIterator.next().getMetrics();
>   String volumeName = volume.getBaseURI().getPath();
>   metadataOpStats.put(volumeName,
>   metrics.getMetadataOperationMean());
>   readIoStats.put(volumeName, metrics.getReadIoMean());
>   writeIoStats.put(volumeName, metrics.getWriteIoMean());
> }{code}
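The corrected loop shape calls next() exactly once per hasNext() check. A sketch with a plain String iterator standing in for the FsVolumeSpi iterator (the class name, visit method, and map contents are illustrative only):

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the fixed iteration pattern for HDFS-14235: one next() per
// hasNext(), so no element is skipped and the iterator is never advanced
// past its end. A String iterator stands in for the volume iterator.
public class VolumeIterationSketch {
    public static Map<String, Integer> visit(Iterator<String> volumes) {
        Map<String, Integer> seen = new LinkedHashMap<>();
        while (volumes.hasNext()) {
            String volume = volumes.next();    // single next() per pass
            seen.put(volume, volume.length()); // stand-in for metrics reads
        }
        return seen;
    }
}
```

In the real daemon the single next() result would supply both the volume name and its getMetrics() readings, instead of advancing the iterator a second time.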






[jira] [Commented] (HDFS-14235) Handle ArrayIndexOutOfBoundsException in DataNodeDiskMetrics#slowDiskDetectionDaemon

2019-01-28 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16753934#comment-16753934
 ] 

Ranith Sardar commented on HDFS-14235:
--

Attached the initial patch.

> Handle ArrayIndexOutOfBoundsException in 
> DataNodeDiskMetrics#slowDiskDetectionDaemon 
> -
>
> Key: HDFS-14235
> URL: https://issues.apache.org/jira/browse/HDFS-14235
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14235.000.patch, NPE.png, exception.png
>
>
> The code below throws an exception because {{volumeIterator.next()}} is called 
> twice per iteration without checking hasNext().
> {code:java}
> while (volumeIterator.hasNext()) {
>   FsVolumeSpi volume = volumeIterator.next();
>   DataNodeVolumeMetrics metrics = volumeIterator.next().getMetrics();
>   String volumeName = volume.getBaseURI().getPath();
>   metadataOpStats.put(volumeName,
>   metrics.getMetadataOperationMean());
>   readIoStats.put(volumeName, metrics.getReadIoMean());
>   writeIoStats.put(volumeName, metrics.getWriteIoMean());
> }{code}





