[jira] [Commented] (HADOOP-17257) pid file delete when service stop (secure datanode ) show cat no directory

2020-09-10 Thread Andras Bokor (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193661#comment-17193661
 ] 

Andras Bokor commented on HADOOP-17257:
---

Is it the same as the HADOOP-13238?

> pid file delete when service stop (secure datanode ) show cat no directory
> --
>
> Key: HADOOP-17257
> URL: https://issues.apache.org/jira/browse/HADOOP-17257
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts, security
>Affects Versions: 3.4.0
>Reporter: zhuqi
>Priority: Major
> Attachments: HADOOP-17257-0.0.1.patch
>
>
> When stopping a running secure datanode, the stop command prints a "cat: No such
> file or directory" error for the pid file.
>  
> When stopping a secure datanode that is not running, it also complains that the
> pid directory does not exist.
>  
> Both behaviors are unreasonable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-08-11 Thread Andras Bokor (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175493#comment-17175493
 ] 

Andras Bokor commented on HADOOP-17145:
---

With patch 007 everything went well. The patch changes both the error message 
and the error code.
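
As an illustration only (this is not the HttpServer2 code or the patch, and the 
isUserAdmin() helper below is hypothetical), the distinction the ticket asks for 
looks roughly like this: an unauthenticated request gets an authentication-oriented 
response, while an authenticated non-admin user gets an authorization-oriented 
message; the exact status code for the latter case is what the ticket discusses.

{code:java}
// Minimal illustrative sketch only -- not the HttpServer2 implementation.
// isUserAdmin() and ADMIN_ACCESS_DENIED_STATUS are assumptions for this example.
import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public final class AdminAccessCheck {

  // Which status code fits the authorization failure is the ticket's open question.
  private static final int ADMIN_ACCESS_DENIED_STATUS = HttpServletResponse.SC_FORBIDDEN;

  public static boolean hasAdministratorAccess(HttpServletRequest request,
      HttpServletResponse response) throws IOException {
    String remoteUser = request.getRemoteUser();
    if (remoteUser == null) {
      // Not authenticated at all: an authentication problem.
      response.sendError(HttpServletResponse.SC_UNAUTHORIZED,
          "Authentication required to access this page.");
      return false;
    }
    if (!isUserAdmin(remoteUser)) {
      // Authenticated but not an admin: an authorization problem, so the
      // message should not talk about "unauthenticated users".
      response.sendError(ADMIN_ACCESS_DENIED_STATUS,
          "User " + remoteUser + " is unauthorized to access this page.");
      return false;
    }
    return true;
  }

  private static boolean isUserAdmin(String user) {
    // Placeholder for an admin ACL lookup.
    return "admin".equals(user);
  }
}
{code}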

> Unauthenticated users are not authorized to access this page message is 
> misleading in HttpServer2.java
> --
>
> Key: HADOOP-17145
> URL: https://issues.apache.org/jira/browse/HADOOP-17145
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-17145.001.patch, HADOOP-17145.002.patch, 
> HADOOP-17145.003.patch, HADOOP-17145.004.patch, HADOOP-17145.005.patch, 
> HADOOP-17145.006.patch, HADOOP-17145.007.patch
>
>
> Recently one of the users was misled by the message "Unauthenticated users 
> are not authorized to access this page" when the user was not an admin user.
> At that point the user is authenticated but has no admin access, so it is 
> actually not an authentication issue but an authorization issue.
> Also, 401 as the error code would be better.
> Something like "User is unauthorized to access the page" would help users 
> find out what the problem is when accessing an HTTP endpoint.






[jira] [Updated] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-08-07 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-17145:
--
Attachment: HADOOP-17145.007.patch

> Unauthenticated users are not authorized to access this page message is 
> misleading in HttpServer2.java
> --
>
> Key: HADOOP-17145
> URL: https://issues.apache.org/jira/browse/HADOOP-17145
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-17145.001.patch, HADOOP-17145.002.patch, 
> HADOOP-17145.003.patch, HADOOP-17145.004.patch, HADOOP-17145.005.patch, 
> HADOOP-17145.006.patch, HADOOP-17145.007.patch
>
>
> Recently one of the users was misled by the message "Unauthenticated users 
> are not authorized to access this page" when the user was not an admin user.
> At that point the user is authenticated but has no admin access, so it is 
> actually not an authentication issue but an authorization issue.
> Also, 401 as the error code would be better.
> Something like "User is unauthorized to access the page" would help users 
> find out what the problem is when accessing an HTTP endpoint.






[jira] [Updated] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-08-06 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-17145:
--
Attachment: HADOOP-17145.006.patch

> Unauthenticated users are not authorized to access this page message is 
> misleading in HttpServer2.java
> --
>
> Key: HADOOP-17145
> URL: https://issues.apache.org/jira/browse/HADOOP-17145
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-17145.001.patch, HADOOP-17145.002.patch, 
> HADOOP-17145.003.patch, HADOOP-17145.004.patch, HADOOP-17145.005.patch, 
> HADOOP-17145.006.patch
>
>
> Recently one of the users was misled by the message "Unauthenticated users 
> are not authorized to access this page" when the user was not an admin user.
> At that point the user is authenticated but has no admin access, so it is 
> actually not an authentication issue but an authorization issue.
> Also, 401 as the error code would be better.
> Something like "User is unauthorized to access the page" would help users 
> find out what the problem is when accessing an HTTP endpoint.






[jira] [Updated] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-08-06 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-17145:
--
Attachment: HADOOP-17145.005.patch

> Unauthenticated users are not authorized to access this page message is 
> misleading in HttpServer2.java
> --
>
> Key: HADOOP-17145
> URL: https://issues.apache.org/jira/browse/HADOOP-17145
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-17145.001.patch, HADOOP-17145.002.patch, 
> HADOOP-17145.003.patch, HADOOP-17145.004.patch, HADOOP-17145.005.patch
>
>
> Recently one of the users was misled by the message "Unauthenticated users 
> are not authorized to access this page" when the user was not an admin user.
> At that point the user is authenticated but has no admin access, so it is 
> actually not an authentication issue but an authorization issue.
> Also, 401 as the error code would be better.
> Something like "User is unauthorized to access the page" would help users 
> find out what the problem is when accessing an HTTP endpoint.






[jira] [Updated] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-08-05 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-17145:
--
Attachment: HADOOP-17145.004.patch

> Unauthenticated users are not authorized to access this page message is 
> misleading in HttpServer2.java
> --
>
> Key: HADOOP-17145
> URL: https://issues.apache.org/jira/browse/HADOOP-17145
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-17145.001.patch, HADOOP-17145.002.patch, 
> HADOOP-17145.003.patch, HADOOP-17145.004.patch
>
>
> Recently one of the users was misled by the message "Unauthenticated users 
> are not authorized to access this page" when the user was not an admin user.
> At that point the user is authenticated but has no admin access, so it is 
> actually not an authentication issue but an authorization issue.
> Also, 401 as the error code would be better.
> Something like "User is unauthorized to access the page" would help users 
> find out what the problem is when accessing an HTTP endpoint.






[jira] [Updated] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-08-05 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-17145:
--
Attachment: HADOOP-17145.003.patch

> Unauthenticated users are not authorized to access this page message is 
> misleading in HttpServer2.java
> --
>
> Key: HADOOP-17145
> URL: https://issues.apache.org/jira/browse/HADOOP-17145
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-17145.001.patch, HADOOP-17145.002.patch, 
> HADOOP-17145.003.patch
>
>
> Recently one of the users was misled by the message "Unauthenticated users 
> are not authorized to access this page" when the user was not an admin user.
> At that point the user is authenticated but has no admin access, so it is 
> actually not an authentication issue but an authorization issue.
> Also, 401 as the error code would be better.
> Something like "User is unauthorized to access the page" would help users 
> find out what the problem is when accessing an HTTP endpoint.






[jira] [Updated] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-07-24 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-17145:
--
Attachment: HADOOP-17145.002.patch

> Unauthenticated users are not authorized to access this page message is 
> misleading in HttpServer2.java
> --
>
> Key: HADOOP-17145
> URL: https://issues.apache.org/jira/browse/HADOOP-17145
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-17145.001.patch, HADOOP-17145.002.patch
>
>
> Recently one of the users was misled by the message "Unauthenticated users 
> are not authorized to access this page" when the user was not an admin user.
> At that point the user is authenticated but has no admin access, so it is 
> actually not an authentication issue but an authorization issue.
> Also, 401 as the error code would be better.
> Something like "User is unauthorized to access the page" would help users 
> find out what the problem is when accessing an HTTP endpoint.






[jira] [Updated] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-07-24 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-17145:
--
Attachment: HADOOP-17145.001.patch

> Unauthenticated users are not authorized to access this page message is 
> misleading in HttpServer2.java
> --
>
> Key: HADOOP-17145
> URL: https://issues.apache.org/jira/browse/HADOOP-17145
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-17145.001.patch
>
>
> Recently one of the users was misled by the message "Unauthenticated users 
> are not authorized to access this page" when the user was not an admin user.
> At that point the user is authenticated but has no admin access, so it is 
> actually not an authentication issue but an authorization issue.
> Also, 401 as the error code would be better.
> Something like "User is unauthorized to access the page" would help users 
> find out what the problem is when accessing an HTTP endpoint.






[jira] [Updated] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-07-24 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-17145:
--
Status: Patch Available  (was: Open)

> Unauthenticated users are not authorized to access this page message is 
> misleading in HttpServer2.java
> --
>
> Key: HADOOP-17145
> URL: https://issues.apache.org/jira/browse/HADOOP-17145
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-17145.001.patch
>
>
> Recently one of the users was misled by the message "Unauthenticated users 
> are not authorized to access this page" when the user was not an admin user.
> At that point the user is authenticated but has no admin access, so it is 
> actually not an authentication issue but an authorization issue.
> Also, 401 as the error code would be better.
> Something like "User is unauthorized to access the page" would help users 
> find out what the problem is when accessing an HTTP endpoint.






[jira] [Created] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-07-21 Thread Andras Bokor (Jira)
Andras Bokor created HADOOP-17145:
-

 Summary: Unauthenticated users are not authorized to access this 
page message is misleading in HttpServer2.java
 Key: HADOOP-17145
 URL: https://issues.apache.org/jira/browse/HADOOP-17145
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Andras Bokor
Assignee: Andras Bokor


Recently one of the users was misled by the message "Unauthenticated users are 
not authorized to access this page" when the user was not an admin user.
At that point the user is authenticated but has no admin access, so it is 
actually not an authentication issue but an authorization issue.
Also, 401 as the error code would be better.
Something like "User is unauthorized to access the page" would help users 
find out what the problem is when accessing an HTTP endpoint.






[jira] [Commented] (HADOOP-17044) Revert "HADOOP-8143. Change distcp to have -pb on by default"

2020-07-09 Thread Andras Bokor (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17154694#comment-17154694
 ] 

Andras Bokor commented on HADOOP-17044:
---

This ticket reverts HADOOP-14557 as well.

> Revert "HADOOP-8143. Change distcp to have -pb on by default"
> -
>
> Key: HADOOP-17044
> URL: https://issues.apache.org/jira/browse/HADOOP-17044
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.0.3, 3.3.0, 3.2.1, 3.1.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.0.4, 3.2.2, 3.3.1, 3.1.5
>
>
> Revert the HADOOP-8143 "distcp -pb as default" feature, as it was
> * breaking s3a uploads
> * breaking incremental uploads to any object store






[jira] [Commented] (HADOOP-9851) dfs -chown does not like "+" plus sign in user name

2020-06-16 Thread Andras Bokor (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-9851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136517#comment-17136517
 ] 

Andras Bokor commented on HADOOP-9851:
--

[~ayushtkn],
The checkstyle warning is not caused by my patch; the indentation was wrong even 
before my patch.
I did not fix it because I did not want a bigger patch than needed, and 
indentation fixes decrease readability in diff tools. But I am not sure 
what the best practice is here.

> dfs -chown does not like "+" plus sign in user name
> ---
>
> Key: HADOOP-9851
> URL: https://issues.apache.org/jira/browse/HADOOP-9851
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.5-alpha
>Reporter: Marc Villacorta
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HADOOP-9851.01.patch, HADOOP-9851.02.patch
>
>
> I intend to set user and group:
> *User:* _MYCOMPANY+marc.villacorta_
> *Group:* hadoop
> where _'+'_ is what we use as a winbind separator.
> And this is what I get:
> {code:none}
> sudo -u hdfs hadoop fs -touchz /tmp/test.txt
> sudo -u hdfs hadoop fs -chown MYCOMPANY+marc.villacorta:hadoop /tmp/test.txt
> -chown: 'MYCOMPANY+marc.villacorta:hadoop' does not match expected pattern 
> for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> {code}
> I am using version: 2.0.0-cdh4.3.0
> Quote 
> [source|http://h30097.www3.hp.com/docs/iass/OSIS_62/MAN/MAN8/0044.HTM]:
> {quote}
> winbind separator
>The winbind separator option allows you to specify how NT domain names
>and user names are combined into unix user names when presented to
>users. By default, winbindd will use the traditional '\' separator so
>that the unix user names look like DOMAIN\username. In some cases this
>separator character may cause problems as the '\' character has
>special meaning in unix shells. In that case you can use the winbind
>separator option to specify an alternative separator character. Good
>alternatives may be '/' (although that conflicts with the unix
>directory separator) or a '+' character. The '+' character appears to
>be the best choice for 100% compatibility with existing unix
>utilities, but may be an aesthetically bad choice depending on your
>taste.
>Default: winbind separator = \
>Example: winbind separator = +
> {quote}






[jira] [Commented] (HADOOP-9851) dfs -chown does not like "+" plus sign in user name

2020-06-15 Thread Andras Bokor (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-9851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17135783#comment-17135783
 ] 

Andras Bokor commented on HADOOP-9851:
--

[~ayushtkn],
Windows remains unchanged; only Linux will allow the '+' sign.
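
For illustration, a minimal sketch of the kind of whitelist validation involved; 
the character class below is an assumption, not the exact Hadoop regex. Adding 
'+' to the allowed set on the non-Windows side is what lets winbind-style names 
such as MYCOMPANY+marc.villacorta pass an [owner][:group] check:

{code:java}
// Illustrative sketch only; the character class is an assumption, not the real Hadoop pattern.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class ChownArgCheck {

  // '+' is added to the allowed set so winbind-style names such as DOMAIN+user
  // are accepted; the Windows character set would stay unchanged.
  private static final String ALLOWED = "[-_./@a-zA-Z0-9+]";

  private static final Pattern OWNER_GROUP = Pattern.compile(
      "^\\s*(" + ALLOWED + "+)?([:](" + ALLOWED + "*))?\\s*$");

  public static void main(String[] args) {
    Matcher m = OWNER_GROUP.matcher("MYCOMPANY+marc.villacorta:hadoop");
    if (m.matches()) {
      System.out.println("owner=" + m.group(1) + " group=" + m.group(3));
    } else {
      System.out.println("does not match expected pattern for [owner][:group]");
    }
  }
}
{code}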

> dfs -chown does not like "+" plus sign in user name
> ---
>
> Key: HADOOP-9851
> URL: https://issues.apache.org/jira/browse/HADOOP-9851
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.5-alpha
>Reporter: Marc Villacorta
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HADOOP-9851.01.patch, HADOOP-9851.02.patch
>
>
> I intend to set user and group:
> *User:* _MYCOMPANY+marc.villacorta_
> *Group:* hadoop
> where _'+'_ is what we use as a winbind separator.
> And this is what I get:
> {code:none}
> sudo -u hdfs hadoop fs -touchz /tmp/test.txt
> sudo -u hdfs hadoop fs -chown MYCOMPANY+marc.villacorta:hadoop /tmp/test.txt
> -chown: 'MYCOMPANY+marc.villacorta:hadoop' does not match expected pattern 
> for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> {code}
> I am using version: 2.0.0-cdh4.3.0
> Quote 
> [source|http://h30097.www3.hp.com/docs/iass/OSIS_62/MAN/MAN8/0044.HTM]:
> {quote}
> winbind separator
>The winbind separator option allows you to specify how NT domain names
>and user names are combined into unix user names when presented to
>users. By default, winbindd will use the traditional '\' separator so
>that the unix user names look like DOMAIN\username. In some cases this
>separator character may cause problems as the '\' character has
>special meaning in unix shells. In that case you can use the winbind
>separator option to specify an alternative separator character. Good
>alternatives may be '/' (although that conflicts with the unix
>directory separator) or a '+' character. The '+' character appears to
>be the best choice for 100% compatibility with existing unix
>utilities, but may be an aesthetically bad choice depending on your
>taste.
>Default: winbind separator = \
>Example: winbind separator = +
> {quote}






[jira] [Updated] (HADOOP-9851) dfs -chown does not like "+" plus sign in user name

2020-06-15 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-9851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-9851:
-
Attachment: HADOOP-9851.02.patch

> dfs -chown does not like "+" plus sign in user name
> ---
>
> Key: HADOOP-9851
> URL: https://issues.apache.org/jira/browse/HADOOP-9851
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.5-alpha
>Reporter: Marc Villacorta
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HADOOP-9851.01.patch, HADOOP-9851.02.patch
>
>
> I intend to set user and group:
> *User:* _MYCOMPANY+marc.villacorta_
> *Group:* hadoop
> where _'+'_ is what we use as a winbind separator.
> And this is what I get:
> {code:none}
> sudo -u hdfs hadoop fs -touchz /tmp/test.txt
> sudo -u hdfs hadoop fs -chown MYCOMPANY+marc.villacorta:hadoop /tmp/test.txt
> -chown: 'MYCOMPANY+marc.villacorta:hadoop' does not match expected pattern 
> for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> {code}
> I am using version: 2.0.0-cdh4.3.0
> Quote 
> [source|http://h30097.www3.hp.com/docs/iass/OSIS_62/MAN/MAN8/0044.HTM]:
> {quote}
> winbind separator
>The winbind separator option allows you to specify how NT domain names
>and user names are combined into unix user names when presented to
>users. By default, winbindd will use the traditional '\' separator so
>that the unix user names look like DOMAIN\username. In some cases this
>separator character may cause problems as the '\' character has
>special meaning in unix shells. In that case you can use the winbind
>separator option to specify an alternative separator character. Good
>alternatives may be '/' (although that conflicts with the unix
>directory separator) or a '+' character. The '+' character appears to
>be the best choice for 100% compatibility with existing unix
>utilities, but may be an aesthetically bad choice depending on your
>taste.
>Default: winbind separator = \
>Example: winbind separator = +
> {quote}






[jira] [Commented] (HADOOP-15446) WASB: PageBlobInputStream.skip breaks HBASE replication

2020-06-08 Thread Andras Bokor (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17128251#comment-17128251
 ] 

Andras Bokor commented on HADOOP-15446:
---

For git greppers:

Two commits belong to this ticket:
{noformat}
HADOOP-15446. WASB: PageBlobInputStream.skip breaks HBASE replication.
HADOOP-15446. ABFS: tune imports & javadocs; stabilise tests.
{noformat}
The second one actually belongs to HADOOP-15546.

> WASB: PageBlobInputStream.skip breaks HBASE replication
> ---
>
> Key: HADOOP-15446
> URL: https://issues.apache.org/jira/browse/HADOOP-15446
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.2
>Reporter: Thomas Marqardt
>Assignee: Thomas Marqardt
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2
>
> Attachments: HADOOP-15446-001.patch, HADOOP-15446-002.patch, 
> HADOOP-15446-003.patch, HADOOP-15446-branch-2.001.patch
>
>
> Page Blobs are primarily used by HBASE.  HBASE replication, which apparently 
> has not been used with WASB until recently, performs non-sequential reads on 
> log files using PageBlobInputStream.  There are bugs in this stream 
> implementation which prevent skip and seek from working properly, and 
> eventually the stream state becomes corrupt and unusable.
> I believe this bug affects all releases of WASB/HADOOP.  It appears to be a 
> day-0 bug in PageBlobInputStream.  There were similar bugs opened in the past 
> (HADOOP-15042) but the issue was not properly fixed, and no test coverage was 
> added.
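
For background, a generic sketch of the bookkeeping a buffered, block-based 
InputStream has to keep consistent in skip(); this is not the PageBlobInputStream 
code, just the pattern the description refers to: consume buffered bytes first, 
then advance the underlying source, and keep the reported position in sync.

{code:java}
// Generic illustration of skip() bookkeeping in a buffered stream; not PageBlobInputStream.
import java.io.IOException;
import java.io.InputStream;

public class BufferedSkipStream extends InputStream {
  private final InputStream wrapped;   // underlying page/block source
  private byte[] buffer = new byte[0]; // data already fetched
  private int bufferPos = 0;           // next unread byte in the buffer
  private long streamPos = 0;          // logical position reported to callers

  public BufferedSkipStream(InputStream wrapped) {
    this.wrapped = wrapped;
  }

  @Override
  public int read() throws IOException {
    if (bufferPos >= buffer.length) {
      return -1; // refill logic omitted in this sketch
    }
    streamPos++;
    return buffer[bufferPos++] & 0xff;
  }

  @Override
  public long skip(long n) throws IOException {
    if (n <= 0) {
      return 0;
    }
    // 1. Satisfy as much as possible from the already-buffered data.
    long fromBuffer = Math.min(n, buffer.length - bufferPos);
    bufferPos += fromBuffer;
    // 2. Skip the remainder in the underlying stream, trusting its return value.
    long fromWrapped = n > fromBuffer ? wrapped.skip(n - fromBuffer) : 0;
    // 3. Keep the reported position consistent with what was actually skipped.
    long skipped = fromBuffer + fromWrapped;
    streamPos += skipped;
    return skipped;
  }

  public long getPos() {
    return streamPos;
  }
}
{code}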






[jira] [Commented] (HADOOP-15546) ABFS: tune imports & javadocs; stabilise tests

2020-06-08 Thread Andras Bokor (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17128247#comment-17128247
 ] 

Andras Bokor commented on HADOOP-15546:
---

For git greppers: this was committed with the following commit message:

"HADOOP-15446. ABFS: tune imports & javadocs; stabilise tests."

So grepping for HADOOP-15546 will show no result.

> ABFS: tune imports & javadocs; stabilise tests
> --
>
> Key: HADOOP-15546
> URL: https://issues.apache.org/jira/browse/HADOOP-15546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: HADOOP-15407
>Reporter: Steve Loughran
>Assignee: Thomas Marqardt
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15546-001.patch, 
> HADOOP-15546-HADOOP-15407-001.patch, HADOOP-15546-HADOOP-15407-002.patch, 
> HADOOP-15546-HADOOP-15407-003.patch, HADOOP-15546-HADOOP-15407-004.patch, 
> HADOOP-15546-HADOOP-15407-005.patch, HADOOP-15546-HADOOP-15407-006.patch, 
> HADOOP-15546-HADOOP-15407-006.patch, HADOOP-15546-HADOOP-15407-007.patch, 
> HADOOP-15546-HADOOP-15407-008.patch, HADOOP-15546-HADOOP-15407-009.patch, 
> HADOOP-15546-HADOOP-15407-010.patch, HADOOP-15546-HADOOP-15407-011.patch, 
> HADOOP-15546-HADOOP-15407-012.patch, azure-auth-keys.xml
>
>
> Followup on HADOOP-15540 with some initial review tuning
> h2. Tuning
> * ordering of imports
> * rely on azure-auth-keys.xml to store credentials (change imports, 
> docs,.gitignore)
> * log4j -> info
> * add a "." to the first sentence of all the javadocs I noticed.
> * remove @Public annotations except for some constants (which includes some 
> commitment to maintain them).
> * move the AbstractFS declarations out of the src/test/resources XML file 
> into core-default.xml for all to use
> * other IDE-suggested tweaks
> h2. Testing
> Review the tests, move to ContractTestUtil assertions, make more consistent 
> to contract test setup, and general work to make the tests work well over 
> slower links, document, etc.






[jira] [Resolved] (HADOOP-6377) ChecksumFileSystem.getContentSummary throws NPE when directory contains inaccessible directories

2020-01-08 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-6377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-6377.
--
Resolution: Duplicate

> ChecksumFileSystem.getContentSummary throws NPE when directory contains 
> inaccessible directories
> 
>
> Key: HADOOP-6377
> URL: https://issues.apache.org/jira/browse/HADOOP-6377
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.21.0, 0.22.0
>Reporter: Todd Lipcon
>Assignee: Andras Bokor
>Priority: Major
>
> When getContentSummary is called on a path that contains an unreadable 
> directory, it throws NPE, since RawLocalFileSystem.listStatus(Path) returns 
> null when File.list() returns null.
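
To make the failure mode concrete: java.io.File.list() returns null for a 
directory the process cannot read, so any listStatus-style code that passes that 
null along makes the caller fail with an NPE. A small sketch of the defensive 
alternative (the helper names are hypothetical, this is not the RawLocalFileSystem 
code):

{code:java}
// Sketch of the null-propagation problem and a defensive fix; not the Hadoop source.
import java.io.File;
import java.io.IOException;

public final class ListDirExample {

  /** Mirrors the problematic shape: returns null for unreadable directories. */
  static String[] listOrNull(File dir) {
    return dir.list(); // null when the directory cannot be read
  }

  /** Defensive variant: surface the problem as an IOException instead of null. */
  static String[] listOrThrow(File dir) throws IOException {
    String[] names = dir.list();
    if (names == null) {
      throw new IOException("Could not list directory " + dir
          + " (not a directory, or permission denied)");
    }
    return names;
  }

  public static void main(String[] args) throws IOException {
    File dir = new File(args[0]);
    // Code that trusts listOrNull() would NPE on names.length here.
    String[] names = listOrThrow(dir);
    System.out.println(dir + " contains " + names.length + " entries");
  }
}
{code}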






[jira] [Commented] (HADOOP-16771) Update checkstyle to 8.26 and maven-checkstyle-plugin to 3.1.0

2019-12-20 Thread Andras Bokor (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17000762#comment-17000762
 ] 

Andras Bokor commented on HADOOP-16771:
---

Thanks, [~aajisaka]!

> Update checkstyle to 8.26 and maven-checkstyle-plugin to 3.1.0
> --
>
> Key: HADOOP-16771
> URL: https://issues.apache.org/jira/browse/HADOOP-16771
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HADOOP-16771.001.patch
>
>
> After upgrading to the latest IDEA, the IDE throws an error message when I try 
> to load the checkstyle XML:
> {code:java}
> TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
> Module' section for this Check in web documentation if Check is 
> standard.{code}
> [This is caused by some backward incompatible changes in checkstyle source 
> code|https://github.com/checkstyle/checkstyle/issues/2116]
> IDEA uses checkstyle 8.26.
> We should upgrade our checkstyle version to be compatible with the latest 
> version of IDEA's checkstyle plugin.
> It is also a good time to upgrade maven-checkstyle-plugin to 3.1.






[jira] [Updated] (HADOOP-16771) Checkstyle version is not compatible with IDEA's checkstyle plugin

2019-12-19 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16771:
--
Description: 
After upgrading to the latest IDEA the IDE throws error message when I try to 
load the checkstyle xml
{code:java}
TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
Module' section for this Check in web documentation if Check is standard.{code}
[This is caused by some backward incompatible changes in checkstyle source 
code|https://github.com/checkstyle/checkstyle/issues/2116]

IDEA uses checkstyle 8.26

We should upgrade our checkstyle version to be compatible with IDEA's 
checkstyle plugin which is the latest.
 Also it's a good time to upgrade maven-checkstyle-plugin as well to 3.1.

  was:
After upgrading to the latest IDEA the IDE throws error message when I try to 
load the checkstyle xml
{code:java}
TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
Module' section for this Check in web documentation if Check is standard.{code}
[This is caused by some backward incompatible changes in checkstyle source 
code|https://github.com/checkstyle/checkstyle/issues/2116]

IDEA uses checkstyle 8.26

We should upgrade our checkstyle version to be compatible with IDEA's 
checkstyle plugin which is the latest.
 Also it's a good time to upgrade maven-checkstyle-plugin as well to brand new 
3.1.


> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-16771
> URL: https://issues.apache.org/jira/browse/HADOOP-16771
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-16771.001.patch
>
>
> After upgrading to the latest IDEA, the IDE throws an error message when I try 
> to load the checkstyle XML:
> {code:java}
> TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
> Module' section for this Check in web documentation if Check is 
> standard.{code}
> [This is caused by some backward incompatible changes in checkstyle source 
> code|https://github.com/checkstyle/checkstyle/issues/2116]
> IDEA uses checkstyle 8.26.
> We should upgrade our checkstyle version to be compatible with the latest 
> version of IDEA's checkstyle plugin.
> It is also a good time to upgrade maven-checkstyle-plugin to 3.1.






[jira] [Updated] (HADOOP-16771) Checkstyle version is not compatible with IDEA's checkstyle plugin

2019-12-19 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16771:
--
Component/s: build

> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-16771
> URL: https://issues.apache.org/jira/browse/HADOOP-16771
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-16771.001.patch
>
>
> After upgrading to the latest IDEA, the IDE throws an error message when I try 
> to load the checkstyle XML:
> {code:java}
> TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
> Module' section for this Check in web documentation if Check is 
> standard.{code}
> [This is caused by some backward incompatible changes in checkstyle source 
> code|https://github.com/checkstyle/checkstyle/issues/2116]
> IDEA uses checkstyle 8.26.
> We should upgrade our checkstyle version to be compatible with the latest 
> version of IDEA's checkstyle plugin.
> It is also a good time to upgrade maven-checkstyle-plugin to the brand new 3.1.






[jira] [Updated] (HADOOP-16771) Checkstyle version is not compatible with IDEA's checkstyle plugin

2019-12-19 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16771:
--
Affects Version/s: 3.3.0

> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-16771
> URL: https://issues.apache.org/jira/browse/HADOOP-16771
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-16771.001.patch
>
>
> After upgrading to the latest IDEA, the IDE throws an error message when I try 
> to load the checkstyle XML:
> {code:java}
> TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
> Module' section for this Check in web documentation if Check is 
> standard.{code}
> [This is caused by some backward incompatible changes in checkstyle source 
> code|https://github.com/checkstyle/checkstyle/issues/2116]
> IDEA uses checkstyle 8.26.
> We should upgrade our checkstyle version to be compatible with the latest 
> version of IDEA's checkstyle plugin.
> It is also a good time to upgrade maven-checkstyle-plugin to the brand new 3.1.






[jira] [Updated] (HADOOP-16771) Checkstyle version is not compatible with IDEA's checkstyle plugin

2019-12-19 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16771:
--
   Fix Version/s: (was: 3.0.4)
  (was: 3.1.0)
Hadoop Flags:   (was: Reviewed)
Release Note: Updated checkstyle to 8.26 and updated 
maven-checkstyle-plugin to 3.1.0.  (was: Updated checkstyle to 8.8 and updated 
maven-checkstyle-plugin to 3.0.0.)
Target Version/s:   (was: 3.2.0)
  Status: Patch Available  (was: Open)

> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-16771
> URL: https://issues.apache.org/jira/browse/HADOOP-16771
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-16771.001.patch
>
>
> After upgrading to the latest IDEA, the IDE throws an error message when I try 
> to load the checkstyle XML:
> {code:java}
> TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
> Module' section for this Check in web documentation if Check is 
> standard.{code}
> [This is caused by some backward incompatible changes in checkstyle source 
> code|https://github.com/checkstyle/checkstyle/issues/2116]
> IDEA uses checkstyle 8.26.
> We should upgrade our checkstyle version to be compatible with the latest 
> version of IDEA's checkstyle plugin.
> It is also a good time to upgrade maven-checkstyle-plugin to the brand new 3.1.






[jira] [Updated] (HADOOP-16771) Checkstyle version is not compatible with IDEA's checkstyle plugin

2019-12-19 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16771:
--
Attachment: HADOOP-16771.001.patch

> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-16771
> URL: https://issues.apache.org/jira/browse/HADOOP-16771
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Fix For: 3.1.0, 3.0.4
>
> Attachments: HADOOP-16771.001.patch
>
>
> After upgrading to the latest IDEA, the IDE throws an error message when I try 
> to load the checkstyle XML:
> {code:java}
> TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
> Module' section for this Check in web documentation if Check is 
> standard.{code}
> [This is caused by some backward incompatible changes in checkstyle source 
> code|https://github.com/checkstyle/checkstyle/issues/2116]
> IDEA uses checkstyle 8.26.
> We should upgrade our checkstyle version to be compatible with the latest 
> version of IDEA's checkstyle plugin.
> It is also a good time to upgrade maven-checkstyle-plugin to the brand new 3.1.






[jira] [Updated] (HADOOP-16771) Checkstyle version is not compatible with IDEA's checkstyle plugin

2019-12-19 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16771:
--
Description: 
After upgrading to the latest IDEA the IDE throws error message when I try to 
load the checkstyle xml
{code:java}
TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
Module' section for this Check in web documentation if Check is standard.{code}
[This is caused by some backward incompatible changes in checkstyle source 
code|https://github.com/checkstyle/checkstyle/issues/2116]

IDEA uses checkstyle 8.26

We should upgrade our checkstyle version to be compatible with IDEA's 
checkstyle plugin which is the latest.
 Also it's a good time to upgrade maven-checkstyle-plugin as well to brand new 
3.1.

  was:
After upgrading to the latest IDEA the IDE throws error message when I try to 
load the checkstyle xml
{code:java}
TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
Module' section for this Check in web documentation if Check is standard.{code}
[This is caused by some backward incompatible changes in checkstyle source 
code|https://github.com/checkstyle/checkstyle/issues/2116]

IDEA uses checkstyle 8.26

We should upgrade our checkstyle version to be compatible with IDEA's 
checkstyle plugin.
 Also it's a good time to upgrade maven-checkstyle-plugin as well to brand new 
3.1.


> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-16771
> URL: https://issues.apache.org/jira/browse/HADOOP-16771
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Fix For: 3.1.0, 3.0.4
>
>
> After upgrading to the latest IDEA, the IDE throws an error message when I try 
> to load the checkstyle XML:
> {code:java}
> TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
> Module' section for this Check in web documentation if Check is 
> standard.{code}
> [This is caused by some backward incompatible changes in checkstyle source 
> code|https://github.com/checkstyle/checkstyle/issues/2116]
> IDEA uses checkstyle 8.26.
> We should upgrade our checkstyle version to be compatible with the latest 
> version of IDEA's checkstyle plugin.
> It is also a good time to upgrade maven-checkstyle-plugin to the brand new 3.1.






[jira] [Updated] (HADOOP-16771) Checkstyle version is not compatible with IDEA's checkstyle plugin

2019-12-19 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16771:
--
Description: 
After upgrading to the latest IDEA the IDE throws error message when I try to 
load the checkstyle xml
{code:java}
TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
Module' section for this Check in web documentation if Check is standard.{code}
[This is caused by some backward incompatible changes in checkstyle source 
code|https://github.com/checkstyle/checkstyle/issues/2116]

IDEA uses checkstyle 8.26

We should upgrade our checkstyle version to be compatible with IDEA's 
checkstyle plugin.
 Also it's a good time to upgrade maven-checkstyle-plugin as well to brand new 
3.1.

  was:
After upgrading to the latest IDEA the IDE throws error messages in every few 
minutes like
{code:java}
The Checkstyle rules file could not be parsed.
SuppressionCommentFilter is not allowed as a child in Checker
The file has been blacklisted for 60s.{code}
This is caused by some backward incompatible changes in checkstyle source code:
 [http://checkstyle.sourceforge.net/releasenotes.html]
 * 8.1: Make SuppressionCommentFilter and SuppressWithNearbyCommentFilter 
children of TreeWalker.
 * 8.2: remove FileContentsHolder module as FileContents object is available 
for filters on TreeWalker in TreeWalkerAudit Event.

IDEA uses checkstyle 8.8

We should upgrade our checkstyle version to be compatible with IDEA's 
checkstyle plugin.
 Also it's a good time to upgrade maven-checkstyle-plugin as well to brand new 
3.0.


> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-16771
> URL: https://issues.apache.org/jira/browse/HADOOP-16771
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Fix For: 3.1.0, 3.0.4
>
>
> After upgrading to the latest IDEA, the IDE throws an error message when I try 
> to load the checkstyle XML:
> {code:java}
> TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
> Module' section for this Check in web documentation if Check is 
> standard.{code}
> [This is caused by some backward incompatible changes in checkstyle source 
> code|https://github.com/checkstyle/checkstyle/issues/2116]
> IDEA uses checkstyle 8.26.
> We should upgrade our checkstyle version to be compatible with IDEA's 
> checkstyle plugin.
> It is also a good time to upgrade maven-checkstyle-plugin to the brand new 3.1.






[jira] [Created] (HADOOP-16771) Checkstyle version is not compatible with IDEA's checkstyle plugin

2019-12-19 Thread Andras Bokor (Jira)
Andras Bokor created HADOOP-16771:
-

 Summary: Checkstyle version is not compatible with IDEA's 
checkstyle plugin
 Key: HADOOP-16771
 URL: https://issues.apache.org/jira/browse/HADOOP-16771
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Andras Bokor
Assignee: Andras Bokor
 Fix For: 3.1.0, 3.0.4


After upgrading to the latest IDEA, the IDE throws error messages every few 
minutes, such as:
{code:java}
The Checkstyle rules file could not be parsed.
SuppressionCommentFilter is not allowed as a child in Checker
The file has been blacklisted for 60s.{code}
This is caused by some backward incompatible changes in checkstyle source code:
 [http://checkstyle.sourceforge.net/releasenotes.html]
 * 8.1: Make SuppressionCommentFilter and SuppressWithNearbyCommentFilter 
children of TreeWalker.
 * 8.2: remove FileContentsHolder module as FileContents object is available 
for filters on TreeWalker in TreeWalkerAudit Event.

IDEA uses checkstyle 8.8

We should upgrade our checkstyle version to be compatible with IDEA's 
checkstyle plugin.
It is also a good time to upgrade maven-checkstyle-plugin to the brand new 3.0.






[jira] [Updated] (HADOOP-16710) testing_azure.md documentation is misleading

2019-11-14 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16710:
--
Component/s: test

> testing_azure.md documentation is misleading
> 
>
> Key: HADOOP-16710
> URL: https://issues.apache.org/jira/browse/HADOOP-16710
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Affects Versions: 3.2.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-16710.001.patch
>
>
> testing_azure.md states that "-Dparallel-tests" will run all the integration 
> tests in parallel.
> But in fact using -Dparallel-tests without any value actually skips the 
> integration tests and runs only the unit tests.
> The reason is that activating a profile which runs the ITs in parallel 
> requires the parallel-tests property to have a value (abfs, wasb, or 'both'), 
> while the sequential-tests profile declares !parallel-tests as its activation 
> condition, which means the property must not be set at all.
> Please check the output of help:active-profiles command:
>  
> {code:java}
> cd hadoop-tools/hadoop-azure
> andrasbokor$ mvn help:active-profiles -Dparallel-tests=abfs 
> - parallel-tests-abfs (source: org.apache.hadoop:hadoop-azure:3.3.0-SNAPSHOT) 
> - os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT) 
> - hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT) {code}
> {code:java}
> andrasbokor$ mvn help:active-profiles -Dparallel-tests
> - os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
> - hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
> {code}
> {code:java}
> mvn help:active-profiles
> - sequential-tests (source: org.apache.hadoop:hadoop-azure:3.3.0-SNAPSHOT)
> - os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
> - hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT){code}
> The help:active-profiles output shows that -Dparallel-tests alone does not 
> activate any IT-related profile, so all the integration tests are skipped 
> during the verify phase.






[jira] [Updated] (HADOOP-16710) testing_azure.md documentation is misleading

2019-11-14 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16710:
--
Status: Patch Available  (was: Open)

> testing_azure.md documentation is misleading
> 
>
> Key: HADOOP-16710
> URL: https://issues.apache.org/jira/browse/HADOOP-16710
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-16710.001.patch
>
>
> testing_azure.md states that "-Dparallel-tests" will run all the integration 
> tests in parallel.
> But in fact using -Dparallel-tests without any value actually skips the 
> integration tests and runs only the unit tests.
> The reason is that activating a profile which runs the ITs in parallel 
> requires the parallel-tests property to have a value (abfs, wasb, or 'both'), 
> while the sequential-tests profile declares !parallel-tests as its activation 
> condition, which means the property must not be set at all.
> Please check the output of help:active-profiles command:
>  
> {code:java}
> cd hadoop-tools/hadoop-azure
> andrasbokor$ mvn help:active-profiles -Dparallel-tests=abfs 
> - parallel-tests-abfs (source: org.apache.hadoop:hadoop-azure:3.3.0-SNAPSHOT) 
> - os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT) 
> - hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT) {code}
> {code:java}
> andrasbokor$ mvn help:active-profiles -Dparallel-tests
> - os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
> - hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
> {code}
> {code:java}
> mvn help:active-profiles
> - sequential-tests (source: org.apache.hadoop:hadoop-azure:3.3.0-SNAPSHOT)
> - os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
> - hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT){code}
> The help:active-profiles output shows that -Dparallel-tests alone does not 
> activate any IT-related profile, so all the integration tests are skipped 
> during the verify phase.






[jira] [Updated] (HADOOP-16710) testing_azure.md documentation is misleading

2019-11-14 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16710:
--
Attachment: HADOOP-16710.001.patch

> testing_azure.md documentation is misleading
> 
>
> Key: HADOOP-16710
> URL: https://issues.apache.org/jira/browse/HADOOP-16710
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-16710.001.patch
>
>
> testing_azure.md states that "-Dparallel-tests" will run all the integration 
> tests in parallel.
> But in fact using -Dparallel-tests without any value actually skips the 
> integration tests and runs only the unit tests.
> The reason is that activating a profile which runs the ITs in parallel 
> requires the parallel-tests property to have a value (abfs, wasb, or 'both'), 
> while the sequential-tests profile declares !parallel-tests as its activation 
> condition, which means the property must not be set at all.
> Please check the output of help:active-profiles command:
>  
> {code:java}
> cd hadoop-tools/hadoop-azure
> andrasbokor$ mvn help:active-profiles -Dparallel-tests=abfs 
> - parallel-tests-abfs (source: org.apache.hadoop:hadoop-azure:3.3.0-SNAPSHOT) 
> - os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT) 
> - hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT) {code}
> {code:java}
> andrasbokor$ mvn help:active-profiles -Dparallel-tests
> - os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
> - hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
> {code}
> {code:java}
> mvn help:active-profiles
> - sequential-tests (source: org.apache.hadoop:hadoop-azure:3.3.0-SNAPSHOT)
> - os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
> - hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT){code}
> The help:active-profiles output shows that -Dparallel-tests alone does not 
> activate any IT-related profile, so all the integration tests are skipped 
> during the verify phase.






[jira] [Created] (HADOOP-16710) testing_azure.md documentation is misleading

2019-11-14 Thread Andras Bokor (Jira)
Andras Bokor created HADOOP-16710:
-

 Summary: testing_azure.md documentation is misleading
 Key: HADOOP-16710
 URL: https://issues.apache.org/jira/browse/HADOOP-16710
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Affects Versions: 3.2.0
Reporter: Andras Bokor
Assignee: Andras Bokor


testing_azure.md states that "-Dparallel-tests" will run all the integration 
tests in parallel.

But in fact using -Dparallel-tests without any value actually skips the 
integration tests and runs only the unit tests.

The reason is that activating a profile which runs the ITs in parallel 
requires the parallel-tests property to have a value (abfs, wasb, or 'both'), 
while the sequential-tests profile declares !parallel-tests as its activation 
condition, which means the property must not be set at all.

Please check the output of help:active-profiles command:

 
{code:java}
cd hadoop-tools/hadoop-azure
andrasbokor$ mvn help:active-profiles -Dparallel-tests=abfs 
- parallel-tests-abfs (source: org.apache.hadoop:hadoop-azure:3.3.0-SNAPSHOT) 
- os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT) 
- hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT) {code}
{code:java}
andrasbokor$ mvn help:active-profiles -Dparallel-tests
- os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
- hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
{code}
{code:java}
mvn help:active-profiles
- sequential-tests (source: org.apache.hadoop:hadoop-azure:3.3.0-SNAPSHOT)
- os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
- hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT){code}
The help:active-profiles output shows that -Dparallel-tests alone does not 
activate any IT-related profile, so all the integration tests are skipped 
during the verify phase.






[jira] [Updated] (HADOOP-16617) ITestGetNameSpaceEnabled#testFailedRequestWhenFSNotExist fails with ns disabled account

2019-09-30 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16617:
--
Affects Version/s: 3.2.1

> ITestGetNameSpaceEnabled#testFailedRequestWhenFSNotExist fails with ns 
> disabled account
> ---
>
> Key: HADOOP-16617
> URL: https://issues.apache.org/jira/browse/HADOOP-16617
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Affects Versions: 3.2.1
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>
> AzureBlobFileSystemStore#getIsNamespaceEnabled() gets the ACL status of the root 
> path to decide whether the account is XNS or not. If it is not, the call returns 
> 400 as the error code, which means the account is a non-XNS account.
> The problem is that we get 400 and getIsNamespaceEnabled returns false even if 
> the filesystem does not exist, which seems OK, but according to the test we 
> should get 404. So it seems the expected behavior is to return 404.
> At this point I am not sure how to fix it. Should we insist on the expected 
> behavior and fix it on the server side, or should we just adjust the test to 
> expect false in case of a non-XNS account?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16617) ITestGetNameSpaceEnabled#testFailedRequestWhenFSNotExist fails with ns disabled account

2019-09-30 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16617:
--
Component/s: test
 fs/azure

> ITestGetNameSpaceEnabled#testFailedRequestWhenFSNotExist fails with ns 
> disabled account
> ---
>
> Key: HADOOP-16617
> URL: https://issues.apache.org/jira/browse/HADOOP-16617
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>
> AzureBlobFileSystemStore#getIsNamespaceEnabled() gets the ACL status of the root 
> path to decide whether the account is XNS or not. If it is not, the call returns 
> 400 as the error code, which means the account is a non-XNS account.
> The problem is that we get 400 and getIsNamespaceEnabled returns false even if 
> the filesystem does not exist, which seems OK, but according to the test we 
> should get 404. So it seems the expected behavior is to return 404.
> At this point I am not sure how to fix it. Should we insist on the expected 
> behavior and fix it on the server side, or should we just adjust the test to 
> expect false in case of a non-XNS account?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16616) ITestAzureFileSystemInstrumentation#testMetricsOnBigFileCreateRead fails

2019-09-30 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16616:
--
Affects Version/s: 3.2.1

> ITestAzureFileSystemInstrumentation#testMetricsOnBigFileCreateRead fails
> 
>
> Key: HADOOP-16616
> URL: https://issues.apache.org/jira/browse/HADOOP-16616
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Affects Versions: 3.2.1
>Reporter: Andras Bokor
>Priority: Major
>
> {code:java}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 238.687 s <<< FAILURE! - in 
> org.apache.hadoop.fs.azure.metrics.ITestAzureFileSystemInstrumentation
> [ERROR] 
> testMetricsOnBigFileCreateRead(org.apache.hadoop.fs.azure.metrics.ITestAzureFileSystemInstrumentation)
>   Time elapsed: 238.5 s  <<< FAILURE!
> java.lang.AssertionError: The download latency 0 should be greater than zero 
> now that I've just downloaded a file.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.fs.azure.metrics.ITestAzureFileSystemInstrumentation.testMetricsOnBigFileCreateRead(ITestAzureFileSystemInstrumentation.java:303)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16616) ITestAzureFileSystemInstrumentation#testMetricsOnBigFileCreateRead fails

2019-09-30 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16616:
--
Component/s: fs/azure

> ITestAzureFileSystemInstrumentation#testMetricsOnBigFileCreateRead fails
> 
>
> Key: HADOOP-16616
> URL: https://issues.apache.org/jira/browse/HADOOP-16616
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Reporter: Andras Bokor
>Priority: Major
>
> {code:java}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 238.687 s <<< FAILURE! - in 
> org.apache.hadoop.fs.azure.metrics.ITestAzureFileSystemInstrumentation
> [ERROR] 
> testMetricsOnBigFileCreateRead(org.apache.hadoop.fs.azure.metrics.ITestAzureFileSystemInstrumentation)
>   Time elapsed: 238.5 s  <<< FAILURE!
> java.lang.AssertionError: The download latency 0 should be greater than zero 
> now that I've just downloaded a file.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.fs.azure.metrics.ITestAzureFileSystemInstrumentation.testMetricsOnBigFileCreateRead(ITestAzureFileSystemInstrumentation.java:303)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16617) ITestGetNameSpaceEnabled#testFailedRequestWhenFSNotExist fails with ns disabled account

2019-09-30 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16617:
--
Description: 
AzureBlobFileSystemStore#getIsNamespaceEnabled() gets the ACL status of the root 
path to decide whether the account is XNS or not. If it is not, the call returns 
400 as the error code, which means the account is a non-XNS account.

The problem is that we get 400 and getIsNamespaceEnabled returns false even if 
the filesystem does not exist, which seems OK, but according to the test we 
should get 404. So it seems the expected behavior is to return 404.

At this point I am not sure how to fix it. Should we insist on the expected 
behavior and fix it on the server side, or should we just adjust the test to 
expect false in case of a non-XNS account?

  was:
AzureBlobFileSystemStore#getIsNamespaceEnabled() gets ACL status of root path 
to decide whether the account is XNS or not. If not it return with 400 as error 
code which means the account is a non-XNS acc.

The problem is that we get 400 and the getIsNamespaceEnabled return false even 
if the filesystem does not exist which seems ok but according to the test we 
should get 404. So it seems the expected behavior is to return 404.

At this point I am not sure how to fix it. Should we insist to the expected 
behavior and fix it on server side or we just adjust the test to expect false 
in case of non XNS account?


> ITestGetNameSpaceEnabled#testFailedRequestWhenFSNotExist fails with ns 
> disabled account
> ---
>
> Key: HADOOP-16617
> URL: https://issues.apache.org/jira/browse/HADOOP-16617
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>
> AzureBlobFileSystemStore#getIsNamespaceEnabled() gets the ACL status of the root 
> path to decide whether the account is XNS or not. If it is not, the call returns 
> 400 as the error code, which means the account is a non-XNS account.
> The problem is that we get 400 and getIsNamespaceEnabled returns false even if 
> the filesystem does not exist, which seems OK, but according to the test we 
> should get 404. So it seems the expected behavior is to return 404.
> At this point I am not sure how to fix it. Should we insist on the expected 
> behavior and fix it on the server side, or should we just adjust the test to 
> expect false in case of a non-XNS account?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16617) ITestGetNameSpaceEnabled#testFailedRequestWhenFSNotExist fails with ns disabled account

2019-09-30 Thread Andras Bokor (Jira)
Andras Bokor created HADOOP-16617:
-

 Summary: ITestGetNameSpaceEnabled#testFailedRequestWhenFSNotExist 
fails with ns disabled account
 Key: HADOOP-16617
 URL: https://issues.apache.org/jira/browse/HADOOP-16617
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Andras Bokor
Assignee: Andras Bokor


AzureBlobFileSystemStore#getIsNamespaceEnabled() gets the ACL status of the root 
path to decide whether the account is XNS or not. If it is not, the call returns 
400 as the error code, which means the account is a non-XNS account.

The problem is that we get 400 and getIsNamespaceEnabled returns false even if 
the filesystem does not exist, which seems OK, but according to the test we 
should get 404. So it seems the expected behavior is to return 404.

At this point I am not sure how to fix it. Should we insist on the expected 
behavior and fix it on the server side, or should we just adjust the test to 
expect false in case of a non-XNS account?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16616) ITestAzureFileSystemInstrumentation#testMetricsOnBigFileCreateRead fails

2019-09-30 Thread Andras Bokor (Jira)
Andras Bokor created HADOOP-16616:
-

 Summary: 
ITestAzureFileSystemInstrumentation#testMetricsOnBigFileCreateRead fails
 Key: HADOOP-16616
 URL: https://issues.apache.org/jira/browse/HADOOP-16616
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Andras Bokor


{code:java}
[ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 238.687 
s <<< FAILURE! - in 
org.apache.hadoop.fs.azure.metrics.ITestAzureFileSystemInstrumentation
[ERROR] 
testMetricsOnBigFileCreateRead(org.apache.hadoop.fs.azure.metrics.ITestAzureFileSystemInstrumentation)
  Time elapsed: 238.5 s  <<< FAILURE!
java.lang.AssertionError: The download latency 0 should be greater than zero 
now that I've just downloaded a file.
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.fs.azure.metrics.ITestAzureFileSystemInstrumentation.testMetricsOnBigFileCreateRead(ITestAzureFileSystemInstrumentation.java:303)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:745)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-12342) Use SLF4j in ProtobufRpcEngine class

2019-08-08 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-12342.
---
Resolution: Duplicate

> Use SLF4j in ProtobufRpcEngine class
> 
>
> Key: HADOOP-12342
> URL: https://issues.apache.org/jira/browse/HADOOP-12342
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>
> There is a considerable amount of debug/trace level logging in this class. This 
> ticket is opened to convert it to use SLF4J for better performance. 
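
For illustration only (the class and messages below are made up, not taken from 
ProtobufRpcEngine), this is the kind of change the conversion implies; SLF4J's 
parameterized logging defers message formatting until the level is known to be 
enabled, which is where the performance win comes from:
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SampleEngine {
  private static final Logger LOG = LoggerFactory.getLogger(SampleEngine.class);

  void traceCall(String method, long callId, long elapsedMillis) {
    // Old commons-logging style needs an explicit guard to avoid building the
    // string when DEBUG is off:
    //   if (LOG.isDebugEnabled()) { LOG.debug("Call " + callId + " to " + method); }
    // With SLF4J the placeholders are only rendered if DEBUG is enabled:
    LOG.debug("Call {} to {} took {} ms", callId, method, elapsedMillis);
  }
}
{code}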



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-12660) TestZKDelegationTokenSecretManager.testMultiNodeOperations failing

2019-08-08 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-12660.
---
Resolution: Cannot Reproduce

> TestZKDelegationTokenSecretManager.testMultiNodeOperations failing
> --
>
> Key: HADOOP-12660
> URL: https://issues.apache.org/jira/browse/HADOOP-12660
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha, test
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins Java8
>Reporter: Steve Loughran
>Priority: Major
>
> Test failure
> {code}
> java.lang.AssertionError: Expected InvalidToken
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.security.token.delegation.TestZKDelegationTokenSecretManager.testMultiNodeOperations(TestZKDelegationTokenSecretManager.java:127)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-3353) DataNode.run() join() and shutdown() ought to have synchronized access to dataNodeThread

2019-08-08 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-3353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-3353.
--
Resolution: Invalid

This code has totally changed in the past 11 years.

> DataNode.run() join()  and shutdown() ought to have  synchronized access to 
> dataNodeThread
> --
>
> Key: HADOOP-3353
> URL: https://issues.apache.org/jira/browse/HADOOP-3353
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Steve Loughran
>Priority: Major
>
> Looking at the DataNode.run() and join() methods, they are manipulating the 
> state of the dataNodeThread:
> {code:java}
> void join() {
>   if (dataNodeThread != null) {
>     try {
>       dataNodeThread.join();
>     } catch (InterruptedException e) {}
>   }
> }
> {code}
> There's something similar in shutdown()
> This could lead to race conditions on shutdown, where the check passes and 
> then the reference is null when the next method is invoked. 
> Marking major as race conditions are always trouble, and hard to test. 
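
For illustration, the kind of synchronized access the report asks for could look 
like the sketch below (hypothetical names; the real DataNode code has since been 
rewritten, as noted in the resolution):
{code:java}
class DataNodeSketch {
  private Thread dataNodeThread;  // written by startup/shutdown, read by join()

  synchronized void shutdown() {
    if (dataNodeThread != null) {
      dataNodeThread.interrupt();
      dataNodeThread = null;
    }
  }

  void join() {
    // Copy the reference under the lock so a concurrent shutdown() cannot null
    // the field between the null check and the join() call.
    Thread t;
    synchronized (this) {
      t = dataNodeThread;
    }
    if (t != null) {
      try {
        t.join();
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }
  }
}
{code}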



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-16405) Upgrade Wildfly Openssl version to 1.0.7.Final

2019-08-08 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reopened HADOOP-16405:
---

> Upgrade Wildfly Openssl version to 1.0.7.Final
> --
>
> Key: HADOOP-16405
> URL: https://issues.apache.org/jira/browse/HADOOP-16405
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/azure
>Affects Versions: 3.2.0
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
>Priority: Major
> Fix For: 3.3.0
>
>
> Upgrade Wildfly Openssl version to 1.0.7.Final. This version has SNI support 
> which is essential for firewall enabled clusters along with many stability 
> related fixes.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16405) Upgrade Wildfly Openssl version to 1.0.7.Final

2019-08-08 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-16405.
---
Resolution: Duplicate

> Upgrade Wildfly Openssl version to 1.0.7.Final
> --
>
> Key: HADOOP-16405
> URL: https://issues.apache.org/jira/browse/HADOOP-16405
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/azure
>Affects Versions: 3.2.0
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
>Priority: Major
> Fix For: 3.3.0
>
>
> Upgrade Wildfly Openssl version to 1.0.7.Final. This version has SNI support 
> which is essential for firewall enabled clusters along with many stability 
> related fixes.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15218) Make Hadoop compatible with Guava 22.0+

2019-07-11 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-15218.
---
Resolution: Duplicate

> Make Hadoop compatible with Guava 22.0+
> ---
>
> Key: HADOOP-15218
> URL: https://issues.apache.org/jira/browse/HADOOP-15218
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
> Attachments: HADOOP-15218-001.patch
>
>
> The deprecated HostAndPort#getHostText method was deleted in Guava 22.0, and the 
> new HostAndPort#getHost method is not available before Guava 20.0.
> This patch implements a getHost(HostAndPort) method that extracts the host from 
> the HostAndPort#toString value.
> This is a little hacky, which is why I'm not sure it is worth merging this 
> patch, but it would be nice if Hadoop were Guava-neutral.
> With this patch Hadoop can be built against the latest Guava, v24.0.
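
A rough sketch of what such a shim could look like (not the attached patch; it 
assumes HostAndPort#toString keeps its documented formatting, with IPv6 literals 
in brackets):
{code:java}
import com.google.common.net.HostAndPort;

final class HostAndPortCompat {
  private HostAndPortCompat() {
  }

  /** Extract the host from toString(), which exists in both old and new Guava. */
  static String getHost(HostAndPort hp) {
    String s = hp.toString();
    if (s.startsWith("[")) {
      // Bracketed IPv6 literal, e.g. "[::1]:8020" or "[::1]".
      return s.substring(1, s.indexOf(']'));
    }
    int colon = s.indexOf(':');
    return colon >= 0 ? s.substring(0, colon) : s;
  }
}
{code}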



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-15218) Make Hadoop compatible with Guava 22.0+

2019-07-11 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reopened HADOOP-15218:
---

> Make Hadoop compatible with Guava 22.0+
> ---
>
> Key: HADOOP-15218
> URL: https://issues.apache.org/jira/browse/HADOOP-15218
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
> Attachments: HADOOP-15218-001.patch
>
>
> The deprecated HostAndPort#getHostText method was deleted in Guava 22.0, and the 
> new HostAndPort#getHost method is not available before Guava 20.0.
> This patch implements a getHost(HostAndPort) method that extracts the host from 
> the HostAndPort#toString value.
> This is a little hacky, which is why I'm not sure it is worth merging this 
> patch, but it would be nice if Hadoop were Guava-neutral.
> With this patch Hadoop can be built against the latest Guava, v24.0.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16174) Disable wildfly logs to the console

2019-03-08 Thread Andras Bokor (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788028#comment-16788028
 ] 

Andras Bokor commented on HADOOP-16174:
---

{quote}I am not convinced the...
{quote}
We had exactly the same doubt, but our tests on a live Azure cluster showed that 
only one reference is enough. Basically I agree, though: there is no clear 
documentation about how the compiler/JVM handles this situation and what "active 
reference" means. Also, indeed, it is not obvious to a later reader why that code 
should not be cleaned up. So there are some risks.
{quote}Reinstating the log to info afterwards will guarantee that the reference 
is retained, and stop anyone cleaning up the code from unintentionally removing 
the reference. Add a comment to the clause to explain the problem too.
{quote}
I agree. Setting it back to INFO along with a short comment solves all the 
concerns. Let's do this. Thanks.
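
For reference, the agreed workaround amounts to something like the sketch below 
(illustrative only, not the committed patch; the logger name matches the 
wildfly-openssl SSL class referenced in the description):
{code:java}
import java.util.logging.Level;
import java.util.logging.Logger;

final class WildflyLogSilencer {
  // Holding a strong reference (here a static field) keeps the logger alive, so
  // the JUL LogManager's weak reference cannot be collected before SSL.init() logs.
  private static final Logger WILDFLY_SSL_LOG =
      Logger.getLogger("org.wildfly.openssl.SSL");

  static void runInitQuietly(Runnable openSslInit) {
    WILDFLY_SSL_LOG.setLevel(Level.WARNING);  // hide the INFO banner during init
    try {
      openSslInit.run();
    } finally {
      WILDFLY_SSL_LOG.setLevel(Level.INFO);   // restore the level, as agreed above
    }
  }
}
{code}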

> Disable wildfly logs to the console
> ---
>
> Key: HADOOP-16174
> URL: https://issues.apache.org/jira/browse/HADOOP-16174
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Denes Gerencser
>Priority: Major
> Attachments: HADOOP-16174-001.patch
>
>
> We experience that the wildfly log
> {code:java}
> Mar 06, 2019 4:33:53 PM org.wildfly.openssl.SSL init
> INFO: WFOPENSSL0002 OpenSSL Version OpenSSL 1.0.2g  1 Mar 2016
> {code}
> (sometimes) appears on the console but it should never. Note: this is a 
> consequence of HADOOP-15851.
> Our analysis shows the reason is that 
> {code:java}
> java.util.logging.Logger.getLogger()
> {code}
> is not guaranteed to always return the _same_ logger instance, so 
> SSLSocketFactoryEx may set the log level on a different logger object than 
> the one used by wildfly-openssl 
> ([https://github.com/wildfly/wildfly-openssl/blob/ace72ba07d0c746b6eb46635f4a8b122846c47c8/java/src/main/java/org/wildfly/openssl/SSL.java#L196)].
> From javadoc of java.util.logging.Logger.getLogger:
> 'Note: The LogManager may only retain a weak reference to the newly created 
> Logger. It is important to understand that a previously created Logger with 
> the given name may be garbage collected at any time if there is no strong 
> reference to the Logger. In particular, this means that two back-to-back 
> calls like{{getLogger("MyLogger").log(...)}} may use different Logger objects 
> named "MyLogger" if there is no strong reference to the Logger named 
> "MyLogger" elsewhere in the program.'



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16174) Disable wildfly logs to the console

2019-03-08 Thread Andras Bokor (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16787924#comment-16787924
 ] 

Andras Bokor commented on HADOOP-16174:
---

We do not want to preserve the ref outside of the switch case. Using a hard ref 
to the logger keeps the object in memory until we reach SSL.java:196, which is 
enough for us. We used a local variable because we will not need that logger 
anymore, so the scope is kept as small as possible. 

Another question is whether we should set the log level back to INFO. Currently 
there is no other log message in SSL.java, but setting it back seems better and 
makes the workaround complete, excluding any possible side effect in the future. 
[~denes.gerencser]?

[~vishwajeet.dusane], Denes' question seems reasonable. Why is only one branch 
protected?

> Disable wildfly logs to the console
> ---
>
> Key: HADOOP-16174
> URL: https://issues.apache.org/jira/browse/HADOOP-16174
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Denes Gerencser
>Priority: Major
> Attachments: HADOOP-16174-001.patch
>
>
> We experience that the wildfly log
> {code:java}
> Mar 06, 2019 4:33:53 PM org.wildfly.openssl.SSL init
> INFO: WFOPENSSL0002 OpenSSL Version OpenSSL 1.0.2g  1 Mar 2016
> {code}
> (sometimes) appears on the console but it should never. Note: this is a 
> consequence of HADOOP-15851.
> Our analysis shows the reason is that 
> {code:java}
> java.util.logging.Logger.getLogger()
> {code}
> is not guaranteed to always return the _same_ logger instance, so 
> SSLSocketFactoryEx may set the log level on a different logger object than 
> the one used by wildfly-openssl 
> ([https://github.com/wildfly/wildfly-openssl/blob/ace72ba07d0c746b6eb46635f4a8b122846c47c8/java/src/main/java/org/wildfly/openssl/SSL.java#L196)].
> From javadoc of java.util.logging.Logger.getLogger:
> 'Note: The LogManager may only retain a weak reference to the newly created 
> Logger. It is important to understand that a previously created Logger with 
> the given name may be garbage collected at any time if there is no strong 
> reference to the Logger. In particular, this means that two back-to-back 
> calls like{{getLogger("MyLogger").log(...)}} may use different Logger objects 
> named "MyLogger" if there is no strong reference to the Logger named 
> "MyLogger" elsewhere in the program.'



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16174) Disable wildfly logs to the console

2019-03-07 Thread Andras Bokor (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16786879#comment-16786879
 ] 

Andras Bokor commented on HADOOP-16174:
---

[Another description of the 
problem...|http://findbugs.sourceforge.net/bugDescriptions.html#LG_LOST_LOGGER_DUE_TO_WEAK_REFERENCE]

> Disable wildfly logs to the console
> ---
>
> Key: HADOOP-16174
> URL: https://issues.apache.org/jira/browse/HADOOP-16174
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Denes Gerencser
>Priority: Major
> Attachments: HADOOP-16174-001.patch
>
>
> We experience that the wildfly log
> {code:java}
> Mar 06, 2019 4:33:53 PM org.wildfly.openssl.SSL init
> INFO: WFOPENSSL0002 OpenSSL Version OpenSSL 1.0.2g  1 Mar 2016
> {code}
> (sometimes) appears on the console but it should never. Note: this is a 
> consequence of HADOOP-15851.
> Our analysis shows the reason is that 
> {code:java}
> java.util.logging.Logger.getLogger()
> {code}
> is not guaranteed to always return the _same_ logger instance, so 
> SSLSocketFactoryEx may set the log level on a different logger object than 
> the one used by wildfly-openssl 
> ([https://github.com/wildfly/wildfly-openssl/blob/ace72ba07d0c746b6eb46635f4a8b122846c47c8/java/src/main/java/org/wildfly/openssl/SSL.java#L196)].
> From javadoc of java.util.logging.Logger.getLogger:
> 'Note: The LogManager may only retain a weak reference to the newly created 
> Logger. It is important to understand that a previously created Logger with 
> the given name may be garbage collected at any time if there is no strong 
> reference to the Logger. In particular, this means that two back-to-back 
> calls like{{getLogger("MyLogger").log(...)}} may use different Logger objects 
> named "MyLogger" if there is no strong reference to the Logger named 
> "MyLogger" elsewhere in the program.'



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16174) Disable wildfly logs to the console

2019-03-07 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16174:
--
Status: Patch Available  (was: Open)

> Disable wildfly logs to the console
> ---
>
> Key: HADOOP-16174
> URL: https://issues.apache.org/jira/browse/HADOOP-16174
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Denes Gerencser
>Priority: Major
> Attachments: HADOOP-16174-001.patch
>
>
> We experience that the wildfly log
> {code:java}
> Mar 06, 2019 4:33:53 PM org.wildfly.openssl.SSL init
> INFO: WFOPENSSL0002 OpenSSL Version OpenSSL 1.0.2g  1 Mar 2016
> {code}
> (sometimes) appears on the console but it should never. Note: this is a 
> consequence of HADOOP-15851.
> Our analysis shows the reason is that 
> {code:java}
> java.util.logging.Logger.getLogger()
> {code}
> is not guaranteed to always return the _same_ logger instance, so 
> SSLSocketFactoryEx may set the log level on a different logger object than 
> the one used by wildfly-openssl 
> ([https://github.com/wildfly/wildfly-openssl/blob/ace72ba07d0c746b6eb46635f4a8b122846c47c8/java/src/main/java/org/wildfly/openssl/SSL.java#L196)].
> From javadoc of java.util.logging.Logger.getLogger:
> 'Note: The LogManager may only retain a weak reference to the newly created 
> Logger. It is important to understand that a previously created Logger with 
> the given name may be garbage collected at any time if there is no strong 
> reference to the Logger. In particular, this means that two back-to-back 
> calls like{{getLogger("MyLogger").log(...)}} may use different Logger objects 
> named "MyLogger" if there is no strong reference to the Logger named 
> "MyLogger" elsewhere in the program.'



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16174) Disable wildfly logs to the console

2019-03-07 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16174:
--
Issue Type: Bug  (was: Task)

> Disable wildfly logs to the console
> ---
>
> Key: HADOOP-16174
> URL: https://issues.apache.org/jira/browse/HADOOP-16174
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Denes Gerencser
>Priority: Major
> Attachments: HADOOP-16174-001.patch
>
>
> We experience that the wildfly log
> {code:java}
> Mar 06, 2019 4:33:53 PM org.wildfly.openssl.SSL init
> INFO: WFOPENSSL0002 OpenSSL Version OpenSSL 1.0.2g  1 Mar 2016
> {code}
> (sometimes) appears on the console but it should never. Note: this is a 
> consequence of HADOOP-15851.
> Our analysis shows the reason is that 
> {code:java}
> java.util.logging.Logger.getLogger()
> {code}
> is not guaranteed to always return the _same_ logger instance, so 
> SSLSocketFactoryEx may set the log level on a different logger object than 
> the one used by wildfly-openssl 
> ([https://github.com/wildfly/wildfly-openssl/blob/ace72ba07d0c746b6eb46635f4a8b122846c47c8/java/src/main/java/org/wildfly/openssl/SSL.java#L196)].
> From javadoc of java.util.logging.Logger.getLogger:
> 'Note: The LogManager may only retain a weak reference to the newly created 
> Logger. It is important to understand that a previously created Logger with 
> the given name may be garbage collected at any time if there is no strong 
> reference to the Logger. In particular, this means that two back-to-back 
> calls like{{getLogger("MyLogger").log(...)}} may use different Logger objects 
> named "MyLogger" if there is no strong reference to the Logger named 
> "MyLogger" elsewhere in the program.'



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15969) ABFS: getNamespaceEnabled can fail blocking user access thru ACLs

2018-12-04 Thread Andras Bokor (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16709064#comment-16709064
 ] 

Andras Bokor commented on HADOOP-15969:
---

ITestGetNameSpaceEnabled.java:
Can you please double check that the "XNS is not enabled" and "XNS is enabled" 
strings are correct? I'd expect them in the reverse order.
Also, can you use assertFalse in testNonXNSAccount and correct the message?

> ABFS: getNamespaceEnabled can fail blocking user access thru ACLs
> -
>
> Key: HADOOP-15969
> URL: https://issues.apache.org/jira/browse/HADOOP-15969
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15969-001.patch, HADOOP-15969-002.patch
>
>
> The Get Filesystem Properties operation requires Read permission to the 
> Filesystem.  Read permission to the Filesystem can only be granted thru RBAC, 
> Shared Key, or SAS.  This prevents giving low privilege users access to 
> specific files or directories within the filesystem.  An administrator should 
> be able to set an ACL on a file granting read permission to a user, without 
> giving them read permission to the entire Filesystem.
> Fortunately there is another way to determine if HNS is enabled.  The Get 
> Path Access Control (getAclStatus) operation only requires traversal access, 
> and for the root folder / all authenticated users have traversal access.
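
The alternative probe described above boils down to something like the following 
sketch (the client and exception types are stand-ins, not the real ABFS classes):
{code:java}
import java.io.IOException;

final class HnsProbeSketch {
  /** Stand-in for the ABFS client call that issues getAclStatus against the store. */
  interface AclClient {
    void getAclStatus(String path) throws IOException;
  }

  /** Stand-in for a service exception carrying the HTTP status code. */
  static class RestException extends IOException {
    private final int statusCode;
    RestException(int statusCode) {
      this.statusCode = statusCode;
    }
    int getStatusCode() {
      return statusCode;
    }
  }

  /** True if the account has a hierarchical namespace (HNS). */
  static boolean isNamespaceEnabled(AclClient client) throws IOException {
    try {
      client.getAclStatus("/");  // only needs traversal access on the root folder
      return true;               // ACL call succeeded: HNS account
    } catch (RestException e) {
      if (e.getStatusCode() == 400) {
        return false;            // 400: ACLs not supported, i.e. flat namespace
      }
      throw e;                   // any other status is a genuine failure
    }
  }
}
{code}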



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-7022) MD5Hash does not return file size

2018-10-25 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-7022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-7022.
--
Resolution: Won't Fix

> MD5Hash does not return file size
> -
>
> Key: HADOOP-7022
> URL: https://issues.apache.org/jira/browse/HADOOP-7022
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 0.21.0
>Reporter: Junjie Liang
>Priority: Minor
> Attachments: 7022.PATCH
>
>
> Currently, MD5Hash reads the file but does not return the size of the file. 
> We can tweak the function so that we get the additional bit of information 
> essentially for free.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15744) AbstractContractAppendTest fails against HDFS on HADOOP-15407 branch

2018-09-11 Thread Andras Bokor (JIRA)
Andras Bokor created HADOOP-15744:
-

 Summary: AbstractContractAppendTest fails against HDFS on 
HADOOP-15407 branch
 Key: HADOOP-15744
 URL: https://issues.apache.org/jira/browse/HADOOP-15744
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Andras Bokor
Assignee: Andras Bokor


{code:java}
mvn test 
-Dtest=TestHDFSContractAppend#testAppendDirectory,TestRouterWebHDFSContractAppend#testAppendDirectory{code}
In the case of TestHDFSContractAppend the test expects FileAlreadyExistsException, 
but HDFS sends the exception wrapped in a RemoteException.
In the case of TestRouterWebHDFSContractAppend the append does not even throw an 
exception.
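
For the HDFS case, one way the expectation could be reconciled is to unwrap the 
RemoteException before asserting, along these lines (a sketch of the idea, not a 
proposed patch):
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileAlreadyExistsException;
import org.apache.hadoop.ipc.RemoteException;

final class AppendExceptionHelper {
  /** Return the wrapped exception so local FS and HDFS failures look alike. */
  static IOException unwrapIfRemote(IOException e) {
    if (e instanceof RemoteException) {
      return ((RemoteException) e)
          .unwrapRemoteException(FileAlreadyExistsException.class);
    }
    return e;
  }
}
{code}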

[~ste...@apache.org], [~tmarquardt], any thoughts?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15680) ITestNativeAzureFileSystemConcurrencyLive times out

2018-08-23 Thread Andras Bokor (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16590110#comment-16590110
 ] 

Andras Bokor commented on HADOOP-15680:
---

Thanks, attached a patch with 30 sec timeout.
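
For context, bumping a per-test timeout in a JUnit 4 test usually looks like the 
snippet below (illustrative only; the attached patch may simply change an existing 
timeout constant or annotation value instead):
{code:java}
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

public class ExampleConcurrencyLiveTest {
  // Allow up to 30 seconds per test method instead of the previous, tighter limit.
  @Rule
  public Timeout perTestTimeout = Timeout.seconds(30);

  @Test
  public void testConcurrentDeleteAndOpen() throws Exception {
    // test body elided
  }
}
{code}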

> ITestNativeAzureFileSystemConcurrencyLive times out
> ---
>
> Key: HADOOP-15680
> URL: https://issues.apache.org/jira/browse/HADOOP-15680
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-15680.001.patch, HADOOP-15680.002.patch
>
>
> When I am running tests locally ITestNativeAzureFileSystemConcurrencyLive 
> sometimes times out.
> I would like to increase the timeout to avoid unnecessary noise.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15680) ITestNativeAzureFileSystemConcurrencyLive times out

2018-08-23 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-15680:
--
Attachment: HADOOP-15680.002.patch

> ITestNativeAzureFileSystemConcurrencyLive times out
> ---
>
> Key: HADOOP-15680
> URL: https://issues.apache.org/jira/browse/HADOOP-15680
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-15680.001.patch, HADOOP-15680.002.patch
>
>
> When I am running tests locally ITestNativeAzureFileSystemConcurrencyLive 
> sometimes times out.
> I would like to increase the timeout to avoid unnecessary noise.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15680) ITestNativeAzureFileSystemConcurrencyLive times out

2018-08-16 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-15680:
--
Attachment: HADOOP-15680.001.patch

> ITestNativeAzureFileSystemConcurrencyLive times out
> ---
>
> Key: HADOOP-15680
> URL: https://issues.apache.org/jira/browse/HADOOP-15680
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-15680.001.patch
>
>
> When I am running tests locally ITestNativeAzureFileSystemConcurrencyLive 
> sometimes times out.
> I would like to increase the timeout to avoid unnecessary noise.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15680) ITestNativeAzureFileSystemConcurrencyLive times out

2018-08-16 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-15680:
--
Status: Patch Available  (was: Open)

> ITestNativeAzureFileSystemConcurrencyLive times out
> ---
>
> Key: HADOOP-15680
> URL: https://issues.apache.org/jira/browse/HADOOP-15680
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-15680.001.patch
>
>
> When I am running tests locally ITestNativeAzureFileSystemConcurrencyLive 
> sometimes times out.
> I would like to increase the timeout to avoid unnecessary noise.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15680) ITestNativeAzureFileSystemConcurrencyLive times out

2018-08-16 Thread Andras Bokor (JIRA)
Andras Bokor created HADOOP-15680:
-

 Summary: ITestNativeAzureFileSystemConcurrencyLive times out
 Key: HADOOP-15680
 URL: https://issues.apache.org/jira/browse/HADOOP-15680
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Andras Bokor
Assignee: Andras Bokor


When I am running tests locally ITestNativeAzureFileSystemConcurrencyLive 
sometimes times out.

I would like to increase the timeout to avoid unnecessary noise.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15668) mvn test goal fails on HADOOP-15407 branch

2018-08-13 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-15668.
---
Resolution: Fixed

It's working now. Thanks!

> mvn test goal fails on HADOOP-15407 branch
> --
>
> Key: HADOOP-15668
> URL: https://issues.apache.org/jira/browse/HADOOP-15668
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>
> It's very easy to reproduce:
> {code}cd hadoop-common-project/hadoop-common/
> mvn test -Dtest=Whatever{code}
> The error is due to
> {code} [exec] Running bats -t hadoop_stop_daemon.bats
>  [exec] 1..2
>  [exec] ok 1 hadoop_stop_daemon_changing_pid
>  [exec] not ok 2 hadoop_stop_daemon_force_kill
>  [exec] # (in test file hadoop_stop_daemon.bats, line 43)
>  [exec] #   `[ -f ${TMP}/pidfile ]' failed
>  [exec] # bindir: 
> /Users/abokor/work/hadoop/hadoop-common-project/hadoop-common/src/test/scripts
>  [exec] # sh: 
> /Users/abokor/work/hadoop/hadoop-common-project/hadoop-common/src/test/scripts/process_with_sigterm_trap.sh:
>  No such file or directory{code}
> This happens because 3 commits actually belong to HADOOP-15527, but the 
> HADOOP-15407 branch contains only one of them, so the test won't find 
> process_with_sigterm_trap.sh.
> I am not sure what the best practice is to solve this kind of issue. Is a 
> patch required, or can somebody just cherry-pick the missing commits?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15668) mvn test goal fails on HADOOP-15407 branch

2018-08-10 Thread Andras Bokor (JIRA)
Andras Bokor created HADOOP-15668:
-

 Summary: mvn test goal fails on HADOOP-15407 branch
 Key: HADOOP-15668
 URL: https://issues.apache.org/jira/browse/HADOOP-15668
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Andras Bokor
Assignee: Andras Bokor


It's very easy to reproduce:
{code}cd hadoop-common-project/hadoop-common/
mvn test -Dtest=Whatever{code}

The error is due to
{code} [exec] Running bats -t hadoop_stop_daemon.bats
 [exec] 1..2
 [exec] ok 1 hadoop_stop_daemon_changing_pid
 [exec] not ok 2 hadoop_stop_daemon_force_kill
 [exec] # (in test file hadoop_stop_daemon.bats, line 43)
 [exec] #   `[ -f ${TMP}/pidfile ]' failed
 [exec] # bindir: 
/Users/abokor/work/hadoop/hadoop-common-project/hadoop-common/src/test/scripts
 [exec] # sh: 
/Users/abokor/work/hadoop/hadoop-common-project/hadoop-common/src/test/scripts/process_with_sigterm_trap.sh:
 No such file or directory{code}


This happens because 3 commits actually belong to HADOOP-15527, but the 
HADOOP-15407 branch contains only one of them, so the test won't find 
process_with_sigterm_trap.sh.

I am not sure what the best practice is to solve this kind of issue. Is a patch 
required, or can somebody just cherry-pick the missing commits?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x

2018-06-08 Thread Andras Bokor (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506123#comment-16506123
 ] 

Andras Bokor commented on HADOOP-14178:
---

[~ajisakaa],

# The one checkstyle warning makes sense: importing ContainerStatus in 
TestChildQueueOrder is no longer needed after the patch.
# TestTaskAttemptListenerImpl#testCheckpointIDTracking: the mockTask, mockJob, 
and clock objects became unused.

Other than these minor things I do not have any new comments.

> Move Mockito up to version 2.x
> --
>
> Key: HADOOP-14178
> URL: https://issues.apache.org/jira/browse/HADOOP-14178
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, 
> HADOOP-14178.003.patch, HADOOP-14178.004.patch, HADOOP-14178.005-wip.patch, 
> HADOOP-14178.005-wip2.patch, HADOOP-14178.005-wip3.patch, 
> HADOOP-14178.005-wip4.patch, HADOOP-14178.005-wip5.patch, 
> HADOOP-14178.005-wip6.patch, HADOOP-14178.005.patch, HADOOP-14178.006.patch, 
> HADOOP-14178.007.patch, HADOOP-14178.008.patch, HADOOP-14178.009.patch, 
> HADOOP-14178.010.patch, HADOOP-14178.011.patch, HADOOP-14178.012.patch, 
> HADOOP-14178.013.patch, HADOOP-14178.014.patch, HADOOP-14178.015.patch, 
> HADOOP-14178.016.patch, HADOOP-14178.017.patch
>
>
> I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 
> since the switch to maven in 2011. 
> Mockito is now at version 2.1, [with lots of Java 8 
> support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. 
> That' s not just defining actions as closures, but in supporting Optional 
> types, mocking methods in interfaces, etc. 
> It's only used for testing, and, *provided there aren't regressions*, cost of 
> upgrade is low. The good news: test tools usually come with good test 
> coverage. The bad: mockito does go deep into java bytecodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15361) RawLocalFileSystem should use Java nio framework for rename

2018-04-27 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16456240#comment-16456240
 ] 

Andras Bokor commented on HADOOP-15361:
---

Reattaching patch 03. I cannot reproduce the failing UT and logs are no longer 
available on Jenkins.

> RawLocalFileSystem should use Java nio framework for rename
> ---
>
> Key: HADOOP-15361
> URL: https://issues.apache.org/jira/browse/HADOOP-15361
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>  Labels: incompatibleChange
> Attachments: HADOOP-15361.01.patch, HADOOP-15361.02.patch, 
> HADOOP-15361.03.patch, HADOOP-15361.04.patch
>
>
> Currently RawLocalFileSystem uses a fallback logic for cross-volume renames. 
> The fallback logic is a copy-on-fail logic, so when rename fails it copies the 
> source and then deletes it.
>  An additional fallback logic was needed for Windows to provide POSIX rename 
> behavior.
> Due to the fallback logic RawLocalFileSystem does not pass the contract tests 
> (HADOOP-13082).
> By using the Java NIO framework both could be eliminated, since it is not 
> platform dependent and provides cross-volume rename.
> In addition, the fallback logic for Windows is not correct, since Java IO 
> overrides the destination only if the source is also a directory, but the 
> handleEmptyDstDirectoryOnWindows method checks only the destination. That 
> means rename allows overriding a directory with a file on Windows but not on 
> Unix.
> File#renameTo and Files#move are not 100% compatible:
>  If the source is a directory and the destination is an empty directory, 
> File#renameTo overrides the destination but Files#move does not. We have to use 
> {{StandardCopyOption.REPLACE_EXISTING}}, but it overrides the destination even 
> if the source or the destination is a file. So to make them compatible we 
> have to check that either the source or the destination is a directory 
> before we add the copy option.
> I think the correct strategy is
>  * Where the contract test passed so far, it should pass after this
>  * Where the contract test failed because of a Java-specific thing and not 
> because of the fallback logic, we should keep the original behavior.
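
A minimal sketch of the Files#move handling described in the issue above 
(illustrative; the real change lives in RawLocalFileSystem and covers more cases):
{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.file.CopyOption;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

final class NioRenameSketch {
  /** Rename src to dst with semantics close to File#renameTo, as discussed above. */
  static void rename(File src, File dst) throws IOException {
    // Only request REPLACE_EXISTING when a directory is involved, so an existing
    // plain-file destination is not silently overwritten.
    CopyOption[] options = (src.isDirectory() || dst.isDirectory())
        ? new CopyOption[] {StandardCopyOption.REPLACE_EXISTING}
        : new CopyOption[0];
    Files.move(src.toPath(), dst.toPath(), options);
  }
}
{code}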



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15361) RawLocalFileSystem should use Java nio framework for rename

2018-04-27 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-15361:
--
Attachment: HADOOP-15361.04.patch

> RawLocalFileSystem should use Java nio framework for rename
> ---
>
> Key: HADOOP-15361
> URL: https://issues.apache.org/jira/browse/HADOOP-15361
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>  Labels: incompatibleChange
> Attachments: HADOOP-15361.01.patch, HADOOP-15361.02.patch, 
> HADOOP-15361.03.patch, HADOOP-15361.04.patch
>
>
> Currently RawLocalFileSystem uses a fallback logic for cross-volume renames. 
> The fallback logic is a copy-on-fail logic, so when rename fails it copies the 
> source and then deletes it.
>  An additional fallback logic was needed for Windows to provide POSIX rename 
> behavior.
> Due to the fallback logic RawLocalFileSystem does not pass the contract tests 
> (HADOOP-13082).
> By using the Java NIO framework both could be eliminated, since it is not 
> platform dependent and provides cross-volume rename.
> In addition, the fallback logic for Windows is not correct, since Java IO 
> overrides the destination only if the source is also a directory, but the 
> handleEmptyDstDirectoryOnWindows method checks only the destination. That 
> means rename allows overriding a directory with a file on Windows but not on 
> Unix.
> File#renameTo and Files#move are not 100% compatible:
>  If the source is a directory and the destination is an empty directory, 
> File#renameTo overrides the destination but Files#move does not. We have to use 
> {{StandardCopyOption.REPLACE_EXISTING}}, but it overrides the destination even 
> if the source or the destination is a file. So to make them compatible we 
> have to check that either the source or the destination is a directory 
> before we add the copy option.
> I think the correct strategy is
>  * Where the contract test passed so far, it should pass after this
>  * Where the contract test failed because of a Java-specific thing and not 
> because of the fallback logic, we should keep the original behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15380) TestViewFileSystemLocalFileSystem#testTrashRoot leaves an unnecessary file

2018-04-27 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16456183#comment-16456183
 ] 

Andras Bokor commented on HADOOP-15380:
---

It's not a test issue but a problem with LocalFileSystem. ChecksumFileSystem 
has no override for rename(Path, Path, Options.Rename...), so FilterFileSystem's 
method is called instead, and it does not handle crc files. HADOOP-15388 will 
solve this.
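
The fix essentially means giving ChecksumFileSystem its own rename(Path, Path, 
Options.Rename...) that drags the sidecar .crc file along. A rough, simplified 
sketch of the idea (not the actual HADOOP-15388 patch; rename options handling 
is elided):
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class CrcAwareRenameSketch {
  /** Mirror of ChecksumFileSystem's ".<name>.crc" naming convention. */
  static Path checksumFile(Path file) {
    return new Path(file.getParent(), "." + file.getName() + ".crc");
  }

  /** Rename the data file and move its checksum file along with it. */
  static boolean rename(FileSystem rawFs, Path src, Path dst) throws IOException {
    if (!rawFs.rename(src, dst)) {          // data file first
      return false;
    }
    Path srcCrc = checksumFile(src);
    if (rawFs.exists(srcCrc)) {             // then the sidecar .crc, if present
      rawFs.rename(srcCrc, checksumFile(dst));
    }
    return true;
  }
}
{code}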

> TestViewFileSystemLocalFileSystem#testTrashRoot leaves an unnecessary file
> --
>
> Key: HADOOP-15380
> URL: https://issues.apache.org/jira/browse/HADOOP-15380
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>
> After running
> {code}mvn test -Dtest=TestViewFileSystemLocalFileSystem#testTrashRoot
> git status{code}
> Git reports an untracked file: 
> {{hadoop-common-project/hadoop-common/.debug.log.crc}}
> It seems to be a cleanup issue.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15388) LocalFilesystem#rename(Path, Path, Options.Rename...) does not handle crc files

2018-04-16 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-15388:
--
Status: Patch Available  (was: Open)

> LocalFilesystem#rename(Path, Path, Options.Rename...) does not handle crc 
> files
> ---
>
> Key: HADOOP-15388
> URL: https://issues.apache.org/jira/browse/HADOOP-15388
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-15388.01.patch
>
>
> ChecksumFileSystem#rename(Path, Path, Options.Rename...) is missing, and 
> FilterFileSystem does not take care of crc files. That causes abandoned crc 
> files in case of a rename.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15388) LocalFilesystem#rename(Path, Path, Options.Rename...) does not handle crc files

2018-04-16 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-15388:
--
Attachment: HADOOP-15388.01.patch

> LocalFilesystem#rename(Path, Path, Options.Rename...) does not handle crc 
> files
> ---
>
> Key: HADOOP-15388
> URL: https://issues.apache.org/jira/browse/HADOOP-15388
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-15388.01.patch
>
>
> ChecksumFileSystem#rename(Path, Path, Options.Rename...) is missing and
> FilterFileSystem does not deal with crc files. That causes abandoned crc
> files in case of rename.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-15388) LocalFilesystem#rename(Path, Path, Options.Rename...) does not handle crc files

2018-04-16 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor moved HDFS-13457 to HADOOP-15388:
--

Target Version/s: 3.1.0  (was: 3.1.0)
 Key: HADOOP-15388  (was: HDFS-13457)
 Project: Hadoop Common  (was: Hadoop HDFS)

> LocalFilesystem#rename(Path, Path, Options.Rename...) does not handle crc 
> files
> ---
>
> Key: HADOOP-15388
> URL: https://issues.apache.org/jira/browse/HADOOP-15388
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>
> ChecksumFileSystem#rename(Path, Path, Options.Rename...) is missing and
> FilterFileSystem does not deal with crc files. That causes abandoned crc
> files in case of rename.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15380) TestViewFileSystemLocalFileSystem#testTrashRoot leaves an unnecessary file

2018-04-11 Thread Andras Bokor (JIRA)
Andras Bokor created HADOOP-15380:
-

 Summary: TestViewFileSystemLocalFileSystem#testTrashRoot leaves an 
unnecessary file
 Key: HADOOP-15380
 URL: https://issues.apache.org/jira/browse/HADOOP-15380
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Andras Bokor
Assignee: Andras Bokor


After running

{code}mvn test -Dtest=TestViewFileSystemLocalFileSystem#testTrashRoot
git status{code}
Git reports an untracked file: 
{{hadoop-common-project/hadoop-common/.debug.log.crc}}
It seems to be a cleanup issue.
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15361) RawLocalFileSystem should use Java nio framework for rename

2018-04-11 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-15361:
--
Attachment: HADOOP-15361.03.patch

> RawLocalFileSystem should use Java nio framework for rename
> ---
>
> Key: HADOOP-15361
> URL: https://issues.apache.org/jira/browse/HADOOP-15361
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>  Labels: incompatibleChange
> Attachments: HADOOP-15361.01.patch, HADOOP-15361.02.patch, 
> HADOOP-15361.03.patch
>
>
> Currently RawLocalFileSystem uses a fallback logic for cross-volume renames.
> The fallback logic is copy-on-fail: when rename fails it copies the source and
> then deletes it.
>  An additional fallback logic was needed for Windows to provide POSIX rename
> behavior.
> Due to the fallback logic RawLocalFileSystem does not pass the contract tests
> (HADOOP-13082).
> By using the Java NIO framework both could be eliminated, since it is not
> platform dependent and provides cross-volume rename.
> In addition the fallback logic for Windows is not correct, since Java IO
> overwrites the destination only if the source is also a directory, but the
> handleEmptyDstDirectoryOnWindows method checks only the destination. That
> means rename allows overwriting a directory with a file on Windows but not on
> Unix.
> File#renameTo and Files#move are not 100% compatible:
>  If the source is a directory and the destination is an empty directory,
> File#renameTo overwrites the destination but Files#move does not. We have to
> use {{StandardCopyOption.REPLACE_EXISTING}}, but that overwrites the
> destination even if the source or the destination is a file. So to make them
> compatible we have to check whether either the source or the destination is a
> directory before we add the copy option.
> I think the correct strategy is:
>  * Where the contract test passed so far, it should still pass after this
> change.
>  * Where the contract test failed because of a Java-specific quirk and not
> because of the fallback logic, we should keep the original behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15361) RawLocalFileSystem should use Java nio framework for rename

2018-04-10 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432622#comment-16432622
 ] 

Andras Bokor commented on HADOOP-15361:
---

Patch 02 keeps the behavior of the old logic where necessary.

Let's see what Hadoop QA thinks.

> RawLocalFileSystem should use Java nio framework for rename
> ---
>
> Key: HADOOP-15361
> URL: https://issues.apache.org/jira/browse/HADOOP-15361
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>  Labels: incompatibleChange
> Attachments: HADOOP-15361.01.patch, HADOOP-15361.02.patch
>
>
> Currently RawLocalFileSystem uses a fallback logic for cross-volume renames.
> The fallback logic is copy-on-fail: when rename fails it copies the source and
> then deletes it.
>  An additional fallback logic was needed for Windows to provide POSIX rename
> behavior.
> Due to the fallback logic RawLocalFileSystem does not pass the contract tests
> (HADOOP-13082).
> By using the Java NIO framework both could be eliminated, since it is not
> platform dependent and provides cross-volume rename.
> In addition the fallback logic for Windows is not correct, since Java IO
> overwrites the destination only if the source is also a directory, but the
> handleEmptyDstDirectoryOnWindows method checks only the destination. That
> means rename allows overwriting a directory with a file on Windows but not on
> Unix.
> File#renameTo and Files#move are not 100% compatible:
>  If the source is a directory and the destination is an empty directory,
> File#renameTo overwrites the destination but Files#move does not. We have to
> use {{StandardCopyOption.REPLACE_EXISTING}}, but that overwrites the
> destination even if the source or the destination is a file. So to make them
> compatible we have to check whether either the source or the destination is a
> directory before we add the copy option.
> I think the correct strategy is:
>  * Where the contract test passed so far, it should still pass after this
> change.
>  * Where the contract test failed because of a Java-specific quirk and not
> because of the fallback logic, we should keep the original behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15361) RawLocalFileSystem should use Java nio framework for rename

2018-04-10 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-15361:
--
Attachment: HADOOP-15361.02.patch

> RawLocalFileSystem should use Java nio framework for rename
> ---
>
> Key: HADOOP-15361
> URL: https://issues.apache.org/jira/browse/HADOOP-15361
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>  Labels: incompatibleChange
> Attachments: HADOOP-15361.01.patch, HADOOP-15361.02.patch
>
>
> Currently RawLocalFileSystem uses a fallback logic for cross-volume renames.
> The fallback logic is copy-on-fail: when rename fails it copies the source and
> then deletes it.
>  An additional fallback logic was needed for Windows to provide POSIX rename
> behavior.
> Due to the fallback logic RawLocalFileSystem does not pass the contract tests
> (HADOOP-13082).
> By using the Java NIO framework both could be eliminated, since it is not
> platform dependent and provides cross-volume rename.
> In addition the fallback logic for Windows is not correct, since Java IO
> overwrites the destination only if the source is also a directory, but the
> handleEmptyDstDirectoryOnWindows method checks only the destination. That
> means rename allows overwriting a directory with a file on Windows but not on
> Unix.
> File#renameTo and Files#move are not 100% compatible:
>  If the source is a directory and the destination is an empty directory,
> File#renameTo overwrites the destination but Files#move does not. We have to
> use {{StandardCopyOption.REPLACE_EXISTING}}, but that overwrites the
> destination even if the source or the destination is a file. So to make them
> compatible we have to check whether either the source or the destination is a
> directory before we add the copy option.
> I think the correct strategy is:
>  * Where the contract test passed so far, it should still pass after this
> change.
>  * Where the contract test failed because of a Java-specific quirk and not
> because of the fallback logic, we should keep the original behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-5342) DataNodes do not start up because InconsistentFSStateException on just part of the disks in use

2018-04-10 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-5342.
--
Resolution: Cannot Reproduce

The last reported occurrence was in 2010, so closing as Cannot Reproduce.
Please reopen if you still experience this.

> DataNodes do not start up because InconsistentFSStateException on just part 
> of the disks in use
> ---
>
> Key: HADOOP-5342
> URL: https://issues.apache.org/jira/browse/HADOOP-5342
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.18.2
>Reporter: Christian Kunz
>Assignee: Hairong Kuang
>Priority: Critical
>
> After restarting a cluster (including rebooting) the dfs got corrupted 
> because many DataNodes did not start up, running into the following exception:
> 2009-02-26 22:33:53,774 ERROR org.apache.hadoop.dfs.DataNode: 
> org.apache.hadoop.dfs.InconsistentFSStateException: Directory xxx  is in an 
> inconsistent state: version file in current directory is missing.
>   at 
> org.apache.hadoop.dfs.Storage$StorageDirectory.analyzeStorage(Storage.java:326)
>   at 
> org.apache.hadoop.dfs.DataStorage.recoverTransitionRead(DataStorage.java:105)
>   at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:306)
>   at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:223)
>   at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:3030)
>   at 
> org.apache.hadoop.dfs.DataNode.instantiateDataNode(DataNode.java:2985)
>   at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:2993)
>   at org.apache.hadoop.dfs.DataNode.main(DataNode.java:3115)
> This happens when using multiple disks with at least one previously marked as 
> read-only, such that the storage version became out-dated, but after reboot 
> it was mounted read-write, resulting in the DataNode not starting because of 
> the out-dated version.
> This is a big headache. If a DataNode has multiple disks of which at least 
> one has the correct storage version then out-dated versions should not bring 
> down the DataNode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-7031) Make DelegateToFileSystem constructor public to allow implementations from other packages for testing

2018-04-10 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-7031.
--
Resolution: Won't Fix

There has been no activity, or even a new watcher, on this issue in the last 7
years, so closing for now.

> Make DelegateToFileSystem constructor public to allow implementations from 
> other packages for testing
> -
>
> Key: HADOOP-7031
> URL: https://issues.apache.org/jira/browse/HADOOP-7031
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Krishna Ramachandran
>Priority: Major
>
> MapReduce tests use FileSystem APIs to implement a TestFileSystem to simulate
> various error and failure conditions. This is no longer possible with the new
> FileContext APIs.
> For example, we would like to extend DelegateToFileSystem in the unit testing
> framework:
>   public static class TestFileSystem extends DelegateToFileSystem {
> public TestFileSystem(Configuration conf) throws IOException, 
> URISyntaxException {
>   super(URI.create("faildel:///"), new FakeFileSystem(conf), conf, 
> "faildel",
>   false);
> }
>   }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15361) RawLocalFileSystem should use Java nio framework for rename

2018-04-07 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429576#comment-16429576
 ] 

Andras Bokor commented on HADOOP-15361:
---

[~ste...@apache.org],

I am a bit confused about how to resolve the caveats.
{quote}The compatibility is the troublespot here. How does it relate to what we 
have in filesystem.md?
{quote}
There are some caveats and differences between RawLocal and HDFS that could be 
affected:
* filesystem.md states that if the source does not exist we should throw
FileNotFoundException, but HDFS does not throw an exception and the contract
test also expects only a false return value
(FileSystemContractBaseTest#testRenameNonExistentPath).
 * the local filesystem is able to replace a file but HDFS is not
 * if the parent folder of the destination does not exist, HDFS fails but the
local filesystem creates the missing directories.

What is the best strategy here? Should we keep in sync with filesystem.md or
follow HDFS and the contract tests?

To me it seems filesystem.md just states what is happening; these behaviors
are not intended.
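
To make the first caveat concrete, a small standalone sketch (my own example,
not part of any contract test) that just prints which of the two behaviors the
local filesystem shows for a missing rename source:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RenameMissingSourceSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.getLocal(new Configuration());
    Path missingSrc = new Path("/tmp/sketch-does-not-exist");
    Path dst = new Path("/tmp/sketch-renamed");
    // filesystem.md documents a FileNotFoundException for a missing source,
    // while the contract test (testRenameNonExistentPath) only expects false.
    System.out.println("rename(missing, dst) returned "
        + fs.rename(missingSrc, dst));
  }
}
{code}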

> RawLocalFileSystem should use Java nio framework for rename
> ---
>
> Key: HADOOP-15361
> URL: https://issues.apache.org/jira/browse/HADOOP-15361
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>  Labels: incompatibleChange
> Attachments: HADOOP-15361.01.patch
>
>
> Currently RawLocalFileSystem uses a fallback logic for cross-volume renames.
> The fallback logic is copy-on-fail: when rename fails it copies the source and
> then deletes it.
>  An additional fallback logic was needed for Windows to provide POSIX rename
> behavior.
> Due to the fallback logic RawLocalFileSystem does not pass the contract tests
> (HADOOP-13082).
> By using the Java NIO framework both could be eliminated, since it is not
> platform dependent and provides cross-volume rename.
> In addition the fallback logic for Windows is not correct, since Java IO
> overwrites the destination only if the source is also a directory, but the
> handleEmptyDstDirectoryOnWindows method checks only the destination. That
> means rename allows overwriting a directory with a file on Windows but not on
> Unix.
> File#renameTo and Files#move are not 100% compatible:
>  If the source is a directory and the destination is an empty directory,
> File#renameTo overwrites the destination but Files#move does not. We have to
> use {{StandardCopyOption.REPLACE_EXISTING}}, but that overwrites the
> destination even if the source or the destination is a file. So to make them
> compatible we have to check whether either the source or the destination is a
> directory before we add the copy option.
> I think the correct strategy is:
>  * Where the contract test passed so far, it should still pass after this
> change.
>  * Where the contract test failed because of a Java-specific quirk and not
> because of the fallback logic, we should keep the original behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15361) RawLocalFileSystem should use Java nio framework for rename

2018-04-04 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-15361:
--
Attachment: HADOOP-15361.01.patch

> RawLocalFileSystem should use Java nio framework for rename
> ---
>
> Key: HADOOP-15361
> URL: https://issues.apache.org/jira/browse/HADOOP-15361
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>  Labels: incompatibleChange
> Attachments: HADOOP-15361.01.patch
>
>
> Currently RawLocalFileSystem uses a fallback logic for cross-volume renames.
> The fallback logic is copy-on-fail: when rename fails it copies the source and
> then deletes it.
>  An additional fallback logic was needed for Windows to provide POSIX rename
> behavior.
> Due to the fallback logic RawLocalFileSystem does not pass the contract tests
> (HADOOP-13082).
> By using the Java NIO framework both could be eliminated, since it is not
> platform dependent and provides cross-volume rename.
> In addition the fallback logic for Windows is not correct, since Java IO
> overwrites the destination only if the source is also a directory, but the
> handleEmptyDstDirectoryOnWindows method checks only the destination. That
> means rename allows overwriting a directory with a file on Windows but not on
> Unix.
> File#renameTo and Files#move are not 100% compatible:
>  If the source is a directory and the destination is an empty directory,
> File#renameTo overwrites the destination but Files#move does not. We have to
> use {{StandardCopyOption.REPLACE_EXISTING}}, but that overwrites the
> destination even if the source or the destination is a file. So to make them
> compatible we have to check whether either the source or the destination is a
> directory before we add the copy option.
> I think the correct strategy is:
>  * Where the contract test passed so far, it should still pass after this
> change.
>  * Where the contract test failed because of a Java-specific quirk and not
> because of the fallback logic, we should keep the original behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15361) RawLocalFileSystem should use Java nio framework for rename

2018-04-04 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-15361:
--
Status: Patch Available  (was: Open)

> RawLocalFileSystem should use Java nio framework for rename
> ---
>
> Key: HADOOP-15361
> URL: https://issues.apache.org/jira/browse/HADOOP-15361
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>  Labels: incompatibleChange
> Attachments: HADOOP-15361.01.patch
>
>
> Currently RawLocalFileSystem uses a fallback logic for cross-volume renames.
> The fallback logic is copy-on-fail: when rename fails it copies the source and
> then deletes it.
>  An additional fallback logic was needed for Windows to provide POSIX rename
> behavior.
> Due to the fallback logic RawLocalFileSystem does not pass the contract tests
> (HADOOP-13082).
> By using the Java NIO framework both could be eliminated, since it is not
> platform dependent and provides cross-volume rename.
> In addition the fallback logic for Windows is not correct, since Java IO
> overwrites the destination only if the source is also a directory, but the
> handleEmptyDstDirectoryOnWindows method checks only the destination. That
> means rename allows overwriting a directory with a file on Windows but not on
> Unix.
> File#renameTo and Files#move are not 100% compatible:
>  If the source is a directory and the destination is an empty directory,
> File#renameTo overwrites the destination but Files#move does not. We have to
> use {{StandardCopyOption.REPLACE_EXISTING}}, but that overwrites the
> destination even if the source or the destination is a file. So to make them
> compatible we have to check whether either the source or the destination is a
> directory before we add the copy option.
> I think the correct strategy is:
>  * Where the contract test passed so far, it should still pass after this
> change.
>  * Where the contract test failed because of a Java-specific quirk and not
> because of the fallback logic, we should keep the original behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15361) RawLocalFileSystem should use Java nio framework for rename

2018-04-04 Thread Andras Bokor (JIRA)
Andras Bokor created HADOOP-15361:
-

 Summary: RawLocalFileSystem should use Java nio framework for 
rename
 Key: HADOOP-15361
 URL: https://issues.apache.org/jira/browse/HADOOP-15361
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Andras Bokor
Assignee: Andras Bokor


Currently RawLocalFileSystem uses a fallback logic for cross-volume renames.
The fallback logic is copy-on-fail: when rename fails it copies the source and
then deletes it.
 An additional fallback logic was needed for Windows to provide POSIX rename
behavior.

Due to the fallback logic RawLocalFileSystem does not pass the contract tests
(HADOOP-13082).

By using the Java NIO framework both could be eliminated, since it is not
platform dependent and provides cross-volume rename.

In addition the fallback logic for Windows is not correct, since Java IO
overwrites the destination only if the source is also a directory, but the
handleEmptyDstDirectoryOnWindows method checks only the destination. That means
rename allows overwriting a directory with a file on Windows but not on Unix.

File#renameTo and Files#move are not 100% compatible:
 If the source is a directory and the destination is an empty directory,
File#renameTo overwrites the destination but Files#move does not. We have to
use {{StandardCopyOption.REPLACE_EXISTING}}, but that overwrites the
destination even if the source or the destination is a file. So to make them
compatible we have to check whether either the source or the destination is a
directory before we add the copy option.

I think the correct strategy is:
 * Where the contract test passed so far, it should still pass after this
change.
 * Where the contract test failed because of a Java-specific quirk and not
because of the fallback logic, we should keep the original behavior.
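
A minimal sketch of the directory check described above (my own illustration,
not the attached patch; it follows the wording of the previous paragraph
literally, and the real patch may refine the condition):

{code:java}
import java.io.IOException;
import java.nio.file.CopyOption;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class NioRenameSketch {

  // Add REPLACE_EXISTING only when the source or the destination is a
  // directory; otherwise Files.move keeps its default behaviour and fails
  // instead of silently overwriting an existing file.
  static void rename(Path src, Path dst) throws IOException {
    CopyOption[] options =
        (Files.isDirectory(src) || Files.isDirectory(dst))
            ? new CopyOption[] {StandardCopyOption.REPLACE_EXISTING}
            : new CopyOption[0];
    Files.move(src, dst, options);
  }

  public static void main(String[] args) throws IOException {
    rename(Paths.get(args[0]), Paths.get(args[1]));
  }
}
{code}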



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-6822) Provide information as to whether or not security is enabled on web interface

2018-03-22 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-6822.
--
Resolution: Invalid

There is no JT anymore. The new YARN UI is in progress. If this feature is
required on the new UI, a new ticket is needed.

> Provide information as to whether or not security is enabled on web interface
> -
>
> Key: HADOOP-6822
> URL: https://issues.apache.org/jira/browse/HADOOP-6822
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Jakob Homan
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-6897) FileSystem#mkdirs(FileSystem, Path, FsPermission) should not call setPermission if mkdirs failed

2018-03-22 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409686#comment-16409686
 ] 

Andras Bokor commented on HADOOP-6897:
--

It's still an issue. The patch seems valid. We cannot remove the static mkdirs
since it has different behavior than the member ones:
it sets the permission without applying the umask.
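
For reference, a minimal sketch of the change the description asks for (the
attached mkdirs.patch may differ in detail): keep the static helper, but skip
setPermission when mkdirs itself reported failure.

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class MkdirsSketch {
  public static boolean mkdirs(FileSystem fs, Path dir, FsPermission permission)
      throws IOException {
    // create the directory using the default permission
    boolean result = fs.mkdirs(dir);
    if (result) {
      // set its permission to the supplied one, without applying the umask
      fs.setPermission(dir, permission);
    }
    return result;
  }
}
{code}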

> FileSystem#mkdirs(FileSystem, Path, FsPermission) should not call 
> setPermission if mkdirs failed
> -
>
> Key: HADOOP-6897
> URL: https://issues.apache.org/jira/browse/HADOOP-6897
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.22.0
>Reporter: Hairong Kuang
>Assignee: Hairong Kuang
>Priority: Major
> Attachments: mkdirs.patch
>
>
> Here is the piece of code that has the bug. fs.setPermission should not be 
> called if result is false.
> {code}
>   public static boolean mkdirs(FileSystem fs, Path dir, FsPermission 
> permission)
>   throws IOException {
> // create the directory using the default permission
> boolean result = fs.mkdirs(dir);
> // set its permission to be the supplied one
> fs.setPermission(dir, permission);
> return result;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-6897) FileSystem#mkdirs(FileSystem, Path, FsPermission) should not call setPermission if mkdirs failed

2018-03-22 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-6897:
-
Status: Patch Available  (was: Open)

> FileSystem#mkdirs(FileSystem, Path, FsPermission) should not call 
> setPermission if mkdirs failed
> -
>
> Key: HADOOP-6897
> URL: https://issues.apache.org/jira/browse/HADOOP-6897
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.22.0
>Reporter: Hairong Kuang
>Assignee: Hairong Kuang
>Priority: Major
> Attachments: mkdirs.patch
>
>
> Here is the piece of code that has the bug. fs.setPermission should not be 
> called if result is false.
> {code}
>   public static boolean mkdirs(FileSystem fs, Path dir, FsPermission 
> permission)
>   throws IOException {
> // create the directory using the default permission
> boolean result = fs.mkdirs(dir);
> // set its permission to be the supplied one
> fs.setPermission(dir, permission);
> return result;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-6672) BytesWritable.write(buf) uses much more CPU in writeInt() than write(buf)

2018-03-22 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-6672.
--
Resolution: Duplicate

> BytesWritable.write(buf) uses much more CPU in writeInt() than write(buf)
> 
>
> Key: HADOOP-6672
> URL: https://issues.apache.org/jira/browse/HADOOP-6672
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.20.2
>Reporter: Kang Xiao
>Priority: Major
>  Labels: BytesWritable, hadoop, io
> Attachments: BytesWritable.java.patch, screenshot-1.jpg, 
> screenshot-2.jpg
>
>
> BytesWritable.write() uses nearly 4 times as much CPU in writeInt() as in
> writing the buffer. It may be optimized.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-7195) RawLocalFileSystem.rename() should not try to do copy

2018-03-22 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-7195.
--
Resolution: Later

> RawLocalFileSystem.rename() should not try to do copy
> -
>
> Key: HADOOP-7195
> URL: https://issues.apache.org/jira/browse/HADOOP-7195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.2, 0.21.0
>Reporter: Kang Xiao
>Priority: Major
> Attachments: HADOOP-7195-v2.patch, HADOOP-7195-v2.patch, 
> HADOOP-7195.patch
>
>
> RawLocalFileSystem.rename() tries to copy the file if the java.io.File rename
> call fails. It's really confusing to do a copy in a rename interface. For
> example, rename(/a/b/c, /e/f/g) will invoke the copy if /e/f does not exist.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-7195) RawLocalFileSystem.rename() should not try to do copy

2018-03-22 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409576#comment-16409576
 ] 

Andras Bokor commented on HADOOP-7195:
--

The fallback logic is there to cover cross-volume renames. Please check
HADOOP-13082 for the details.
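
For readers of this thread, a rough sketch of the copy-on-fail pattern being
discussed (my own illustration, not the actual RawLocalFileSystem code):

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class CrossVolumeRenameSketch {
  // Try the cheap rename first; fall back to copy + delete when it fails,
  // e.g. because src and dst live on different volumes.
  public static boolean renameWithFallback(FileSystem fs, Path src, Path dst,
      Configuration conf) throws IOException {
    if (fs.rename(src, dst)) {
      return true;
    }
    // deleteSource=true makes the copy behave like a move.
    return FileUtil.copy(fs, src, fs, dst, true, conf);
  }
}
{code}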

> RawLocalFileSystem.rename() should not try to do copy
> -
>
> Key: HADOOP-7195
> URL: https://issues.apache.org/jira/browse/HADOOP-7195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.2, 0.21.0
>Reporter: Kang Xiao
>Priority: Major
> Attachments: HADOOP-7195-v2.patch, HADOOP-7195-v2.patch, 
> HADOOP-7195.patch
>
>
> RawLocalFileSystem.rename() tries to copy the file if the java.io.File rename
> call fails. It's really confusing to do a copy in a rename interface. For
> example, rename(/a/b/c, /e/f/g) will invoke the copy if /e/f does not exist.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13592) Outputs errors and warnings by checkstyle at compile time

2018-03-09 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-13592.
---
Resolution: Won't Fix

> Outputs errors and warnings by checkstyle at compile time
> -
>
> Key: HADOOP-13592
> URL: https://issues.apache.org/jira/browse/HADOOP-13592
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Priority: Major
> Attachments: HADOOP-13592.001.patch
>
>
> Currently, Apache Hadoop has lots of checkstyle errors and warnings, but they
> are not output at compile time. This prevents us from fixing the errors and
> warnings.
> We should output errors and warnings at compile time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15252) Checkstyle version is not compatible with IDEA's checkstyle plugin

2018-03-03 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16384664#comment-16384664
 ] 

Andras Bokor commented on HADOOP-15252:
---

Thanks [~ajisakaa]!

> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-15252
> URL: https://issues.apache.org/jira/browse/HADOOP-15252
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15252.001.patch, HADOOP-15252.002.patch, 
> HADOOP-15252.003.patch, idea_checkstyle_settings.png
>
>
> After upgrading to the latest IDEA, the IDE throws error messages every few
> minutes, like
> {code:java}
> The Checkstyle rules file could not be parsed.
> SuppressionCommentFilter is not allowed as a child in Checker
> The file has been blacklisted for 60s.{code}
> This is caused by some backward incompatible changes in checkstyle source 
> code:
>  [http://checkstyle.sourceforge.net/releasenotes.html]
>  * 8.1: Make SuppressionCommentFilter and SuppressWithNearbyCommentFilter 
> children of TreeWalker.
>  * 8.2: remove FileContentsHolder module as FileContents object is available 
> for filters on TreeWalker in TreeWalkerAudit Event.
> IDEA uses checkstyle 8.8
> We should upgrade our checkstyle version to be compatible with IDEA's 
> checkstyle plugin.
>  Also it's a good time to upgrade maven-checkstyle-plugin to the brand new
> 3.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15252) Checkstyle version is not compatible with IDEA's checkstyle plugin

2018-02-27 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-15252:
--
Attachment: HADOOP-15252.003.patch

> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-15252
> URL: https://issues.apache.org/jira/browse/HADOOP-15252
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-15252.001.patch, HADOOP-15252.002.patch, 
> HADOOP-15252.003.patch, idea_checkstyle_settings.png
>
>
> After upgrading to the latest IDEA, the IDE throws error messages every few
> minutes, like
> {code:java}
> The Checkstyle rules file could not be parsed.
> SuppressionCommentFilter is not allowed as a child in Checker
> The file has been blacklisted for 60s.{code}
> This is caused by some backward incompatible changes in checkstyle source 
> code:
>  [http://checkstyle.sourceforge.net/releasenotes.html]
>  * 8.1: Make SuppressionCommentFilter and SuppressWithNearbyCommentFilter 
> children of TreeWalker.
>  * 8.2: remove FileContentsHolder module as FileContents object is available 
> for filters on TreeWalker in TreeWalkerAudit Event.
> IDEA uses checkstyle 8.8
> We should upgrade our checkstyle version to be compatible with IDEA's 
> checkstyle plugin.
>  Also it's a good time to upgrade maven-checkstyle-plugin to the brand new
> 3.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-12585) [Umbrella] Removing the usages of deprecated methods

2018-02-23 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-12585.
---
  Resolution: Done
Target Version/s:   (was: )

This umbrella is no longer in use.

> [Umbrella] Removing the usages of deprecated methods
> 
>
> Key: HADOOP-12585
> URL: https://issues.apache.org/jira/browse/HADOOP-12585
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Priority: Major
>
> There are lots of usages of deprecated methods in Hadoop - we should avoid
> using them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-7675) Ant option to run disabled kerberos authentication tests.

2018-02-23 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374467#comment-16374467
 ] 

Andras Bokor commented on HADOOP-7675:
--

I don't see the @Ignore annotation on the referenced classes. Is this ticket 
still valid?

> Ant option to run disabled kerberos authentication tests.
> -
>
> Key: HADOOP-7675
> URL: https://issues.apache.org/jira/browse/HADOOP-7675
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Jitendra Nath Pandey
>Priority: Major
>
> The kerberos tests, TestKerberosAuthenticator and 
> TestKerberosAuthenticationHandler, are disabled using @Ignore. A better 
> approach would be to have an ant option to run them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-2829) JT should consider the disk each task is on before scheduling jobs...

2018-02-23 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-2829.
--
Resolution: Invalid

It seems obsolete.

> JT should consider the disk each task is on before scheduling jobs...
> -
>
> Key: HADOOP-2829
> URL: https://issues.apache.org/jira/browse/HADOOP-2829
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: eric baldeschwieler
>Priority: Major
>
> The DataNode can support a JBOD config, where blocks exist on explicit disks. 
>  But this information is not exported or considered by the JT when assigning 
> tasks.  This leads to non-optimal disk use.  If 4 slots are used, 2 running
> tasks will likely be on the same disk and we observe them running more slowly
> than other tasks on the same machine.
> We could follow a number of strategies to address this.
> For example: the data nodes could support a "what disk is this block on" call.
> Then the JT could discover the info and assign jobs accordingly.
> Of course the TT itself uses disks for merge and temp space and the datanodes 
> on the same machine can be used by off node sources, so it is not clear 
> optimizing all of this is simple enough to be worth it.
> This issue deserves study.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-6291) Confusing warn message from Configuration

2018-02-23 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-6291.
--
Resolution: Duplicate

> Confusing warn message from Configuration
> -
>
> Key: HADOOP-6291
> URL: https://issues.apache.org/jira/browse/HADOOP-6291
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 0.21.0
>Reporter: Tsz Wo Nicholas Sze
>Priority: Major
>
> Starting a cluster without setting mapreduce.task.attempt.id and then
> {noformat}
> $ ./bin/hadoop fs -put README.txt r.txt
> 09/09/29 22:28:10 WARN conf.Configuration: mapred.task.id is deprecated. 
> Instead, use mapreduce.task.attempt.id
> 09/09/29 22:28:10 INFO hdfs.DFSClient: Done flushing
> 09/09/29 22:28:10 INFO hdfs.DFSClient: Closing the streams...
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15252) Checkstyle version is not compatible with IDEA's checkstyle plugin

2018-02-23 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-15252:
--
Attachment: HADOOP-15252.002.patch

> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-15252
> URL: https://issues.apache.org/jira/browse/HADOOP-15252
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-15252.001.patch, HADOOP-15252.002.patch, 
> idea_checkstyle_settings.png
>
>
> After upgrading to the latest IDEA, the IDE throws error messages every few
> minutes, like
> {code:java}
> The Checkstyle rules file could not be parsed.
> SuppressionCommentFilter is not allowed as a child in Checker
> The file has been blacklisted for 60s.{code}
> This is caused by some backward incompatible changes in checkstyle source 
> code:
>  [http://checkstyle.sourceforge.net/releasenotes.html]
>  * 8.1: Make SuppressionCommentFilter and SuppressWithNearbyCommentFilter 
> children of TreeWalker.
>  * 8.2: remove FileContentsHolder module as FileContents object is available 
> for filters on TreeWalker in TreeWalkerAudit Event.
> IDEA uses checkstyle 8.8
> We should upgrade our checkstyle version to be compatible with IDEA's 
> checkstyle plugin.
>  Also it's a good time to upgrade maven-checkstyle-plugin to the brand new
> 3.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15252) Checkstyle version is not compatible with IDEA's checkstyle plugin

2018-02-23 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374316#comment-16374316
 ] 

Andras Bokor commented on HADOOP-15252:
---

Attaching the same patch to kick Hadoop QA.

> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-15252
> URL: https://issues.apache.org/jira/browse/HADOOP-15252
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-15252.001.patch, HADOOP-15252.002.patch, 
> idea_checkstyle_settings.png
>
>
> After upgrading to the latest IDEA, the IDE throws error messages every few
> minutes, like
> {code:java}
> The Checkstyle rules file could not be parsed.
> SuppressionCommentFilter is not allowed as a child in Checker
> The file has been blacklisted for 60s.{code}
> This is caused by some backward incompatible changes in checkstyle source 
> code:
>  [http://checkstyle.sourceforge.net/releasenotes.html]
>  * 8.1: Make SuppressionCommentFilter and SuppressWithNearbyCommentFilter 
> children of TreeWalker.
>  * 8.2: remove FileContentsHolder module as FileContents object is available 
> for filters on TreeWalker in TreeWalkerAudit Event.
> IDEA uses checkstyle 8.8
> We should upgrade our checkstyle version to be compatible with IDEA's 
> checkstyle plugin.
>  Also it's a good time to upgrade maven-checkstyle-plugin to the brand new
> 3.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15252) Checkstyle version is not compatible with IDEA's checkstyle plugin

2018-02-22 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-15252:
--
Status: Patch Available  (was: Open)

> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-15252
> URL: https://issues.apache.org/jira/browse/HADOOP-15252
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-15252.001.patch, idea_checkstyle_settings.png
>
>
> After upgrading to the latest IDEA, the IDE throws error messages every few
> minutes, like
> {code:java}
> The Checkstyle rules file could not be parsed.
> SuppressionCommentFilter is not allowed as a child in Checker
> The file has been blacklisted for 60s.{code}
> This is caused by some backward incompatible changes in checkstyle source 
> code:
>  [http://checkstyle.sourceforge.net/releasenotes.html]
>  * 8.1: Make SuppressionCommentFilter and SuppressWithNearbyCommentFilter 
> children of TreeWalker.
>  * 8.2: remove FileContentsHolder module as FileContents object is available 
> for filters on TreeWalker in TreeWalkerAudit Event.
> IDEA uses checkstyle 8.8
> We should upgrade our checkstyle version to be compatible with IDEA's 
> checkstyle plugin.
>  Also it's a good time to upgrade maven-checkstyle-plugin to the brand new
> 3.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15252) Checkstyle version is not compatible with IDEA's checkstyle plugin

2018-02-22 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-15252:
--
Attachment: HADOOP-15252.001.patch

> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-15252
> URL: https://issues.apache.org/jira/browse/HADOOP-15252
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-15252.001.patch
>
>
> After upgrading to the latest IDEA, the IDE throws error messages every few
> minutes, like
> {code:java}
> The Checkstyle rules file could not be parsed.
> SuppressionCommentFilter is not allowed as a child in Checker
> The file has been blacklisted for 60s.{code}
> This is caused by some backward incompatible changes in checkstyle source 
> code:
>  [http://checkstyle.sourceforge.net/releasenotes.html]
>  * 8.1: Make SuppressionCommentFilter and SuppressWithNearbyCommentFilter 
> children of TreeWalker.
>  * 8.2: remove FileContentsHolder module as FileContents object is available 
> for filters on TreeWalker in TreeWalkerAudit Event.
> IDEA uses checkstyle 8.8
> We should upgrade our checkstyle version to be compatible with IDEA's 
> checkstyle plugin.
>  Also it's a good time to upgrade maven-checkstyle-plugin to the brand new
> 3.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15252) Checkstyle version is not compatible with IDEA's checkstyle plugin

2018-02-22 Thread Andras Bokor (JIRA)
Andras Bokor created HADOOP-15252:
-

 Summary: Checkstyle version is not compatible with IDEA's 
checkstyle plugin
 Key: HADOOP-15252
 URL: https://issues.apache.org/jira/browse/HADOOP-15252
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Andras Bokor
Assignee: Andras Bokor


After upgrading to the latest IDEA, the IDE throws error messages every few
minutes, like
{code:java}
The Checkstyle rules file could not be parsed.
SuppressionCommentFilter is not allowed as a child in Checker
The file has been blacklisted for 60s.{code}
This is caused by some backward incompatible changes in checkstyle source code:
 [http://checkstyle.sourceforge.net/releasenotes.html]
 * 8.1: Make SuppressionCommentFilter and SuppressWithNearbyCommentFilter 
children of TreeWalker.
 * 8.2: remove FileContentsHolder module as FileContents object is available 
for filters on TreeWalker in TreeWalkerAudit Event.

IDEA uses checkstyle 8.8

We should upgrade our checkstyle version to be compatible with IDEA's 
checkstyle plugin.
 Also it's a good time to upgrade maven-checkstyle-plugin to the brand new
3.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10571) Use Log.*(Object, Throwable) overload to log exceptions

2018-02-22 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372550#comment-16372550
 ] 

Andras Bokor commented on HADOOP-10571:
---

Thanks for the review and commit!

> Use Log.*(Object, Throwable) overload to log exceptions
> ---
>
> Key: HADOOP-10571
> URL: https://issues.apache.org/jira/browse/HADOOP-10571
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Assignee: Andras Bokor
>Priority: Major
> Fix For: 3.1.0, 3.0.1
>
> Attachments: HADOOP-10571-branch-3.0.001.patch, 
> HADOOP-10571-branch-3.0.002.patch, HADOOP-10571.01.patch, 
> HADOOP-10571.01.patch, HADOOP-10571.02.patch, HADOOP-10571.03.patch, 
> HADOOP-10571.04.patch, HADOOP-10571.05.patch, HADOOP-10571.06.patch, 
> HADOOP-10571.07.patch
>
>
> When logging an exception, we often convert the exception to string or call 
> {{.getMessage}}. Instead we can use the log method overloads which take 
> {{Throwable}} as a parameter.
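
As a quick illustration of the pattern this issue targets (the class and method
names below are made up for the example):

{code:java}
import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class LoggingSketch {
  private static final Log LOG = LogFactory.getLog(LoggingSketch.class);

  void runOnce() {
    try {
      doWork();
    } catch (IOException e) {
      // Before: only the message survives, the stack trace is lost.
      LOG.error("Failed to do work: " + e.getMessage());
      // After: pass the Throwable itself so the full stack trace is logged.
      LOG.error("Failed to do work", e);
    }
  }

  private void doWork() throws IOException {
    throw new IOException("simulated failure");
  }
}
{code}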



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10571) Use Log.*(Object, Throwable) overload to log exceptions

2018-02-21 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371157#comment-16371157
 ] 

Andras Bokor commented on HADOOP-10571:
---

Patch for branch-3.0 seems good to go.

> Use Log.*(Object, Throwable) overload to log exceptions
> ---
>
> Key: HADOOP-10571
> URL: https://issues.apache.org/jira/browse/HADOOP-10571
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-10571-branch-3.0.001.patch, 
> HADOOP-10571-branch-3.0.002.patch, HADOOP-10571.01.patch, 
> HADOOP-10571.01.patch, HADOOP-10571.02.patch, HADOOP-10571.03.patch, 
> HADOOP-10571.04.patch, HADOOP-10571.05.patch, HADOOP-10571.06.patch, 
> HADOOP-10571.07.patch
>
>
> When logging an exception, we often convert the exception to string or call 
> {{.getMessage}}. Instead we can use the log method overloads which take 
> {{Throwable}} as a parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10571) Use Log.*(Object, Throwable) overload to log exceptions

2018-02-20 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-10571:
--
Attachment: HADOOP-10571-branch-3.0.002.patch

> Use Log.*(Object, Throwable) overload to log exceptions
> ---
>
> Key: HADOOP-10571
> URL: https://issues.apache.org/jira/browse/HADOOP-10571
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-10571-branch-3.0.001.patch, 
> HADOOP-10571-branch-3.0.002.patch, HADOOP-10571.01.patch, 
> HADOOP-10571.01.patch, HADOOP-10571.02.patch, HADOOP-10571.03.patch, 
> HADOOP-10571.04.patch, HADOOP-10571.05.patch, HADOOP-10571.06.patch, 
> HADOOP-10571.07.patch
>
>
> When logging an exception, we often convert the exception to string or call 
> {{.getMessage}}. Instead we can use the log method overloads which take 
> {{Throwable}} as a parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10571) Use Log.*(Object, Throwable) overload to log exceptions

2018-02-16 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-10571:
--
Attachment: (was: HADOOP-10571-branch-3.0.01.patch)

> Use Log.*(Object, Throwable) overload to log exceptions
> ---
>
> Key: HADOOP-10571
> URL: https://issues.apache.org/jira/browse/HADOOP-10571
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-10571-branch-3.0.001.patch, 
> HADOOP-10571.01.patch, HADOOP-10571.01.patch, HADOOP-10571.02.patch, 
> HADOOP-10571.03.patch, HADOOP-10571.04.patch, HADOOP-10571.05.patch, 
> HADOOP-10571.06.patch, HADOOP-10571.07.patch
>
>
> When logging an exception, we often convert the exception to string or call 
> {{.getMessage}}. Instead we can use the log method overloads which take 
> {{Throwable}} as a parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10571) Use Log.*(Object, Throwable) overload to log exceptions

2018-02-16 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-10571:
--
Attachment: HADOOP-10571-branch-3.0.001.patch

> Use Log.*(Object, Throwable) overload to log exceptions
> ---
>
> Key: HADOOP-10571
> URL: https://issues.apache.org/jira/browse/HADOOP-10571
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-10571-branch-3.0.001.patch, 
> HADOOP-10571.01.patch, HADOOP-10571.01.patch, HADOOP-10571.02.patch, 
> HADOOP-10571.03.patch, HADOOP-10571.04.patch, HADOOP-10571.05.patch, 
> HADOOP-10571.06.patch, HADOOP-10571.07.patch
>
>
> When logging an exception, we often convert the exception to string or call 
> {{.getMessage}}. Instead we can use the log method overloads which take 
> {{Throwable}} as a parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


