[jira] [Updated] (HADOOP-16174) Disable wildfly logs to the console

2019-03-07 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16174:
--
Issue Type: Bug  (was: Task)

> Disable wildfly logs to the console
> ---
>
> Key: HADOOP-16174
> URL: https://issues.apache.org/jira/browse/HADOOP-16174
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Denes Gerencser
>Priority: Major
> Attachments: HADOOP-16174-001.patch
>
>
> We experience that the wildfly log
> {code:java}
> Mar 06, 2019 4:33:53 PM org.wildfly.openssl.SSL init
> INFO: WFOPENSSL0002 OpenSSL Version OpenSSL 1.0.2g  1 Mar 2016
> {code}
> sometimes appears on the console, although it never should. Note: this is a 
> consequence of HADOOP-15851.
> Our analysis shows that the reason is that 
> {code:java}
> java.util.logging.Logger.getLogger()
> {code}
> is not guaranteed to always return the _same_ logger instance, so 
> SSLSocketFactoryEx may set the log level on a different logger object than 
> the one used by wildfly-openssl 
> ([https://github.com/wildfly/wildfly-openssl/blob/ace72ba07d0c746b6eb46635f4a8b122846c47c8/java/src/main/java/org/wildfly/openssl/SSL.java#L196]).
> From the javadoc of java.util.logging.Logger.getLogger:
> 'Note: The LogManager may only retain a weak reference to the newly created 
> Logger. It is important to understand that a previously created Logger with 
> the given name may be garbage collected at any time if there is no strong 
> reference to the Logger. In particular, this means that two back-to-back 
> calls like {{getLogger("MyLogger").log(...)}} may use different Logger 
> objects named "MyLogger" if there is no strong reference to the Logger named 
> "MyLogger" elsewhere in the program.'
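A minimal sketch of why a strong reference fixes this (class and field names here are illustrative, not Hadoop's actual code): holding the Logger in a static field means the LogManager's weak reference is no longer the only one, so the instance whose level we set is the same instance a later lookup returns.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class WildflyLogSilencer {
    // wildfly-openssl logs under this name (see SSL.java linked above).
    private static final String SSL_LOGGER_NAME = "org.wildfly.openssl.SSL";

    // Strong reference: without it, the LogManager only holds a weak
    // reference, so the configured Logger may be garbage collected and a
    // later getLogger() call may return a fresh instance with the default
    // level, letting the INFO banner through again.
    private static final Logger SSL_LOGGER = Logger.getLogger(SSL_LOGGER_NAME);

    static {
        SSL_LOGGER.setLevel(Level.WARNING); // hide the INFO OpenSSL banner
    }

    /** Level seen by any later getLogger() lookup of the same name. */
    public static Level observedLevel() {
        return Logger.getLogger(SSL_LOGGER_NAME).getLevel();
    }
}
```

Because the field pins the instance, `observedLevel()` keeps returning WARNING no matter how many garbage-collection cycles run in between.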



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16174) Disable wildfly logs to the console

2019-03-07 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16174:
--
Status: Patch Available  (was: Open)







[jira] [Commented] (HADOOP-16174) Disable wildfly logs to the console

2019-03-07 Thread Andras Bokor (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786879#comment-16786879
 ] 

Andras Bokor commented on HADOOP-16174:
---

[Another description of the 
problem...|http://findbugs.sourceforge.net/bugDescriptions.html#LG_LOST_LOGGER_DUE_TO_WEAK_REFERENCE]







[jira] [Commented] (HADOOP-16174) Disable wildfly logs to the console

2019-03-08 Thread Andras Bokor (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787924#comment-16787924
 ] 

Andras Bokor commented on HADOOP-16174:
---

We do not want to preserve the reference outside of the switch case. Holding a 
hard reference to the logger keeps the object in memory until we reach 
SSL.java:196, which is enough for us. We used a local variable because we 
won't need that logger anymore, so the scope is kept as small as possible.

Another question is whether we should set the log level back to INFO. 
Currently there is no other log message in SSL.java, but setting it back seems 
better and makes the workaround complete, excluding any possible side effect 
in the future. [~denes.gerencser]?

[~vishwajeet.dusane], Denes' question seems reasonable: why is only one branch 
protected?







[jira] [Commented] (HADOOP-16174) Disable wildfly logs to the console

2019-03-08 Thread Andras Bokor (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16788028#comment-16788028
 ] 

Andras Bokor commented on HADOOP-16174:
---

{quote}I am not convinced the...
{quote}
We had exactly the same doubt, but our tests on a live Azure cluster showed 
that only one reference is enough. Basically I agree, though: there is no 
clear documentation about how javac handles this situation or what "active 
reference" means. Also, indeed, it is not obvious why that code should not be 
cleaned up later. So there are some risks.
{quote}Reinstating the log to info afterwards will guarantee that the reference 
is retained, and stop anyone cleaning up the code from unintentionally removing 
the reference. Add a comment to the clause to explain the problem too.
{quote}
I agree. Setting the level back to INFO along with a short comment addresses 
all the concerns. Let's do this. Thanks.
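The agreed workaround can be sketched as follows; the class and method names (QuietInit, withSilencedLogger) are illustrative, not the actual patch:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class QuietInit {
    /**
     * Silences the given JUL logger for the duration of init, then restores
     * INFO. The local variable is a strong reference, so the configured
     * Logger instance cannot be garbage collected (and silently replaced by
     * a fresh one) before init() runs; restoring the level afterwards also
     * makes the reference obviously needed, so later cleanup won't remove it
     * by accident.
     */
    public static void withSilencedLogger(String loggerName, Runnable init) {
        Logger logger = Logger.getLogger(loggerName);
        logger.setLevel(Level.WARNING);
        try {
            init.run();
        } finally {
            logger.setLevel(Level.INFO);
        }
    }
}
```

The finally block is what addresses the review comment: the restore both guarantees the reference stays live across init and leaves no lingering level change behind.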







[jira] [Commented] (HADOOP-17257) pid file delete when service stop (secure datanode ) show cat no directory

2020-09-10 Thread Andras Bokor (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17193661#comment-17193661
 ] 

Andras Bokor commented on HADOOP-17257:
---

Is this the same as HADOOP-13238?

> pid file delete when service stop (secure datanode ) show cat no directory
> --
>
> Key: HADOOP-17257
> URL: https://issues.apache.org/jira/browse/HADOOP-17257
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts, security
>Affects Versions: 3.4.0
>Reporter: zhuqi
>Priority: Major
> Attachments: HADOOP-17257-0.0.1.patch
>
>
> When stopping a running secure datanode, the stop script shows a "cat: no 
> such file or directory" error.
>  
> When stopping a secure datanode that is not running, it also shows a "cat: 
> no pid directory" error.
>  
> Both cases are unreasonable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15546) ABFS: tune imports & javadocs; stabilise tests

2020-06-08 Thread Andras Bokor (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17128247#comment-17128247
 ] 

Andras Bokor commented on HADOOP-15546:
---

For git greppers: this was committed with the following commit message:

"HADOOP-15446. ABFS: tune imports & javadocs; stabilise tests."

So grepping for HADOOP-15546 will show no results.

> ABFS: tune imports & javadocs; stabilise tests
> --
>
> Key: HADOOP-15546
> URL: https://issues.apache.org/jira/browse/HADOOP-15546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: HADOOP-15407
>Reporter: Steve Loughran
>Assignee: Thomas Marqardt
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15546-001.patch, 
> HADOOP-15546-HADOOP-15407-001.patch, HADOOP-15546-HADOOP-15407-002.patch, 
> HADOOP-15546-HADOOP-15407-003.patch, HADOOP-15546-HADOOP-15407-004.patch, 
> HADOOP-15546-HADOOP-15407-005.patch, HADOOP-15546-HADOOP-15407-006.patch, 
> HADOOP-15546-HADOOP-15407-006.patch, HADOOP-15546-HADOOP-15407-007.patch, 
> HADOOP-15546-HADOOP-15407-008.patch, HADOOP-15546-HADOOP-15407-009.patch, 
> HADOOP-15546-HADOOP-15407-010.patch, HADOOP-15546-HADOOP-15407-011.patch, 
> HADOOP-15546-HADOOP-15407-012.patch, azure-auth-keys.xml
>
>
> Followup on HADOOP-15540 with some initial review tuning
> h2. Tuning
> * ordering of imports
> * rely on azure-auth-keys.xml to store credentials (change imports, 
> docs,.gitignore)
> * log4j -> info
> * add a "." to the first sentence of all the javadocs I noticed.
> * remove @Public annotations except for some constants (which includes some 
> commitment to maintain them).
> * move the AbstractFS declarations out of the src/test/resources XML file 
> into core-default.xml for all to use
> * other IDE-suggested tweaks
> h2. Testing
> Review the tests, move to ContractTestUtil assertions, make more consistent 
> to contract test setup, and general work to make the tests work well over 
> slower links, document, etc.






[jira] [Commented] (HADOOP-15446) WASB: PageBlobInputStream.skip breaks HBASE replication

2020-06-08 Thread Andras Bokor (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17128251#comment-17128251
 ] 

Andras Bokor commented on HADOOP-15446:
---

Git greppers!

Two commits belong to this ticket:
{noformat}
HADOOP-15446. WASB: PageBlobInputStream.skip breaks HBASE replication.
HADOOP-15446. ABFS: tune imports & javadocs; stabilise tests.
{noformat}
The second one actually belongs to HADOOP-15546.

> WASB: PageBlobInputStream.skip breaks HBASE replication
> ---
>
> Key: HADOOP-15446
> URL: https://issues.apache.org/jira/browse/HADOOP-15446
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.2
>Reporter: Thomas Marqardt
>Assignee: Thomas Marqardt
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2
>
> Attachments: HADOOP-15446-001.patch, HADOOP-15446-002.patch, 
> HADOOP-15446-003.patch, HADOOP-15446-branch-2.001.patch
>
>
> Page Blobs are primarily used by HBASE.  HBASE replication, which apparently 
> has not been used with WASB until recently, performs non-sequential reads on 
> log files using PageBlobInputStream.  There are bugs in this stream 
> implementation which prevent skip and seek from working properly, and 
> eventually the stream state becomes corrupt and unusable.
> I believe this bug affects all releases of WASB/HADOOP.  It appears to be a 
> day-0 bug in PageBlobInputStream.  There were similar bugs opened in the past 
> (HADOOP-15042) but the issue was not properly fixed, and no test coverage was 
> added.






[jira] [Updated] (HADOOP-9851) dfs -chown does not like "+" plus sign in user name

2020-06-15 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-9851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-9851:
-
Attachment: HADOOP-9851.02.patch

> dfs -chown does not like "+" plus sign in user name
> ---
>
> Key: HADOOP-9851
> URL: https://issues.apache.org/jira/browse/HADOOP-9851
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.5-alpha
>Reporter: Marc Villacorta
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HADOOP-9851.01.patch, HADOOP-9851.02.patch
>
>
> I intend to set user and group:
> *User:* _MYCOMPANY+marc.villacorta_
> *Group:* hadoop
> where _'+'_ is what we use as a winbind separator.
> And this is what I get:
> {code:none}
> sudo -u hdfs hadoop fs -touchz /tmp/test.txt
> sudo -u hdfs hadoop fs -chown MYCOMPANY+marc.villacorta:hadoop /tmp/test.txt
> -chown: 'MYCOMPANY+marc.villacorta:hadoop' does not match expected pattern 
> for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> {code}
> I am using version: 2.0.0-cdh4.3.0
> Quote 
> [source|http://h30097.www3.hp.com/docs/iass/OSIS_62/MAN/MAN8/0044.HTM]:
> {quote}
> winbind separator
>The winbind separator option allows you to specify how NT domain names
>and user names are combined into unix user names when presented to
>users. By default, winbindd will use the traditional '\' separator so
>that the unix user names look like DOMAIN\username. In some cases this
>separator character may cause problems as the '\' character has
>special meaning in unix shells. In that case you can use the winbind
>separator option to specify an alternative separator character. Good
>alternatives may be '/' (although that conflicts with the unix
>directory separator) or a '+' character. The '+' character appears to
>be the best choice for 100% compatibility with existing unix
>utilities, but may be an aesthetically bad choice depending on your
>taste.
>Default: winbind separator = \
>Example: winbind separator = +
> {quote}






[jira] [Commented] (HADOOP-9851) dfs -chown does not like "+" plus sign in user name

2020-06-15 Thread Andras Bokor (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-9851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17135783#comment-17135783
 ] 

Andras Bokor commented on HADOOP-9851:
--

[~ayushtkn],
Windows remains unchanged; only Linux will allow the '+' sign.







[jira] [Commented] (HADOOP-9851) dfs -chown does not like "+" plus sign in user name

2020-06-16 Thread Andras Bokor (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-9851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136517#comment-17136517
 ] 

Andras Bokor commented on HADOOP-9851:
--

[~ayushtkn],
The checkstyle warning is not caused by my patch; the indentation was wrong 
even before it. I did not fix it because I did not want a bigger patch than 
needed, and indentation fixes decrease readability in diff tools. But I am 
not sure what the best practice is here.







[jira] [Commented] (HADOOP-17044) Revert "HADOOP-8143. Change distcp to have -pb on by default"

2020-07-09 Thread Andras Bokor (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17154694#comment-17154694
 ] 

Andras Bokor commented on HADOOP-17044:
---

This ticket reverts HADOOP-14557 as well.

> Revert "HADOOP-8143. Change distcp to have -pb on by default"
> -
>
> Key: HADOOP-17044
> URL: https://issues.apache.org/jira/browse/HADOOP-17044
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.0.3, 3.3.0, 3.2.1, 3.1.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.0.4, 3.2.2, 3.3.1, 3.1.5
>
>
> revert the HADOOP-8143. "distcp -pb as default" feature as it was
> * breaking s3a uploads
> * breaking incremental uploads to any object store






[jira] [Created] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-07-21 Thread Andras Bokor (Jira)
Andras Bokor created HADOOP-17145:
-

 Summary: Unauthenticated users are not authorized to access this 
page message is misleading in HttpServer2.java
 Key: HADOOP-17145
 URL: https://issues.apache.org/jira/browse/HADOOP-17145
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Andras Bokor
Assignee: Andras Bokor


Recently one of our users was misled by the message "Unauthenticated users are 
not authorized to access this page" when the user was not an admin user.
At that point the user is authenticated but has no admin access, so it is 
actually not an authentication issue but an authorization issue.
Also, 401 as the error code would be better.
Something like "User is unauthorized to access the page" would help users 
find out what the problem is when accessing an HTTP endpoint.
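As a sketch of the distinction the ticket draws (class name and method are illustrative, not the HttpServer2 code): name authentication only when the caller is unknown, and authorization when the caller is known but lacks admin access.

```java
public class AdminAccessCheck {
    /**
     * Picks the error message the ticket argues for: an authentication
     * message for an unknown caller, an authorization message for a known
     * caller without admin access.
     */
    public static String messageFor(boolean authenticated, boolean isAdmin) {
        if (!authenticated) {
            return "Unauthenticated users are not authorized to access this page";
        }
        if (!isAdmin) {
            return "User is unauthorized to access the page";
        }
        return "OK";
    }
}
```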






[jira] [Updated] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-07-23 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-17145:
--
Status: Patch Available  (was: Open)







[jira] [Updated] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-07-23 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-17145:
--
Attachment: HADOOP-17145.001.patch







[jira] [Updated] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-07-24 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-17145:
--
Attachment: HADOOP-17145.002.patch







[jira] [Updated] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-08-05 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-17145:
--
Attachment: HADOOP-17145.003.patch

> Unauthenticated users are not authorized to access this page message is 
> misleading in HttpServer2.java
> --
>
> Key: HADOOP-17145
> URL: https://issues.apache.org/jira/browse/HADOOP-17145
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-17145.001.patch, HADOOP-17145.002.patch, 
> HADOOP-17145.003.patch
>
>
> Recently one of the users was misled by the message "Unauthenticated users 
> are not authorized to access this page" when the user was not an admin user.
> At that point the user is authenticated but has no admin access, so it is 
> actually an authorization issue rather than an authentication issue.
> Also, 401 as the error code would be better.
> Something like "User is unauthorized to access the page" would help users 
> find out what the problem is when accessing an HTTP endpoint.






[jira] [Updated] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-08-05 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-17145:
--
Attachment: HADOOP-17145.004.patch

> Unauthenticated users are not authorized to access this page message is 
> misleading in HttpServer2.java
> --
>
> Key: HADOOP-17145
> URL: https://issues.apache.org/jira/browse/HADOOP-17145
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-17145.001.patch, HADOOP-17145.002.patch, 
> HADOOP-17145.003.patch, HADOOP-17145.004.patch
>
>
> Recently one of the users was misled by the message "Unauthenticated users 
> are not authorized to access this page" when the user was not an admin user.
> At that point the user is authenticated but has no admin access, so it is 
> actually an authorization issue rather than an authentication issue.
> Also, 401 as the error code would be better.
> Something like "User is unauthorized to access the page" would help users 
> find out what the problem is when accessing an HTTP endpoint.






[jira] [Updated] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-08-06 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-17145:
--
Attachment: HADOOP-17145.005.patch

> Unauthenticated users are not authorized to access this page message is 
> misleading in HttpServer2.java
> --
>
> Key: HADOOP-17145
> URL: https://issues.apache.org/jira/browse/HADOOP-17145
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-17145.001.patch, HADOOP-17145.002.patch, 
> HADOOP-17145.003.patch, HADOOP-17145.004.patch, HADOOP-17145.005.patch
>
>
> Recently one of the users was misled by the message "Unauthenticated users 
> are not authorized to access this page" when the user was not an admin user.
> At that point the user is authenticated but has no admin access, so it is 
> actually an authorization issue rather than an authentication issue.
> Also, 401 as the error code would be better.
> Something like "User is unauthorized to access the page" would help users 
> find out what the problem is when accessing an HTTP endpoint.






[jira] [Updated] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-08-06 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-17145:
--
Attachment: HADOOP-17145.006.patch

> Unauthenticated users are not authorized to access this page message is 
> misleading in HttpServer2.java
> --
>
> Key: HADOOP-17145
> URL: https://issues.apache.org/jira/browse/HADOOP-17145
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-17145.001.patch, HADOOP-17145.002.patch, 
> HADOOP-17145.003.patch, HADOOP-17145.004.patch, HADOOP-17145.005.patch, 
> HADOOP-17145.006.patch
>
>
> Recently one of the users was misled by the message "Unauthenticated users 
> are not authorized to access this page" when the user was not an admin user.
> At that point the user is authenticated but has no admin access, so it is 
> actually an authorization issue rather than an authentication issue.
> Also, 401 as the error code would be better.
> Something like "User is unauthorized to access the page" would help users 
> find out what the problem is when accessing an HTTP endpoint.






[jira] [Updated] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-08-07 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-17145:
--
Attachment: HADOOP-17145.007.patch

> Unauthenticated users are not authorized to access this page message is 
> misleading in HttpServer2.java
> --
>
> Key: HADOOP-17145
> URL: https://issues.apache.org/jira/browse/HADOOP-17145
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-17145.001.patch, HADOOP-17145.002.patch, 
> HADOOP-17145.003.patch, HADOOP-17145.004.patch, HADOOP-17145.005.patch, 
> HADOOP-17145.006.patch, HADOOP-17145.007.patch
>
>
> Recently one of the users was misled by the message "Unauthenticated users 
> are not authorized to access this page" when the user was not an admin user.
> At that point the user is authenticated but has no admin access, so it is 
> actually an authorization issue rather than an authentication issue.
> Also, 401 as the error code would be better.
> Something like "User is unauthorized to access the page" would help users 
> find out what the problem is when accessing an HTTP endpoint.






[jira] [Commented] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-08-11 Thread Andras Bokor (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17175493#comment-17175493
 ] 

Andras Bokor commented on HADOOP-17145:
---

With patch 007 everything went well. It changes both the error message and 
the error code.

> Unauthenticated users are not authorized to access this page message is 
> misleading in HttpServer2.java
> --
>
> Key: HADOOP-17145
> URL: https://issues.apache.org/jira/browse/HADOOP-17145
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-17145.001.patch, HADOOP-17145.002.patch, 
> HADOOP-17145.003.patch, HADOOP-17145.004.patch, HADOOP-17145.005.patch, 
> HADOOP-17145.006.patch, HADOOP-17145.007.patch
>
>
> Recently one of the users was misled by the message "Unauthenticated users 
> are not authorized to access this page" when the user was not an admin user.
> At that point the user is authenticated but has no admin access, so it is 
> actually an authorization issue rather than an authentication issue.
> Also, 401 as the error code would be better.
> Something like "User is unauthorized to access the page" would help users 
> find out what the problem is when accessing an HTTP endpoint.






[jira] [Created] (HADOOP-16616) ITestAzureFileSystemInstrumentation#testMetricsOnBigFileCreateRead fails

2019-09-30 Thread Andras Bokor (Jira)
Andras Bokor created HADOOP-16616:
-

 Summary: 
ITestAzureFileSystemInstrumentation#testMetricsOnBigFileCreateRead fails
 Key: HADOOP-16616
 URL: https://issues.apache.org/jira/browse/HADOOP-16616
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Andras Bokor


{code:java}
[ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 238.687 
s <<< FAILURE! - in 
org.apache.hadoop.fs.azure.metrics.ITestAzureFileSystemInstrumentation
[ERROR] 
testMetricsOnBigFileCreateRead(org.apache.hadoop.fs.azure.metrics.ITestAzureFileSystemInstrumentation)
  Time elapsed: 238.5 s  <<< FAILURE!
java.lang.AssertionError: The download latency 0 should be greater than zero 
now that I've just downloaded a file.
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.fs.azure.metrics.ITestAzureFileSystemInstrumentation.testMetricsOnBigFileCreateRead(ITestAzureFileSystemInstrumentation.java:303)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:745)
{code}






[jira] [Created] (HADOOP-16617) ITestGetNameSpaceEnabled#testFailedRequestWhenFSNotExist fails with ns disabled account

2019-09-30 Thread Andras Bokor (Jira)
Andras Bokor created HADOOP-16617:
-

 Summary: ITestGetNameSpaceEnabled#testFailedRequestWhenFSNotExist 
fails with ns disabled account
 Key: HADOOP-16617
 URL: https://issues.apache.org/jira/browse/HADOOP-16617
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Andras Bokor
Assignee: Andras Bokor


AzureBlobFileSystemStore#getIsNamespaceEnabled() gets the ACL status of the 
root path to decide whether the account is XNS (hierarchical namespace) or 
not. If it is not, the call returns error code 400, which means the account 
is a non-XNS account.

The problem is that we get 400, and getIsNamespaceEnabled returns false, even 
if the filesystem does not exist. That may seem acceptable, but according to 
the test we should get 404, so the expected behavior appears to be returning 
404.

At this point I am not sure how to fix it. Should we insist on the expected 
behavior and fix it on the server side, or should we just adjust the test to 
expect false in the case of a non-XNS account?
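As a sketch of the behavior under discussion (hypothetical code, not the real AzureBlobFileSystemStore logic; the class and method names here are invented), the probe would classify the account from the status code of the root ACL call, with 404 surfacing as an error instead of being folded into "not namespace enabled":

```java
// Hypothetical sketch of the probe discussed above -- not the actual
// AzureBlobFileSystemStore code. Classifies the account from the HTTP
// status code returned by a getAclStatus call on the root path.
public class NamespaceProbe {
    static boolean isNamespaceEnabled(int aclStatusCode) {
        if (aclStatusCode == 404) {
            // Expected behavior per the test: a missing filesystem should
            // surface as 404 rather than be reported as "not XNS".
            throw new IllegalStateException("404: filesystem does not exist");
        }
        if (aclStatusCode == 400) {
            // Non-XNS accounts reject ACL operations with 400.
            return false;
        }
        return true; // ACL call succeeded: hierarchical-namespace account
    }

    public static void main(String[] args) {
        System.out.println(isNamespaceEnabled(400));
    }
}
```

The open question in this issue is whether the server should behave this way, or whether the test should simply accept false for non-XNS accounts.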






[jira] [Updated] (HADOOP-16617) ITestGetNameSpaceEnabled#testFailedRequestWhenFSNotExist fails with ns disabled account

2019-09-30 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16617:
--
Description: 
AzureBlobFileSystemStore#getIsNamespaceEnabled() gets ACL status of root path 
to decide whether the account is XNS or not. If not it returns with 400 as 
error code which means the account is a non-XNS acc.

The problem is that we get 400 and the getIsNamespaceEnabled return false even 
if the filesystem does not exist which seems ok but according to the test we 
should get 404. So it seems the expected behavior is to return 404.

At this point I am not sure how to fix it. Should we insist to the expected 
behavior and fix it on server side or we just adjust the test to expect false 
in case of non XNS account?

  was:
AzureBlobFileSystemStore#getIsNamespaceEnabled() gets ACL status of root path 
to decide whether the account is XNS or not. If not it return with 400 as error 
code which means the account is a non-XNS acc.

The problem is that we get 400 and the getIsNamespaceEnabled return false even 
if the filesystem does not exist which seems ok but according to the test we 
should get 404. So it seems the expected behavior is to return 404.

At this point I am not sure how to fix it. Should we insist to the expected 
behavior and fix it on server side or we just adjust the test to expect false 
in case of non XNS account?


> ITestGetNameSpaceEnabled#testFailedRequestWhenFSNotExist fails with ns 
> disabled account
> ---
>
> Key: HADOOP-16617
> URL: https://issues.apache.org/jira/browse/HADOOP-16617
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>
> AzureBlobFileSystemStore#getIsNamespaceEnabled() gets the ACL status of the 
> root path to decide whether the account is XNS (hierarchical namespace) or 
> not. If it is not, the call returns error code 400, which means the account 
> is a non-XNS account.
> The problem is that we get 400, and getIsNamespaceEnabled returns false, 
> even if the filesystem does not exist. That may seem acceptable, but 
> according to the test we should get 404, so the expected behavior appears 
> to be returning 404.
> At this point I am not sure how to fix it. Should we insist on the expected 
> behavior and fix it on the server side, or should we just adjust the test 
> to expect false in the case of a non-XNS account?






[jira] [Updated] (HADOOP-16616) ITestAzureFileSystemInstrumentation#testMetricsOnBigFileCreateRead fails

2019-09-30 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16616:
--
Component/s: fs/azure

> ITestAzureFileSystemInstrumentation#testMetricsOnBigFileCreateRead fails
> 
>
> Key: HADOOP-16616
> URL: https://issues.apache.org/jira/browse/HADOOP-16616
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Reporter: Andras Bokor
>Priority: Major
>
> {code:java}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 238.687 s <<< FAILURE! - in 
> org.apache.hadoop.fs.azure.metrics.ITestAzureFileSystemInstrumentation
> [ERROR] 
> testMetricsOnBigFileCreateRead(org.apache.hadoop.fs.azure.metrics.ITestAzureFileSystemInstrumentation)
>   Time elapsed: 238.5 s  <<< FAILURE!
> java.lang.AssertionError: The download latency 0 should be greater than zero 
> now that I've just downloaded a file.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.fs.azure.metrics.ITestAzureFileSystemInstrumentation.testMetricsOnBigFileCreateRead(ITestAzureFileSystemInstrumentation.java:303)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Updated] (HADOOP-16616) ITestAzureFileSystemInstrumentation#testMetricsOnBigFileCreateRead fails

2019-09-30 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16616:
--
Affects Version/s: 3.2.1

> ITestAzureFileSystemInstrumentation#testMetricsOnBigFileCreateRead fails
> 
>
> Key: HADOOP-16616
> URL: https://issues.apache.org/jira/browse/HADOOP-16616
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Affects Versions: 3.2.1
>Reporter: Andras Bokor
>Priority: Major
>
> {code:java}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 238.687 s <<< FAILURE! - in 
> org.apache.hadoop.fs.azure.metrics.ITestAzureFileSystemInstrumentation
> [ERROR] 
> testMetricsOnBigFileCreateRead(org.apache.hadoop.fs.azure.metrics.ITestAzureFileSystemInstrumentation)
>   Time elapsed: 238.5 s  <<< FAILURE!
> java.lang.AssertionError: The download latency 0 should be greater than zero 
> now that I've just downloaded a file.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.fs.azure.metrics.ITestAzureFileSystemInstrumentation.testMetricsOnBigFileCreateRead(ITestAzureFileSystemInstrumentation.java:303)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Updated] (HADOOP-16617) ITestGetNameSpaceEnabled#testFailedRequestWhenFSNotExist fails with ns disabled account

2019-09-30 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16617:
--
Component/s: test
 fs/azure

> ITestGetNameSpaceEnabled#testFailedRequestWhenFSNotExist fails with ns 
> disabled account
> ---
>
> Key: HADOOP-16617
> URL: https://issues.apache.org/jira/browse/HADOOP-16617
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>
> AzureBlobFileSystemStore#getIsNamespaceEnabled() gets the ACL status of the 
> root path to decide whether the account is XNS (hierarchical namespace) or 
> not. If it is not, the call returns error code 400, which means the account 
> is a non-XNS account.
> The problem is that we get 400, and getIsNamespaceEnabled returns false, 
> even if the filesystem does not exist. That may seem acceptable, but 
> according to the test we should get 404, so the expected behavior appears 
> to be returning 404.
> At this point I am not sure how to fix it. Should we insist on the expected 
> behavior and fix it on the server side, or should we just adjust the test 
> to expect false in the case of a non-XNS account?






[jira] [Updated] (HADOOP-16617) ITestGetNameSpaceEnabled#testFailedRequestWhenFSNotExist fails with ns disabled account

2019-09-30 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16617:
--
Affects Version/s: 3.2.1

> ITestGetNameSpaceEnabled#testFailedRequestWhenFSNotExist fails with ns 
> disabled account
> ---
>
> Key: HADOOP-16617
> URL: https://issues.apache.org/jira/browse/HADOOP-16617
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Affects Versions: 3.2.1
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>
> AzureBlobFileSystemStore#getIsNamespaceEnabled() gets the ACL status of the 
> root path to decide whether the account is XNS (hierarchical namespace) or 
> not. If it is not, the call returns error code 400, which means the account 
> is a non-XNS account.
> The problem is that we get 400, and getIsNamespaceEnabled returns false, 
> even if the filesystem does not exist. That may seem acceptable, but 
> according to the test we should get 404, so the expected behavior appears 
> to be returning 404.
> At this point I am not sure how to fix it. Should we insist on the expected 
> behavior and fix it on the server side, or should we just adjust the test 
> to expect false in the case of a non-XNS account?






[jira] [Created] (HADOOP-16710) testing_azure.md documentation is misleading

2019-11-14 Thread Andras Bokor (Jira)
Andras Bokor created HADOOP-16710:
-

 Summary: testing_azure.md documentation is misleading
 Key: HADOOP-16710
 URL: https://issues.apache.org/jira/browse/HADOOP-16710
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Affects Versions: 3.2.0
Reporter: Andras Bokor
Assignee: Andras Bokor


testing_azure.md states that "-Dparallel-tests" will run all the integration 
tests in parallel.

But in fact using -Dparallel-tests without any value actually skips the 
integration tests and runs only the unit tests.

The reason is that activating a profile that can run the ITs in parallel 
requires the parallel-tests property to have a value (abfs, wasb, or both). 
The sequential-tests profile uses !parallel-tests as its activation 
condition, which means the property must not be set at all.

Please check the output of the help:active-profiles command:

 
{code:java}
cd hadoop-tools/hadoop-azure
andrasbokor$ mvn help:active-profiles -Dparallel-tests=abfs 
- parallel-tests-abfs (source: org.apache.hadoop:hadoop-azure:3.3.0-SNAPSHOT) 
- os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT) 
- hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT) {code}
{code:java}
andrasbokor$ mvn help:active-profiles -Dparallel-tests
- os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
- hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
{code}
{code:java}
mvn help:active-profiles
- sequential-tests (source: org.apache.hadoop:hadoop-azure:3.3.0-SNAPSHOT)
- os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
- hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT){code}
The help:active-profiles output shows that a bare -Dparallel-tests does not 
activate any IT-related profile, so all the integration tests are skipped 
during the verify phase.
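For reference, profile activation of this shape looks roughly like the following. This is a simplified sketch, not the literal stanzas; the authoritative definitions live in hadoop-tools/hadoop-azure/pom.xml:

```xml
<!-- Simplified sketch of the activation stanzas described above; see
     hadoop-tools/hadoop-azure/pom.xml for the real definitions. -->
<profiles>
  <profile>
    <id>parallel-tests-abfs</id>
    <activation>
      <property>
        <!-- Activates only for -Dparallel-tests=abfs; a bare -Dparallel-tests
             (which Maven treats as parallel-tests=true) does not match. -->
        <name>parallel-tests</name>
        <value>abfs</value>
      </property>
    </activation>
  </profile>
  <profile>
    <id>sequential-tests</id>
    <activation>
      <property>
        <!-- The leading '!' means: active only when the property is absent,
             so even a bare -Dparallel-tests deactivates this profile. -->
        <name>!parallel-tests</name>
      </property>
    </activation>
  </profile>
</profiles>
```

This explains the transcript above: a bare -Dparallel-tests deactivates sequential-tests without activating any parallel profile, so no IT profile runs at all.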






[jira] [Updated] (HADOOP-16710) testing_azure.md documentation is misleading

2019-11-14 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16710:
--
Attachment: HADOOP-16710.001.patch

> testing_azure.md documentation is misleading
> 
>
> Key: HADOOP-16710
> URL: https://issues.apache.org/jira/browse/HADOOP-16710
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-16710.001.patch
>
>
> testing_azure.md states that "-Dparallel-tests" will run all the integration 
> tests in parallel.
> But in fact using -Dparallel-tests without any value actually skips the 
> integration tests and runs only the unit tests.
> The reason is that activating a profile that can run the ITs in parallel 
> requires the parallel-tests property to have a value (abfs, wasb, or both). 
> The sequential-tests profile uses !parallel-tests as its activation 
> condition, which means the property must not be set at all.
> Please check the output of the help:active-profiles command:
>  
> {code:java}
> cd hadoop-tools/hadoop-azure
> andrasbokor$ mvn help:active-profiles -Dparallel-tests=abfs 
> - parallel-tests-abfs (source: org.apache.hadoop:hadoop-azure:3.3.0-SNAPSHOT) 
> - os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT) 
> - hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT) {code}
> {code:java}
> andrasbokor$ mvn help:active-profiles -Dparallel-tests
> - os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
> - hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
> {code}
> {code:java}
> mvn help:active-profiles
> - sequential-tests (source: org.apache.hadoop:hadoop-azure:3.3.0-SNAPSHOT)
> - os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
> - hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT){code}
> The help:active-profiles output shows that a bare -Dparallel-tests does not 
> activate any IT-related profile, so all the integration tests are skipped 
> during the verify phase.






[jira] [Updated] (HADOOP-16710) testing_azure.md documentation is misleading

2019-11-14 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16710:
--
Status: Patch Available  (was: Open)

> testing_azure.md documentation is misleading
> 
>
> Key: HADOOP-16710
> URL: https://issues.apache.org/jira/browse/HADOOP-16710
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-16710.001.patch
>
>
> testing_azure.md states that "-Dparallel-tests" will run all the integration 
> tests in parallel.
> But in fact using -Dparallel-tests without any value actually skips the 
> integration tests and runs only the unit tests.
> The reason is that activating a profile that can run the ITs in parallel 
> requires the parallel-tests property to have a value (abfs, wasb, or both). 
> The sequential-tests profile uses !parallel-tests as its activation 
> condition, which means the property must not be set at all.
> Please check the output of the help:active-profiles command:
>  
> {code:java}
> cd hadoop-tools/hadoop-azure
> andrasbokor$ mvn help:active-profiles -Dparallel-tests=abfs 
> - parallel-tests-abfs (source: org.apache.hadoop:hadoop-azure:3.3.0-SNAPSHOT) 
> - os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT) 
> - hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT) {code}
> {code:java}
> andrasbokor$ mvn help:active-profiles -Dparallel-tests
> - os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
> - hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
> {code}
> {code:java}
> mvn help:active-profiles
> - sequential-tests (source: org.apache.hadoop:hadoop-azure:3.3.0-SNAPSHOT)
> - os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
> - hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT){code}
> The help:active-profiles output shows that a bare -Dparallel-tests does not 
> activate any IT-related profile, so all the integration tests are skipped 
> during the verify phase.






[jira] [Updated] (HADOOP-16710) testing_azure.md documentation is misleading

2019-11-14 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16710:
--
Component/s: test

> testing_azure.md documentation is misleading
> 
>
> Key: HADOOP-16710
> URL: https://issues.apache.org/jira/browse/HADOOP-16710
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Affects Versions: 3.2.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-16710.001.patch
>
>
> testing_azure.md states that "-Dparallel-tests" will run all the integration 
> tests in parallel.
> But in fact using -Dparallel-tests without any value actually skips the 
> integration tests and runs only the unit tests.
> The reason is that activating a profile that can run the ITs in parallel 
> requires the parallel-tests property to have a value (abfs, wasb, or both). 
> The sequential-tests profile uses !parallel-tests as its activation 
> condition, which means the property must not be set at all.
> Please check the output of the help:active-profiles command:
>  
> {code:java}
> cd hadoop-tools/hadoop-azure
> andrasbokor$ mvn help:active-profiles -Dparallel-tests=abfs 
> - parallel-tests-abfs (source: org.apache.hadoop:hadoop-azure:3.3.0-SNAPSHOT) 
> - os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT) 
> - hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT) {code}
> {code:java}
> andrasbokor$ mvn help:active-profiles -Dparallel-tests
> - os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
> - hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
> {code}
> {code:java}
> mvn help:active-profiles
> - sequential-tests (source: org.apache.hadoop:hadoop-azure:3.3.0-SNAPSHOT)
> - os.mac (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT)
> - hbase1 (source: org.apache.hadoop:hadoop-project:3.3.0-SNAPSHOT){code}
> The help:active-profiles output shows that a bare -Dparallel-tests does not 
> activate any IT-related profile, so all the integration tests are skipped 
> during the verify phase.
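
A sketch of the two invocations side by side (command lines only; assumes the hadoop-azure module checkout and the profile names quoted in the help:active-profiles output above):

```shell
# Profile names come from the help:active-profiles output in this report.
cd hadoop-tools/hadoop-azure

# Value given: activates parallel-tests-abfs, so the ITs run in parallel
# during the verify phase.
mvn -Dparallel-tests=abfs clean verify

# Bare flag: no parallel IT profile activates, and sequential-tests
# (activated by !parallel-tests) is also off, so the integration tests
# are silently skipped.
mvn -Dparallel-tests clean verify
```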






[jira] [Reopened] (HADOOP-16405) Upgrade Wildfly Openssl version to 1.0.7.Final

2019-08-08 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reopened HADOOP-16405:
---

> Upgrade Wildfly Openssl version to 1.0.7.Final
> --
>
> Key: HADOOP-16405
> URL: https://issues.apache.org/jira/browse/HADOOP-16405
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/azure
>Affects Versions: 3.2.0
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
>Priority: Major
> Fix For: 3.3.0
>
>
> Upgrade Wildfly Openssl version to 1.0.7.Final. This version has SNI support 
> which is essential for firewall enabled clusters along with many stability 
> related fixes.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)




[jira] [Resolved] (HADOOP-16405) Upgrade Wildfly Openssl version to 1.0.7.Final

2019-08-08 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-16405.
---
Resolution: Duplicate

> Upgrade Wildfly Openssl version to 1.0.7.Final
> --
>
> Key: HADOOP-16405
> URL: https://issues.apache.org/jira/browse/HADOOP-16405
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/azure
>Affects Versions: 3.2.0
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
>Priority: Major
> Fix For: 3.3.0
>
>
> Upgrade Wildfly Openssl version to 1.0.7.Final. This version has SNI support 
> which is essential for firewall enabled clusters along with many stability 
> related fixes.






[jira] [Resolved] (HADOOP-3353) DataNode.run() join() and shutdown() ought to have synchronized access to dataNodeThread

2019-08-08 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-3353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-3353.
--
Resolution: Invalid

This code has totally changed in the past 11 years.

> DataNode.run() join()  and shutdown() ought to have  synchronized access to 
> dataNodeThread
> --
>
> Key: HADOOP-3353
> URL: https://issues.apache.org/jira/browse/HADOOP-3353
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Steve Loughran
>Priority: Major
>
> Looking at the DataNode.run() and join() methods, they are manipulating the 
> state of the dataNodeThread:
> void join() {
>   if (dataNodeThread != null) {
>     try {
>       dataNodeThread.join();
>     } catch (InterruptedException e) {}
>   }
> }
> There's something similar in shutdown()
> This could lead to race conditions on shutdown, where the check passes and 
> then the reference is null when the next method is invoked. 
> Marking as major, since race conditions are always trouble and hard to test.
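
One hedged way to close such a race (a minimal stand-alone sketch with hypothetical names like threadLock; not the actual DataNode fix) is to read the field under a lock into a local, and join on that snapshot:

```java
// Sketch: the null check and the join() act on the same snapshot of the
// field, so a concurrent writer cannot null it between the two steps.
public class DataNodeSketch {
    private Thread dataNodeThread;            // field as in the report
    private final Object threadLock = new Object();

    void join() {
        Thread t;
        synchronized (threadLock) {           // read the field under the lock
            t = dataNodeThread;
        }
        if (t != null) {                      // local copy cannot change under us
            try {
                t.join();                     // join outside the lock so shutdown() is not blocked
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    void shutdown() {
        synchronized (threadLock) {
            dataNodeThread = null;            // writers take the same lock
        }
    }

    public static void main(String[] args) throws Exception {
        DataNodeSketch d = new DataNodeSketch();
        d.dataNodeThread = new Thread(() -> {});  // plain setup, single-threaded here
        d.dataNodeThread.start();
        d.join();
        d.shutdown();
        d.join();                             // safe after the field is cleared
        System.out.println("ok");
    }
}
```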






[jira] [Resolved] (HADOOP-12660) TestZKDelegationTokenSecretManager.testMultiNodeOperations failing

2019-08-08 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-12660.
---
Resolution: Cannot Reproduce

> TestZKDelegationTokenSecretManager.testMultiNodeOperations failing
> --
>
> Key: HADOOP-12660
> URL: https://issues.apache.org/jira/browse/HADOOP-12660
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha, test
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins Java8
>Reporter: Steve Loughran
>Priority: Major
>
> Test failure
> {code}
> java.lang.AssertionError: Expected InvalidToken
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.security.token.delegation.TestZKDelegationTokenSecretManager.testMultiNodeOperations(TestZKDelegationTokenSecretManager.java:127)
> {code}






[jira] [Resolved] (HADOOP-12342) Use SLF4j in ProtobufRpcEngine class

2019-08-08 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-12342.
---
Resolution: Duplicate

> Use SLF4j in ProtobufRpcEngine class
> 
>
> Key: HADOOP-12342
> URL: https://issues.apache.org/jira/browse/HADOOP-12342
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>
> There is a considerable amount of debug/trace-level logging in this class. 
> This ticket is opened to convert it to SLF4J for better performance.






[jira] [Created] (HADOOP-16771) Checkstyle version is not compatible with IDEA's checkstyle plugin

2019-12-19 Thread Andras Bokor (Jira)
Andras Bokor created HADOOP-16771:
-

 Summary: Checkstyle version is not compatible with IDEA's 
checkstyle plugin
 Key: HADOOP-16771
 URL: https://issues.apache.org/jira/browse/HADOOP-16771
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Andras Bokor
Assignee: Andras Bokor
 Fix For: 3.1.0, 3.0.4


After upgrading to the latest IDEA, the IDE throws error messages every few 
minutes, like:
{code:java}
The Checkstyle rules file could not be parsed.
SuppressionCommentFilter is not allowed as a child in Checker
The file has been blacklisted for 60s.{code}
This is caused by some backward incompatible changes in checkstyle source code:
 [http://checkstyle.sourceforge.net/releasenotes.html]
 * 8.1: Make SuppressionCommentFilter and SuppressWithNearbyCommentFilter 
children of TreeWalker.
 * 8.2: remove FileContentsHolder module as FileContents object is available 
for filters on TreeWalker in TreeWalkerAudit Event.

IDEA uses checkstyle 8.8

We should upgrade our checkstyle version to be compatible with IDEA's 
checkstyle plugin.
 Also it's a good time to upgrade maven-checkstyle-plugin as well to brand new 
3.0.






[jira] [Updated] (HADOOP-16771) Checkstyle version is not compatible with IDEA's checkstyle plugin

2019-12-19 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16771:
--
Description: 
After upgrading to the latest IDEA, the IDE throws an error message when I try 
to load the checkstyle XML:
{code:java}
TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
Module' section for this Check in web documentation if Check is standard.{code}
[This is caused by some backward incompatible changes in checkstyle source 
code|https://github.com/checkstyle/checkstyle/issues/2116]

IDEA uses checkstyle 8.26

We should upgrade our checkstyle version to be compatible with IDEA's 
checkstyle plugin.
 Also it's a good time to upgrade maven-checkstyle-plugin as well to brand new 
3.1.

  was:
After upgrading to the latest IDEA the IDE throws error messages in every few 
minutes like
{code:java}
The Checkstyle rules file could not be parsed.
SuppressionCommentFilter is not allowed as a child in Checker
The file has been blacklisted for 60s.{code}
This is caused by some backward incompatible changes in checkstyle source code:
 [http://checkstyle.sourceforge.net/releasenotes.html]
 * 8.1: Make SuppressionCommentFilter and SuppressWithNearbyCommentFilter 
children of TreeWalker.
 * 8.2: remove FileContentsHolder module as FileContents object is available 
for filters on TreeWalker in TreeWalkerAudit Event.

IDEA uses checkstyle 8.8

We should upgrade our checkstyle version to be compatible with IDEA's 
checkstyle plugin.
 Also it's a good time to upgrade maven-checkstyle-plugin as well to brand new 
3.0.


> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-16771
> URL: https://issues.apache.org/jira/browse/HADOOP-16771
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Fix For: 3.1.0, 3.0.4
>
>
> After upgrading to the latest IDEA, the IDE throws an error message when I 
> try to load the checkstyle XML:
> {code:java}
> TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
> Module' section for this Check in web documentation if Check is 
> standard.{code}
> [This is caused by some backward incompatible changes in checkstyle source 
> code|https://github.com/checkstyle/checkstyle/issues/2116]
> IDEA uses checkstyle 8.26
> We should upgrade our checkstyle version to be compatible with IDEA's 
> checkstyle plugin.
>  Also it's a good time to upgrade maven-checkstyle-plugin as well to brand 
> new 3.1.






[jira] [Updated] (HADOOP-16771) Checkstyle version is not compatible with IDEA's checkstyle plugin

2019-12-19 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16771:
--
Description: 
After upgrading to the latest IDEA, the IDE throws an error message when I try 
to load the checkstyle XML:
{code:java}
TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
Module' section for this Check in web documentation if Check is standard.{code}
[This is caused by some backward incompatible changes in checkstyle source 
code|https://github.com/checkstyle/checkstyle/issues/2116]

IDEA uses checkstyle 8.26

We should upgrade our checkstyle version to be compatible with IDEA's 
checkstyle plugin which is the latest.
 Also it's a good time to upgrade maven-checkstyle-plugin as well to brand new 
3.1.

  was:
After upgrading to the latest IDEA the IDE throws error message when I try to 
load the checkstyle xml
{code:java}
TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
Module' section for this Check in web documentation if Check is standard.{code}
[This is caused by some backward incompatible changes in checkstyle source 
code|https://github.com/checkstyle/checkstyle/issues/2116]

IDEA uses checkstyle 8.26

We should upgrade our checkstyle version to be compatible with IDEA's 
checkstyle plugin.
 Also it's a good time to upgrade maven-checkstyle-plugin as well to brand new 
3.1.


> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-16771
> URL: https://issues.apache.org/jira/browse/HADOOP-16771
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Fix For: 3.1.0, 3.0.4
>
>
> After upgrading to the latest IDEA, the IDE throws an error message when I 
> try to load the checkstyle XML:
> {code:java}
> TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
> Module' section for this Check in web documentation if Check is 
> standard.{code}
> [This is caused by some backward incompatible changes in checkstyle source 
> code|https://github.com/checkstyle/checkstyle/issues/2116]
> IDEA uses checkstyle 8.26
> We should upgrade our checkstyle version to be compatible with IDEA's 
> checkstyle plugin which is the latest.
>  Also it's a good time to upgrade maven-checkstyle-plugin as well to brand 
> new 3.1.






[jira] [Updated] (HADOOP-16771) Checkstyle version is not compatible with IDEA's checkstyle plugin

2019-12-19 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16771:
--
Attachment: HADOOP-16771.001.patch

> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-16771
> URL: https://issues.apache.org/jira/browse/HADOOP-16771
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Fix For: 3.1.0, 3.0.4
>
> Attachments: HADOOP-16771.001.patch
>
>
> After upgrading to the latest IDEA, the IDE throws an error message when I 
> try to load the checkstyle XML:
> {code:java}
> TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
> Module' section for this Check in web documentation if Check is 
> standard.{code}
> [This is caused by some backward incompatible changes in checkstyle source 
> code|https://github.com/checkstyle/checkstyle/issues/2116]
> IDEA uses checkstyle 8.26
> We should upgrade our checkstyle version to be compatible with IDEA's 
> checkstyle plugin which is the latest.
>  Also it's a good time to upgrade maven-checkstyle-plugin as well to brand 
> new 3.1.






[jira] [Updated] (HADOOP-16771) Checkstyle version is not compatible with IDEA's checkstyle plugin

2019-12-19 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16771:
--
Affects Version/s: 3.3.0

> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-16771
> URL: https://issues.apache.org/jira/browse/HADOOP-16771
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-16771.001.patch
>
>
> After upgrading to the latest IDEA, the IDE throws an error message when I 
> try to load the checkstyle XML:
> {code:java}
> TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
> Module' section for this Check in web documentation if Check is 
> standard.{code}
> [This is caused by some backward incompatible changes in checkstyle source 
> code|https://github.com/checkstyle/checkstyle/issues/2116]
> IDEA uses checkstyle 8.26
> We should upgrade our checkstyle version to be compatible with IDEA's 
> checkstyle plugin which is the latest.
>  Also it's a good time to upgrade maven-checkstyle-plugin as well to brand 
> new 3.1.






[jira] [Updated] (HADOOP-16771) Checkstyle version is not compatible with IDEA's checkstyle plugin

2019-12-19 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16771:
--
   Fix Version/s: (was: 3.0.4)
  (was: 3.1.0)
Hadoop Flags:   (was: Reviewed)
Release Note: Updated checkstyle to 8.26 and updated 
maven-checkstyle-plugin to 3.1.0.  (was: Updated checkstyle to 8.8 and updated 
maven-checkstyle-plugin to 3.0.0.)
Target Version/s:   (was: 3.2.0)
  Status: Patch Available  (was: Open)

> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-16771
> URL: https://issues.apache.org/jira/browse/HADOOP-16771
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-16771.001.patch
>
>
> After upgrading to the latest IDEA, the IDE throws an error message when I 
> try to load the checkstyle XML:
> {code:java}
> TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
> Module' section for this Check in web documentation if Check is 
> standard.{code}
> [This is caused by some backward incompatible changes in checkstyle source 
> code|https://github.com/checkstyle/checkstyle/issues/2116]
> IDEA uses checkstyle 8.26
> We should upgrade our checkstyle version to be compatible with IDEA's 
> checkstyle plugin which is the latest.
>  Also it's a good time to upgrade maven-checkstyle-plugin as well to brand 
> new 3.1.






[jira] [Updated] (HADOOP-16771) Checkstyle version is not compatible with IDEA's checkstyle plugin

2019-12-19 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16771:
--
Component/s: build

> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-16771
> URL: https://issues.apache.org/jira/browse/HADOOP-16771
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-16771.001.patch
>
>
> After upgrading to the latest IDEA, the IDE throws an error message when I 
> try to load the checkstyle XML:
> {code:java}
> TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
> Module' section for this Check in web documentation if Check is 
> standard.{code}
> [This is caused by some backward incompatible changes in checkstyle source 
> code|https://github.com/checkstyle/checkstyle/issues/2116]
> IDEA uses checkstyle 8.26
> We should upgrade our checkstyle version to be compatible with IDEA's 
> checkstyle plugin which is the latest.
>  Also it's a good time to upgrade maven-checkstyle-plugin as well to brand 
> new 3.1.






[jira] [Updated] (HADOOP-16771) Checkstyle version is not compatible with IDEA's checkstyle plugin

2019-12-19 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-16771:
--
Description: 
After upgrading to the latest IDEA, the IDE throws an error message when I try 
to load the checkstyle XML:
{code:java}
TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
Module' section for this Check in web documentation if Check is standard.{code}
[This is caused by some backward incompatible changes in checkstyle source 
code|https://github.com/checkstyle/checkstyle/issues/2116]

IDEA uses checkstyle 8.26

We should upgrade our checkstyle version to be compatible with IDEA's 
checkstyle plugin which is the latest.
 Also it's a good time to upgrade maven-checkstyle-plugin as well to 3.1.

  was:
After upgrading to the latest IDEA the IDE throws error message when I try to 
load the checkstyle xml
{code:java}
TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
Module' section for this Check in web documentation if Check is standard.{code}
[This is caused by some backward incompatible changes in checkstyle source 
code|https://github.com/checkstyle/checkstyle/issues/2116]

IDEA uses checkstyle 8.26

We should upgrade our checkstyle version to be compatible with IDEA's 
checkstyle plugin which is the latest.
 Also it's a good time to upgrade maven-checkstyle-plugin as well to brand new 
3.1.


> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-16771
> URL: https://issues.apache.org/jira/browse/HADOOP-16771
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-16771.001.patch
>
>
> After upgrading to the latest IDEA, the IDE throws an error message when I 
> try to load the checkstyle XML:
> {code:java}
> TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
> Module' section for this Check in web documentation if Check is 
> standard.{code}
> [This is caused by some backward incompatible changes in checkstyle source 
> code|https://github.com/checkstyle/checkstyle/issues/2116]
> IDEA uses checkstyle 8.26
> We should upgrade our checkstyle version to be compatible with IDEA's 
> checkstyle plugin which is the latest.
>  Also it's a good time to upgrade maven-checkstyle-plugin as well to 3.1.






[jira] [Commented] (HADOOP-16771) Update checkstyle to 8.26 and maven-checkstyle-plugin to 3.1.0

2019-12-20 Thread Andras Bokor (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17000762#comment-17000762
 ] 

Andras Bokor commented on HADOOP-16771:
---

Thanks, [~aajisaka]!

> Update checkstyle to 8.26 and maven-checkstyle-plugin to 3.1.0
> --
>
> Key: HADOOP-16771
> URL: https://issues.apache.org/jira/browse/HADOOP-16771
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HADOOP-16771.001.patch
>
>
> After upgrading to the latest IDEA, the IDE throws an error message when I 
> try to load the checkstyle XML:
> {code:java}
> TreeWalker is not allowed as a parent of LineLength Please review 'Parent 
> Module' section for this Check in web documentation if Check is 
> standard.{code}
> [This is caused by some backward incompatible changes in checkstyle source 
> code|https://github.com/checkstyle/checkstyle/issues/2116]
> IDEA uses checkstyle 8.26
> We should upgrade our checkstyle version to be compatible with IDEA's 
> checkstyle plugin which is the latest.
>  Also it's a good time to upgrade maven-checkstyle-plugin as well to 3.1.






[jira] [Resolved] (HADOOP-6377) ChecksumFileSystem.getContentSummary throws NPE when directory contains inaccessible directories

2020-01-08 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-6377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-6377.
--
Resolution: Duplicate

> ChecksumFileSystem.getContentSummary throws NPE when directory contains 
> inaccessible directories
> 
>
> Key: HADOOP-6377
> URL: https://issues.apache.org/jira/browse/HADOOP-6377
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.21.0, 0.22.0
>Reporter: Todd Lipcon
>Assignee: Andras Bokor
>Priority: Major
>
> When getContentSummary is called on a path that contains an unreadable 
> directory, it throws NPE, since RawLocalFileSystem.listStatus(Path) returns 
> null when File.list() returns null.
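
The underlying java.io.File behavior can be seen in a tiny stand-alone sketch (hypothetical class name): list() returns null rather than an empty array when the directory cannot be read or does not exist, so callers that iterate the result without a null check throw NPE:

```java
import java.io.File;

// Sketch of the failure mode: File.list() returns null, not an empty
// array, when the children cannot be enumerated.
public class ListNullSketch {
    public static void main(String[] args) {
        File notADir = new File("definitely-missing-path");
        String[] children = notADir.list();   // null here, not new String[0]
        System.out.println(children == null ? "null" : "len=" + children.length);
        // Iterating 'children' directly (for (String c : children)) would NPE.
    }
}
```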






[jira] [Updated] (HADOOP-12005) Switch off checkstyle file length warnings

2017-10-06 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-12005:
--
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> Switch off checkstyle file length warnings
> --
>
> Key: HADOOP-12005
> URL: https://issues.apache.org/jira/browse/HADOOP-12005
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Arpit Agarwal
>Assignee: Andras Bokor
> Attachments: HADOOP-12005.001.patch
>
>
> We have many large files over 2000 lines. checkstyle warns every time there 
> is a change to one of these files.
> Let's switch off this check or increase the limit to reduce the number of 
> non-actionable -1s from Jenkins.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (HADOOP-8621) FileUtil.symLink fails if spaces in path

2017-10-06 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16194318#comment-16194318
 ] 

Andras Bokor commented on HADOOP-8621:
--

HADOOP-8952 do the same change. It should not be an issue now.

> FileUtil.symLink fails if spaces in path
> 
>
> Key: HADOOP-8621
> URL: https://issues.apache.org/jira/browse/HADOOP-8621
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Robert Fuller
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: hadoop-8621.txt, patch.txt
>
>
> the 'ln -s' command fails in the current implementation if there is a space 
> in the path for the target or linkname. A small change resolves the issue.
> String cmd = "ln -s " + target + " " + linkname;
> //Process p = Runtime.getRuntime().exec(cmd, null); //broken
> Process p = Runtime.getRuntime().exec(new 
> String[]{"ln","-s",target,linkname}, null);
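
The array form can be verified with a small stand-alone sketch (hypothetical class name; assumes a POSIX system with ln on the PATH): each array element becomes one argv entry, so paths containing spaces are not split by the shell-style tokenizer that the single-string exec variant uses:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: symlink creation via exec(String[]) with spaces in every path.
public class SymlinkSketch {
    public static void main(String[] args) throws IOException, InterruptedException {
        Path dir = Files.createTempDirectory("space dir");       // space in the directory name
        Path target = Files.createFile(dir.resolve("the target"));
        Path link = dir.resolve("the link");
        // Each element is passed as a single argument; no quoting needed.
        Process p = Runtime.getRuntime().exec(
                new String[] {"ln", "-s", target.toString(), link.toString()});
        int rc = p.waitFor();
        System.out.println(rc == 0 && Files.isSymbolicLink(link) ? "ok" : "fail");
    }
}
```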






[jira] [Updated] (HADOOP-8621) FileUtil.symLink fails if spaces in path

2017-10-06 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-8621:
-
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> FileUtil.symLink fails if spaces in path
> 
>
> Key: HADOOP-8621
> URL: https://issues.apache.org/jira/browse/HADOOP-8621
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Robert Fuller
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: hadoop-8621.txt, patch.txt
>
>
> the 'ln -s' command fails in the current implementation if there is a space 
> in the path for the target or linkname. A small change resolves the issue.
> String cmd = "ln -s " + target + " " + linkname;
> //Process p = Runtime.getRuntime().exec(cmd, null); //broken
> Process p = Runtime.getRuntime().exec(new 
> String[]{"ln","-s",target,linkname}, null);






[jira] [Comment Edited] (HADOOP-8621) FileUtil.symLink fails if spaces in path

2017-10-06 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16194318#comment-16194318
 ] 

Andras Bokor edited comment on HADOOP-8621 at 10/6/17 9:13 AM:
---

HADOOP-8562 does the same change. It should not be an issue now.


was (Author: boky01):
HADOOP-8952 do the same change. It should not be an issue now.

> FileUtil.symLink fails if spaces in path
> 
>
> Key: HADOOP-8621
> URL: https://issues.apache.org/jira/browse/HADOOP-8621
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Robert Fuller
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: hadoop-8621.txt, patch.txt
>
>
> the 'ln -s' command fails in the current implementation if there is a space 
> in the path for the target or linkname. A small change resolves the issue.
> String cmd = "ln -s " + target + " " + linkname;
> //Process p = Runtime.getRuntime().exec(cmd, null); //broken
> Process p = Runtime.getRuntime().exec(new 
> String[]{"ln","-s",target,linkname}, null);






[jira] [Updated] (HADOOP-5943) IOUtils#copyBytes methods should not close streams that are passed in as parameters

2017-10-06 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-5943:
-
Status: Patch Available  (was: Open)

> IOUtils#copyBytes methods should not close streams that are passed in as 
> parameters
> ---
>
> Key: HADOOP-5943
> URL: https://issues.apache.org/jira/browse/HADOOP-5943
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Hairong Kuang
>Assignee: Andras Bokor
> Attachments: HADOOP-5943.01.patch
>
>
> The following methods in IOUtils close the streams that are passed in as 
> parameters. Calling these methods can easily trigger findbug OBL: Method may 
> fail to clean up stream or resource (OBL_UNSATISFIED_OBLIGATION). A good 
> practice should be to close a stream in the same method where the stream is 
> opened. 
> public static void copyBytes(InputStream in, OutputStream out, int buffSize, 
> boolean close) 
> public static void copyBytes(InputStream in, OutputStream out, Configuration 
> conf, boolean close)
> These methods should be deprecated.






[jira] [Updated] (HADOOP-5943) IOUtils#copyBytes methods should not close streams that are passed in as parameters

2017-10-06 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-5943:
-
Attachment: HADOOP-5943.01.patch

> IOUtils#copyBytes methods should not close streams that are passed in as 
> parameters
> ---
>
> Key: HADOOP-5943
> URL: https://issues.apache.org/jira/browse/HADOOP-5943
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Hairong Kuang
>Assignee: Andras Bokor
> Attachments: HADOOP-5943.01.patch
>
>
> The following methods in IOUtils close the streams that are passed in as 
> parameters. Calling these methods can easily trigger findbug OBL: Method may 
> fail to clean up stream or resource (OBL_UNSATISFIED_OBLIGATION). A good 
> practice should be to close a stream in the same method where the stream is 
> opened. 
> public static void copyBytes(InputStream in, OutputStream out, int buffSize, 
> boolean close) 
> public static void copyBytes(InputStream in, OutputStream out, Configuration 
> conf, boolean close)
> These methods should be deprecated.






[jira] [Commented] (HADOOP-5943) IOUtils#copyBytes methods should not close streams that are passed in as parameters

2017-10-06 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-5943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16194654#comment-16194654
 ] 

Andras Bokor commented on HADOOP-5943:
--

I agree:
* The current logic does not follow the Java 7 idiom; since Java 7, 
try-with-resources is preferred.
* It is misleading. I found some misuses that can cause double {{close()}} 
calls on InputStreams. See HADOOP-14691 for an example.
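The caller-owns-the-streams idiom argued for above can be sketched in plain Java. Note that {{copyBytes}} here is a simplified stand-in for illustration, not the actual Hadoop IOUtils code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyDemo {
    // Sketch of a copy helper that does NOT close its arguments; the caller
    // owns both streams and is responsible for closing them.
    static void copyBytes(InputStream in, OutputStream out, int buffSize)
            throws IOException {
        byte[] buf = new byte[buffSize];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        // The streams are opened and closed in the same scope via
        // try-with-resources, which also satisfies findbugs' OBL analysis.
        try (InputStream in = new ByteArrayInputStream("hello".getBytes());
             OutputStream out = sink) {
            copyBytes(in, out, 4096);
        }
        System.out.println(sink.toString()); // prints "hello"
    }
}
```

Because the helper never closes its arguments, a caller that also closes the streams itself can no longer trigger a double {{close()}}.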

> IOUtils#copyBytes methods should not close streams that are passed in as 
> parameters
> ---
>
> Key: HADOOP-5943
> URL: https://issues.apache.org/jira/browse/HADOOP-5943
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Hairong Kuang
>Assignee: Andras Bokor
> Attachments: HADOOP-5943.01.patch
>
>
> The following methods in IOUtils close the streams that are passed in as 
> parameters. Calling these methods can easily trigger findbug OBL: Method may 
> fail to clean up stream or resource (OBL_UNSATISFIED_OBLIGATION). A good 
> practice should be to close a stream in the same method where the stream is 
> opened. 
> public static void copyBytes(InputStream in, OutputStream out, int buffSize, 
> boolean close) 
> public static void copyBytes(InputStream in, OutputStream out, Configuration 
> conf, boolean close)
> These methods should be deprecated.






[jira] [Updated] (HADOOP-5943) IOUtils#copyBytes methods should not close streams that are passed in as parameters

2017-10-06 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-5943:
-
Target Version/s: 3.0.0

> IOUtils#copyBytes methods should not close streams that are passed in as 
> parameters
> ---
>
> Key: HADOOP-5943
> URL: https://issues.apache.org/jira/browse/HADOOP-5943
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Hairong Kuang
>Assignee: Andras Bokor
> Attachments: HADOOP-5943.01.patch
>
>
> The following methods in IOUtils close the streams that are passed in as 
> parameters. Calling these methods can easily trigger findbug OBL: Method may 
> fail to clean up stream or resource (OBL_UNSATISFIED_OBLIGATION). A good 
> practice should be to close a stream in the same method where the stream is 
> opened. 
> public static void copyBytes(InputStream in, OutputStream out, int buffSize, 
> boolean close) 
> public static void copyBytes(InputStream in, OutputStream out, Configuration 
> conf, boolean close)
> These methods should be deprecated.






[jira] [Updated] (HADOOP-14698) Make copyFromLocal's -t option available for put as well

2017-10-06 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-14698:
--
Attachment: HADOOP-14698.07.patch

Attaching the same patch as 06 to ensure that the flaky 
{{TestCopyFromLocal}} does not hide any issue.

> Make copyFromLocal's -t option available for put as well
> 
>
> Key: HADOOP-14698
> URL: https://issues.apache.org/jira/browse/HADOOP-14698
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-14698.01.patch, HADOOP-14698.02.patch, 
> HADOOP-14698.03.patch, HADOOP-14698.04.patch, HADOOP-14698.05.patch, 
> HADOOP-14698.06.patch, HADOOP-14698.07.patch
>
>
> After HDFS-11786 copyFromLocal and put are no longer identical.
> I do not see any reason why not to add the new feature to put as well.
> Being non-identical makes the understanding/usage of command more complicated 
> from user point of view.






[jira] [Comment Edited] (HADOOP-14698) Make copyFromLocal's -t option available for put as well

2017-10-06 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16194664#comment-16194664
 ] 

Andras Bokor edited comment on HADOOP-14698 at 10/6/17 2:39 PM:


Attaching same patch as 06 was to ensure that the fleaky {{TestCopyFromLocal}} 
does not hide any issue.


was (Author: boky01):
Attaching the the same patch as 06 was to ensure that the fleaky 
{{TestCopyFromLocal}} does not hide any issue.

> Make copyFromLocal's -t option available for put as well
> 
>
> Key: HADOOP-14698
> URL: https://issues.apache.org/jira/browse/HADOOP-14698
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-14698.01.patch, HADOOP-14698.02.patch, 
> HADOOP-14698.03.patch, HADOOP-14698.04.patch, HADOOP-14698.05.patch, 
> HADOOP-14698.06.patch, HADOOP-14698.07.patch
>
>
> After HDFS-11786 copyFromLocal and put are no longer identical.
> I do not see any reason why not to add the new feature to put as well.
> Being non-identical makes the understanding/usage of command more complicated 
> from user point of view.






[jira] [Comment Edited] (HADOOP-14698) Make copyFromLocal's -t option available for put as well

2017-10-06 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16194664#comment-16194664
 ] 

Andras Bokor edited comment on HADOOP-14698 at 10/6/17 2:40 PM:


Attaching the same patch as 06 was to ensure that the flaky 
{{TestCopyFromLocal}} does not hide any issue.


was (Author: boky01):
Attaching same patch as 06 was to ensure that the fleaky {{TestCopyFromLocal}} 
does not hide any issue.

> Make copyFromLocal's -t option available for put as well
> 
>
> Key: HADOOP-14698
> URL: https://issues.apache.org/jira/browse/HADOOP-14698
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-14698.01.patch, HADOOP-14698.02.patch, 
> HADOOP-14698.03.patch, HADOOP-14698.04.patch, HADOOP-14698.05.patch, 
> HADOOP-14698.06.patch, HADOOP-14698.07.patch
>
>
> After HDFS-11786 copyFromLocal and put are no longer identical.
> I do not see any reason why not to add the new feature to put as well.
> Being non-identical makes the understanding/usage of command more complicated 
> from user point of view.






[jira] [Commented] (HADOOP-14698) Make copyFromLocal's -t option available for put as well

2017-10-07 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195666#comment-16195666
 ] 

Andras Bokor commented on HADOOP-14698:
---

JUnit failures are not related.

> Make copyFromLocal's -t option available for put as well
> 
>
> Key: HADOOP-14698
> URL: https://issues.apache.org/jira/browse/HADOOP-14698
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-14698.01.patch, HADOOP-14698.02.patch, 
> HADOOP-14698.03.patch, HADOOP-14698.04.patch, HADOOP-14698.05.patch, 
> HADOOP-14698.06.patch, HADOOP-14698.07.patch
>
>
> After HDFS-11786 copyFromLocal and put are no longer identical.
> I do not see any reason why not to add the new feature to put as well.
> Being non-identical makes the understanding/usage of command more complicated 
> from user point of view.






[jira] [Commented] (HADOOP-13592) Outputs errors and warnings by checkstyle at compile time

2017-10-12 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201686#comment-16201686
 ] 

Andras Bokor commented on HADOOP-13592:
---

Will/should this ever be fixed?
There are too many checkstyle errors/warnings to print at compile time; that 
would make the output useless. My terminal does not even have enough buffer.
Fixing them all is not an option, as discussed above.

> Outputs errors and warnings by checkstyle at compile time
> -
>
> Key: HADOOP-13592
> URL: https://issues.apache.org/jira/browse/HADOOP-13592
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
> Attachments: HADOOP-13592.001.patch
>
>
> Currently, Apache Hadoop has lots checkstyle errors and warnings, but it's 
> not outputted at compile time. This prevents us from fixing the errors and 
> warnings.
> We should output errors and warnings at compile time.






[jira] [Updated] (HADOOP-13592) Outputs errors and warnings by checkstyle at compile time

2017-10-13 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-13592:
--
Status: Open  (was: Patch Available)

> Outputs errors and warnings by checkstyle at compile time
> -
>
> Key: HADOOP-13592
> URL: https://issues.apache.org/jira/browse/HADOOP-13592
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
> Attachments: HADOOP-13592.001.patch
>
>
> Currently, Apache Hadoop has lots checkstyle errors and warnings, but it's 
> not outputted at compile time. This prevents us from fixing the errors and 
> warnings.
> We should output errors and warnings at compile time.






[jira] [Updated] (HADOOP-14698) Make copyFromLocal's -t option available for put as well

2017-10-13 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-14698:
--
Attachment: HADOOP-14698.08.patch

Attached patch 08. Is that what you meant?

> Make copyFromLocal's -t option available for put as well
> 
>
> Key: HADOOP-14698
> URL: https://issues.apache.org/jira/browse/HADOOP-14698
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-14698.01.patch, HADOOP-14698.02.patch, 
> HADOOP-14698.03.patch, HADOOP-14698.04.patch, HADOOP-14698.05.patch, 
> HADOOP-14698.06.patch, HADOOP-14698.07.patch, HADOOP-14698.08.patch
>
>
> After HDFS-11786 copyFromLocal and put are no longer identical.
> I do not see any reason why not to add the new feature to put as well.
> Being non-identical makes the understanding/usage of command more complicated 
> from user point of view.






[jira] [Updated] (HADOOP-5943) IOUtils#copyBytes methods should not close streams that are passed in as parameters

2017-10-13 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-5943:
-
Attachment: HADOOP-5943.02.patch

> IOUtils#copyBytes methods should not close streams that are passed in as 
> parameters
> ---
>
> Key: HADOOP-5943
> URL: https://issues.apache.org/jira/browse/HADOOP-5943
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Hairong Kuang
>Assignee: Andras Bokor
> Attachments: HADOOP-5943.01.patch, HADOOP-5943.02.patch
>
>
> The following methods in IOUtils close the streams that are passed in as 
> parameters. Calling these methods can easily trigger findbug OBL: Method may 
> fail to clean up stream or resource (OBL_UNSATISFIED_OBLIGATION). A good 
> practice should be to close a stream in the same method where the stream is 
> opened. 
> public static void copyBytes(InputStream in, OutputStream out, int buffSize, 
> boolean close) 
> public static void copyBytes(InputStream in, OutputStream out, Configuration 
> conf, boolean close)
> These methods should be deprecated.






[jira] [Updated] (HADOOP-5943) IOUtils#copyBytes methods should not close streams that are passed in as parameters

2017-10-13 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-5943:
-
Attachment: HADOOP-5943.03.patch

> IOUtils#copyBytes methods should not close streams that are passed in as 
> parameters
> ---
>
> Key: HADOOP-5943
> URL: https://issues.apache.org/jira/browse/HADOOP-5943
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Hairong Kuang
>Assignee: Andras Bokor
> Attachments: HADOOP-5943.01.patch, HADOOP-5943.02.patch, 
> HADOOP-5943.03.patch
>
>
> The following methods in IOUtils close the streams that are passed in as 
> parameters. Calling these methods can easily trigger findbug OBL: Method may 
> fail to clean up stream or resource (OBL_UNSATISFIED_OBLIGATION). A good 
> practice should be to close a stream in the same method where the stream is 
> opened. 
> public static void copyBytes(InputStream in, OutputStream out, int buffSize, 
> boolean close) 
> public static void copyBytes(InputStream in, OutputStream out, Configuration 
> conf, boolean close)
> These methods should be deprecated.






[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x

2017-10-13 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203764#comment-16203764
 ] 

Andras Bokor commented on HADOOP-14178:
---

[~ajisakaa],
I found only a few very minor things:
# The declaration of the InfoWithSameName class is now longer than 80 chars 
(it was 79 before the patch)
# InfoWithSameName: in {{return expected.equals((info).name());}} the 
parentheses around {{info}} seem unnecessary

Otherwise looks good.

> Move Mockito up to version 2.x
> --
>
> Key: HADOOP-14178
> URL: https://issues.apache.org/jira/browse/HADOOP-14178
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
> Attachments: HADOOP-14178.001.patch
>
>
> I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 
> since the switch to maven in 2011. 
> Mockito is now at version 2.1, [with lots of Java 8 
> support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. 
> That' s not just defining actions as closures, but in supporting Optional 
> types, mocking methods in interfaces, etc. 
> It's only used for testing, and, *provided there aren't regressions*, cost of 
> upgrade is low. The good news: test tools usually come with good test 
> coverage. The bad: mockito does go deep into java bytecodes.






[jira] [Commented] (HADOOP-5943) IOUtils#copyBytes methods should not close streams that are passed in as parameters

2017-10-13 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-5943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16204018#comment-16204018
 ] 

Andras Bokor commented on HADOOP-5943:
--

JUnit failure is unrelated (YARN-7299).

> IOUtils#copyBytes methods should not close streams that are passed in as 
> parameters
> ---
>
> Key: HADOOP-5943
> URL: https://issues.apache.org/jira/browse/HADOOP-5943
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Hairong Kuang
>Assignee: Andras Bokor
> Attachments: HADOOP-5943.01.patch, HADOOP-5943.02.patch, 
> HADOOP-5943.03.patch
>
>
> The following methods in IOUtils close the streams that are passed in as 
> parameters. Calling these methods can easily trigger findbug OBL: Method may 
> fail to clean up stream or resource (OBL_UNSATISFIED_OBLIGATION). A good 
> practice should be to close a stream in the same method where the stream is 
> opened. 
> public static void copyBytes(InputStream in, OutputStream out, int buffSize, 
> boolean close) 
> public static void copyBytes(InputStream in, OutputStream out, Configuration 
> conf, boolean close)
> These methods should be deprecated.






[jira] [Comment Edited] (HADOOP-5943) IOUtils#copyBytes methods should not close streams that are passed in as parameters

2017-10-13 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-5943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16204018#comment-16204018
 ] 

Andras Bokor edited comment on HADOOP-5943 at 10/13/17 6:45 PM:


Javac warnings are ok.
JUnit failure is unrelated (YARN-7299).


was (Author: boky01):
JUnit failure is unrelated (YARN-7299).

> IOUtils#copyBytes methods should not close streams that are passed in as 
> parameters
> ---
>
> Key: HADOOP-5943
> URL: https://issues.apache.org/jira/browse/HADOOP-5943
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Hairong Kuang
>Assignee: Andras Bokor
> Attachments: HADOOP-5943.01.patch, HADOOP-5943.02.patch, 
> HADOOP-5943.03.patch
>
>
> The following methods in IOUtils close the streams that are passed in as 
> parameters. Calling these methods can easily trigger findbug OBL: Method may 
> fail to clean up stream or resource (OBL_UNSATISFIED_OBLIGATION). A good 
> practice should be to close a stream in the same method where the stream is 
> opened. 
> public static void copyBytes(InputStream in, OutputStream out, int buffSize, 
> boolean close) 
> public static void copyBytes(InputStream in, OutputStream out, Configuration 
> conf, boolean close)
> These methods should be deprecated.






[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x

2017-10-16 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205621#comment-16205621
 ] 

Andras Bokor commented on HADOOP-14178:
---

The 607 javac warnings make sense:

* 385 because the 
[Matchers|https://static.javadoc.io/org.mockito/mockito-core/2.10.0/org/mockito/Matchers.html]
 class has been deprecated; 
[ArgumentMatchers|https://static.javadoc.io/org.mockito/mockito-core/2.10.0/org/mockito/ArgumentMatchers.html]
 should be used instead.
* 169 because of the new WhiteBox.java.
* 6 because org.mockito.runners.MockitoJUnitRunner was moved to 
org.mockito.junit.MockitoJUnitRunner.
* The remaining 47 warnings are due to Java 8's [Target 
Type|https://docs.oracle.com/javase/tutorial/java/generics/genTypeInference.html#target_types]
 feature: in Mockito 2, any() can be used instead of anyObject(), anyList() 
instead of anyListOf(), and so on.

So all 607 warnings come from API changes in Mockito; nothing dangerous.

I think they can be fixed either in this JIRA or in separate JIRA(s); it's up 
to you.
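The Java 8 target-typing point can be illustrated without Mockito itself. The {{any()}} below is a hypothetical stand-in with the same generic shape as ArgumentMatchers.any(), not the real matcher:

```java
import java.util.List;

public class TargetTypeDemo {
    // Generic method shaped like Mockito 2's ArgumentMatchers.any():
    // its type parameter T is inferred from the call site's target type.
    static <T> T any() {
        return null; // a real matcher records an expectation; the value is a placeholder
    }

    static int count(List<String> xs) {
        return xs == null ? 0 : xs.size();
    }

    public static void main(String[] args) {
        // Before Java 8 target typing, a typed variant (anyListOf-style) or an
        // explicit type witness was needed; here T = List<String> is inferred
        // both from the assignment target and from the parameter type.
        List<String> inferred = any();
        int n = count(any());
        System.out.println(inferred == null && n == 0); // prints "true"
    }
}
```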



> Move Mockito up to version 2.x
> --
>
> Key: HADOOP-14178
> URL: https://issues.apache.org/jira/browse/HADOOP-14178
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
> Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, 
> HADOOP-14178.003.patch, HADOOP-14178.004.patch
>
>
> I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 
> since the switch to maven in 2011. 
> Mockito is now at version 2.1, [with lots of Java 8 
> support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. 
> That' s not just defining actions as closures, but in supporting Optional 
> types, mocking methods in interfaces, etc. 
> It's only used for testing, and, *provided there aren't regressions*, cost of 
> upgrade is low. The good news: test tools usually come with good test 
> coverage. The bad: mockito does go deep into java bytecodes.






[jira] [Assigned] (HADOOP-8363) publish Hadoop-* sources and javadoc to maven repositories.

2017-10-20 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reassigned HADOOP-8363:


Assignee: Andras Bokor

> publish Hadoop-* sources and javadoc to maven repositories.
> ---
>
> Key: HADOOP-8363
> URL: https://issues.apache.org/jira/browse/HADOOP-8363
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Jonathan Hsieh
>Assignee: Andras Bokor
>
> I believe the hadoop 1.0.x series does not have the source jars published on 
> maven repos.  
> {code}
> hbase-trunk$ mvn eclipse:eclipse -DdownloadSources -DdownloadJavadocs
> ...
> [INFO] Wrote Eclipse project for "hbase" to /home/jon/proj/hbase-trunk.
> [INFO] 
>Sources for some artifacts are not available.
>List of artifacts without a source archive:
>  o org.apache.hadoop:hadoop-core:1.0.2
>  o org.apache.hadoop:hadoop-test:1.0.2
> {code}
> It would be great if the poms were setup so that this would pull in the 
> source jars as well!  I believe this is in place for the 0.23/2.x release 
> lines.






[jira] [Assigned] (HADOOP-9851) dfs -chown does not like "+" plus sign in user name

2017-10-20 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reassigned HADOOP-9851:


Assignee: Andras Bokor

> dfs -chown does not like "+" plus sign in user name
> ---
>
> Key: HADOOP-9851
> URL: https://issues.apache.org/jira/browse/HADOOP-9851
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.5-alpha
>Reporter: Marc Villacorta
>Assignee: Andras Bokor
>Priority: Minor
>
> I intend to set user and group:
> *User:* _MYCOMPANY+marc.villacorta_
> *Group:* hadoop
> where _'+'_ is what we use as a winbind separator.
> And this is what I get:
> {code:none}
> sudo -u hdfs hadoop fs -touchz /tmp/test.txt
> sudo -u hdfs hadoop fs -chown MYCOMPANY+marc.villacorta:hadoop /tmp/test.txt
> -chown: 'MYCOMPANY+marc.villacorta:hadoop' does not match expected pattern 
> for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> {code}
> I am using version: 2.0.0-cdh4.3.0
> Quote 
> [source|http://h30097.www3.hp.com/docs/iass/OSIS_62/MAN/MAN8/0044.HTM]:
> {quote}
> winbind separator
>The winbind separator option allows you to specify how NT domain names
>and user names are combined into unix user names when presented to
>users. By default, winbindd will use the traditional '\' separator so
>that the unix user names look like DOMAIN\username. In some cases this
>separator character may cause problems as the '\' character has
>special meaning in unix shells. In that case you can use the winbind
>separator option to specify an alternative separator character. Good
>alternatives may be '/' (although that conflicts with the unix
>directory separator) or a '+ 'character. The '+' character appears to
>be the best choice for 100% compatibility with existing unix
>utilities, but may be an aesthetically bad choice depending on your
>taste.
>Default: winbind separator = \
>Example: winbind separator = +
> {quote}






[jira] [Updated] (HADOOP-9851) dfs -chown does not like "+" plus sign in user name

2017-10-20 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-9851:
-
Attachment: HADOOP-9851.01.patch

Patch 01:
The "-" sign has to stay in first place inside the character class, otherwise 
it would be compiled as a range.
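The range rule can be checked with a small regex. The pattern below is a hypothetical reconstruction for illustration, not the actual Hadoop owner/group pattern:

```java
import java.util.regex.Pattern;

public class ChownPatternDemo {
    // Inside a character class, '-' acts as a range operator (as in [a-z])
    // unless it appears first (or last), so it is kept in first position
    // when '+' is added to the allowed characters.
    static final Pattern NAME = Pattern.compile("[-_./@a-zA-Z0-9+]+");

    public static void main(String[] args) {
        System.out.println(NAME.matcher("MYCOMPANY+marc.villacorta").matches()); // prints "true"
        System.out.println(NAME.matcher("bad:name").matches());                  // prints "false"
    }
}
```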

> dfs -chown does not like "+" plus sign in user name
> ---
>
> Key: HADOOP-9851
> URL: https://issues.apache.org/jira/browse/HADOOP-9851
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.5-alpha
>Reporter: Marc Villacorta
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HADOOP-9851.01.patch
>
>
> I intend to set user and group:
> *User:* _MYCOMPANY+marc.villacorta_
> *Group:* hadoop
> where _'+'_ is what we use as a winbind separator.
> And this is what I get:
> {code:none}
> sudo -u hdfs hadoop fs -touchz /tmp/test.txt
> sudo -u hdfs hadoop fs -chown MYCOMPANY+marc.villacorta:hadoop /tmp/test.txt
> -chown: 'MYCOMPANY+marc.villacorta:hadoop' does not match expected pattern 
> for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> {code}
> I am using version: 2.0.0-cdh4.3.0
> Quote 
> [source|http://h30097.www3.hp.com/docs/iass/OSIS_62/MAN/MAN8/0044.HTM]:
> {quote}
> winbind separator
>The winbind separator option allows you to specify how NT domain names
>and user names are combined into unix user names when presented to
>users. By default, winbindd will use the traditional '\' separator so
>that the unix user names look like DOMAIN\username. In some cases this
>separator character may cause problems as the '\' character has
>special meaning in unix shells. In that case you can use the winbind
>separator option to specify an alternative separator character. Good
>alternatives may be '/' (although that conflicts with the unix
>directory separator) or a '+ 'character. The '+' character appears to
>be the best choice for 100% compatibility with existing unix
>utilities, but may be an aesthetically bad choice depending on your
>taste.
>Default: winbind separator = \
>Example: winbind separator = +
> {quote}






[jira] [Updated] (HADOOP-9851) dfs -chown does not like "+" plus sign in user name

2017-10-20 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-9851:
-
Status: Patch Available  (was: Open)

> dfs -chown does not like "+" plus sign in user name
> ---
>
> Key: HADOOP-9851
> URL: https://issues.apache.org/jira/browse/HADOOP-9851
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.5-alpha
>Reporter: Marc Villacorta
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HADOOP-9851.01.patch
>
>
> I intend to set user and group:
> *User:* _MYCOMPANY+marc.villacorta_
> *Group:* hadoop
> where _'+'_ is what we use as a winbind separator.
> And this is what I get:
> {code:none}
> sudo -u hdfs hadoop fs -touchz /tmp/test.txt
> sudo -u hdfs hadoop fs -chown MYCOMPANY+marc.villacorta:hadoop /tmp/test.txt
> -chown: 'MYCOMPANY+marc.villacorta:hadoop' does not match expected pattern 
> for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> {code}
> I am using version: 2.0.0-cdh4.3.0
> Quote 
> [source|http://h30097.www3.hp.com/docs/iass/OSIS_62/MAN/MAN8/0044.HTM]:
> {quote}
> winbind separator
>The winbind separator option allows you to specify how NT domain names
>and user names are combined into unix user names when presented to
>users. By default, winbindd will use the traditional '\' separator so
>that the unix user names look like DOMAIN\username. In some cases this
>separator character may cause problems as the '\' character has
>special meaning in unix shells. In that case you can use the winbind
>separator option to specify an alternative separator character. Good
>alternatives may be '/' (although that conflicts with the unix
>directory separator) or a '+ 'character. The '+' character appears to
>be the best choice for 100% compatibility with existing unix
>utilities, but may be an aesthetically bad choice depending on your
>taste.
>Default: winbind separator = \
>Example: winbind separator = +
> {quote}






[jira] [Updated] (HADOOP-9851) dfs -chown does not like "+" plus sign in user name

2017-10-20 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-9851:
-
Target Version/s: 3.0.0

> dfs -chown does not like "+" plus sign in user name
> ---
>
> Key: HADOOP-9851
> URL: https://issues.apache.org/jira/browse/HADOOP-9851
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.5-alpha
>Reporter: Marc Villacorta
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HADOOP-9851.01.patch
>
>
> I intend to set user and group:
> *User:* _MYCOMPANY+marc.villacorta_
> *Group:* hadoop
> where _'+'_ is what we use as a winbind separator.
> And this is what I get:
> {code:none}
> sudo -u hdfs hadoop fs -touchz /tmp/test.txt
> sudo -u hdfs hadoop fs -chown MYCOMPANY+marc.villacorta:hadoop /tmp/test.txt
> -chown: 'MYCOMPANY+marc.villacorta:hadoop' does not match expected pattern 
> for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> {code}
> I am using version: 2.0.0-cdh4.3.0
> Quote 
> [source|http://h30097.www3.hp.com/docs/iass/OSIS_62/MAN/MAN8/0044.HTM]:
> {quote}
> winbind separator
>The winbind separator option allows you to specify how NT domain names
>and user names are combined into unix user names when presented to
>users. By default, winbindd will use the traditional '\' separator so
>that the unix user names look like DOMAIN\username. In some cases this
>separator character may cause problems as the '\' character has
>special meaning in unix shells. In that case you can use the winbind
>separator option to specify an alternative separator character. Good
>alternatives may be '/' (although that conflicts with the unix
>directory separator) or a '+' character. The '+' character appears to
>be the best choice for 100% compatibility with existing unix
>utilities, but may be an aesthetically bad choice depending on your
>taste.
>Default: winbind separator = \
>Example: winbind separator = +
> {quote}






[jira] [Resolved] (HADOOP-7553) hadoop-common tries to find hadoop-assemblies:jar:0.23.0-SNAPSHOT in http://snapshots.repository.codehaus.org

2017-10-20 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-7553.
--
Resolution: Invalid

It is no longer an issue.

> hadoop-common tries to find hadoop-assemblies:jar:0.23.0-SNAPSHOT in 
> http://snapshots.repository.codehaus.org 
> --
>
> Key: HADOOP-7553
> URL: https://issues.apache.org/jira/browse/HADOOP-7553
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Arun C Murthy
>Priority: Critical
>
> hadoop-common tries to find hadoop-assemblies:jar:0.23.0-SNAPSHOT in 
> http://snapshots.repository.codehaus.org - shouldn't it be apache repo?






[jira] [Updated] (HADOOP-14942) DistCp#cleanup() should check whether jobFS is null

2017-10-20 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-14942:
--
Attachment: HADOOP-14942.01.patch

> DistCp#cleanup() should check whether jobFS is null
> ---
>
> Key: HADOOP-14942
> URL: https://issues.apache.org/jira/browse/HADOOP-14942
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
> Attachments: HADOOP-14942.01.patch
>
>
> Over in HBASE-18975, we observed the following:
> {code}
> 2017-10-10 17:22:53,211 DEBUG [main] mapreduce.MapReduceBackupCopyJob(313): 
> Doing COPY_TYPE_DISTCP
> 2017-10-10 17:22:53,272 DEBUG [main] mapreduce.MapReduceBackupCopyJob(322): 
> DistCp options: [hdfs://localhost:55247/backupUT/.tmp/backup_1507681285309, 
> hdfs://localhost:55247/   backupUT]
> 2017-10-10 17:22:53,283 ERROR [main] tools.DistCp(167): Exception encountered
> java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.execute(MapReduceBackupCopyJob.java:234)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:153)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob.copy(MapReduceBackupCopyJob.java:331)
>   at 
> org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.incrementalCopyHFiles(IncrementalTableBackupClient.java:286)
> ...
> Caused by: java.lang.NullPointerException
>   at org.apache.hadoop.tools.DistCp.cleanup(DistCp.java:460)
>   ... 45 more
> {code}
> NullPointerException came from second line below:
> {code}
>   if (metaFolder == null) return;
>   jobFS.delete(metaFolder, true);
> {code}
> in which case jobFS was null.
> A check against null should be added.
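A minimal sketch of the proposed guard, with hypothetical stand-ins for DistCp's fields (the surrounding DistCp code is elided; only the null check is the point):

```java
public class CleanupGuard {
    // Hypothetical stand-ins for DistCp's metaFolder and jobFS fields.
    static Object metaFolder;
    static FakeFs jobFS;

    interface FakeFs {
        void delete(Object path, boolean recursive);
    }

    // Sketch of DistCp#cleanup() with the extra null check on jobFS.
    static void cleanup() {
        if (metaFolder == null) {
            return;
        }
        if (jobFS == null) {
            return; // proposed guard: jobFS may never have been initialized
        }
        jobFS.delete(metaFolder, true);
    }

    public static void main(String[] args) {
        metaFolder = new Object();
        jobFS = null;   // the failure mode from the stack trace above
        cleanup();      // no NullPointerException with the guard in place
        System.out.println("ok");
    }
}
```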






[jira] [Updated] (HADOOP-14942) DistCp#cleanup() should check whether jobFS is null

2017-10-20 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-14942:
--
Assignee: Andras Bokor
  Status: Patch Available  (was: Open)

> DistCp#cleanup() should check whether jobFS is null
> ---
>
> Key: HADOOP-14942
> URL: https://issues.apache.org/jira/browse/HADOOP-14942
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HADOOP-14942.01.patch
>
>
> Over in HBASE-18975, we observed the following:
> {code}
> 2017-10-10 17:22:53,211 DEBUG [main] mapreduce.MapReduceBackupCopyJob(313): 
> Doing COPY_TYPE_DISTCP
> 2017-10-10 17:22:53,272 DEBUG [main] mapreduce.MapReduceBackupCopyJob(322): 
> DistCp options: [hdfs://localhost:55247/backupUT/.tmp/backup_1507681285309, 
> hdfs://localhost:55247/   backupUT]
> 2017-10-10 17:22:53,283 ERROR [main] tools.DistCp(167): Exception encountered
> java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.execute(MapReduceBackupCopyJob.java:234)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:153)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob.copy(MapReduceBackupCopyJob.java:331)
>   at 
> org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.incrementalCopyHFiles(IncrementalTableBackupClient.java:286)
> ...
> Caused by: java.lang.NullPointerException
>   at org.apache.hadoop.tools.DistCp.cleanup(DistCp.java:460)
>   ... 45 more
> {code}
> NullPointerException came from second line below:
> {code}
>   if (metaFolder == null) return;
>   jobFS.delete(metaFolder, true);
> {code}
> in which case jobFS was null.
> A check against null should be added.






[jira] [Updated] (HADOOP-14942) DistCp#cleanup() should check whether jobFS is null

2017-10-20 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-14942:
--
Target Version/s: 3.0.0

> DistCp#cleanup() should check whether jobFS is null
> ---
>
> Key: HADOOP-14942
> URL: https://issues.apache.org/jira/browse/HADOOP-14942
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HADOOP-14942.01.patch
>
>
> Over in HBASE-18975, we observed the following:
> {code}
> 2017-10-10 17:22:53,211 DEBUG [main] mapreduce.MapReduceBackupCopyJob(313): 
> Doing COPY_TYPE_DISTCP
> 2017-10-10 17:22:53,272 DEBUG [main] mapreduce.MapReduceBackupCopyJob(322): 
> DistCp options: [hdfs://localhost:55247/backupUT/.tmp/backup_1507681285309, 
> hdfs://localhost:55247/   backupUT]
> 2017-10-10 17:22:53,283 ERROR [main] tools.DistCp(167): Exception encountered
> java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.execute(MapReduceBackupCopyJob.java:234)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:153)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob.copy(MapReduceBackupCopyJob.java:331)
>   at 
> org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.incrementalCopyHFiles(IncrementalTableBackupClient.java:286)
> ...
> Caused by: java.lang.NullPointerException
>   at org.apache.hadoop.tools.DistCp.cleanup(DistCp.java:460)
>   ... 45 more
> {code}
> NullPointerException came from second line below:
> {code}
>   if (metaFolder == null) return;
>   jobFS.delete(metaFolder, true);
> {code}
> in which case jobFS was null.
> A check against null should be added.






[jira] [Resolved] (HADOOP-9864) Adopt SLF4Js over commons-logging

2017-11-03 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-9864.
--
Resolution: Duplicate

Adopting SLF4J is in progress (or maybe ready?) and it seems the adoption 
process is not tracked here. This one can be closed.

> Adopt SLF4Js over commons-logging
> -
>
> Key: HADOOP-9864
> URL: https://issues.apache.org/jira/browse/HADOOP-9864
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Steve Loughran
>Priority: Major
>
> This is fairly major, but it's something to raise. Commons-logging is used as 
> frozen front end to log4j with a pre-java5-varargs syntax, forcing us to wrap 
> every log event with an {{if (log.isDebugEnabled()}} clause.
> SLF4J
> # is the new de-facto standard Java logging API
> # does use varargs for on-demand stringification: {{log.info("routing to {}", host)}}
> # bridges to Log4J
> # hooks up direct to logback, which has a reputation for speed through less 
> lock contention
> # still supports the same {{isDebugEnabled()}} probes, so commons-logging 
> based classes could switch to SLF4J merely by changing the type of the 
> {{LOG}} class.
> Hadoop already depends on SLF4J for jetty support, hadoop-auth uses it 
> directly.
> This JIRA merely proposes making a decision on whether to adopt SL4J -and if 
> so, how to roll it out.
> The least-disruptive roll-out strategy would be to mandate it on new modules, 
> then switch module-by-module in the existing code.
> We'd also need to find all those tests that dig down to log4j directly, and 
> make sure that they can migrate to the new APIs.
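SLF4J itself is an external dependency, so the before/after contrast is sketched here with only the JDK's java.util.logging plus a tiny helper that mimics SLF4J's {} substitution (names are illustrative):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class DeferredFormatting {
    private static final Logger LOG =
        Logger.getLogger(DeferredFormatting.class.getName());

    // What SLF4J's "{}" placeholder substitution does, in miniature:
    // the message string is only built when the log call decides to emit.
    static String format(String template, Object arg) {
        return template.replace("{}", String.valueOf(arg));
    }

    public static void main(String[] args) {
        String host = "node-1";
        // Old commons-logging idiom: explicit guard to avoid eager concatenation.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("routing to " + host);
        }
        // SLF4J-style parameterized call, sketched with the helper above.
        System.out.println(format("routing to {}", host));
    }
}
```

With real SLF4J the second form is simply {{LOG.debug("routing to {}", host)}}, and the guard becomes unnecessary for cheap arguments.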






[jira] [Commented] (HADOOP-9161) FileSystem.moveFromLocalFile fails to remove source

2017-11-03 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16237793#comment-16237793
 ] 

Andras Bokor commented on HADOOP-9161:
--

I cannot reproduce it from JUnit tests. I tried the following code:
{code}
Path src = new Path("file:///" + ROOT + File.separator + "whatever");
FileSystemTestHelper.createFile(fs, src);
Path dst = new Path("file:///" + ROOT + File.separator + "whatever2");
fs.moveFromLocalFile(src, dst);

Path src2 = new Path("file:///" + ROOT + File.separator + "dir/whatever");
Path srcDir = new Path("file:///" + ROOT + File.separator + "dir");
Path dst2 = new Path("file:///" + ROOT + File.separator + "dir2");
FileSystemTestHelper.createFile(fs, src2);
fs.moveFromLocalFile(srcDir, dst2);{code}

This small test passes. Possibly this bug was fixed in the past 5 years.

> FileSystem.moveFromLocalFile fails to remove source
> ---
>
> Key: HADOOP-9161
> URL: https://issues.apache.org/jira/browse/HADOOP-9161
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Priority: Major
>
> FileSystem.moveFromLocalFile fails with cannot remove file:/path after 
> copying the files.  It appears to be trying to remove a file uri as a 
> relative path.






[jira] [Assigned] (HADOOP-9161) FileSystem.moveFromLocalFile fails to remove source

2017-11-03 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reassigned HADOOP-9161:


Assignee: Andras Bokor

> FileSystem.moveFromLocalFile fails to remove source
> ---
>
> Key: HADOOP-9161
> URL: https://issues.apache.org/jira/browse/HADOOP-9161
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Andras Bokor
>Priority: Major
>
> FileSystem.moveFromLocalFile fails with cannot remove file:/path after 
> copying the files.  It appears to be trying to remove a file uri as a 
> relative path.






[jira] [Created] (HADOOP-15021) Excluding private and limitiedprivate from javadoc causes broken links

2017-11-07 Thread Andras Bokor (JIRA)
Andras Bokor created HADOOP-15021:
-

 Summary: Excluding private and limitiedprivate from javadoc causes 
broken links
 Key: HADOOP-15021
 URL: https://issues.apache.org/jira/browse/HADOOP-15021
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Andras Bokor
Priority: Minor


Examples:
http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FSDataInputStream.html
Check "All Implemented Interfaces" section

http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/TaskAttemptContext.html
Same section

http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/Cluster.html#renewDelegationToken-org.apache.hadoop.security.token.Token-
Method parameters

I am not sure about the correct solution. Waiting for ideas or something.






[jira] [Updated] (HADOOP-15021) Excluding private and limitiedprivate from javadoc causes broken links

2017-11-07 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-15021:
--
Description: 
Examples:
http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FSDataInputStream.html
Check "All Implemented Interfaces" section

http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/TaskAttemptContext.html
Same section

http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/Cluster.html#renewDelegationToken-org.apache.hadoop.security.token.Token-
Method parameters

I am not sure about the correct solution. Waiting for ideas or suggestions.

  was:
Examples:
http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FSDataInputStream.html
Check "All Implemented Interfaces" section

http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/TaskAttemptContext.html
Same section

http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/Cluster.html#renewDelegationToken-org.apache.hadoop.security.token.Token-
Method parameters

I am not sure about the correct solution. Waiting for ideas or something.


> Excluding private and limitiedprivate from javadoc causes broken links
> --
>
> Key: HADOOP-15021
> URL: https://issues.apache.org/jira/browse/HADOOP-15021
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Priority: Minor
>
> Examples:
> http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FSDataInputStream.html
> Check "All Implemented Interfaces" section
> http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/TaskAttemptContext.html
> Same section
> http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/Cluster.html#renewDelegationToken-org.apache.hadoop.security.token.Token-
> Method parameters
> I am not sure about the correct solution. Waiting for ideas or suggestions.






[jira] [Resolved] (HADOOP-8555) Incorrect Kerberos configuration

2017-11-09 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-8555.
--
Resolution: Invalid

This part of the code has been completely changed; this is no longer a valid issue.

> Incorrect Kerberos configuration
> 
>
> Key: HADOOP-8555
> URL: https://issues.apache.org/jira/browse/HADOOP-8555
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Laxman
>  Labels: kerberos, security
>
> When keytab is given ticket cache should not be considered.
> Following configuration tries to use ticket cache even when keytab is 
> configured. We need not configure ticket cache here.
> org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.KerberosConfiguration.getAppConfigurationEntry(String)
> {code}
>   options.put("keyTab", keytab);
>   options.put("principal", principal);
>   options.put("useKeyTab", "true");
>   options.put("storeKey", "true");
>   options.put("doNotPrompt", "true");
>   options.put("useTicketCache", "true");
>   options.put("renewTGT", "true");
>   options.put("refreshKrb5Config", "true");
>   options.put("isInitiator", "false");
>   String ticketCache = System.getenv("KRB5CCNAME");
>   if (ticketCache != null) {
> options.put("ticketCache", ticketCache);
>   }
> {code}
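Assuming the intended fix is simply to make the ticket-cache options conditional on no keytab being configured (the handler has since been rewritten, as the resolution notes), a sketch could look like this:

```java
import java.util.HashMap;
import java.util.Map;

public class JaasOptionsSketch {
    // Build the JAAS options map; fall back to the ticket cache
    // only when no keytab is configured. Illustrative sketch, not
    // the actual KerberosAuthenticationHandler code.
    static Map<String, String> buildOptions(String keytab, String principal) {
        Map<String, String> options = new HashMap<>();
        options.put("principal", principal);
        if (keytab != null) {
            options.put("useKeyTab", "true");
            options.put("keyTab", keytab);
            options.put("storeKey", "true");
        } else {
            options.put("useTicketCache", "true");
            String ticketCache = System.getenv("KRB5CCNAME");
            if (ticketCache != null) {
                options.put("ticketCache", ticketCache);
            }
        }
        options.put("doNotPrompt", "true");
        options.put("renewTGT", "true");
        options.put("refreshKrb5Config", "true");
        options.put("isInitiator", "false");
        return options;
    }
}
```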






[jira] [Reopened] (HADOOP-6380) Deprecate hadoop fs -dus command.

2017-11-10 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reopened HADOOP-6380:
--

> Deprecate hadoop fs -dus command.
> -
>
> Key: HADOOP-6380
> URL: https://issues.apache.org/jira/browse/HADOOP-6380
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ravi Phulari
>
> We need to remove *hadoop fs -dus* command whose functionality is duplicated 
> by *hadoop fs -du -s*.  
> {noformat}
> [rphulari@lm]> bin/hdfs dfs -du -s 
> 48902  hdfs://localhost:9000/user/rphulari
> [rphulari@lm]> bin/hdfs dfs -dus 
> 48902  hdfs://localhost:9000/user/rphulari
> [rphulari@lm]> 
> [rphulari@lm]> bin/hdfs dfs -dus -h
> 47.8k  hdfs://localhost:9000/user/rphulari
> [rphulari@lm]> bin/hdfs dfs -du -s -h
> 47.8k  hdfs://localhost:9000/user/rphulari
> {noformat}






[jira] [Resolved] (HADOOP-6380) Deprecate hadoop fs -dus command.

2017-11-10 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-6380.
--
Resolution: Won't Fix

It's already deprecated:
{code}bin/hdfs dfs -dus /
dus: DEPRECATED: Please use 'du -s' instead.
2017-11-07 13:56:08,914 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
0  0  /{code}

> Deprecate hadoop fs -dus command.
> -
>
> Key: HADOOP-6380
> URL: https://issues.apache.org/jira/browse/HADOOP-6380
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ravi Phulari
>
> We need to remove *hadoop fs -dus* command whose functionality is duplicated 
> by *hadoop fs -du -s*.  
> {noformat}
> [rphulari@lm]> bin/hdfs dfs -du -s 
> 48902  hdfs://localhost:9000/user/rphulari
> [rphulari@lm]> bin/hdfs dfs -dus 
> 48902  hdfs://localhost:9000/user/rphulari
> [rphulari@lm]> 
> [rphulari@lm]> bin/hdfs dfs -dus -h
> 47.8k  hdfs://localhost:9000/user/rphulari
> [rphulari@lm]> bin/hdfs dfs -du -s -h
> 47.8k  hdfs://localhost:9000/user/rphulari
> {noformat}






[jira] [Resolved] (HADOOP-6380) Deprecate hadoop fs -dus command.

2017-11-10 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-6380.
--
Resolution: Duplicate

> Deprecate hadoop fs -dus command.
> -
>
> Key: HADOOP-6380
> URL: https://issues.apache.org/jira/browse/HADOOP-6380
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ravi Phulari
>
> We need to remove *hadoop fs -dus* command whose functionality is duplicated 
> by *hadoop fs -du -s*.  
> {noformat}
> [rphulari@lm]> bin/hdfs dfs -du -s 
> 48902  hdfs://localhost:9000/user/rphulari
> [rphulari@lm]> bin/hdfs dfs -dus 
> 48902  hdfs://localhost:9000/user/rphulari
> [rphulari@lm]> 
> [rphulari@lm]> bin/hdfs dfs -dus -h
> 47.8k  hdfs://localhost:9000/user/rphulari
> [rphulari@lm]> bin/hdfs dfs -du -s -h
> 47.8k  hdfs://localhost:9000/user/rphulari
> {noformat}






[jira] [Commented] (HADOOP-9474) fs -put command doesn't work if selecting certain files from a local folder

2017-11-10 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16247344#comment-16247344
 ] 

Andras Bokor commented on HADOOP-9474:
--

I don't think this is an issue. The behavior is in sync with the Unix way: you 
cannot copy files into a non-existing directory, but you can copy a directory to 
another path:
{code}$ cp mydir/* fakedir
usage: cp [-R [-H | -L | -P]] [-fi | -n] [-apvX] source_file target_file
   cp [-R [-H | -L | -P]] [-fi | -n] [-apvX] source_file ... 
target_directory
$ cp mydir/* existingdir
$ ls existingdir/
1   2
$ cp -r mydir/ fakedir; ls fakedir
1   2{code}


> fs -put command doesn't work if selecting certain files from a local folder
> ---
>
> Key: HADOOP-9474
> URL: https://issues.apache.org/jira/browse/HADOOP-9474
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 1.1.2
>Reporter: Glen Mazza
>
> The following four commands (a) - (d) were run sequentially.  From (a) - (c) 
> HDFS folder "inputABC" does not yet exist.
> (a) and (b) are improperly refusing to put the files from conf/*.xml into 
> inputABC because folder inputABC doesn't yet exist.  However, in (c) when I 
> make the same request except with just "conf" (and not "conf/*.xml") HDFS 
> will correctly create inputABC and copy the folders over.  We see that 
> inputABC now exists in (d) when I subsequently try to copy the conf/*.xml 
> folders, it correctly complains that the files already exist there.
> IOW, I can put "conf" into a nonexisting HDFS folder and fs will create the 
> folder for me, but I can't do the same with "conf/*.xml" -- but the latter 
> should work equally as well.  The problem appears to be in 
> org.apache.hadoop.fs.FileUtil, line 176, which properly routes "conf" to have 
> its files copied but will have "conf/*.xml" subsequently return a 
> "nonexisting folder" error.
> {noformat}
> a) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put 
> conf/*.xml inputABC
> put: `inputABC': specified destination directory doest not exist
> b) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put 
> conf/*.xml inputABC
> put: `inputABC': specified destination directory doest not exist
> c) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf 
> inputABC
> d) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put 
> conf/*.xml inputABC
> put: Target inputABC/capacity-scheduler.xml already exists
> Target inputABC/core-site.xml already exists
> Target inputABC/fair-scheduler.xml already exists
> Target inputABC/hadoop-policy.xml already exists
> Target inputABC/hdfs-site.xml already exists
> Target inputABC/mapred-queue-acls.xml already exists
> Target inputABC/mapred-site.xml already exists
> {noformat}






[jira] [Resolved] (HADOOP-9474) fs -put command doesn't work if selecting certain files from a local folder

2017-11-10 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-9474.
--
Resolution: Not A Bug

> fs -put command doesn't work if selecting certain files from a local folder
> ---
>
> Key: HADOOP-9474
> URL: https://issues.apache.org/jira/browse/HADOOP-9474
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 1.1.2
>Reporter: Glen Mazza
>
> The following four commands (a) - (d) were run sequentially.  From (a) - (c) 
> HDFS folder "inputABC" does not yet exist.
> (a) and (b) are improperly refusing to put the files from conf/*.xml into 
> inputABC because folder inputABC doesn't yet exist.  However, in (c) when I 
> make the same request except with just "conf" (and not "conf/*.xml") HDFS 
> will correctly create inputABC and copy the folders over.  We see that 
> inputABC now exists in (d) when I subsequently try to copy the conf/*.xml 
> folders, it correctly complains that the files already exist there.
> IOW, I can put "conf" into a nonexisting HDFS folder and fs will create the 
> folder for me, but I can't do the same with "conf/*.xml" -- but the latter 
> should work equally as well.  The problem appears to be in 
> org.apache.hadoop.fs.FileUtil, line 176, which properly routes "conf" to have 
> its files copied but will have "conf/*.xml" subsequently return a 
> "nonexisting folder" error.
> {noformat}
> a) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put 
> conf/*.xml inputABC
> put: `inputABC': specified destination directory doest not exist
> b) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put 
> conf/*.xml inputABC
> put: `inputABC': specified destination directory doest not exist
> c) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf 
> inputABC
> d) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put 
> conf/*.xml inputABC
> put: Target inputABC/capacity-scheduler.xml already exists
> Target inputABC/core-site.xml already exists
> Target inputABC/fair-scheduler.xml already exists
> Target inputABC/hadoop-policy.xml already exists
> Target inputABC/hdfs-site.xml already exists
> Target inputABC/mapred-queue-acls.xml already exists
> Target inputABC/mapred-site.xml already exists
> {noformat}






[jira] [Resolved] (HADOOP-10538) NumberFormatException happened when hadoop 1.2.1 running on Cygwin

2017-11-10 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-10538.
---
Resolution: Won't Fix

It's obsolete. 1.x is not supported. 

> NumberFormatException happened  when hadoop 1.2.1 running on Cygwin
> ---
>
> Key: HADOOP-10538
> URL: https://issues.apache.org/jira/browse/HADOOP-10538
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.2.1
> Environment: OS: windows 7 / Cygwin
>Reporter: peter xie
>
> The TaskTracker always failed to startup when it running on Cygwin. And the 
> error info logged in xxx-tasktracker-.log is :
> 2014-04-21 22:13:51,439 DEBUG org.apache.hadoop.mapred.TaskRunner: putting 
> jobToken file name into environment 
> D:/hadoop/mapred/local/taskTracker/pxie/jobcache/job_201404212205_0001/jobToken
> 2014-04-21 22:13:51,439 INFO org.apache.hadoop.mapred.JvmManager: Killing 
> JVM: jvm_201404212205_0001_m_1895177159
> 2014-04-21 22:13:51,439 WARN org.apache.hadoop.mapred.TaskRunner: 
> attempt_201404212205_0001_m_00_0 : Child Error
> java.lang.NumberFormatException: For input string: ""
>   at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>   at java.lang.Integer.parseInt(Integer.java:504)
>   at java.lang.Integer.parseInt(Integer.java:527)
>   at 
> org.apache.hadoop.mapred.JvmManager$JvmManagerForType$JvmRunner.kill(JvmManager.java:552)
>   at 
> org.apache.hadoop.mapred.JvmManager$JvmManagerForType.killJvmRunner(JvmManager.java:314)
>   at 
> org.apache.hadoop.mapred.JvmManager$JvmManagerForType.reapJvm(JvmManager.java:378)
>   at 
> org.apache.hadoop.mapred.JvmManager$JvmManagerForType.access$000(JvmManager.java:189)
>   at org.apache.hadoop.mapred.JvmManager.launchJvm(JvmManager.java:122)
>   at 
> org.apache.hadoop.mapred.TaskRunner.launchJvmAndWait(TaskRunner.java:292)
>   at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:251)
> 2014-04-21 22:13:51,511 DEBUG org.apache.hadoop.ipc.Server: IPC Server 
> listener on 59983: disconnecting client 127.0.0.1:60154. Number of active 
> connections: 1
> 2014-04-21 22:13:51,531 WARN org.apache.hadoop.fs.FileUtil: Failed to set 
> permissions of path: 
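The stack trace above shows Integer.parseInt failing on an empty PID string inside JvmRunner.kill. As a minimal sketch (a hypothetical helper, not the actual Hadoop code), guarding such a parse avoids the crash:

```java
public class PidParseGuard {
    /**
     * Parses a PID string, returning a fallback for null, empty, or
     * malformed input instead of letting Integer.parseInt throw
     * NumberFormatException (as happens on Cygwin when the PID is "").
     */
    static int parsePidOrDefault(String pid, int fallback) {
        if (pid == null || pid.trim().isEmpty()) {
            return fallback;
        }
        try {
            return Integer.parseInt(pid.trim());
        } catch (NumberFormatException e) {
            return fallback;
        }
    }

    public static void main(String[] args) {
        System.out.println(parsePidOrDefault("", -1));     // -1, no exception
        System.out.println(parsePidOrDefault("1234", -1)); // 1234
    }
}
```

With this kind of guard, an empty PID string would log a kill failure instead of aborting the task launch with an uncaught exception.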



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14665) Support # hash prefix comment lines in auth_to_local mapping rules

2017-11-10 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16247494#comment-16247494
 ] 

Andras Bokor commented on HADOOP-14665:
---

It does not seem like a missing feature. You can use standard XML comments; 
there is no need to implement a custom comment syntax.
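For illustration, a standard XML comment can already annotate the mapping rules in core-site.xml (the rule, realm, and placement below are a made-up example, not a recommended configuration):

```xml
<!-- Added for cross-realm users: map NameNode service principals to the
     hdfs user. Illustrative rule; adjust realm and user for your cluster. -->
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[2:$1/$2@$3](nn/.*@EXAMPLE.COM)s/.*/hdfs/
    DEFAULT
  </value>
</property>
```

The XML parser discards the comment before Hadoop's Configuration class ever sees the value, so no changes to the rule parser are needed.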

> Support # hash prefix comment lines in auth_to_local mapping rules
> --
>
> Key: HADOOP-14665
> URL: https://issues.apache.org/jira/browse/HADOOP-14665
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.3
> Environment: HDP 2.6.0 + Kerberos
>Reporter: Hari Sekhon
>
> Request to add support for # hash prefixed comment lines in Hadoop's 
> auth_to_local mappings rules so I can comment what rules I've added and why 
> inline to the rules like with code (useful when supporting multi-directory 
> mappings).
> It should be fairly easy to implement: strip every line from # to the end, 
> trim whitespace, and then exclude all blank/whitespace-only lines; I do this 
> in tools I write all the time.






[jira] [Commented] (HADOOP-14665) Support # hash prefix comment lines in auth_to_local mapping rules

2017-11-10 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16247549#comment-16247549
 ] 

Andras Bokor commented on HADOOP-14665:
---

I am not sure I understand correctly. auth_to_local rules live in the 
core-site.xml file. Since that is an XML file, XML comments work there.

> Support # hash prefix comment lines in auth_to_local mapping rules
> --
>
> Key: HADOOP-14665
> URL: https://issues.apache.org/jira/browse/HADOOP-14665
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.3
> Environment: HDP 2.6.0 + Kerberos
>Reporter: Hari Sekhon
>
> Request to add support for # hash prefixed comment lines in Hadoop's 
> auth_to_local mappings rules so I can comment what rules I've added and why 
> inline to the rules like with code (useful when supporting multi-directory 
> mappings).
> It should be fairly easy to implement: strip every line from # to the end, 
> trim whitespace, and then exclude all blank/whitespace-only lines; I do this 
> in tools I write all the time.






[jira] [Resolved] (HADOOP-9324) Out of date API document

2017-11-10 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-9324.
--
Resolution: Duplicate

I have raised HADOOP-15021, which covers the root cause of most of the issues 
above. The others are fine:

1. Covered by HADOOP-15021
2. Covered by HADOOP-15021
3. Covered by HADOOP-15021
4. JoinCollector is not deleted
5. No longer an issue
6. Covered by HADOOP-15021
7. Covered by HADOOP-15021
8. Covered by HADOOP-15021
9. Covered by HADOOP-15021
10. JobContextImpl is not deleted. It will be covered by HADOOP-15021
11. It is correct as it is
12. Covered by HADOOP-15021
13. Covered by HADOOP-15021
14. Covered by HADOOP-15021
15. Covered by HADOOP-15021
16. Package exists
17. Covered by HADOOP-15021
18. Covered by HADOOP-15021
19. No longer valid
20. Covered by HADOOP-15021


> Out of date API document
> 
>
> Key: HADOOP-9324
> URL: https://issues.apache.org/jira/browse/HADOOP-9324
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.3-alpha
>Reporter: Hao Zhong
>
> The documentation is out of date. Some code references are broken:
> 1. 
> http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FSDataInputStream.html
> "All Implemented Interfaces:
> Closeable, DataInput, *org.apache.hadoop.fs.ByteBufferReadable*, 
> *org.apache.hadoop.fs.HasFileDescriptor*, PositionedReadable, Seekable "
> 2.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/Cluster.html
> renewDelegationToken(*org.apache.hadoop.security.token.Token*
>  token)
>   Deprecated. Use Token.renew(*org.apache.hadoop.conf.Configuration*) 
> instead
> 3.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/JobConf.html
> "Use MRAsyncDiskService.moveAndDeleteAllVolumes instead. "
> I cannot find the MRAsyncDiskService class in the documentation of 2.0.3. 
> 4.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/join/CompositeRecordReader.html
>  "protected 
> *org.apache.hadoop.mapred.join.CompositeRecordReader.JoinCollector*   jc"
> Please globally search JoinCollector. It is deleted, but mentioned many times 
> in the current documentation.
> 5.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/OutputCommitter.html
> "abortJob(JobContext context, *org.apache.hadoop.mapreduce.JobStatus.State 
> runState*)"  
> http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/Job.html
> "public *org.apache.hadoop.mapreduce.JobStatus.State* getJobState()"
> 4.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/SequenceFileOutputFormat.html
> " static *org.apache.hadoop.io.SequenceFile.CompressionType* 
> getOutputCompressionType"
> " static *org.apache.hadoop.io.SequenceFile.Reader[]* getReaders"
> 5.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/TaskCompletionEvent.html
> "Returns enum Status.SUCESS or Status.FAILURE."->Status.SUCCEEDED? 
> 6.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/Job.html
> " static *org.apache.hadoop.mapreduce.Job.TaskStatusFilter*   
> getTaskOutputFilter"
> "  org.apache.hadoop.mapreduce.TaskReport[]   getTaskReports(TaskType type) "
> 7.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/Reducer.html
> "cleanup(*org.apache.hadoop.mapreduce.Reducer.Context* context) "
> 8.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/SequenceFileOutputFormat.html
>  "static *org.apache.hadoop.io.SequenceFile.CompressionType*  
> getOutputCompressionType(JobConf conf)
>   Get the *SequenceFile.CompressionType* for the output SequenceFile."
> " static *org.apache.hadoop.io.SequenceFile.Reader[]* getReaders" 
> 9.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/lib/partition/InputSampler.html
> "writePartitionFile(Job job, 
> *org.apache.hadoop.mapreduce.lib.partition.InputSampler.Sampler* 
> sampler) "
> 10.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/lib/partition/TotalOrderPartitioner.html
> contain JobContextImpl.getNumReduceTasks() - 1 keys. 
> The JobContextImpl class is already deleted.
> 11. 
> http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/OutputCommitter.html
> "Note that this is invoked for jobs with final runstate as 
> JobStatus.State.FAILED or JobStatus.State.KILLED."->JobStatus.FAILED 
> JobStatus.KILLED?
> 12.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/TaskAttemptContext.html
> "All Superinterfaces:
> JobContext, *org.apache.hadoop.mapreduce.MRJobConfig*, Progressable, 
> TaskAttemptContext "
> 13.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/metrics/file/FileContext.html
> "All Implemented Interfaces:
> *org.apache.hadoop.metrics.MetricsContext*"
> 14.http:

[jira] [Resolved] (HADOOP-9282) Document Java 7 support

2017-11-10 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-9282.
--
Resolution: Duplicate

Obsolete. 1.7 is mentioned on the wiki page.

> Document Java 7 support
> ---
>
> Key: HADOOP-9282
> URL: https://issues.apache.org/jira/browse/HADOOP-9282
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Kevin Lyda
>
> The Hadoop Java Versions page makes no mention of Java 7.
> http://wiki.apache.org/hadoop/HadoopJavaVersions
> Java 6 is EOL as of this month ( 
> http://www.java.com/en/download/faq/java_6.xml ) and that's after extending 
> the date twice: https://blogs.oracle.com/henrik/entry/java_6_eol_h_h While 
> Oracle has recently released a number of security patches, chances are more 
> security issues will come up and we'll be left running clusters we can't 
> patch if we stay with Java 6.
> Does Hadoop support Java 7 and if so could the docs be changed to indicate 
> that?






[jira] [Commented] (HADOOP-9327) Out of date code examples

2017-11-10 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16247605#comment-16247605
 ] 

Andras Bokor commented on HADOOP-9327:
--

All three classes are still available on trunk:
* 
https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/JobConfigurationParser.java
* 
https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/LoggedNetworkTopology.java
* 
https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/ContextFactory.java

What is this ticket about?

> Out of date code examples
> -
>
> Key: HADOOP-9327
> URL: https://issues.apache.org/jira/browse/HADOOP-9327
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.3-alpha
>Reporter: Hao Zhong
>
> 1. This page contains code examples that use JobConfigurationParser
> http://hadoop.apache.org/docs/current/api/org/apache/hadoop/tools/rumen/package-summary.html
> "JobConfigurationParser jcp = 
>   new JobConfigurationParser(interestedProperties);"
> JobConfigurationParser is deleted in 2.0.3
> 2. This page contains code examples that use ContextFactory
> http://hadoop.apache.org/docs/current/api/org/apache/hadoop/metrics/package-summary.html
> " ContextFactory factory = ContextFactory.getFactory();
> ... examine and/or modify factory attributes ...
> MetricsContext context = factory.getContext("myContext");"
> ContextFactory is deleted in 2.0.3
> 3. This page contains code examples that use LoggedNetworkTopology
> http://hadoop.apache.org/docs/current/api/org/apache/hadoop/tools/rumen/package-summary.html
> " do.init("topology.json", conf);
> 
>   // get the job summary using TopologyBuilder
>   LoggedNetworkTopology topology = topologyBuilder.build();"
> LoggedNetworkTopology is deleted in 2.0.3
> Please revise the documentation to reflect the code.






[jira] [Resolved] (HADOOP-10743) Problem building hadoop -2.4.0 on FreeBSD 10 (without -Pnative)

2017-11-10 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-10743.
---
Resolution: Won't Fix

2.4 is no longer supported.

> Problem building hadoop -2.4.0 on FreeBSD 10 (without -Pnative)
> ---
>
> Key: HADOOP-10743
> URL: https://issues.apache.org/jira/browse/HADOOP-10743
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.4.0
> Environment: $ uname -a
> FreeBSD kakumen 10.0-STABLE FreeBSD 10.0-STABLE #4 r267707: Sat Jun 21 
> 19:40:06 COT 2014 pfg@kakumen:/usr/obj/usr/src/sys/GENERIC  amd64
> $ javac -version 
> javac 1.6.0_32
> $
>Reporter: Pedro Giffuni
>
> mapreduce-client-core fails to compile with java 1.6 on FreeBSD 10.






[jira] [Resolved] (HADOOP-9083) Port HADOOP-9020 Add a SASL PLAIN server to branch 1

2017-11-10 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-9083.
--
Resolution: Won't Fix

> Port HADOOP-9020 Add a SASL PLAIN server to branch 1
> 
>
> Key: HADOOP-9083
> URL: https://issues.apache.org/jira/browse/HADOOP-9083
> Project: Hadoop Common
>  Issue Type: Task
>  Components: ipc, security
>Affects Versions: 1.0.3
>Reporter: Yu Gao
>Assignee: Yu Gao
> Attachments: HADOOP-9020-branch-1.patch, test-TestSaslRPC.result, 
> test-patch.result
>
>
> It would be good if the patch of HADOOP-9020 for adding SASL PLAIN server 
> implementation could be ported to branch 1 as well.






[jira] [Updated] (HADOOP-14389) Exception handling is incorrect in KerberosName.java

2017-11-15 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-14389:
--
Attachment: HADOOP-14389.03.patch

[~ste...@apache.org],

The Hadoop QA failure was caused by a Docker issue, so no rebase was needed. I 
uploaded the same patch to re-kick Hadoop QA.

> Exception handling is incorrect in KerberosName.java
> 
>
> Key: HADOOP-14389
> URL: https://issues.apache.org/jira/browse/HADOOP-14389
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>  Labels: supportability
> Attachments: HADOOP-14389.01.patch, HADOOP-14389.02.patch, 
> HADOOP-14389.03.patch
>
>
> I found multiple inconsistencies:
> Rule: {{RULE:\[2:$1/$2\@$3\](.\*)s/.\*/hdfs/}}
> Principal: {{nn/host.dom...@realm.tld}}
> Expected exception: {{BadStringFormat: ...3 is out of range...}}
> Actual exception: {{ArrayIndexOutOfBoundsException: 3}}
> 
> Rule: {{RULE:\[:$1/$2\@$0](.\*)s/.\*/hdfs/}} (Missing num of components)
> Expected: {{IllegalArgumentException}}
> Actual: {{java.lang.NumberFormatException: For input string: ""}}
> 
> Rule: {{RULE:\[2:$-1/$2\@$3\](.\*)s/.\*/hdfs/}}
> Expected {{BadStringFormat: -1 is outside of valid range...}}
> Actual: {{java.lang.NumberFormatException: For input string: ""}}
> 
> Rule: {{RULE:\[2:$one/$2\@$3\](.\*)s/.\*/hdfs/}}
> Expected {{java.lang.NumberFormatException: For input string: "one"}}
> Acutal {{java.lang.NumberFormatException: For input string: ""}}
> 
> In addition:
> {code}[^\\]]{code}
> does not really make sense in {{ruleParser}}. Most probably it was needed 
> because we parse the whole rule string and remove each parsed rule from the 
> beginning of the string ({{KerberosName#parseRules}}); without it the regex 
> engine parsed incorrectly.
> In addition:
> In tests some corner cases are not covered.
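A small standalone illustration (not the Hadoop code itself) of why the reported rules surface as raw NumberFormatException or ArrayIndexOutOfBoundsException rather than a descriptive BadStringFormat message:

```java
public class RuleErrorDemo {
    public static void main(String[] args) {
        // A rule like RULE:[:$1/$2@$0] has an empty component count, so
        // parsing it fails with: For input string: ""
        try {
            Integer.parseInt("");
        } catch (NumberFormatException e) {
            System.out.println("empty count -> " + e.getMessage());
        }

        // A principal with two name components, referenced via $3, indexes
        // past the array instead of producing a range-check error.
        String[] components = {"nn", "host.domain"};
        try {
            String ignored = components[3];
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("out of range -> " + e.getMessage());
        }
    }
}
```

The patch's goal, as described above, is to catch these low-level failures and rethrow them with messages that name the offending rule component.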






[jira] [Commented] (HADOOP-14014) Shading runs on mvn deploy

2017-11-17 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256689#comment-16256689
 ] 

Andras Bokor commented on HADOOP-14014:
---

Is this something we should work on? It seems to be intended behavior.

> Shading runs on mvn deploy
> --
>
> Key: HADOOP-14014
> URL: https://issues.apache.org/jira/browse/HADOOP-14014
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>
> I'm running "mvn deploy -DskipTests" and see that there is shading happening 
> in the build output. This seems like a bug.






[jira] [Created] (HADOOP-15049) Make Job History File Permissions configurable

2017-11-17 Thread Andras Bokor (JIRA)
Andras Bokor created HADOOP-15049:
-

 Summary: Make Job History File Permissions configurable
 Key: HADOOP-15049
 URL: https://issues.apache.org/jira/browse/HADOOP-15049
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Andras Bokor


Currently the MapReduce job history files are written with 770 permissions, so 
they can be accessed only by the job user or other users in the hadoop group.
Some customers have users who are not part of the hadoop group but want to 
access these history files. We should provide the ability to change the default 
permissions for staging files.
The default should remain 770.
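As a hedged sketch of the proposal (the config key name below is hypothetical, not an existing Hadoop property), making the permission configurable mostly amounts to parsing an octal string with a 770 fallback:

```java
public class HistoryFilePerms {
    // Hypothetical key; the real name would be decided in the patch.
    static final String KEY = "mapreduce.jobhistory.files.permissions";
    static final short DEFAULT_PERMS = 0770;

    /**
     * Parses an octal permission string such as "770" or "774", falling
     * back to the 770 default on missing, malformed, or out-of-range input.
     */
    static short parsePerms(String value) {
        if (value == null || value.isEmpty()) {
            return DEFAULT_PERMS;
        }
        try {
            int p = Integer.parseInt(value, 8); // octal, like chmod
            return (p >= 0 && p <= 07777) ? (short) p : DEFAULT_PERMS;
        } catch (NumberFormatException e) {
            return DEFAULT_PERMS;
        }
    }

    public static void main(String[] args) {
        System.out.println(Integer.toOctalString(parsePerms("774"))); // 774
        System.out.println(Integer.toOctalString(parsePerms(null)));  // 770
    }
}
```

Keeping the default at 770 preserves today's behavior for clusters that never set the property.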


