[jira] [Resolved] (HADOOP-19010) NullPointerException in Hadoop Credential Check CLI Command

2023-12-27 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-19010.

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> NullPointerException in Hadoop Credential Check CLI Command
> ---
>
> Key: HADOOP-19010
> URL: https://issues.apache.org/jira/browse/HADOOP-19010
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Anika Kelhanka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> *Description*: Hadoop's credential check throws {{NullPointerException}} when 
> the alias is not found.
> {code:bash}
> hadoop credential check "fs.gs.proxy.username" -provider 
> "jceks://file/usr/lib/hive/conf/hive.jceks"
> {code}
> {noformat}
> Checking aliases for CredentialProvider: 
> jceks://file/usr/lib/hive/conf/hive.jceks
> Enter alias password: 
> java.lang.NullPointerException
> at 
> org.apache.hadoop.security.alias.CredentialShell$CheckCommand.execute(CredentialShell.java:369)
> at org.apache.hadoop.tools.CommandShell.run(CommandShell.java:73)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:82)
> at 
> org.apache.hadoop.security.alias.CredentialShell.main(CredentialShell.java:529)
> {noformat}
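A minimal, standalone sketch of the kind of null guard that avoids this NPE. The {{CredentialEntry}} stub, the {{getCredentialEntry}} lookup, and the {{check}} helper below are hypothetical simplifications for illustration, not Hadoop's actual CredentialShell code:

```java
import java.util.Arrays;

public class CheckCommandSketch {
    // Hypothetical stand-in for a credential provider's entry type.
    static class CredentialEntry {
        private final char[] credential;
        CredentialEntry(char[] credential) { this.credential = credential; }
        char[] getCredential() { return credential; }
    }

    // Stub lookup: a real provider returns null for an unknown alias,
    // which is what triggered the NullPointerException.
    static CredentialEntry getCredentialEntry(String alias) {
        return "known.alias".equals(alias)
            ? new CredentialEntry("secret".toCharArray())
            : null;
    }

    // Guard the lookup result before dereferencing it.
    static String check(String alias, char[] password) {
        CredentialEntry entry = getCredentialEntry(alias);
        if (entry == null) {
            // Previously a null entry was dereferenced directly -> NPE.
            return "Password match failed for " + alias + ".";
        }
        return Arrays.equals(entry.getCredential(), password)
            ? "Password match success for " + alias + "."
            : "Password match failed for " + alias + ".";
    }

    public static void main(String[] args) {
        // Alias missing from the provider: a message instead of an NPE.
        System.out.println(check("fs.gs.proxy.username", "x".toCharArray()));
    }
}
```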



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17505) public interface GroupMappingServiceProvider needs default impl for getGroupsSet()

2021-01-28 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-17505:
--

 Summary: public interface GroupMappingServiceProvider needs 
default impl for getGroupsSet() 
 Key: HADOOP-17505
 URL: https://issues.apache.org/jira/browse/HADOOP-17505
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinayakumar B


HADOOP-17079 added the "GroupMappingServiceProvider#getGroupsSet()" interface 
method.

But since this is a public interface, it will break compilation of existing 
implementations in downstream projects.

Consider adding a default implementation in the interface to avoid such 
failures.
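A sketch of the proposed fix. The interface is simplified to the two methods relevant here (the real one lives in org.apache.hadoop.security and has more members); the point is that a default body delegating to the pre-existing getGroups() lets old implementors keep compiling:

```java
import java.io.IOException;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class DefaultMethodSketch {
    // Simplified stand-in for o.a.h.security.GroupMappingServiceProvider.
    interface GroupMappingServiceProvider {
        List<String> getGroups(String user) throws IOException;

        // Default body delegating to getGroups(): implementations written
        // before HADOOP-17079 keep compiling and gain this method for free.
        default Set<String> getGroupsSet(String user) throws IOException {
            return new LinkedHashSet<>(getGroups(user));
        }
    }

    // A "downstream" implementation that predates getGroupsSet():
    // it overrides nothing new, yet still satisfies the interface.
    static class LegacyMapping implements GroupMappingServiceProvider {
        @Override
        public List<String> getGroups(String user) {
            // Duplicates collapse when converted to a set.
            return List.of("users", "admins", "users");
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(new LegacyMapping().getGroupsSet("alice")); // prints [users, admins]
    }
}
```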






[jira] [Resolved] (HADOOP-17306) RawLocalFileSystem's lastModifiedTime() loses milliseconds in JDK < 10 b09

2020-10-23 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-17306.

Fix Version/s: 3.4.0
   3.3.1
   3.2.2
 Hadoop Flags: Reviewed
   Resolution: Fixed

Merged to trunk, branch-3.3 and branch-3.2

 

Thanks [~aajisaka] and [~ayushsaxena] for reviews.

> RawLocalFileSystem's lastModifiedTime() loses milliseconds in JDK < 10 b09
> 
>
> Key: HADOOP-17306
> URL: https://issues.apache.org/jira/browse/HADOOP-17306
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.2, 3.3.1, 3.4.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> RawLocalFileSystem's FileStatus uses the {{File.lastModified()}} API from the 
> JDK.
> This API loses milliseconds due to a JDK bug:
> [https://bugs.java.com/bugdatabase/view_bug.do?bug_id=8177809]
> The bug is fixed from JDK 10 b09 onwards, but still exists in JDK 8, which is 
> still used in many production environments.
> {{Files.getLastModifiedTime()}} from Java's NIO package returns the correct 
> time; use it instead of {{File.lastModified()}} as a workaround.






[jira] [Created] (HADOOP-17306) RawLocalFileSystem's lastModifiedTime() loses milliseconds in JDK < 10 b09

2020-10-15 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-17306:
--

 Summary: RawLocalFileSystem's lastModifiedTime() loses 
milliseconds in JDK < 10 b09
 Key: HADOOP-17306
 URL: https://issues.apache.org/jira/browse/HADOOP-17306
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Vinayakumar B


RawLocalFileSystem's FileStatus uses the {{File.lastModified()}} API from the 
JDK.

This API loses milliseconds due to a JDK bug:

[https://bugs.java.com/bugdatabase/view_bug.do?bug_id=8177809]

The bug is fixed from JDK 10 b09 onwards, but still exists in JDK 8, which is 
still used in many production environments.

{{Files.getLastModifiedTime()}} from Java's NIO package returns the correct 
time; use it instead of {{File.lastModified()}} as a workaround.
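The difference can be shown with a small standalone program (the class and method names here are illustrative, not Hadoop code). On JDKs affected by JDK-8177809, the first value is truncated to whole seconds while the NIO value keeps milliseconds:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MTimeSketch {
    // Returns {File.lastModified(), Files.getLastModifiedTime()} in millis
    // for the same file. On affected JDKs (8 up to 10 b09) the first value
    // is truncated to whole seconds; the NIO value preserves milliseconds.
    static long[] bothTimes(Path p) {
        try {
            long viaFile = p.toFile().lastModified();
            long viaNio = Files.getLastModifiedTime(p).toMillis();
            return new long[] { viaFile, viaNio };
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("mtime", ".tmp");
        try {
            long[] t = bothTimes(p);
            System.out.println("File.lastModified():         " + t[0]);
            System.out.println("Files.getLastModifiedTime(): " + t[1]);
        } finally {
            Files.delete(p);
        }
    }
}
```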






[jira] [Resolved] (HADOOP-17278) Shade guava 29.0-jre in hadoop thirdparty

2020-09-27 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-17278.

Fix Version/s: thirdparty-1.1.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Merged to trunk of hadoop-thirdparty

> Shade guava 29.0-jre in hadoop thirdparty
> -
>
> Key: HADOOP-17278
> URL: https://issues.apache.org/jira/browse/HADOOP-17278
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: thirdparty-1.1.0
>
>
> Shade guava 29.0-jre in hadoop-thirdparty






[jira] [Created] (HADOOP-17046) Support downstreams' existing Hadoop-rpc implementations using non-shaded protobuf classes.

2020-05-18 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-17046:
--

 Summary: Support downstreams' existing Hadoop-rpc implementations 
using non-shaded protobuf classes.
 Key: HADOOP-17046
 URL: https://issues.apache.org/jira/browse/HADOOP-17046
 Project: Hadoop Common
  Issue Type: Improvement
  Components: rpc-server
Affects Versions: 3.3.0
Reporter: Vinayakumar B


After the upgrade/shade of protobuf to version 3.7, existing Hadoop-RPC 
client-server implementations using ProtobufRpcEngine will not work.

So, this Jira proposes to keep the existing ProtobufRpcEngine as-is (without 
shading, and with the protobuf-2.5.0 implementation) to support downstream 
implementations.

The new ProtobufRpcEngine2 uses the shaded protobuf classes within Hadoop, and 
can later be adopted by projects that wish to upgrade their protobuf to 3.x.
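A rough sketch of the proposed split. {{RPC.setProtocolEngine}} is Hadoop's existing engine-selection API, {{MyProtocolPB}} is a hypothetical downstream protocol class, and the engine class names are those described above:

```java
// Downstream code written against the old engine keeps compiling and running,
// still using the non-shaded protobuf 2.5.0 classes:
//   import com.google.protobuf.BlockingService;
//   RPC.setProtocolEngine(conf, MyProtocolPB.class, ProtobufRpcEngine.class);
//
// Hadoop-internal RPC (and downstreams that later move to protobuf 3.x)
// switch to the new engine, which uses the shaded classes:
//   import org.apache.hadoop.thirdparty.protobuf.BlockingService;
//   RPC.setProtocolEngine(conf, MyProtocolPB.class, ProtobufRpcEngine2.class);
```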






[jira] [Resolved] (HADOOP-16985) Handle release package related issues

2020-04-15 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-16985.

Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to branch-3.3 and trunk

> Handle release package related issues
> -
>
> Key: HADOOP-16985
> URL: https://issues.apache.org/jira/browse/HADOOP-16985
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: 3.3.0
>
>
> The same issue mentioned in HADOOP-16919 is present in Hadoop distribution 
> generation as well.
> Handle the following comments from [~elek] in the 1.0.0-RC0 voting mail 
> thread: 
> [https://lists.apache.org/thread.html/r1f2e8325ecef239f0d713c683a16336e2a22431a9f6bfbde3c763816%40%3Ccommon-dev.hadoop.apache.org%3E]
> {quote}3. Yetus seems to be included in the source package. I am not sure if
>  it's intentional but I would remove the patchprocess directory from the
>  tar file.
> 7. Minor nit: I would suggest to use only the filename in the sha512
>  files (instead of having the /build/source/target prefix). It would help
>  to use `sha512 -c` command to validate the checksum.
> {quote}
>  






[jira] [Created] (HADOOP-16985) Handle release package related issues

2020-04-14 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16985:
--

 Summary: Handle release package related issues
 Key: HADOOP-16985
 URL: https://issues.apache.org/jira/browse/HADOOP-16985
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Vinayakumar B


The same issue mentioned in HADOOP-16919 is present in Hadoop distribution 
generation as well.

Handle the following comments from [~elek] in the 1.0.0-RC0 voting mail 
thread: 
[https://lists.apache.org/thread.html/r1f2e8325ecef239f0d713c683a16336e2a22431a9f6bfbde3c763816%40%3Ccommon-dev.hadoop.apache.org%3E]
{quote}3. Yetus seems to be included in the source package. I am not sure if
 it's intentional but I would remove the patchprocess directory from the
 tar file.

7. Minor nit: I would suggest to use only the filename in the sha512
 files (instead of having the /build/source/target prefix). It would help
 to use `sha512 -c` command to validate the checksum.
{quote}
 






[jira] [Resolved] (HADOOP-16927) Update hadoop-thirdparty dependency version to 1.0.0

2020-03-20 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-16927.

Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to trunk.

> Update hadoop-thirdparty dependency version to 1.0.0
> 
>
> Key: HADOOP-16927
> URL: https://issues.apache.org/jira/browse/HADOOP-16927
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: release-blocker
> Fix For: 3.3.0
>
>
> Now that hadoop-thirdparty 1.0.0 is released, it's time to upgrade to the 
> released version in Hadoop.






[jira] [Created] (HADOOP-16927) Update hadoop-thirdparty dependency version to 1.0.0

2020-03-18 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16927:
--

 Summary: Update hadoop-thirdparty dependency version to 1.0.0
 Key: HADOOP-16927
 URL: https://issues.apache.org/jira/browse/HADOOP-16927
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Vinayakumar B









[jira] [Resolved] (HADOOP-16919) [thirdparty] Handle release package related issues

2020-03-11 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-16919.

Fix Version/s: thirdparty-1.0.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Merged to trunk and branch-1.0 of hadoop-thirdparty.

Thanks [~ayushtkn] for the reviews.

> [thirdparty] Handle release package related issues
> --
>
> Key: HADOOP-16919
> URL: https://issues.apache.org/jira/browse/HADOOP-16919
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hadoop-thirdparty
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: thirdparty-1.0.0
>
>
> Handle the following comments from [~elek] in the 1.0.0-RC0 voting mail 
> thread: 
> [https://lists.apache.org/thread.html/r1f2e8325ecef239f0d713c683a16336e2a22431a9f6bfbde3c763816%40%3Ccommon-dev.hadoop.apache.org%3E]
> {quote}3. Yetus seems to be included in the source package. I am not sure if
>  it's intentional but I would remove the patchprocess directory from the
>  tar file.
> 7. Minor nit: I would suggest to use only the filename in the sha512
>  files (instead of having the /build/source/target prefix). It would help
>  to use `sha512 -c` command to validate the checksum.
> {quote}
> Also, update available artifacts in docs.






[jira] [Resolved] (HADOOP-16895) [thirdparty] Revisit LICENSEs and NOTICEs

2020-03-11 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-16895.

Fix Version/s: thirdparty-1.0.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to branch-1.0 and trunk of hadoop-thirdparty.

Thanks [~aajisaka] and [~elek] for reviews.

> [thirdparty] Revisit LICENSEs and NOTICEs
> -
>
> Key: HADOOP-16895
> URL: https://issues.apache.org/jira/browse/HADOOP-16895
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: thirdparty-1.0.0
>
>
> LICENSE.txt and NOTICE.txt have many entries which are unrelated to 
> thirdparty. Revisit and clean up such entries.






[jira] [Created] (HADOOP-16919) [thirdparty] Handle release package related issues

2020-03-11 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16919:
--

 Summary: [thirdparty] Handle release package related issues
 Key: HADOOP-16919
 URL: https://issues.apache.org/jira/browse/HADOOP-16919
 Project: Hadoop Common
  Issue Type: Bug
  Components: hadoop-thirdparty
Reporter: Vinayakumar B


Handle the following comments from [~elek] in the 1.0.0-RC0 voting mail 
thread: 
[https://lists.apache.org/thread.html/r1f2e8325ecef239f0d713c683a16336e2a22431a9f6bfbde3c763816%40%3Ccommon-dev.hadoop.apache.org%3E]
{quote}3. Yetus seems to be included in the source package. I am not sure if
 it's intentional but I would remove the patchprocess directory from the
 tar file.

7. Minor nit: I would suggest to use only the filename in the sha512
 files (instead of having the /build/source/target prefix). It would help
 to use `sha512 -c` command to validate the checksum.
{quote}
Also, update available artifacts in docs.






[jira] [Created] (HADOOP-16895) [thirdparty] Revisit LICENSEs and NOTICEs

2020-02-28 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16895:
--

 Summary: [thirdparty] Revisit LICENSEs and NOTICEs
 Key: HADOOP-16895
 URL: https://issues.apache.org/jira/browse/HADOOP-16895
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Vinayakumar B


LICENSE.txt and NOTICE.txt have many entries which are unrelated to 
thirdparty. Revisit and clean up such entries.






[jira] [Resolved] (HADOOP-16596) [pb-upgrade] Use shaded protobuf classes from hadoop-thirdparty dependency

2020-02-07 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-16596.

Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
 Release Note: All protobuf classes will be used from the 
hadoop-shaded-protobuf_3_7 artifact, with package prefix 
'org.apache.hadoop.thirdparty.protobuf' instead of 'com.google.protobuf'.
   Resolution: Fixed

Merged to trunk. Thanks everyone for the reviews.

> [pb-upgrade] Use shaded protobuf classes from hadoop-thirdparty dependency
> --
>
> Key: HADOOP-16596
> URL: https://issues.apache.org/jira/browse/HADOOP-16596
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: 3.3.0
>
>
> Use the shaded protobuf classes from "hadoop-thirdparty" in hadoop codebase.






[jira] [Resolved] (HADOOP-16824) [thirdparty] port HADOOP-16754 (Fix docker failed to build yetus/hadoop) to thirdparty Dockerfile

2020-01-22 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-16824.

Fix Version/s: thirdparty-1.0.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Merged PR.
Thanks [~aajisaka] for review.

> [thirdparty] port HADOOP-16754 (Fix docker failed to build yetus/hadoop) to 
> thirdparty Dockerfile
> -
>
> Key: HADOOP-16824
> URL: https://issues.apache.org/jira/browse/HADOOP-16824
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: thirdparty-1.0.0
>
>
> Port HADOOP-16754 to avoid the Docker build failure.






[jira] [Created] (HADOOP-16824) [thirdparty] port HADOOP-16754 (Fix docker failed to build yetus/hadoop) to thirdparty Dockerfile

2020-01-21 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16824:
--

 Summary: [thirdparty] port HADOOP-16754 (Fix docker failed to 
build yetus/hadoop) to thirdparty Dockerfile
 Key: HADOOP-16824
 URL: https://issues.apache.org/jira/browse/HADOOP-16824
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinayakumar B


Port HADOOP-16754 to avoid the Docker build failure.






[jira] [Resolved] (HADOOP-16820) [thirdparty] ChangeLog and ReleaseNote are not packaged by createrelease script

2020-01-21 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-16820.

Fix Version/s: thirdparty-1.0.0
 Hadoop Flags: Reviewed
 Assignee: Vinayakumar B
   Resolution: Fixed

Merged PR.
Thanks [~ayushtkn] for review.

> [thirdparty] ChangeLog and ReleaseNote are not packaged by createrelease 
> script
> ---
>
> Key: HADOOP-16820
> URL: https://issues.apache.org/jira/browse/HADOOP-16820
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hadoop-thirdparty
>Affects Versions: thirdparty-1.0.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: thirdparty-1.0.0
>
>
> createrelease script is not packaging CHANGELOGS and RELEASENOTES during 
> generation of site package for hadoop-thirdparty module.






[jira] [Resolved] (HADOOP-16821) [pb-upgrade] Use 'o.a.h.thirdparty.protobuf' shaded prefix instead of 'protobuf_3_7'

2020-01-21 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-16821.

Fix Version/s: thirdparty-1.0.0
 Hadoop Flags: Reviewed
 Assignee: Vinayakumar B
   Resolution: Fixed

Committed to trunk.
Thanks [~ste...@apache.org] for review.

> [pb-upgrade] Use 'o.a.h.thirdparty.protobuf' shaded prefix instead of 
> 'protobuf_3_7'
> 
>
> Key: HADOOP-16821
> URL: https://issues.apache.org/jira/browse/HADOOP-16821
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: hadoop-thirdparty
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: thirdparty-1.0.0
>
>
> As per the discussion 
> [here|https://github.com/apache/hadoop/pull/1635#issuecomment-576247014], 
> a versioned package name may make upgrading the library a non-trivial task: 
> the package name would need to be updated at every usage in all modules. 
> So a common (version-neutral) package name is preferred.






[jira] [Created] (HADOOP-16821) [pb-upgrade] Use 'o.a.h.thirdparty.protobuf' shaded prefix instead of 'protobuf_3_7'

2020-01-21 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16821:
--

 Summary: [pb-upgrade] Use 'o.a.h.thirdparty.protobuf' shaded 
prefix instead of 'protobuf_3_7'
 Key: HADOOP-16821
 URL: https://issues.apache.org/jira/browse/HADOOP-16821
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Vinayakumar B


As per the discussion 
[here|https://github.com/apache/hadoop/pull/1635#issuecomment-576247014], 
a versioned package name may make upgrading the library a non-trivial task: 
the package name would need to be updated at every usage in all modules.

So a common (version-neutral) package name is preferred.
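A sketch of the difference (the import lines are illustrative; {{Message}} stands for any shaded protobuf class):

```java
// With a versioned prefix, every protobuf upgrade (3.7 -> 3.x) would force an
// import rewrite across all modules that touch the shaded classes:
//   import org.apache.hadoop.thirdparty.protobuf_3_7.Message;
//
// With the version-neutral prefix proposed here, imports stay stable across
// thirdparty upgrades:
//   import org.apache.hadoop.thirdparty.protobuf.Message;
```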






[jira] [Created] (HADOOP-16820) [thirdparty] ChangeLog and ReleaseNote are not packaged by createrelease script

2020-01-21 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16820:
--

 Summary: [thirdparty] ChangeLog and ReleaseNote are not packaged 
by createrelease script
 Key: HADOOP-16820
 URL: https://issues.apache.org/jira/browse/HADOOP-16820
 Project: Hadoop Common
  Issue Type: Bug
  Components: hadoop-thirdparty
Affects Versions: thirdparty-1.0.0
Reporter: Vinayakumar B


createrelease script is not packaging CHANGELOGS and RELEASENOTES during 
generation of site package for hadoop-thirdparty module.






[jira] [Resolved] (HADOOP-16621) [pb-upgrade] Remove Protobuf classes from signatures of Public APIs

2020-01-16 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-16621.

Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
 Release Note: 
The following APIs have been removed from Token.java to avoid protobuf 
classes in signatures.
1.   o.a.h.security.token.Token(TokenProto tokenPB)
2.   o.a.h.security.token.Token.toTokenProto()
   Resolution: Fixed

Merged PR to trunk.
Thanks [~ste...@apache.org] and [~ayushtkn] for reviews. 

> [pb-upgrade] Remove Protobuf classes from signatures of Public APIs
> ---
>
> Key: HADOOP-16621
> URL: https://issues.apache.org/jira/browse/HADOOP-16621
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Vinayakumar B
>Priority: Critical
> Fix For: 3.3.0
>
>
> The move to protobuf 3.x stops Spark building, because Token has a method 
> which returns a protobuf object, and now it returns v3 types.
> If we want to isolate downstream code from protobuf changes, we need to move 
> that marshalling method out of Token and into a helper class.






[jira] [Resolved] (HADOOP-16595) [pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf

2020-01-12 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-16595.

Fix Version/s: thirdparty-1.0.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Merged PR.
Thanks everyone.

> [pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf
> --
>
> Key: HADOOP-16595
> URL: https://issues.apache.org/jira/browse/HADOOP-16595
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: hadoop-thirdparty
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: thirdparty-1.0.0
>
>
> Create a separate repo "hadoop-thirdparty" to have shaded dependencies.
> starting with protobuf-java:3.7.1






[jira] [Resolved] (HADOOP-16797) Add dockerfile for ARM builds

2020-01-12 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-16797.

Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Merged PR to trunk. 
Thanks [~ayushtkn] and [~aajisaka] for reviews.

> Add dockerfile for ARM builds
> -
>
> Key: HADOOP-16797
> URL: https://issues.apache.org/jira/browse/HADOOP-16797
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: 3.3.0
>
>
> Similar to the x86 docker image in {{dev-support/docker/Dockerfile}}, 
> add one more Dockerfile to support aarch64 builds.
> And support all scripts (createrelease, start-build-env.sh, etc.) to make use 
> of it on the ARM platform.






[jira] [Created] (HADOOP-16797) Add dockerfile for ARM builds

2020-01-09 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16797:
--

 Summary: Add dockerfile for ARM builds
 Key: HADOOP-16797
 URL: https://issues.apache.org/jira/browse/HADOOP-16797
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Vinayakumar B


Similar to the x86 docker image in {{dev-support/docker/Dockerfile}}, 
add one more Dockerfile to support aarch64 builds.






[jira] [Resolved] (HADOOP-16358) Add an ARM CI for Hadoop

2020-01-06 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-16358.

Fix Version/s: 3.3.0
   Resolution: Fixed

A Jenkins job has been created to run nightly tests on aarch64:

[https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-qbt-linux-ARM-trunk/]

> Add an ARM CI for Hadoop
> 
>
> Key: HADOOP-16358
> URL: https://issues.apache.org/jira/browse/HADOOP-16358
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Zhenyu Zheng
>Priority: Major
> Fix For: 3.3.0
>
>
> Currently, Hadoop's CI is handled by Jenkins. The tests run on the x86 
> architecture; the ARM architecture has not been considered. This leads to a 
> problem: we have no way to test whether each pull request will break Hadoop 
> deployment on ARM.
> We should add a CI system that supports the ARM architecture; with it, Hadoop 
> could officially support ARM releases in the future. Here I'd like to 
> introduce OpenLab to the community. [OpenLab|https://openlabtesting.org/] is 
> an open source CI system that can test any open source software on either x86 
> or ARM, and it is mainly used by GitHub projects. Some 
> [projects|https://github.com/theopenlab/openlab-zuul-jobs/blob/master/zuul.d/jobs.yaml]
>  have integrated it already, such as containerd (a graduated CNCF project 
> whose ARM build is triggered on every PR, 
> [https://github.com/containerd/containerd/pulls]) and Terraform.
> OpenLab uses the open source CI software 
> [Zuul|https://github.com/openstack-infra/zuul]. Zuul is used by the OpenStack 
> community as well. Integrating with OpenLab is quite easy using its GitHub 
> app, and all configuration is open source as well.
> If the Apache Hadoop community is interested, we can help with the 
> integration.






[jira] [Created] (HADOOP-16774) TestDiskChecker and TestReadWriteDiskValidator fail when run with -Pparallel-tests

2019-12-20 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16774:
--

 Summary: TestDiskChecker and TestReadWriteDiskValidator fail when 
run with -Pparallel-tests
 Key: HADOOP-16774
 URL: https://issues.apache.org/jira/browse/HADOOP-16774
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinayakumar B


{noformat}
$ mvn test -Pparallel-tests -Dtest=TestReadWriteDiskValidator,TestDiskChecker -Pnative
{noformat}
{noformat}
[INFO] Results:
[INFO] 
[ERROR] Errors: 
[ERROR]   
TestDiskChecker.testCheckDir_normal:111->_checkDirs:158->createTempDir:153 » 
NoSuchFile
[ERROR]   
TestDiskChecker.testCheckDir_normal_local:180->checkDirs:205->createTempDir:153 
» NoSuchFile
[ERROR]   
TestDiskChecker.testCheckDir_notDir:116->_checkDirs:158->createTempFile:142 » 
NoSuchFile
[ERROR]   
TestDiskChecker.testCheckDir_notDir_local:185->checkDirs:205->createTempFile:142
 » NoSuchFile
[ERROR]   
TestDiskChecker.testCheckDir_notListable:131->_checkDirs:158->createTempDir:153 
» NoSuchFile
[ERROR]   
TestDiskChecker.testCheckDir_notListable_local:200->checkDirs:205->createTempDir:153
 » NoSuchFile
[ERROR]   
TestDiskChecker.testCheckDir_notReadable:121->_checkDirs:158->createTempDir:153 
» NoSuchFile
[ERROR]   
TestDiskChecker.testCheckDir_notReadable_local:190->checkDirs:205->createTempDir:153
 » NoSuchFile
[ERROR]   
TestDiskChecker.testCheckDir_notWritable:126->_checkDirs:158->createTempDir:153 
» NoSuchFile
[ERROR]   
TestDiskChecker.testCheckDir_notWritable_local:195->checkDirs:205->createTempDir:153
 » NoSuchFile
[ERROR]   TestReadWriteDiskValidator.testCheckFailures:114 » NoSuchFile 
/usr1/code/hadoo...
[ERROR]   TestReadWriteDiskValidator.testReadWriteDiskValidator:62 » DiskError 
Disk Chec...
[INFO] 
[ERROR] Tests run: 16, Failures: 0, Errors: 12, Skipped: 0

{noformat}






[jira] [Created] (HADOOP-16596) [pb-upgrade] Use shaded protobuf classes from hadoop-thirdparty dependency

2019-09-24 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16596:
--

 Summary: [pb-upgrade] Use shaded protobuf classes from 
hadoop-thirdparty dependency
 Key: HADOOP-16596
 URL: https://issues.apache.org/jira/browse/HADOOP-16596
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Vinayakumar B


Use the shaded protobuf classes from "hadoop-thirdparty" in the Hadoop codebase.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16595) [pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf

2019-09-24 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16595:
--

 Summary: [pb-upgrade] Create hadoop-thirdparty artifact to have 
shaded protobuf
 Key: HADOOP-16595
 URL: https://issues.apache.org/jira/browse/HADOOP-16595
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: hadoop-thirdparty
Reporter: Vinayakumar B


Create a separate repo "hadoop-thirdparty" to have shaded dependencies.

Starting with protobuf-java:3.7.1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16561) [MAPREDUCE] use protobuf-maven-plugin to generate protobuf classes

2019-09-24 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-16561.

Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Merged to trunk.

> [MAPREDUCE] use protobuf-maven-plugin to generate protobuf classes
> --
>
> Key: HADOOP-16561
> URL: https://issues.apache.org/jira/browse/HADOOP-16561
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Vinayakumar B
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.3.0
>
>
> Use "protoc-maven-plugin" to dynamically download protobuf executable to 
> generate protobuf classes from proto file



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16560) [YARN] use protobuf-maven-plugin to generate protobuf classes

2019-09-24 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-16560.

Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Already merged to trunk.

> [YARN] use protobuf-maven-plugin to generate protobuf classes
> -
>
> Key: HADOOP-16560
> URL: https://issues.apache.org/jira/browse/HADOOP-16560
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Vinayakumar B
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.3.0
>
>
> Use "protoc-maven-plugin" to dynamically download protobuf executable to 
> generate protobuf classes from proto file



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16589) [pb-upgrade] Update docker image to make 3.7.1 protoc as default

2019-09-21 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-16589.

Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Merged to trunk.

> [pb-upgrade] Update docker image to make 3.7.1 protoc as default
> 
>
> Key: HADOOP-16589
> URL: https://issues.apache.org/jira/browse/HADOOP-16589
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: 3.3.0
>
>
> Right now, the docker image contains both the 2.5.0 and 3.7.1 protoc.
> 2.5.0 is the default protoc in PATH.
> After HADOOP-16557, the protoc version expected in PATH is 3.7.1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16589) [pb-upgrade] Update docker image to make 3.7.1 protoc as default

2019-09-20 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16589:
--

 Summary: [pb-upgrade] Update docker image to make 3.7.1 protoc as 
default
 Key: HADOOP-16589
 URL: https://issues.apache.org/jira/browse/HADOOP-16589
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Vinayakumar B


Right now, the docker image contains both the 2.5.0 and 3.7.1 protoc.

2.5.0 is the default protoc in PATH.

After HADOOP-16557, the protoc version expected in PATH is 3.7.1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16562) Update docker image to have 3.7.1 protoc executable

2019-09-12 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16562:
--

 Summary: Update docker image to have 3.7.1 protoc executable
 Key: HADOOP-16562
 URL: https://issues.apache.org/jira/browse/HADOOP-16562
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Vinayakumar B


The current docker image has the 2.5.0 protobuf executable installed.

During the process of upgrading protobuf to 3.7.1, the docker image needs to 
have both versions for Yetus to verify.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16561) [MAPREDUCE] use protobuf-maven-plugin to generate protobuf classes

2019-09-12 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16561:
--

 Summary: [MAPREDUCE] use protobuf-maven-plugin to generate 
protobuf classes
 Key: HADOOP-16561
 URL: https://issues.apache.org/jira/browse/HADOOP-16561
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Vinayakumar B


Use "protoc-maven-plugin" to dynamically download protobuf executable to 
generate protobuf classes from proto file



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16560) [YARN] use protobuf-maven-plugin to generate protobuf classes

2019-09-12 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16560:
--

 Summary: [YARN] use protobuf-maven-plugin to generate protobuf 
classes
 Key: HADOOP-16560
 URL: https://issues.apache.org/jira/browse/HADOOP-16560
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Vinayakumar B


Use "protoc-maven-plugin" to dynamically download protobuf executable to 
generate protobuf classes from proto file



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16559) [HDFS] use protobuf-maven-plugin to generate protobuf classes

2019-09-12 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16559:
--

 Summary: [HDFS] use protobuf-maven-plugin to generate protobuf 
classes
 Key: HADOOP-16559
 URL: https://issues.apache.org/jira/browse/HADOOP-16559
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Vinayakumar B


Use "protoc-maven-plugin" to dynamically download protobuf executable to 
generate protobuf classes from proto file



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16558) [COMMON] use protobuf-maven-plugin to generate protobuf classes

2019-09-12 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16558:
--

 Summary: [COMMON] use protobuf-maven-plugin to generate protobuf 
classes
 Key: HADOOP-16558
 URL: https://issues.apache.org/jira/browse/HADOOP-16558
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: common
Reporter: Vinayakumar B


Use "protoc-maven-plugin" to dynamically download protobuf executable to 
generate protobuf classes from proto files.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16557) Upgrade protobuf.version to 3.7.1

2019-09-12 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16557:
--

 Summary: Upgrade protobuf.version to 3.7.1 
 Key: HADOOP-16557
 URL: https://issues.apache.org/jira/browse/HADOOP-16557
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Vinayakumar B


Bump "protobuf.version" to 3.7.1 and ensure everything compiles successfully.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15901) IPC Client and Server should use Time.monotonicNow() for elapsed times.

2018-11-05 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-15901:
--

 Summary: IPC Client and Server should use Time.monotonicNow() for 
elapsed times.
 Key: HADOOP-15901
 URL: https://issues.apache.org/jira/browse/HADOOP-15901
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc, metrics
Reporter: Vinayakumar B


Client.java and Server.java use {{Time.now()}} to calculate elapsed 
times and timeouts. This can produce incorrect results when the system 
clock changes.

{{Time.monotonicNow()}} should be used for elapsed-time calculations within 
the same JVM.
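A minimal sketch of the point above, using System.nanoTime() as a stand-in for 
{{Time.monotonicNow()}} (illustrative code, not the actual Hadoop patch): a 
monotonic clock is immune to wall-clock adjustments during the measurement.

```java
// Illustrative only: System.nanoTime() plays the role of Time.monotonicNow().
// A monotonic clock is safe for elapsed-time measurement even if the system
// wall clock is adjusted mid-measurement.
public class ElapsedTimeDemo {
    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();      // monotonic start point
        Thread.sleep(50);                    // stand-in for an RPC round trip
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // With Time.now() (wall clock), a clock change here could make
        // elapsedMs negative or huge; with a monotonic clock it cannot.
        System.out.println("elapsed >= 50 ms: " + (elapsedMs >= 50));
    }
}
```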

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15856) Trunk build fails to compile native on Windows

2018-10-15 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-15856:
--

 Summary: Trunk build fails to compile native on Windows
 Key: HADOOP-15856
 URL: https://issues.apache.org/jira/browse/HADOOP-15856
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Reporter: Vinayakumar B


After the removal of the {{javah}} dependency in HADOOP-15767,
the trunk build fails because it cannot find the JNI headers.

HADOOP-15767 fixed the javah issue with JDK 10 only for Linux builds.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15602) Support SASL Rpc request handling in separate Handlers

2018-07-12 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-15602:
--

 Summary: Support SASL Rpc request handling in separate Handlers 
 Key: HADOOP-15602
 URL: https://issues.apache.org/jira/browse/HADOOP-15602
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Vinayakumar B


Right now, during RPC connection establishment, all SASL requests are 
treated as out-of-band requests and handled within the same Reader thread.

SASL handling involves authentication with Kerberos and SecretManagers (for 
token validation). During this time the Reader thread is blocked, which blocks 
all incoming RPC requests on other established connections. Some 
SecretManager implementations also need to communicate with external systems 
(e.g. ZK) for verification.

Handling SASL RPCs in separate dedicated handlers would let Reader threads 
read RPC requests from established connections without blocking.
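A rough sketch of the proposal, assuming a plain ExecutorService as the 
dedicated handler pool (Hadoop's real Reader/Handler machinery is more 
involved; the names below are illustrative, not the actual classes):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative hand-off: the reader thread submits the expensive SASL
// negotiation to a dedicated pool instead of doing it inline, so it can
// keep reading requests from other established connections.
public class SaslHandlerDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService saslHandlers = Executors.newFixedThreadPool(2);
        Future<String> result = saslHandlers.submit(() -> {
            Thread.sleep(20);            // stand-in for Kerberos/token checks
            return "SASL negotiation complete";
        });
        // The reader thread would continue servicing other connections here
        // instead of blocking on the negotiation.
        System.out.println(result.get());
        saslHandlers.shutdown();
    }
}
```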



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-13738) DiskChecker should perform some disk IO

2018-01-03 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B reopened HADOOP-13738:


Re-opening to cherry-pick to branch-2.8

> DiskChecker should perform some disk IO
> ---
>
> Key: HADOOP-13738
> URL: https://issues.apache.org/jira/browse/HADOOP-13738
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13738.01.patch, HADOOP-13738.02.patch, 
> HADOOP-13738.03.patch, HADOOP-13738.04.patch, HADOOP-13738.05.patch
>
>
> DiskChecker can fail to detect total disk/controller failures indefinitely. 
> We have seen this in real clusters. DiskChecker performs simple 
> permissions-based checks on directories which do not guarantee that any disk 
> IO will be attempted.
> A simple improvement is to write some data and flush it to the disk.
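The improvement described above can be sketched as a check that performs real 
IO (write, fsync, delete) in the directory rather than only permission checks. 
The class and method names here are hypothetical, not Hadoop's actual 
DiskChecker API.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// Hypothetical sketch: probe a directory with a real write and fsync so a
// dead disk or controller is detected, unlike permission-only checks.
public class DiskIoCheck {
    public static void checkDir(File dir) throws IOException {
        File probe = new File(dir, ".disk-check-probe");
        try (FileOutputStream out = new FileOutputStream(probe)) {
            out.write(new byte[]{1, 2, 3, 4});  // force real IO
            out.getFD().sync();                 // flush to the device, not just the page cache
        } finally {
            probe.delete();                     // clean up the probe file
        }
    }

    public static void main(String[] args) throws IOException {
        checkDir(new File(System.getProperty("java.io.tmpdir")));
        System.out.println("disk check passed");
    }
}
```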



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14910) Upgrade netty-all jar to 4.0.37.Final

2017-09-26 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-14910:
--

 Summary: Upgrade netty-all jar to 4.0.37.Final
 Key: HADOOP-14910
 URL: https://issues.apache.org/jira/browse/HADOOP-14910
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinayakumar B
Priority: Critical


Upgrade netty-all jar to 4.0.37.Final version to fix latest vulnerabilities 
reported.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14427) Avoid reloading of Configuration in ViewFileSystem creation.

2017-05-16 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-14427:
--

 Summary: Avoid reloading of Configuration in ViewFileSystem 
creation.
 Key: HADOOP-14427
 URL: https://issues.apache.org/jira/browse/HADOOP-14427
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Reporter: Vinayakumar B
Assignee: Vinayakumar B


Avoid {{new Configuration()}} in the code below, during ViewFileSystem creation:
{code}
public InternalDirOfViewFs(final InodeTree.INodeDir dir,
    final long cTime, final UserGroupInformation ugi, URI uri)
    throws URISyntaxException {
  myUri = uri;
  try {
    initialize(myUri, new Configuration());
  } catch (IOException e) {
    throw new RuntimeException("Cannot occur");
  }
  theInternalDir = dir;
  creationTime = cTime;
  this.ugi = ugi;
}
{code}
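A minimal illustration of the fix direction, under the assumption that the 
already-loaded configuration can simply be passed in; the classes below are 
toy stand-ins, not Hadoop's API:

```java
// Toy model: "Configuration" stands in for Hadoop's expensive XML-loading
// Configuration; the fix is to reuse one instance instead of reloading it
// in every InternalDirOfViewFs constructor call.
public class ConfReuseDemo {
    static int loads = 0;

    static class Configuration {
        Configuration() { loads++; }     // counts the expensive reloads
    }

    static class InternalDir {
        final Configuration conf;
        InternalDir(Configuration conf) { this.conf = conf; } // reuse, don't reload
    }

    public static void main(String[] args) {
        Configuration shared = new Configuration();
        for (int i = 0; i < 100; i++) {
            new InternalDir(shared);     // 100 dirs, still only 1 load
        }
        System.out.println("configs loaded: " + loads);
    }
}
```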



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14100) Upgrade Jsch jar to latest version

2017-02-20 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-14100:
--

 Summary: Upgrade Jsch jar to latest version
 Key: HADOOP-14100
 URL: https://issues.apache.org/jira/browse/HADOOP-14100
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.5, 2.7.3
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Critical


A vulnerability was recently reported in the jsch library. It was fixed in the 
latest 0.1.54 version before the CVE was made public:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-5725

So jsch needs to be upgraded to the latest 0.1.54 version.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13416) Hide System properties of Daemon in /jmx output

2016-07-25 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-13416:
--

 Summary: Hide System properties of Daemon in /jmx output
 Key: HADOOP-13416
 URL: https://issues.apache.org/jira/browse/HADOOP-13416
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Vinayakumar B
Assignee: Vinayakumar B


Showing the daemon's system properties in /jmx, which is not a secured URL, 
could expose unwanted information to non-admin users.
So it would be better to hide them from display.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13415) add authentication filters to '/conf' and '/stacks' servlet

2016-07-25 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-13415:
--

 Summary: add authentication filters to '/conf' and '/stacks' 
servlet
 Key: HADOOP-13415
 URL: https://issues.apache.org/jira/browse/HADOOP-13415
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Vinayakumar B


/conf and /stacks could reveal security-related server-side information 
(configurations, paths, etc.) to non-admin users.
It is better to make them go through authentication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13414) Hide Jetty Server version header in HTTP responses

2016-07-25 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-13414:
--

 Summary: Hide Jetty Server version header in HTTP responses
 Key: HADOOP-13414
 URL: https://issues.apache.org/jira/browse/HADOOP-13414
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Vinayakumar B
Assignee: Vinayakumar B


Hide the Jetty server version in the HTTP response header. Some security 
analyzers flag it as an issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13314) Remove 'package-info.java' from 'test\java\org\apache\hadoop\fs\shell\' to remove eclipse compile error

2016-06-23 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-13314:
--

 Summary: Remove 'package-info.java' from 
'test\java\org\apache\hadoop\fs\shell\' to remove eclipse compile error
 Key: HADOOP-13314
 URL: https://issues.apache.org/jira/browse/HADOOP-13314
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Trivial


HADOOP-13079 added a package-info.java in test\java\org\apache\hadoop\fs\shell\ 
to avoid a checkstyle warning. 
But this resulted in an Eclipse compile error, "The type package-info is already 
defined", because a package-info.java is already present in the same package in 
the src folder.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-13216) Append SequenceFile with compressionType(NONE,RECORD,BLOCK) throws NullPointerException

2016-05-31 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B reopened HADOOP-13216:


Re-opening to resolve as duplicate.

> Append SequenceFile with compressionType(NONE,RECORD,BLOCK) throws 
> NullPointerException
> ---
>
> Key: HADOOP-13216
> URL: https://issues.apache.org/jira/browse/HADOOP-13216
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Syed Akram
> Fix For: 2.8.0
>
>
> While appending a SequenceFile with any compression type (NONE, RECORD, 
> BLOCK), getting the writer:
>  SequenceFile.createWriter(conf, 
>   SequenceFile.Writer.file(path),
>   SequenceFile.Writer.compression(ctype),
>   SequenceFile.Writer.keyClass(Key.class),
>   SequenceFile.Writer.valueClass(Value.class),
>   SequenceFile.Writer.appendIfExists(true));
> The above throws the exception below when we try to append to an existing 
> SequenceFile with a compression type:
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.io.SequenceFile$Writer.(SequenceFile.java:1118)
>   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:273)
>   at SequenceFileTest.getWriter(SequenceFileTest.java:342)
>   at SequenceFileTest.writeContent(SequenceFileTest.java:429)
>   at SequenceFileTest.(SequenceFileTest.java:83)
>   at SequenceFileTest.main(SequenceFileTest.java:565)
> If I use the writer below,
> SequenceFile.createWriter(conf, 
>   SequenceFile.Writer.file(path),
>   SequenceFile.Writer.keyClass(keyClass),
>   SequenceFile.Writer.valueClass(keyClass),
>   SequenceFile.Writer.appendIfExists(append));
> without any compression type, it works fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13072) WindowsGetSpaceUsed constructor should be public

2016-04-29 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-13072:
--

 Summary: WindowsGetSpaceUsed constructor should be public
 Key: HADOOP-13072
 URL: https://issues.apache.org/jira/browse/HADOOP-13072
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B


The WindowsGetSpaceUsed constructor should be made public.
Otherwise, constructing it via the builder will not work:

{noformat}2016-04-29 12:49:37,455 [Thread-108] WARN  fs.GetSpaceUsed$Builder 
(GetSpaceUsed.java:build(127)) - Doesn't look like the class class 
org.apache.hadoop.fs.WindowsGetSpaceUsed have the needed constructor
java.lang.NoSuchMethodException: 
org.apache.hadoop.fs.WindowsGetSpaceUsed.(org.apache.hadoop.fs.GetSpaceUsed$Builder)
at java.lang.Class.getConstructor0(Unknown Source)
at java.lang.Class.getConstructor(Unknown Source)
at 
org.apache.hadoop.fs.GetSpaceUsed$Builder.build(GetSpaceUsed.java:118)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.(BlockPoolSlice.java:165)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlockPool(FsVolumeImpl.java:915)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlockPool(FsVolumeImpl.java:907)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$2.run(FsVolumeList.java:413)
{noformat}
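A small demonstration of why the builder fails: Class.getConstructor only 
returns public constructors, so a non-public constructor produces exactly the 
NoSuchMethodException seen in the log above. The classes here are toy 
examples, not Hadoop's.

```java
// Class.getConstructor sees only public constructors; a package-private
// constructor is invisible to it and triggers NoSuchMethodException,
// which is what GetSpaceUsed$Builder.build hits for WindowsGetSpaceUsed.
public class CtorVisibilityDemo {
    static class PackagePrivateCtor {
        PackagePrivateCtor(String arg) {}      // not public: reflection can't find it
    }

    static class PublicCtor {
        public PublicCtor(String arg) {}       // public: reflection finds it
    }

    public static void main(String[] args) {
        try {
            PackagePrivateCtor.class.getConstructor(String.class);
            System.out.println("found");
        } catch (NoSuchMethodException e) {
            System.out.println("NoSuchMethodException");
        }
        try {
            PublicCtor.class.getConstructor(String.class);
            System.out.println("found");
        } catch (NoSuchMethodException e) {
            System.out.println("NoSuchMethodException");
        }
    }
}
```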



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-12480) Run precommit javadoc only for changed modules

2015-10-15 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-12480:
--

 Summary: Run precommit javadoc only for changed modules
 Key: HADOOP-12480
 URL: https://issues.apache.org/jira/browse/HADOOP-12480
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Vinayakumar B
Assignee: Vinayakumar B


Currently the precommit javadoc check runs on the root of Hadoop.

IMO it is sufficient to run it only for the changed modules.
This way precommit will take even less time, as javadoc takes significant 
time compared to the other checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12302) Native Compilation broken in Windows after HADOOP-7824

2015-08-04 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-12302:
--

 Summary: Native Compilation broken in Windows after HADOOP-7824
 Key: HADOOP-12302
 URL: https://issues.apache.org/jira/browse/HADOOP-12302
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Blocker


HADOOP-7824 introduced a way to set the Java static values for POSIX flags; 
this resulted in a compilation error on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12214) Parse 'HadoopArchive' commandline using cli Options.

2015-07-10 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-12214:
--

 Summary: Parse 'HadoopArchive' commandline using cli Options.
 Key: HADOOP-12214
 URL: https://issues.apache.org/jira/browse/HADOOP-12214
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Minor


Use the CommandLine parser for parsing the hadoop archives options.
This will allow providing the options in any order.
Currently a strict order must be maintained,

like {{-archiveName NAME.har -p parent path \[-r replication factor\] 
src* dest}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12176) smart-apply-patch.sh fails to identify git patch prefixes in some cases

2015-07-02 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-12176:
--

 Summary: smart-apply-patch.sh fails to identify git patch prefixes 
in some cases
 Key: HADOOP-12176
 URL: https://issues.apache.org/jira/browse/HADOOP-12176
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B


After HADOOP-12018, git apply is supported with --no-prefix.

But for some patches this detection identifies a git patch with a prefix as a 
no-prefix patch, and the apply fails.

An example case: if the patch contains a changed line starting with '+++' or 
'---' somewhere in between (perhaps a javadoc update), the detection goes wrong 
and git apply fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12019) update BUILDING.txt to include python for 'mvn site' in windows

2015-05-22 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-12019:
--

 Summary: update BUILDING.txt to include python for 'mvn site' in 
windows 
 Key: HADOOP-12019
 URL: https://issues.apache.org/jira/browse/HADOOP-12019
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B


HADOOP-11553 introduced shelldocs.py to generate documentation for the shell 
APIs. This needs Python to execute.

In Linux environments Python is available by default, but on Windows it needs 
to be installed separately.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HADOOP-11566) Add tests and fix for erasure coders to recover erased parity units

2015-05-13 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B reopened HADOOP-11566:


 Add tests and fix for erasure coders to recover erased parity units 
 

 Key: HADOOP-11566
 URL: https://issues.apache.org/jira/browse/HADOOP-11566
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: HDFS-7285

 Attachments: HADOOP-11566-HDFS-7285-v2.patch, 
 HADOOP-11566-HDFS-7285-v2.patch, HADOOP-11566-v1.patch


 Discussing with [~zhz] in HADOOP-11542: it's planned to have follow up a JIRA 
 to enhance the tests for parity chunks as well. Like erasedDataIndexes, 
 erasedParityIndexes will be added to specify which parity units are to be 
 erased and recovered then.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11566) Add tests and fix for erasure coders to recover erased parity units

2015-05-13 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-11566.

   Resolution: Fixed
Fix Version/s: HDFS-7285

Resolving as FIXED

 Add tests and fix for erasure coders to recover erased parity units 
 

 Key: HADOOP-11566
 URL: https://issues.apache.org/jira/browse/HADOOP-11566
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: HDFS-7285

 Attachments: HADOOP-11566-HDFS-7285-v2.patch, 
 HADOOP-11566-HDFS-7285-v2.patch, HADOOP-11566-v1.patch


 Discussing with [~zhz] in HADOOP-11542: it's planned to have follow up a JIRA 
 to enhance the tests for parity chunks as well. Like erasedDataIndexes, 
 erasedParityIndexes will be added to specify which parity units are to be 
 erased and recovered then.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11834) Add ErasureCodecFactory to create ErasureCodec using codec's short name.

2015-04-17 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-11834.

Resolution: Duplicate

 Add ErasureCodecFactory to create ErasureCodec using codec's short name.
 

 Key: HADOOP-11834
 URL: https://issues.apache.org/jira/browse/HADOOP-11834
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Vinayakumar B
Assignee: Vinayakumar B

 A codec instance should be created using the codec's short name.
 For example, using the name rs, an {{RSErasureCodec}} should be created.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11834) Add ErasureCodecFactory to create ErasureCodec using codec's short name.

2015-04-15 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-11834:
--

 Summary: Add ErasureCodecFactory to create ErasureCodec using 
codec's short name.
 Key: HADOOP-11834
 URL: https://issues.apache.org/jira/browse/HADOOP-11834
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Vinayakumar B


A codec instance should be created using the codec's short name.
For example, using the name rs, an {{RSErasureCodec}} should be created.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11645) Erasure Codec API covering the essential aspects for an erasure code

2015-04-07 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-11645.

   Resolution: Fixed
Fix Version/s: HDFS-7285
 Hadoop Flags: Reviewed

Thanks [~drankye].

 Erasure Codec API covering the essential aspects for an erasure code
 

 Key: HADOOP-11645
 URL: https://issues.apache.org/jira/browse/HADOOP-11645
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: HDFS-7285

 Attachments: HADOOP-11645-v1.patch, HADOOP-11645-v2.patch, 
 HADOOP-11645-v3.patch


 This is to define the even higher level API *ErasureCodec* to possiblly 
 consider all the essential aspects for an erasure code, as discussed in in 
 HDFS-7337 in details. Generally, it will cover the necessary configurations 
 about which *RawErasureCoder* to use for the code scheme, how to form and 
 layout the BlockGroup, and etc. It will also discuss how an *ErasureCodec* 
 will be used in both client and DataNode, in all the supported modes related 
 to EC.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11646) Erasure Coder API for encoding and decoding of block group

2015-03-09 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-11646.

  Resolution: Fixed
Hadoop Flags: Reviewed

Committed to HDFS-7285 branch.

 Erasure Coder API for encoding and decoding of block group
 --

 Key: HADOOP-11646
 URL: https://issues.apache.org/jira/browse/HADOOP-11646
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: HDFS-7285

 Attachments: HADOOP-11646-v4.patch, HADOOP-11646-v5.patch, 
 HDFS-7662-v1.patch, HDFS-7662-v2.patch, HDFS-7662-v3.patch


 This is to define the ErasureCoder API for encoding and decoding of a BlockGroup. 
 Given a BlockGroup, ErasureCoder extracts data chunks from the blocks and 
 leverages the RawErasureCoder defined in HADOOP-11514 to perform the concrete 
 encoding or decoding. Note this mainly focuses on the basic fundamental 
 aspects and covers encoding, data block recovery, etc. Parity block recovery 
 involves multiple steps and will be handled in HADOOP-11550.
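The encode/decode flow described above can be illustrated with a toy single-parity (XOR) coder. This is only a sketch of the concept, not the ErasureCoder API from the patch; the class and method names below are invented for illustration:

```java
// Toy illustration of erasure encode/decode over a "block group":
// the parity chunk is the XOR of all data chunks, so any single
// missing data chunk can be rebuilt from the parity plus the rest.
public class XorCoderSketch {

    // Encode: produce the parity chunk for the given data chunks.
    public static byte[] encode(byte[][] dataChunks) {
        byte[] parity = new byte[dataChunks[0].length];
        for (byte[] chunk : dataChunks) {
            for (int i = 0; i < parity.length; i++) {
                parity[i] ^= chunk[i];
            }
        }
        return parity;
    }

    // Decode: rebuild the chunk at erasedIndex by XOR-ing the parity
    // with all surviving data chunks.
    public static byte[] decode(byte[][] dataChunks, byte[] parity, int erasedIndex) {
        byte[] recovered = parity.clone();
        for (int c = 0; c < dataChunks.length; c++) {
            if (c == erasedIndex) {
                continue; // this chunk is lost
            }
            for (int i = 0; i < recovered.length; i++) {
                recovered[i] ^= dataChunks[c][i];
            }
        }
        return recovered;
    }

    public static void main(String[] args) {
        byte[][] data = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} };
        byte[] parity = encode(data);
        byte[] rebuilt = decode(data, parity, 1); // pretend chunk 1 is lost
        if (!java.util.Arrays.equals(rebuilt, new byte[] {4, 5, 6})) {
            throw new AssertionError("recovery failed");
        }
        System.out.println("recovered chunk 1 OK");
    }
}
```

Real codecs use Reed-Solomon-style codes to tolerate multiple erasures; XOR is the degenerate one-parity case, which is why the codec must be configurable about which RawErasureCoder to plug in.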



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11668) start-dfs.sh and stop-dfs.sh no longer works in HA mode after --slaves shell option

2015-03-03 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-11668:
--

 Summary: start-dfs.sh and stop-dfs.sh no longer works in HA mode 
after --slaves shell option
 Key: HADOOP-11668
 URL: https://issues.apache.org/jira/browse/HADOOP-11668
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Vinayakumar B
Assignee: Vinayakumar B


After the introduction of the --slaves option for the scripts, start-dfs.sh and 
stop-dfs.sh no longer work in HA mode.

This is because multiple hostnames are passed to '--hostnames' delimited with spaces.

The extra hostnames are treated as commands and the script fails.

So, delimiting the hostnames with a comma (,) instead of a space before passing 
them to hadoop-daemons.sh will solve the problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11569) Provide Merge API for MapFile to merge multiple similar MapFiles to one MapFile

2015-02-09 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-11569:
--

 Summary: Provide Merge API for MapFile to merge multiple similar 
MapFiles to one MapFile
 Key: HADOOP-11569
 URL: https://issues.apache.org/jira/browse/HADOOP-11569
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Vinayakumar B
Assignee: Vinayakumar B


If there are multiple similar MapFiles with the same key and value classes, 
they can be merged into one MapFile to make searching easier.

Provide an API similar to {{SequenceFile#merge()}}.
Merging is straightforward given that MapFiles are already sorted.
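Because each input is already sorted, the merge reduces to a streaming k-way pass. The following is an illustrative sketch of that idea over sorted in-memory maps, not the proposed Hadoop API; in a real implementation, MapFile readers and a writer would replace the TreeMaps and result list:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;
import java.util.TreeMap;

// Sketch of a k-way merge over already-sorted inputs -- the property a
// MapFile merge would rely on: repeatedly emit the smallest current key,
// producing one globally sorted (and therefore still searchable) output.
public class SortedMergeSketch {

    // Tracks one input's iterator together with its current entry.
    private static class Cursor {
        final Iterator<Map.Entry<String, String>> it;
        Map.Entry<String, String> current;

        Cursor(Iterator<Map.Entry<String, String>> it) {
            this.it = it;
            this.current = it.next(); // callers only construct for non-empty inputs
        }

        boolean advance() {
            if (it.hasNext()) {
                current = it.next();
                return true;
            }
            return false;
        }
    }

    public static List<String> mergeKeys(List<TreeMap<String, String>> inputs) {
        PriorityQueue<Cursor> heap =
            new PriorityQueue<>(Comparator.comparing((Cursor c) -> c.current.getKey()));
        for (TreeMap<String, String> in : inputs) {
            if (!in.isEmpty()) {
                heap.add(new Cursor(in.entrySet().iterator()));
            }
        }
        List<String> mergedKeys = new ArrayList<>();
        while (!heap.isEmpty()) {
            Cursor c = heap.poll();           // the input with the smallest key
            mergedKeys.add(c.current.getKey());
            if (c.advance()) {
                heap.add(c);                  // re-insert with its next entry
            }
        }
        return mergedKeys;
    }

    public static void main(String[] args) {
        TreeMap<String, String> a = new TreeMap<>(Map.of("apple", "1", "mango", "2"));
        TreeMap<String, String> b = new TreeMap<>(Map.of("banana", "3", "pear", "4"));
        System.out.println(mergeKeys(List.of(a, b))); // globally sorted keys
    }
}
```

Each record is read once and only k cursors are held in memory, so the merge streams regardless of file size.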



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11459) Fix recent findbugs in ActiveStandbyElector, NetUtils and ShellBasedIdMapping

2015-01-05 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-11459:
--

 Summary: Fix recent findbugs in ActiveStandbyElector, NetUtils and 
ShellBasedIdMapping
 Key: HADOOP-11459
 URL: https://issues.apache.org/jira/browse/HADOOP-11459
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Minor


Fix the findbugs warnings in the latest Jenkins run which are causing QA builds to fail.

{noformat}Return value of java.util.concurrent.CountDownLatch.await(long, 
TimeUnit) ignored in 
org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef.process(WatchedEvent){noformat}

{noformat}Sequence of calls to java.util.concurrent.ConcurrentHashMap may not 
be atomic in org.apache.hadoop.net.NetUtils.canonicalizeHost(String){noformat}

{noformat}Inconsistent synchronization of 
org.apache.hadoop.security.ShellBasedIdMapping.staticMapping; locked 88% of 
time{noformat}
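The first two warnings have standard remedies: use the boolean returned by the latch's await(), and replace check-then-act sequences on a ConcurrentHashMap with a single atomic call. A hedged sketch of both patterns follows; this is illustrative only, not the actual Hadoop patch, and the method names are invented:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class FindbugsFixSketch {

    // First warning: the boolean result of await(long, TimeUnit) -- reached
    // zero vs. timed out -- must be propagated, not dropped, so the caller
    // can react to a timeout instead of proceeding blindly.
    public static boolean awaitChecked(CountDownLatch latch, long millis)
            throws InterruptedException {
        return latch.await(millis, TimeUnit.MILLISECONDS);
    }

    // Second warning: containsKey() followed by put() on a ConcurrentHashMap
    // is two separate operations, and another thread can interleave between
    // them. computeIfAbsent performs lookup-or-insert as one atomic step.
    public static String canonicalize(ConcurrentMap<String, String> cache, String host) {
        return cache.computeIfAbsent(host, h -> h.toLowerCase());
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(0); // already released
        System.out.println("latch reached zero: " + awaitChecked(latch, 10));
        ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();
        System.out.println("canonical: " + canonicalize(cache, "Host-1.Example.COM"));
    }
}
```

The third warning (inconsistent synchronization of a field) is usually resolved by guarding every access with the same lock, or by making the field volatile when a simple reference swap suffices.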



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11296) hadoop-daemons.sh throws 'host1: bash: host3: command not found...'

2014-11-11 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-11296:
--

 Summary: hadoop-daemons.sh throws 'host1: bash: host3: command not 
found...'
 Key: HADOOP-11296
 URL: https://issues.apache.org/jira/browse/HADOOP-11296
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.5.1
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Critical


*hadoop-daemons.sh* throws command not found.

{noformat}[vinay@host2 install]$ 
/home/vinay/install/hadoop/sbin/hadoop-daemons.sh --config 
/home/vinay/install/conf --hostnames 'host1 host2' start namenode
host1: bash: host2: command not found...
{noformat}

*hadoop-daemons.sh* is mainly used to start the cluster, e.g. by start-dfs.sh.

Without this, the cluster cannot be started.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-6253) Add a Ceph FileSystem interface.

2014-11-04 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-6253.
---
Resolution: Won't Fix

Resolving as 'Won't Fix' as no changes have been committed

 Add a Ceph FileSystem interface.
 

 Key: HADOOP-6253
 URL: https://issues.apache.org/jira/browse/HADOOP-6253
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Reporter: Gregory Farnum
Assignee: Gregory Farnum
Priority: Minor
  Labels: ceph
 Attachments: HADOOP-6253.patch, HADOOP-6253.patch, HADOOP-6253.patch, 
 HADOOP-6253.patch, HADOOP-6253.patch


 The experimental distributed filesystem Ceph does not have a single point of 
 failure, uses a distributed metadata cluster instead of a single in-memory 
 server, and might be of use to some Hadoop users.
 http://ceph.com/docs/wip-hadoop-doc/cephfs/hadoop/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HADOOP-6253) Add a Ceph FileSystem interface.

2014-11-04 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B reopened HADOOP-6253:
---

 Add a Ceph FileSystem interface.
 

 Key: HADOOP-6253
 URL: https://issues.apache.org/jira/browse/HADOOP-6253
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Reporter: Gregory Farnum
Assignee: Gregory Farnum
Priority: Minor
  Labels: ceph
 Attachments: HADOOP-6253.patch, HADOOP-6253.patch, HADOOP-6253.patch, 
 HADOOP-6253.patch, HADOOP-6253.patch


 The experimental distributed filesystem Ceph does not have a single point of 
 failure, uses a distributed metadata cluster instead of a single in-memory 
 server, and might be of use to some Hadoop users.
 http://ceph.com/docs/wip-hadoop-doc/cephfs/hadoop/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HADOOP-11227) error when building hadoop on windows

2014-11-04 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B reopened HADOOP-11227:


 error when building hadoop on windows  
 ---

 Key: HADOOP-11227
 URL: https://issues.apache.org/jira/browse/HADOOP-11227
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: milq
Assignee: milq

 [INFO] 
 
 [INFO] Building hadoop-mapreduce-client-app 2.2.0
 [INFO] 
 
 [INFO]
 [INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ 
 hadoop-mapreduce-clie
 nt-app ---
 [INFO] Executing tasks
 main:
 [INFO] Executed tasks
 [INFO]
 [INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ 
 hadoop-map
 reduce-client-app ---
 [INFO] Using default encoding to copy filtered resources.
 [INFO]
 [INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ 
 hadoop-mapred
 uce-client-app ---
 [INFO] Nothing to compile - all classes are up to date
 [INFO]
 [INFO] --- maven-resources-plugin:2.2:testResources (default-testResources) @ 
 ha
 doop-mapreduce-client-app ---
 [INFO] Using default encoding to copy filtered resources.
 [INFO]
 [INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ 
 hadoo
 p-mapreduce-client-app ---
 [INFO] Compiling 29 source files to 
 D:\hdfs\hadoop-mapreduce-project\hadoop-mapr
 educe-client\hadoop-mapreduce-client-app\target\test-classes
 [INFO] -
 [ERROR] COMPILATION ERROR :
 [INFO] -
 [ERROR] 
 D:\hdfs\hadoop-mapreduce-project\hadoop-mapreduce-client\hadoop-mapreduc
 e-client-app\src\test\java\org\apache\hadoop\mapreduce\v2\app\TestRecovery.java:
 [1491,60] incomparable types: java.lang.Enum<capture#698 of ?> and 
 org.apache.hadoop.mapreduce.JobCounter
 [ERROR] 
 D:\hdfs\hadoop-mapreduce-project\hadoop-mapreduce-client\hadoop-mapreduc
 e-client-app\src\test\java\org\apache\hadoop\mapreduce\v2\app\TestRecovery.java:
 [1495,67] incomparable types: java.lang.Enum<capture#215 of ?> and 
 org.apache.hadoop.mapreduce.JobCounter
 [INFO] 2 errors
 [INFO] -
 [INFO] 
 
 [INFO] Reactor Summary:
 [INFO]
 [INFO] Apache Hadoop Main  SUCCESS [7.601s]
 [INFO] Apache Hadoop Project POM . SUCCESS [7.254s]
 [INFO] Apache Hadoop Annotations . SUCCESS [7.177s]
 [INFO] Apache Hadoop Assemblies .. SUCCESS [0.604s]
 [INFO] Apache Hadoop Project Dist POM  SUCCESS [6.864s]
 [INFO] Apache Hadoop Maven Plugins ... SUCCESS [8.371s]
 [INFO] Apache Hadoop Auth  SUCCESS [5.966s]
 [INFO] Apache Hadoop Auth Examples ... SUCCESS [4.492s]
 [INFO] Apache Hadoop Common .. SUCCESS [7:26.231s]
 [INFO] Apache Hadoop NFS . SUCCESS [20.858s]
 [INFO] Apache Hadoop Common Project .. SUCCESS [0.093s]
 [INFO] Apache Hadoop HDFS  SUCCESS [8:10.985s]
 [INFO] Apache Hadoop HttpFS .. SUCCESS [1:00.932s]
 [INFO] Apache Hadoop HDFS BookKeeper Journal . SUCCESS [17.207s]
 [INFO] Apache Hadoop HDFS-NFS  SUCCESS [12.950s]
 [INFO] Apache Hadoop HDFS Project  SUCCESS [0.104s]
 [INFO] hadoop-yarn ... SUCCESS [1.943s]
 [INFO] hadoop-yarn-api ... SUCCESS [2:39.214s]
 [INFO] hadoop-yarn-common  SUCCESS [1:15.391s]
 [INFO] hadoop-yarn-server  SUCCESS [0.278s]
 [INFO] hadoop-yarn-server-common . SUCCESS [14.293s]
 [INFO] hadoop-yarn-server-nodemanager  SUCCESS [25.848s]
 [INFO] hadoop-yarn-server-web-proxy .. SUCCESS [5.866s]
 [INFO] hadoop-yarn-server-resourcemanager  SUCCESS [39.821s]
 [INFO] hadoop-yarn-server-tests .. SUCCESS [0.645s]
 [INFO] hadoop-yarn-client  SUCCESS [6.714s]
 [INFO] hadoop-yarn-applications .. SUCCESS [0.454s]
 [INFO] hadoop-yarn-applications-distributedshell . SUCCESS [3.555s]
 [INFO] hadoop-mapreduce-client ... SUCCESS [0.292s]
 [INFO] hadoop-mapreduce-client-core .. SUCCESS [1:05.441s]
 [INFO] 

[jira] [Resolved] (HADOOP-11227) error when building hadoop on windows

2014-11-04 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-11227.

Resolution: Not a Problem

Closing as 'Not a problem'

 error when building hadoop on windows  
 ---

 Key: HADOOP-11227
 URL: https://issues.apache.org/jira/browse/HADOOP-11227
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: milq
Assignee: milq

 [INFO] 
 
 [INFO] Building hadoop-mapreduce-client-app 2.2.0
 [INFO] 
 
 [INFO]
 [INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ 
 hadoop-mapreduce-clie
 nt-app ---
 [INFO] Executing tasks
 main:
 [INFO] Executed tasks
 [INFO]
 [INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ 
 hadoop-map
 reduce-client-app ---
 [INFO] Using default encoding to copy filtered resources.
 [INFO]
 [INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ 
 hadoop-mapred
 uce-client-app ---
 [INFO] Nothing to compile - all classes are up to date
 [INFO]
 [INFO] --- maven-resources-plugin:2.2:testResources (default-testResources) @ 
 ha
 doop-mapreduce-client-app ---
 [INFO] Using default encoding to copy filtered resources.
 [INFO]
 [INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ 
 hadoo
 p-mapreduce-client-app ---
 [INFO] Compiling 29 source files to 
 D:\hdfs\hadoop-mapreduce-project\hadoop-mapr
 educe-client\hadoop-mapreduce-client-app\target\test-classes
 [INFO] -
 [ERROR] COMPILATION ERROR :
 [INFO] -
 [ERROR] 
 D:\hdfs\hadoop-mapreduce-project\hadoop-mapreduce-client\hadoop-mapreduc
 e-client-app\src\test\java\org\apache\hadoop\mapreduce\v2\app\TestRecovery.java:
 [1491,60] incomparable types: java.lang.Enum<capture#698 of ?> and 
 org.apache.hadoop.mapreduce.JobCounter
 [ERROR] 
 D:\hdfs\hadoop-mapreduce-project\hadoop-mapreduce-client\hadoop-mapreduc
 e-client-app\src\test\java\org\apache\hadoop\mapreduce\v2\app\TestRecovery.java:
 [1495,67] incomparable types: java.lang.Enum<capture#215 of ?> and 
 org.apache.hadoop.mapreduce.JobCounter
 [INFO] 2 errors
 [INFO] -
 [INFO] 
 
 [INFO] Reactor Summary:
 [INFO]
 [INFO] Apache Hadoop Main  SUCCESS [7.601s]
 [INFO] Apache Hadoop Project POM . SUCCESS [7.254s]
 [INFO] Apache Hadoop Annotations . SUCCESS [7.177s]
 [INFO] Apache Hadoop Assemblies .. SUCCESS [0.604s]
 [INFO] Apache Hadoop Project Dist POM  SUCCESS [6.864s]
 [INFO] Apache Hadoop Maven Plugins ... SUCCESS [8.371s]
 [INFO] Apache Hadoop Auth  SUCCESS [5.966s]
 [INFO] Apache Hadoop Auth Examples ... SUCCESS [4.492s]
 [INFO] Apache Hadoop Common .. SUCCESS [7:26.231s]
 [INFO] Apache Hadoop NFS . SUCCESS [20.858s]
 [INFO] Apache Hadoop Common Project .. SUCCESS [0.093s]
 [INFO] Apache Hadoop HDFS  SUCCESS [8:10.985s]
 [INFO] Apache Hadoop HttpFS .. SUCCESS [1:00.932s]
 [INFO] Apache Hadoop HDFS BookKeeper Journal . SUCCESS [17.207s]
 [INFO] Apache Hadoop HDFS-NFS  SUCCESS [12.950s]
 [INFO] Apache Hadoop HDFS Project  SUCCESS [0.104s]
 [INFO] hadoop-yarn ... SUCCESS [1.943s]
 [INFO] hadoop-yarn-api ... SUCCESS [2:39.214s]
 [INFO] hadoop-yarn-common  SUCCESS [1:15.391s]
 [INFO] hadoop-yarn-server  SUCCESS [0.278s]
 [INFO] hadoop-yarn-server-common . SUCCESS [14.293s]
 [INFO] hadoop-yarn-server-nodemanager  SUCCESS [25.848s]
 [INFO] hadoop-yarn-server-web-proxy .. SUCCESS [5.866s]
 [INFO] hadoop-yarn-server-resourcemanager  SUCCESS [39.821s]
 [INFO] hadoop-yarn-server-tests .. SUCCESS [0.645s]
 [INFO] hadoop-yarn-client  SUCCESS [6.714s]
 [INFO] hadoop-yarn-applications .. SUCCESS [0.454s]
 [INFO] hadoop-yarn-applications-distributedshell . SUCCESS [3.555s]
 [INFO] hadoop-mapreduce-client ... SUCCESS [0.292s]
 [INFO] 

[jira] [Created] (HADOOP-11271) Use Time.monotonicNow() in Shell.java instead of Time.now()

2014-11-04 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-11271:
--

 Summary: Use Time.monotonicNow() in Shell.java instead of 
Time.now()
 Key: HADOOP-11271
 URL: https://issues.apache.org/jira/browse/HADOOP-11271
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Minor


Use {{Time.monotonicNow()}} instead of {{Time.now()}} in Shell.java to keep 
track of the last executed time.

Using {{Time.monotonicNow()}} for elapsed-time calculations is accurate 
and safe from system clock changes.
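The distinction matters because wall-clock time can jump backwards or forwards (NTP adjustments, manual clock changes), while a monotonic clock only moves forward. A minimal sketch of the idea in plain Java follows; the names here are illustrative, though Hadoop's monotonic clock is likewise derived from System.nanoTime():

```java
public class ElapsedTimeSketch {

    // A monotonic "now" in milliseconds. The absolute value is meaningless
    // (it is not wall-clock time); only differences between two readings
    // are meaningful -- which is exactly what elapsed-time code needs.
    public static long monotonicNowMillis() {
        return System.nanoTime() / 1_000_000L;
    }

    // Elapsed time measured against a monotonic start point can never be
    // negative, unlike System.currentTimeMillis()-based intervals, which
    // break if the system clock is adjusted mid-measurement.
    public static long elapsedMillis(long startMonotonicMillis) {
        return monotonicNowMillis() - startMonotonicMillis;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = monotonicNowMillis();
        Thread.sleep(5);
        long elapsed = elapsedMillis(start);
        if (elapsed < 0) {
            throw new AssertionError("monotonic elapsed time can never be negative");
        }
        System.out.println("elapsed millis: " + elapsed);
    }
}
```

This is why the JIRA proposes the change for Shell.java's "last executed time" tracking: that value only feeds interval comparisons, never absolute timestamps.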



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HADOOP-11148) TestInMemoryNativeS3FileSystemContract fails

2014-11-03 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B reopened HADOOP-11148:


 TestInMemoryNativeS3FileSystemContract fails 
 -

 Key: HADOOP-11148
 URL: https://issues.apache.org/jira/browse/HADOOP-11148
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: Rajat Jain
Priority: Minor

 Getting these errors. Ran on CentOS 6.5
 {code}
 testCanonicalName(org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract)
   Time elapsed: 0.389 sec  <<< ERROR!
 java.lang.IllegalArgumentException: java.net.UnknownHostException: null
   at 
 org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373)
   at 
 org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:258)
   at 
 org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:304)
   at 
 org.apache.hadoop.fs.s3native.NativeS3FileSystemContractBaseTest.testCanonicalName(NativeS3FileSystemContractBaseTest.java:51)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at junit.framework.TestCase.runTest(TestCase.java:176)
   at junit.framework.TestCase.runBare(TestCase.java:141)
   at junit.framework.TestResult$1.protect(TestResult.java:122)
   at junit.framework.TestResult.runProtected(TestResult.java:142)
   at junit.framework.TestResult.run(TestResult.java:125)
   at junit.framework.TestCase.run(TestCase.java:129)
   at junit.framework.TestSuite.runTest(TestSuite.java:255)
   at junit.framework.TestSuite.run(TestSuite.java:250)
   at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
 Caused by: java.net.UnknownHostException: null
   at 
 org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373)
   at 
 org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:258)
   at 
 org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:304)
   at 
 org.apache.hadoop.fs.s3native.NativeS3FileSystemContractBaseTest.testCanonicalName(NativeS3FileSystemContractBaseTest.java:51)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at junit.framework.TestCase.runTest(TestCase.java:176)
   at junit.framework.TestCase.runBare(TestCase.java:141)
   at junit.framework.TestResult$1.protect(TestResult.java:122)
   at junit.framework.TestResult.runProtected(TestResult.java:142)
   at junit.framework.TestResult.run(TestResult.java:125)
   at junit.framework.TestCase.run(TestCase.java:129)
   at junit.framework.TestSuite.runTest(TestSuite.java:255)
   at junit.framework.TestSuite.run(TestSuite.java:250)
   at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
 testListStatusForRoot(org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract)
   Time elapsed: 0.084 sec  <<< ERROR!
 java.lang.NullPointerException: null
   at 
 

[jira] [Resolved] (HADOOP-11148) TestInMemoryNativeS3FileSystemContract fails

2014-11-03 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-11148.

Resolution: Not a Problem

Resolving as 'Not a Problem'

 TestInMemoryNativeS3FileSystemContract fails 
 -

 Key: HADOOP-11148
 URL: https://issues.apache.org/jira/browse/HADOOP-11148
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: Rajat Jain
Priority: Minor

 Getting these errors. Ran on CentOS 6.5
 {code}
 testCanonicalName(org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract)
   Time elapsed: 0.389 sec  <<< ERROR!
 java.lang.IllegalArgumentException: java.net.UnknownHostException: null
   at 
 org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373)
   at 
 org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:258)
   at 
 org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:304)
   at 
 org.apache.hadoop.fs.s3native.NativeS3FileSystemContractBaseTest.testCanonicalName(NativeS3FileSystemContractBaseTest.java:51)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at junit.framework.TestCase.runTest(TestCase.java:176)
   at junit.framework.TestCase.runBare(TestCase.java:141)
   at junit.framework.TestResult$1.protect(TestResult.java:122)
   at junit.framework.TestResult.runProtected(TestResult.java:142)
   at junit.framework.TestResult.run(TestResult.java:125)
   at junit.framework.TestCase.run(TestCase.java:129)
   at junit.framework.TestSuite.runTest(TestSuite.java:255)
   at junit.framework.TestSuite.run(TestSuite.java:250)
   at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
 Caused by: java.net.UnknownHostException: null
   at 
 org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373)
   at 
 org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:258)
   at 
 org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:304)
   at 
 org.apache.hadoop.fs.s3native.NativeS3FileSystemContractBaseTest.testCanonicalName(NativeS3FileSystemContractBaseTest.java:51)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at junit.framework.TestCase.runTest(TestCase.java:176)
   at junit.framework.TestCase.runBare(TestCase.java:141)
   at junit.framework.TestResult$1.protect(TestResult.java:122)
   at junit.framework.TestResult.runProtected(TestResult.java:142)
   at junit.framework.TestResult.run(TestResult.java:125)
   at junit.framework.TestCase.run(TestCase.java:129)
   at junit.framework.TestSuite.runTest(TestSuite.java:255)
   at junit.framework.TestSuite.run(TestSuite.java:250)
   at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
 testListStatusForRoot(org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract)
   Time elapsed: 0.084 sec  <<< ERROR!
 java.lang.NullPointerException: null
   at 
 

[jira] [Created] (HADOOP-10966) Hadoop Common native compilation broken in windows

2014-08-13 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-10966:
--

 Summary: Hadoop Common native compilation broken in windows
 Key: HADOOP-10966
 URL: https://issues.apache.org/jira/browse/HADOOP-10966
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Reporter: Vinayakumar B
Priority: Blocker


After HADOOP-10962, Hadoop Common native compilation is broken on Windows

{noformat}
src\org\apache\hadoop\io\nativeio\NativeIO.c(181): error C2065: 
'POSIX_FADV_NORMAL' : undeclared identifier 
src\org\apache\hadoop\io\nativeio\NativeIO.c(184): error C2065: 
'POSIX_FADV_RANDOM' : undeclared identifier 
src\org\apache\hadoop\io\nativeio\NativeIO.c(187): error C2065: 
'POSIX_FADV_SEQUENTIAL' : undeclared identifier 
src\org\apache\hadoop\io\nativeio\NativeIO.c(190): error C2065: 
'POSIX_FADV_WILLNEED' : undeclared identifier 
src\org\apache\hadoop\io\nativeio\NativeIO.c(193): error C2065: 
'POSIX_FADV_DONTNEED' : undeclared identifier 
src\org\apache\hadoop\io\nativeio\NativeIO.c(196): error C2065: 
'POSIX_FADV_NOREUSE' : undeclared identifier 
{noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10590) ServiceAuthorizationManager is not threadsafe

2014-06-17 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-10590.


   Resolution: Fixed
Fix Version/s: 2.5.0
 Hadoop Flags: Reviewed

 ServiceAuthorizationManager  is not threadsafe
 --

 Key: HADOOP-10590
 URL: https://issues.apache.org/jira/browse/HADOOP-10590
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.4.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Fix For: 2.5.0

 Attachments: HADOOP-10590.patch, performance-test-without-rpc.patch, 
 performancetest.patch


 The mutators in ServiceAuthorizationManager are synchronized, but the accessors 
 are not.
 This results in visibility issues when ServiceAuthorizationManager's state 
 is accessed from different threads.
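The hazard: a synchronized write publishes new state, but an unsynchronized read has no happens-before relationship with it, so a reader thread may observe a stale reference indefinitely. Below is a minimal sketch of the pattern and one common fix (a volatile field holding an immutable snapshot); it is illustrative only, not the actual ServiceAuthorizationManager code, and the class name is invented:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class AclStoreSketch {
    // volatile gives the lock-free reader a happens-before edge with the
    // writer, so a refreshed map becomes visible to all threads. Without
    // volatile (and without synchronizing the accessor), a reader could
    // keep seeing the old map forever.
    private volatile Map<String, String> acls = Collections.emptyMap();

    // Mutator: builds a new immutable snapshot and swaps the reference.
    public synchronized void refresh(Map<String, String> newAcls) {
        acls = Collections.unmodifiableMap(new HashMap<>(newAcls));
    }

    // Accessor: safe without a lock because the field is volatile and the
    // snapshot it points to is never mutated after publication.
    public String lookup(String protocol) {
        return acls.get(protocol);
    }

    public static void main(String[] args) {
        AclStoreSketch store = new AclStoreSketch();
        Map<String, String> m = new HashMap<>();
        m.put("ClientProtocol", "users");
        store.refresh(m);
        System.out.println("acl: " + store.lookup("ClientProtocol"));
    }
}
```

The alternative fix is to synchronize the accessors on the same lock as the mutators, which restores visibility at the cost of contention on every read.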



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10350) BUILDING.txt should mention openssl dependency required for hadoop-pipes

2014-02-17 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-10350:
--

 Summary: BUILDING.txt should mention openssl dependency required 
for hadoop-pipes
 Key: HADOOP-10350
 URL: https://issues.apache.org/jira/browse/HADOOP-10350
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B


BUILDING.txt should mention openssl dependency required for hadoop-pipes



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)