[jira] [Created] (HADOOP-19148) Update solr from 8.11.2 to 8.11.3 to address CVE-2023-50298

2024-04-15 Thread Brahma Reddy Battula (Jira)
Brahma Reddy Battula created HADOOP-19148:
-

 Summary: Update solr from 8.11.2 to 8.11.3 to address 
CVE-2023-50298
 Key: HADOOP-19148
 URL: https://issues.apache.org/jira/browse/HADOOP-19148
 Project: Hadoop Common
  Issue Type: Improvement
  Components: common
Reporter: Brahma Reddy Battula


Update solr from 8.11.2 to 8.11.3 to address CVE-2023-50298



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17917) Backport HADOOP-15993 to branch-3.2 which addresses CVE-2014-4611

2021-09-16 Thread Brahma Reddy Battula (Jira)
Brahma Reddy Battula created HADOOP-17917:
-

 Summary: Backport HADOOP-15993 to branch-3.2 which addresses CVE-2014-4611
 Key: HADOOP-17917
 URL: https://issues.apache.org/jira/browse/HADOOP-17917
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


The current version is 0.8.2.1 and it depends on net.jpountz.lz4:lz4:1.2.0, 
which is vulnerable 
([https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-4611]).






[jira] [Resolved] (HADOOP-17840) Backport HADOOP-17837 to branch-3.2

2021-08-06 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula resolved HADOOP-17840.
---
Fix Version/s: 3.2.3
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to branch-3.2.3. [~bbeaudreault], thanks for your contribution.

> Backport HADOOP-17837 to branch-3.2
> ---
>
> Key: HADOOP-17840
> URL: https://issues.apache.org/jira/browse/HADOOP-17840
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.2.3
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>







[jira] [Resolved] (HADOOP-17837) Make it easier to debug UnknownHostExceptions from NetUtils.connect

2021-08-06 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula resolved HADOOP-17837.
---
Hadoop Flags: Reviewed
  Resolution: Fixed

[~bbeaudreault] thanks for raising the PR. Committed to trunk and branch-3.3. 
As this change only touches a test assertion, I ran it locally and pushed.

> Make it easier to debug UnknownHostExceptions from NetUtils.connect
> ---
>
> Key: HADOOP-17837
> URL: https://issues.apache.org/jira/browse/HADOOP-17837
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Most UnknownHostExceptions thrown throughout hadoop include a useful message, 
> either the hostname that was not found or some other descriptor of the 
> problem. The UnknownHostException thrown from NetUtils.connect only includes 
> the [message of the underlying 
> UnresolvedAddressException|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java#L592].
>  If you take a look at the source for UnresolvedAddressException, [it only 
> has a no-args 
> constructor|https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/nio/channels/UnresolvedAddressException.html]
>  (java11, but same is true in other versions). So it never has a message, 
> meaning the UnknownHostException message is empty.
> We should include endpoint.toString() in the UnknownHostException thrown 
> by NetUtils.connect.
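The proposed fix can be sketched as follows. This is a minimal illustration under stated assumptions, not Hadoop's actual patch; the `wrap` helper and its message format are made up for this example:

```java
import java.net.InetSocketAddress;
import java.net.UnknownHostException;
import java.nio.channels.UnresolvedAddressException;

public class ConnectExample {

    // Hypothetical helper illustrating the proposal: put the endpoint into
    // the UnknownHostException message instead of relying on the (always
    // null) message of UnresolvedAddressException.
    static UnknownHostException wrap(UnresolvedAddressException cause,
                                     InetSocketAddress endpoint) {
        UnknownHostException uhe =
            new UnknownHostException("Cannot resolve endpoint: " + endpoint);
        uhe.initCause(cause);
        return uhe;
    }

    public static void main(String[] args) {
        InetSocketAddress endpoint =
            InetSocketAddress.createUnresolved("no-such-host.invalid", 8020);
        // The wrapped exception now names the endpoint that failed to resolve.
        System.out.println(
            wrap(new UnresolvedAddressException(), endpoint).getMessage());
    }
}
```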






[jira] [Created] (HADOOP-17800) CLONE - Uber-JIRA: Hadoop should support IPv6

2021-07-13 Thread Brahma Reddy Battula (Jira)
Brahma Reddy Battula created HADOOP-17800:
-

 Summary: CLONE - Uber-JIRA: Hadoop should support IPv6
 Key: HADOOP-17800
 URL: https://issues.apache.org/jira/browse/HADOOP-17800
 Project: Hadoop Common
  Issue Type: Improvement
  Components: net
Reporter: Brahma Reddy Battula
Assignee: Nate Edel


Hadoop currently treats IPv6 as unsupported.  Track related smaller issues to 
support IPv6.

(Current case here is mainly HBase on HDFS, so any suggestions about other test 
cases/workload are really appreciated.)






[jira] [Created] (HADOOP-17236) Bump up snakeyaml to 1.26 to mitigate CVE-2017-18640

2020-08-30 Thread Brahma Reddy Battula (Jira)
Brahma Reddy Battula created HADOOP-17236:
-

 Summary: Bump up snakeyaml to 1.26 to mitigate CVE-2017-18640
 Key: HADOOP-17236
 URL: https://issues.apache.org/jira/browse/HADOOP-17236
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula


Bump up snakeyaml to 1.26 to mitigate CVE-2017-18640






[jira] [Created] (HADOOP-17225) Update jackson-mapper-asl-1.9.13 to atlassian version to mitigate: CVE-2019-10172

2020-08-24 Thread Brahma Reddy Battula (Jira)
Brahma Reddy Battula created HADOOP-17225:
-

 Summary: Update jackson-mapper-asl-1.9.13 to atlassian version to 
mitigate: CVE-2019-10172
 Key: HADOOP-17225
 URL: https://issues.apache.org/jira/browse/HADOOP-17225
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


Currently Jersey depends on Jackson, and upgrading Jersey from 1.x to 2.x 
looks complicated (see HADOOP-15984 and HADOOP-16485).

Update jackson-mapper-asl-1.9.13 to the atlassian version to mitigate 
CVE-2019-10172.

 






[jira] [Created] (HADOOP-17221) Upgrade log4j-1.2.17 to atlassian (to address CVE-2019-17571)

2020-08-24 Thread Brahma Reddy Battula (Jira)
Brahma Reddy Battula created HADOOP-17221:
-

 Summary: Upgrade log4j-1.2.17 to atlassian (to address CVE-2019-17571)
 Key: HADOOP-17221
 URL: https://issues.apache.org/jira/browse/HADOOP-17221
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula


Currently there is no active release line under 1.x of log4j, and log4j2 is 
incompatible to upgrade to (see HADOOP-16206 for more details).

However, the following CVE is reported against log4j 1.2.17, so we should 
consider updating to the 
Atlassian ([https://mvnrepository.com/artifact/log4j/log4j/1.2.17-atlassian-0.4])
 or Red Hat versions:

[https://nvd.nist.gov/vuln/detail/CVE-2019-17571]






[jira] [Created] (HADOOP-17220) Upgrade slf4j to 1.7.30 (to address CVE-2018-8088)

2020-08-24 Thread Brahma Reddy Battula (Jira)
Brahma Reddy Battula created HADOOP-17220:
-

 Summary: Upgrade slf4j to 1.7.30 (to address CVE-2018-8088)
 Key: HADOOP-17220
 URL: https://issues.apache.org/jira/browse/HADOOP-17220
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


To address the following CVE, upgrade slf4j to the latest stable release, 1.7.30.

[https://nvd.nist.gov/vuln/detail/CVE-2018-8088]
 

Note: We don't use EventData but should consider upgrading.






[jira] [Resolved] (HADOOP-16310) Log of a slow RPC request should contain the parameter of the request

2019-10-01 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula resolved HADOOP-16310.
---
Resolution: Duplicate

> Log of a slow RPC request should contain the parameter of the request
> -
>
> Key: HADOOP-16310
> URL: https://issues.apache.org/jira/browse/HADOOP-16310
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Affects Versions: 3.1.1, 2.7.7, 3.1.2
>Reporter: lindongdong
>Priority: Minor
>
>  Now, the log of a slow RPC request contains only the 
> *methodName*, *processingTime* and *client*. The code is here:
> {code:java}
> if ((rpcMetrics.getProcessingSampleCount() > minSampleSize) &&
> (processingTime > threeSigma)) {
>   if(LOG.isWarnEnabled()) {
> String client = CurCall.get().toString();
> LOG.warn(
> "Slow RPC : " + methodName + " took " + processingTime +
> " milliseconds to process from client " + client);
>   }
>   rpcMetrics.incrSlowRpc();
> }{code}
>  
> This is not enough to analyze why the RPC request is slow. 
> The parameters of the request are very important and need to be logged as well.
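A minimal sketch of the requested change, assuming a printable `param` string is available alongside the method name; the message format below is illustrative, not the actual patch:

```java
public class SlowRpcLog {

    // Build the slow-RPC warning with the request parameter appended, as the
    // issue proposes. `param` is an assumed printable form of the call's
    // parameters; the surrounding metrics/threshold logic is unchanged.
    static String slowRpcMessage(String methodName, long processingTime,
                                 String client, String param) {
        return "Slow RPC : " + methodName + " took " + processingTime
            + " milliseconds to process from client " + client
            + ", parameter: " + param;
    }

    public static void main(String[] args) {
        System.out.println(slowRpcMessage(
            "addBlock", 1200, "10.0.0.1:50010", "src=/user/foo/f1"));
    }
}
```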






[jira] [Reopened] (HADOOP-14903) Add json-smart explicitly to pom.xml

2018-02-14 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reopened HADOOP-14903:
---

> Add json-smart explicitly to pom.xml
> 
>
> Key: HADOOP-14903
> URL: https://issues.apache.org/jira/browse/HADOOP-14903
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Major
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14903-003-branch-2.patch, HADOOP-14903.001.patch, 
> HADOOP-14903.002.patch, HADOOP-14903.003.patch
>
>
> With the library update in HADOOP-14799, maven knows how to pull in 
> net.minidev:json-smart for tests, but not for packaging.  This needs to be 
> added to the main project pom in order to avoid this warning:
> {noformat}
> [WARNING] The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT is missing, no 
> dependency information available
> {noformat}
> This is pulled in from a few places:
> {noformat}
> [INFO] |  +- org.apache.hadoop:hadoop-auth:jar:3.1.0-SNAPSHOT:compile
> [INFO] |  |  +- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |  |  +- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |  |  \- net.minidev:json-smart:jar:2.3:compile
> [INFO] |  |  \- org.apache.kerby:token-provider:jar:1.0.1:compile
> [INFO] |  | \- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |+- 
> com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |\- net.minidev:json-smart:jar:2.3:compile
> {noformat}






[jira] [Created] (HADOOP-15196) Zlib decompression fails when file having trailing garbage

2018-01-30 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-15196:
-

 Summary: Zlib decompression fails when file having trailing garbage
 Key: HADOOP-15196
 URL: https://issues.apache.org/jira/browse/HADOOP-15196
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


When a file has trailing garbage, gzip ignores it:

{noformat}
gzip -d 2018011309-js.rishenglipin.com.gz

gzip: 2018011309-js.rishenglipin.com.gz: decompression OK, trailing garbage ignored
{noformat}

When we decompress the same file with Hadoop's zlib decompressor, we get the following:

{noformat}
2018-01-13 14:23:43,151 | WARN  | task-result-getter-3 | Lost task 0.0 in stage 345.0 (TID 5686, node-core-gyVYT, executor 3): java.io.IOException: unknown compression method
        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:225)
        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:91)
        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
{noformat}
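For comparison, java.util.zip's GZIPInputStream behaves like the gzip CLI and silently ignores a malformed trailer after a valid member. A small self-contained demonstration (the sample text and garbage bytes are made up):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class TrailingGarbage {

    // Compress `text`, then append bytes that are not a valid gzip member.
    static byte[] gzipWithTrailingGarbage(String text) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
                gz.write(text.getBytes(StandardCharsets.UTF_8));
            }
            bos.write("trailing-garbage".getBytes(StandardCharsets.UTF_8));
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // GZIPInputStream ignores the trailing garbage rather than failing:
    // after the member trailer, a malformed "next header" ends the stream.
    static String gunzip(byte[] data) {
        try (GZIPInputStream in =
                 new GZIPInputStream(new ByteArrayInputStream(data))) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) > 0) {
                out.write(buf, 0, n);
            }
            return out.toString(StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(gunzip(gzipWithTrailingGarbage("hello")));  // hello
    }
}
```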






[jira] [Created] (HADOOP-15169) "hadoop.ssl.enabled.protocols" should be considered in httpserver2

2018-01-12 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-15169:
-

 Summary: "hadoop.ssl.enabled.protocols" should be considered in 
httpserver2
 Key: HADOOP-15169
 URL: https://issues.apache.org/jira/browse/HADOOP-15169
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


As of now, *hadoop.ssl.enabled.protocols* does not take effect for all the 
HTTP servers (only the DataNode HTTP server uses this config).
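A minimal sketch of the intended behavior: read the config key and apply the resulting list when configuring each server's SSL connector. Only the config key name comes from the issue; the helper, map-based config, and the default list are illustrative assumptions:

```java
import java.util.Map;

public class SslProtocolsSketch {

    // Split the configured protocol list. The fallback default here is a
    // placeholder, not Hadoop's actual default value.
    static String[] enabledProtocols(Map<String, String> conf) {
        String v = conf.getOrDefault("hadoop.ssl.enabled.protocols",
                                     "TLSv1.2,TLSv1.3");
        return v.split(",");
    }

    public static void main(String[] args) {
        // e.g. an admin restricting every HTTP server to TLSv1.2
        String[] protocols = enabledProtocols(
            Map.of("hadoop.ssl.enabled.protocols", "TLSv1.2"));
        System.out.println(String.join(",", protocols));
    }
}
```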






[jira] [Created] (HADOOP-15167) [viewfs] ViewFileSystem.InternalDirOfViewFs#getFileStatus shouldn't depend on UGI#getPrimaryGroupName

2018-01-11 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-15167:
-

 Summary: [viewfs] ViewFileSystem.InternalDirOfViewFs#getFileStatus 
shouldn't depend on UGI#getPrimaryGroupName
 Key: HADOOP-15167
 URL: https://issues.apache.org/jira/browse/HADOOP-15167
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula



Have a secure federated cluster with at least two nameservices and configure 
the viewfs-related configs.

When we run the {{ls}} cmd in the HDFS client, it calls the method 
org.apache.hadoop.fs.viewfs.ViewFileSystem.InternalDirOfViewFs#getFileStatus, 
which tries to get the group of the Kerberos user. If the node does not have 
this user, it fails: UserGroupInformation#getPrimaryGroupName throws the 
following and exits.

{code}
if (groups.isEmpty()) {
  throw new IOException("There is no primary group for UGI " + this);
}
{code}
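One possible direction, sketched minimally: fall back instead of throwing when no group can be resolved. The fallback choice (the user name) and the helper name are assumptions for illustration, not the committed fix:

```java
import java.util.Collections;
import java.util.List;

public class PrimaryGroupSketch {

    // Return the primary group, falling back to the user name when the group
    // list is empty, so internal viewfs directories don't require a
    // resolvable group on every client node.
    static String primaryGroupOrFallback(String user, List<String> groups) {
        if (groups == null || groups.isEmpty()) {
            return user;
        }
        return groups.get(0);
    }

    public static void main(String[] args) {
        System.out.println(
            primaryGroupOrFallback("alice", Collections.emptyList()));  // alice
    }
}
```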






[jira] [Created] (HADOOP-15153) [branch-2.8] Increase heap memory to avoid the OOM in pre-commit

2018-01-02 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-15153:
-

 Summary: [branch-2.8] Increase heap memory to avoid the OOM in 
pre-commit
 Key: HADOOP-15153
 URL: https://issues.apache.org/jira/browse/HADOOP-15153
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Brahma Reddy Battula


Reference:
https://builds.apache.org/job/PreCommit-HDFS-Build/22528/consoleFull
https://builds.apache.org/job/PreCommit-HDFS-Build/22528/artifact/out/branch-mvninstall-root.txt

{noformat}
[ERROR] unable to create new native thread -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/OutOfMemoryError
{noformat}






[jira] [Created] (HADOOP-15150) UGI params should be overridden through env vars (-D args)

2017-12-28 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-15150:
-

 Summary: UGI params should be overridden through env vars (-D args)
 Key: HADOOP-15150
 URL: https://issues.apache.org/jira/browse/HADOOP-15150
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


org.apache.hadoop.security.UserGroupInformation#ensureInitialized will always 
read the configuration from the configuration files. *So -D args will not take 
effect.*
{code}
  private static void ensureInitialized() {
if (conf == null) {
  synchronized(UserGroupInformation.class) {
if (conf == null) { // someone might have beat us
  initialize(new Configuration(), false);
}
  }
}
  }
{code}
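A minimal model of the problem and one possible shape of a fix: let an explicitly supplied configuration (e.g. one carrying -D overrides) win over the lazily loaded file defaults. The method, map, and key below are illustrative stand-ins, not Hadoop's API:

```java
import java.util.HashMap;
import java.util.Map;

public class UgiConfigSketch {

    private static Map<String, String> conf;   // lazily initialized, like UGI

    // Unlike the original ensureInitialized(), an explicit configuration
    // (carrying -D overrides) replaces the lazily built defaults.
    static synchronized Map<String, String> ensureInitialized(
            Map<String, String> explicit) {
        if (explicit != null) {
            conf = explicit;                   // -D overrides take effect
        } else if (conf == null) {
            conf = new HashMap<>();            // stands in for file defaults
            conf.put("hadoop.security.authentication", "simple");
        }
        return conf;
    }

    public static void main(String[] args) {
        Map<String, String> overrides = new HashMap<>();
        overrides.put("hadoop.security.authentication", "kerberos");
        System.out.println(ensureInitialized(overrides)
            .get("hadoop.security.authentication"));  // kerberos
    }
}
```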






[jira] [Resolved] (HADOOP-15020) NNBench not support run more than one map task on the same host

2017-11-08 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula resolved HADOOP-15020.
---
Resolution: Duplicate

Closing as duplicate. Please feel free to re-open if it's not a duplicate.

> NNBench not support run more than one map task on the same host
> ---
>
> Key: HADOOP-15020
> URL: https://issues.apache.org/jira/browse/HADOOP-15020
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: benchmarks
>Affects Versions: 2.7.2
> Environment: Hadoop 2.7.2
>Reporter: zhoutai.zt
>Priority: Minor
>
> When benchmarking NameNode performance with NNBench, I started with a 
> pseudo-distributed deploy. Everything goes well with "-maps 1", but with 
> -maps N (N>1) and -operation create_write, many exceptions occur during the 
> benchmark.
> The hostname is part of the file path, which differentiates hosts. With two 
> or more map tasks running on the same host, they may operate on the same 
> file, leading to exceptions.
> 17/11/07 15:22:32 INFO hdfs.NNBench:  RAW DATA: AL Total #1: 
> 84
> 17/11/07 15:22:32 INFO hdfs.NNBench:  RAW DATA: AL Total #2: 
> 43
> 17/11/07 15:22:32 INFO hdfs.NNBench:   RAW DATA: TPS Total (ms): 
> 2570
> 17/11/07 15:22:32 INFO hdfs.NNBench:RAW DATA: Longest Map Time (ms): 
> 814.0
> 17/11/07 15:22:32 INFO hdfs.NNBench:RAW DATA: Late maps: 0
> 17/11/07 15:22:32 INFO hdfs.NNBench:  {color:red}RAW DATA: # of 
> exceptions: 3000{color}
> 2017-11-07 14:54:08,082 INFO org.apache.hadoop.hdfs.NNBench: Exception 
> recorded in op: Create/Write/Close
> 2017-11-07 14:54:08,082 INFO org.apache.hadoop.hdfs.NNBench: Exception 
> recorded in op: Create/Write/Close
> 2017-11-07 14:54:08,083 INFO org.apache.hadoop.hdfs.NNBench: Exception 
> recorded in op: Create/Write/Close






[jira] [Created] (HADOOP-14877) Trunk compilation fails in windows

2017-09-17 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-14877:
-

 Summary: Trunk compilation fails in windows
 Key: HADOOP-14877
 URL: https://issues.apache.org/jira/browse/HADOOP-14877
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.1.0
 Environment: windows
Reporter: Brahma Reddy Battula


{noformat}
[INFO] Dependencies classpath:
D:\trunk\hadoop\hadoop-client-modules\hadoop-client-runtime\target\hadoop-client-runtime-3.1.0-SNAPSHOT.jar;D:\trunk\had
oop\hadoop-client-modules\hadoop-client-api\target\hadoop-client-api-3.1.0-SNAPSHOT.jar
[INFO]
[INFO] --- exec-maven-plugin:1.3.1:exec (check-jar-contents) @ 
hadoop-client-check-invariants ---
java.io.FileNotFoundException: D (The system cannot find the file specified)
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:219)
at java.util.zip.ZipFile.<init>(ZipFile.java:149)
at java.util.zip.ZipFile.<init>(ZipFile.java:120)
at sun.tools.jar.Main.list(Main.java:1115)
at sun.tools.jar.Main.run(Main.java:293)
at sun.tools.jar.Main.main(Main.java:1288)
java.io.FileNotFoundException: 
\trunk\hadoop\hadoop-client-modules\hadoop-client-runtime\target\hadoop-client-runtime-3.
1.0-SNAPSHOT.jar;D (The system cannot find the file specified)
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:219)
at java.util.zip.ZipFile.<init>(ZipFile.java:149)
at java.util.zip.ZipFile.<init>(ZipFile.java:120)
at sun.tools.jar.Main.list(Main.java:1115)
at sun.tools.jar.Main.run(Main.java:293)
at sun.tools.jar.Main.main(Main.java:1288)
[INFO] Artifact looks correct: 'D'
[INFO] Artifact looks correct: 'hadoop-client-runtime-3.1.0-SNAPSHOT.jar;D'
[ERROR] Found artifact with unexpected contents: 
'\trunk\hadoop\hadoop-client-modules\hadoop-client-api\target\hadoop-cl
ient-api-3.1.0-SNAPSHOT.jar'
Please check the following and either correct the build or update
the allowed list with reasoning.
{noformat}






[jira] [Created] (HADOOP-14543) Should use getAversion() while setting the zkacl

2017-06-19 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-14543:
-

 Summary: Should use getAversion() while setting the zkacl
 Key: HADOOP-14543
 URL: https://issues.apache.org/jira/browse/HADOOP-14543
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


While setting the zkacl we used {color:red}{{getVersion()}}{color}, which is 
the dataVersion; ideally we should use {{{color:#14892c}getAversion{color}()}}. 
If there is any ACL change (i.e. realm change, etc.), we set the ACL with the 
dataVersion, which causes a {color:#d04437}BADVersion{color} error and 
{color:#d04437}*the process will not start*{color}. See 
[here|https://issues.apache.org/jira/browse/HDFS-11403?focusedCommentId=16051804&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16051804]

{{zkClient.setACL(path, zkAcl, stat.getVersion());}}
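The distinction can be shown with a minimal model; `Stat` below is a local stand-in for org.apache.zookeeper.data.Stat, which tracks separate version counters for data and ACLs:

```java
public class ZkAclVersion {

    // Stand-in for org.apache.zookeeper.data.Stat: a znode keeps independent
    // counters for data changes (version) and ACL changes (aversion).
    static final class Stat {
        private final int version;    // data version
        private final int aversion;   // ACL version
        Stat(int version, int aversion) {
            this.version = version;
            this.aversion = aversion;
        }
        int getVersion()  { return version; }
        int getAversion() { return aversion; }
    }

    // The version to pass to ZooKeeper#setACL is the ACL version; passing
    // getVersion() fails with BADVERSION once the two counters diverge.
    static int versionForSetAcl(Stat stat) {
        return stat.getAversion();
    }

    public static void main(String[] args) {
        Stat stat = new Stat(5, 2);   // data changed 5 times, ACL twice
        System.out.println(versionForSetAcl(stat));  // 2
    }
}
```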








[jira] [Created] (HADOOP-14455) ViewFileSystem#rename should be supported within the same nameservice with different mountpoints

2017-05-24 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-14455:
-

 Summary: ViewFileSystem#rename should be supported within the 
same nameservice with different mountpoints
 Key: HADOOP-14455
 URL: https://issues.apache.org/jira/browse/HADOOP-14455
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


*Scenario:* 

|| Mount Point || NameService || Value ||
|/tmp|hacluster|/tmp|
|/user|hacluster|/user|

Move a file from {{/tmp}} to {{/user}}: it fails with the following error.

{noformat}
Caused by: java.io.IOException: Renames across Mount points not supported
at 
org.apache.hadoop.fs.viewfs.ViewFileSystem.rename(ViewFileSystem.java:500)
at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2692)
... 22 more
{noformat}
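The proposed relaxation can be sketched as follows; the mount table and resolution logic are simplified illustrations mirroring the scenario above, not ViewFileSystem's implementation:

```java
import java.util.Map;

public class ViewFsRenameSketch {

    // Mount point -> nameservice, mirroring the scenario table above.
    static final Map<String, String> MOUNTS =
        Map.of("/tmp", "hacluster", "/user", "hacluster");

    static String nameservice(String path) {
        for (Map.Entry<String, String> e : MOUNTS.entrySet()) {
            if (path.equals(e.getKey()) || path.startsWith(e.getKey() + "/")) {
                return e.getValue();
            }
        }
        return null;   // not under any mount point
    }

    // Allow rename when both mount points resolve to the same nameservice,
    // instead of rejecting every cross-mount rename outright.
    static boolean renameAllowed(String src, String dst) {
        String s = nameservice(src);
        return s != null && s.equals(nameservice(dst));
    }

    public static void main(String[] args) {
        System.out.println(renameAllowed("/tmp/part-0000", "/user/part-0000"));
    }
}
```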






[jira] [Created] (HADOOP-14356) Update CHANGES.txt to reflect all the changes in branch-2.7

2017-04-26 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-14356:
-

 Summary: Update CHANGES.txt to reflect all the changes in 
branch-2.7
 Key: HADOOP-14356
 URL: https://issues.apache.org/jira/browse/HADOOP-14356
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula


The following JIRAs are not updated in {{CHANGES.txt}}:

HADOOP-14066,HDFS-11608,HADOOP-14293,HDFS-11628,YARN-6274,YARN-6152,HADOOP-13119,HDFS-10733,HADOOP-13958,HDFS-11280,YARN-6024






[jira] [Created] (HADOOP-14256) [S3A DOC] Correct the format for "Seoul" example

2017-03-29 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-14256:
-

 Summary: [S3A DOC] Correct the format for "Seoul" example
 Key: HADOOP-14256
 URL: https://issues.apache.org/jira/browse/HADOOP-14256
 Project: Hadoop Common
  Issue Type: Bug
  Components: s3, documentation
Reporter: Brahma Reddy Battula
Priority: Minor


Add an empty line between "Seoul" and "```xml":

{noformat}
Seoul
```xml
<property>
  <name>fs.s3a.endpoint</name>
  <value>s3.ap-northeast-2.amazonaws.com</value>
</property>
```
{noformat}






[jira] [Created] (HADOOP-14117) TestUpdatePipelineWithSnapshots#testUpdatePipelineAfterDelete fails with bind exception

2017-02-23 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-14117:
-

 Summary: 
TestUpdatePipelineWithSnapshots#testUpdatePipelineAfterDelete fails with bind 
exception
 Key: HADOOP-14117
 URL: https://issues.apache.org/jira/browse/HADOOP-14117
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Brahma Reddy Battula


{noformat}

at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:317)
at 
org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:1100)
at 
org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1131)
at 
org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1193)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1049)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:169)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:885)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:721)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:947)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:926)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1635)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2080)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2054)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots.testUpdatePipelineAfterDelete(TestUpdatePipelineWithSnapshots.java:100)
{noformat}

 *Reference* 
https://builds.apache.org/job/PreCommit-HDFS-Build/18434/testReport/






[jira] [Created] (HADOOP-13943) TestCommonConfigurationFields#testCompareXmlAgainstConfigurationClass fails after HADOOP-13863

2016-12-25 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-13943:
-

 Summary: 
TestCommonConfigurationFields#testCompareXmlAgainstConfigurationClass fails 
after HADOOP-13863
 Key: HADOOP-13943
 URL: https://issues.apache.org/jira/browse/HADOOP-13943
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


{noformat}
File core-default.xml (153 properties)

core-default.xml has 3 properties missing in  class 
org.apache.hadoop.fs.CommonConfigurationKeys  class 
org.apache.hadoop.fs.CommonConfigurationKeysPublic  class 
org.apache.hadoop.fs.local.LocalConfigKeys  class 
org.apache.hadoop.fs.ftp.FtpConfigKeys  class 
org.apache.hadoop.ha.SshFenceByTcpPort  class 
org.apache.hadoop.security.LdapGroupsMapping  class 
org.apache.hadoop.ha.ZKFailoverController  class 
org.apache.hadoop.security.ssl.SSLFactory  class 
org.apache.hadoop.security.CompositeGroupsMapping  class 
org.apache.hadoop.io.erasurecode.CodecUtil

  fs.azure.sas.expiry.period
  fs.azure.local.sas.key.mode
  fs.azure.secure.mode
{noformat}

 *Reference* 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/266/testReport/junit/org.apache.hadoop.conf/TestCommonConfigurationFields/testCompareXmlAgainstConfigurationClass/






[jira] [Created] (HADOOP-13890) TestWebDelegationToken and TestKMS fails in trunk

2016-12-10 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-13890:
-

 Summary: TestWebDelegationToken and TestKMS fails in trunk
 Key: HADOOP-13890
 URL: https://issues.apache.org/jira/browse/HADOOP-13890
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Brahma Reddy Battula


{noformat}
org.apache.hadoop.security.authentication.client.AuthenticationException: 
org.apache.hadoop.security.authentication.client.AuthenticationException: 
Invalid SPNEGO sequence, status code: 403
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.access$300(KerberosAuthenticator.java:53)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:317)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
at 
org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
at 
org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:782)
at 
org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:779)
at 
org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$4.run(TestWebDelegationToken.java:715)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:712)
at 
org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
at 
org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
 {noformat}

 *Jenkins URL* 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13833) TestSymlinkHdfsFileSystem#testCreateLinkUsingPartQualPath2 fails after HADOOP-13605

2016-11-24 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-13833:
-

 Summary: 
TestSymlinkHdfsFileSystem#testCreateLinkUsingPartQualPath2 fails after 
HADOOP-13605
 Key: HADOOP-13833
 URL: https://issues.apache.org/jira/browse/HADOOP-13833
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Brahma Reddy Battula



{noformat}
org.junit.ComparisonFailure: expected:<...ileSystem for scheme[: null]> but 
was:<...ileSystem for scheme[ "null"]>
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.fs.SymlinkBaseTest.testCreateLinkUsingPartQualPath2(SymlinkBaseTest.java:574)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{noformat}

 *REF:*  
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/235/testReport/junit/org.apache.hadoop.fs/TestSymlinkHdfsFileSystem/testCreateLinkUsingPartQualPath2/






[jira] [Created] (HADOOP-13822) Use GlobalStorageStatistics.INSTANCE.reset() at FileSystem#clearStatistics()

2016-11-16 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-13822:
-

 Summary: Use GlobalStorageStatistics.INSTANCE.reset() at 
FileSystem#clearStatistics()
 Key: HADOOP-13822
 URL: https://issues.apache.org/jira/browse/HADOOP-13822
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


As per my [comment 
here|https://issues.apache.org/jira/browse/HADOOP-13283?focusedCommentId=15672426&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15672426], 
GlobalStorageStatistics.INSTANCE.reset() can be used at 
FileSystem#clearStatistics().






[jira] [Created] (HADOOP-13815) TestKMS#testDelegationTokensOpsSimple and TestKMS#testDelegationTokensOpsKerberized Fails in Trunk

2016-11-13 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-13815:
-

 Summary: TestKMS#testDelegationTokensOpsSimple and 
TestKMS#testDelegationTokensOpsKerberized Fails in Trunk
 Key: HADOOP-13815
 URL: https://issues.apache.org/jira/browse/HADOOP-13815
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Brahma Reddy Battula


{noformat}
Expected to find 'tries to renew a token with renewer' but got unexpected 
exception:java.io.IOException: HTTP status [403], message 
[org.apache.hadoop.security.AccessControlException: client tries to renew a 
token (kms-dt owner=client, renewer=client1, realUser=, 
issueDate=1479025952525, maxDate=1479630752525, sequenceNumber=1, 
masterKeyId=2) with non-matching renewer client1]
 at 
org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:169)
 at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:300)
 at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.renewDelegationToken(DelegationTokenAuthenticator.java:216)
 at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.renewDelegationToken(DelegationTokenAuthenticatedURL.java:415)
 at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$2.run(KMSClientProvider.java:906)
 at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$2.run(KMSClientProvider.java:903)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1857)
 at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.renewDelegationToken(KMSClientProvider.java:902)
 at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$KMSTokenRenewer.renew(KMSClientProvider.java:183)
 at org.apache.hadoop.security.token.Token.renew(Token.java:490)
 at org.apache.hadoop.crypto.key.kms.server.TestKMS$14$1.run(TestKMS.java:1820)
 at org.apache.hadoop.crypto.key.kms.server.TestKMS$14$1.run(TestKMS.java:1793)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1857)
 at org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:292)
 at org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:80)
 at org.apache.hadoop.crypto.key.kms.server.TestKMS$14.call(TestKMS.java:1793)
 at org.apache.hadoop.crypto.key.kms.server.TestKMS$14.call(TestKMS.java:1785)
 at org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:140)
 at org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:122)
 at 
org.apache.hadoop.crypto.key.kms.server.TestKMS.testDelegationTokensOps(TestKMS.java:1785)
 at 
org.apache.hadoop.crypto.key.kms.server.TestKMS.testDelegationTokensOpsKerberized(TestKMS.java:1768)
{noformat}

 *Reference:* 

https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/224/testReport/junit/






[jira] [Created] (HADOOP-13742) Expose "NumOpenConnectionsPerUser" as a metric

2016-10-20 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-13742:
-

 Summary: Expose "NumOpenConnectionsPerUser" as a metric
 Key: HADOOP-13742
 URL: https://issues.apache.org/jira/browse/HADOOP-13742
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


To track per-user connections in a busy cluster where there are many connections 
to the server.
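A per-user open-connection counter of the kind being proposed can be sketched as follows (a standalone illustration only, not the Hadoop metrics implementation; the class and method names here are hypothetical):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class PerUserConnections {
    // One counter per user; LongAdder keeps increments cheap under contention.
    private final Map<String, LongAdder> open = new ConcurrentHashMap<>();

    void connectionOpened(String user) {
        open.computeIfAbsent(user, u -> new LongAdder()).increment();
    }

    void connectionClosed(String user) {
        LongAdder c = open.get(user);
        if (c != null) {
            c.decrement();
        }
    }

    long numOpenConnections(String user) {
        LongAdder c = open.get(user);
        return c == null ? 0 : c.sum();
    }

    public static void main(String[] args) {
        PerUserConnections m = new PerUserConnections();
        m.connectionOpened("alice");
        m.connectionOpened("alice");
        m.connectionClosed("alice");
        System.out.println(m.numOpenConnections("alice")); // prints 1
    }
}
```

A real metric would additionally expose the map through the metrics system rather than a getter.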






[jira] [Reopened] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-10-15 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reopened HADOOP-13707:
---

Reopening issue. Reverted for branch-2 and branch-2.8 as it broke 
compilation. By the way, thanks [~yuanbo] for updating the patches for branch-2 and 
branch-2.8. Let's have Jenkins run against this.

> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>  Labels: security
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13707-branch-2-addendum.patch, 
> HADOOP-13707-branch-2.8.patch, HADOOP-13707-branch-2.patch, 
> HADOOP-13707.001.patch, HADOOP-13707.002.patch, HADOOP-13707.003.patch, 
> HADOOP-13707.004.patch
>
>
> In {{HttpServer2#hasAdministratorAccess}}, it uses 
> `hadoop.security.authorization` to detect whether HTTP is authenticated.
> It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
> Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
> such as "/logs", and it will return error message as below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should make sure {{HttpServletRequest#getAuthType}} is not null before we 
> invoke {{HttpServer2#hasAdministratorAccess}}.
> {{getAuthType}} means to get the authorization scheme of this request
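The null check described above can be sketched as follows (a hypothetical standalone illustration; {{requiresAdminCheck}} is an invented name, not the committed patch):

```java
public class AdminAccessGuard {
    // Only enforce the admin ACL when an HTTP authentication scheme actually
    // ran; a null auth type means SPNEGO was not configured, so the request
    // comes in as the pseudo user (e.g. dr.who) and should not be rejected.
    static boolean requiresAdminCheck(String authType) {
        return authType != null;
    }

    public static void main(String[] args) {
        System.out.println(requiresAdminCheck(null));     // SPNEGO off: skip ACL check
        System.out.println(requiresAdminCheck("SPNEGO")); // authenticated: enforce ACLs
    }
}
```

In the real servlet code the scheme would come from {{HttpServletRequest#getAuthType}} before invoking {{HttpServer2#hasAdministratorAccess}}.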






[jira] [Created] (HADOOP-13670) Update CHANGES.txt to reflect all the changes in branch-2.7

2016-09-30 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-13670:
-

 Summary: Update CHANGES.txt to reflect all the changes in 
branch-2.7
 Key: HADOOP-13670
 URL: https://issues.apache.org/jira/browse/HADOOP-13670
 Project: Hadoop Common
  Issue Type: Task
Reporter: Brahma Reddy Battula


When committing to branch-2.7, we need to edit CHANGES.txt. However, there are 
some recent commits to branch-2.7 without editing CHANGES.txt. We need to 
update the change log.






[jira] [Created] (HADOOP-13234) Get random port by new ServerSocket(0).getLocalPort() in ServerSocketUtil#getPort

2016-06-02 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-13234:
-

 Summary: Get random port by new ServerSocket(0).getLocalPort() in 
ServerSocketUtil#getPort
 Key: HADOOP-13234
 URL: https://issues.apache.org/jira/browse/HADOOP-13234
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


As per [~iwasakims]'s comment from 
[here|https://issues.apache.org/jira/browse/HDFS-10367?focusedCommentId=15275604&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15275604],

we can get an available random port with {{new ServerSocket(0).getLocalPort()}}, and 
it's more portable.
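The suggestion can be sketched as follows (a minimal standalone illustration; {{getFreePort}} is a hypothetical helper, not the actual ServerSocketUtil change):

```java
import java.io.IOException;
import java.net.ServerSocket;

public class RandomPortExample {
    // Binding to port 0 asks the OS for a free ephemeral port. Note the
    // socket is closed before the caller binds again, so another process
    // could still grab the port in between -- the usual caveat with this
    // technique.
    static int getFreePort() throws IOException {
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("Allocated port: " + getFreePort());
    }
}
```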






[jira] [Resolved] (HADOOP-13049) Fix the TestFailures After HADOOP-12563

2016-04-23 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula resolved HADOOP-13049.
---
Resolution: Duplicate

HADOOP-12563 was reopened to address these failures, hence closing this 
issue.

> Fix the TestFailures After HADOOP-12563
> ---
>
> Key: HADOOP-13049
> URL: https://issues.apache.org/jira/browse/HADOOP-13049
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>
> The following tests fail after this change:
> TestRMContainerAllocator.testRMContainerAllocatorResendsRequestsOnRMRestart:2535
>  » IllegalState
> TestContainerManagerRecovery.testApplicationRecovery:189->startContainer:511 
> » IllegalState
> TestContainerManagerRecovery.testContainerCleanupOnShutdown:412->startContainer:511
>  » IllegalState
> TestContainerManagerRecovery.testContainerResizeRecovery:351->startContainer:511
>  » IllegalState
> See https://builds.apache.org/job/Hadoop-Yarn-trunk/2051/





[jira] [Created] (HADOOP-13049) Fix the TestFailures After HADOOP-12653

2016-04-21 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-13049:
-

 Summary: Fix the TestFailures After HADOOP-12653
 Key: HADOOP-13049
 URL: https://issues.apache.org/jira/browse/HADOOP-13049
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


The following tests fail after this change:

TestRMContainerAllocator.testRMContainerAllocatorResendsRequestsOnRMRestart:2535
 » IllegalState
TestContainerManagerRecovery.testApplicationRecovery:189->startContainer:511 » 
IllegalState
TestContainerManagerRecovery.testContainerCleanupOnShutdown:412->startContainer:511
 » IllegalState
TestContainerManagerRecovery.testContainerResizeRecovery:351->startContainer:511
 » IllegalState
See https://builds.apache.org/job/Hadoop-Yarn-trunk/2051/





[jira] [Created] (HADOOP-12992) Fix TestRefreshCallQueue failure.

2016-04-02 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-12992:
-

 Summary: Fix TestRefreshCallQueue failure.
 Key: HADOOP-12992
 URL: https://issues.apache.org/jira/browse/HADOOP-12992
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


 *Jenkins link* 
https://builds.apache.org/job/PreCommit-HDFS-Build/15041/testReport/
 *Trace* 
{noformat}
java.lang.RuntimeException: 
org.apache.hadoop.TestRefreshCallQueue$MockCallQueue could not be constructed.
at 
org.apache.hadoop.ipc.CallQueueManager.createCallQueueInstance(CallQueueManager.java:164)
at 
org.apache.hadoop.ipc.CallQueueManager.(CallQueueManager.java:70)
at org.apache.hadoop.ipc.Server.(Server.java:2579)
at org.apache.hadoop.ipc.RPC$Server.(RPC.java:958)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:421)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:759)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:701)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:900)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:879)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1596)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:482)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
at 
org.apache.hadoop.TestRefreshCallQueue.setUp(TestRefreshCallQueue.java:71)
{noformat}





[jira] [Created] (HADOOP-12967) Remove FileUtil#copyMerge

2016-03-25 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-12967:
-

 Summary: Remove FileUtil#copyMerge
 Key: HADOOP-12967
 URL: https://issues.apache.org/jira/browse/HADOOP-12967
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


As per the discussion in HADOOP-11661, FileUtil#copyMerge needs to be removed.

CC [~wheat9]





[jira] [Created] (HADOOP-12776) Remove getaclstatus call for non-acl commands in getfacl.

2016-02-06 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-12776:
-

 Summary: Remove getaclstatus call for non-acl commands in getfacl.
 Key: HADOOP-12776
 URL: https://issues.apache.org/jira/browse/HADOOP-12776
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


Remove getaclstatus call for non-acl commands in getfacl.





[jira] [Resolved] (HADOOP-12648) Not able to compile hadoop source code on windows

2015-12-16 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula resolved HADOOP-12648.
---
Resolution: Not A Problem

> Not able to compile hadoop source code on windows
> -
>
> Key: HADOOP-12648
> URL: https://issues.apache.org/jira/browse/HADOOP-12648
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: build
>Affects Versions: 2.6.2
> Environment: WIndow 7 32 bit 
> Maven 3.3.9
> Protoc 2.5.0
> Cmake 3.3.2
> Zlib 1.2.7
> Cygwin
>Reporter: Pardeep
>
> I have added the paths as per below:
> cmake_path =C:\cmake
> FINDBUGS_HOME=C:\FINDBUGS_HOME
> HADOOP_HOME=C:\HOOO\hadoop-2.6.2-src
> path=C:\JAVA\bin
> ZLIB_HOME=C:\zlib-1.2.7
> path 
> =C:\oraclexe\app\oracle\product\11.2.0\server\bin;D:\Forms\bin;D:\Reports\bin;D:\oracle\ora92\bin;C:\Program
>  Files\Oracle\jre\1.3.1\bin;C:\Program 
> Files\Oracle\jre\1.1.8\bin;D:\Workflow\bin;C:\Program Files\Intel\iCLS 
> Client\;%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%SYSTEMROOT%\System32\WindowsPowerShell\v1.0\;C:\Program
>  Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program 
> Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program 
> Files\WIDCOMM\Bluetooth Software\;C:\Program Files\Intel\WiFi\bin\;C:\Program 
> Files\Common Files\Intel\WirelessCommon\;D:\Forms\jdk\bin;C:\Program 
> Files\Intel\OpenCL SDK\2.0\bin\x86;D:\Reports\jdk\bin;C:\Program 
> Files\TortoiseSVN\bin;c:\cygwin\bin;%M2_HOME%\bin;C:\protobuf;C/Windows/Microsoft.NET/Framework/v4.0.30319;C:\Program
>  Files\Microsoft Windows Performance 
> Toolkit\;C:\msysgit\Git\cmd;C:\msysgit\bin\;C:\msysgit\mingw\bin\;C:\cmake;C:\FINDBUGS_HOME;C:\zlib-1.2.7
> Please let me know if anything is wrong or if I need to install any other software.





[jira] [Created] (HADOOP-12618) NPE in TestSequenceFile

2015-12-06 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-12618:
-

 Summary: NPE in TestSequenceFile
 Key: HADOOP-12618
 URL: https://issues.apache.org/jira/browse/HADOOP-12618
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


An NPE is thrown, which hides the actual error. It should throw a 
NumberFormatException here, since the count exceeds the integer range.

{noformat}
host-1:/opt/Hadoop-Trunk/install/hadoop/bin # ./yarn jar 
../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-SNAPSHOT.jar 
testsequencefile -count 1 hadoop
java.lang.NullPointerException
at org.apache.hadoop.io.TestSequenceFile.main(TestSequenceFile.java:754)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at 
org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
at 
org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.RunJar.run(RunJar.java:222)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
{noformat}
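For illustration, parsing a count beyond the integer range raises the NumberFormatException the reporter expected to see instead of the NPE (a standalone sketch, not the TestSequenceFile code):

```java
public class CountParseExample {
    public static void main(String[] args) {
        // 10^13 exceeds Integer.MAX_VALUE (2147483647), so parseInt rejects it.
        String count = "10000000000000";
        try {
            Integer.parseInt(count);
            System.out.println("parsed");
        } catch (NumberFormatException e) {
            // This diagnostic should surface instead of a NullPointerException.
            System.out.println("NumberFormatException: " + e.getMessage());
        }
    }
}
```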





[jira] [Created] (HADOOP-12259) Utility to Dynamic port allocation

2015-07-22 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-12259:
-

 Summary: Utility to Dynamic port allocation
 Key: HADOOP-12259
 URL: https://issues.apache.org/jira/browse/HADOOP-12259
 Project: Hadoop Common
  Issue Type: Bug
  Components: test, util
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


As per the discussion in YARN-3528 and [~rkanter]'s comment [here|https://issues.apache.org/jira/browse/YARN-3528?focusedCommentId=14637700&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14637700]





[jira] [Created] (HADOOP-11939) deprecate DistCpV1 and Logalyzer

2015-05-08 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-11939:
-

 Summary: deprecate DistCpV1 and Logalyzer
 Key: HADOOP-11939
 URL: https://issues.apache.org/jira/browse/HADOOP-11939
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula








[jira] [Resolved] (HADOOP-10492) Help Commands needs change after deprecation

2015-05-08 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula resolved HADOOP-10492.
---
Resolution: Not A Problem

 Help Commands needs change after deprecation
 

 Key: HADOOP-10492
 URL: https://issues.apache.org/jira/browse/HADOOP-10492
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Raja Nagendra Kumar
 Attachments: HADOOP-10492.patch


 As hadoop dfs is deprecated, the help should show usage with hdfs.
 For example, in the following command it still refers to:
 Usage: hadoop fs [generic options]
 D:\Apps\java\BI\hadoop\hw\hdp\hadoop-2.2.0.2.0.6.0-0009hdfs dfs
 Usage: hadoop fs [generic options]
 [-appendToFile localsrc ... dst]
 [-cat [-ignoreCrc] src ...]
 [-checksum src ...]
 [-chgrp [-R] GROUP PATH...]
 [-chmod [-R] MODE[,MODE]... | OCTALMODE PATH...]
 [-chown [-R] [OWNER][:[GROUP]] PATH...]
 [-copyFromLocal [-f] [-p] localsrc ... dst]
 [-copyToLocal [-p] [-ignoreCrc] [-crc] src ... localdst]
 [-count [-q] path ...]
 [-cp [-f] [-p] src ... dst]
 [-createSnapshot snapshotDir [snapshotName]]
 [-deleteSnapshot snapshotDir snapshotName]
 [-df [-h] [path ...]]
 [-du [-s] [-h] path ...]





[jira] [Created] (HADOOP-11922) Misspelling of threshold in log4j.properties for tests in hadoop-nfs

2015-05-05 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-11922:
-

 Summary: Misspelling of threshold in log4j.properties for tests in 
hadoop-nfs
 Key: HADOOP-11922
 URL: https://issues.apache.org/jira/browse/HADOOP-11922
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Priority: Minor


The log4j.properties file for tests contains the misspelling {{log4j.threshhold}}.
We should use {{log4j.threshold}} instead.
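The corrected line would look like the following (a sketch; the actual threshold level configured in the file may differ):

```properties
# Correct spelling is "threshold", not "threshhold"
log4j.threshold=ALL
```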





[jira] [Created] (HADOOP-11878) NPE in FileContext.java # fixRelativePart(Path p)

2015-04-27 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-11878:
-

 Summary: NPE in FileContext.java # fixRelativePart(Path p)
 Key: HADOOP-11878
 URL: https://issues.apache.org/jira/browse/HADOOP-11878
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


The following occurs when a job fails and the DeletionService tries to delete the 
log files:

2015-04-27 14:56:17,113 INFO 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting 
absolute path : null
2015-04-27 14:56:17,113 ERROR 
org.apache.hadoop.yarn.server.nodemanager.DeletionService: Exception during 
execution of task in DeletionService
java.lang.NullPointerException
at 
org.apache.hadoop.fs.FileContext.fixRelativePart(FileContext.java:274)
at org.apache.hadoop.fs.FileContext.delete(FileContext.java:761)
at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.deleteAsUser(DefaultContainerExecutor.java:457)
at 
org.apache.hadoop.yarn.server.nodemanager.DeletionService$FileDeletionTask.run(DeletionService.java:293)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)





[jira] [Created] (HADOOP-11854) Fix Typos in all the projects

2015-04-21 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-11854:
-

 Summary: Fix Typos in all the projects
 Key: HADOOP-11854
 URL: https://issues.apache.org/jira/browse/HADOOP-11854
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
Priority: Minor


Recently I have seen many JIRAs for fixing typos (and they keep accumulating). 
Hence I want to plan this in a proper manner so that everything is addressed.

I am thinking we can fix typos at the project level (or at most the package level).

My intention is to reduce the number of typo JIRAs. One more suggestion to 
reviewers: please don't commit class-level fixes; instead, check the whole 
project (or at most the package) for any such typos.

Please correct me if I am wrong, and I will close this JIRA.





[jira] [Resolved] (HADOOP-11672) test

2015-03-04 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula resolved HADOOP-11672.
---
Resolution: Not a Problem

 test
 

 Key: HADOOP-11672
 URL: https://issues.apache.org/jira/browse/HADOOP-11672
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: xiangqian.xu







[jira] [Created] (HADOOP-11661) Deprecate FileUtil#copyMerge

2015-03-02 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-11661:
-

 Summary: Deprecate FileUtil#copyMerge
 Key: HADOOP-11661
 URL: https://issues.apache.org/jira/browse/HADOOP-11661
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


 FileUtil#copyMerge is currently unused in the Hadoop source tree. In branch-1, 
it had been part of the implementation of the hadoop fs -getmerge shell 
command. In branch-2, the code for that shell command was rewritten in a way 
that no longer requires this method.

Please check more details here:

https://issues.apache.org/jira/browse/HADOOP-11392?focusedCommentId=14339336&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14339336





[jira] [Resolved] (HADOOP-11654) .

2015-02-28 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula resolved HADOOP-11654.
---
Resolution: Not a Problem

 .
 -

 Key: HADOOP-11654
 URL: https://issues.apache.org/jira/browse/HADOOP-11654
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Eric J. Van der Velden

 I'm sorry, I made a mistake about hadoop-config.sh.
 Please remove this issue.





[jira] [Resolved] (HADOOP-11326) documentation for configuring HVE: dfs.block.replicator.classname should be org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyWithNodeGroup

2015-02-25 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula resolved HADOOP-11326.
---
Resolution: Not a Problem

[~ejohansen] I am closing this issue. Can you raise it in the CDH JIRA? Correct me 
if I am wrong.

 documentation for configuring HVE: dfs.block.replicator.classname should be 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyWithNodeGroup
 ---

 Key: HADOOP-11326
 URL: https://issues.apache.org/jira/browse/HADOOP-11326
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: ellen johansen
Assignee: Brahma Reddy Battula

 Attempted to create an HVE VM based cluster following the directions posted 
 here: 
 https://issues.apache.org/jira/secure/attachment/12551386/HVE%20User%20Guide%20on%20branch-1%28draft%20%29.pdf
  
 The doc has the value for dfs.block.replicator.classname set to 
 org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicyWithNodeGroup  but 
 that class doesn't exist in hadoop 2.5, it seems that the class is 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyWithNodeGroup
 This JIRA is to request the documentation be updated. 
 thanks,





[jira] [Created] (HADOOP-11634) WebHDFS Kerberos principal and keytab descriptions are wrongly given (interchanged) in SecureMode doc

2015-02-24 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-11634:
-

 Summary: WebHDFS Kerberos principal and keytab descriptions are 
wrongly given (interchanged) in SecureMode doc
 Key: HADOOP-11634
 URL: https://issues.apache.org/jira/browse/HADOOP-11634
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.6.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


 *Need to interchange the following notes for principal and keytab* 

{noformat}
Parameter                                   Value                                      Notes
dfs.web.authentication.kerberos.principal   http/_h...@realm.tld                       Kerberos keytab file for the WebHDFS.
dfs.web.authentication.kerberos.keytab      /etc/security/keytab/http.service.keytab   Kerberos principal name for WebHDFS.
{noformat}
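For reference, the intended pairing of descriptions would read as below (a sketch of the documentation fix; the principal value uses the conventional HTTP/_HOST form as a placeholder, since the value quoted above is truncated):

```xml
<!-- Kerberos principal name for WebHDFS (placeholder value) -->
<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@REALM.TLD</value>
</property>

<!-- Kerberos keytab file for WebHDFS -->
<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/etc/security/keytab/http.service.keytab</value>
</property>
```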






[jira] [Created] (HADOOP-11609) Correct credential commands info in CommandsManual.html#credential

2015-02-17 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-11609:
-

 Summary: Correct credential commands info in 
CommandsManual.html#credential
 Key: HADOOP-11609
 URL: https://issues.apache.org/jira/browse/HADOOP-11609
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation, security
Reporter: Brahma Reddy Battula



 -i is not supported, so it should be removed.
-v should be undocumented; the option is used only by tests.

{noformat}
create <alias> [-v <value>] [-provider <provider-path>]
    Prompts the user for a credential to be stored as the given alias when a
    value is not provided via -v. The hadoop.security.credential.provider.path
    within the core-site.xml file will be used unless a -provider is indicated.

delete <alias> [-i] [-provider <provider-path>]
    Deletes the credential with the provided alias and optionally warns the
    user when --interactive is used. The
    hadoop.security.credential.provider.path within the core-site.xml file
    will be used unless a -provider is indicated.

list [-provider <provider-path>]
    Lists all of the credential aliases. The
    hadoop.security.credential.provider.path within the core-site.xml file
    will be used unless a -provider is indicated.
{noformat}





[jira] [Resolved] (HADOOP-10909) Add more documents about command daemonlog

2015-02-11 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula resolved HADOOP-10909.
---
Resolution: Duplicate
  Assignee: Brahma Reddy Battula

 Add more documents about command daemonlog
 

 Key: HADOOP-10909
 URL: https://issues.apache.org/jira/browse/HADOOP-10909
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Jeff Zhang
Assignee: Brahma Reddy Battula

 In the current document, it does not explain what the name argument means. 
 People without Java knowledge would think it is a process name, such as 
 NameNode or DataNode, but it is actually a class name. So I suggest adding more 
 documentation about daemonlog to explain the arguments.
 PS: I only found the document on the official site (Programming Guide -- Commands 
 Guide), but did not find it in trunk. Does anybody know where the 
 Commands Guide document is?





[jira] [Created] (HADOOP-11581) Multithreaded correctness Warnings #org.apache.hadoop.fs.shell.Ls

2015-02-10 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-11581:
-

 Summary: Multithreaded correctness Warnings 
#org.apache.hadoop.fs.shell.Ls
 Key: HADOOP-11581
 URL: https://issues.apache.org/jira/browse/HADOOP-11581
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


Please check the following for the same:

https://builds.apache.org/job/PreCommit-HADOOP-Build/5646//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html#Warnings_MT_CORRECTNESS





[jira] [Created] (HADOOP-11545) [ hadoop credential ] ArrayIndexOutOfBoundsException thrown when we list -provider

2015-02-04 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-11545:
-

 Summary: [ hadoop credential ] ArrayIndexOutOfBoundsException  
thrown when we list -provider
 Key: HADOOP-11545
 URL: https://issues.apache.org/jira/browse/HADOOP-11545
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


Scenario:

Run the following command without giving the provider path:

{noformat}
[hdfs@host194 bin]$ ./hadoop credential list -provider
java.lang.ArrayIndexOutOfBoundsException: 2
at 
org.apache.hadoop.security.alias.CredentialShell.init(CredentialShell.java:117)
at 
org.apache.hadoop.security.alias.CredentialShell.run(CredentialShell.java:63)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at 
org.apache.hadoop.security.alias.CredentialShell.main(CredentialShell.java:427)
{noformat}
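The exception comes from reading the token after "-provider" without first checking that it exists. A minimal Java sketch of the kind of bounds check that would avoid it — this is illustrative only, not the actual CredentialShell code, and the parseProvider helper is invented for the example:

```java
public class ArgsCheckDemo {
    /** Returns the value following "-provider", or null if it is missing. */
    static String parseProvider(String[] args) {
        for (int i = 0; i < args.length; i++) {
            if ("-provider".equals(args[i])) {
                // Guard against reading past the end of args; skipping this
                // check is what produces the ArrayIndexOutOfBoundsException.
                if (i + 1 >= args.length) {
                    System.err.println("-provider requires a path argument");
                    return null;
                }
                return args[i + 1];
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // "-provider" with no path is reported instead of throwing.
        System.out.println(parseProvider(new String[] {"list", "-provider"}));
        // "-provider" with a path returns the path.
        System.out.println(parseProvider(
            new String[] {"list", "-provider", "jceks://file/tmp/test.jceks"}));
    }
}
```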





[jira] [Created] (HADOOP-11392) FileUtil.java leaks file descriptor when copyBytes succeeds

2014-12-11 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-11392:
-

 Summary: FileUtil.java leaks file descriptor when copyBytes 
succeeds
 Key: HADOOP-11392
 URL: https://issues.apache.org/jira/browse/HADOOP-11392
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula


 Please check the following code; the streams are closed only on the exception path:
{code}
try {
in = srcFS.open(src);
out = dstFS.create(dst, overwrite);
IOUtils.copyBytes(in, out, conf, true);
  } catch (IOException e) {
IOUtils.closeStream(out);
IOUtils.closeStream(in);
throw e;
  }
}
{code}
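For comparison, a close-on-every-path version can be sketched with try-with-resources. This uses plain java.io/java.nio streams rather than Hadoop's FileSystem API, so it only illustrates the pattern, not a patch:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class CopyDemo {
    /**
     * Copies src to dst. try-with-resources closes both streams on success
     * as well as on failure, so no descriptor can leak on the happy path.
     */
    static void copy(Path src, Path dst) throws IOException {
        try (InputStream in = Files.newInputStream(src);
             OutputStream out = Files.newOutputStream(dst)) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        } // in and out are closed here on every exit path
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("copy-src", ".txt");
        Path dst = Files.createTempFile("copy-dst", ".txt");
        Files.write(src, "hello".getBytes());
        copy(src, dst);
        System.out.println(new String(Files.readAllBytes(dst)));
    }
}
```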





[jira] [Created] (HADOOP-8636) Decommissioned nodes are included in cluster after switch which is not expected

2012-07-30 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-8636:


 Summary: Decommissioned nodes are included in cluster after switch 
which is not expected
 Key: HADOOP-8636
 URL: https://issues.apache.org/jira/browse/HADOOP-8636
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.0.0-alpha, 2.1.0-alpha, 2.0.1-alpha
Reporter: Brahma Reddy Battula


Scenario:
=

Start ANN and SNN with three DNs.

Exclude DN1 from the cluster using the decommission feature:

(./hdfs dfsadmin -fs hdfs://ANNIP:8020 -refreshNodes)

After the decommission succeeds, perform a failover so that the SNN becomes Active.

Now the excluded node (DN1) is included in the cluster again; files can be 
written to the excluded node since it is no longer excluded.

The SNN UI (Active before the switch) shows decommissioned=1, while the ANN UI 
shows decommissioned=0.

One more observation:

All dfsadmin commands create a proxy only on nn1, irrespective of which 
NameNode is Active or Standby. I think this also needs a second look.

Why is HA not provided for dfsadmin commands?

Please correct me if I am wrong.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8506) Standby NameNode is entering into Safemode even after HDFS-2914 due to resources low

2012-06-12 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-8506:


 Summary: Standby NameNode is entering into Safemode even after 
HDFS-2914 due to resources low
 Key: HADOOP-8506
 URL: https://issues.apache.org/jira/browse/HADOOP-8506
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Brahma Reddy Battula


Scenario:
=
Start ANN and SNN with one DN.
Fill the SNN's disk to 100%.
Now restart the SNN.

The SNN enters safemode, but that should not happen according to HDFS-2914.





[jira] [Created] (HADOOP-8436) NPE In getLocalPathForWrite ( path, conf ) when dfs.client.buffer.dir not configured

2012-05-25 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-8436:


 Summary: NPE In getLocalPathForWrite ( path, conf ) when 
dfs.client.buffer.dir not configured
 Key: HADOOP-8436
 URL: https://issues.apache.org/jira/browse/HADOOP-8436
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Brahma Reddy Battula


Call dirAllocator.getLocalPathForWrite(path, conf) without configuring 
dfs.client.buffer.dir:
{noformat}
java.lang.NullPointerException
at 
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:261)
at 
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:365)
at 
org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:134)
at 
org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:113)
{noformat}
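The NPE could be replaced with a descriptive error by validating the configured value before use. A hypothetical sketch of that check, using a plain Map in place of Hadoop's Configuration (the getBufferDirs helper is invented for illustration):

```java
import java.io.IOException;
import java.util.Map;

public class DirConfigDemo {
    /**
     * Returns the configured local buffer directories, failing with a
     * descriptive IOException instead of a NullPointerException when
     * the key is absent or empty.
     */
    static String[] getBufferDirs(Map<String, String> conf) throws IOException {
        String dirs = conf.get("dfs.client.buffer.dir");
        if (dirs == null || dirs.trim().isEmpty()) {
            throw new IOException(
                "No local directories configured for dfs.client.buffer.dir");
        }
        return dirs.split(",");
    }

    public static void main(String[] args) throws IOException {
        // Configured: the directories are returned.
        System.out.println(
            getBufferDirs(Map.of("dfs.client.buffer.dir", "/tmp/a,/tmp/b")).length);
        // Not configured: a clear IOException, not an NPE.
        try {
            getBufferDirs(Map.of());
        } catch (IOException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```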





[jira] [Created] (HADOOP-8437) getLocalPathForWrite does not throw an exception for invalid paths

2012-05-25 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-8437:


 Summary: getLocalPathForWrite does not throw an exception for 
invalid paths
 Key: HADOOP-8437
 URL: https://issues.apache.org/jira/browse/HADOOP-8437
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.1-alpha
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


Call dirAllocator.getLocalPathForWrite("/InvalidPath", conf).
It does not throw any exception, but earlier versions used to throw.





[jira] [Created] (HADOOP-8433) Logs are getting misplaced after introducing hadoop-env.sh

2012-05-24 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-8433:


 Summary: Logs are getting misplaced after introducing hadoop-env.sh
 Key: HADOOP-8433
 URL: https://issues.apache.org/jira/browse/HADOOP-8433
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.0.1-alpha, 3.0.0
Reporter: Brahma Reddy Battula


It is better to comment out the following in hadoop-env.sh:

# Where log files are stored.  $HADOOP_HOME/logs by default.
export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER

Because of this, logs are placed under $USER, and since the line is evaluated 
twice while starting a process, the logs end up at /root/root/.
