[jira] [Created] (HADOOP-14682) cmake Makefiles in hadoop-common don't properly respect -Dopenssl.prefix

2017-07-24 Thread Ravi Prakash (JIRA)
Ravi Prakash created HADOOP-14682:
-

 Summary: cmake Makefiles in hadoop-common don't properly respect 
-Dopenssl.prefix
 Key: HADOOP-14682
 URL: https://issues.apache.org/jira/browse/HADOOP-14682
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ravi Prakash


Allen reported that while running tests, cmake didn't properly respect 
-Dopenssl.prefix, which would allow us to build and run the tests with 
different versions of OpenSSL.
https://issues.apache.org/jira/browse/HADOOP-14597?focusedCommentId=16092114&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16092114

I too encountered some odd behavior while trying to build with a non-default 
OpenSSL library.
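
For reference, a minimal sketch of the invocation that should work once this 
is fixed (the prefix path below is hypothetical):
{code}
# Build the native code and run tests against a non-default OpenSSL install;
# cmake should pick up this prefix for both headers and libraries.
mvn test -Pnative -Dopenssl.prefix=/opt/openssl-1.1.0
{code}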






[jira] [Created] (HADOOP-14597) Native compilation broken with OpenSSL-1.1.0 because EVP_CIPHER_CTX has been made opaque

2017-06-27 Thread Ravi Prakash (JIRA)
Ravi Prakash created HADOOP-14597:
-

 Summary: Native compilation broken with OpenSSL-1.1.0 because 
EVP_CIPHER_CTX has been made opaque
 Key: HADOOP-14597
 URL: https://issues.apache.org/jira/browse/HADOOP-14597
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0-alpha4
 Environment: openssl-1.1.0
Reporter: Ravi Prakash


Trying to build Hadoop trunk on Fedora 26, which ships openssl-devel-1.1.0, 
fails with this error:
{code}[WARNING] 
/home/raviprak/Code/hadoop/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c:
 In function ‘check_update_max_output_len’:
[WARNING] 
/home/raviprak/Code/hadoop/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c:256:14:
 error: dereferencing pointer to incomplete type ‘EVP_CIPHER_CTX {aka struct 
evp_cipher_ctx_st}’
[WARNING]if (context->flags & EVP_CIPH_NO_PADDING) {
[WARNING]   ^~
{code}

In https://github.com/openssl/openssl/issues/962, mattcaswell says:
{quote}
One of the primary differences between master (OpenSSL 1.1.0) and the 1.0.2 
version is that many types have been made opaque, i.e. applications are no 
longer allowed to look inside the internals of the structures.
{quote}
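
A minimal sketch of the kind of change needed in OpensslCipher.c, assuming we 
guard on the OpenSSL version (an illustration, not necessarily the patch that 
will land; Hadoop actually resolves these symbols dynamically at runtime):
{code}
#include <openssl/evp.h>
#include <openssl/opensslv.h>

/* Check whether padding is disabled without dereferencing the
 * (now opaque) EVP_CIPHER_CTX struct. */
static int no_padding(const EVP_CIPHER_CTX *context)
{
#if OPENSSL_VERSION_NUMBER >= 0x10100000L
  /* OpenSSL >= 1.1.0: the struct is opaque; use the accessor. */
  return EVP_CIPHER_CTX_test_flags(context, EVP_CIPH_NO_PADDING) != 0;
#else
  /* OpenSSL 1.0.x: direct member access still compiles. */
  return (context->flags & EVP_CIPH_NO_PADDING) != 0;
#endif
}
{code}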






[jira] [Resolved] (HADOOP-14513) A little performance improvement of HarFileSystem

2017-06-13 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-14513.
---
Resolution: Not A Problem

> A little performance improvement of HarFileSystem
> -
>
> Key: HADOOP-14513
> URL: https://issues.apache.org/jira/browse/HADOOP-14513
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha3
>Reporter: hu xiaodong
>Assignee: hu xiaodong
>Priority: Trivial
> Attachments: HADOOP-14513.001.patch
>
>
> In the Java source of HarFileSystem.java:
> {code:title=HarFileSystem.java|borderStyle=solid}
> ...
> ...
> private Path archivePath(Path p) {
> Path retPath = null;
> Path tmp = p;
> 
> // p.depth() need not be re-evaluated on every iteration; depth() is a 
> // relatively expensive computation
> for (int i=0; i< p.depth(); i++) {
>   if (tmp.toString().endsWith(".har")) {
> retPath = tmp;
> break;
>   }
>   tmp = tmp.getParent();
> }
> return retPath;
>   }
> ...
> ...
> {code}
>  
> I think the following is more suitable:
> {code:title=HarFileSystem.java|borderStyle=solid}
> ...
> ...
> private Path archivePath(Path p) {
> Path retPath = null;
> Path tmp = p;
> 
> // evaluate depth() only once
> for (int i=0,depth=p.depth(); i< depth; i++) {
>   if (tmp.toString().endsWith(".har")) {
> retPath = tmp;
> break;
>   }
>   tmp = tmp.getParent();
> }
> return retPath;
>   }
> ...
> ...
> {code}






[jira] [Resolved] (HADOOP-14319) Under replicated blocks are not getting re-replicated

2017-04-18 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-14319.
---
Resolution: Invalid

Please send your queries to the hdfs-user mailing list: 
https://hadoop.apache.org/mailing_lists.html
To answer your query, please look at dfs.namenode.replication.max-streams, 
dfs.namenode.replication.max-streams-hard-limit, 
dfs.namenode.replication.work.multiplier.per.iteration, etc.
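
For illustration, a sketch of where those knobs live; the values below are 
arbitrary examples, not recommendations:
{code}
<!-- hdfs-site.xml: throttles that pace re-replication work (example values) -->
<property>
  <name>dfs.namenode.replication.max-streams</name>
  <value>4</value>
</property>
<property>
  <name>dfs.namenode.replication.work.multiplier.per.iteration</name>
  <value>4</value>
</property>
{code}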


> Under replicated blocks are not getting re-replicated
> -
>
> Key: HADOOP-14319
> URL: https://issues.apache.org/jira/browse/HADOOP-14319
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Anil
>
> Under replicated blocks are not getting re-replicated
> In a production Hadoop cluster of 5 management + 5 data nodes, under 
> replicated blocks are not re-replicated even after 2 days. 
> Here is a quick view of the relevant configuration:
>  Default replication factor:  3
>  Average block replication:   3.0
>  Corrupt blocks:  0
>  Missing replicas:0 (0.0 %)
>  Number of data-nodes:5
>  Number of racks: 1
> After bringing one of the DataNodes down, the replication count for the 
> blocks stored on that DataNode became 2. Even after 2 days it remains 2; the 
> under-replicated blocks are not re-replicated to other DataNodes in the 
> cluster. 
> If a DataNode goes down, HDFS should re-replicate the blocks from the dead 
> DN to other nodes according to priority. Are there any configuration changes 
> to speed up the re-replication of under-replicated blocks? 
> When tested with blocks of replication factor 1, re-replication to 2 
> happened overnight, in around 10 hours. But blocks with replication factor 2 
> are not being re-replicated up to the default replication factor of 3. 






[jira] [Resolved] (HADOOP-11232) jersey-core-1.9 has a faulty glassfish-repo setting

2017-03-09 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-11232.
---
Resolution: Duplicate

HADOOP-9613 seems to have upgraded jersey to 1.19. Please reopen if I'm 
mistaken.
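
For anyone hitting this in the meantime, a hedged sketch of the kind of 
exclusions the reporter describes (coordinates for jersey 1.x; the full set 
needed may be longer):
{code}
<!-- consumer pom.xml: keep the faulty jersey-1.9 poms out of the graph -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>${hadoop.version}</version>
  <exclusions>
    <exclusion>
      <groupId>com.sun.jersey</groupId>
      <artifactId>jersey-core</artifactId>
    </exclusion>
    <exclusion>
      <groupId>com.sun.jersey</groupId>
      <artifactId>jersey-json</artifactId>
    </exclusion>
    <exclusion>
      <groupId>com.sun.jersey</groupId>
      <artifactId>jersey-server</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}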

> jersey-core-1.9 has a faulty glassfish-repo setting
> ---
>
> Key: HADOOP-11232
> URL: https://issues.apache.org/jira/browse/HADOOP-11232
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Sushanth Sowmyan
>
> The following was reported by [~sushanth].
> hadoop-common brings in jersey-core-1.9 as a dependency by default.
> This is problematic, since the pom file for jersey 1.9 hard-codes 
> glassfish-repo as the place to get further transitive dependencies, which 
> leads to a site that serves a static "this has moved" page instead of a 404. 
> This results in faulty parent resolutions which, when asked for a pom file, 
> return erroneous results.
> The only way around this seems to be to add a series of exclusions for 
> jersey-core, jersey-json, jersey-server and a bunch of others to 
> hadoop-common, then to hadoop-hdfs, then to hadoop-mapreduce-client-core. I 
> don't know how many more excludes are necessary before I can get this to work.
> If you update your jersey.version to 1.14, this faulty pom goes away. Please 
> either update that, or work with build infra to update our nexus pom for 
> jersey-1.9 so that it does not include the faulty glassfish repo.
> Another interesting note about this is that something changed yesterday 
> evening to cause this break in behaviour. We have not had this particular 
> problem in about 9+ months.






[jira] [Resolved] (HADOOP-12563) Updated utility to create/modify token files

2016-04-29 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-12563.
---
Resolution: Fixed

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Fix For: 3.0.0
>
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> HADOOP-12563.11.patch, HADOOP-12563.12.patch, HADOOP-12563.13.patch, 
> HADOOP-12563.14.patch, HADOOP-12563.15.patch, HADOOP-12563.16.patch, 
> dtutil-test-out, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serializations which are hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.
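
A hedged sketch of how the new utility is driven (subcommands as described in 
the attached examples; exact flags may differ):
{code}
# fetch a token from a service into a protobuf-format token file
hadoop dtutil get hdfs://namenode:8020 -format protobuf tokens.bin

# inspect a token file, and append the tokens of one file into another
hadoop dtutil print tokens.bin
hadoop dtutil append extra-tokens.bin tokens.bin
{code}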






[jira] [Reopened] (HADOOP-12563) Updated utility to create/modify token files

2016-04-22 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash reopened HADOOP-12563:
---

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Fix For: 3.0.0
>
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> HADOOP-12563.11.patch, HADOOP-12563.12.patch, HADOOP-12563.13.patch, 
> dtutil-test-out, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serializations which are hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.





[jira] [Created] (HADOOP-13018) Make Kdiag fail fast if hadoop.token.files points to non-existent file

2016-04-11 Thread Ravi Prakash (JIRA)
Ravi Prakash created HADOOP-13018:
-

 Summary: Make Kdiag fail fast if hadoop.token.files points to 
non-existent file
 Key: HADOOP-13018
 URL: https://issues.apache.org/jira/browse/HADOOP-13018
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Ravi Prakash








[jira] [Resolved] (HADOOP-12108) Erroneous behavior of use of wildcard character ( * ) in ls command of hdfs

2015-06-22 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-12108.
---
Resolution: Invalid

Thanks Aman! Steve is right. You do need to use quotes when there is already a 
file on the local file system that would match the wildcard; otherwise the 
local shell expands the glob before hdfs ever sees it.
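
For example (quoting keeps the local shell from expanding the glob):
{code}
# unquoted: the local shell expands * against the local filesystem first
hdfs dfs -ls -R /data/hadoop/sample/*

# quoted: the glob is passed through and expanded against HDFS
hdfs dfs -ls -R '/data/hadoop/sample/*'
{code}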

 Erroneous behavior of use of wildcard character ( * ) in ls command of hdfs 
 

 Key: HADOOP-12108
 URL: https://issues.apache.org/jira/browse/HADOOP-12108
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Aman Goyal
Priority: Critical

 If you have the following directories in your LOCAL file system 
 /data/hadoop/sample/00/contents1.txt
 /data/hadoop/sample/01/contents2.txt
 and following directories in hdfs : 
 /data/hadoop/sample/00/contents1.txt
 /data/hadoop/sample/01/contents2.txt
 /data/hadoop/sample/02/contents3.txt
 suppose you run the following hdfs ls command:
 hdfs dfs -ls -R /data/hadoop/sample/*
 the paths that are printed refer to local paths, and only the 00 and 01 
 directories get listed. 
 This happens only when the wildcard (*) character is used in input paths.





[jira] [Resolved] (HADOOP-11972) hdfs dfs -copyFromLocal reports File Not Found instead of Permission Denied.

2015-05-19 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-11972.
---
Resolution: Duplicate

This is a legitimate problem. Duping to HDFS-5033.

 hdfs dfs -copyFromLocal reports File Not Found instead of Permission Denied.
 

 Key: HADOOP-11972
 URL: https://issues.apache.org/jira/browse/HADOOP-11972
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.6.0
 Environment: Linux hadoop-8309-2.west.isilon.com 
 2.6.32-504.16.2.el6.centos.plus.x86_64 #1 SMP Wed Apr 22 00:59:31 UTC 2015 
 x86_64 x86_64 x86_64 GNU/Linux
Reporter: David Tucker

 userA creates a file in /home/userA with 700 permissions.
 userB tries to copy it to HDFS, and receives a "No such file or directory" 
 instead of "Permission denied".
 [hrt_qa@hadoop-8309-2 ~]$ touch ./foo
 [hrt_qa@hadoop-8309-2 ~]$ ls -l ./foo
 -rw-r--r--. 1 hrt_qa users 0 May 14 16:09 ./foo
 [hrt_qa@hadoop-8309-2 ~]$ sudo su hbase
 [hbase@hadoop-8309-2 hrt_qa]$ ls -l ./foo
 ls: cannot access ./foo: Permission denied
 [hbase@hadoop-8309-2 hrt_qa]$ hdfs dfs -copyFromLocal ./foo /tmp/foo
 copyFromLocal: `./foo': No such file or directory





[jira] [Resolved] (HADOOP-11992) ORA-00933: SQL command not properly ended

2015-05-18 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-11992.
---
Resolution: Duplicate

Duplicate of https://issues.apache.org/jira/browse/MAPREDUCE-3695

 ORA-00933: SQL command not properly ended
 -

 Key: HADOOP-11992
 URL: https://issues.apache.org/jira/browse/HADOOP-11992
 Project: Hadoop Common
  Issue Type: Bug
Reporter: eiko
Assignee: eiko
  Labels: hadoop

 hello
 When I insert data into an Oracle database from HDFS with MapReduce, an error 
 occurs like this:
 ORA-00933: SQL command not properly ended
 -
 this is my solution
 hadoop version: hadoop-2.6.0
 file: DBOutputFormat.class
 method: constructQuery
 line: 163 query.append(");");
 -
 modify like this
 query.append(")" + "\n");
 contact me
 email:minicoo...@gmail.com





[jira] [Resolved] (HADOOP-11972) hdfs dfs -copyFromLocal reports File Not Found instead of Permission Denied.

2015-05-16 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-11972.
---
Resolution: Invalid

This is because '.' is translated to /user/hbase for user hbase, and 
/user/hrt_qa for user hrt_qa.
If you think this is not the case, please reopen and tell us:
1. Is your environment Kerberized?
2. Are you using NFS?
3. What happens when you do the same thing using the HDFS CLI?

 hdfs dfs -copyFromLocal reports File Not Found instead of Permission Denied.
 

 Key: HADOOP-11972
 URL: https://issues.apache.org/jira/browse/HADOOP-11972
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.6.0
 Environment: Linux hadoop-8309-2.west.isilon.com 
 2.6.32-504.16.2.el6.centos.plus.x86_64 #1 SMP Wed Apr 22 00:59:31 UTC 2015 
 x86_64 x86_64 x86_64 GNU/Linux
Reporter: David Tucker

 userA creates a file in /home/userA with 700 permissions.
 userB tries to copy it to HDFS, and receives a No such file or directory 
 instead of Permission denied.
 [hrt_qa@hadoop-8309-2 ~]$ touch ./foo
 [hrt_qa@hadoop-8309-2 ~]$ ls -l ./foo
 -rw-r--r--. 1 hrt_qa users 0 May 14 16:09 ./foo
 [hrt_qa@hadoop-8309-2 ~]$ sudo su hbase
 [hbase@hadoop-8309-2 hrt_qa]$ ls -l ./foo
 ls: cannot access ./foo: Permission denied
 [hbase@hadoop-8309-2 hrt_qa]$ hdfs dfs -copyFromLocal ./foo /tmp/foo
 copyFromLocal: `./foo': No such file or directory





[jira] [Resolved] (HADOOP-11976) submit job with oozie between clusters

2015-05-15 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-11976.
---
Resolution: Invalid

Sunmeng! JIRA is for tracking development changes to Hadoop. For user queries 
please send an email to u...@hadoop.apache.org. Being able to submit jobs to 
YARN using oozie is a pretty basic feature that I doubt would be broken in 
2.5.x. 

 submit job with oozie between clusters
 

 Key: HADOOP-11976
 URL: https://issues.apache.org/jira/browse/HADOOP-11976
 Project: Hadoop Common
  Issue Type: Bug
Reporter: sunmeng







[jira] [Resolved] (HADOOP-7692) hadoop single node setup script to create mapred dir

2015-03-19 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-7692.
--
Resolution: Won't Fix

 hadoop single node setup script to create mapred dir
 

 Key: HADOOP-7692
 URL: https://issues.apache.org/jira/browse/HADOOP-7692
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.205.0, 1.1.0
Reporter: Giridharan Kesavan

 The hadoop single node setup script should create the /mapred directory and 
 chown it to mapred:mapred; the JobTracker requires this directory for startup.





[jira] [Resolved] (HADOOP-7643) Bump up the version of aspectj

2015-03-19 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-7643.
--
Resolution: Won't Fix

 Bump up the version of aspectj
 --

 Key: HADOOP-7643
 URL: https://issues.apache.org/jira/browse/HADOOP-7643
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, test
Affects Versions: 0.20.205.0, 1.1.0
Reporter: Kihwal Lee
Priority: Minor

 When the fault injection target is enabled, aspectj fails with the following 
 message:
 Can't parameterize a member of non-generic type:
 This is fixed by upgrading aspectj. I tested with 1.6.11 and it worked.
 The fix will also apply to trunk, but I believe trunk has other problems.





[jira] [Resolved] (HADOOP-7638) visibility of the security utils and things like getCanonicalService.

2015-03-19 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-7638.
--
Resolution: Later

 visibility of the security utils and things like getCanonicalService.
 -

 Key: HADOOP-7638
 URL: https://issues.apache.org/jira/browse/HADOOP-7638
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 0.24.0
Reporter: John George
Priority: Minor

 It would be a good idea to file an additional jira to take another look at 
 the visibility of the security utils and things like getCanonicalService. 
 It doesn't seem like these should be fully public. 





[jira] [Created] (HADOOP-11596) Allow smart-apply-patch.sh to add new files in binary git patches

2015-02-14 Thread Ravi Prakash (JIRA)
Ravi Prakash created HADOOP-11596:
-

 Summary: Allow smart-apply-patch.sh to add new files in binary git 
patches
 Key: HADOOP-11596
 URL: https://issues.apache.org/jira/browse/HADOOP-11596
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash


When a new file is added, the source is /dev/null (rather than the root of the 
tree, which would mean an a/b prefix). Allow for this.
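
For reference, this is the header git emits for a newly added file (the 
filename here is made up):
{code}
diff --git a/new-file.txt b/new-file.txt
new file mode 100644
--- /dev/null
+++ b/new-file.txt
{code}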





[jira] [Created] (HADOOP-11516) Enable HTTP compression on all hadoop UIs

2015-01-28 Thread Ravi Prakash (JIRA)
Ravi Prakash created HADOOP-11516:
-

 Summary: Enable HTTP compression on all hadoop UIs
 Key: HADOOP-11516
 URL: https://issues.apache.org/jira/browse/HADOOP-11516
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ravi Prakash


Some of our UI pages (e.g. # of jobs, # of task attempts, logs) are extremely 
big. It would drastically improve their load times if we used HTTP compression.
There exists a GzipFilter for jetty that can be added so that if the browser 
includes "Accept-Encoding: gzip, deflate" in the request header, the server 
will respond with the resource compressed and a response header 
"Content-Encoding: gzip" (or deflate). 
http://www.eclipse.org/jetty/documentation/current/gzip-filter.html
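
At the HTTP level the negotiation looks like this (the URL path is just an 
example):
{code}
GET /jobhistory/joblist HTTP/1.1
Host: historyserver.example.com:19888
Accept-Encoding: gzip, deflate

HTTP/1.1 200 OK
Content-Encoding: gzip
{code}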





[jira] [Resolved] (HADOOP-11360) GraphiteSink reports data with wrong timestamp

2014-12-08 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-11360.
---
Resolution: Duplicate

Thanks for the reporting this JIRA Kamil!
This looks like a duplicate of 
https://issues.apache.org/jira/browse/HADOOP-11182 which was fixed in Hadoop 
2.6.0. If not, please re-open this JIRA

 GraphiteSink reports data with wrong timestamp
 --

 Key: HADOOP-11360
 URL: https://issues.apache.org/jira/browse/HADOOP-11360
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Reporter: Kamil Gorlo

 I've tried to use GraphiteSink with the metrics2 system, but it looks like 
 the timestamp sent to Graphite is refreshed only rarely (about every 2 
 minutes) no matter how small the period is set.
 Here is my configuration:
 *.sink.graphite.server_host=graphite-relay.host
 *.sink.graphite.server_port=2013
 *.sink.graphite.metrics_prefix=graphite.warehouse-data-1
 *.period=10
 nodemanager.sink.graphite.class=org.apache.hadoop.metrics2.sink.GraphiteSink
 And here is the network traffic dumped to graphite-relay.host (only selected 
 lines; each line appears every 10 seconds, as the period suggests):
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041472
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041472
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041472
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  3 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  4 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  2 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  3 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  2 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  2 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  1 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  1 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041728
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041728
 As you can see, the AllocatedContainers value is refreshed every 10 seconds, 
 but the timestamp is not.
 It looks like the problem is a level above (in the classes providing 
 MetricsRecord, because the timestamp value is taken from the MetricsRecord 
 object passed to the putMetrics method of the Sink implementation), which 
 implies that every sink will have the same problem. Maybe I misconfigured 
 something?





[jira] [Created] (HADOOP-11192) Change old subversion links to git

2014-10-10 Thread Ravi Prakash (JIRA)
Ravi Prakash created HADOOP-11192:
-

 Summary: Change old subversion links to git
 Key: HADOOP-11192
 URL: https://issues.apache.org/jira/browse/HADOOP-11192
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ravi Prakash


e.g. hadoop-project/src/site/site.xml still references SVN. 
We should probably also check our wikis and other documentation. 





[jira] [Created] (HADOOP-11185) There should be a way to disable a kill -9 during stop

2014-10-09 Thread Ravi Prakash (JIRA)
Ravi Prakash created HADOOP-11185:
-

 Summary: There should be a way to disable a kill -9 during stop
 Key: HADOOP-11185
 URL: https://issues.apache.org/jira/browse/HADOOP-11185
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ravi Prakash


e.g. hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh 
calls kill -9 after some time. This might not be the best thing to do for some 
processes (if HA is not enabled). There should be a way to disable this kill 
-9.
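
A sketch of the kind of knob this could be (the variable name below is 
hypothetical, not an existing option):
{code}
# in the stop logic: only escalate to SIGKILL when the operator allows it
if [[ "${HADOOP_STOP_FORCE_KILL:-true}" = "true" ]]; then
  kill -9 "${pid}" >/dev/null 2>&1
fi
{code}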





[jira] [Resolved] (HADOOP-7985) maven build should be super fast when there are no changes

2013-08-15 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-7985.
--

Resolution: Won't Fix

 maven build should be super fast when there are no changes
 --

 Key: HADOOP-7985
 URL: https://issues.apache.org/jira/browse/HADOOP-7985
 Project: Hadoop Common
  Issue Type: Wish
  Components: build
Affects Versions: 0.23.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash
  Labels: build, maven
 Attachments: HADOOP-7985.patch


 I use this command to build: "mvn -Pdist -P-cbuild -Dmaven.javadoc.skip 
 -DskipTests install". Without ANY changes in the code, running this command 
 takes 1:32. It seems to me this is too long. Investigate whether this time 
 can be reduced drastically.



[jira] [Created] (HADOOP-9732) JOB_FAIL and JOB_KILL have different behaviors when they should ideally be same / similar.

2013-07-15 Thread Ravi Prakash (JIRA)
Ravi Prakash created HADOOP-9732:


 Summary: JOB_FAIL and JOB_KILL have different behaviors when they 
should ideally be same / similar.
 Key: HADOOP-9732
 URL: https://issues.apache.org/jira/browse/HADOOP-9732
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.9, 3.0.0, 2.2.0
Reporter: Ravi Prakash


After MAPREDUCE-5317, both JOB_KILL and JOB_FAIL wait for all the tasks to die 
/ be killed, and then move to their final states. We can make these two code 
paths the same. Ideally KILL_WAIT should also have a timeout like the one 
MAPREDUCE-5317 introduces.



[jira] [Created] (HADOOP-9614) smart-test-patch.sh hangs for new version of patch (2.7.1)

2013-06-01 Thread Ravi Prakash (JIRA)
Ravi Prakash created HADOOP-9614:


 Summary: smart-test-patch.sh hangs for new version of patch (2.7.1)
 Key: HADOOP-9614
 URL: https://issues.apache.org/jira/browse/HADOOP-9614
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.4-alpha, 0.23.7, 3.0.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash


patch -p0 -E --dry-run prints "checking file ..." with the new version of 
patch (2.7.1) rather than "patching file ..." as it did with older versions. 
This causes TMP2 to become empty, which causes the script to hang forever on 
this command:
PREFIX_DIRS_AND_FILES=$(cut -d '/' -f 1 | sort | uniq)
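
A sketch of the kind of guard that would avoid the hang (variable names partly 
hypothetical; the real script may differ):
{code}
# accept the dry-run output of both old and new versions of patch when
# collecting the files the patch would touch
grep -E '^(patching|checking) file ' "$REPORT" | awk '{print $3}' > "$TMP2"
{code}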




[jira] [Resolved] (HADOOP-5006) ganglia is not showing any graphs

2012-10-24 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-5006.
--

  Resolution: Fixed
Release Note: 2409 seems to have fixed this. Closing

 ganglia is not showing any graphs
 -

 Key: HADOOP-5006
 URL: https://issues.apache.org/jira/browse/HADOOP-5006
 Project: Hadoop Common
  Issue Type: Bug
  Components: contrib/cloud
Affects Versions: 0.19.0
Reporter: Stefan Groschupf
Priority: Trivial

 Ganglia is not showing any graphs since the rrd tool requires installed 
 fonts, but the Fedora Core image used as the basis to build the hadoop image 
 does not come with fonts. 
 To fix this, just install dejavu-fonts as another package.
 The line in create-hadoop-image-remote.sh should look like this:
 yum -y install rsync lynx screen ganglia-gmetad ganglia-gmond ganglia-web 
 dejavu-fonts httpd php



[jira] [Created] (HADOOP-8504) OfflineImageViewer throws an NPE

2012-06-11 Thread Ravi Prakash (JIRA)
Ravi Prakash created HADOOP-8504:


 Summary: OfflineImageViewer throws an NPE
 Key: HADOOP-8504
 URL: https://issues.apache.org/jira/browse/HADOOP-8504
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ravi Prakash


Courtesy [~mithun]
{code}
Exception in thread "main" java.lang.NullPointerException
  at org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:371)
  at org.apache.hadoop.security.User.<init>(User.java:48)
  at org.apache.hadoop.security.User.<init>(User.java:43)
  at org.apache.hadoop.security.UserGroupInformation.createRemoteUser(UserGroupInformation.java:857)
  at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.getUser(AbstractDelegationTokenIdentifier.java:91)
  at org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier.toString(DelegationTokenIdentifier.java:61)
  at org.apache.hadoop.hdfs.tools.offlineImageViewer.ImageLoaderCurrent.processDelegationTokens(ImageLoaderCurrent.java:222)
  at org.apache.hadoop.hdfs.tools.offlineImageViewer.ImageLoaderCurrent.loadImage(ImageLoaderCurrent.java:185)
  at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewer.go(OfflineImageViewer.java:129)
  at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewer.main(OfflineImageViewer.java:250)
{code}





[jira] [Created] (HADOOP-8445) Token should not print the password in toString

2012-05-29 Thread Ravi Prakash (JIRA)
Ravi Prakash created HADOOP-8445:


 Summary: Token should not print the password in toString
 Key: HADOOP-8445
 URL: https://issues.apache.org/jira/browse/HADOOP-8445
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3
Reporter: Ravi Prakash
Assignee: Ravi Prakash


This JIRA is for porting HADOOP-6622 to branch-1 since 6622 is already closed.





[jira] [Created] (HADOOP-7664) o.a.h.conf.Configuration complains of overriding final parameter even if the value with which it's attempting to override is the same.

2011-09-21 Thread Ravi Prakash (JIRA)
o.a.h.conf.Configuration complains of overriding a final parameter even if the 
value with which it's attempting to override is the same. 
--

 Key: HADOOP-7664
 URL: https://issues.apache.org/jira/browse/HADOOP-7664
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 0.23.0
 Environment: commit a2f64ee8d9312fe24780ec53b15af439a315796d
Reporter: Ravi Prakash
Assignee: Ravi Prakash
Priority: Minor
 Fix For: 0.23.0


o.a.h.conf.Configuration complains of overriding a final parameter even if the 
value with which it's attempting to override is the same. 





[jira] [Created] (HADOOP-7438) Using the hadoop-daemon.sh script to start nodes leads to a deprecated warning

2011-07-01 Thread Ravi Prakash (JIRA)
Using the hadoop-daemon.sh script to start nodes leads to a deprecated warning 
---

 Key: HADOOP-7438
 URL: https://issues.apache.org/jira/browse/HADOOP-7438
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 0.22.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash


hadoop-daemon.sh calls common/bin/hadoop for hdfs/bin/hdfs tasks, and so 
common/bin/hadoop complains it's deprecated for those uses.
