[jira] [Created] (HADOOP-7685) Issues with hadoop-common-project\hadoop-common\src\main\packages\hadoop-setup-conf.sh file

2011-09-27 Thread Devaraj K (Created) (JIRA)
Issues with 
hadoop-common-project\hadoop-common\src\main\packages\hadoop-setup-conf.sh file 


 Key: HADOOP-7685
 URL: https://issues.apache.org/jira/browse/HADOOP-7685
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.24.0
Reporter: Devaraj K
Assignee: Devaraj K


hadoop-common-project\hadoop-common\src\main\packages\hadoop-setup-conf.sh has the 
following issues:
1. check_permission does not work as expected if there are two folders with 
$NAME as part of their name inside $PARENT, 
e.g. /home/hadoop/conf and /home/hadoop/someconf. 
The result of `ls -ln $PARENT | grep -w $NAME | awk '{print $3}'` is then not a 
single value but `0 0`, and hence the following if check becomes true.
{code}
if [ $OWNER != 0 ]; then
  RESULT=1
  break
fi
{code}
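A more robust alternative (a sketch, not the script's actual fix) is to query the owner of the exact path with `stat` instead of grepping `ls` output, so a sibling directory whose name merely contains $NAME cannot pollute the result. GNU `stat` and the demo paths below are assumptions for illustration only:

```shell
# Sketch: read the owner uid of the one intended directory directly,
# instead of parsing `ls -ln | grep -w` output.
PARENT=/tmp/hadoop-permcheck-demo
NAME=conf
mkdir -p "$PARENT/$NAME" "$PARENT/someconf"   # two dirs sharing the substring "conf"
OWNER=$(stat -c '%u' "$PARENT/$NAME")         # exactly one uid, for the exact path
if [ "$OWNER" != "0" ]; then
  RESULT=1                                    # same non-root check as before
fi
rm -rf "$PARENT"
```

Because `stat` is given a single path, $OWNER is always one uid, and the `!=` comparison stays well-formed.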

2. Spelling mistake
{code}
HADDOP_DN_ADDR=0.0.0.0:50010
{code}
it should be 

{code}
HADOOP_DN_ADDR=0.0.0.0:50010
{code}

3. HADOOP_SNN_HOST is not set, due to which hdfs-site.xml contains the following 
configuration:
{code:xml}
<property>
  <name>dfs.namenode.http-address</name>
  <value>:50070</value>
  <description>
    The address and the base port where the dfs namenode web ui will listen on.
    If the port is 0 then the server will start on a free port.
  </description>
</property>
{code}
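The empty host can be reproduced with a one-line substitution sketch; the variable name and sed pattern here are assumptions about how the template gets filled, not the script's exact code:

```shell
# With the host variable unset/empty, substitution leaves a bare ":port" value.
HADOOP_SNN_HOST=""
echo '<value>${HADOOP_SNN_HOST}:50070</value>' \
  | sed "s|\${HADOOP_SNN_HOST}|$HADOOP_SNN_HOST|"
# prints <value>:50070</value>
```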




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




configuring different number of slaves for MR jobs

2011-09-27 Thread bikash sharma
Hi -- Can we specify a different set of slaves for each MapReduce job run?
I tried using the --config option and specified a different set of slaves in
the slaves config file. However, it does not use the selected slaves set but
the one initially configured.

Any help?

Thanks,
Bikash


[jira] [Resolved] (HADOOP-4905) static initializers for default config files duplicate code

2011-09-27 Thread Doug Cutting (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doug Cutting resolved HADOOP-4905.
--

Resolution: Invalid

Resolving as 'Invalid' since these static initializers are no longer present in 
the codebase.

 static initializers for default config files duplicate code
 ---

 Key: HADOOP-4905
 URL: https://issues.apache.org/jira/browse/HADOOP-4905
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 0.20.0
Reporter: Doug Cutting
Priority: Minor
  Labels: newbie

 The default config files are loaded by static initializers.  The code in 
 these initializers is two lines that contain string literals.  This is 
 fragile, duplicated code.





Hadoop-Common-22-branch - Build # 84 - Failure

2011-09-27 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-22-branch/84/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 3750 lines...]

compile-rcc-compiler:
Trying to override old definition of task recordcc

compile-core-classes:

ivy-resolve-test:

ivy-retrieve-test:

generate-test-records:

generate-avro-records:
Trying to override old definition of task schema
   [schema] SLF4J: The requested version 1.5.11 by your slf4j binding is not 
compatible with [1.6]
   [schema] SLF4J: See http://www.slf4j.org/codes.html#version_mismatch for 
further details.

generate-avro-protocols:
Trying to override old definition of task schema
   [schema] SLF4J: The requested version 1.5.11 by your slf4j binding is not 
compatible with [1.6]
   [schema] SLF4J: See http://www.slf4j.org/codes.html#version_mismatch for 
further details.

compile-core-test:
[javac] Compiling 7 source files to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/build-fi/test/core/classes
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
Trying to override old definition of task paranamer
[paranamer] Generating parameter names from 
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/src/test/core
 to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/build-fi/test/core/classes
   [delete] Deleting directory 
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/build-fi/test/cache
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/build-fi/test/cache
 [copy] Copying 1 file to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/build-fi/test/cache

run-test-core:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/build-fi/test/data
 [copy] Copying 3 files to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/build-fi/test/webapps
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/build-fi/test/logs
 [copy] Copying 1 file to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/build-fi/test/extraconf
 [copy] Copying 1 file to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/build-fi/test/extraconf

checkfailure:

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/build.xml:788:
 Tests failed!

Total time: 5 minutes 39 seconds
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
[WARNINGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Recording fingerprints
Updating HADOOP-7646
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
3 tests failed.
REGRESSION:  org.apache.hadoop.fs.TestPath.testAvroReflect

Error Message:
null

Stack Trace:
java.io.EOFException
at org.apache.avro.io.BinaryDecoder.readInt(BinaryDecoder.java:145)
at org.apache.avro.io.BinaryDecoder.readString(BinaryDecoder.java:251)
at org.apache.avro.io.ValidatingDecoder.readString(ValidatingDecoder.java:107)
at org.apache.avro.generic.GenericDatumReader.readString(GenericDatumReader.java:321)
at org.apache.avro.reflect.ReflectDatumReader.readString(ReflectDatumReader.java:117)
at org.apache.avro.reflect.ReflectDatumReader.readString(ReflectDatumReader.java:98)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:144)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:129)
at org.apache.hadoop.io.AvroTestUtil.testReflect(AvroTestUtil.java:52)
at org.apache.hadoop.io.AvroTestUtil.testReflect(AvroTestUtil.java:37)
at org.apache.hadoop.fs.TestPath.testAvroReflect(TestPath.java:212)


REGRESSION:  org.apache.hadoop.io.TestEnumSetWritable.testAvroReflect

Error Message:
null

Stack Trace:
java.io.EOFException
at org.apache.avro.io.BinaryDecoder.readLong(BinaryDecoder.java:182)
at org.apache.avro.io.BinaryDecoder.doReadItemCount(BinaryDecoder.java:343)
at org.apache.avro.io.BinaryDecoder.readArrayStart(BinaryDecoder.java:375)
at org.apache.avro.io.ValidatingDecoder.readArrayStart(ValidatingDecoder.java:173)
at 

problem of lost name-node

2011-09-27 Thread Mirko Kämpf
Hi,
during the Cloudera Developer Training in Berlin I came up with an idea
regarding a lost name-node.
In that case, all data blocks are effectively lost. One solution could be to
keep, on each node, a table relating filenames and block ids, which can be
scanned after a name-node is lost. Alternatively, every block could carry a
kind of backlink to its filename, plus the total number of blocks and/or a
total checksum. This would make recovery easy, with minimal overhead.

Now I would like to ask the developer community whether there is any good
reason not to do this, before I start figuring out where to begin an
implementation of such a feature.

Thanks,
Mirko


[jira] [Created] (HADOOP-7686) update hadoop rpm package to create symlink to libhadoop.so lib

2011-09-27 Thread Giridharan Kesavan (Created) (JIRA)
update hadoop rpm package to create symlink to libhadoop.so lib
---

 Key: HADOOP-7686
 URL: https://issues.apache.org/jira/browse/HADOOP-7686
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.205.0, 0.20.206.0
Reporter: Giridharan Kesavan


rpm installation of hadoop doesn't seem to create a symlink to libhadoop.so





[jira] [Created] (HADOOP-7687) Make getProtocolSignature public

2011-09-27 Thread Sanjay Radia (Created) (JIRA)
Make getProtocolSignature public 
-

 Key: HADOOP-7687
 URL: https://issues.apache.org/jira/browse/HADOOP-7687
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Sanjay Radia
Assignee: Sanjay Radia
Priority: Minor
 Attachments: protSigPublic.patch







[jira] [Resolved] (HADOOP-7683) hdfs-site.xml template has properties that are not used in 20

2011-09-27 Thread Matt Foley (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley resolved HADOOP-7683.


   Resolution: Fixed
Fix Version/s: 0.20.205.0
 Hadoop Flags: Reviewed

Committed to 0.20-security and 0.20.205.  Thanks, Arpit!

 hdfs-site.xml template has properties that are not used in 20
 -

 Key: HADOOP-7683
 URL: https://issues.apache.org/jira/browse/HADOOP-7683
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.205.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Fix For: 0.20.205.0

 Attachments: HADOOP-7683.20s.patch


 properties dfs.namenode.http-address and dfs.namenode.https-address should be 
 removed





[jira] [Resolved] (HADOOP-7646) Make hadoop-common use same version of avro as HBase

2011-09-27 Thread Konstantin Shvachko (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko resolved HADOOP-7646.
-

  Resolution: Fixed
Hadoop Flags: Reviewed

I just committed this. Thank you Joep.

 Make hadoop-common use same version of avro as HBase
 

 Key: HADOOP-7646
 URL: https://issues.apache.org/jira/browse/HADOOP-7646
 Project: Hadoop Common
  Issue Type: Bug
  Components: io, ipc
Affects Versions: 0.22.0
Reporter: Joep Rottinghuis
Assignee: Joep Rottinghuis
 Fix For: 0.22.0

 Attachments: HADOOP-7646-branch-22-shv.patch, 
 HADOOP-7646-branch-22-shv.patch, HADOOP-7646-branch-22.patch, 
 HADOOP-7646-branch-22.patch, HADOOP-7646-branch-22.patch, 
 HADOOP-7646-branch-22.patch


 HBase depends on avro 1.5.3 whereas hadoop-common depends on 1.3.2.
 When building HBase on top of hadoop, these versions should be consistent.





[jira] [Created] (HADOOP-7688) When a servlet filter throws an exception in init(..), the Jetty server failed silently.

2011-09-27 Thread Tsz Wo (Nicholas), SZE (Created) (JIRA)
When a servlet filter throws an exception in init(..), the Jetty server failed 
silently. 
-

 Key: HADOOP-7688
 URL: https://issues.apache.org/jira/browse/HADOOP-7688
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Tsz Wo (Nicholas), SZE


When a servlet filter throws a ServletException in init(..), the exception is 
logged by Jetty but not re-thrown to the caller.  As a result, the Jetty server 
fails silently.
