Build failed in Jenkins: Hadoop-Common-trunk #488

2012-07-30 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-trunk/488/

--
[...truncated 26568 lines...]
[DEBUG]   (s) debug = false
[DEBUG]   (s) effort = Default
[DEBUG]   (s) failOnError = true
[DEBUG]   (s) findbugsXmlOutput = false
[DEBUG]   (s) findbugsXmlOutputDirectory = 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target
[DEBUG]   (s) fork = true
[DEBUG]   (s) includeTests = false
[DEBUG]   (s) localRepository =id: local
  url: file:///home/jenkins/.m2/repository/
   layout: none

[DEBUG]   (s) maxHeap = 512
[DEBUG]   (s) nested = false
[DEBUG]   (s) outputDirectory = 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/site
[DEBUG]   (s) outputEncoding = UTF-8
[DEBUG]   (s) pluginArtifacts = 
[org.codehaus.mojo:findbugs-maven-plugin:maven-plugin:2.3.2:, 
com.google.code.findbugs:bcel:jar:1.3.9:compile, 
org.codehaus.gmaven:gmaven-mojo:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-api:jar:1.3:compile, 
org.codehaus.gmaven.feature:gmaven-feature-api:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-1.5:jar:1.3:compile, 
org.codehaus.gmaven.feature:gmaven-feature-support:jar:1.3:compile, 
org.codehaus.groovy:groovy-all-minimal:jar:1.5.8:compile, 
org.apache.ant:ant:jar:1.7.1:compile, 
org.apache.ant:ant-launcher:jar:1.7.1:compile, jline:jline:jar:0.9.94:compile, 
org.codehaus.plexus:plexus-interpolation:jar:1.1:compile, 
org.codehaus.gmaven:gmaven-plugin:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-loader:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-support:jar:1.3:compile, 
org.sonatype.gshell:gshell-io:jar:2.0:compile, 
com.thoughtworks.qdox:qdox:jar:1.10:compile, 
org.apache.maven.shared:file-management:jar:1.2.1:compile, 
org.apache.maven.shared:maven-shared-io:jar:1.1:compile, 
commons-lang:commons-lang:jar:2.4:compile, 
org.slf4j:slf4j-api:jar:1.5.10:compile, 
org.sonatype.gossip:gossip:jar:1.2:compile, 
org.apache.maven.reporting:maven-reporting-impl:jar:2.1:compile, 
commons-validator:commons-validator:jar:1.2.0:compile, 
commons-beanutils:commons-beanutils:jar:1.7.0:compile, 
commons-digester:commons-digester:jar:1.6:compile, 
commons-logging:commons-logging:jar:1.0.4:compile, oro:oro:jar:2.0.8:compile, 
xml-apis:xml-apis:jar:1.0.b2:compile, 
org.codehaus.groovy:groovy-all:jar:1.7.4:compile, 
org.apache.maven.reporting:maven-reporting-api:jar:3.0:compile, 
org.apache.maven.doxia:doxia-core:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-logging-api:jar:1.1.3:compile, 
xerces:xercesImpl:jar:2.9.1:compile, 
commons-httpclient:commons-httpclient:jar:3.1:compile, 
commons-codec:commons-codec:jar:1.2:compile, 
org.apache.maven.doxia:doxia-sink-api:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-decoration-model:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-site-renderer:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-module-xhtml:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-module-fml:jar:1.1.3:compile, 
org.codehaus.plexus:plexus-i18n:jar:1.0-beta-7:compile, 
org.codehaus.plexus:plexus-velocity:jar:1.1.7:compile, 
org.apache.velocity:velocity:jar:1.5:compile, 
commons-collections:commons-collections:jar:3.2:compile, 
org.apache.maven.shared:maven-doxia-tools:jar:1.2.1:compile, 
commons-io:commons-io:jar:1.4:compile, 
com.google.code.findbugs:findbugs-ant:jar:1.3.9:compile, 
com.google.code.findbugs:findbugs:jar:1.3.9:compile, 
com.google.code.findbugs:jsr305:jar:1.3.9:compile, 
com.google.code.findbugs:jFormatString:jar:1.3.9:compile, 
com.google.code.findbugs:annotations:jar:1.3.9:compile, 
dom4j:dom4j:jar:1.6.1:compile, jaxen:jaxen:jar:1.1.1:compile, 
jdom:jdom:jar:1.0:compile, xom:xom:jar:1.0:compile, 
xerces:xmlParserAPIs:jar:2.6.2:compile, xalan:xalan:jar:2.6.0:compile, 
com.ibm.icu:icu4j:jar:2.6.1:compile, asm:asm:jar:3.1:compile, 
asm:asm-analysis:jar:3.1:compile, asm:asm-commons:jar:3.1:compile, 
asm:asm-util:jar:3.1:compile, asm:asm-tree:jar:3.1:compile, 
asm:asm-xml:jar:3.1:compile, jgoodies:plastic:jar:1.2.0:compile, 
org.codehaus.plexus:plexus-resources:jar:1.0-alpha-4:compile, 
org.codehaus.plexus:plexus-utils:jar:1.5.1:compile]
[DEBUG]   (s) project = MavenProject: 
org.apache.hadoop:hadoop-common-project:3.0.0-SNAPSHOT @ 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/pom.xml
[DEBUG]   (s) relaxed = false
[DEBUG]   (s) remoteArtifactRepositories = [   id: apache.snapshots.https
  url: https://repository.apache.org/content/repositories/snapshots
   layout: default
snapshots: [enabled = true, update = daily]
 releases: [enabled = true, update = daily]
,id: repository.jboss.org
  url: http://repository.jboss.org/nexus/content/groups/public/
   layout: default
snapshots: [enabled = false, update = daily]
 releases: [enabled = true, update = daily]
,id: central
  url: http://repo1.maven.org/maven2
   layout: default
snapshots: [enabled = 

[jira] [Created] (HADOOP-8632) Configuration leaking class-loaders

2012-07-30 Thread Costin Leau (JIRA)
Costin Leau created HADOOP-8632:
---

 Summary: Configuration leaking class-loaders
 Key: HADOOP-8632
 URL: https://issues.apache.org/jira/browse/HADOOP-8632
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Costin Leau


The newly introduced CACHE_CLASSES leaks class loaders, causing the associated 
classes to never be reclaimed.

One solution is to remove the cache itself, since each class loader 
implementation already caches the classes it loads; preventing an 
exception from being raised is just a micro-optimization that, as one can tell, 
causes bugs instead of improving anything.
In fact, I would argue that in a highly concurrent environment the WeakHashMap 
synchronization/lookup probably costs more than creating the exception itself.

Another is to prevent the leak from occurring, by inserting the loaded class 
into the WeakHashMap wrapped in a WeakReference. Otherwise the class holds a 
strong reference to its class loader (the key), meaning neither gets GC'ed.
And since CACHE_CLASSES is static, even if the originating Configuration 
instance gets GC'ed, its class loader won't.
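
A minimal sketch of the second proposal (ClassCache and getClassByName are 
illustrative stand-ins, not the actual Hadoop Configuration code): wrap each 
cached Class in a WeakReference so the value no longer pins its ClassLoader key.

import java.lang.ref.WeakReference;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.WeakHashMap;

final class ClassCache {
    // Keys are only weakly held; wrapping the Class values in WeakReference
    // keeps them from strongly referencing their ClassLoader (the key),
    // which would otherwise defeat the WeakHashMap and leak both the
    // loader and everything it loaded.
    private static final Map<ClassLoader, Map<String, WeakReference<Class<?>>>>
            CACHE_CLASSES = Collections.synchronizedMap(new WeakHashMap<>());

    static Class<?> getClassByName(ClassLoader loader, String name)
            throws ClassNotFoundException {
        Map<String, WeakReference<Class<?>>> perLoader;
        synchronized (CACHE_CLASSES) {
            perLoader = CACHE_CLASSES.get(loader);
            if (perLoader == null) {
                perLoader = Collections.synchronizedMap(new HashMap<>());
                CACHE_CLASSES.put(loader, perLoader);
            }
        }
        WeakReference<Class<?>> ref = perLoader.get(name);
        Class<?> clazz = (ref == null) ? null : ref.get();
        if (clazz == null) {
            clazz = Class.forName(name, true, loader);
            perLoader.put(name, new WeakReference<>(clazz));
        }
        return clazz;
    }
}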

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8633) Interrupted FsShell copies may leave tmp files

2012-07-30 Thread Daryn Sharp (JIRA)
Daryn Sharp created HADOOP-8633:
---

 Summary: Interrupted FsShell copies may leave tmp files
 Key: HADOOP-8633
 URL: https://issues.apache.org/jira/browse/HADOOP-8633
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha, 0.23.0, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical


Interrupting a copy, e.g. via SIGINT, may cause tmp files not to be removed.  If 
the user is copying large files, the remnants will eat into the user's 
quota.
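
A hedged sketch of one possible mitigation (SafeCopy and the ._COPYING_. prefix 
are illustrative assumptions, not the FsShell implementation): copy through a 
temporary path, register a shutdown hook so an interrupt removes the remnant, 
and rename into place only on success.

import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class SafeCopy {
    public static void copy(final FileSystem fs, Path src, Path dst)
            throws IOException {
        final Path tmp = new Path(dst.getParent(), "._COPYING_." + dst.getName());
        // If the JVM is interrupted mid-copy (e.g. SIGINT), the hook removes
        // the temporary remnant so it cannot eat into the user's quota.
        Thread cleanup = new Thread(() -> {
            try {
                if (fs.exists(tmp)) {
                    fs.delete(tmp, false);
                }
            } catch (IOException ignored) {
                // best effort during shutdown
            }
        });
        Runtime.getRuntime().addShutdownHook(cleanup);
        try {
            FileUtil.copy(fs, src, fs, tmp, false, fs.getConf());
            if (!fs.rename(tmp, dst)) {  // publish only after a complete copy
                throw new IOException("rename to " + dst + " failed");
            }
        } finally {
            try {
                Runtime.getRuntime().removeShutdownHook(cleanup);
            } catch (IllegalStateException ignored) {
                // shutdown already in progress; the hook itself cleans up
            }
        }
    }
}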

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8634) Ensure FileSystem#close doesn't squawk for deleteOnExit paths

2012-07-30 Thread Daryn Sharp (JIRA)
Daryn Sharp created HADOOP-8634:
---

 Summary: Ensure FileSystem#close doesn't squawk for deleteOnExit 
paths
 Key: HADOOP-8634
 URL: https://issues.apache.org/jira/browse/HADOOP-8634
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 3.0.0, 2.2.0-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical


{{FileSystem#deleteOnExit}} doesn't check whether the path exists before 
attempting to delete it.  Errors may cause unnecessary INFO log squawks.
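
A minimal sketch of the guard this suggests (QuietCleanup is a hypothetical 
stand-in, not the actual FileSystem close path): check existence before 
deleting so paths that are already gone produce no log noise.

import java.io.IOException;
import java.util.Set;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class QuietCleanup {
    // Delete each registered path on close, skipping paths that are
    // already gone so no unnecessary INFO squawk is emitted.
    static void deleteOnExitPaths(FileSystem fs, Set<Path> paths) {
        for (Path p : paths) {
            try {
                if (fs.exists(p)) {
                    fs.delete(p, true);
                }
            } catch (IOException e) {
                // non-fatal during close; at most log quietly
            }
        }
        paths.clear();
    }
}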

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: Re: regarding _HOST token replacement in security hadoop

2012-07-30 Thread Aaron T. Myers
What do you have set as fs.defaultFS in your configuration? Make sure
that it uses a fully-qualified domain name.

--
Aaron T. Myers
Software Engineer, Cloudera
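
For example, the core-site.xml entry might look like this (the host name below 
is an illustrative placeholder, not from the original thread):

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode.example.com:8020</value>
</property>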



On Fri, Jul 27, 2012 at 1:57 PM, Arpit Gupta ar...@hortonworks.com wrote:

 That does seem to be a valid issue. Could you log a JIRA for it?

 Thanks


 On Thu, Jul 26, 2012 at 7:32 PM, Wangwenli wangwe...@huawei.com wrote:

  Could you spend one minute to check whether the code below will cause an
  issue or not?
 
  In org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser(),
  it uses socAddr.getHostName() to get _HOST, but in
  org.apache.hadoop.security.SecurityUtil.replacePattern(), in
  getLocalHostName(), it uses getCanonicalHostName() to get _HOST.
 
  Meanwhile I will check what you said. Thank you~
 
 
 
  -----Original Message-----
  From: Arpit Gupta [mailto:ar...@hortonworks.com]
  Sent: July 27, 2012 10:03
  To: common-dev@hadoop.apache.org
  Subject: Re: regarding _HOST token replacement in security hadoop
 
  You need to use HTTP/_h...@site.com, as that is the principal needed by
  SPNEGO. So you would need to create the HTTP/_HOST principal and add it to
  the same keytab (/home/hdfs/keytab/nn.service.keytab).
 
  --
  Arpit Gupta
  Hortonworks Inc.
  http://hortonworks.com/
 
  On Jul 26, 2012, at 6:54 PM, Wangwenli wangwe...@huawei.com wrote:
 
   Thank you for your response.
   I am using hadoop-2.0.0-alpha from the Apache site.  In which version
  should it be configured with HTTP/_h...@site.com?  I think not in
  hadoop-2.0.0-alpha, because I can log in successfully with another
  principal; please refer to the log below:
  
   2012-07-23 22:48:17,303 INFO
 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler:
  Login using keytab /home/hdfs/keytab/nn.service.keytab, for principal
  nn/167-52-0-56.site@site
   2012-07-23 22:48:17,310 INFO
 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler:
  Initialized, principal [nn/167-52-0-56.site@site] from keytab
  [/home/hdfs/keytab/nn.service.keytab]
  
  
   -----Original Message-----
   From: Arpit Gupta [mailto:ar...@hortonworks.com]
   Sent: July 27, 2012 9:22
   To: common-dev@hadoop.apache.org
   Subject: Re: regarding _HOST token replacement in security hadoop
  
   what version of hadoop are you using?
  
   also
  
   dfs.web.authentication.kerberos.principal should be set to
   HTTP/_h...@site.com
  
   --
   Arpit Gupta
   Hortonworks Inc.
   http://hortonworks.com/
  
   On Jul 26, 2012, at 6:11 PM, Wangwenli wangwe...@huawei.com wrote:
  
   Hi all,
  
    I configured the following in hdfs-site.xml:
  
   <property>
     <name>dfs.namenode.kerberos.principal</name>
     <value>nn/_HOST@site</value>
   </property>
  
   <property>
     <name>dfs.web.authentication.kerberos.principal</name>
     <value>nn/_HOST@site</value>
   </property>
  
    When starting up the NameNode, I found that the NameNode uses principal
   nn/167-52-0-56@site to log in, but the HTTP server uses
   nn/167-52-0-56.site@site to log in, so startup fails.
  
    I checked the code:
   
    The NameNode uses socAddr.getHostName() to get the hostname in
   org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser.
   
    But the HTTP server's default hostname is 0.0.0.0, so in
   org.apache.hadoop.security.SecurityUtil.replacePattern it gets the
   hostname by invoking getLocalHostName, which uses getCanonicalHostName().
   
    I think this inconsistency is wrong; can someone confirm it? Should I
   raise a bug?
   
    Thanks
  
 
 



NameNode not restarting?

2012-07-30 Thread mouradk

Dear all,

We are running a Hadoop 0.20.2 single node with HBase 0.20.4 and cannot restart 
the NameNode after the disk got full. I have freed up space, but the NameNode 
still will not start, and I get the following error.

Someone has suggested that the edits file is corrupted. How do you fix that? 
Thanks.

STARTUP_MSG:   build 
=https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; 
compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
/
2012-07-30 16:02:23,649 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: 
Initializing RPC Metrics with hostName=NameNode, port=50001
2012-07-30 16:02:23,656 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
Namenode up at: localhost/127.0.0.1:50001
2012-07-30 16:02:23,659 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: 
Initializing JVM Metrics with processName=NameNode, sessionId=null
2012-07-30 16:02:23,660 INFO 
org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing 
NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2012-07-30 16:02:23,714 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop,hadoop
2012-07-30 16:02:23,714 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2012-07-30 16:02:23,714 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=false
2012-07-30 16:02:23,721 INFO 
org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: 
Initializing FSNamesystemMetrics using context 
object:org.apache.hadoop.metrics.spi.NullContext
2012-07-30 16:02:23,723 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered 
FSNamesystemStatusMBean
2012-07-30 16:02:23,756 INFO org.apache.hadoop.hdfs.server.common.Storage: 
Number of files = 533
2012-07-30 16:02:23,833 INFO org.apache.hadoop.hdfs.server.common.Storage: 
Number of files under construction = 2
2012-07-30 16:02:23,835 INFO org.apache.hadoop.hdfs.server.common.Storage: 
Image file of size 55400 loaded in 0 seconds.
2012-07-30 16:02:23,844 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: 
java.lang.NumberFormatException: For input string: 1343506
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
at java.lang.Long.parseLong(Long.java:419)
at java.lang.Long.parseLong(Long.java:468)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.readLong(FSEditLog.java:1273)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadFSEdits(FSEditLog.java:775)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:992)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:812)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:364)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.init(FSNamesystem.java:292)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
at org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)

2012-07-30 16:02:23,845 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
SHUTDOWN_MSG:


Your help is much appreciated!!


Mouradk
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)



Fix a corrupt edits file?

2012-07-30 Thread mouradk
Hello all,

I have just had a problem with a NameNode restart, and someone on the mailing 
list kindly suggested that the edits file was corrupted. I have made a backup 
copy of the file and checked my /namesecondary/previous.checkpoint, but the 
edits file there is an empty 4 KB file with only '?' characters inside.

This suggests to me that I cannot recover from the SecondaryNameNode? How do you 
fix this problem?

Thanks for your help.

Original error log:
STARTUP_MSG:   build 
=https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; 
compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
/
2012-07-30 16:02:23,649 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: 
Initializing RPC Metrics with hostName=NameNode, port=50001
2012-07-30 16:02:23,656 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
Namenode up at: localhost/127.0.0.1:50001
2012-07-30 16:02:23,659 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: 
Initializing JVM Metrics with processName=NameNode, sessionId=null
2012-07-30 16:02:23,660 INFO 
org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing 
NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2012-07-30 16:02:23,714 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop,hadoop
2012-07-30 16:02:23,714 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2012-07-30 16:02:23,714 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=false
2012-07-30 16:02:23,721 INFO 
org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: 
Initializing FSNamesystemMetrics using context 
object:org.apache.hadoop.metrics.spi.NullContext
2012-07-30 16:02:23,723 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered 
FSNamesystemStatusMBean
2012-07-30 16:02:23,756 INFO org.apache.hadoop.hdfs.server.common.Storage: 
Number of files = 533
2012-07-30 16:02:23,833 INFO org.apache.hadoop.hdfs.server.common.Storage: 
Number of files under construction = 2
2012-07-30 16:02:23,835 INFO org.apache.hadoop.hdfs.server.common.Storage: 
Image file of size 55400 loaded in 0 seconds.
2012-07-30 16:02:23,844 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: 
java.lang.NumberFormatException: For input string: 1343506
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
at java.lang.Long.parseLong(Long.java:419)
at java.lang.Long.parseLong(Long.java:468)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.readLong(FSEditLog.java:1273)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadFSEdits(FSEditLog.java:775)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:992)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:812)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:364)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.init(FSNamesystem.java:292)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
at org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)

2012-07-30 16:02:23,845 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
SHUTDOWN_MSG:



Mouradk
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)



Re: Fix a corrupt edits file?

2012-07-30 Thread Kihwal Lee
Probably the last entry is partial, or is complete but not terminated
properly. You need to hex-edit the file in order to correct the error. You
can also pull in HDFS-1378 and figure out the offset where you can put
OP_INVALID (0xff). HDFS-3055 implements an interactive recovery mode,
which makes it even easier.

Kihwal
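
For illustration, a hedged sketch of the manual patch described above 
(MarkEditsEnd is a hypothetical helper, not the HDFS-1378/HDFS-3055 tooling; 
back up the edits file first and find the offset with a hex editor):

import java.io.IOException;
import java.io.RandomAccessFile;

public class MarkEditsEnd {
    public static void main(String[] args) throws IOException {
        String edits = args[0];                 // path to the edits file
        long offset = Long.parseLong(args[1]);  // start of the bad entry
        try (RandomAccessFile raf = new RandomAccessFile(edits, "rw")) {
            raf.seek(offset);
            raf.write(0xff);  // OP_INVALID: marks the logical end of the log
        }
    }
}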




On 7/30/12 12:30 PM, mouradk mourad...@googlemail.com wrote:

Hello all,

I have just had a problem with a NameNode restart, and someone on the
mailing list kindly suggested that the edits file was corrupted. I have
made a backup copy of the file and checked my
/namesecondary/previous.checkpoint, but the edits file there is an empty
4 KB file with only '?' characters inside.

This suggests to me that I cannot recover from the SecondaryNameNode? How
do you fix this problem?

Thanks for your help.

Original error log:
STARTUP_MSG:   build
=https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
/
2012-07-30 16:02:23,649 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
Initializing RPC Metrics with hostName=NameNode, port=50001
2012-07-30 16:02:23,656 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at:
localhost/127.0.0.1:50001
2012-07-30 16:02:23,659 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
Initializing JVM Metrics with processName=NameNode, sessionId=null
2012-07-30 16:02:23,660 INFO
org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics:
Initializing NameNodeMeterics using context
object:org.apache.hadoop.metrics.spi.NullContext
2012-07-30 16:02:23,714 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop,hadoop
2012-07-30 16:02:23,714 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2012-07-30 16:02:23,714 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
isPermissionEnabled=false
2012-07-30 16:02:23,721 INFO
org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
Initializing FSNamesystemMetrics using context
object:org.apache.hadoop.metrics.spi.NullContext
2012-07-30 16:02:23,723 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
FSNamesystemStatusMBean
2012-07-30 16:02:23,756 INFO
org.apache.hadoop.hdfs.server.common.Storage: Number of files = 533
2012-07-30 16:02:23,833 INFO
org.apache.hadoop.hdfs.server.common.Storage: Number of files under
construction = 2
2012-07-30 16:02:23,835 INFO
org.apache.hadoop.hdfs.server.common.Storage: Image file of size 55400
loaded in 0 seconds.
2012-07-30 16:02:23,844 ERROR
org.apache.hadoop.hdfs.server.namenode.NameNode:
java.lang.NumberFormatException: For input string: 1343506
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
at java.lang.Long.parseLong(Long.java:419)
at java.lang.Long.parseLong(Long.java:468)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.readLong(FSEditLog.java:1273)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadFSEdits(FSEditLog.java:775)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:992)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:812)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:364)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.init(FSNamesystem.java:292)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
at org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)

2012-07-30 16:02:23,845 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:



Mouradk


Mouradk
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)



[jira] [Created] (HADOOP-8636) Decommissioned nodes are included in cluster after switch which is not expected

2012-07-30 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-8636:


 Summary: Decommissioned nodes are included in cluster after switch 
which is not expected
 Key: HADOOP-8636
 URL: https://issues.apache.org/jira/browse/HADOOP-8636
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.0.0-alpha, 2.1.0-alpha, 2.0.1-alpha
Reporter: Brahma Reddy Battula


Scenario:
=

Start the active NameNode (ANN) and standby NameNode (SNN) with three DataNodes.

Exclude DN1 from the cluster by using the decommission feature 

(./hdfs dfsadmin -fs hdfs://ANNIP:8020 -refreshNodes)

After the decommission succeeds, perform a switch so that the SNN becomes active.

Now the excluded node (DN1) is included in the cluster, and files can be written 
to the excluded node since it is no longer excluded there.

The SNN (which was active before the switch) UI shows decommissioned=1, while 
the ANN UI shows decommissioned=0.
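
A hedged workaround sketch (SNNIP is an assumed placeholder for the standby's 
RPC address; this is a mitigation, not a fix): since the refresh apparently 
reaches only the NameNode it connects to, apply the exclude list to each 
NameNode explicitly.

./hdfs dfsadmin -fs hdfs://ANNIP:8020 -refreshNodes
./hdfs dfsadmin -fs hdfs://SNNIP:8020 -refreshNodes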

One more observation:


All dfsadmin commands create a proxy only on nn1, irrespective of which 
NameNode is active or standby. I think this also needs another look.


I do not understand why dfsadmin commands are not HA-aware.

Please correct me if I am wrong.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira