On 29 March 2013 05:54, Mohit Vadhera <[email protected]> wrote:
> Hi,
>
> I have filsystem error. when i run fsck to move corrupted blocks i get the
> following error after stopping services i get the below error. but if i
> don't start the services and run the fsck command the corrupted block
> doesn't move. I am not getting this Usergroupinformation error. it is
> looking permission error. Can any body fix it . It is an urgent issue on my
> hadoop machine. It is a standalone cluster configured using the below link
>
> https://ccp.cloudera.com/display/CDH4DOC/Installing+CDH4+on+a+Single+Linux+Node+in+Pseudo-distributed+Mode
>

1. If you've got problems w/ any non-ASF Hadoop product (here the Cloudera
one), please take it up through their support channels and forums:
http://wiki.apache.org/hadoop/InvalidJiraIssues

> 13/03/29 01:20:20 ERROR security.UserGroupInformation:
> PriviledgedActionException as:hdfs (auth:SIMPLE)
> cause:java.net.ConnectException: Call From OPERA-MAST1.ny.os.local/
> 172.20.3.119 to localhost:8020 failed on connection exception:
> java.net.ConnectException: Connection refused; For more details see:
> http://wiki.apache.org/hadoop/ConnectionRefused
> Exception in thread "main" java.net.ConnectException: Call From
> OPERA-MAST1.ny.os.local/172.20.3.119 to localhost:8020 failed on connection
> exception: java.net.ConnectException: Connection refused; For more details
> see: http://wiki.apache.org/hadoop/ConnectionRefused
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1228)
>         at

That's an error message that I actually added to avoid support calls, by
pointing to a wiki page that explains what the message means and how you
need to go about diagnosing the problem on your installation.

Did you actually read the error message, see the referenced wiki page and
follow it?

1. If you didn't recognise that there was a link to a self-help page, I'd
love to get your recommendations as to how we could make it clearer to
people such as yourself that there is a URL in the message, and that this
page should be the first place you go to start diagnosing things. Should we
drop the stack trace and just print, in capital letters:

STOP: GO TO THE WIKI PAGE http://wiki.apache.org/hadoop/ConnectionRefused

Because I'm not sure what else we could do. We could try that, but the more
detailed stack trace is designed for people who do know more about Hadoop
internals, including those commercial support teams. We can't drop the
details without removing the escalation options available to you.

2. If you did go to the wiki page, did you follow its step-by-step
instructions? If not: why not? Is there any way we could make these
instructions clearer and easier to follow? Do you think there's something
that is missing?

The ASF projects are, apart from those supported vendor channels, entirely
self-supporting through the community. We do try our utmost to help people
fend for themselves, which is why the error message includes the URL to a
diagnostics page, and the page has instructions. If this process is somehow
failing you, then I'd love some suggestions as to how it could be improved.

-Steve
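As a minimal sketch of the very first check the ConnectionRefused wiki page
is getting at (is anything actually listening on the host:port the client is
trying to reach, here localhost:8020 from the stack trace above), a plain
java.net.Socket probe is enough. The host, port and timeout below are just
placeholders taken from that error message, not anything Hadoop-specific:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Probe whether anything is listening on the address the HDFS client was
// told to use. "Connection refused" here means the same thing it does in
// the stack trace: no process is listening on that host:port, or something
// (e.g. a firewall) is actively rejecting the connection.
public class NameNodeProbe {
    public static void main(String[] args) throws IOException {
        String host = args.length > 0 ? args[0] : "localhost"; // host from the error
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 8020; // NameNode RPC port from the error
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 5000); // 5 second timeout
            System.out.println("Connected to " + host + ":" + port
                    + " - something is listening there.");
        } catch (IOException e) {
            System.out.println("Could not connect to " + host + ":" + port + ": " + e);
            throw e;
        }
    }
}

If this probe is refused as well, the NameNode simply isn't running or isn't
listening on that address, which is a service and configuration question
rather than an fsck one.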
