Matt,

Thanks for the suggestion.

I had actually forgotten about local DNS caching. I am on a Mac, so I ran

dscacheutil -flushcache

to clear the cache, and I also investigated the resolver ordering. Everything seems to be in order.

Except I still get a bogus result: it is using the old name, but with a trailing period. So it resolves to duey.local. when it should be using duey.local.xxxxx.com (which is the internal name).
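
For what it's worth, a quick way to see exactly what Java itself resolves, independent of Hadoop, is a tiny standalone program like this (ResolveCheck is just a hypothetical name for the sketch):

import java.net.InetAddress;

// Print what the JVM resolves for the local host, outside of Hadoop,
// so a stale or trailing-dot name shows up immediately.
public class ResolveCheck {
    public static void main(String[] args) throws Exception {
        InetAddress addr = InetAddress.getLocalHost();
        System.out.println("hostname:  " + addr.getHostName());
        System.out.println("canonical: " + addr.getCanonicalHostName());
        System.out.println("address:   " + addr.getHostAddress());
    }
}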



-John

On Jun 10, 2009, at 9:21 PM, Matt Massie wrote:

If you look at the documentation for the getCanonicalHostName() function (thanks, Steve)...

http://java.sun.com/javase/6/docs/api/java/net/InetAddress.html#getCanonicalHostName()

you'll see two Java security properties (networkaddress.cache.ttl and networkaddress.cache.negative.ttl) that control how long the JVM caches name lookups.
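
If those caches turn out to be the culprit, here is a minimal sketch for disabling them programmatically (they are security properties, so they go through java.security.Security rather than System.setProperty, and must be set before the first lookup in the process):

import java.security.Security;

// Disable the JVM's positive and negative DNS caches; must run before
// the first name lookup. Alternatively, edit these properties in
// $JAVA_HOME/lib/security/java.security.
public class DisableDnsCache {
    public static void main(String[] args) {
        Security.setProperty("networkaddress.cache.ttl", "0");
        Security.setProperty("networkaddress.cache.negative.ttl", "0");
    }
}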

You might take a look at your /etc/nsswitch.conf configuration as well to learn how hosts are resolved on your machine, e.g.:

$ grep hosts /etc/nsswitch.conf
hosts:      files dns

and lastly, you may want to check whether you are running nscd (the name service cache daemon). If you are, take a look at /etc/nscd.conf for the caching policy it's using.

Good luck.

-Matt



On Jun 10, 2009, at 1:09 PM, John Martyniak wrote:

That is what I thought as well: it needs to keep that information somewhere, because it needs to be able to communicate with all of the servers.

So I deleted the /tmp/had* and /tmp/hs* directories, removed the log files, and grepped for the duey name in all of the files in the config directory. The problem still exists. Originally I thought it might have had something to do with multiple entries in the .ssh/authorized_keys file, but I removed everything there and the problem still existed.

So I think I am going to grab a new install of Hadoop 0.19.1, delete the existing one, and start fresh to see if that changes anything.

Wish me luck :)

-John

On Jun 10, 2009, at 12:30 PM, Steve Loughran wrote:

John Martyniak wrote:
Does hadoop "cache" the server names anywhere? Because I changed to using DNS for name resolution, but when I go to the nodes view, it is trying to view with the old name. And I changed the hadoop-site.xml file so that it no longer has any of those values.

In SVN head, we try to get Java to tell us what is going on:
http://svn.apache.org/viewvc/hadoop/core/trunk/src/core/org/apache/hadoop/net/DNS.java

This uses InetAddress.getLocalHost().getCanonicalHostName() to get the value, which is cached for the life of the process. I don't know of anything else, but I wouldn't be surprised; the NameNode has to remember the machines where stuff was stored.
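
As a quick cross-check against that class, something like the following should print the name Hadoop's DNS helper picks (a sketch assuming the 0.19-era org.apache.hadoop.net.DNS API, where passing "default" as the interface name falls back to the canonical hostname):

import org.apache.hadoop.net.DNS;

// Ask Hadoop's own DNS utility which hostname it would choose; with
// "default" it falls back to
// InetAddress.getLocalHost().getCanonicalHostName().
public class HadoopDnsCheck {
    public static void main(String[] args) throws Exception {
        System.out.println(DNS.getDefaultHost("default"));
    }
}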



John Martyniak
President/CEO
Before Dawn Solutions, Inc.
9457 S. University Blvd #266
Highlands Ranch, CO 80126
o: 877-499-1562
c: 303-522-1756
e: j...@beforedawnsoutions.com
w: http://www.beforedawnsolutions.com



