The IP is a static address through Comcast, and we point gslbiotech.com at it as well (HTTP works with either the hostname or the IP, so I think the IP interface is live). I don't know whether that leading / means anything. Note that hadoop binds just fine to the 500XX ports on that IP.
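To make Ryan's question below ("do you have an interface for that IP?") concrete: bind(2) only succeeds on an address some local interface actually owns. A minimal Python sketch of that behavior (illustrative only; the function name is mine, and this is not HBase code -- the assumption being tested is that 97.86.88.18 may live on the Comcast modem/router rather than on this box's NIC):

```python
import errno
import socket

def can_bind(ip, port=0):
    """Try to bind a TCP socket to ip; return True on success, False on
    EADDRNOTAVAIL ("Cannot assign requested address"), re-raise otherwise."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((ip, port))
        return True
    except OSError as e:
        if e.errno == errno.EADDRNOTAVAIL:
            return False
        raise
    finally:
        s.close()

# Loopback is always locally assigned, so this succeeds:
print(can_bind("127.0.0.1"))      # True

# 203.0.113.1 (TEST-NET-3, never assigned locally) fails with the same
# errno the HMaster stack trace shows for 97.86.88.18:
print(can_bind("203.0.113.1"))    # False
```

If `can_bind("97.86.88.18")` were run on this box and returned False, that would confirm the NAT theory: the hostname resolves to the public address, but no local interface carries it.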
Michael

On Tue, Sep 14, 2010 at 12:41 AM, Ryan Rawson <[email protected]> wrote:
> dur my mistake look at this line:
>
> java.net.BindException: Problem binding to /97.86.88.18:60000 : Cannot
>
> do you have an interface for that IP?
>
> we use the hostname to find the IP and then bind to that IP.
>
> -ryan
>
> On Mon, Sep 13, 2010 at 10:36 PM, Michael Scott <[email protected]> wrote:
> > I wish it were so, but no port 600XX is in use:
> >
> > [root]# netstat -anp | grep 600
> > unix  3      [ ]         STREAM     CONNECTED     8600     1480/avahi-daemon:
> >
> > thanks,
> > Michael
> >
> > On Tue, Sep 14, 2010 at 12:22 AM, Ryan Rawson <[email protected]> wrote:
> >
> >> you can use:
> >>
> >> netstat -anp
> >>
> >> to figure out which process is using port 60000.
> >>
> >> -ryan
> >>
> >> On Mon, Sep 13, 2010 at 10:16 PM, Michael Scott <[email protected]> wrote:
> >> > Hi,
> >> >
> >> > I am trying to install a standalone hbase server on Fedora Core 11. I have
> >> > hadoop running:
> >> >
> >> > bash-4.0$ jps
> >> > 30908 JobTracker
> >> > 30631 NameNode
> >> > 30824 SecondaryNameNode
> >> > 30731 DataNode
> >> > 30987 TaskTracker
> >> > 31137 Jps
> >> >
> >> > The only edit I have made to the hbase-0.20.6 directory from the tarball is
> >> > to point to the Java installation (the same as used by hadoop):
> >> > export JAVA_HOME=/usr/lib/jvm/java-1.6.0-sun/
> >> >
> >> > I have verified sshd passwordless login for hadoop for all variations of the
> >> > hostname (localhost, qualifiedname.com, www.qualifiedname.com, straight IP
> >> > address), and have added the qualified hostnames to /etc/hosts just to be
> >> > sure.
> >> >
> >> > When I attempt to start the hbase server with start-hbase.sh (as hadoop) the
> >> > following appears in the log file:
> >> >
> >> > 2010-09-14 00:36:45,555 INFO org.apache.hadoop.hbase.master.HMaster: My
> >> > address is qualifiedname.com:60000
> >> > 2010-09-14 00:36:45,682 ERROR org.apache.hadoop.hbase.master.HMaster: Can
> >> > not start master
> >> > java.net.BindException: Problem binding to /97.86.88.18:60000 : Cannot
> >> > assign requested address
> >> >     at org.apache.hadoop.hbase.ipc.HBaseServer.bind(HBaseServer.java:179)
> >> >     at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.<init>(HBaseServer.java:242)
> >> >     at org.apache.hadoop.hbase.ipc.HBaseServer.<init>(HBaseServer.java:998)
> >> >     at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.<init>(HBaseRPC.java:637)
> >> >     at org.apache.hadoop.hbase.ipc.HBaseRPC.getServer(HBaseRPC.java:596)
> >> >     at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:224)
> >> >     at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:94)
> >> >     at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:78)
> >> >     at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1229)
> >> >     at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1274)
> >> > Caused by: java.net.BindException: Cannot assign requested address
> >> >     at sun.nio.ch.Net.bind(Native Method)
> >> >     at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
> >> >     at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
> >> >     at org.apache.hadoop.hbase.ipc.HBaseServer.bind(HBaseServer.java:177)
> >> >     ... 9 more
> >> >
> >> > At this point zookeeper is apparently running, but hbase master is not:
> >> >
> >> > bash-4.0$ jps
> >> > 31454 HQuorumPeer
> >> > 30908 JobTracker
> >> > 30631 NameNode
> >> > 30824 SecondaryNameNode
> >> > 30731 DataNode
> >> > 31670 Jps
> >> > 30987 TaskTracker
> >> >
> >> > I am stumped -- the documentation simply says that the standalone server
> >> > should work out of the box, and it would seem to me that hadoop is working
> >> > fine. Does anyone have any suggestions here? Thanks in advance!
> >> >
> >> > Michael
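One common way out of this class of failure (editor's sketch, not a resolution from the thread itself): make the hostname that HBase resolves point at an address actually configured on a local interface, for instance via /etc/hosts. The LAN address below is an assumption -- substitute whatever `ip addr` (or `ifconfig`) reports for this box:

```
# /etc/hosts sketch -- 192.168.1.50 is a placeholder for the machine's
# real LAN address. With this mapping, HMaster resolves qualifiedname.com
# to an IP it can actually bind, instead of the NATed public 97.86.88.18.
127.0.0.1       localhost
192.168.1.50    qualifiedname.com www.qualifiedname.com
```

Alternatively, HBase exposes `hbase.master.dns.interface` in hbase-site.xml to pin which interface the master derives its address from, though whether that suits a single-box standalone setup depends on the deployment.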
