This is strange! After running cap start, starting the shell with
$./ht hypertable fails with an error saying Hyperspace is not up.

Then, if I run cap start a second time, the services start and
$./ht hypertable succeeds.

I am surprised but happy that it's working :)
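For anyone hitting the same thing, the double-start workaround can be wrapped in a small retry loop. This is only a sketch: start_services and hyperspace_up are placeholder names (stubbed out with `true` here), standing in for "cap start" and a real Hyperspace liveness check.

```shell
#!/bin/sh
# Sketch of the "run cap start twice" workaround as an explicit retry
# loop. start_services and hyperspace_up are placeholders; in practice
# they would run "cap start" and probe Hyperspace respectively.

start_services() {
    true  # placeholder for: cap start
}

hyperspace_up() {
    true  # placeholder for a real Hyperspace liveness check
}

attempts=0
max_attempts=3
start_services
until hyperspace_up; do
    attempts=$((attempts + 1))
    if [ "$attempts" -ge "$max_attempts" ]; then
        echo "ERROR: Hyperspace did not come up"
        exit 1
    fi
    echo "Retrying start (attempt $attempts)..."
    start_services
done
echo "Hyperspace is up"
```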

On Jun 1, 9:15 pm, Harshada <[email protected]> wrote:
> I get following output:
>
> /------------------------
> $cap cleandb
>
> cap cleandb
>   * executing `cleandb'
>  ** transaction: start
>   * executing `clean_ranges'
>   * executing "/opt/hypertable/hypertable-0.9.3.1-alpha/current/bin/
> start-dfsbroker.sh hadoop       --config=/opt/hypertable/
> hypertable-0.9.3.1-alpha/0.9.3.1/conf/hypertable.cfg &&    /opt/
> hypertable/hypertable-0.9.3.1-alpha/current/bin/clean-database.sh;"
>     servers: ["master", "slave"]
>     [slave] executing command
>     [master] executing command
>  ** [out :: slave] DFS broker: available file descriptors: 1024
>  ** [out :: master] DFS broker: available file descriptors: 1024
>  ** [out :: slave] Started DFS Broker (hadoop)
>  ** [out :: slave] Removed /hypertable/servers in DFS
>  ** [out :: slave] Removed /hypertable/tables in DFS
>  ** [out :: slave] Cleared hyperspace
>  ** [out :: slave] Killing DfsBroker.hadoop.pid 11025
>  ** [out :: slave] Shutdown thrift broker complete
>  ** [out :: slave] Shutdown hypertable master complete
>  ** [out :: slave] Shutdown range server complete
>  ** [out :: slave] Shutdown hyperspace complete
>  ** [out :: slave] Shutdown DFS broker complete
>  ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
>  ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
>  ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
>  ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
>  ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
>  ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
>  ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
>  ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
>  ** [out :: master] ERROR: DFS Broker (hadoop) did not come up
>  ** [out :: master] DfsBroker.hadoop appears to be running (17960):
>  ** [out :: master] erts 17960 17939 0 20:27 ? 00:00:00 java -
> classpath /opt/hypertable/hypertable-0.9.3.1-alpha/current:/opt/
> hypertable/hypertable-0.9.3.1-alpha/current/lib/*.jar:/opt/hypertable/
> hypertable-0.9.3.1-alpha/current/lib/java/commons-cli-1.2.jar:/opt/
> hypertable/hypertable-0.9.3.1-alpha/current/lib/java/commons-
> logging-1.0.4.jar:/opt/hypertable/hypertable-0.9.3.1-alpha/current/lib/
> java/hadoop-0.20.2-core.jar:/opt/hypertable/hypertable-0.9.3.1-alpha/
> current/lib/java/hypertable-0.9.3.1-examples.jar:/opt/hypertable/
> hypertable-0.9.3.1-alpha/current/lib/java/hypertable-0.9.3.1.jar:/opt/
> hypertable/hypertable-0.9.3.1-alpha/current/lib/java/junit-4.3.1.jar:/
> opt/hypertable/hypertable-0.9.3.1-alpha/current/lib/java/
> libthrift-0.2.0.jar:/opt/hypertable/hypertable-0.9.3.1-alpha/current/
> lib/java/log4j-1.2.13.jar:/opt/hypertable/hypertable-0.9.3.1-alpha/
> current/lib/java/slf4j-api-1.5.8.jar:/opt/hypertable/
> hypertable-0.9.3.1-alpha/current/lib/java/slf4j-log4j12-1.5.8.jar:/opt/
> hypertable/hypertable-0.9.3.1-alpha/curren
>  ** [out :: master] t/lib/jetty-ext/*.jar
> org.hypertable.DfsBroker.hadoop.main --verbose --config=/opt/
> hypertable/hypertable-0.9.3.1-alpha/0.9.3.1/conf/hypertable.cfg
>     command finished
> failed: "sh -c '/opt/hypertable/hypertable-0.9.3.1-alpha/current/bin/
> start-dfsbroker.sh hadoop       --config=/opt/hypertable/
> hypertable-0.9.3.1-alpha/0.9.3.1/conf/hypertable.cfg &&    /opt/
> hypertable/hypertable-0.9.3.1-alpha/current/bin/clean-database.sh;'"
> on master
>
> ------------------------/
>
> /------------------------
>
> cap start
>   * executing `start'
>  ** transaction: start
>   * executing `start_hyperspace'
>   * executing "/opt/hypertable/hypertable-0.9.3.1-alpha/current/bin/
> start-hyperspace.sh       --config=/opt/hypertable/hypertable-0.9.3.1-
> alpha/0.9.3.1/conf/hypertable.cfg"
>     servers: ["master"]
>     [master] executing command
>  ** [out :: master] Started Hyperspace
>     command finished
>   * executing `start_master'
>   * executing "/opt/hypertable/hypertable-0.9.3.1-alpha/current/bin/
> start-dfsbroker.sh hadoop       --config=/opt/hypertable/
> hypertable-0.9.3.1-alpha/0.9.3.1/conf/hypertable.cfg &&\\\n   /opt/
> hypertable/hypertable-0.9.3.1-alpha/current/bin/start-master.sh --
> config=/opt/hypertable/hypertable-0.9.3.1-alpha/0.9.3.1/conf/
> hypertable.cfg"
>     servers: ["master"]
>     [master] executing command
>  ** [out :: master] DFS broker: available file descriptors: 1024
>  ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
>  ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
>  ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
>  ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
>  ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
>  ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
>  ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
>  ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
>  ** [out :: master] ERROR: DFS Broker (hadoop) did not come up
>  ** [out :: master] DfsBroker.hadoop appears to be running (20337):
>  ** [out :: master] erts 20337 20321 0 21:44 ? 00:00:00 java -
> classpath /opt/hypertable/hypertable-0.9.3.1-alpha/current:/opt/
> hypertable/hypertable-0.9.3.1-alpha/current/lib/*.jar:/opt/hypertable/
> hypertable-0.9.3.1-alpha/current/lib/java/commons-cli-1.2.jar:/opt/
> hypertable/hypertable-0.9.3.1-alpha/current/lib/java/commons-
> logging-1.0.4.jar:/opt/hypertable/hypertable-0.9.3.1-alpha/current/lib/
> java/hadoop-0.20.2-core.jar:/opt/hypertable/hypertable-0.9.3.1-alpha/
> current/lib/java/hypertable-0.9.3.1-examples.jar:/opt/hypertable/
> hypertable-0.9.3.1-alpha/current/lib/java/hypertable-0.9.3.1.jar:/opt/
> hypertable/hypertable-0.9.3.1-alpha/current/lib/java/junit-4.3.1.jar:/
> opt/hypertable/hypertable-0.9.3.1-alpha/current/lib/java/
> libthrift-0.2.0.jar:/opt/hypertable/hypertable-0.9.3.1-alpha/current/
> lib/java/log4j-1.2.13.jar:/opt/hypertable/hypertable-0.9.3.1-alpha/
> current/lib/java/slf4j-api-1.5.8.jar:/opt/hypertable/
> hypertable-0.9.3.1-alpha/current/lib/java/slf4j-log4j12-1.5.8.jar:/opt/
> hypertable/hypertable-0.9.3.1-alpha/curren
>  ** [out :: master] t/lib/jetty-ext/*.jar
> org.hypertable.DfsBroker.hadoop.main --verbose --config=/opt/
> hypertable/hypertable-0.9.3.1-alpha/0.9.3.1/conf/hypertable.cfg
>     command finished
> failed: "sh -c '/opt/hypertable/hypertable-0.9.3.1-alpha/current/bin/
> start-dfsbroker.sh hadoop       --config=/opt/hypertable/
> hypertable-0.9.3.1-alpha/0.9.3.1/conf/hypertable.cfg &&\\\n   /opt/
> hypertable/hypertable-0.9.3.1-alpha/current/bin/start-master.sh --
> config=/opt/hypertable/hypertable-0.9.3.1-alpha/0.9.3.1/conf/
> hypertable.cfg'" on master
>
> ---------------------------/
>
> On Jun 1, 7:34 pm, Harshada <[email protected]> wrote:
>
> > Hi,
>
> > I am using Hypertable 0.9.3.1 on Ubuntu 8.04. A couple of days ago, I
> > could install and start the system perfectly. Then I shut down the
> > nodes (with cap stop and Hadoop's ./stop-all.sh). When I tried to
> > restart, the Hyperspace log on the master shows an error:
>
> > /-------------------
> > 1275402797 ERROR Hyperspace.Master : run (/opt/hypertable/
> > hypertable-0.9.3.1-alpha/src/cc/Hyperspace/RequestHandlerOpen.cc:60):
> > Hypertable::Exception:  node: '/hypertable/master' parent node: '/
> > hypertable' - HYPERSPACE file not found
> >         at void
> > Hyperspace::Master::open(Hyperspace::ResponseCallbackOpen*, uint64_t,
> > const char*, uint32_t, uint32_t, std::vector<Hyperspace::Attribute,
> > std::allocator<Hyperspace::Attribute> >&) (/opt/hypertable/
> > hypertable-0.9.3.1-alpha/src/cc/Hyperspace/Master.cc:883)
> > --------------------/
>
> > /-------------------
> > $ ./ht hyperspace --exec "open /; readdir /;"
> > SESSION CALLBACK: Safe
>
> > Welcome to the hyperspace command interpreter.
> > For information about Hypertable, visit http://www.hypertable.org/
>
> > Type 'help' for a list of commands, or 'help shell' for a
> > list of shell meta commands.
>
> > (dir) hyperspace
> > -----------------------/
>
> > but I did create the /hypertable directory:
>
> > /---------------------
> > $ ./hadoop fs -ls /
> > Found 2 items
> > drwxrwxrwx   - erts supergroup          0 2010-06-01 18:28 /hypertable
> > drwxr-xr-x   - erts supergroup          0 2010-06-01 17:36 /mnt
> > -----------------------/
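> > (Thinking about it more: the /hypertable shown by hadoop fs -ls is an
> > HDFS directory, while the "file not found" error above refers to
> > Hyperspace's own namespace, which is a separate tree:
> >
> > /-----------------------
> > $ ./ht hyperspace --exec "readdir /;"   # Hyperspace namespace
> > $ ./hadoop fs -ls /                     # HDFS namespace (separate)
> > -----------------------/
> >
> > so creating the directory in HDFS may not be related.)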
>
> > On May 29, 8:53 pm, Harshada <[email protected]> wrote:
>
> > > Oh, sorry, I forgot to post here. Hypertable started successfully on
> > > Hadoop. The problem was in my config file (hypertable.cfg): I changed
> > > localhost to master for Hypertable.Master.Host and
> > > Hyperspace.Replica.Host on the slaves. All services are working fine.
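> > > In case it helps anyone else, the relevant lines in hypertable.cfg on
> > > the slaves ended up looking like this ("master" being our master's
> > > hostname):
>
> > > /---------------------
> > > Hypertable.Master.Host=master
> > > Hyperspace.Replica.Host=master
> > > ---------------------/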
>
> > > For my version of Hypertable there was no property like
> > > "Hyperspace.Master.Host" in hypertable.cfg, but there was a
> > > "role :hyperspace" in the Capfile.
>
> > > Thanks for your valuable inputs Sanjit.
>
> > > On May 29, 8:38 pm, Sanjit Jhala <[email protected]> wrote:
>
> > > > That error message shouldn't be a problem. Can you try upgrading to
> > > > 0.9.3.1 and see if that fixes anything? There have been many bug
> > > > fixes since 0.9.2.8, and though I can't think of one that explains
> > > > this scenario, it would be easier to debug/reproduce with the latest
> > > > and greatest code.
>
> > > > So Hyperspace is up and running but the RangeServer can't connect to
> > > > it? Can you do a clean run (cap cleandb; kill all dangling HT
> > > > processes manually; cap dist; cap start) and tarball all the logs,
> > > > the Capfile, and the config file
> > > > here <http://groups.google.com/group/hypertable-user/files?pli=1>?
> > > > Btw, I'm not a capistrano expert, but I wonder if the way you've
> > > > defined the roles is correct. If you look at the examples in the
> > > > conf directory, it looks like it should be:
> > > > *role :localhost, "master"*
>
> > > > Also, I hope you have two servers named "master" and "slave" and
> > > > that you're running the cap commands from "master". Btw, do both
> > > > RangeServers fail to come up, or just the one on "slave"?
> > > > If it's just the one on "slave", can you try to connect to
> > > > Hyperspace from "slave"?
>
> ...
>

-- 
You received this message because you are subscribed to the Google Groups 
"Hypertable Development" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/hypertable-dev?hl=en.
