Thanks for your reply, Sanjit.

'cap cleandb' gives the following output (with one error at the end):

  * executing `stop'
 ** transaction: start
  * executing `stop_slaves'
  * executing "/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/
stop-servers.sh --no-hyperspace"
    servers: ["master", "slave"]
    [master] executing command
 ** [out :: master] Sending shutdown command
 ** [out :: master] Unable to establish connection to range server
    [slave] executing command
 ** [out :: slave] Sending shutdown command
 ** [out :: slave] Unable to establish connection to range server
 ** [out :: master] Shutdown range server complete
 ** [out :: slave] Shutdown range server complete
 ** [out :: master] Shutdown DFS broker complete
 ** [out :: master] Shutdown thrift broker complete
 ** [out :: master] Shutdown hypertable master complete
 ** [out :: slave] Shutdown thrift broker complete
 ** [out :: slave] Shutdown DFS broker complete
 ** [out :: slave] Shutdown hypertable master complete
    command finished
  * executing `stop_master'
  * executing "/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/
stop-servers.sh --no-hyperspace"
    servers: ["master"]
    [master] executing command
 ** [out :: master] Sending shutdown command
 ** [out :: master] Unable to establish connection to range server
 ** [out :: master] Shutdown range server complete
 ** [out :: master] Shutdown DFS broker complete
 ** [out :: master] Shutdown thrift broker complete
 ** [out :: master] Shutdown hypertable master complete
    command finished
  * executing `stop_hyperspace'
  * executing "/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/
stop-hyperspace.sh"
    servers: ["master"]
    [master] executing command
 ** [out :: master] Killing Hyperspace.pid 14096
*** [err :: master] /opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/
bin/ht-env.sh: line 68: kill: (14096) - No such process
 ** [out :: master] Shutdown hyperspace complete
    command finished
 ** transaction: commit
harsh...@erts-server:/opt/hypertable/hypertable-0.9.2.8-alpha/conf$ /
var/lib/gems/1.8/bin/cap cleandb
  * executing `cleandb'
 ** transaction: start
  * executing `clean_ranges'
  * executing "/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/
start-dfsbroker.sh hadoop       --config=/opt/hypertable/
hypertable-0.9.2.8-alpha/0.9.2.8/conf/hypertable.cfg &&    /opt/
hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/clean-database.sh;"
    servers: ["master", "slave"]
    [master] executing command
    [slave] executing command
 ** [out :: master] DFS broker: available file descriptors: 1024
 ** [out :: slave] DFS broker: available file descriptors: 1024
 ** [out :: slave] Started DFS Broker (hadoop)
 ** [out :: slave] Removed /hypertable/servers in DFS
 ** [out :: slave] Removed /hypertable/tables in DFS
 ** [out :: slave] Cleared hyperspace
 ** [out :: slave] Killing DfsBroker.hadoop.pid 24587
 ** [out :: slave] Shutdown hyperspace complete
 ** [out :: slave] Shutdown range server complete
 ** [out :: slave] Shutdown thrift broker complete
 ** [out :: slave] Shutdown hypertable master complete
 ** [out :: slave] Shutdown DFS broker complete
 ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) to come up...
 ** [out :: master] ERROR: DFS Broker (hadoop) did not come up
 ** [out :: master] DfsBroker.hadoop appears to be running (19475):
 ** [out :: master] harshada 19475 19394 0 May28 ? 00:00:00 java -
classpath /opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8:/opt/
hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/lib/*.jar:/opt/hypertable/
hypertable-0.9.2.8-alpha/0.9.2.8/lib/java/commons-logging-1.0.4.jar:/
opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/lib/java/hadoop-0.20.1-
core.jar:/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/lib/java/
hypertable-0.9.2.8.jar:/opt/hypertable/hypertable-0.9.2.8-alpha/
0.9.2.8/lib/java/junit-4.3.1.jar:/opt/hypertable/hypertable-0.9.2.8-
alpha/0.9.2.8/lib/java/libthrift-0.2.0.jar:/opt/hypertable/
hypertable-0.9.2.8-alpha/0.9.2.8/lib/java/log4j-1.2.13.jar:/opt/
hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/lib/java/slf4j-
api-1.5.8.jar:/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/lib/
java/slf4j-log4j12-1.5.8.jar:/opt/hypertable/hypertable-0.9.2.8-alpha/
0.9.2.8/lib/jetty-ext/*.jar org.hypertable.DfsBroker.hadoop.main --
verbose --config=/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/conf/
hypertable.cfg
    command finished
failed: "sh -c '/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/
start-dfsbroker.sh hadoop       --config=/opt/hypertable/
hypertable-0.9.2.8-alpha/0.9.2.8/conf/hypertable.cfg &&    /opt/
hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/clean-database.sh;'"
on master
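
The "DfsBroker.hadoop appears to be running (19475)" line above makes me think a stale broker process (or a stale pidfile) on master is blocking the new one, so before re-running 'cap cleandb' I plan to clear it by hand. A minimal sketch of what I intend to run on master, assuming the pidfile sits under the install's run/ directory the same way run/Hyperspace.pid does:

  # Check whether PID 19475 is really a live DFS broker
  ps -fp 19475

  # If it is, stop it; escalate to SIGKILL only if SIGTERM doesn't take
  kill 19475
  sleep 2
  kill -0 19475 2>/dev/null && kill -9 19475

  # Drop the stale pidfile so start-dfsbroker.sh doesn't think it's up
  # (assumed location, mirroring the run/Hyperspace.pid path seen elsewhere)
  rm -f /opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/run/DfsBroker.hadoop.pid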

But going ahead, when I run 'cap start', everything starts up
except the slave's RangeServer. Here's the output:

  * executing `start'
 ** transaction: start
  * executing `start_hyperspace'
  * executing "/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/
start-hyperspace.sh       --config=/opt/hypertable/hypertable-0.9.2.8-
alpha/0.9.2.8/conf/hypertable.cfg"
    servers: ["master"]
    [master] executing command
 ** [out :: master] Started Hyperspace
    command finished
  * executing `start_master'
  * executing "/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/
start-dfsbroker.sh hadoop       --config=/opt/hypertable/
hypertable-0.9.2.8-alpha/0.9.2.8/conf/hypertable.cfg &&\\\n   /opt/
hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/start-master.sh --
config=/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/conf/
hypertable.cfg"
    servers: ["master"]
    [master] executing command
 ** [out :: master] DFS broker: available file descriptors: 1024
 ** [out :: master] DfsBroker.hadoop appears to be running (19475):
 ** [out :: master] harshada 19475 1 0 May28 ? 00:00:01 java -
classpath /opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8:/opt/
hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/lib/*.jar:/opt/hypertable/
hypertable-0.9.2.8-alpha/0.9.2.8/lib/java/commons-logging-1.0.4.jar:/
opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/lib/java/hadoop-0.20.1-
core.jar:/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/lib/java/
hypertable-0.9.2.8.jar:/opt/hypertable/hypertable-0.9.2.8-alpha/
0.9.2.8/lib/java/junit-4.3.1.jar:/opt/hypertable/hypertable-0.9.2.8-
alpha/0.9.2.8/lib/java/libthrift-0.2.0.jar:/opt/hypertable/
hypertable-0.9.2.8-alpha/0.9.2.8/lib/java/log4j-1.2.13.jar:/opt/
hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/lib/java/slf4j-
api-1.5.8.jar:/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/lib/
java/slf4j-log4j12-1.5.8.jar:/opt/hypertable/hypertable-0.9.2.8-alpha/
0.9.2.8/lib/jetty-ext/*.jar org.hypertable.DfsBroker.hadoop.main --
verbose --config=/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/conf/
hypertable.cfg
 ** [out :: master] Started Hypertable.Master
    command finished
  * executing `start_slaves'
  * executing "/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/
random-wait.sh 5 &&\\\n   /opt/hypertable/hypertable-0.9.2.8-alpha/
0.9.2.8/bin/start-dfsbroker.sh hadoop       --config=/opt/hypertable/
hypertable-0.9.2.8-alpha/0.9.2.8/conf/hypertable.cfg &&\\\n   /opt/
hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/start-
rangeserver.sh       --config=/opt/hypertable/hypertable-0.9.2.8-alpha/
0.9.2.8/conf/hypertable.cfg &&\\\n   /opt/hypertable/
hypertable-0.9.2.8-alpha/0.9.2.8/bin/start-thriftbroker.sh       --
config=/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/conf/
hypertable.cfg"
    servers: ["master", "slave"]
    [master] executing command
    [slave] executing command
 ** [out :: master] DFS broker: available file descriptors: 1024
 ** [out :: master] DfsBroker.hadoop appears to be running (19475):
 ** [out :: master] harshada 19475 1 0 May28 ? 00:00:01 java -
classpath /opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8:/opt/
hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/lib/*.jar:/opt/hypertable/
hypertable-0.9.2.8-alpha/0.9.2.8/lib/java/commons-logging-1.0.4.jar:/
opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/lib/java/hadoop-0.20.1-
core.jar:/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/lib/java/
hypertable-0.9.2.8.jar:/opt/hypertable/hypertable-0.9.2.8-alpha/
0.9.2.8/lib/java/junit-4.3.1.jar:/opt/hypertable/hypertable-0.9.2.8-
alpha/0.9.2.8/lib/java/libthrift-0.2.0.jar:/opt/hypertable/
hypertable-0.9.2.8-alpha/0.9.2.8/lib/java/log4j-1.2.13.jar:/opt/
hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/lib/java/slf4j-
api-1.5.8.jar:/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/lib/
java/slf4j-log4j12-1.5.8.jar:/opt/hypertable/hypertable-0.9.2.8-alpha/
0.9.2.8/lib/jetty-ext/*.jar org.hypertable.DfsBroker.hadoop.main --
verbose --config=/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/conf/
hypertable.cfg
 ** [out :: slave] DFS broker: available file descriptors: 1024
 ** [out :: master] Started Hypertable.RangeServer
 ** [out :: slave] Started DFS Broker (hadoop)
 ** [out :: master] Started ThriftBroker
 ** [out :: slave] Waiting for Hypertable.RangeServer to come up...
 ** [out :: slave] Waiting for Hypertable.RangeServer to come up...
*** [err :: slave] /opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/
bin/ht-env.sh: line 95: 25904 Segmentation fault      (core dumped)
$HEAPCHECK $VALGRIND $HYPERTABLE_HOME/bin/$servercmd --pidfile
$pidfile "$@" >&$logfile
 ** [out :: slave] Waiting for Hypertable.RangeServer to come up...
 ** [out :: slave] Waiting for Hypertable.RangeServer to come up...
 ** [out :: slave] Waiting for Hypertable.RangeServer to come up...
 ** [out :: slave] Waiting for Hypertable.RangeServer to come up...
 ** [out :: slave] Waiting for Hypertable.RangeServer to come up...
 ** [out :: slave] Waiting for Hypertable.RangeServer to come up...
 ** [out :: slave] Waiting for Hypertable.RangeServer to come up...
 ** [out :: slave] Waiting for Hypertable.RangeServer to come up...
 ** [out :: slave] Waiting for Hypertable.RangeServer to come up...
 ** [out :: slave] Waiting for Hypertable.RangeServer to come up...
 ** [out :: slave] Waiting for Hypertable.RangeServer to come up...
 ** [out :: slave] Waiting for Hypertable.RangeServer to come up...
 ** [out :: slave] Waiting for Hypertable.RangeServer to come up...
 ** [out :: slave] Waiting for Hypertable.RangeServer to come up...
 ** [out :: slave] Waiting for Hypertable.RangeServer to come up...
 ** [out :: slave] Waiting for Hypertable.RangeServer to come up...
 ** [out :: slave] Waiting for Hypertable.RangeServer to come up...
 ** [out :: slave] Waiting for Hypertable.RangeServer to come up...
 ** [out :: slave] ERROR: Hypertable.RangeServer did not come up
    command finished
failed: "sh -c '/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/
random-wait.sh 5 &&\\\n   /opt/hypertable/hypertable-0.9.2.8-alpha/
0.9.2.8/bin/start-dfsbroker.sh hadoop       --config=/opt/hypertable/
hypertable-0.9.2.8-alpha/0.9.2.8/conf/hypertable.cfg &&\\\n   /opt/
hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/start-
rangeserver.sh       --config=/opt/hypertable/hypertable-0.9.2.8-alpha/
0.9.2.8/conf/hypertable.cfg &&\\\n   /opt/hypertable/
hypertable-0.9.2.8-alpha/0.9.2.8/bin/start-thriftbroker.sh       --
config=/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/conf/
hypertable.cfg'" on slave


The error log of the RangeServer on master says:

1275073251 ERROR Hypertable.RangeServer : create_scanner (/opt/
hypertable/hypertable-0.9.2.8-alpha/src/cc/Hypertable/RangeServer/
RangeServer.cc:833): Hypertable::Exception: unknown table id=0
'METADATA' - RANGE SERVER table not found
        at void Hypertable::TableInfoMap::get(const
Hypertable::TableIdentifier*, Hypertable::TableInfoPtr&) (/opt/
hypertable/hypertable-0.9.2.8-alpha/src/cc/Hypertable/RangeServer/
TableInfoMap.cc:48)
1275073252 ERROR Hypertable.RangeServer : create_scanner (/opt/
hypertable/hypertable-0.9.2.8-alpha/src/cc/Hypertable/RangeServer/
RangeServer.cc:833): Hypertable::Exception: unknown table id=0
'METADATA' - RANGE SERVER table not found
        at void Hypertable::TableInfoMap::get(const
Hypertable::TableIdentifier*, Hypertable::TableInfoPtr&) (/opt/
hypertable/hypertable-0.9.2.8-alpha/src/cc/Hypertable/RangeServer/
TableInfoMap.cc:48)
1275073256 INFO Hypertable.RangeServer : (/opt/hypertable/
hypertable-0.9.2.8-alpha/src/cc/Hypertable/RangeServer/RangeServer.cc:
2329) Memory Usage: 50000000 bytes
1275073276 INFO Hypertable.RangeServer : (/opt/hypertable/
hypertable-0.9.2.8-alpha/src/cc/Hypertable/RangeServer/
RangeServerStats.h:78) scans=(0 0 0.000000) updates=(0 0 0.000000)
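
Since the failed cleandb removed the DFS directories during the slave's run but never completed on master, I suspect these METADATA errors reflect a half-cleaned database. I will check what is actually left in HDFS with the stock hadoop CLI (the /hypertable paths are the ones from the cleandb output above):

  hadoop fs -ls /hypertable
  hadoop fs -ls /hypertable/servers
  hadoop fs -ls /hypertable/tables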


> Also check that the user you're running as has write permission to the path
> "/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/hyperspace" ?

Yes, I have write permission to that path.
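
Still, given the earlier "BDB ERROR: PANIC: Permission denied", this is roughly how I'd double-check on master that the BerkeleyDB environment files inside that directory aren't owned by a different user (the __db.* names are standard BerkeleyDB; whether any are present here is an assumption):

  # Who owns the hyperspace dir and any __db.* environment files in it?
  ls -la /opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/hyperspace

  # Prove write access as the user the servers run as
  touch /opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/hyperspace/.writetest &&
    rm /opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/hyperspace/.writetest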

>
> -Sanjit
>
> On Fri, May 28, 2010 at 9:08 AM, Harshada <[email protected]> wrote:
> > Thanks kevin,
>
> > I reinstalled Hadoop from scratch and now I could do "cap dist"
> > successfully.
>
> > But, when I start servers using "cap start", I get following error:
>
> >  * executing `start'
> >  ** transaction: start
> >  * executing `start_hyperspace'
> >   * executing "/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/
> > start-hyperspace.sh
> > --config=/opt/hypertable/hypertable-0.9.2.8-
> > alpha/0.9.2.8/conf/hypertable.cfg"
> >     servers: ["master"]
> >    [master] executing command
>
> > *** [err :: master] /opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/
> > bin/ht-env.sh: line 95: 14096
> > Segmentation fault      (core dumped)
> > $HEAPCHECK $VALGRIND $HYPERTABLE_HOME/bin/$servercmd --pidfile
> > $pidfile "$@" >&$logfile
>
> >  ** [out :: master] Waiting for Hyperspace to come up...
> >  ** [out :: master] Waiting for Hyperspace to come up...
> >  ** [out :: master] Waiting for Hyperspace to come up...
> >  ** [out :: master] Waiting for Hyperspace to come up...
> >  ** [out :: master] Waiting for Hyperspace to come up...
>
> > It never comes up!
>
> > The log at log/Hyperspace.log says:
>
> > 1275062567 INFO Hyperspace.Master : (/opt/hypertable/
> > hypertable-0.9.2.8-alpha/src/cc/Hyperspace/Master.cc:145) BerkeleyDB
> > base directory = '/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/
> > hyperspace'
> > 1275062567 INFO Hyperspace.Master : (/opt/hypertable/
> > hypertable-0.9.2.8-alpha/src/cc/Hyperspace/BerkeleyDbFilesystem.cc:
> > 304) BDB ERROR:unable to join the environment
> > 1275062567 INFO Hyperspace.Master : (/opt/hypertable/
> > hypertable-0.9.2.8-alpha/src/cc/Hyperspace/BerkeleyDbFilesystem.cc:
> > 304) BDB ERROR:Recovery function for LSN 2 3158676 failed
> > 1275062567 INFO Hyperspace.Master : (/opt/hypertable/
> > hypertable-0.9.2.8-alpha/src/cc/Hyperspace/BerkeleyDbFilesystem.cc:
> > 304) BDB ERROR:PANIC: Permission denied
> > 1275062567 FATAL Hyperspace.Master : (/opt/hypertable/
> > hypertable-0.9.2.8-alpha/src/cc/Hyperspace/BerkeleyDbFilesystem.cc:
> > 358) Received DB_EVENT_PANIC event
>
> > waiting for pointers.
>
> > Thanks.
>
> > On May 27, 4:19 pm, Kevin Yuan <[email protected]> wrote:
> > > I think you should make sure that the HDFS is running normally by
> > > checking its log files.
>
> > > And, firewalls? (just wild guesses)
>
> > > -Kevin
>
> > > On May 27, 2:20 pm, Harshada <[email protected]> wrote:
>
> > > > Thank you for the reply.
>
> > > > First of all, I am sorry for posting this query to the wrong thread. If
> > > > you can, please migrate it to the -user mailing list.
>
> > > > I checked the log file for DfsBroker.hadoop, it says:
>
> > > > Num CPUs=2
> > > > HdfsBroker.Port=38030
> > > > HdfsBroker.Reactors=2
> > > > HdfsBroker.Workers=20
> > > > HdfsBroker.Server.fs.default.name=hdfs://localhost:54310
> > > > 10/05/27 05:01:58 INFO ipc.Client: Retrying connect to server:
> > > > localhost/127.0.0.1:54310. Already tried 0 time(s).
> > > > 10/05/27 05:01:59 INFO ipc.Client: Retrying connect to server:
> > > > localhost/127.0.0.1:54310. Already tried 1 time(s).
> > > > 10/05/27 05:02:00 INFO ipc.Client: Retrying connect to server:
> > > > localhost/127.0.0.1:54310. Already tried 2 time(s).
> > > > 10/05/27 05:02:01 INFO ipc.Client: Retrying connect to server:
> > > > localhost/127.0.0.1:54310. Already tried 3 time(s).
> > > > 10/05/27 05:02:02 INFO ipc.Client: Retrying connect to server:
> > > > localhost/127.0.0.1:54310. Already tried 4 time(s).
> > > > 10/05/27 05:02:03 INFO ipc.Client: Retrying connect to server:
> > > > localhost/127.0.0.1:54310. Already tried 5 time(s).
> > > > 10/05/27 05:02:04 INFO ipc.Client: Retrying connect to server:
> > > > localhost/127.0.0.1:54310. Already tried 6 time(s).
> > > > 10/05/27 05:02:05 INFO ipc.Client: Retrying connect to server:
> > > > localhost/127.0.0.1:54310. Already tried 7 time(s).
> > > > 10/05/27 05:02:06 INFO ipc.Client: Retrying connect to server:
> > > > localhost/127.0.0.1:54310. Already tried 8 time(s).
> > > > 10/05/27 05:02:07 INFO ipc.Client: Retrying connect to server:
> > > > localhost/127.0.0.1:54310. Already tried 9 time(s).
> > > > 27 May, 2010 5:02:07 AM org.hypertable.DfsBroker.hadoop.HdfsBroker
> > > > <init>
> > > > SEVERE: ERROR: Unable to establish connection to HDFS.
> > > > ShutdownHook called
> > > > Exception in thread "Thread-1" java.lang.NullPointerException
> > > >         at org.hypertable.DfsBroker.hadoop.main
> > > > $ShutdownHook.run(main.java:69)
>
> > > > ---------------------------
>
> > > > but HDFS is running, because jps on master gives me the following output:
>
> > > > e...@erts-server:~$ jps
> > > > 32538 SecondaryNameNode
> > > > 32270 NameNode
> > > > 32388 DataNode
> > > > 310 TaskTracker
> > > > 32671 JobTracker
> > > > 21233 Jps
> > > > -----------------------------------------------------
>
> > > > > Is there a reason you're using 0.9.2.8 and not 0.9.3.1 (the latest
> > > > > and greatest)?
>
> > > > Oh, OK, thanks for the info. But since 0.9.2.8 was successfully
> > > > installed, I'll continue with it for the moment.
>
> > > > > Do you have HDFS running and if so make sure the permissions for the
> > > > > /hypertable dir are set correctly.
>
> > > > Yes. I followed http://code.google.com/p/hypertable/wiki/UpAndRunningWithHadoop
> > > > and http://code.google.com/p/hypertable/wiki/DeployingHypertable.
>
> > > > A doubt: do the users on the slave and master machines always need to
> > > > be the same? Because currently I have 'erts' as the user for the master
> > > > and one slave (which are on the same machine) and 'harshada' as the
> > > > user on the other slave. So whenever I use '$cap dist' or '$cap shell
> > > > cap>date', it asks for the password of e...@slave, which does not
> > > > exist, hence authentication fails. I am in the process of setting up
> > > > the same user on all the machines, but until then I thought of getting
> > > > as much as possible up and running. Is this the reason why
> > > > DfsBroker.hadoop is also failing?
>
> > > > If yes, then I had better wait and set up the same user on all the
> > > > machines.
>
> > > > PS: though Hadoop requires the same installation paths on all the
> > > > machines, I managed it with symbolic links, even though my users (and
> > > > hence their $HOMEs) were different.
>
> > > > > Beyond that, try taking a look at <HT_INSTALL_DIR>/log/DfsBroker.hadoop.log
> > > > > to figure out what's going on.
>
> > > > > -Sanjit
>
> > > > > On Wed, May 26, 2010 at 4:35 PM, Harshada <[email protected]> wrote:
> > > > > > Hi,
>
> > > > > > I am installing Hypertable 0.9.2.8 on Hadoop. I have successfully set
> > > > > > up Hadoop and it's working. When I start servers using 'cap start',
> > > > > > the DFS broker doesn't come up. The output of cap start is:
>
> > > > > > e...@erts-server:~/hypertable/hypertable-0.9.2.8-alpha/conf$ cap start
> > > > > >  * executing `start'
> > > > > >  ** transaction: start
> > > > > >  * executing `start_hyperspace'
> > > > > >  * executing "/home/erts/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/
> > > > > > bin/start-hyperspace.sh --config=/home/erts/hypertable/
> > > > > > hypertable-0.9.2.8-alpha/0.9.2.8/conf/hypertable.cfg"
> > > > > >    servers: ["127.0.0.1"]
> > > > > >    [127.0.0.1] executing command
> > > > > >  ** [out :: 127.0.0.1] Hyperspace appears to be running (12170):
> > > > > >  ** [out :: 127.0.0.1] erts 12170 1 0 04:40 ? 00:00:00 /home/erts/
> > > > > > hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/Hyperspace.Master --
> > > > > > pidfile /home/erts/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/run/
> > > > > > Hyperspace.pid --verbose --config=/home/erts/hypertable/
> > > > > > hypertable-0.9.2.8-alpha/0.9.2.8/conf/hypertable.cfg
> > > > > >    command finished
> > > > > >  * executing `start_master'
> > > > > >  * executing "/home/erts/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/
> > > > > > bin/start-dfsbroker.sh hadoop     --config=/home/erts/hypertable/
> > > > > > hypertable-0.9.2.8-alpha/0.9.2.8/conf/hypertable.cfg &&\\\n   /home/
> > > > > > erts/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/start-master.sh --
> > > > > > config=/home/erts/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/conf/
> > > > > > hypertable.cfg"
> > > > > >    servers: ["127.0.0.1"]
> > > > > >    [127.0.0.1] executing command
> > > > > >  ** [out :: 127.0.0.1] DFS broker: available file descriptors: 1024
> > > > > >  ** [out :: 127.0.0.1] Waiting for DFS Broker (hadoop) to come up...
> > > > > >  ** [out :: 127.0.0.1] Waiting for DFS Broker (hadoop) to come up...
> > > > > >  ** [out :: 127.0.0.1] Waiting for DFS Broker (hadoop) to come up...
> > > > > >  ** [out :: 127.0.0.1] Waiting for DFS Broker (hadoop) to come up...
> > > > > >  ** [out :: 127.0.0.1] Waiting for DFS Broker (hadoop) to come up...
> > > > > >  ** [out :: 127.0.0.1] Waiting for DFS Broker (hadoop) to come up...
> > > > > >  ** [out :: 127.0.0.1] Waiting for DFS Broker (hadoop) to come up...
> > > > > >  ** [out :: 127.0.0.1] Waiting for DFS Broker (hadoop) to come up...
> > > > > >  ** [out :: 127.0.0.1] ERROR: DFS Broker (hadoop) did not come up
> > > > > >    command finished
> > > > > > failed: "sh -c '/home/erts/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/
> > > > > > bin/start-dfsbroker.sh hadoop     --config=/home/erts/hypertable/
> > > > > > hypertable-0.9.2.8-alpha/0.9.2.8/conf/hypertable.cfg &&\\\n   /home/
> > > > > > erts/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/start-master.sh --
> > > > > > config=/home/erts/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/conf/
>
> ...
>
