It looks like what may have happened here is that you ran the system in
"local" mode and then switched to running it on top of Hadoop.  This could
cause Hyperspace to get out of sync.  Can you try starting the servers in
"hadoop" mode and then running the following program:

/opt/hypertable/current/bin/clean-database.sh

then stopping the servers and starting them back up again?  Let us know if
that works.
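
Roughly, the full sequence would be something like the following (this assumes
the standard start/stop scripts under /opt/hypertable/current/bin; the exact
script names can vary between releases, so adjust to match your install):

# start the servers with the Hadoop DFS broker
/opt/hypertable/current/bin/start-all-servers.sh hadoop

# wipe the stale Hyperspace/database state
/opt/hypertable/current/bin/clean-database.sh

# stop everything and bring it back up in "hadoop" mode
/opt/hypertable/current/bin/stop-servers.sh
/opt/hypertable/current/bin/start-all-servers.sh hadoop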

- Doug

On Thu, Nov 26, 2009 at 10:38 PM, [email protected] <[email protected]> wrote:

> I have shared the log archive in the files section. Please have a
> look.
>
> On Nov 26, 9:48 pm, Doug Judd <[email protected]> wrote:
> > Can you archive and upload your Hypertable log files to the file upload
> > area <http://groups.google.com/group/hypertable-dev/files> so we can take a
> > look?
> >
> > - Doug
> >
> > On Thu, Nov 26, 2009 at 2:29 AM, [email protected] <[email protected]> wrote:
> >
> > > Hi Doug,
> >
> > > As you suggested, we updated our Hypertable and Hadoop to their latest
> > > versions, but it still doesn't help.
> > > We are still struggling to create any table on Hypertable using
> > > Hadoop.
> >
> > > This is what we are getting when trying to create any table.
> >
> > > ===========================================================================================
> > > Error: Hypertable::Exception: Master 'create table' error,
> > > tablename=HadoopTest - HYPERTABLE request timeout
> > >        at void Hypertable::MasterClient::create_table(const char*, const
> > > char*, Hypertable::Timer*) (/root/src/hypertable-0.9.2.7-alpha/src/cc/
> > > Hypertable/Lib/MasterClient.cc:104) - HYPERTABLE request timeout
> > > ===========================================================================================
> >
> > > Please help us out. If you need anything else to dig deeper into the
> > > problem, please ask.
> >
> > > Thanks
> >
> > > On Nov 25, 11:53 am, "[email protected]" <[email protected]> wrote:
> > > > Thanks Doug for the quick reply.
> > > > I will update both Hypertable and Hadoop and let you know.
> >
> > > > On Nov 25, 10:54 am, Doug Judd <[email protected]> wrote:
> >
> > > > > Is there a reason you are using older versions of both Hypertable and
> > > > > Hadoop?  Hypertable 0.9.2.7 is the most stable version, considerably
> > > > > more stable than 0.9.2.3, and Hadoop 0.20.1 is much more stable than
> > > > > Hadoop 0.20.0.  I would first upgrade your software and then try to
> > > > > get things up and running.  Feel free to post again to this list if
> > > > > you're still having problems.
> >
> > > > > - Doug
> >
> > > > > On Tue, Nov 24, 2009 at 9:42 PM, [email protected] <[email protected]> wrote:
> >
> > > > > > Hi,
> >
> > > > > > We are using Hypertable 0.9.2.3 and Hadoop 0.20.0. We installed
> > > > > > Hadoop successfully, and Hypertable also starts on top of Hadoop.
> > > > > > But when we try to create any table, Hypertable hangs, even though
> > > > > > the table name does appear in the output of "show tables". If we
> > > > > > try to delete this table, Hypertable hangs again.
> >
> > > > > > If we format the namenode again and reinstall Hadoop, the
> > > > > > jobtracker doesn't come up, saying the port is already in use, and
> > > > > > the namenode shows 0 live nodes.
> >
> > > > > > Has anyone come across this issue before?
> >
> > > > > > Please help.
> >
> > > > > > Thanks
> >
> > > > > > Here is the log for hadoop-ems-jobtracker-localhost.localdomain
> >
> > > > > > =================================================================================
> > > > > > 2009-11-24 18:29:58,556 WARN org.apache.hadoop.hdfs.DFSClient:
> > > > > > NotReplicatedYetException sleeping /tmp/hadoop-root/mapred/system/jobtracker.info retries left 1
> > > > > > 2009-11-24 18:30:01,761 WARN org.apache.hadoop.hdfs.DFSClient:
> > > > > > DataStreamer Exception: org.apache.hadoop.ipc.RemoteException:
> > > > > > java.io.IOException: File /tmp/hadoop-root/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
> > > > > >        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1256)
> > > > > >        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
> > > > > >        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > >        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > > > >        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >        at java.lang.reflect.Method.invoke(Method.java:616)
> > > > > >        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
> > > > > >        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
> > > > > >        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
> > > > > >        at java.security.AccessController.doPrivileged(Native Method)
> > > > > >        at javax.security.auth.Subject.doAs(Subject.java:416)
> > > > > >        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> >
> > > > > >        at org.apache.hadoop.ipc.Client.call(Client.java:739)
> > > > > >        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
> > > > > >        at $Proxy4.addBlock(Unknown Source)
> > > > > >        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > >        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > > > >        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >        at java.lang.reflect.Method.invoke(Method.java:616)
> > > > > >        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> > > > > >        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> > > > > >        at $Proxy4.addBlock(Unknown Source)
> > > > > >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2873)
> > > > > >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2755)
> > > > > >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2046)
> > > > > >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2232)
> >
> > > > > > 2009-11-24 18:30:01,761 WARN org.apache.hadoop.hdfs.DFSClient: Error
> > > > > > Recovery for block null bad datanode[0] nodes == null
> > > > > > 2009-11-24 18:30:01,761 WARN org.apache.hadoop.hdfs.DFSClient: Could
> > > > > > not get block locations. Source file "/tmp/hadoop-root/mapred/system/jobtracker.info" - Aborting...
> > > > > > 2009-11-24 18:30:01,762 WARN org.apache.hadoop.mapred.JobTracker:
> > > > > > Failed to initialize recovery manager. The Recovery manager failed to
> > > > > > access the system files in the system dir (hdfs://localhost:9000/tmp/hadoop-root/mapred/system).
> > > > > > 2009-11-24 18:30:01,765 WARN org.apache.hadoop.mapred.JobTracker: It
> > > > > > might be because the JobTracker failed to read/write system files
> > > > > > (hdfs://localhost:9000/tmp/hadoop-root/mapred/system/jobtracker.info/
> > > > > > hdfs://localhost:9000/tmp/hadoop-root/mapred/system/jobtracker.info.recover)
> > > > > > or the system file hdfs://localhost:9000/tmp/hadoop-root/mapred/system/jobtracker.info
> > > > > > is missing!
> > > > > > 2009-11-24 18:30:01,766 WARN org.apache.hadoop.mapred.JobTracker:
> > > > > > Bailing out...
> > > > > > 2009-11-24 18:30:01,766 WARN org.apache.hadoop.mapred.JobTracker:
> > > > > > Error starting tracker: org.apache.hadoop.ipc.RemoteException:
> > > > > > java.io.IOException: File /tmp/hadoop-root/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
> > > > > >        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1256)
> > > > > >        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
> > > > > >        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > >        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > > > >        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >        at java.lang.reflect.Method.invoke(Method.java:616)
> > > > > >        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
> > > > > >        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
> > > > > >        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
> > > > > >        at java.security.AccessController.doPrivileged(Native Method)
> > > > > >        at javax.security.auth.Subject.doAs(Subject.java:416)
> > > > > >        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> >
> > > > > >        at org.apache.hadoop.ipc.Client.call(Client.java:739)
> > > > > >        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
> > > > > >        at $Proxy4.addBlock(Unknown Source)
> > > > > >        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > >        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > > > >        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >        at java.lang.reflect.Method.invoke(Method.java:616)
> > > > > >        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> > > > > >        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> > > > > >        at $Proxy4.addBlock(Unknown Source)
> > > > > >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2873)
> > > > > >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2755)
> > > > > >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2046)
> > > > > >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2232)
> >
> > > > > > 2009-11-24 18:30:02,768 FATAL org.apache.hadoop.mapred.JobTracker:
> > > > > > java.net.BindException: Problem binding to localhost/127.0.0.1:9001:
> > > > > > Address already in use
> > > > > >        at org.apache.hadoop.ipc.Server.bind(Server.java:190)
> > > > > >        at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:253)
> > > > > >        at org.apache.hadoop.ipc.Server.<init>(Server.java:1026)
> > > > > >        at
> >
> > ...

--

You received this message because you are subscribed to the Google Groups 
"Hypertable Development" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/hypertable-dev?hl=en.

