Hi,
I am confused about something in HBase.
1. All data is stored in HDFS. Data is served to clients by
HRegionServers. Is it allowed that tablet T is on machine A, and
served by an HRegionServer running on machine B?
What information does the META table maintain?
The map from T to the
Hi,
I think you should try using MultiFileInputFormat/MultiFileInputSplit
rather than FileSplit, since the former is optimized for processing a
large number of files. Could you report your numMaps and numReduces and
the average time the map() function is expected to take?
André Martin wrote:
Hello!
We would like to use HDFS for our software, which will later be extended
to run on a cluster. For now we would just like to create an
implementation of the file system interface for JackRabbit.
We found how we can implement this using the Hadoop HDFS part; however,
it's still not clear
Bin YANG wrote:
Hi,
I am confused about something in HBase.
1. All data is stored in HDFS. Data is served to clients by
HRegionServers. Is it allowed that tablet T is on machine A, and
served by an HRegionServer running on machine B?
Yes, tablet T may be hosted in HDFS on machine A
-----Original Message-----
From: Bin YANG [mailto:[EMAIL PROTECTED]
Sent: Thursday, November 01, 2007 3:06 AM
To: hadoop-user@lucene.apache.org
Subject: HBase question on HRegions server
Hi,
I am confused about something in HBase.
1. All data is stored in HDFS. Data is served to clients
Hi Chris,
I meant Local Runner.
From the sound of your email, it seems that modifying the two
aforementioned properties is not enough to get a cluster node to run
as a local runner?
Also, what about including the .xml files on the classpath? Do I include
them like jar files? My ant script has
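For reference, a minimal sketch of the two properties in question as they would appear in hadoop-site.xml (assuming they are fs.default.name and mapred.job.tracker; in the 0.x releases the value local selects the local filesystem and the in-process local job runner):

```xml
<property>
  <name>fs.default.name</name>
  <value>local</value>
  <description>Use the local filesystem rather than HDFS.</description>
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>local</value>
  <description>Run map/reduce jobs in-process with the local runner.</description>
</property>
```

As for the .xml files: configuration files are picked up from a directory on the classpath (like the conf directory), not packaged individually like jars.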
It is definitely easier to build a jar and use the hadoop script. You can
do it yourself, though. Just duplicate the line in bin/hadoop that runs
java and prefix it with echo to see what is happening.
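Ted's echo trick can be sketched with stand-in values (the variable names and values below are hypothetical; the real ones are assembled earlier in bin/hadoop):

```shell
# Hypothetical stand-ins for the variables bin/hadoop builds up:
JAVA=java
JAVA_HEAP_MAX=-Xmx1000m
CLASSPATH="conf:build/classes"
CLASS=org.example.MyJob

# bin/hadoop ends by exec'ing java; duplicating that line and
# prefixing the copy with echo prints the full command instead of
# running it, so you can see exactly what would be executed.
CMD="$JAVA $JAVA_HEAP_MAX -classpath $CLASSPATH $CLASS"
echo "$CMD"
# prints: java -Xmx1000m -classpath conf:build/classes org.example.MyJob
```

Once the printed command looks right, you can paste it into a shell and run your class without going through bin/hadoop.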
On 11/1/07 1:37 PM, Jim the Standing Bear [EMAIL PROTECTED] wrote:
Hi Ted,
It is funny
Thanks, Ted. Just as I thought.
On 11/1/07, Ted Dunning [EMAIL PROTECTED] wrote:
It is definitely easier to build a jar and use the hadoop script. You can
do it yourself, though. Just duplicate the line in bin/hadoop that runs
java and prefix it with echo to see what is happening.
On
Thanks for the detail, Holger. It helps.
Reading it, it looks like the cluster hasn't started up properly; the
NoSuchElementException would seem to indicate that the basic startup
deploying the catalog meta tables hasn't happened or has gotten mangled
somehow. What's in your HBase master log
Hi,
Of course, I am yet another newbie, but at least I have read the
10-minute introduction... :-)
So I am running HBase on a local filesystem.
Attached you can find the (hopefully) necessary part of the
master log file. It does not look too bad, right?
BUT in the regionserver-log I get the
Thank you very much Michael and Jim!
That means the master does not maintain the mapping from HRegion to
HRegionServer; that mapping lives in the META and ROOT tables. Is that
right?
So if a client wants to read a tablet, it should first find the ROOT,
find the corresponding
-----Original Message-----
From: Bin YANG [mailto:[EMAIL PROTECTED]
Sent: Thursday, November 01, 2007 6:59 PM
To: hadoop-user@lucene.apache.org
Subject: Re: HBase question on HRegions server
Thank you very much Michael and Jim!
That means the master does not maintain the mapping from
C:\hadoop is my installation
C:\workspace\hadoop-commit is my checked out SVN tree which is current
with trunk.
/cygdrive/c$ diff hadoop/conf/hadoop-site.xml workspace/hadoop-commit/conf
7,17c7
< <property>
<   <name>hadoop.tmp.dir</name>
<   <value>C:\hadoop\tmp</value>
<   <description>A base for other