Jonathan Gray wrote:
Forgot to mention. In addition to not over-committing your memory
resources to the JVMs, you should also set the swappiness to 0 on the
kernel.
Info on how to do this (and links to the flame wars on linux kernel
mailing lists and slashdot) here:
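A minimal sketch of how this is typically done on Linux (assumes root access; the persistence file path can differ by distribution):

```shell
# Check the current value (often defaults to 60)
cat /proc/sys/vm/swappiness

# Set it to 0 for the running kernel
sysctl -w vm.swappiness=0

# Make it persist across reboots
echo "vm.swappiness = 0" >> /etc/sysctl.conf
```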
Hi all,
Can anyone tell me where I can access some docs that give a good
explanation of how the map-reduce scheduler works with HBase,
i.e., how map regions are created (to minimize data flow through the network)
and how the reduce phases are performed so that we can minimize
the flow of keys and values?
Some number of CPU instructions are always emulated when running in a VM:
anything that would affect real processor state with respect to hardware, or
affect the integrity of other tasks. MMU functions are virtualized/shadowed
and require an extra level of mediation. Emulation of privileged
Hi,
Since I've migrated to HBase 0.20.0 RC1, the following error keeps
happening. I have to kill HBase and start it again to recover from the
exception. Does anybody know a workaround?
Lucas
09/08/20 10:09:01 WARN zookeeper.ClientCnxn: Ignoring exception during
shutdown output
Hi all,
I have one small doubt. Kindly answer it even if it sounds silly.
I am using MapReduce with HBase in distributed mode. I have a table which
spans across 5 region servers. I am using TableInputFormat to read the data
from the table in the map. When I run the program, by default how
On Thu, Aug 20, 2009 at 9:42 AM, john smith js1987.sm...@gmail.com wrote:
Hi all,
I have one small doubt. Kindly answer it even if it sounds silly.
No questions are silly. Don't worry.
I am using MapReduce with HBase in distributed mode. I have a table which
spans across 5 region
What Amandeep said.
Also, one clarification for you. You mentioned the reduce task moving
map output across regionservers. Remember, HBase is just a MapReduce
input source or output sink. The sort/shuffle/reduce is a part of
Hadoop MapReduce and has nothing to do with HBase directly. It
Hi all,
I am a beginner with HBase. I have some questions about HBase after setting up
HBase and Hadoop.
First, after setting up HBase and creating a new database, I don't know where
the location of HBase's database (the database's files) is on the hard disk. At
first, I thought it was under hbase.rootdir
You configure the location of the hbase directory in the hbase-site.xml
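For example, a minimal hbase-site.xml fragment (the NameNode host and port below are placeholders for your own HDFS setup):

```xml
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://namenode.example.com:9000/hbase</value>
</property>
```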
The data loss could have multiple reasons. To rule out the
basic one: where have you pointed HDFS to store data? If it's
going into /tmp, you'll lose data every time the tmp cleaner comes into
action.
On 8/20/09,
No, my ZK server is the same as the Master server.
09/08/21 09:54:07 WARN zookeeper.ZooKeeperWrapper: Failed to create
/hbase
-- check quorum servers, currently=10.42.253.182:2181
org.apache.zookeeper.KeeperException$ConnectionLossException:
KeeperErrorCode = ConnectionLoss for /hbase
You ideally want to have 3-5 ZooKeeper servers outside the HBase servers... 1
server is not enough. That could be causing your trouble.
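A sketch of what a quorum of dedicated ZooKeeper machines looks like in hbase-site.xml (the hostnames are placeholders):

```xml
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
</property>
```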
Post logs from the master and the region server where the read failed.
Also, what's your configuration? How many nodes, RAM, CPUs, etc.?
On 8/20/09,
Amandeep, Gray and Purtell, thanks for your replies. I have found them
very useful.
You said to increase the number of reduce tasks. Suppose the number of
reduce tasks is more than the number of distinct map output keys; some of the
reduce processes may go to waste? Is that the case?
Also I have
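The reduce-task question above can be illustrated with a toy Python sketch of the hash-partitioning scheme Hadoop's default HashPartitioner uses (this is an illustration, not Hadoop code): with more reducers than distinct keys, some partitions necessarily stay empty, and those reducers start, receive no input, and finish without doing any work.

```python
def partition(key: str, num_reducers: int) -> int:
    # Mirrors Hadoop's default HashPartitioner:
    # (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks
    return (hash(key) & 0x7FFFFFFF) % num_reducers

distinct_keys = ["row-a", "row-b", "row-c"]   # 3 distinct map output keys
num_reducers = 10                             # more reducers than keys

used_partitions = {partition(k, num_reducers) for k in distinct_keys}
idle_reducers = num_reducers - len(used_partitions)
# At most 3 of the 10 partitions can receive data, so at least 7
# reducers are guaranteed to get no input at all.
```

So yes: the extra reducers are not harmful, but they occupy task slots while producing nothing.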
@JG:
Regarding the write buffer, I mean the DFSClient-side buffer. In the current
version of HDFS, I found the buffer (bytesPerChecksum) on the client side. The
written data will be flushed to the data node when the buffer is full. The
HBase RS is a client of HDFS.
@JD:
you wrote:
But, in many cases, a RS
Thanks for all your replies, guys. As Bharath said, what is the case when the
number of reducers becomes more than the number of distinct map output keys?
On Fri, Aug 21, 2009 at 9:39 AM, bharath vissapragada
bharathvissapragada1...@gmail.com wrote:
Amandeep, Gray and Purtell, thanks for your