Rong-en Fan wrote:
I did so. I even ran rm -rf on the DFS directory and did a namenode -format before starting DFS. hadoop fsck reports that the default replication is 1, yet the average block replication is 2.9x after I wrote some data into hbase. The underlying DFS is used only by hbase; there are no other apps on it.
What if you add a file using './bin/hadoop fs ...' -- i.e., without hbase in the mix at all -- does the file then show the expected replication?
If you copy your hadoop-site.xml to $HBASE_HOME/conf, does it then do the right thing? Maybe what's happening is that when hbase writes files, we're using the hadoop defaults rather than your custom configuration.
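To illustrate the suggestion above: a sketch of the kind of setting that would need to be visible on hbase's classpath for it to pick up your replication factor. The value 1 here matches the default replication reported by fsck earlier in this thread; the exact filename and property apply to the Hadoop configuration mechanism of this era and are stated as an assumption, not taken from the original messages.

```xml
<!-- Sketch: excerpt of a hadoop-site.xml that could be copied into
     $HBASE_HOME/conf so hbase's DFS client sees the same settings
     as the DFS itself. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <!-- Number of replicas created for each block; if this file is
         not on hbase's classpath, the client falls back to the
         compiled-in Hadoop default instead. -->
    <value>1</value>
  </property>
</configuration>
```

If hbase only loads its own conf directory, any dfs.replication set solely on the Hadoop side would be invisible to it, which would explain the mismatch between the configured default and the observed average block replication.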
Hmm... as far as I understand the hadoop FileSystem API, you can specify the number of replicas when creating a file, but I did not find hbase using it. Is that correct?
We don't do it explicitly, but as I suggested above, we're probably using the defaults instead of your custom config.
St.Ack
