On 11/10/2013 09:28 AM, ivaylo frankov wrote:
Dear All,
I installed Bigtop 0.7.0, but after every restart of my computer there is
a message that the /tmp folder cannot be found. HBase tables are also
deleted after a restart.
I tried to look at the configuration for the /tmp folder, but when I run
ivo@ivo-Aspire-3830T:/usr/lib/hadoop-hdfs/bin$ dpkg -L hadoop
/etc/hadoop/conf.empty
/etc/hadoop/conf.empty/configuration.xsl
/etc/hadoop/conf.empty/hadoop-env.sh
/etc/hadoop/conf.empty/ssl-client.xml.example
/etc/hadoop/conf.empty/slaves
/etc/hadoop/conf.empty/hadoop-metrics2.properties
/etc/hadoop/conf.empty/log4j.properties
/etc/hadoop/conf.empty/hadoop-policy.xml
/etc/hadoop/conf.empty/ssl-server.xml.example
/etc/hadoop/conf.empty/core-site.xml
/etc/hadoop/conf.empty/hadoop-metrics.properties
these files are empty.
Would you mind telling me how I can start Hadoop in pseudo-distributed
mode and configure the right tmp directory so that the HBase tables are
kept?
Thank you very much!
Cheers,
Ivo
Hi Ivaylo,
The Apache HBase package does not come with a pseudo-distributed conf
package; that is something I hope to fix at the next hackathon.
Also, by default and if I remember correctly, Apache HBase writes its
on-disk data under /tmp.
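If the goal is only to keep standalone-mode data across reboots, one minimal sketch is to point hbase.tmp.dir at a persistent directory in hbase-site.xml (the /var/lib/hbase path below is just an example, not a Bigtop default):

```xml
<configuration>
  <!-- Move HBase's scratch/data directories off /tmp.
       Default is ${java.io.tmpdir}/hbase-${user.name},
       which is wiped on most systems at reboot. -->
  <property>
    <name>hbase.tmp.dir</name>
    <value>/var/lib/hbase/tmp</value>  <!-- example path; pick your own -->
  </property>
</configuration>
```

Make sure the directory exists and is writable by the user that runs HBase.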
So you may want to look into the Apache HBase documentation in order to
configure Apache HBase according to your setup. The Apache Bigtop puppet
recipes may also help, since they provide a working configuration for a
distributed cluster.
If it helps, here is a simple configuration I sometimes use for
hbase-site.xml:
<configuration>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>ZOOKEEPER_HOSTNAME</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://HDFS_HOSTNAME:8020/hbase</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
</configuration>
Please replace ZOOKEEPER_HOSTNAME and HDFS_HOSTNAME accordingly.
You can also replace the value of hbase.rootdir with a local directory if
you do not want to go through Apache Hadoop HDFS.
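For instance, a local rootdir can be given as a file:// URI (the path here is only an illustration):

```xml
<!-- Store HBase data on the local filesystem instead of HDFS.
     Use an absolute path that survives reboots, not /tmp. -->
<property>
  <name>hbase.rootdir</name>
  <value>file:///var/lib/hbase/data</value>  <!-- example path -->
</property>
```

With a file:// rootdir you can also drop the hbase.cluster.distributed and ZooKeeper quorum settings and run HBase in plain standalone mode.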
Thanks,
Bruno