Dear Bruno,
Thank you very much. Thanks to your help I managed to start HBase; I can
even see it in HUE.
Unfortunately, I have another problem with HUE. In fact, two problems.
When I try to browse HDFS, I receive:
*Cannot access: /. Note: You are a Hue admin but not a HDFS superuser
(which is "hdfs").*
I tried changing hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.web.ugi</name>
    <value>root,root</value>
  </property>
</configuration>
But without success.
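For reference, a configuration that is often suggested for this Hue error, as a sketch only: it lets the Hue server user impersonate end users and enables WebHDFS for the file browser. The user name "hue" and the split across the two files are assumptions; match them to however your Hue server actually runs.

```xml
<!-- core-site.xml: allow the Hue server user to impersonate end users.
     "hue" is an assumption - use the user your Hue server runs as. -->
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>

<!-- hdfs-site.xml: enable WebHDFS so Hue's file browser can reach HDFS. -->
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
```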
The second problem is that I cannot start the Pig shell (or the Pig
editor) in HUE.
When I start the Pig shell, I receive:
Error: value 0 for UID is less than the minimum UID allowed (500)
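That UID error usually means the shell is being launched as root (UID 0), and Hue's shell app refuses any UID below 500. One workaround, as a sketch under stated assumptions, is to run the Hue server under a dedicated non-root account; the install path /usr/lib/hue and the service name below are assumptions, so adjust them to your setup:

```shell
# Hue's shell app rejects UIDs below 500, and root is UID 0,
# so give Hue its own unprivileged account to run under.
sudo useradd -m hue                 # regular user, UID >= 500 on most distros
sudo chown -R hue:hue /usr/lib/hue  # install path is an assumption
sudo service hue restart            # service name is an assumption
```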
Thank you for your help once again!
Cheers,
Ivo
On Sunday, November 17, 2013, Bruno Mahé wrote:
> You don't find them, you set them up.
> ZOOKEEPER_HOSTNAME being the name of the host where zookeeper is running
> and HDFS_HOSTNAME the name of the host where the namenode is running.
>
> On 11/15/2013 10:07 AM, ivaylo frankov wrote:
>
>> Hi Bruno,
>>
>> Thank you very much for your email. It gives me hope ;).
>> Would you mind telling me how I can find ZOOKEEPER_HOSTNAME and
>> HDFS_HOSTNAME in Bigtop?
>> I suppose the host behind HDFS_HOSTNAME is already running, but is that
>> also true for ZOOKEEPER_HOSTNAME?
>> How can I check that?
>>
>> Sorry for the funny questions, but I am a complete beginner. Hopefully,
>> thanks to Bigtop, not for much longer ;)
>>
>> Thank you once again and
>> Cheers,
>> Ivo
>>
>> On Friday, November 15, 2013, Bruno Mahé wrote:
>>
>> On 11/10/2013 09:28 AM, ivaylo frankov wrote:
>>
>> Dear All,
>>
>> I installed Bigtop 0.7.0, but after every restart of my computer
>> there is a message that the /tmp folder cannot be found. HBase
>> tables are also deleted after the restart.
>>
>> I tried to look at the configuration for the /tmp folder, but when
>> I run
>> ivo@ivo-Aspire-3830T:/usr/lib/hadoop-hdfs/bin$ dpkg -L hadoop
>>
>> /etc/hadoop/conf.empty
>> /etc/hadoop/conf.empty/configuration.xsl
>> /etc/hadoop/conf.empty/hadoop-env.sh
>> /etc/hadoop/conf.empty/ssl-client.xml.example
>> /etc/hadoop/conf.empty/slaves
>> /etc/hadoop/conf.empty/hadoop-metrics2.properties
>> /etc/hadoop/conf.empty/log4j.properties
>> /etc/hadoop/conf.empty/hadoop-policy.xml
>> /etc/hadoop/conf.empty/ssl-server.xml.example
>> /etc/hadoop/conf.empty/core-site.xml
>> /etc/hadoop/conf.empty/hadoop-metrics.properties
>>
>> and these files are empty.
>> Would you mind telling me how I can start Hadoop in pseudo-distributed
>> mode and configure the right directory for tmp so that I can keep my
>> HBase tables?
>> Thank you very much!
>>
>> Cheers,
>> Ivo
>>
>>
>>
>> Hi Ivaylo,
>>
>> The Apache HBase package does not come with a pseudo conf package;
>> that is something I hope to fix at the next hackathon.
>> Also, by default and if I remember correctly, Apache HBase will
>> write to disk under /tmp.
>>
>> So you may want to look into the Apache HBase documentation to
>> configure Apache HBase according to your setup. The Apache Bigtop
>> puppet recipes may also help, since they provide a working
>> configuration for a distributed cluster.
>>
>> If it helps, here is a simple configuration I sometimes use for
>> hbase-site.xml:
>>
>> <configuration>
>>
>>   <property>
>>     <name>hbase.cluster.distributed</name>
>>     <value>true</value>
>>   </property>
>>
>>   <property>
>>     <name>hbase.zookeeper.quorum</name>
>>     <value>ZOOKEEPER_HOSTNAME</value>
>>   </property>
>>
>>   <property>
>>     <name>hbase.rootdir</name>
>>     <value>hdfs://HDFS_HOSTNAME:8020/hbase</value>
>>   </property>
>>
>>   <property>
>>     <name>dfs.support.append</name>
>>     <value>true</value>
>>   </property>
>>
>> </configuration>
>>
>> Please replace ZOOKEEPER_HOSTNAME and HDFS_HOSTNAME accordingly.
>> You can also replace the value of hbase.rootdir with a local directory
>> if you do not want to go through Apache Hadoop HDFS.
>>
>>
>>
>> Thanks,
>> Bruno
>>
>>
>