Hi Anil, 

I have done something similar to this in the past. In Stellar, to get the HBase 
configuration we call HBaseConfiguration.create(); in that call HBase adds 
hbase-site.xml and core-site.xml as resources to the config. We probably SHOULD 
let people specify a base config.
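
For reference, HBaseConfiguration already has an overload that takes a base 
Configuration, so "letting people specify a base config" could look roughly like 
the sketch below; the quorum value is only a placeholder and none of this is 
something Metron ships today.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class BaseConfigSketch {
        public static void main(String[] args) {
            // create() picks up hbase-site.xml and core-site.xml from the classpath
            // (via the HBase and Hadoop default resources), which is why the file
            // currently has to be on the classpath.
            Configuration fromClasspath = HBaseConfiguration.create();
            System.out.println(fromClasspath.get("zookeeper.znode.parent"));

            // create(Configuration) merges a caller-supplied base config on top of
            // those classpath resources, so settings in the base take precedence.
            Configuration base = new Configuration();
            base.set("hbase.zookeeper.quorum", "zk1.example.com"); // placeholder
            Configuration merged = HBaseConfiguration.create(base);
            System.out.println(merged.get("hbase.zookeeper.quorum"));
        }
    }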

What I had done was, in the global config, set a property called 
hbase.provider.impl: the fully qualified class name of a class that 
implements the TableProvider interface, which has one method:

    public HTableInterface getTable(Configuration config, String tableName) throws IOException

If you implement your own provider that ignores the config argument and resolves 
the HBase table with your own injected config, that will work.
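
For illustration, here is a rough sketch of what such a provider could look like. 
Treat it as a sketch under assumptions: the TableProvider package name is from 
memory, the /etc/hbase/conf path is only an example, and the HTable constructor 
used here is deprecated in HBase 1.x but still available.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.HTableInterface;
    import org.apache.metron.hbase.TableProvider;

    // Ignores the Configuration handed in and resolves the table against a
    // config built from an explicit hbase-site.xml. Register the class under
    // hbase.provider.impl in the global config.
    public class FileBackedTableProvider implements TableProvider {

        // Example path; point this at wherever your cluster keeps the file.
        private static final String HBASE_SITE = "/etc/hbase/conf/hbase-site.xml";

        @Override
        public HTableInterface getTable(Configuration ignored, String tableName) throws IOException {
            Configuration config = HBaseConfiguration.create();
            config.addResource(new Path(HBASE_SITE));
            return new HTable(config, tableName); // deprecated but still present in HBase 1.x
        }
    }

The value you put under hbase.provider.impl is then the fully qualified name of 
that class.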

Thanks 
Mohan DV

On 1/24/19, 8:56 PM, "Otto Fowler" <ottobackwa...@gmail.com> wrote:

    Hi Anil,
    Can you create a jira on this with these details and a general overview of
    your use case?
    It looks like the HBaseConfiguration we use in the HTableConnector
    is built with the create() method, which loads configuration from classpath resources.
    
    I think we would need to do some work to support the external file.
    
    
    
    On January 24, 2019 at 10:14:46, Anil Donthireddy (
    anil.donthire...@sstech.us) wrote:
    
    Hi,
    
    
    
    I have written a Java application which uses the Stellar processor to execute
    Stellar expressions. The issue I am facing is that I am unable to connect to
    HBase unless I place hbase-site.xml in the src/main/resources/ folder of the
    project. Since packaging hbase-site.xml inside the jar is not the proper
    approach, I would like to understand how hbase-site.xml gets onto the
    classpath when the profiler topology is started.
    
    
    
    The ways I tried are
    
    1)      Setting the classpath to the HBase conf folder using the command "java -cp
    $CLASSPATH:/etc/hbase/conf:/etc/hadoop/conf -jar myJar.jar"
    
    2)      Adding the HBase conf folder to HADOOP_CLASSPATH. Below is the resulting
    Hadoop classpath:
    
    
/usr/hdp/2.6.1.0-129/hadoop/conf:/usr/hdp/2.6.1.0-129/hadoop/lib/*:/usr/hdp/2.6.1.0-129/hadoop/.//*:/usr/hdp/2.6.1.0-129/hadoop-hdfs/./:/usr/hdp/2.6.1.0-129/hadoop-hdfs/lib/*:/usr/hdp/2.6.1.0-129/hadoop-hdfs/.//*:/usr/hdp/2.6.1.0-129/hadoop-yarn/lib/*:/usr/hdp/2.6.1.0-129/hadoop-yarn/.//*:/usr/hdp/2.6.1.0-129/hadoop-mapreduce/lib/*:/usr/hdp/2.6.1.0-129/hadoop-mapreduce/.//*::mysql-connector-java.jar:postgresql-jdbc2ee.jar:postgresql-jdbc2.jar:postgresql-jdbc3.jar:postgresql-jdbc.jar:/etc/hbase/conf/:/usr/hdp/2.6.1.0-129/tez/*:/usr/hdp/2.6.1.0-129/tez/lib/*:/usr/hdp/2.6.1.0-129/tez/conf
    
    
    
    One more step I would like to try is to set the property
    "zookeeper.znode.parent" on the Configuration object while instantiating the
    HBaseConnector, but trying that fix would require changes inside the Metron
    code itself.
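
    A minimal sketch of what that would look like; the quorum, port, and znode
    values below are placeholders for whatever the cluster actually uses (HDP
    commonly uses /hbase-unsecure):

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.hbase.HBaseConfiguration;

        public class ZnodeParentSketch {
            public static Configuration buildConfig() {
                Configuration config = HBaseConfiguration.create();
                // Placeholder values; the real ones come from the cluster's hbase-site.xml.
                config.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com");
                config.set("hbase.zookeeper.property.clientPort", "2181");
                config.set("zookeeper.znode.parent", "/hbase-unsecure");
                return config;
            }
        }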
    
    
    
    I would like to know whether anyone has been able to provide hbase-site.xml to
    a standalone Java application, or to extend the Metron Stellar processor and
    execute profile definitions successfully.
    
    Please provide any inputs to resolve the issue.
    
    
    
    Thanking you.
    
    
    
    Thanks,
    
    Anil.
    
