Hive-thrift is definitely the best option so far. That said, I am wondering
if it's possible to load the metastore in local mode [1] to avoid the
dependency on an external service. Can I read the javax.jdo.option.*
parameters from HIVE_CONF_DIR and talk directly to the database server
hosting the Hive metadata, roughly as in the sketch below?

[1] https://cwiki.apache.org/Hive/adminmanual-metastoreadmin.html
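
Concretely, this is the kind of wiring I have in mind. It is only a rough,
untested sketch: it assumes hive-site.xml (with the javax.jdo.option.*
settings) lives in HIVE_CONF_DIR, and that the hive-metastore jar plus the
JDBC driver for the backing database are on my program's classpath. The
class, database and table names are just placeholders.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;

public class LocalModeMetastoreSketch {
  public static void main(String[] args) throws Exception {
    HiveConf conf = new HiveConf(HiveMetaStoreClient.class);
    // Load the user's site configuration (javax.jdo.option.ConnectionURL,
    // ConnectionDriverName, user name, password) from HIVE_CONF_DIR in case
    // it is not already on the classpath. Assumes HIVE_CONF_DIR is set.
    conf.addResource(new Path(System.getenv("HIVE_CONF_DIR"), "hive-site.xml"));
    // An empty hive.metastore.uris means no thrift service: the client runs
    // the metastore embedded (local mode) and talks to the JDO database
    // directly.
    conf.set("hive.metastore.uris", "");

    HiveMetaStoreClient client = new HiveMetaStoreClient(conf);
    System.out.println(
        client.getTable("mydatabase", "stocks").getSd().getLocation());
    client.close();
  }
}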

Thanks, 
Parag




On 12/02/13 10:23 PM, "Edward Capriolo" <edlinuxg...@gmail.com> wrote:

>If you use hive-thrift/hive-service you can get the location of a
>table through the Table API (instead of Dean's horrid bash-isms)
>
>http://hive.apache.org/docs/r0.7.0/api/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.Client.html#get_table(java.lang.String, java.lang.String)
>
>Table t = ....
>t.getSd().getLocation()
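>
>For completeness, a minimal untested sketch of that (it assumes a metastore
>service listening on the default localhost:9083 and libthrift plus the
>hive metastore classes on the classpath; the db/table names are
>placeholders):
>
>import org.apache.hadoop.hive.metastore.api.Table;
>import org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore;
>import org.apache.thrift.protocol.TBinaryProtocol;
>import org.apache.thrift.transport.TSocket;
>import org.apache.thrift.transport.TTransport;
>
>public class PrintTableLocation {
>  public static void main(String[] args) throws Exception {
>    // Connect to the standalone metastore service over thrift.
>    TTransport transport = new TSocket("localhost", 9083);
>    transport.open();
>    ThriftHiveMetastore.Client client =
>        new ThriftHiveMetastore.Client(new TBinaryProtocol(transport));
>    Table t = client.get_table("mydatabase", "stocks");
>    System.out.println(t.getSd().getLocation());
>    transport.close();
>  }
>}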
>
>
>On Tue, Feb 12, 2013 at 9:41 AM, Dean Wampler
><dean.wamp...@thinkbiganalytics.com> wrote:
>> I'll mention another bash hack that I use all the time:
>>
>> hive -e 'some_command' | grep for_what_i_want |
>> sed_command_to_remove_what_i_dont_want
>>
>> For example, the following command will print just the value of
>> hive.metastore.warehouse.dir, sending all the logging junk written to
>>stderr
>> to /dev/null and stripping off the leading
>>"hive.metastore.warehouse.dir="
>> from the stdout output:
>>
>> hive -e 'set hive.metastore.warehouse.dir;' 2> /dev/null | sed -e 's/hive.metastore.warehouse.dir=//'
>>
>> (No grep subcommand required in this case...)
>>
>> You could do something similar with DESCRIBE EXTENDED table PARTITION(...).
>> Suppose you want a script that works for any property. Put the
>>following in
>> a script file, say hive-prop.sh:
>>
>> #!/bin/sh
>> hive -e "set $1;" 2> /dev/null | sed -e "s/$1=//"
>>
>> Make it executable (chmod +x /path/to/hive-prop.sh), then run it this
>>way:
>>
>> /path/to/hive-prop.sh hive.metastore.warehouse.dir
>>
>> Back to asking for metadata for a table. The following script will
>> determine the location of a particular partition for an external
>> "mydatabase.stocks" table:
>>
>> #!/bin/sh
>> hive -e "describe formatted mydatabase.stocks
>>partition(exchange='NASDAQ',
>> symbol='AAPL');" 2> /dev/null | grep Location | sed -e "s/Location:[
>>\t]*//"
>>
>> dean
>>
>> On Mon, Feb 11, 2013 at 4:59 PM, Parag Sarda <psa...@walmartlabs.com>
>>wrote:
>>>
>>> Hello Hive Users,
>>>
>>> I am writing a program in Java which is bundled as a JAR and executed
>>> using the hadoop jar command. I would like to access Hive metadata (read
>>> partition information) in this program. I can ask the user to set the
>>> HIVE_CONF_DIR environment variable before calling my program, or ask for
>>> any reasonable parameters to be passed. If possible, I do not want to
>>> force the user to run the Hive metastore service, so the program stays
>>> more reliable by avoiding external dependencies.
>>>
>>> What is the recommended way to get partition information? Here is my
>>> understanding:
>>> 1. Make sure my JAR is bundled with the hive-metastore[1] library.
>>> 2. Use HiveMetaStoreClient[2].
>>>
>>> Is this correct? If so, how do I read the Hive configuration[3] from
>>> HIVE_CONF_DIR? (A rough sketch of what I mean is below the links.)
>>>
>>> [1] http://mvnrepository.com/artifact/org.apache.hive/hive-metastore
>>> [2] http://hive.apache.org/docs/r0.7.1/api/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.html
>>> [3] http://hive.apache.org/docs/r0.7.1/api/org/apache/hadoop/hive/conf/HiveConf.html
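>>>
>>> To make the question concrete, here is the kind of (untested) sketch I
>>> have in mind. It assumes HIVE_CONF_DIR is set and contains hive-site.xml,
>>> and the database/table names are only placeholders:
>>>
>>> import org.apache.hadoop.fs.Path;
>>> import org.apache.hadoop.hive.conf.HiveConf;
>>> import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
>>> import org.apache.hadoop.hive.metastore.api.Partition;
>>>
>>> public class ListPartitionsSketch {
>>>   public static void main(String[] args) throws Exception {
>>>     HiveConf conf = new HiveConf(HiveMetaStoreClient.class);
>>>     // Explicitly pick up hive-site.xml from HIVE_CONF_DIR in case it is
>>>     // not already on the classpath.
>>>     conf.addResource(new Path(System.getenv("HIVE_CONF_DIR"), "hive-site.xml"));
>>>
>>>     HiveMetaStoreClient client = new HiveMetaStoreClient(conf);
>>>     // (short) -1 asks for all partitions of the table.
>>>     for (Partition p : client.listPartitions("mydatabase", "stocks", (short) -1)) {
>>>       System.out.println(p.getValues() + " -> " + p.getSd().getLocation());
>>>     }
>>>     client.close();
>>>   }
>>> }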
>>>
>>> Thanks in advance,
>>> Parag
>>>
>>
>>
>>
>> --
>> Dean Wampler, Ph.D.
>> thinkbiganalytics.com
>> +1-312-339-1330
>>
