If you manually create the Hive table and partitions to match the format
Pig writes in, it should just work.  HCatalog exists to do that
deep integration; if you want deeper integration than just
matching up formats and metadata, you will pretty much wind up
rewriting HCat...
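As a rough sketch of "matching up formats and metadata" (the table name, schema, and paths here are hypothetical, and assume Pig stored tab-delimited text under a partitioned directory layout):

```sql
-- Hypothetical: Pig wrote TSV files under /tmp/pig_out/dt=2011-11-14/
CREATE EXTERNAL TABLE events (
  user_id STRING,
  action  STRING
)
PARTITIONED BY (dt STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION '/tmp/pig_out';

-- Register the partition Pig just wrote:
ALTER TABLE events ADD PARTITION (dt='2011-11-14')
  LOCATION '/tmp/pig_out/dt=2011-11-14';
```

With an external table, Hive just reads the files in place; no data gets copied or moved.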
For your second question:
grunt> sh echo foo
foo
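Since `sh` runs an arbitrary shell command from grunt, you could in principle kick off the Hive load step from the same Pig session, something like (assuming the hive CLI is on the PATH; table and path are hypothetical):

```
grunt> sh hive -e "ALTER TABLE events ADD PARTITION (dt='2011-11-14')"
```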

D

On Mon, Nov 14, 2011 at 1:03 PM, Stan Rosenberg
<[email protected]> wrote:
> On Mon, Nov 14, 2011 at 3:08 PM, Dmitriy Ryaboy <[email protected]> wrote:
>> My lack of imagination is showing -- can you explain what you mean by
>> integrating hive queries with pig,
>
> For example, we implemented a storage function which creates path
> partitioning based on a given sequence of columns; the output is
> stored in a temporary hdfs location.  Subsequent to the pig 'store'
> command we'd like to execute the hive 'load' command.
>
>> and what the problems with hcatalog are?
>
> One of them is the version requirement.
>
>>
>> For thrift, you might want to check jruby integration
>> (https://issues.apache.org/jira/browse/PIG-2317)
>>
>
> Thanks, but we'd like to limit the language choices to either java or
> python.  Btw, is there some plan to have a shell-execute command in
> pig? E.g., fs -exec "java Foo"
>
