Hi Guys,

That's a big head start. It looks like I need to:

1) Configure Hive to use Derby as a meta db
2) Launch the hive thrift service with bin/hive --service hiveserver
3) Using the Thrift API, I should be able to send queries from remote hosts (rough sketch below)
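
For step 3, here is roughly what I have in mind against the HiveServer Thrift interface (just a sketch, untested; the class and host names are placeholders, and depending on which Thrift version Hive bundles, TSocket/TBinaryProtocol may live under com.facebook.thrift rather than org.apache.thrift):

import java.util.List;

// Thrift transport/protocol classes (package may differ by Thrift version)
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;

import org.apache.hadoop.hive.service.HiveClient;

// one-shot client: connect, run a query, print the rows, exit
public class RemoteHiveQuery {
  public static void main(String[] args) throws Exception {
    // HiveServer listens on port 10000 by default; host name is made up
    TSocket transport = new TSocket("hive-server-host", 10000);
    transport.open();
    HiveClient client = new HiveClient(new TBinaryProtocol(transport));

    client.execute("SHOW TABLES");
    // each result row comes back as a single tab-delimited string
    List<String> rows = client.fetchAll();
    for (String row : rows) {
      System.out.println(row);
    }
    transport.close();
  }
}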

Am I missing anything from there?

Thanks!

On Thu, Feb 19, 2009 at 2:32 PM, Raghu Murthy <[email protected]> wrote:

> Hive supports both a Thrift service and a partial JDBC interface. Check
> out the sample usage in service/src/test and jdbc/src/test. I can help
> you set up the Thrift service if you have problems.
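>
> As a rough sketch of the JDBC side (the driver class and URL scheme are
> the ones used by the jdbc module; host, port, and table name below are
> just placeholders):
>
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import java.sql.Statement;
>
> public class HiveJdbcSketch {
>   public static void main(String[] args) throws Exception {
>     // register the Hive JDBC driver
>     Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
>     // point the URL at the host running the Thrift service (default port 10000)
>     Connection con =
>         DriverManager.getConnection("jdbc:hive://hive-server-host:10000/default");
>     Statement stmt = con.createStatement();
>     ResultSet rs = stmt.executeQuery("SELECT * FROM some_table LIMIT 10");
>     while (rs.next()) {
>       System.out.println(rs.getString(1));
>     }
>     con.close();
>   }
> }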
>
>
> On 2/19/09 2:16 PM, "Edward Capriolo" <[email protected]> wrote:
>
> > The best way to answer this is that all Hadoop components work
> > remotely, assuming you have the proper configuration and library files
> > (the same ones from the remote cluster).
> >
> > I attached a "HiveLet" (a made-up term). It was my first API test
> > program; it is more or less a one-shot, run-the-query-and-exit
> > program.
> >
> > You need to run the Meta DB in Server mode for concurrent access.
> > http://wiki.apache.org/hadoop/HiveDerbyServerMode
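> >
> > The short version of that page: start the Derby network server on some
> > host and point the metastore properties in hive-site.xml at it, roughly
> > like this (host/port are illustrative; you also need derbyclient.jar on
> > the Hive classpath):
> >
> > javax.jdo.option.ConnectionURL = jdbc:derby://metastore-host:1527/metastore_db;create=true
> > javax.jdo.option.ConnectionDriverName = org.apache.derby.jdbc.ClientDriver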
> >
> > It is slightly more complicated if your desktop is Windows, but still doable.
> >
> > You would need:
> > Hadoop conf directory
> > Hive conf directory
> > Hadoop distribution (technically only the jars)
> > Hive distribution (technically only the jars)
> >
> > When you start Hadoop/Hive, they both pick up the locations of the
> > components from the configurations and start happily on a remote
> > machine (not counting firewall issues).
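> >
> > Concretely, the client-side Hadoop config mostly just has to name the
> > remote cluster, something along these lines in the conf directory
> > (host names and ports are illustrative):
> >
> > fs.default.name = hdfs://namenode-host:9000
> > mapred.job.tracker = jobtracker-host:9001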
>
>
