The only way I can think of would be to access the Hive tables through their
respective Thrift servers running on the different clusters, though I am not
sure you can do it within Spark. Basically, two different JDBC connections.
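As a rough sketch of that idea: read each table over JDBC from its cluster's HiveServer2 endpoint, join them, and write the result back over the second connection. The hostnames, ports, table names, and join key below are all hypothetical, and the Hive JDBC driver would need to be on the classpath. I have not verified this end to end.

```scala
// Sketch only: cross-cluster join via two HiveServer2 JDBC connections.
// Hostnames, table names, and the join column "id" are hypothetical.
import org.apache.spark.sql.SparkSession

object CrossClusterJoin {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("cross-cluster-join")
      .getOrCreate()

    // Table from the data processing cluster, read over JDBC.
    val processingDf = spark.read
      .format("jdbc")
      .option("url", "jdbc:hive2://processing-cluster:10000/default")
      .option("driver", "org.apache.hive.jdbc.HiveDriver")
      .option("dbtable", "processing_table")
      .load()

    // Table from the analytics cluster, via a second JDBC connection.
    val analyticsDf = spark.read
      .format("jdbc")
      .option("url", "jdbc:hive2://analytics-cluster:10000/default")
      .option("driver", "org.apache.hive.jdbc.HiveDriver")
      .option("dbtable", "analytics_table")
      .load()

    // Join in Spark, then write back to the analytics cluster.
    val joined = processingDf.join(analyticsDf, Seq("id"))

    joined.write
      .format("jdbc")
      .option("url", "jdbc:hive2://analytics-cluster:10000/default")
      .option("driver", "org.apache.hive.jdbc.HiveDriver")
      .option("dbtable", "result_table")
      .save()

    spark.stop()
  }
}
```

Note that reading Hive through JDBC pulls the data through the driver rather than reading files in parallel, so performance may be poor for large tables; the usual alternative is to copy the data between clusters (e.g. with distcp) and use a single metastore.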

HTH

Dr Mich Talebzadeh



LinkedIn
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 4 December 2016 at 21:45, ayan guha <guha.a...@gmail.com> wrote:

> Hi
>
> Is it possible to access hive tables sitting on multiple clusters in a
> single spark application?
>
> We have a data processing cluster and analytics cluster. I want to join a
> table from analytics cluster with another table in processing cluster and
> finally write back in analytics cluster.
>
> Best
> Ayan
>
