You can cache frequently used Hive tables/partitions (ideally using the Tez engine and ORC format) since they are just files. However, this could be optimized so that frequently used tables/partitions are detected automatically. This use case might become irrelevant with Hive LLAP.
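As a rough sketch of the Hive side of this (the table name, schema, and igfs:// path below are hypothetical, assuming IGFS from the Hadoop Accelerator is used as the table location):

```sql
-- Use the Tez execution engine for queries over the cached files.
SET hive.execution.engine=tez;

-- Store a frequently used table as ORC files under an IGFS path,
-- so the Hadoop Accelerator can serve the underlying files from memory.
CREATE TABLE sales_orc (id BIGINT, amount DOUBLE)
STORED AS ORC
LOCATION 'igfs://igfs@localhost/warehouse/sales_orc';
```

Partitioned tables work the same way; each partition is just a directory of ORC files under the IGFS location.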
> On 19 Nov 2016, at 22:26, Valentin Kulichenko <valentin.kuliche...@gmail.com> wrote:
>
> Hadoop Accelerator is a plugin to Ignite and this plugin is used by Hadoop
> when running its jobs. ignite-spark module only provides IgniteRDD which
> Hadoop obviously will never use.
>
> Is there another use case for Hadoop Accelerator which I'm missing?
>
> -Val
>
> On Sat, Nov 19, 2016 at 3:12 AM, Dmitriy Setrakyan <dsetrak...@apache.org> wrote:
>
>> Why do you think that spark module is not needed in our hadoop build?
>>
>> On Fri, Nov 18, 2016 at 5:44 PM, Valentin Kulichenko <
>> valentin.kuliche...@gmail.com> wrote:
>>
>>> Folks,
>>>
>>> Is there anyone who understands the purpose of including ignite-spark
>>> module in the Hadoop Accelerator build? I can't figure out a use case for
>>> which it's needed.
>>>
>>> In case we actually need it there, there is an issue then. We actually have
>>> two ignite-spark modules, for 2.10 and 2.11. In Fabric build everything is
>>> good, we put both in 'optional' folder and user can enable either one. But
>>> in Hadoop Accelerator there is only 2.11 which means that the build doesn't
>>> work with 2.10 out of the box.
>>>
>>> We should either remove the module from the build, or fix the issue.
>>>
>>> -Val