Hi Shubhang,

Maybe it is too late for such an urgent question. For the two questions:
1. Livy supports Scala 2.12. Just set SPARK_HOME to your Spark installation;
Livy will detect your Scala version and use the matching Livy jar for it.
2. If it is a batch session, you cannot add a jar to an already running
session. This makes sense: you cannot interact with a batch session. If it
is an interactive session, you can add a jar by calling the add-jar REST API.
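As a rough sketch of this flow against the Livy REST API (a sketch, not official client code; the server URL and helper names here are made up for illustration): jars can also be supplied in the POST /sessions body when the interactive session is created, and code is then run via POST /sessions/{id}/statements.

```python
import json

LIVY_URL = "http://localhost:8998"  # hypothetical Livy server address


def create_session_request(jars):
    """Build the POST /sessions payload for an interactive Spark session.

    Jars listed here are shipped to the session at creation time, which
    sidesteps having to add them to an already running session.
    """
    body = {
        "kind": "spark",  # Scala interactive session
        "jars": jars,     # e.g. HDFS or local paths to your jars
    }
    return "POST", f"{LIVY_URL}/sessions", body


def run_statement_request(session_id, code):
    """Build the POST /sessions/{id}/statements payload to run code."""
    body = {"code": code}
    return "POST", f"{LIVY_URL}/sessions/{session_id}/statements", body


# Example payloads (serialize with json.dumps before sending over HTTP):
method, url, body = create_session_request(["hdfs:///jars/my-app.jar"])
print(method, url, json.dumps(body))
```

The functions only build the requests, so the sketch stays runnable without a Livy server; any HTTP client can send them.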

To guarantee a <500ms response time, you should use an interactive session
and make sure Spark has all executors ready before executing. But if it is a
big query, it is still hard to guarantee <500ms. In any case, the time
depends on Spark, not Livy.
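The "executors ready before execute" advice above can be sketched like this (a sketch only; the helper names are made up, while the payload fields follow the Livy REST API): create the session with executors pre-allocated, then poll GET /sessions/{id} until the state is "idle" before sending any latency-sensitive statement.

```python
import time


def warm_session_payload(num_executors=4):
    """POST /sessions payload that pre-allocates executors up front,
    so the first real query does not pay executor start-up cost."""
    return {
        "kind": "spark",
        "numExecutors": num_executors,  # Livy session-creation field
        "conf": {"spark.executor.instances": str(num_executors)},
    }


def session_ready(session_json):
    """A session can serve low-latency statements once Livy reports it
    as 'idle' (started and waiting for input)."""
    return session_json.get("state") == "idle"


def wait_until_ready(fetch_state, timeout_s=120, poll_s=2.0):
    """Poll fetch_state() (e.g. a GET /sessions/{id} call) until the
    session is idle or the timeout expires.

    fetch_state is injected as a callable so the sketch stays testable
    without a live Livy server.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if session_ready(fetch_state()):
            return True
        time.sleep(poll_s)
    return False
```

Keeping such a warmed session alive and reusing it across API calls is what makes sub-second responses plausible for small queries; session start-up alone takes far longer than 500ms.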

Thanks!


larry mccay <lmc...@apache.org> wrote on Tue, Nov 29, 2022 at 23:22:

> Hi Shubhang -
>
> I have approved this note to go to the dev@ list for Livy as a moderator.
> I suggest that you subscribe to the list in order to get the responses from
> the community.
>
> thanks,
>
> --larry
>
> On Tue, Nov 29, 2022 at 6:09 AM <
> dev-reject-1669720161.407095.ofnpgmilmjggalahi...@livy.apache.org> wrote:
>
> >
> > ---------- Forwarded message ----------
> > From: Shubhang Arora <shubhang.ar...@delhivery.com.invalid>
> > To: dev@livy.apache.org
> > Cc:
> > Bcc:
> > Date: Tue, 29 Nov 2022 16:38:41 +0530
> > Subject: [URGENT] Doubts on Spark <> Livy Integration
> > Hi Livy dev team
> >
> > *Context*
> > I want to enable live interaction with our spark cluster, where users
> > would be requesting some data over the web/mobile app, and in the backend
> > we want to run computations over spark and return the result in the API
> > call, and the response time that I am aiming for is less than 500ms.
> >
> > I have some doubts regarding implementing this using Livy:
> > 1. The Spark job we have uses the latest Spark version (3.3.0), while the
> > Livy documentation says it requires at least Spark 1.6 and supports both
> > Scala 2.10 and 2.11 builds of Spark, whereas Spark 3.3.0 is built with
> > Scala 2.12. Is there any way around this?
> >
> > 2. While using the batches api to initiate a spark job, I want to specify
> > jar files which need to be executed, though I found one open issue (Not
> > able to submit jars to Livy via batch/session to an already running
> > session/context.
> > <
> https://issues.apache.org/jira/projects/LIVY/issues/LIVY-869?filter=allopenissues
> >).
> > Is there any way around this?
> >
> > It would be great if you could also share your thoughts on this whole
> > architecture. Would it be possible to guarantee a <500ms response time?
> >
> > Thanks & Regards
> > Shubhang Arora
> >
> >
>
