GitHub user gckalia added a comment to the discussion: Who is using Apache 
Kyuubi?

I have the following scenario: a number of applications would like to 
independently run SQL queries via Spark. The data is ingested by 
independent plugins, which write partitions as Parquet files on HDFS. Each 
application requires only a subset of the partitions for its queries, so I 
would like each application to have a separate SparkSession in which the 
tables are loaded with only the partitions that application requires. When 
an application opens a connection, it should be able to load and access the 
tables behind that Spark session. Could you explain how a client can 
specify which data partitions should be loaded into a view in its Spark 
session, and then send query requests against that session? Also, what is 
the lifecycle of such a session? Is it active only as long as the 
connection is active?
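One way to sketch what the asker describes: over its Kyuubi connection (e.g. via the JDBC/Hive driver), each client could issue a `CREATE TEMPORARY VIEW ... SELECT ... WHERE <partition predicate>` statement, since temporary views are scoped to the SparkSession behind the connection and disappear when that session closes. The helper below only builds such a SQL string; the view name, path, and partition column are illustrative assumptions, not taken from the discussion or from any Kyuubi API.

```python
def partition_view_sql(view_name, parquet_path, partition_col, partition_values):
    """Build a Spark SQL statement that exposes only the given Parquet
    partitions as a session-scoped temporary view.

    All arguments are hypothetical examples; a real client would send the
    resulting string over its Kyuubi connection before running queries.
    """
    values = ", ".join(f"'{v}'" for v in partition_values)
    return (
        f"CREATE OR REPLACE TEMPORARY VIEW {view_name} AS "
        f"SELECT * FROM parquet.`{parquet_path}` "
        f"WHERE {partition_col} IN ({values})"
    )

# Example: an application that only needs two daily partitions.
sql = partition_view_sql(
    view_name="app_a_events",
    parquet_path="hdfs:///data/events",
    partition_col="dt",
    partition_values=["2024-01-01", "2024-01-02"],
)
print(sql)
```

Because the view is temporary, its lifetime matches the SparkSession: once the connection (and therefore the session, depending on the configured engine share level) is closed, the view is gone and other applications never see it.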

GitHub link: 
https://github.com/apache/kyuubi/discussions/925#discussioncomment-12775631

----
This is an automatically sent email for notifications@kyuubi.apache.org.
To unsubscribe, please send an email to: 
notifications-unsubscr...@kyuubi.apache.org

