CurtHagenlocher commented on issue #1727:
URL: https://github.com/apache/arrow-adbc/issues/1727#issuecomment-2103006499

   I think making this work would require not returning until we've read the 
first Arrow batch, and taking the schema from that batch instead of from the 
Thrift-based metadata. I'm not sure yet how worthwhile this is. I'm pretty 
convinced that "Spark Connect" is the right path forward for Spark, and AFAIK 
Arrow results aren't supported by Impala or by non-Spark implementations of 
HiveServer2. What I don't know is how quickly Spark users will adopt newer 
versions: Spark Connect requires Spark 3.4, which is barely a year old.
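
   For what it's worth, here's a minimal sketch of what I mean (in Python with 
pyarrow, not the driver's actual code; the function name and arguments are 
hypothetical). The reader blocks on the first batch and takes its schema from 
that, falling back to the Thrift-derived schema only for an empty result set, 
which has no batch to learn from:

```python
from typing import Iterator

import pyarrow as pa


def reader_from_first_batch(
    batches: Iterator[pa.RecordBatch],
    thrift_schema: pa.Schema,
) -> pa.RecordBatchReader:
    """Block until the first Arrow batch arrives and take the schema from it.

    `thrift_schema` stands in for a schema built from the Thrift metadata;
    it is only used as a fallback when the result set is empty.
    """
    try:
        # Don't return until we've read the first batch...
        first = next(batches)
    except StopIteration:
        # ...but an empty result has no batch to take a schema from, so the
        # Thrift-derived schema is still needed here.
        return pa.RecordBatchReader.from_batches(thrift_schema, iter([]))

    def chained() -> Iterator[pa.RecordBatch]:
        # Re-emit the batch we consumed, then the rest of the stream.
        yield first
        yield from batches

    # The schema now comes from real Arrow data, not the Thrift metadata.
    return pa.RecordBatchReader.from_batches(first.schema, chained())
```

   The obvious cost is latency: the call can't return until the server has 
produced at least one batch.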

