zhengruifeng commented on PR #48246:
URL: https://github.com/apache/spark/pull/48246#issuecomment-2375501608
@dongjoon-hyun good point, but I think we cannot reuse the existing SQL-side
`.sql` files for now, because this test is mainly for a Python-side feature, e.g.
```
df0 = self.spark.range(10)
df1 = self.spark.sql(
    "SELECT * FROM {df} WHERE id > ?",
    args=[1],
    df=df0,
)
```
which takes a PySpark DataFrame as a named argument.
But it is probably doable to introduce a similar framework for PySpark in
the future.
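For illustration, a minimal standalone sketch of the same call outside the test class (the `SparkSession` setup and the `show()` call are additions for runnability; the `args`/keyword usage mirrors the snippet above):
```
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# `{df}` in the query text is resolved against the DataFrame passed as the
# keyword argument `df`, and the positional marker `?` is bound from `args`.
# Because the query needs a live DataFrame object at call time, it cannot be
# expressed as a plain `.sql` golden file.
df0 = spark.range(10)
df1 = spark.sql(
    "SELECT * FROM {df} WHERE id > ?",
    args=[1],
    df=df0,
)
df1.show()  # rows with id 2 through 9
```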