srielau commented on code in PR #52334:
URL: https://github.com/apache/spark/pull/52334#discussion_r2392114181
##########
sql/core/src/main/scala/org/apache/spark/sql/classic/SparkSession.scala:
##########
@@ -448,16 +448,26 @@ class SparkSession private(
  private[sql] def sql(sqlText: String, args: Array[_], tracker: QueryPlanningTracker): DataFrame =
    withActive {
      val plan = tracker.measurePhase(QueryPlanningTracker.PARSING) {
-       val parsedPlan = sessionState.sqlParser.parsePlan(sqlText)
-       if (args.nonEmpty) {
-         // Check for SQL scripting with positional parameters before creating parameterized query
-         if (parsedPlan.isInstanceOf[CompoundBody]) {
-           throw SqlScriptingErrors.positionalParametersAreNotSupportedWithSqlScripting()
+       val parsedPlan = if (args.nonEmpty) {
+         // Set parameter context for parsing
+         val paramContext = PositionalParameterContext(args.map(lit(_).expr).toSeq)
+         ThreadLocalParameterContext.withContext(paramContext) {
+           val parsed = sessionState.sqlParser.parsePlan(sqlText)
Review Comment:
The flow is:
1. SparkSession sets up parameter context in ThreadLocal
2. SparkSession calls sqlParser.parsePlan(sqlText)
3. SparkSqlParser.parse automatically detects the ThreadLocal parameter
context
4. SparkSqlParser calls the pre-parser
(ParameterHandler.substituteParameters)
5. Pre-parser returns substituted SQL text (e.g., "SELECT ?" → "SELECT 42")
6. SparkSqlParser continues with normal ANTLR parsing on the substituted text
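   The steps above can be sketched as a minimal, self-contained model. `ThreadLocalParameterContext` and the pre-parser stand in for the PR's actual classes; the method names and the character-level `?` substitution here are assumptions for illustration, not the real Spark implementation:

   ```scala
   import scala.util.DynamicVariable

   // Hypothetical stand-in for the PR's ThreadLocalParameterContext:
   // a thread-local slot holding the positional parameter literals.
   object ThreadLocalParameterContext {
     private val current = new DynamicVariable[Option[Seq[String]]](None)

     // Steps 1-2: the caller installs the context, then invokes the parser
     // inside the dynamic scope.
     def withContext[T](params: Seq[String])(body: => T): T =
       current.withValue(Some(params))(body)

     def get: Option[Seq[String]] = current.value
   }

   // Hypothetical stand-in for SparkSqlParser.parse. Steps 3-6: if a
   // parameter context is present, run the pre-parser (substitute each '?'
   // with the next literal) and hand the substituted text to normal parsing
   // (modeled here as returning the text).
   object Parser {
     def parsePlan(sqlText: String): String =
       ThreadLocalParameterContext.get match {
         case Some(params) =>
           val it = params.iterator
           sqlText.map(c => if (c == '?' && it.hasNext) it.next() else c.toString).mkString
         case None => sqlText
       }
   }

   object Demo extends App {
     val out = ThreadLocalParameterContext.withContext(Seq("42")) {
       Parser.parsePlan("SELECT ?")
     }
     println(out)  // SELECT 42
   }
   ```

   `DynamicVariable` gives the same install-and-restore discipline as a try/finally around a raw `ThreadLocal`, which is why the parser can pick the context up implicitly without a signature change.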
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]