MaxGekk opened a new pull request, #40666:
URL: https://github.com/apache/spark/pull/40666

   ### What changes were proposed in this pull request?
   In the PR, I propose to change the API of parameterized SQL: the type of argument values is replaced from `string` to `Any` in Scala/Java/Python and to `Expression.Literal` in the protobuf API. The language APIs can accept `Any` objects from which literal expressions can be constructed.
   
   This is a backport of https://github.com/apache/spark/pull/40623
   
   #### Scala/Java:
   
   ```scala
     def sql(sqlText: String, args: Map[String, Any]): DataFrame
   ```
   Values of the `args` map are wrapped by the `lit()` function, which leaves `Column` values as-is and creates literals from other Java/Scala objects (for more details, see the `Scala` tab at https://spark.apache.org/docs/latest/sql-ref-datatypes.html).
   
   #### Python:
   
   ```python
   def sql(self, sqlQuery: str, args: Optional[Dict[str, Any]] = None, **kwargs: Any) -> DataFrame:
   ```
   Similarly to the Scala/Java `sql`, Python's `sql()` accepts Python objects as values of the `args` dictionary (see more details about acceptable Python objects at https://spark.apache.org/docs/latest/sql-ref-datatypes.html). `sql()` converts dictionary values to `Column` literal expressions via `lit()`.
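   Conceptually, the wrapping of `args` values can be pictured with a simplified, pure-Python stand-in for `lit()`. This is a hypothetical sketch, not PySpark's actual `Column`/`lit` implementation:

```python
from typing import Any, Dict

class Column:
    """Hypothetical stand-in for a column/literal expression."""
    def __init__(self, value: Any):
        self.value = value
    def __repr__(self) -> str:
        return f"Column({self.value!r})"

def lit(value: Any) -> Column:
    # Leave Column arguments as-is; wrap plain Python objects into literals.
    return value if isinstance(value, Column) else Column(value)

def bind_args(args: Dict[str, Any]) -> Dict[str, Column]:
    # Every value in the args dictionary is passed through lit()
    # before being substituted into the query.
    return {name: lit(value) for name, value in args.items()}

bound = bind_args({"player_name": "E'Twaun Moore", "age": 30})
```

   The point of the sketch is that callers no longer need to produce SQL-escaped strings; plain objects (and prebuilt `Column`s) are accepted directly.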
   
   #### Protobuf:
   
   ```proto
   message SqlCommand {
     // (Required) SQL Query.
     string sql = 1;
   
     // (Optional) A map of parameter names to literal expressions.
     map<string, Expression.Literal> args = 2;
   }
   ```
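   As an illustration, a client could translate values into the `args` map of literal messages along these lines. The messages are modeled here as plain dicts with illustrative field names; this is a sketch, not the actual `Expression.Literal` protobuf schema:

```python
from typing import Any, Dict

def to_literal(value: Any) -> Dict[str, Any]:
    # Map a Python value to a dict that mimics an Expression.Literal message.
    # Field names are hypothetical simplifications, not the real schema.
    if isinstance(value, bool):  # check bool before int: bool subclasses int
        return {"boolean": value}
    if isinstance(value, int):
        return {"long": value}
    if isinstance(value, float):
        return {"double": value}
    if isinstance(value, str):
        return {"string": value}
    raise TypeError(f"unsupported literal type: {type(value).__name__}")

def sql_command(sql: str, args: Dict[str, Any]) -> Dict[str, Any]:
    # Shape mirrors SqlCommand: the SQL text plus a name -> literal map.
    return {"sql": sql, "args": {k: to_literal(v) for k, v in args.items()}}

cmd = sql_command("SELECT :player_name", {"player_name": "E'Twaun Moore"})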
   
   For example:
   ```scala
   scala> val sqlText = """SELECT s FROM VALUES ('Jeff /*__*/ Green'), ('E\'Twaun Moore') AS t(s) WHERE s = :player_name"""
   sqlText: String = SELECT s FROM VALUES ('Jeff /*__*/ Green'), ('E\'Twaun Moore') AS t(s) WHERE s = :player_name
   
   scala> sql(sqlText, args = Map("player_name" -> lit("E'Twaun Moore"))).show(false)
   +-------------+
   |s            |
   +-------------+
   |E'Twaun Moore|
   +-------------+
   ```
   
   ### Why are the changes needed?
   The current implementation of the parameterized `sql()` requires arguments as string values that are parsed to SQL literal expressions, which causes the following issues:
   1. SQL comments are skipped while parsing, so some fragments of the input might be lost. For example, in `'Europe -- Amsterdam'`, the part `-- Amsterdam` is excluded from the input.
   2. Special characters in string values must be escaped, for instance `'E\'Twaun Moore'`.
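   The first issue can be illustrated with a toy line-comment stripper: when an argument is treated as SQL text to be parsed, everything after `--` disappears from the value. This is a minimal illustration of the parsing behavior, not Spark's actual parser:

```python
def strip_line_comment(sql_fragment: str) -> str:
    # A SQL parser treats '--' as the start of a line comment, so the
    # remainder of the line is dropped from the parsed value.
    idx = sql_fragment.find("--")
    return sql_fragment if idx == -1 else sql_fragment[:idx].rstrip()

# Parsing the value as SQL text loses part of the input:
print(strip_line_comment("Europe -- Amsterdam"))  # -> 'Europe'
```

   Passing the value as a typed literal through the new API keeps it intact, and quote escaping (as in `'E\'Twaun Moore'`) also becomes unnecessary.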
   
   ### Does this PR introduce _any_ user-facing change?
   No, since the parameterized SQL feature https://github.com/apache/spark/pull/38864 hasn't been released yet.
   
   ### How was this patch tested?
   By running the affected tests:
   ```
   $ build/sbt "test:testOnly *ParametersSuite"
   $ python/run-tests --parallelism=1 --testnames 'pyspark.sql.tests.connect.test_connect_basic SparkConnectBasicTests.test_sql_with_args'
   $ python/run-tests --parallelism=1 --testnames 'pyspark.sql.session SparkSession.sql'
   ```
   
   Authored-by: Max Gekk <[email protected]>
   (cherry picked from commit 156a12ec0abba8362658a58e00179a0b80f663f2)

