[jira] [Comment Edited] (SPARK-47033) EXECUTE IMMEDIATE USING does not recognize session variable names
[ https://issues.apache.org/jira/browse/SPARK-47033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820680#comment-17820680 ]

A G edited comment on SPARK-47033 at 2/27/24 6:08 PM:
------------------------------------------------------

I want to work on this! PR: https://github.com/apache/spark/pull/45293

was (Author: JIRAUSER304341):
I want to work on this!

> EXECUTE IMMEDIATE USING does not recognize session variable names
> -----------------------------------------------------------------
>
>                 Key: SPARK-47033
>                 URL: https://issues.apache.org/jira/browse/SPARK-47033
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 4.0.0
>            Reporter: Serge Rielau
>            Priority: Major
>              Labels: pull-request-available
>
> {noformat}
> DECLARE parm = 'Hello';
> EXECUTE IMMEDIATE 'SELECT :parm' USING parm;
> [ALL_PARAMETERS_MUST_BE_NAMED] Using name parameterized queries requires all
> parameters to be named. Parameters missing names: "parm". SQLSTATE: 07001
> EXECUTE IMMEDIATE 'SELECT :parm' USING parm AS parm;
> Hello
> {noformat}
> Variables are like column references: they act as their own aliases, and thus
> should not need an explicit AS clause to associate with a named parameter of
> the same name.
> Note that, unlike in PySpark, this matching should be case insensitive (not
> yet verified).

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
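The resolution behavior the ticket asks for can be sketched in pure Python. This is illustrative only: the function name `resolve_using_args` and its shape are hypothetical, not Spark APIs (the actual implementation lives in Spark's Scala analyzer). The idea is that an unnamed USING argument that is a bare variable reference supplies its own name, matched case-insensitively against the query's named parameters.

```python
# Illustrative sketch (not Spark code): resolve EXECUTE IMMEDIATE ... USING
# arguments against named parameters. An explicitly aliased argument
# ("expr AS name") keeps its alias; a bare session-variable reference acts
# as its own alias, matched case-insensitively, as the ticket proposes.

def resolve_using_args(named_params, using_args):
    """named_params: parameter names appearing as :name in the SQL text.
    using_args: list of (expression, alias_or_None) pairs from USING.
    Returns {param_name: expression}; raises on an unnamed, non-variable arg.
    """
    resolved = {}
    for expr, alias in using_args:
        # A bare identifier (e.g. a session variable) names itself.
        name = alias if alias is not None else (expr if expr.isidentifier() else None)
        if name is None:
            raise ValueError(f"[ALL_PARAMETERS_MUST_BE_NAMED] missing name: {expr!r}")
        # Case-insensitive association with the named parameter.
        for p in named_params:
            if p.lower() == name.lower():
                resolved[p] = expr
                break
    return resolved

# 'USING parm' now matches ':parm' without requiring 'AS parm':
print(resolve_using_args(["parm"], [("parm", None)]))  # {'parm': 'parm'}
# ...and the match survives a case difference:
print(resolve_using_args(["Parm"], [("parm", None)]))  # {'Parm': 'parm'}
```

Under this sketch, only a non-variable expression such as the literal `'Hello'` would still require an explicit `AS` alias, which matches the column-reference analogy in the report.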
[jira] [Commented] (SPARK-43256) Assign a name to the error class _LEGACY_ERROR_TEMP_2021
[ https://issues.apache.org/jira/browse/SPARK-43256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17819237#comment-17819237 ]

A G commented on SPARK-43256:
-----------------------------

I want to work on this. PR: [https://github.com/apache/spark/pull/45198]

> Assign a name to the error class _LEGACY_ERROR_TEMP_2021
> ---------------------------------------------------------
>
>                 Key: SPARK-43256
>                 URL: https://issues.apache.org/jira/browse/SPARK-43256
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>    Affects Versions: 3.5.0
>            Reporter: Max Gekk
>            Priority: Minor
>              Labels: pull-request-available, starter
>
> Choose a proper name for the error class *_LEGACY_ERROR_TEMP_2021* defined in
> {*}core/src/main/resources/error/error-classes.json{*}. The name should be
> short but complete (see the examples in error-classes.json).
> Add a test that triggers the error from user code, if such a test does not
> yet exist. Check the exception fields using {*}checkError(){*}. That function
> checks only the valuable error fields and avoids depending on the error text
> message, so tech editors can modify the error format in error-classes.json
> without breaking Spark's internal tests. Migrate other tests that might
> trigger the error onto checkError().
> If you cannot reproduce the error from user space (via a SQL query), replace
> it with an internal error; see {*}SparkException.internalError(){*}.
> Improve the error message format in error-classes.json if the current text is
> unclear, and propose a way for users to avoid and fix this kind of error.
> Please look at the PRs below as examples:
> * [https://github.com/apache/spark/pull/38685]
> * [https://github.com/apache/spark/pull/38656]
> * [https://github.com/apache/spark/pull/38490]