[ https://issues.apache.org/jira/browse/CALCITE-6693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17899260#comment-17899260 ]

Julian Hyde commented on CALCITE-6693:
--------------------------------------

The test - as its name indicates - tests conversion from RelNode to SQL. That 
is, the RelToSqlConverter class. We allow a SQL string because it is a more 
concise way to create a relational algebra tree.

You seem to want to test translation from dialect X to dialect Y for various 
pairs of dialects (X, Y). There are a few problems with that. First, there is a 
huge number of combinations of X, Y. Second, this would no longer be a unit 
test, because it would be checking parser, validator, sql-to-rel, and 
rel-to-sql.

Can we figure out what are the goals here, and come up with a design that meets 
those goals?

> Add Source SQL Dialect to RelToSqlConverterTest
> -----------------------------------------------
>
>                 Key: CALCITE-6693
>                 URL: https://issues.apache.org/jira/browse/CALCITE-6693
>             Project: Calcite
>          Issue Type: Improvement
>          Components: core
>    Affects Versions: 1.38.0
>            Reporter: yanjing.wang
>            Assignee: yanjing.wang
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 1.39.0
>
>
> Currently, {{RelToSqlConverterTest}} converts the original SQL to RelNode 
> using the default validator config and the target dialect's type system. This 
> is confusing because it's unclear whether the test is meant to verify the SQL 
> conversion between different dialects or within the same dialect. I believe 
> it should verify the conversion between source and target dialects. 
> Therefore, we should clearly define the source and target dialects and 
> provide a way to set them. The code would look like this:
> {code:java}
> @Test void testNullCollation() {
>   final String query = "select * from \"product\" order by \"brand_name\"";
>   final String expected = "SELECT *\n"
>       + "FROM \"foodmart\".\"product\"\n"
>       + "ORDER BY \"brand_name\"";
>   final String sparkExpected = "SELECT *\n"
>       + "FROM `foodmart`.`product`\n"
>       + "ORDER BY `brand_name` NULLS LAST";
>   sql(query)
>       .sourceDialect(PrestoSqlDialect.DEFAULT)
>       .withPresto().ok(expected)
>       .withSpark().ok(sparkExpected);
> } {code}
> We also need to set the correct null collation config based on the source 
> dialect, because the source and target dialects may have different null 
> collations.
> For the case where the source dialect equals the target dialect:
> {code:java}
> @Test void testCastDecimalBigPrecision() {
>   final String query = "select cast(\"product_id\" as decimal(60,2)) "
>       + "from \"product\" ";
>   final String expectedRedshift = "SELECT CAST(\"product_id\" AS DECIMAL(38, 2))\n"
>       + "FROM \"foodmart\".\"product\"";
>   sql(query)
>       .withRedshift()
>       .withSourceDialectEqualsTargetDialect()
>       .ok(expectedRedshift);
> } {code}
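As an aside on the null-collation point: the dialects' defaults really do diverge here. Presto's default ascending sort places nulls last, while Spark's places them first, which is why the Spark output in the first example needs an explicit NULLS LAST. A minimal plain-Java sketch of the two orderings (using only java.util.Comparator, not Calcite API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class NullCollationDemo {
  // Ascending sort that places nulls last (Presto-style default).
  static List<String> nullsLast(List<String> in) {
    List<String> out = new ArrayList<>(in);
    out.sort(Comparator.nullsLast(Comparator.naturalOrder()));
    return out;
  }

  // Ascending sort that places nulls first (Spark-style default).
  static List<String> nullsFirst(List<String> in) {
    List<String> out = new ArrayList<>(in);
    out.sort(Comparator.nullsFirst(Comparator.naturalOrder()));
    return out;
  }

  public static void main(String[] args) {
    List<String> brands = Arrays.asList("Washington", null, "Akron");
    System.out.println(nullsLast(brands));   // [Akron, Washington, null]
    System.out.println(nullsFirst(brands));  // [null, Akron, Washington]
  }
}
```

The same ORDER BY therefore yields different row orders under the two defaults, so a converted query must carry an explicit null ordering whenever the source dialect's default differs from the target's.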



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
