zabetak commented on code in PR #3733:
URL: https://github.com/apache/calcite/pull/3733#discussion_r1574555950
##########
babel/src/test/resources/sql/redshift.iq:
##########
@@ -523,78 +523,78 @@ select deptno, ratio_to_report(sal) over (partition by deptno) from emp;
!}
# STDDEV_POP
-select empno, stddev_pop(comm) over (order by empno rows unbounded preceding) from emp where deptno = 30 order by 1;
+select empno, stddev_pop(CAST(comm AS DECIMAL(12, 4))) over (order by empno rows unbounded preceding) from emp where deptno = 30 order by 1;
Review Comment:
According to the description, I assume that the `CAST` was added to avoid a
runtime error, but it is not evident why that error is raised. Does the plan
have another CAST somewhere? For what reason?
As far as I can see, the column `comm` is declared as `DECIMAL(7,2)`, and the
result of the `stddev_pop` function should also be a `DECIMAL(7,2)` according
to `RelDataTypeSystemImpl#deriveAvgAggType`. Based on the existing data, the
result of `stddev_pop` is always between 0 and 523, so I am not sure why it
cannot be represented as a `DECIMAL(7,2)`.
From a SQL user's perspective it is strange to require an explicit CAST for
functions that are supposed to handle exact numerics, such as `stddev_pop`,
`sum`, `variance`, etc.
Can you please elaborate a bit on why it is necessary to introduce this
explicit cast?
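As a sanity check on the "between 0 and 523" figure: assuming the classic scott `EMP` sample data, the non-null `comm` values in deptno 30 are 300, 500, 1400 and 0, and the population standard deviation of the full set is about 522.02, which indeed fits in `DECIMAL(7,2)`:

```java
public class StddevPopCheck {
  /** Population standard deviation: sqrt(sum((x - mean)^2) / n). */
  static double stddevPop(double[] xs) {
    double mean = 0;
    for (double x : xs) {
      mean += x;
    }
    mean /= xs.length;
    double sumSq = 0;
    for (double x : xs) {
      sumSq += (x - mean) * (x - mean);
    }
    return Math.sqrt(sumSq / xs.length);
  }

  public static void main(String[] args) {
    // Non-null comm values for deptno = 30 (assuming the scott sample data)
    double[] comm = {300, 500, 1400, 0};
    // Prints roughly 522.02, i.e. representable as DECIMAL(7, 2)
    System.out.println(stddevPop(comm));
  }
}
```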
##########
druid/src/test/java/org/apache/calcite/test/DruidAdapter2IT.java:
##########
@@ -1909,10 +1909,14 @@ private void checkGroupBySingleSortLimit(boolean approx) {
+ "end as b from \"foodmart\" group by \"store_state\" order by a desc";
final String postAggString =
"'postAggregations':[{'type':'expression','name':'A',"
+ "'expression':'(\\'$f1\\' / \\'$f2\\')'},{'type':'expression','name':'B',"
- + "'expression':'case_searched((\\'$f3\\' == 0),1.0,CAST(\\'$f3\\'";
+ + "'expression':'case_searched((\\'$f3\\' == 0),1,CAST(\\'$f3\\'";
final String plan = "PLAN="
+ "EnumerableInterpreter\n"
- + " DruidQuery(table=[[foodmart, foodmart]], intervals=[[1900-01-09T00:00:00.000Z/2992-01-10T00:00:00.000Z]], projects=[[$63, $90, $91, $89]], groups=[{0}], aggs=[[SUM($1), SUM($2), SUM($3)]], post_projects=[[$0, /($1, $2), CASE(=($3, 0), 1.0:DECIMAL(19, 0), CAST($3):DECIMAL(19, 0))]], sort0=[1], dir0=[DESC])\n";
+ + " DruidQuery(table=[[foodmart, foodmart]],"
+ + " intervals=[[1900-01-09T00:00:00.000Z/2992-01-10T00:00:00.000Z]],"
+ + " projects=[[$63, $90, $91, $89]], groups=[{0}], aggs=[[SUM($1), SUM($2), SUM($3)]],"
+ + " post_projects=[[$0, /($1, $2), CASE(=($3, 0), CAST(1:DECIMAL(19, 0)):DECIMAL(19, 0),"
Review Comment:
The `CAST(1:DECIMAL(19, 0)):DECIMAL(19, 0)` seems like a redundant cast, and
it was not there before. Why do we need it now?
##########
testkit/src/main/java/org/apache/calcite/test/SqlOperatorTest.java:
##########
@@ -598,6 +598,13 @@ void testCastBooleanToNumeric(CastType castType, SqlOperatorFixture f) {
"Cast function cannot convert value of type BOOLEAN to type DECIMAL\\(19, 0\\)", false);
}
+ /** Test case for <a href="https://issues.apache.org/jira/projects/CALCITE/issues/CALCITE-6322">
+ * [CALCITE-6322] Casts to DECIMAL types are ignored</a>. */
+ @Test void testDecimalDecimalCast() {
+ final SqlOperatorFixture f = fixture();
+ f.checkScalar("CAST(1.123 AS DECIMAL(4, 0))", "1", "DECIMAL(4, 0) NOT NULL");
+ }
+
Review Comment:
Instead of adding a new method, should we rather enrich
`testCastToExactNumeric`?
##########
testkit/src/main/java/org/apache/calcite/test/SqlOperatorTest.java:
##########
@@ -598,6 +598,13 @@ void testCastBooleanToNumeric(CastType castType, SqlOperatorFixture f) {
"Cast function cannot convert value of type BOOLEAN to type DECIMAL\\(19, 0\\)", false);
}
+ /** Test case for <a href="https://issues.apache.org/jira/projects/CALCITE/issues/CALCITE-6322">
+ * [CALCITE-6322] Casts to DECIMAL types are ignored</a>. */
Review Comment:
The PR claims to add support for three kinds of casts:
* decimal to decimal
* int to decimal
* double/float to decimal
I've seen a few tests for the first category; do we have something for the
other two? If not, I think we should add some.
Moreover, I've noticed some tests that are currently disabled by the
`DECIMAL` flag. A few of them should be fixed by this PR, so please enable
those that are relevant.
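For the other two categories, the intended semantics could be sketched with plain `BigDecimal` arithmetic (a standalone illustration, not actual `SqlOperatorFixture` calls; the exact expected strings for `checkScalar` would need verifying against the runtime):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class CastSketch {
  public static void main(String[] args) {
    // int to decimal: CAST(123 AS DECIMAL(5, 1)) just widens the scale
    BigDecimal fromInt = new BigDecimal(123).setScale(1, RoundingMode.UNNECESSARY);
    System.out.println(fromInt); // 123.0

    // double to decimal: CAST(1.25e0 AS DECIMAL(3, 1)) drops the extra
    // fraction digit; FLOOR matches the rounding mode used in the PR's
    // checkOverflow helper
    BigDecimal fromDouble = BigDecimal.valueOf(1.25).setScale(1, RoundingMode.FLOOR);
    System.out.println(fromDouble); // 1.2
  }
}
```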
##########
druid/src/test/java/org/apache/calcite/test/DruidAdapterIT.java:
##########
@@ -2215,12 +2215,13 @@ private void checkGroupBySingleSortLimit(boolean approx) {
+ "from \"foodmart\" group by \"store_state\" order by a desc";
final String postAggString =
"'postAggregations':[{'type':'expression','name':'A',"
+ "'expression':'(\\'$f1\\' / \\'$f2\\')'},{'type':'expression','name':'B',"
- + "'expression':'case_searched((\\'$f3\\' == 0),1.0,CAST(\\'$f3\\'";
+ + "'expression':'case_searched((\\'$f3\\' == 0),1,CAST(\\'$f3\\'";
final String plan =
"DruidQuery(table=[[foodmart, foodmart]], intervals=[[1900-01-09T00:00:00.000Z/"
+ "2992-01-10T00:00:00.000Z]], projects=[[$63, $90, $91, $89]], groups=[{0}], "
+ "aggs=[[SUM($1), SUM($2), SUM($3)]], post_projects=[[$0, /($1, $2), "
- + "CASE(=($3, 0), 1.0:DECIMAL(19, 0), CAST($3):DECIMAL(19, 0))]], sort0=[1], dir0=[DESC])";
+ + "CASE(=($3, 0), CAST(1:DECIMAL(19, 0)):DECIMAL(19, 0), "
Review Comment:
redundant CAST?
##########
core/src/main/java/org/apache/calcite/sql/type/SqlTypeUtil.java:
##########
@@ -1836,7 +1836,7 @@ public static boolean isValidDecimalValue(@Nullable BigDecimal value, RelDataTyp
case DECIMAL:
final int intDigits = value.precision() - value.scale();
final int maxIntDigits = toType.getPrecision() - toType.getScale();
- return intDigits <= maxIntDigits;
+ return (intDigits <= maxIntDigits) && (value.scale() <= toType.getScale());
Review Comment:
Why is this change needed? Is this something mandated by the standard? What
does it fix?
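To illustrate the behavioral difference (a standalone sketch mirroring the check in the diff, not the actual Calcite method):

```java
import java.math.BigDecimal;

public class DecimalValidityCheck {
  /** Mirrors the DECIMAL case of SqlTypeUtil.isValidDecimalValue after the change. */
  static boolean isValid(BigDecimal value, int typePrecision, int typeScale) {
    int intDigits = value.precision() - value.scale();
    int maxIntDigits = typePrecision - typeScale;
    return intDigits <= maxIntDigits && value.scale() <= typeScale;
  }

  public static void main(String[] args) {
    // 12.345 as DECIMAL(5, 2): integer digits fit (2 <= 3), but scale 3 > 2,
    // so the old check accepted it while the new condition rejects it
    System.out.println(isValid(new BigDecimal("12.345"), 5, 2)); // false
    System.out.println(isValid(new BigDecimal("12.34"), 5, 2));  // true
  }
}
```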
##########
linq4j/src/main/java/org/apache/calcite/linq4j/tree/Primitive.java:
##########
@@ -384,10 +384,57 @@ static void checkRoundedRange(Number value, double min, double max) {
}
}
+ /** Called from BuiltInMethod.INTEGER_CAST */
public static @Nullable Object integerCast(Primitive primitive, final Object value) {
return requireNonNull(primitive, "primitive").numberValue((Number) value);
}
+ static BigDecimal checkOverflow(BigDecimal value, int precision, int scale) {
+ BigDecimal result = value.setScale(scale, RoundingMode.FLOOR);
+ result = result.stripTrailingZeros();
+ if (result.scale() < scale) {
+ // stripTrailingZeros also removes zeros if there is no
+ // decimal point, converting 1000 to 1e+3, using a negative scale.
+ // Here we undo this change.
+ result = result.setScale(scale, RoundingMode.FLOOR);
+ }
+ int actualPrecision = result.precision();
+ if (actualPrecision > precision) {
+ throw new ArithmeticException("Value " + value
+ + " cannot be represented as a DECIMAL(" + precision + ", " + scale + ")");
+ }
+ return result;
+ }
+
+ /** Called from BuiltInMethod.DECIMAL_DECIMAL_CAST */
+ public static @Nullable Object decimalDecimalCast(
+ @Nullable BigDecimal value, int precision, int scale) {
+ if (value == null) {
+ return null;
+ }
+ return checkOverflow(value, precision, scale);
+ }
+
+ /** Called from BuiltInMethod.INTEGER_DECIMAL_CAST */
+ public static @Nullable Object integerDecimalCast(
+ @Nullable Object value, int precision, int scale) {
Review Comment:
Later we are casting the `Object` to `Number`, so I assume the generated
code guarantees that it will always be a `Number`. In that case, why not
declare the `value` type as `Number` to begin with?
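For reference, a standalone replica of the `checkOverflow` logic above (class and method placement here are just for illustration), showing the `stripTrailingZeros` round-trip and the overflow case:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class OverflowCheckSketch {
  /** Replica of the checkOverflow helper from the diff above. */
  static BigDecimal checkOverflow(BigDecimal value, int precision, int scale) {
    BigDecimal result = value.setScale(scale, RoundingMode.FLOOR);
    result = result.stripTrailingZeros();
    if (result.scale() < scale) {
      // stripTrailingZeros turns 1000 into 1E+3 (scale -3); restore the scale
      result = result.setScale(scale, RoundingMode.FLOOR);
    }
    if (result.precision() > precision) {
      throw new ArithmeticException("Value " + value
          + " cannot be represented as a DECIMAL(" + precision + ", " + scale + ")");
    }
    return result;
  }

  public static void main(String[] args) {
    System.out.println(checkOverflow(new BigDecimal("1.123"), 4, 0)); // 1
    System.out.println(checkOverflow(new BigDecimal("1000"), 4, 0));  // 1000
    // checkOverflow(new BigDecimal("12345"), 4, 0) throws ArithmeticException
  }
}
```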
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]