ege-st commented on code in PR #24942:
URL: https://github.com/apache/superset/pull/24942#discussion_r1293501272
##########
superset/connectors/sqla/models.py:
##########
@@ -1011,7 +1013,7 @@ def adhoc_column_to_sqla(  # pylint: disable=too-many-locals
if is_dttm and has_timegrain:
sqla_column = self.db_engine_spec.get_timestamp_expr(
col=sqla_column,
- pdf=None,
+ pdf=pdf,
Review Comment:
@zhaoyongjie after some testing there's one problem that's preventing your
suggestion from working with Pinot. When using the code above, it will generate
a query that looks like this:
```sql
SELECT CAST(DATE_TRUNC('day', CAST(created_date_ts AS TIMESTAMP)) AS TIMESTAMP) AS created_date_ts,
  COUNT(*) AS "count"
FROM "default".test_data
GROUP BY CAST(DATE_TRUNC('day', CAST(created_date_ts AS TIMESTAMP)) AS TIMESTAMP)
ORDER BY COUNT(*) DESC
LIMIT 10000;
```
Unfortunately, if you alias the expression with the name of an already-existing column, Pinot does not analyze the `GROUP BY` clause correctly, and the query above fails with an error.
However, if I change the query to:
```sql
SELECT CAST(DATE_TRUNC('day', CAST(created_date_ts AS TIMESTAMP)) AS TIMESTAMP) AS created_date_ts_2,
  COUNT(*) AS "count"
FROM "default".test_data
GROUP BY CAST(DATE_TRUNC('day', CAST(created_date_ts AS TIMESTAMP)) AS TIMESTAMP)
ORDER BY COUNT(*) DESC
LIMIT 10000;
```
or if I use:
```sql
SELECT CAST(DATE_TRUNC('day', CAST(created_date_ts AS TIMESTAMP)) AS TIMESTAMP) AS created_date_ts,
  COUNT(*) AS "count"
FROM "default".test_data
GROUP BY created_date_ts
ORDER BY COUNT(*) DESC
LIMIT 10000;
```
then the query works. Is there a way I can change the alias Superset uses for the computed timestamp?
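For illustration, here is a minimal Python sketch of the alias behavior being asked about. The function name and signature are hypothetical (not Superset's actual API): it suffixes the computed timestamp's label whenever that label would shadow an existing column, producing the `created_date_ts_2` form that Pinot accepts.

```python
def make_timegrain_alias(column_name: str, existing_columns: set[str]) -> str:
    """Hypothetical helper: pick an alias for a computed timestamp
    expression, suffixing it when the plain column name is already
    taken, so a SELECT alias never shadows a physical column."""
    alias = column_name
    suffix = 2
    while alias in existing_columns:
        alias = f"{column_name}_{suffix}"
        suffix += 1
    return alias


# The time-grain expression from the queries above, as a plain string.
expr = "CAST(DATE_TRUNC('day', CAST(created_date_ts AS TIMESTAMP)) AS TIMESTAMP)"
alias = make_timegrain_alias("created_date_ts", {"created_date_ts", "count"})
select_item = f"{expr} AS {alias}"
print(select_item)  # the expression aliased as created_date_ts_2
```

This only sketches the dedup logic; wiring it in would presumably happen where `adhoc_column_to_sqla` labels the result of `get_timestamp_expr`.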
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]