andygrove opened a new issue, #2890:
URL: https://github.com/apache/arrow-datafusion/issues/2890
**Describe the bug**
I have a table with an int32 and a string column.
```
❯ select c3, c4 from data limit 2;
+------------+----------+
| c3 | c4 |
+------------+----------+
| | |
| -987603476 | H)n_zVMR |
+------------+----------+
```
I can compare them with `=` and `!=`; an implicit cast is added that converts the int column to a string.
```
❯ select c3, c4 from data where c3 = c4 limit 2;
0 rows in set. Query took 0.016 seconds.
❯ select c3, c4 from data where c3 != c4 limit 2;
+------------+----------+
| c3 | c4 |
+------------+----------+
| -987603476 | H)n_zVMR |
| 854785627 | /\t+h*@D |
+------------+----------+
```
The explain plan for the `=` query shows `CAST(c3@0 AS Utf8) = c4@1`.
Note that Spark would cast the string to an int rather than the int to a string,
so I'm not sure we are doing the right thing here.
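For illustration, here is a rough sketch of the two coercion choices written out as explicit casts against the same table (illustrative only; the exact type keywords and cast failure behavior may differ):
```
-- What DataFusion effectively does today for c3 = c4 (compare as strings):
❯ select c3, c4 from data where CAST(c3 AS VARCHAR) = c4 limit 2;

-- What Spark-style coercion would mean (compare as ints; a non-numeric string
-- such as 'H)n_zVMR' would fail the cast or become NULL, depending on the
-- cast semantics):
❯ select c3, c4 from data where c3 = CAST(c4 AS INT) limit 2;
```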
If I use other comparison operators, such as `<`, I get an error instead.
```
❯ select c3, c4 from data where c3 < c4 limit 2;
Plan("'Int32 < Utf8' can't be evaluated because there isn't a common type to
coerce the types to")
```
This seems inconsistent.
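As a workaround sketch, an explicit cast that mirrors the coercion already applied for `=` lets the `<` query run, though it then compares the values as strings (lexicographically) rather than numerically:
```
❯ select c3, c4 from data where CAST(c3 AS VARCHAR) < c4 limit 2;
```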
**To Reproduce**
See above.
**Expected behavior**
- All comparison operators should apply the same implicit casts
- Should we use int as the common type here rather than string (as Spark does)?
**Additional context**
None