wangbo commented on issue #6009:
URL: https://github.com/apache/incubator-doris/issues/6009#issuecomment-864507001


   There are two problems here:
   1. DppUtils.getHashValue() didn't handle the DATE type, so no bytes were added to the hash input and the resulting bucket is wrong;
   2. I created a table and loaded data with `Spark Load`, but I couldn't reproduce the issue. The case is as below:
   ```
   table:
   CREATE TABLE `test_int_bucket` (
     `tinyint_col` tinyint(4) NULL COMMENT "",
     `smallint_col` smallint(6) NULL COMMENT "",
     `int_col` int(11) NULL COMMENT "",
     `bigint_col` bigint(20) NULL COMMENT "",
     `pv_sum` int(11) SUM NULL COMMENT ""
   ) ENGINE=OLAP
   AGGREGATE KEY(`tinyint_col`, `smallint_col`, `int_col`, `bigint_col`)
   COMMENT "OLAP"
   DISTRIBUTED BY HASH(`tinyint_col`,`smallint_col`,`int_col`,`bigint_col`) 
BUCKETS 3
   PROPERTIES (
   "replication_num" = "1",
   "in_memory" = "false",
   "storage_format" = "DEFAULT"
   ); 
   
   data:
   mysql> select * from test_int_bucket;
   +-------------+--------------+---------+------------+--------+
   | tinyint_col | smallint_col | int_col | bigint_col | pv_sum |
   +-------------+--------------+---------+------------+--------+
   |           1 |            1 |       1 |          1 |      1 |
   |           4 |            4 |       4 |          4 |      4 |
   |           2 |            2 |       2 |          2 |      2 |
   |           3 |            3 |       3 |          3 |      3 |
   +-------------+--------------+---------+------------+--------+
   4 rows in set (0.01 sec)
   
   
   query:
   mysql> select count(1) from test_int_bucket where bigint_col=1;
   +----------+
   | count(1) |
   +----------+
   |        1 |
   +----------+
   1 row in set (0.02 sec)
   ```
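   To illustrate problem 1, here is a minimal sketch (not the actual DppUtils code; `hashValue` and its type tags are hypothetical) of how a type switch with no DATE branch appends no bytes, so every DATE value produces the same hash and lands in the same bucket, while a DATE branch that serializes the value first would fix it:

   ```java
   import java.nio.ByteBuffer;
   import java.sql.Date;
   import java.util.zip.CRC32;

   public class HashSketch {
       // Simplified stand-in for DppUtils.getHashValue(): write the column
       // value's bytes into a buffer, then CRC32 the bytes written so far.
       static long hashValue(Object val, String type) {
           ByteBuffer buf = ByteBuffer.allocate(16);
           switch (type) {
               case "INT":
                   buf.putInt((Integer) val);
                   break;
               case "BIGINT":
                   buf.putLong((Long) val);
                   break;
               case "DATE":
                   // hypothetical fix: serialize the date before hashing,
                   // here via its epoch-millis representation
                   buf.putLong(((Date) val).getTime());
                   break;
               default:
                   // missing branch: no bytes added, so all values of this
                   // type hash identically -> wrong bucket assignment
                   break;
           }
           CRC32 crc = new CRC32();
           crc.update(buf.array(), 0, buf.position());
           return crc.getValue();
       }

       public static void main(String[] args) {
           // distinct ints produce distinct hashes
           System.out.println(hashValue(1, "INT"));
           System.out.println(hashValue(4, "INT"));
           // with the DATE branch, distinct dates also hash differently
           System.out.println(hashValue(Date.valueOf("2021-06-01"), "DATE"));
           System.out.println(hashValue(Date.valueOf("2021-06-02"), "DATE"));
       }
   }
   ```

   Without the `"DATE"` branch, the two date calls above would both hash an empty byte range and return the same value, which matches the wrong-result behavior described in problem 1.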


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
