[ 
https://issues.apache.org/jira/browse/HAWQ-70?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074619#comment-15074619
 ] 

Dong Li commented on HAWQ-70:
-----------------------------

The data size exceeds 3*128MB, so the dispatcher should start 4 QE processes to
read the data, but it starts only 3. As a result, 128MB of data is never read
and the count result is wrong.
The key point of this bug is to find out why the dispatcher starts only 3 QE processes.
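For reference, a minimal sketch (in Python) of the split arithmetic the dispatcher should follow, assuming one QE reader per 128MB split as described above; the function name is illustrative, not HAWQ code:

```python
# Assumption from the comment above: each QE reads one 128 MB split.
SPLIT_SIZE = 128 * 1024 * 1024  # 128 MB in bytes

def expected_qe_count(data_size_bytes: int) -> int:
    """Number of QE readers needed to cover the data (hypothetical helper)."""
    # Ceiling division: a partial trailing split still needs its own reader.
    return -(-data_size_bytes // SPLIT_SIZE)

# Data slightly larger than 3 full splits needs 4 QEs; starting only 3
# leaves the last 128 MB split unread, which matches the wrong count.
print(expected_qe_count(3 * SPLIT_SIZE + 1))  # 4
```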

> INSERT INTO foo SELECT * FROM foo returned successful, but the data of foo 
> was not as expected.
> -----------------------------------------------------------------------------------------------
>
>                 Key: HAWQ-70
>                 URL: https://issues.apache.org/jira/browse/HAWQ-70
>             Project: Apache HAWQ
>          Issue Type: Bug
>          Components: Query Execution
>            Reporter: Lirong Jian
>            Assignee: George Caragea
>
> Reproduce steps:
> postgres=# CREATE TABLE foo (a INT);
> CREATE TABLE
> postgres=# INSERT INTO foo VALUES(generate_series(1, 25000000));
> INSERT 0 25000000
> postgres=# SELECT COUNT(*) FROM foo;
>   count   
> ----------
>  25000000
> (1 row)
> postgres=# INSERT INTO foo SELECT * FROM foo;
> INSERT 0 25000000
> postgres=# SELECT COUNT(*) FROM foo;
>   count   
> ----------
>  38402176
> (1 row)
> The expected answer is 50000000.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
