Hi Dave,

I have tried with 350K rows using the query you provided, but I was not
able to reproduce the hang.
I have added some debug statements in the Qt signal handlers. Can you
please apply the attached debug patch and send me the console output if
possible?

Thanks,
Neel Patel

On Fri, Nov 17, 2017 at 3:13 PM, Neel Patel <neel.pa...@enterprisedb.com>
wrote:

> Hi Dave,
>
> On Fri, Nov 17, 2017 at 3:09 PM, Dave Page <dp...@pgadmin.org> wrote:
>
>>
>>
>> On Fri, Nov 17, 2017 at 9:36 AM, Neel Patel <neel.pa...@enterprisedb.com>
>> wrote:
>>
>>> Hi Dave,
>>>
>>> On Fri, Nov 17, 2017 at 2:42 PM, Dave Page <dp...@pgadmin.org> wrote:
>>>
>>>>
>>>>
>>>> On Fri, Nov 17, 2017 at 8:45 AM, Neel Patel <
>>>> neel.pa...@enterprisedb.com> wrote:
>>>>
>>>>> Hi Dave,
>>>>>
>>>>> On Thu, Nov 16, 2017 at 8:13 PM, Dave Page <dp...@pgadmin.org> wrote:
>>>>>
>>>>>> Hi
>>>>>>
>>>>>> On Thu, Nov 16, 2017 at 1:47 PM, Neel Patel <
>>>>>> neel.pa...@enterprisedb.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> Adding information.
>>>>>>>
>>>>>>> With this patch, RM#2715 should also be resolved.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Neel Patel
>>>>>>>
>>>>>>> On Thu, Nov 16, 2017 at 7:01 PM, Neel Patel <
>>>>>>> neel.pa...@enterprisedb.com> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> I am able to reproduce the crash while downloading and saving data
>>>>>>>> to a CSV file from the query tool.
>>>>>>>>
>>>>>>>> Please find attached an updated patch with the changes below, made
>>>>>>>> after reading the Qt documentation.
>>>>>>>>
>>>>>>>>    - Added a new signal, "readyRead". Per the Qt documentation,
>>>>>>>>    this signal is emitted when data is available from the IO channel,
>>>>>>>>    which is suited to transferring large amounts of data between
>>>>>>>>    server and client.
>>>>>>>>    - The readyRead and downloadProgress signals fire very frequently,
>>>>>>>>    so we should not do heavy work inside their slots; when downloading
>>>>>>>>    large amounts of data, frequent invocation of those signals can
>>>>>>>>    cause a crash or data being missed when writing to the file, so the
>>>>>>>>    unnecessary logic was removed from those slots.
>>>>>>>>    - Fixed the crash when opening the IODevice with a NULL handle.
>>>>>>>>
>>>>>>>> With the above changes, I tested with the same data as before and
>>>>>>>> it works as expected without crashing the application.
>>>>>>>>
>>>>>>>> Please review it and let me know your comments.
>>>>>>>>
>>>>>>>
>>>>>> My first test was on Mac using Qt 5.8 with webkit, and when I
>>>>>> attempted to download the results from a 300K row query, it basically 
>>>>>> hung
>>>>>> showing the Downloading File progress indicator. It let me cancel it and
>>>>>> carried on working, but hung again the next time I tried the CSV 
>>>>>> download.
>>>>>> Each time it seems to download some data - from the same query I've seen
>>>>>> 2.8MB, 5.1MB and 1.5MB.
>>>>>>
>>>>>
>>>>> We are using the "*downloadProgress*" signal to update the progress
>>>>> bar. We receive "readData" in bytes, but "totalData" comes through as
>>>>> -1 because Qt does not know the total size of the file we are getting
>>>>> from the server. By contrast, when we open the "pgAdmin4" website in
>>>>> another tab and download a binary or source package, "totalData" has a
>>>>> known size and we display the progress bar correctly.
>>>>>
>>>>> Here, Qt sends the "downloadProgress" signal with an unknown totalData
>>>>> of -1, so we update the progress bar to 100%. From the user's point of
>>>>> view it looks like a hang, but the file is still being downloaded and
>>>>> written in the background. If we apply the same patch on Windows, it
>>>>> displays a running progress bar, so the user can tell that something
>>>>> is happening at the back end.
>>>>>
>>>>> I think it is the same as a browser: when downloading a large file
>>>>> from a server, it keeps downloading without displaying the total size.
>>>>> What should we do here? Should we create a custom progress dialog with
>>>>> a moving indicator, or something similar, so the user knows that
>>>>> activity is happening at the back end? Thoughts?
>>>>>
>>>>
>>>> It is misleading, but it *is* also hanging. 10+ minutes to download a
>>>> few megs of data from a server on the same machine is not just a misleading
>>>> progress indicator :-)
>>>>
>>> Can you please share a table definition, with the number of rows and
>>> the approximate table size? I have tried but am not able to reproduce
>>> the issue.
>>>
>>
>> It was simply (on PG 9.4):
>>
>> SELECT * FROM pg_class c, pg_attribute a WHERE c.oid = a.attrelid
>> UNION ALL
>> SELECT * FROM pg_class c, pg_attribute a WHERE c.oid = a.attrelid
>> UNION ALL
>> SELECT * FROM pg_class c, pg_attribute a WHERE c.oid = a.attrelid
>> UNION ALL
>> ...
>>
>> Unioned enough times to get a shade under 300K rows.
>>
>
> Thanks. I will check and keep you updated.
>
>>
>> --
>> Dave Page
>> Blog: http://pgsnake.blogspot.com
>> Twitter: @pgsnake
>>
>> EnterpriseDB UK: http://www.enterprisedb.com
>> The Enterprise PostgreSQL Company
>>
>
>

Attachment: CSV_Download_debug.patch
Description: Binary data
