On Fri, Jun 17, 2016 at 4:04 PM, Akshay Joshi <akshay.jo...@enterprisedb.com
> wrote:

> Hi Dave
>
> On Thu, Jun 16, 2016 at 6:48 PM, Dave Page <dp...@pgadmin.org> wrote:
>
>>
>>
>> On Thu, Jun 16, 2016 at 1:43 PM, Akshay Joshi <
>> akshay.jo...@enterprisedb.com> wrote:
>>
>>>
>>>
>>> On Thu, Jun 16, 2016 at 6:09 PM, Dave Page <dp...@pgadmin.org> wrote:
>>>
>>>>
>>>>
>>>> On Thu, Jun 16, 2016 at 1:34 PM, Akshay Joshi <
>>>> akshay.jo...@enterprisedb.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Thu, Jun 16, 2016 at 5:47 PM, Khushboo Vashi <
>>>>> khushboo.va...@enterprisedb.com> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Jun 16, 2016 at 5:07 PM, Dave Page <dp...@pgadmin.org> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Jun 16, 2016 at 12:19 PM, Khushboo Vashi <
>>>>>>> khushboo.va...@enterprisedb.com> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Thu, Jun 16, 2016 at 4:42 PM, Dave Page <dp...@pgadmin.org>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Thu, Jun 16, 2016 at 12:04 PM, Akshay Joshi <
>>>>>>>>> akshay.jo...@enterprisedb.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Dave
>>>>>>>>>>
>>>>>>>>>> On Thu, Jun 16, 2016 at 2:42 PM, Akshay Joshi <akshay.joshi@
>>>>>>>>>> enterprisedb.com> wrote:
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Thu, Jun 16, 2016 at 2:35 PM, Dave Page <dp...@pgadmin.org>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Thanks, patch applied.
>>>>>>>>>>>>
>>>>>>>>>>>> However, whilst I was testing, I saw just how slow the tool is:
>>>>>>>>>>>>
>>>>>>>>>>>> SELECT * FROM pg_attribute
>>>>>>>>>>>>
>>>>>>>>>>>> In a PEM database, this returns 8150 rows. In pgAdmin 3, this is
>>>>>>>>>>>> timed at 676ms on my laptop. In pgAdmin 4, the busy spinner runs 
>>>>>>>>>>>> for approx
>>>>>>>>>>>> 5 seconds, then the whole UI freezes. I then have to wait a 
>>>>>>>>>>>> further 3
>>>>>>>>>>>> minutes and 46 seconds(!!!!) for the operation to complete. Once 
>>>>>>>>>>>> loaded,
>>>>>>>>>>>> scrolling is very sluggish.
>>>>>>>>>>>>
>>>>>>>>>>>> Please make this your top priority - and if you have
>>>>>>>>>>>> incremental improvements, send them as you have them.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>    Sure.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>       Below are my initial findings from running "SELECT * FROM
>>>>>>>>>> pg_attribute" on the PEM database, which returns 8498 rows:
>>>>>>>>>>
>>>>>>>>>>    - Fetching the data from the server side took a consistent
>>>>>>>>>>    3-4 secs.
>>>>>>>>>>    - Create/Render Backgrid without pagination: *1 minute*
>>>>>>>>>>    - Create/Render Backgrid with pagination (50 items per page):
>>>>>>>>>>     *469ms*
>>>>>>>>>>    - Create/Render Backgrid with pagination (500 items per
>>>>>>>>>>    page): *3 secs*
>>>>>>>>>>    - Create/Render Backgrid with pagination (1000 items per
>>>>>>>>>>    page): *6 secs*
>>>>>>>>>>    - Create/Render Backgrid with pagination (3000 items per
>>>>>>>>>>    page): *22 secs*
>>>>>>>>>>    - Create/Render Backgrid with pagination (5000 items per
>>>>>>>>>>    page): *36 secs*
>>>>>>>>>>
>>>>>>>>>>
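
For reference, the client-side pagination being timed above boils down to
something like the rough sketch below (assuming backbone.paginator and the
Backgrid paginator extension are loaded as globals; resultCollection, rows
and columns here are just placeholders, not our actual code):

    // Client-side pagination sketch. Assumes jQuery, Backbone,
    // backbone.paginator, Backgrid and backgrid-paginator are loaded.
    // Only the current page of rows is rendered, which is why the
    // render cost above grows with the page size.
    var ResultCollection = Backbone.PageableCollection.extend({
        mode: "client",           // paginate entirely in the browser
        state: { pageSize: 500 }  // e.g. 500 rows per page
    });

    // "rows" is the array of row objects fetched from the server and
    // "columns" the Backgrid column definitions for the result set.
    var resultCollection = new ResultCollection(rows);

    var grid = new Backgrid.Grid({
        columns: columns,
        collection: resultCollection
    });

    var paginator = new Backgrid.Extension.Paginator({
        collection: resultCollection
    });

    $("#datagrid").append(grid.render().el);
    $("#datagrid").append(paginator.render().el);
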
>>>>>>>>> OK, so I guess diving into Backgrid is the next step. Are there
>>>>>>>>> any profiling tools that could be used?
>>>>>>>>>
>>>>>>>>>
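
As a first pass, the browser's built-in tools should do: recording the
render in the Chrome DevTools Performance panel (or the Firefox profiler),
or simply wrapping the suspect steps in console timers, roughly like this
(the names are placeholders for wherever the query tool builds its grid):

    // Rough timing of the two suspect steps with console timers.
    console.time("collection-populate");
    resultCollection.reset(rows);      // push the fetched rows in
    console.timeEnd("collection-populate");

    console.time("backgrid-render");
    $("#datagrid").append(grid.render().el);
    console.timeEnd("backgrid-render");
    // For a call-level breakdown, record the same steps in the
    // DevTools Performance panel and inspect the flame chart.
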
>>>>>>>>
>>>>>>>> Can we use infinite scrolling in the no-pagination case?
>>>>>>>>
>>>>>>>
>>>>>>> How would the "add row" feature work then?
>>>>>>>
>>>>>>
>>>>>> Yeah, in this case the user has to wait until the last record has loaded. :(
>>>>>> Btw, I was thinking of https://github.com/bhvaleri/backgrid-infinator
>>>>>>
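
For context, the general idea behind that kind of extension is to only feed
rows to the grid as the user scrolls; a hand-rolled sketch of the same idea
(not backgrid-infinator's actual API; the names are placeholders) would be:

    // Generic infinite-scroll sketch: keep the fetched rows in a plain
    // array and add them to the Backbone collection in slices as the
    // user nears the bottom of the grid container.
    var CHUNK = 500;
    var rendered = 0;

    function appendNextChunk() {
        resultCollection.add(allRows.slice(rendered, rendered + CHUNK));
        rendered += CHUNK;
    }

    appendNextChunk(); // initial slice

    $("#datagrid").on("scroll", function () {
        var nearBottom =
            this.scrollTop + this.clientHeight >= this.scrollHeight - 200;
        if (nearBottom && rendered < allRows.length) {
            appendNextChunk();
        }
    });
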
>>>>>
>>>>>     This seems to be a good option.
>>>>>
>>>>
>>>> No - please see my previous comment.
>>>>
>>>
>>>     We could add/paste the new row as the top row of the Backgrid. Would
>>> that be fine?
>>>
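
In Backbone terms that would presumably just be an insert at index 0 rather
than an append, something like the following (resultCollection and
newRowAttrs being placeholders):

    // Sketch: add the new/pasted row at the top instead of the bottom.
    resultCollection.add(newRowAttrs, { at: 0 });
    // or, equivalently:
    // resultCollection.unshift(newRowAttrs);
    // Backgrid's Body listens to the collection's "add" event and should
    // insert the row at the matching position in the table.
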
>>
>> It's a hack, and it doesn't solve the underlying problem. The fact that it
>> took 4 minutes to load 8000 rows on my top-of-the-range MacBook Pro is not
>> good.
>>
>
>    I have tried to fix the issue, but have not yet been able to find a way
> to do it. I have tried the following options:
>
>    - The same issue is reported at
>    https://github.com/wyuenho/backgrid/issues/126 and should be fixed by
>    https://github.com/wyuenho/backgrid/pull/444. I copied backgrid.js and
>    backgrid.css from the "*perf*" branch and dropped them into our code, but
>    hit a lot of errors and warnings; I fixed those, but somehow the data is
>    still not rendered.
>
Hmm, that's so old I'm not holding my breath about it being included in a
hurry.


>
>    - Another approach: instead of adding all the records to the Backbone
>    collection at once, I added them in chunks of 500 records at a time with
>    a 500ms sleep between chunks. In this case the busy spinner doesn't run
>    for as long, since the first 500 records get rendered quickly, but the
>    browser still becomes unresponsive as CPU usage goes up to 98-99%.
>
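
For reference, that chunked approach presumably looks something like the
sketch below (setTimeout rather than a literal sleep, so the UI thread is
released between chunks; the names are placeholders). The catch, as
observed, is that each chunk's add still triggers a synchronous Backgrid
render that pegs the CPU:

    // Chunked insertion sketch: add rows 500 at a time, yielding to the
    // browser between chunks so the first page shows up quickly.
    var CHUNK = 500;

    function addChunk(start) {
        if (start >= allRows.length) {
            return; // done
        }
        resultCollection.add(allRows.slice(start, start + CHUNK));
        // Wait ~500ms before the next chunk; each add still triggers a
        // synchronous render of 500 row views, which is what keeps the
        // CPU at 98-99%.
        setTimeout(function () {
            addChunk(start + CHUNK);
        }, 500);
    }

    addChunk(0);
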
Urgh. I wonder if we need a different grid control for this, something
like SlickGrid, which is extremely fast with large data sets.

https://github.com/6pac/SlickGrid
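
SlickGrid virtualises the rows, so only the handful visible in the viewport
ever get DOM elements, no matter how many rows are in the data set. A
minimal setup (the column and container names here are just illustrative)
looks like:

    // Minimal SlickGrid sketch: "data" is a plain array of row objects;
    // only the rows currently visible are actually rendered.
    var columns = [
        { id: "attrelid", name: "attrelid", field: "attrelid" },
        { id: "attname",  name: "attname",  field: "attname" }
    ];
    var options = { enableCellNavigation: true, enableColumnReorder: false };
    var grid = new Slick.Grid("#datagrid", data, columns, options);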

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
