On 08/28/2012 08:31 PM, Endi Sukma Dewata wrote:
On 8/28/2012 11:23 AM, Petr Vobornik wrote:
Your possible solution does not address how many results are fetched
(unless I misunderstood).
If paging is enabled it doesn't, but it expects that the admin will disable
it for larger setups. For smaller setups it isn't much of an issue. If
paging is disabled, the limit is the server's 'search size limit' or the
--sizelimit option supplied by the Web UI.

I'm not sure how per-user preferences are handled in browsers, but don't
forget we now have session support in the server. Server session data is
available for use.

I was thinking about using browser local storage (basically a key-value
DB). It has a benefit over the session in that it survives a browser restart,
but it should contain only non-sensitive data (other users may see it).

You mean a browser cookie?

No. https://developer.mozilla.org/en-US/docs/DOM/Storage

Petr Spacek also suggested having an attribute in the user object to store IPA-specific data as JSON. That way changing browsers wouldn't be an issue.

If you don't want to fetch every record to implement paging smartly,
given its performance issues, why not do the query on the server, cache
the results in the session, and have the RPC return the total number of
results plus a subset of them? Another RPC call could then retrieve the
next/previous subset of results from the cached result in the session.
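
Something like this on the server side, just as a rough sketch: assume a
dict-like per-user session object and hypothetical find_pkeys()/load_entry()
helpers (none of this is the actual ipalib API):

    # Rough sketch only: cache the full pkey list in the server session
    # and return one page per RPC call.  find_pkeys() and load_entry()
    # are hypothetical helpers, not real ipalib code.
    def paged_find(session, criteria, page=1, page_size=20):
        cache_key = 'search:%s' % criteria
        pkeys = session.get(cache_key)
        if pkeys is None:
            # Run the (potentially expensive) directory search only once
            # and remember just the primary keys, not the whole entries.
            pkeys = find_pkeys(criteria)
            session[cache_key] = pkeys

        start = (page - 1) * page_size
        page_keys = pkeys[start:start + page_size]

        return {
            'count': len(pkeys),   # total number of results for the pager
            'page': page,
            'entries': [load_entry(k) for k in page_keys],
        }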

I think most software does paging like this. I don't know the reasons for
not doing it that way the first time. My solution assumed that we
still don't want to do it. Endi, do you know the reasons? Earlier,
sessions didn't exist, but it is doable without them too.

Yes, at the time sessions didn't exist and we needed a
quick/temporary solution. I agree that ideally this should be done on the
server side, but how would the server maintain the paging information?

The server can keep the search result (either just the pkey list or the
entire entries) in memcached, but the result might be large and there
might be multiple users accessing multiple search pages, so the total
memory requirement could be large.
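
Just to illustrate the idea (the key layout and TTL are made up, using
python-memcached): keeping only the pkey list per user and per query, with a
short expiry, would at least bound the memory used:

    import hashlib
    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])

    def _key(user, query):
        # Hash the query so the memcached key stays short and valid.
        return 'ipa:search:%s:%s' % (user, hashlib.sha1(query).hexdigest())

    def cache_pkeys(user, query, pkeys, ttl=300):
        # Store only the primary keys, with a short TTL so stale result
        # sets don't accumulate for every user and every search page.
        mc.set(_key(user, query), pkeys, time=ttl)

    def cached_pkeys(user, query):
        return mc.get(_key(user, query))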

I can imagine that if it's used only by admins it may be OK, but if such functionality were used by other applications it could become a problem.


If we don't want the server to keep the search result, the server could
rerun the search, return only the requested page, and discard the rest of
the results each time the user changes page. This may affect server
performance.

We can also use Simple Paged Results, but if I understood correctly it
requires httpd to maintain an open connection to the LDAP server for
each user and for each page. I'm not sure memcached can be used to move
the connection object among forked httpd processes. Also, Simple Paged
Results can only go forward, so there is no Prev button unless somebody
keeps the results.
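
For reference, this is roughly what SPR looks like with python-ldap (the base
DN, bind DN and filter are just placeholders); note that the cookie only moves
forward and is tied to that one connection, which is exactly the problem with
forked httpd processes:

    import ldap
    from ldap.controls import SimplePagedResultsControl

    conn = ldap.initialize('ldap://localhost')
    conn.simple_bind_s('uid=admin,cn=users,cn=accounts,dc=example,dc=com', 'secret')

    page_size = 100
    ctrl = SimplePagedResultsControl(True, size=page_size, cookie='')

    while True:
        msgid = conn.search_ext('cn=users,cn=accounts,dc=example,dc=com',
                                ldap.SCOPE_SUBTREE, '(uid=*)',
                                serverctrls=[ctrl])
        rtype, rdata, rmsgid, serverctrls = conn.result3(msgid)
        # ... process rdata, one page of entries ...
        pctrls = [c for c in serverctrls
                  if c.controlType == SimplePagedResultsControl.controlType]
        if not pctrls or not pctrls[0].cookie:
            break                       # no more pages
        ctrl.cookie = pctrls[0].cookie  # forward only, so no Prev button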

Seems unusable.


Another possibility is to use VLV, but it seems to require an open
connection too and only works with a predefined filter.

Apart from the possible connection problem, I think it is very usable for the default views of search pages. The user can page freely. I couldn't tell from the wiki page whether it can reasonably obtain the total record count.
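
For what it's worth, here is a rough sketch of a VLV request with python-ldap,
assuming a build that ships ldap.controls.vlv and ldap.controls.sss (VLV needs
a server-side sort control); the DNs, filter and sort key are placeholders,
and the exact control API may differ between python-ldap versions. The VLV
response control carries a content count estimate, which should answer the
total-record-count question:

    import ldap
    from ldap.controls.sss import SSSRequestControl
    from ldap.controls.vlv import VLVRequestControl, VLVResponseControl

    conn = ldap.initialize('ldap://localhost')
    conn.simple_bind_s('uid=admin,cn=users,cn=accounts,dc=example,dc=com', 'secret')

    # VLV requires a server-side sort; sort by cn here (a predefined sort).
    sss = SSSRequestControl(criticality=True, ordering_rules=['cn'])
    # Ask for entries 1-20 of the sorted list (offset is 1-based).
    vlv = VLVRequestControl(criticality=True, before_count=0, after_count=19,
                            offset=1, content_count=0)

    msgid = conn.search_ext('cn=users,cn=accounts,dc=example,dc=com',
                            ldap.SCOPE_SUBTREE, '(uid=*)',
                            serverctrls=[sss, vlv])
    rtype, rdata, rmsgid, serverctrls = conn.result3(msgid)

    for c in serverctrls:
        if c.controlType == VLVResponseControl.controlType:
            # content_count is the server's estimate of the total result
            # size (attribute name may vary with the python-ldap version).
            print('total entries: %d' % c.content_count)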


Here's the page I wrote about this:
http://freeipa.org/page/IPAv3_Directory_Browsing

Currently we're using Option #2, but I'm not sure if we use SPR between
httpd and the LDAP server. Even with SPR the result is still bound
by some limits. It looks like you have to connect as Directory Manager to get
the complete pkey list/entries. See the table on this page:

This is one of the main reasons why I say it is broken.


http://directory.fedoraproject.org/wiki/Simple_Paged_Results_Design

The current implementation (keeping the pkey list in the UI) is
conceptually similar to the front-end approach described in the above page.


IMO a modified hybrid solution may be best:

When the user opens a page, a VLV result would be shown. The user can page through and see all the data. Currently we don't support custom sorting, so the inability to sort by arbitrary columns isn't an issue. With predefined sorts we could even improve these possibilities.

If there are many records and the user doesn't want to go through pages, he will use search. If the search string is good, the result should fit within the search limit, so I wouldn't use paging at that point. If that's not the case and he really doesn't know how to improve the filter, he should have the option to enable paging (the current implementation) and go through the records by hand. Such a record set should fit within a limit of 2000 records; if not, the filter is really bad.

If we don't want to implement VLV, or until that time, I would make no-paging the default setting.

I don't think there is any need to pretty print JSON-formatted data
with indentation. Is it an accident or an oversight that we're pretty printing
the JSON data in an RPC? For large data sets compression would be a win,
but most of our RPC communication is not large. Compression has
overhead: with small data you'll use more cycles compressing and
uncompressing than you would sending verbose but small data blobs.
Compression should be an RPC option specified by either side of the
connection, and the receiver should be prepared to conditionally
uncompress based on a flag value in the RPC. If you use server session
data to support paging you may not need to introduce compression, since
you would only be passing one page of data at a time.
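
Something along these lines, purely as an illustration (the flag name and
threshold are made up, not an existing IPA option): send compact JSON,
compress only above some size, and let the receiver decompress conditionally:

    import gzip
    import io
    import json

    COMPRESS_THRESHOLD = 4096  # bytes; arbitrary cut-off for illustration

    def encode_rpc(payload):
        # Compact JSON: no pretty printing, no indentation.
        data = json.dumps(payload, separators=(',', ':')).encode('utf-8')
        if len(data) > COMPRESS_THRESHOLD:
            buf = io.BytesIO()
            with gzip.GzipFile(fileobj=buf, mode='wb') as gz:
                gz.write(data)
            return {'compressed': True, 'body': buf.getvalue()}
        return {'compressed': False, 'body': data}

    def decode_rpc(message):
        body = message['body']
        if message['compressed']:
            body = gzip.GzipFile(fileobj=io.BytesIO(body)).read()
        return json.loads(body.decode('utf-8'))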

A user-specified page size (without limitation) is an absolute necessity.
I am frequently annoyed by web sites which either do not allow me to
specify the page size or constrain it to ridiculously small hard-coded limits
such as 10, 20 or 30.

+1

Agreed, but if we implement the single continuous page I described in
the earlier email the page size won't be much of an issue anymore.

It may matter. For example, when testing Web UI permissions I often need to navigate to the last page. I can enter the number directly and press Enter, or extend the page a few times (hypothetically), but I would rather see those 90 permissions right away, because I can be at the end of the list in 0.5 s.

We can also improve the pager to offer a subset of pages. For example:

First Prev 22 23 *24* 25 26 Next Last Page [   ]
where 24 is the current page.
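
Computing the visible window of page numbers is trivial, e.g. (purely
illustrative):

    def page_window(current, total_pages, radius=2):
        # e.g. current=24, total_pages=100 -> [22, 23, 24, 25, 26]
        first = max(1, current - radius)
        last = min(total_pages, current + radius)
        return list(range(first, last + 1))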

Regarding continuous paging: would I extend the page by clicking a bar at the end of the list and/or by mouse-scrolling at the end of the list?
--
Petr Vobornik


_______________________________________________
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel
