Hi Javier,

Honestly, I cannot see any reason for returning 100,000 or more rows to a
user in any list component, no matter what kind of cache system you implement
[unless the data is required for further processing somewhere else]. No user
can absorb that much information, whatever the computer's capacity. To me
this is just plain laziness, or brute force, on our part, and it reflects the
typical communication divide that exists on most projects between software
developers and users.

I think your energy would be better spent not on trying to solve this
problem technically but on analyzing the actual job or task the user is
performing. That analysis will most likely suggest a solution that reduces
your performance overhead. What you are basically doing is placing an
extreme processing burden on the user, and on your network. Please look for
ways of filtering the data further on the database side. Try to determine
what kinds of complex filtering you can implement on the server to help the
user perform the task more efficiently. Build an intelligent system that
complements your users' abilities. Most of the time the user knows what she
wants to view; we just have to allow for that to be expressed in our
interfaces.
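
For example, a parameterized query can let the database do the narrowing
before anything crosses the wire. A minimal sketch - the CUSTOMER table, its
NAME and REGION columns, and the class name are invented for illustration,
not taken from your schema:

    import java.sql.*;
    import java.util.*;

    public class CustomerFinder {

        // Push the filtering into the WHERE clause so only the rows the
        // user actually asked for ever leave the database server.
        public List findByNameAndRegion(Connection con, String namePrefix,
                                        String region) throws SQLException {
            PreparedStatement ps = con.prepareStatement(
                "SELECT NAME FROM CUSTOMER "
              + "WHERE NAME LIKE ? AND REGION = ? ORDER BY NAME");
            try {
                ps.setString(1, namePrefix + "%");
                ps.setString(2, region);
                ResultSet rs = ps.executeQuery();
                List names = new ArrayList();
                while (rs.next()) {
                    names.add(rs.getString(1));
                }
                return names;
            } finally {
                ps.close();
            }
        }
    }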

kind regards

William Louth

> -----Original Message-----
> From: Javier Borrajo [SMTP:[EMAIL PROTECTED]]
> Sent: Wednesday, April 26, 2000 4:00 PM
> To:   [EMAIL PROTECTED]
> Subject:      Re: Paging large database result sets
>
> > I have a lot of difficulty visualizing how to efficiently cache result
> > sets with 100,000+ rows. For example, we have a table with customer data
> > that has over 2 million rows. We allow the user to query this table to
> > find customers. The user interface includes several controls and a JList.
> > The user can use the controls to compose a lot of different queries. I
> > don't see how server-side caching can help here.
> >
>         [Randy Stafford]  GemStone's cache (PCA) is an OODBMS - not a
> volatile in-VM-memory cache.  So there could be a collection of two million
> customer objects persisting in the OODBMS, if that's where you choose to
> persist them.  In our approach a query over such a collection would result
> in the creation of another, smaller, collection (to hold the query results),
> whose elements are simply pointer references to the customer objects already
> persistent in the OODBMS.  The result collection is also committed to the
> OODBMS, and wrapped by an entity bean distributed to the client (OODBMSes
> do have their advantages).
>
> That's the key point. We use Oracle, not an OODBMS.
>
>   If the customer objects' state is instead stored in an RDBMS, then I
> agree that O/R mapping all the customer objects that form a large result
> set, and committing the mapped objects to the OODBMS, would not be the
> optimal solution.  Your approach seems like the best balance one could
> hope for in that situation.
>
> ok
>
> > The client caches pages of data. We do not use a server side snapshot.
> > Each page is the result of a new query, so the data is up to date when
> > the user scrolls down the JList/JTable or presses the "Find" button
> >
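
For illustration, a client-side list model along those lines could look
something like this - AbstractListModel is standard Swing, but PageSource
and the class around it are invented names, not Javier's actual code:

    import java.util.*;
    import javax.swing.*;

    // A JList model that fetches rows one page at a time instead of
    // loading the whole result set up front.
    public class PagedListModel extends AbstractListModel {

        // Hypothetical callback onto the server-side query code.
        public interface PageSource {
            int getCount();                        // rows matching the query
            List getPage(int firstRow, int size);  // one page of rows
        }

        private static final int PAGE_SIZE = 100;
        private final PageSource source;
        private final Map pages = new HashMap();   // page number -> List
        private final int count;

        public PagedListModel(PageSource source) {
            this.source = source;
            this.count = source.getCount();
        }

        public int getSize() {
            return count;
        }

        public Object getElementAt(int index) {
            Integer pageNumber = new Integer(index / PAGE_SIZE);
            List page = (List) pages.get(pageNumber);
            if (page == null) {
                // Cache miss: run a fresh query for just this page.
                page = source.getPage(index / PAGE_SIZE * PAGE_SIZE, PAGE_SIZE);
                pages.put(pageNumber, page);
            }
            return page.get(index % PAGE_SIZE);
        }
    }
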
>         [Randy Stafford]  Sounds like a good optimization.
>
>
> It works fine unless the user wants to scroll near the end of the list;
> then paging gets really slow because of the JDBC 1.x "next" method.
> The JDBC 2.0 "absolute" method should make this strategy a lot better,
> and better still would be putting the JDBC code inside Oracle 8i JServer.
>
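
For what it's worth, a rough sketch of the JDBC 2.0 "absolute" idea, assuming
a driver that supports scrollable result sets - the table, column and class
names are invented for illustration:

    import java.sql.*;
    import java.util.*;

    public class ScrollingPager {

        // Jump straight to the first row of the requested page with
        // absolute() instead of calling next() row by row from the start.
        public List getPage(Connection con, int firstRow, int pageSize)
                throws SQLException {
            Statement stmt = con.createStatement(
                ResultSet.TYPE_SCROLL_INSENSITIVE,
                ResultSet.CONCUR_READ_ONLY);
            try {
                ResultSet rs = stmt.executeQuery(
                    "SELECT NAME FROM CUSTOMER ORDER BY NAME");
                List page = new ArrayList();
                // ResultSet rows are 1-based, so row firstRow + 1 starts
                // the page when firstRow is a 0-based offset.
                if (rs.absolute(firstRow + 1)) {
                    do {
                        page.add(rs.getString(1));
                    } while (page.size() < pageSize && rs.next());
                }
                return page;
            } finally {
                stmt.close();
            }
        }
    }
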
> > , but there is no guarantee the data *stays* current in the client cache.
> > There are 2 kinds of queries:
> > 1. when the user presses the "Find" button
> > 2. when the user scrolls down with a JList or a JTable
> > In the first case the client performs both "getCount" and "getPage".
> > In the second case the client only performs a "getPage".
> > Problems:
> > A. the number of records that fit the original query changes
> > B. client side cached records change in the server
> >
> > Both problems exist with our scheme, but we tolerate them.
> >
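
A sketch of what that getCount/getPage pair could look like with the paging
pushed down to Oracle via the usual nested-ROWNUM trick - again, the table,
column and class names are invented, not Javier's actual code:

    import java.sql.*;
    import java.util.*;

    public class CustomerPager {

        // Run on "Find": how many rows match the user's criteria.
        public int getCount(Connection con, String namePrefix)
                throws SQLException {
            PreparedStatement ps = con.prepareStatement(
                "SELECT COUNT(*) FROM CUSTOMER WHERE NAME LIKE ?");
            try {
                ps.setString(1, namePrefix + "%");
                ResultSet rs = ps.executeQuery();
                rs.next();
                return rs.getInt(1);
            } finally {
                ps.close();
            }
        }

        // Run on "Find" and on every scroll: Oracle cuts out one page with
        // nested ROWNUM filters, so only pageSize rows cross the network.
        public List getPage(Connection con, String namePrefix,
                            int firstRow, int pageSize) throws SQLException {
            PreparedStatement ps = con.prepareStatement(
                "SELECT NAME FROM ("
              + "  SELECT NAME, ROWNUM RN FROM ("
              + "    SELECT NAME FROM CUSTOMER"
              + "    WHERE NAME LIKE ? ORDER BY NAME"
              + "  ) WHERE ROWNUM <= ?"
              + ") WHERE RN > ?");
            try {
                ps.setString(1, namePrefix + "%");
                ps.setInt(2, firstRow + pageSize);
                ps.setInt(3, firstRow);
                ResultSet rs = ps.executeQuery();
                List page = new ArrayList();
                while (rs.next()) {
                    page.add(rs.getString(1));
                }
                return page;
            } finally {
                ps.close();
            }
        }
    }
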
>         [Randy Stafford]  That's cool; sounds like you can tolerate the
> "don't care" semantics.  Interesting discussion - thanks for sharing your
> solution!
>
>
> Well, not exactly "don't care"; I think this is a kind of "optimistic
> locking". If the client uses the cached data to update server-side records,
> then the usual collision detection mechanisms apply.
>
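
For completeness, a minimal sketch of that kind of collision detection,
assuming a VERSION column on the customer row - table, column and class
names invented for illustration:

    import java.sql.*;

    public class CustomerUpdater {

        // Optimistic locking: the UPDATE only succeeds if the row still
        // carries the version number the client read into its cache.
        public boolean updateName(Connection con, long customerId,
                                  String newName, int cachedVersion)
                throws SQLException {
            PreparedStatement ps = con.prepareStatement(
                "UPDATE CUSTOMER SET NAME = ?, VERSION = VERSION + 1 "
              + "WHERE CUSTOMER_ID = ? AND VERSION = ?");
            try {
                ps.setString(1, newName);
                ps.setLong(2, customerId);
                ps.setInt(3, cachedVersion);
                // Zero rows updated means someone else changed the record
                // since it was cached: signal the collision to the caller.
                return ps.executeUpdate() == 1;
            } finally {
                ps.close();
            }
        }
    }
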
> Regards
>
>     Javier
>

