You cannot seriously consider loading even a thousand records directly.
Get a set of ID numbers. Load each record as it's needed. This is very
basic stuff, and not even that hard to implement. I am just saying for
the record that this is not hard to do; hopefully no one else will be
scared away from
Hi
Thanks for the response. The main reason is my record count could be from a
few thousand to a million. But even at the lowly numbers of around 1
the interface can seem sluggish if you read every record before displaying
anything.
As you mention, and as has been discussed above, doing stuff
Why don't you design the table with a unique row ID, stored in an
integer field, then fetch a list of those ID numbers?
For 5000 rows, assuming you store them in your application as 4-byte
longs, that's about 19 k of memory.
Counting that result as you receive it isn't that difficult. If it
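A minimal sketch of the ID-list approach suggested above, using Python's sqlite3 module (the table and column names here are hypothetical, just for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO records (payload) VALUES (?)",
                 [("payload %d" % i,) for i in range(5000)])

# Step 1: fetch only the ID numbers. For 5000 rows of 4-byte integers
# this is on the order of 20 KB, not the full record data.
ids = [row[0] for row in conn.execute("SELECT id FROM records ORDER BY id")]

# Step 2: load an individual record only when the UI actually needs it.
def load_record(conn, record_id):
    return conn.execute("SELECT payload FROM records WHERE id = ?",
                        (record_id,)).fetchone()

count = len(ids)              # the row count is known up front
first = load_record(conn, ids[0])
```

The ID list doubles as the row count, so no separate count(*) pass is needed once it has been fetched.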
No, there isn't, but RDBM systems are a generalised data retrieval mechanism.
As such they suffer from that generality.
Don't get me wrong, RDBM systems are appropriate for 95% of all data
requirements I have had to deal with, and I would never dream of trying to
write one from scratch, nor can I
Nuno Lucas wrote:
On 10/25/06, Dennis Cote <[EMAIL PROTECTED]> wrote:
Nuno Lucas wrote:
>
> There is another alternative if you don't mind the overhead of
> having an automatic row count (which sqlite avoids by design). It's by
> having a trigger that will update the table row count on
There is no magic in data retrieval. Google uses the same physical laws
as us ordinary mortals.
I see no reason to ever perform a database search twice.
Da Martian wrote:
Yes, but Google doesn't use an RDBMS; it's all proprietary to support their
speed
and huge volumes. It's anyone's guess (except
Indeed,
Thanks to all for the responses. Currently I use the separate count(*), but I
think I will change to the "prepare, step(n), reset" option, only because the
query is a little slow.
Ultimately sqlite is brilliant, hands down the best embeddable and standalone
db I have encountered.
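The "prepare, step(n), reset" idea from the SQLite C API (step through the statement counting rows, then sqlite3_reset() and step again to fetch) has no direct equivalent in Python's sqlite3 wrapper, but the shape of it can be sketched by running the same prepared query twice — once to count, once to fetch (table and query are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO t (name) VALUES (?)",
                 [("name %d" % i,) for i in range(100)])

query = "SELECT id, name FROM t"

# Option 1: a separate COUNT(*) -- a second statement, but no full pass
# through the result set in the application.
(count_a,) = conn.execute("SELECT COUNT(*) FROM t").fetchone()

# Option 2: step through the result once, counting as we go, then
# re-run the statement (the rough analogue of sqlite3_reset in the
# C API) to fetch only the rows needed for display.
count_b = sum(1 for _ in conn.execute(query))
first_page = conn.execute(query).fetchmany(10)
```

Which option wins depends on whether re-running the query is cheaper than the extra COUNT(*) statement, which is exactly the trade-off being weighed here.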
On 10/25/06, Dennis Cote <[EMAIL PROTECTED]> wrote:
Nuno Lucas wrote:
>
> There is another alternative if you don't mind the overhead of
> having an automatic row count (which sqlite avoids by design). It's by
> having a trigger that will update the table row count on each
>
Nuno Lucas wrote:
There is another alternative if you don't mind the overhead of
having an automatic row count (which sqlite avoids by design). It's by
having a trigger that will update the table row count on each
insert/delete.
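A sketch of the trigger technique Nuno describes, with a one-row counter table kept up to date on insert and delete (all table and trigger names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE items_count (n INTEGER);
INSERT INTO items_count VALUES (0);

-- Each trigger fires once per affected row, so the counter
-- tracks the table's row count exactly.
CREATE TRIGGER items_ins AFTER INSERT ON items
BEGIN
    UPDATE items_count SET n = n + 1;
END;

CREATE TRIGGER items_del AFTER DELETE ON items
BEGIN
    UPDATE items_count SET n = n - 1;
END;
""")

conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [("a",), ("b",), ("c",)])
conn.execute("DELETE FROM items WHERE name = 'a'")

# The count is now a cheap single-row lookup instead of a table scan.
(n,) = conn.execute("SELECT n FROM items_count").fetchone()
```

The cost is a small write penalty on every insert and delete, which is the overhead the quoted message refers to.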
Nuno,
This technique only works if you never use
Hi
All these options are good, and the discussion was interesting. I merely
wanted to see what people's thoughts were on sqlite providing this. There
are many ways to skin a cat, as they say. Since this is probably drifting off
topic, I suggest we discontinue the thread.
Thanks for your
On 10/25/06, Da Martian <[EMAIL PROTECTED]> wrote:
[...]
But to return all the rows just to count them requires N calls to step. If
the data set is large, you only want to return a subset to start with, so you
wouldn't know the count. If you don't know the count, you can't update GUI-type
things
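One common way to return only a subset to start with, as described above, is LIMIT/OFFSET paging — which also shows why the total count stays unknown unless you pay for it separately (table and page size below are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO big (val) VALUES (?)",
                 [("v%d" % i,) for i in range(1000)])

PAGE_SIZE = 50

def fetch_page(conn, page):
    # Only PAGE_SIZE rows are returned to the application; the rest of
    # the result set is never materialised on our side.
    return conn.execute(
        "SELECT id, val FROM big ORDER BY id LIMIT ? OFFSET ?",
        (PAGE_SIZE, page * PAGE_SIZE)).fetchall()

page0 = fetch_page(conn, 0)
# len(page0) says nothing about the total row count -- that still
# requires a COUNT(*) query or a full pass over the result.
```

This keeps the UI responsive for the first screenful, at the cost of the scrollbar-style GUI elements not knowing the total until it is computed some other way.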
* Da Martian <[EMAIL PROTECTED]> [2006-10-25 15:05]:
> It was not meant as an insult, however you did set the tone
> with your post (à la: Either you want the data from the query,
> or you don't.). I merely responded in kind. If you live in
> glass houses don't throw stones, and all that. I mean it's
It was not meant as an insult, however you did set the tone with your post
(à la: Either you want the data
from the query, or you don't.). I merely responded in kind. If you live in
glass houses don't throw stones, and all that. I mean it's not hard to see that
loading 20 million records into memory
* Da Martian <[EMAIL PROTECTED]> [2006-10-25 14:15]:
> 1) If there are too many results the user will have to wait
> a long time before they see anything, because we will be
> buffering away results. The application will appear sluggish.
> The user would get bored. I can point you to dozens of
Your extremely simplistic view on this is a result of never dealing in
voluminous data and result sets and quick-running queries. Once you put
volumes into your thinking cap you will begin to see why you don't just read
everything into memory for the hell of it.
Just to complete the above thought.
It's quite simple:
1) If there are too many results the user will have to wait a long time
before they see anything, because we will be buffering away results. The
application will appear sluggish. The user would get bored. I can point you
to dozens of studies which show the correlation between
* Da Martian <[EMAIL PROTECTED]> [2006-10-25 11:35]:
> >What prevents you from doing the same yourself? Just collect
> >the data in a memory buffer as you get it, before processing
> >it.
>
> Nothing but efficiency, as discussed. Basically, as Mr Cote has
> said, it's either a count(*) query or O(n)
Da Martian wrote:
Yes, but Google doesn't use an RDBMS; it's all proprietary to support
their speed and huge volumes.
I know, but that was my point. If they can't/don't do it (with their
custom kit) then surely it's non-trivial?
Martin
Yes, but Google doesn't use an RDBMS; it's all proprietary to support their speed
and huge volumes. It's anyone's guess (except Google themselves) what exactly
they do, and rumours abound, but I have done many apps which require custom
data handling to achieve some end that doesn't fit with RDBM Systems.
Da Martian wrote:
But to return all the rows just to count them requires N calls to
step. If the data set is large, you only want to return a subset to
start with, so you wouldn't know the count. If you don't know the
count, you can't update GUI-type things etc.
I haven't been following this
What prevents you from doing the same yourself? Just collect the
data in a memory buffer as you get it, before processing it.
Nothing but efficiency, as discussed. Basically, as Mr Cote has said, it's
either a count(*) query or O(n) step calls. The former means two statements
have to be run, if
* Da Martian <[EMAIL PROTECTED]> [2006-10-24 16:15]:
> It appears that DBs like Oracle etc. get the whole result set
> into temporary memory/space and then return the query (at least
> for unions this appears true), although this is just assumptions
> based on observations.
What