[EMAIL PROTECTED] wrote:
> Well... the database and the applications accessing the database are all
> located on the same machine, so distribution across multiple machines
> doesn't apply here.   The system is designed so that only one application
> handles all the writes to the DB.   Another application handles all the
> reads, and there may be up to two instances of that application running at
> any one time, so I guess that shows a small number of clients.   When the
> application that reads the DB data starts, it reads *all* the data in the
> DB and ships it elsewhere.

I think either SQLite or PostgreSQL would be appropriate here.  I'm
guessing that SQLite will have the speed advantage in this particular
case if you are careful in how you code it up.
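
For what it's worth, the usual "careful coding" in SQLite is to wrap
batches of INSERTs in one explicit transaction; without it, every
INSERT commits (and syncs to disk) on its own.  A minimal sketch,
using a made-up two-column schema since the real one isn't shown here:

    -- Hypothetical schema, for illustration only
    CREATE TABLE maintable(id INTEGER, payload TEXT);

    BEGIN;
    INSERT INTO maintable VALUES(1, 'first record');
    INSERT INTO maintable VALUES(2, 'second record');
    -- ...a few thousand more...
    COMMIT;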

> 
> I anticipate 2 bottlenecks...
> 
> 1. My anticipated bottleneck under postgres is that the DB-writing app
> must parse incoming bursts of data and store them in the DB.  The machine
> sending this data is seeing a delay in processing.  Debugging has shown
> that the INSERTs (on the order of a few thousand) are where most of the
> time is wasted.

You will do well to gather your incoming data into a TEMP table, then
insert the whole wad into the main database all in one go using
something like this:

    INSERT INTO maintable SELECT * FROM temptable;
    DELETE FROM temptable;

Actually, this same trick might solve your PostgreSQL performance
problem and thus obviate the need to port your code.
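
To make that concrete, here is one possible end-to-end sketch (the
column layout is invented for illustration; LIMIT 0 copies the shape
of the table without copying any rows):

    -- One-time setup: a TEMP table shaped like maintable
    CREATE TEMP TABLE temptable AS SELECT * FROM maintable LIMIT 0;

    -- For each incoming burst, stage the parsed records...
    INSERT INTO temptable VALUES(101, 'burst record 1');
    INSERT INTO temptable VALUES(102, 'burst record 2');

    -- ...then move the whole wad into the main table in one go
    INSERT INTO maintable SELECT * FROM temptable;
    DELETE FROM temptable;

The staging helps because, in SQLite, TEMP tables live in a separate
temporary database, so the per-record INSERTs never hold the main
database lock; only the final INSERT...SELECT touches the main file,
briefly.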

> 
> 2. The other bottleneck is data retrieval.  My DB-reading application must
> read the DB record-by-record (it opens a cursor and reads rows one by one),
> build the data into a message according to a system ICD, and ship it out.
> The postgres (postmaster) CPU usage is hovering around 85-90% at this time.
> 
> The expansion of data will force me to go from a maximum 3400-row table to
> a maximum of 11560 rows.

Unless each row is particularly large, this is not a very big database,
and it should not present a problem to either SQLite or PostgreSQL,
unless you are doing some kind of strange join that you haven't told
us about.

If your data formatting takes a long time, the reader might block the
writer in SQLite.  The writer process will have to wait to do its write
until the reader has finished.  You can avoid this by making a copy of
the data to be read into a temporary table before formatting it:

    CREATE TEMP TABLE outbuf AS SELECT * FROM maintable;
      -- The database lock is held only while the copy above is made
    SELECT * FROM outbuf;
      -- Do your formatting and sending from the private copy
    DROP TABLE outbuf;

Since PostgreSQL defaults to READ COMMITTED isolation, the writer
lock will not be a problem there.  But you will have the same issue
on PostgreSQL if you select SERIALIZABLE isolation.  SQLite only does
SERIALIZABLE for database connections running in separate processes.
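
For completeness: PostgreSQL lets you pick the isolation level per
transaction, so if a reader has been set to SERIALIZABLE somewhere,
you can pin it back explicitly (READ COMMITTED is already the
default, so this is normally unnecessary):

    BEGIN;
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
    SELECT * FROM maintable;
    COMMIT;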
--
D. Richard Hipp   <[EMAIL PROTECTED]>
