FAQ items 3.10 and 4.9 might give you a running start.
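
The short version, assuming the table never gets vacuumed: Postgres keeps old
versions of updated and deleted rows around until a VACUUM reclaims them, so a
heavily rewritten table keeps growing even if the visible row count stays
small, and every read has to wade through the dead rows.  Something along
these lines, with made-up table and column names, is usually the first step:

    -- Reclaim dead rows and refresh the planner's statistics;
    -- run this regularly, e.g. from cron.  "grid_data" is a placeholder.
    VACUUM ANALYZE grid_data;

    -- An index on whatever column the grid query filters or orders by
    -- keeps the lookup from turning into a full-table scan.
    CREATE INDEX grid_data_key_idx ON grid_data (key_column);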

On 2000-01-12, Robert Wagner mentioned:

> Hello All,
> 
> Does anyone know if read performance on a Postgres database decreases at an
> increasing rate as the number of stored records increases?
> 
> This is a TCL app, which makes entries into a single table and from time
> to time repopulates a grid control.  It must rebuild the data in the grid
> control, because other clients have since written to the same table.
> 
> It seems as if I'm missing something fundamental... maybe I am... is some
> kind of database cleanup necessary? With fewer than ten records, the grid
> populates very quickly.  Beyond that, performance slows to a crawl, until
> it _seems_ that every new record doubles the time needed to retrieve the
> records.  My quick fix was to cache the data locally in TCL, and only
> retrieve changed data from the database.  But now, as client demand
> increases and more clients make changes to the table, I'm reaching the
> bottleneck again.
> 
> The client asked me yesterday to start evaluating "more mainstream"
> databases, which means that they're pissed off.  Postgres is fun to work
> with, but it's hard to learn about, and hard to justify to clients.
> 
> By the way, I have experimented with populating the exact same grid control
> on Windows NT, using MS Access (TCL runs just about anywhere).  The grid
> seemed to populate just about instantaneously.  So, is the bottleneck in
> Unix or in Postgres, and does anybody know how to make it faster?
> 
> Cheers,
> Rob
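
As for caching locally and only pulling what changed: one way to make that
cheap on the server side (just a sketch; it assumes you can add a timestamp
column that every writer keeps current) looks like this:

    -- Hypothetical column that each client sets on every insert/update.
    ALTER TABLE grid_data ADD COLUMN changed_at timestamp;

    -- Index so the delta query stays quick as the table grows.
    CREATE INDEX grid_data_changed_at_idx ON grid_data (changed_at);

    -- A client then refreshes its grid with only the rows newer than the
    -- last timestamp it has seen, instead of rescanning the whole table.
    SELECT * FROM grid_data WHERE changed_at > '2000-01-12 10:00:00';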

-- 
Peter Eisentraut                  Sernanders väg 10:115
[EMAIL PROTECTED]                   75262 Uppsala
http://yi.org/peter-e/            Sweden


