On Thu, 2005-08-25 at 11:16 -0400, Ron wrote:
> ># - Settings -
> >
> >fsync = false # turns forced synchronization on or off
> >#wal_sync_method = fsync   # the default varies across platforms:
> >                           # fsync, fdatasync, open_sync, or open_datasync
>
> I hope
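For reference, a minimal sketch of the settings under discussion (values are illustrative, not recommendations). fsync = off skips forced WAL flushes and can lose or corrupt data if the machine crashes mid-load, so it is normally only considered for loads that can simply be restarted from scratch:

    # postgresql.conf -- durability vs. speed (sketch only)
    fsync = on                 # off only if a crash can be recovered by reloading
    wal_sync_method = fsync    # or fdatasync / open_sync, depending on platform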
I am working on a system which will be heavily dependent on functions
(some SQL, some PL/pgSQL). I am worried about the backend caching query
execution plans for long running connections.
Given:
- Processes which are connected to the database for long periods of time
(transactions are always shor
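A minimal sketch of the caching behaviour in question, using made-up table and column names (order_main, cust_id): a query written directly in PL/pgSQL is prepared on first use and its plan is reused for the life of the connection, whereas EXECUTE builds and plans the statement on every call (EXECUTE ... INTO needs 8.1 or later):

    -- Plan is prepared on first call and cached for the rest of the session:
    CREATE OR REPLACE FUNCTION order_count(p_cust integer) RETURNS bigint AS $$
    DECLARE
        n bigint;
    BEGIN
        SELECT count(*) INTO n FROM order_main WHERE cust_id = p_cust;
        -- Re-planned on every call instead (dynamic SQL, 8.1+ syntax):
        -- EXECUTE 'SELECT count(*) FROM order_main WHERE cust_id = ' || p_cust INTO n;
        RETURN n;
    END;
    $$ LANGUAGE plpgsql;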
Greetings,
We are running some performance tests in which we are attempting to
insert about 100,000,000 rows in a database at a sustained rate. About
50M rows in, our performance drops dramatically.
This test is with data that we believe to be close to what we will
encounter in production. Howe
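One way to narrow down that kind of fall-off is to sample relation sizes while the load runs and see whether the slowdown tracks table growth or index growth. A sketch, assuming the 8.1+ size functions; the table names come from later in the thread and the index naming pattern is a guess:

    SELECT relname, pg_size_pretty(pg_relation_size(oid)) AS size
      FROM pg_class
     WHERE relname IN ('order_main', 'order_transition')
        OR relname LIKE 'order_%_idx';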
On Mon, 2005-10-31 at 12:32 -0500, Merlin Moncure wrote:
> well, can you defer index generation until after loading the set (or use
> COPY?)
I cannot defer index generation.
We are using the copy API. Copying 1 rows in a batch.
>
> if that index is causing the problem, you may want to cons
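For anyone following along, the "defer index generation" pattern being suggested looks roughly like this (a sketch only; the index name, column, and file path are made up, and as noted above it is not an option in this case):

    DROP INDEX order_transition_key_idx;                  -- hypothetical name
    COPY order_transition FROM '/path/to/batch.dat';      -- or COPY ... FROM STDIN via the client API
    CREATE INDEX order_transition_key_idx ON order_transition (ord_id);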
On Mon, 2005-10-31 at 15:30 -0500, Tom Lane wrote:
> [EMAIL PROTECTED] writes:
> > I've been trying to figure out whether null values are indexed or not from
> > the documentation. I was under the impression, that null values are not
> > stored in the index.
>
> You're mistaken, at least with rega
On Mon, 2005-10-31 at 16:18 -0500, Tom Lane wrote:
> Kelly Burkhart <[EMAIL PROTECTED]> writes:
> > Ha! So I'm creating an index 98% full of nulls! Looks like this is
> > easily fixed with partial indexes.
>
> Still, though, it's not immediately clear why
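A sketch of the partial-index fix mentioned above, with made-up names: only the non-NULL fraction of the column gets index entries, so an index that would otherwise be ~98% NULLs stays small:

    CREATE INDEX order_transition_ref_idx
        ON order_transition (ref_id)
        WHERE ref_id IS NOT NULL;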
On Tue, 2005-11-01 at 10:37 -0500, Ron Peacetree wrote:
> I'm surprised that no one seems to have yet suggested the following
> simple experiment:
>
> Increase the RAM 4GB -> 8GB, tune for best performance, and
> repeat your 100M row insert experiment.
>
> Does overall insert performance change?
Second try... no attachment this time.
I've finally gotten around to profiling the back end. Here is a more
precise description of what I'm doing:
I am copying data into two tables, order_main and order_transition
(table defs at the end of this post). The order_transition table has
roughly doub
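The load being profiled is, in outline, COPY into the two tables; the real table definitions were at the end of the original post, so the column lists below are placeholders only:

    COPY order_main       (ord_id, cust_id, ord_ts)        FROM STDIN;
    COPY order_transition (ord_id, state, transition_ts)   FROM STDIN;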
On Thu, 2005-11-10 at 19:13 -0500, Tom Lane wrote:
> Kelly Burkhart <[EMAIL PROTECTED]> writes:
> > ... A graph showing the performance
> > characteristics is here:
>
> > <http://kkcsm.net/pgcpy.jpg>
>
> I hadn't looked at this chart till jus
On Fri, 2005-11-11 at 18:02 -0500, Tom Lane wrote:
> > There very well could be a pattern in the data which could affect
> > things, however, I'm not sure how to identify it in 100K rows out of
> > 100M.
>
> I conjecture that the problem areas represent places where the key
> sequence is significa
On 1/4/06, Steve Eckmann <[EMAIL PROTECTED]> wrote:
Thanks, Steinar. I don't think we would really run with fsync off, but
I need to document the performance tradeoffs. You're right that my
explanation was confusing; probably because I'm confused about how to
use COPY! I could batch multiple INSERT
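A sketch of the two batching styles being compared, using a made-up table: many INSERTs inside one transaction avoid per-row commit costs, while a single COPY also avoids per-statement parse/plan overhead:

    BEGIN;
    INSERT INTO events (id, ts, payload) VALUES (1, now(), 'a');
    INSERT INTO events (id, ts, payload) VALUES (2, now(), 'b');
    -- ...thousands more...
    COMMIT;

    -- versus one COPY for the whole batch (rows supplied on stdin):
    COPY events (id, ts, payload) FROM STDIN;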
On 1/8/06, Ron <[EMAIL PROTECTED]> wrote:
Among the other tricks having lots of RAM allows: If some of your tables are Read Only or VERY rarely written to, you can preload them at boot time and make them RAM resident using the /etc/tmpfs trick.
What is the /etc/tmpfs trick?
-K
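A guess at what is meant, sketched from the Postgres side: mount a RAM-backed tmpfs at the OS level (e.g. an /etc/fstab entry such as "tmpfs /mnt/pgram tmpfs size=4G 0 0") and point a tablespace at it for tables that can be rebuilt after every boot. Everything on tmpfs vanishes at reboot or power loss, so this only suits truly reloadable, read-only data:

    CREATE TABLESPACE ram_space LOCATION '/mnt/pgram';   -- path is a placeholder
    ALTER TABLE ref_data SET TABLESPACE ram_space;       -- hypothetical read-only table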
On 12/14/06, Tom Lane <[EMAIL PROTECTED]> wrote:
Evgeny Gridasov <[EMAIL PROTECTED]> writes:
> This is a Linux Debian 3.1 ontop of 2x XEON 3.4 Ghz.
> PostgreSQL is 8.2 checked out from CVS REL8_2_STABLE yesterday.
> I'm running the same Postgres on another machine,
> with Debian Etch and have the