Sorry Guy,
was just a little tired yesterday.
> Err... you're right... one of us said the same thing when I showed the
> Volker mail...
Try using a GROUP BY in the inline view, so you will get something
like this:
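A minimal sketch of that suggestion, with illustrative table and column names (not the ones from the original query):

```sql
-- Aggregate inside the inline view first, then join the summarized result.
SELECT c.cust_id, c.cust_name, s.order_total
FROM customers c
JOIN (SELECT cust_id, SUM(amount) AS order_total
      FROM orders
      GROUP BY cust_id) s
  ON s.cust_id = c.cust_id;
```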
>
> On Mon, 4 Aug 2003 12:02:46 -0300, "Fernando Papa" <[EMAIL PROTECTED]>
> wrote:
On Tue, 5 Aug 2003, Medora Schauer wrote:
> I hope this piques someone's curiosity. I'd really like to know
> what is going on here...
I think you're getting caught by the typing of constants preventing
index scans.
> "UPDATE shot_record SET trace_count = %d " \
> "WHE
Trevor Astrope kirjutas T, 05.08.2003 kell 18:59:
> I was wondering if anyone found a sweet spot regarding how many inserts to
> do in a single transaction to get the best performance? Is there an
> approximate number where there isn't any more performance to be had or
> performance may drop off
hello!!!
what is the suggested partitioning scheme for
postgresql??
the size of my db is 5GB and i have a 36GB scsi
disk!
thanks
wilson
On Tue, 5 Aug 2003, Shridhar Daithankar wrote:
> On 5 Aug 2003 at 14:15, Peter Childs wrote:
>
> > On Tue, 5 Aug 2003, Shridhar Daithankar wrote:
> >
> > > On 5 Aug 2003 at 8:09, Jeff wrote:
> > >
> > > I would suggest the autovacuum daemon in CVS contrib; it works for 7.3.x as
> > > well..
On Wednesday 06 August 2003 08:34, Yaroslav Mazurak wrote:
> Hi All!
>
>
> I have installed PostgreSQL 7.3.2 on FreeBSD 4.7, running on PC with
> CPU Pentium II 400MHz and 384Mb RAM.
Version 7.3.4 is just out - probably worth upgrading as soon as it's
convenient.
> Prob
Yaroslav Mazurak <[EMAIL PROTECTED]> writes:
>>> fsync = false
>> I'd turn fsync back on - unless you don't mind losing your data after a crash.
> This is a temporary performance solution - I want to get the SELECT query result
> first, but the current performance is too low.
Disabling fsync will not h
On Tue, 5 Aug 2003 15:26:09 -0700, Sean Chittenden
<[EMAIL PROTECTED]> wrote:
>> I have an experimental patch lying around somewhere that tries to
>> work around these problems by offering different estimation methods
>> for index scans. If you are interested, I'll dig it out.
>
>Sure, I'll take a
On Thu, 7 Aug 2003, Richard Huxton wrote:
> But this parameter controls how much memory can be allocated to sorts - I
> don't see how PG can figure out a reasonable maximum by itself.
One could have one setting for the total memory usage and pg could use
statistics or some heuristics to use the
On Thursday 07 August 2003 09:24, Yaroslav Mazurak wrote:
> > IIRC there is a limit on filesystem cache on freeBSD. 300MB by default.
> > If that is the case, you might have to raise it to make
> > effective_cache_size really effective..
>
> "Try various sysctls" says nothing for me. I want u
On 08/08/2003 11:28 mixo wrote:
I have just installed redhat linux 9 which ships with Pg
7.3.2. Pg has to be setup so that data inserts (blobs) should
be able to handle at least 8M at a time. The machine has
two P III 933MHz CPU's, 1.128G RAM (512M*2 + 128M), and
a 36 Gig hd with 1 Gig swap and 3 e
> Mem: 71M Active, 23M Inact, 72M Wired, 436K Cache, 48M Buf, 208M Free
> Swap: 368M Total, 2852K Used, 366M Free
>
> It seems right that I can figure I can use 384M (total RAM) - 72M
> (wired) - 48M (buf) = 264M for PostgreSQL.
> Hence, if I set effective_cache_size to 24M (3072 8K
Stef <[EMAIL PROTECTED]> writes:
> => Could we see the results of "EXPLAIN ANALYZE", rather than just EXPLAIN,
> => for the un-analyzed case?
> Attached the output of this.
Hmm... not immediately obvious where it's going wrong. Could you try
this (after ANALYZE):
set enable_mergejoin to off;
Josh Berkus <[EMAIL PROTECTED]> writes:
> My reading is that the case is "borderline";
Well, clearly the planner is flipping to a much less desirable plan, but
the core estimation error is not borderline by my standards. In the
live DB we have this subplan:
-> Nested Loop (cost=0.00..7.41 rows
On Mon, Aug 11, 2003 at 06:59:30PM -0400, Bruce Momjian wrote:
> Uh, the ext2 developers say it isn't 100% reliable --- at least that is
> what I was told. I don't know any personally, but I mentioned it while I
> was visiting Red Hat, and they didn't refute it.
IMHO, if we're going to say "don't u
Here is one talking about ext2 corruption from power failure from 2002:
http://groups.google.com/groups?q=ext2+corrupt+%22power+failure%22&hl=en&lr=&ie=UTF-8&selm=alvrj5%249in%241%40usc.edu&rnum=9
---
pgman wrote:
>
>
On 11 Aug 2003 at 11:59, Serge Dorofeev wrote:
>
> Hi PostgreSQL team,
>
> My PostgreSQL is installed as part of CYGWIN (Windows XP).
> I have compared the performance of PostgreSQL to MS SQL (I used a little Java program
> with a number of inserts into a table).
> MS SQL is 12 times faster :-(
> It's very
On Mon, 11 Aug 2003, Bruce Momjian wrote:
> scott.marlowe wrote:
> > On Fri, 8 Aug 2003, Andrew Sullivan wrote:
> >
> > > On Fri, Aug 08, 2003 at 09:40:20AM -0700, Jonathan Gardner wrote:
> > > >
> > > > Redhat puts ext3 on by default. Consider switching to a non-journaling FS
> > > > (ext2?) w
Tom,
> Let's see the pg_stats rows for case_clients in both databases. The
> entries for trial_groups might be relevant too.
My reading is that the case is "borderline"; that is, because the correlation
is about 10-20% higher on the test database (since it was restored "clean"
from backup) the
scott.marlowe wrote:
> On Fri, 8 Aug 2003, Andrew Sullivan wrote:
>
> > On Fri, Aug 08, 2003 at 09:40:20AM -0700, Jonathan Gardner wrote:
> > >
> > > Redhat puts ext3 on by default. Consider switching to a non-journaling FS
> > > (ext2?) with the partition that holds your data and WAL.
> >
> >
On Mon, Aug 11, 2003 at 03:58:41PM +0200, Stef wrote:
> I have attached an sql statement that normally runs under 3 minutes.
> That is, until I vacuum analyze the database (or just the tables in the query),
> then the same query runs longer than 12 hours, and I have to kill it.
Hmm, I have notice
Folks,
More followup on this:
The crucial difference between the two execution plans is this clause:
test db has:
-> Seq Scan on case_clients (cost=0.00..3673.48 rows=11274 width=11) (actual
time=0.02..302.20 rows=8822 loops=855)
whereas live db has:
-> Index Scan using idx_caseclients_case
On Tue, Aug 12, 2003 at 04:31:19PM -0400, Richard Welty wrote:
> impractical. has any consideration been given to providing a way to manage
> such a deployment?
Plenty. No-one's completed an implementation yet.
> or is it the judgement of the hackers community that a monstrous raid-10
> array
On Wed, 2003-08-13 at 01:47, Christopher Kings-Lynne wrote:
> > So, group the inserts in transactions with maybe 1000 commands each. It
> > will go much faster. It can then cache the rows and in the end just make
> > sure all 1000 have been written out on disk.
>
> More than that, he should be usi
On Tue, 12 Aug 2003, mixo wrote:
> that I am currently importing data into Pg which is about 2.9 Gigs.
> Unfortunately, to maintain data integrity, data is inserted into a table
> one row at a time.
So you don't put a number of inserts into one transaction?
If you don't do that then postgresql
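The batching suggestion can be sketched as follows; the table and values here are hypothetical. Wrapping many rows in one transaction amortizes the per-commit fsync cost:

```sql
BEGIN;
INSERT INTO import_table VALUES (1, 'row 1');
INSERT INTO import_table VALUES (2, 'row 2');
-- ... continue for roughly 1000 rows per batch ...
COMMIT;
```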
On Tue, 12 Aug 2003, Bruce Momjian wrote:
>
> I think Gavin Sherry is working on this. I am CC'ing him.
>
> ---
Yes I am working on this. I am about 50% of the way through the patch but
have been held up with other work. F
Jeff <[EMAIL PROTECTED]> writes:
> On Tue, 12 Aug 2003, Christopher Browne wrote:
>> Are you _certain_ that's still true? Have you a metric that shows
>> Informix being 10x faster on a modern system? That would be quite
>> surprising...
> We were forced (for budget reason) to switch from raw dis
On Tue, 12 Aug 2003, Christopher Browne wrote:
> Are you _certain_ that's still true? Have you a metric that shows
> Informix being 10x faster on a modern system? That would be quite
> surprising...
>
We were forced (for budget reasons) to switch from raw disk to cooked files
on our informix db.
Ron Johnson wrote:
> On Tue, 2003-08-12 at 13:39, Bruce Momjian wrote:
> > OK, I got some hard evidence. Here is a discussion on the Linux kernel
> > mailing list with postings from Allen Cox (ac Linux kernels) and Stephen
> > Tweedie (ext3 author).
> >
> > http://www.tux.org/hypermail/linux-
Flows table looks like this
Table "flows"
  Column   |  Type  | Modifiers
-----------+--------+-----------
 flidload  | bigint | not null
 firsttime | bigint |
I think Gavin Sherry is working on this. I am CC'ing him.
---
Christopher Kings-Lynne wrote:
> > there are advantages to being able to split the database across a slew of
> > disk drives. if we accept the notion of using th
[EMAIL PROTECTED] (Jeff) writes:
> On Tue, 12 Aug 2003, Christopher Browne wrote:
>> Are you _certain_ that's still true? Have you a metric that shows
>> Informix being 10x faster on a modern system? That would be quite
>> surprising...
> We were forced (for budget reason) to switch from raw dis
> Andrew Sullivan's fairly regular response is that he tried (albeit
> not VASTLY extensively) to distinguish between disks when working
> with fibre arrays, and he couldn't measure an improvement in
> shifting WAL (the OBVIOUS thing to shift) to separate disks.
Real quick... the faster the drives
On Wed, 13 Aug 2003 09:48:18 +0800 Christopher Kings-Lynne <[EMAIL PROTECTED]> wrote:
> We've got a little bunch of us tinkering with a tablespace
> implementation.
> However, it's been stalled for a while now.
interesting. i'm involved in the very early stages of a startup that is
likely to do a
Xin,
> I would recommend trying to take a close look at many of the posts on
> the Performance list (searching the archives) and paying attention to
> things such as effective_cache_size and shared_buffers. If these don't
> answer your questions, ask this list again.
Also see these articles:
http
On Wed, 2003-08-13 at 10:46, Josh Berkus wrote:
> Jeff,
>
[snip]
> > The other advantage (which I hinted to above) with raw disks is being able
> > to optimize queries to take advantage of it. Informix is multithreaded
> > and it will spawn off multiple "readers" to do say, a seq scan (and merge
Josh Berkus wrote:
> Jeff,
>
> > Informix, etc. have spent a lot of time and money working on it.
> > They also have the advantage of having many paid fulltime
> > developers who are doing this for a job, not as a weekend hobby
> > (Compared to the what? 2-3 full time PG developers).
>
> I think
Thanks to everyone who responded. It's a pity that the discussion has gone
the ext2 vs ext3 route. The main reason I asked my original question is
that I am currently importing data into Pg which is about 2.9 Gigs.
Unfortunately, to maintain data integrity, data is inserted into a table
one row at
Stef <[EMAIL PROTECTED]> writes:
> I have attached an sql statement that normally runs under 3 minutes.
> That is, until I vacuum analyze the database (or just the tables in the query),
> then the same query runs longer than 12 hours, and I have to kill it.
Could we see the results of "EXPLAIN ANA
Shridhar Daithankar wrote:
On 11 Aug 2003 at 23:42, Ron Johnson wrote:
On Mon, 2003-08-11 at 19:50, Christopher Kings-Lynne wrote:
Well, yeah. But given the Linux propensity for introducing major
features in "minor" releases (and thereby introducing all the
attendant bugs), I'd think twice abou
On Tue, Aug 12, 2003 at 02:39:19PM -0400, Bill Moran wrote:
> Meaning ... just tell it a raw partition to keep the data on and
> Postgre would create its own "filesystem" ... obviously, doing that
> would allow Postgre to bypass all the failings of all filesystems
> and rely entirely upon its own r
the primary key is flidload
- Original Message -
From: "scott.marlowe" <[EMAIL PROTECTED]>
To: "ingrid martinez" <[EMAIL PROTECTED]>
Cc: "Andrew Sullivan" <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>
Sent: Wednesday, August 13, 2003 11:47 AM
Subject: Re: [PERFORM] How can I Improve perform
Hi all,
I posted this problem on the sql list, and was referred to this list in stead.
I have attached an sql statement that normally runs under 3 minutes.
That is, until I vacuum analyze the database (or just the tables in the query),
then the same query runs longer than 12 hours, and I have to k
On Tue, 12 Aug 2003, Christopher Kings-Lynne wrote:
> > Well, yeah. But given the Linux propensity for introducing major
> > features in "minor" releases (and thereby introducing all the
> > attendant bugs), I'd think twice about using _any_ Linux feature
> > until it's been through a major versi
On Wed, 6 Aug 2003, Tom Lane wrote:
> Sebastien Lemieux <[EMAIL PROTECTED]> writes:
> > All the time is taken at the commit of both transaction.
>
> Sounds like the culprit is foreign-key checks.
>
> One obvious question is whether you have your foreign keys set up
> efficiently in the first pla
On Tue, Aug 12, 2003 at 12:52:46AM -0400, Bruce Momjian wrote:
> I don't use Linux and was just repeating what I had heard from others,
> and read in postings. I don't have any first-hand experience with ext2
> (except for a laptop I borrowed that wouldn't boot after being shut
> off), but others o
More than likely you are suffering from an affliction known as type
mismatch. This is listed as tip 9 here on the performance list (funny, it
was sent at the bottom of your reply :-)
What happens is that when you do:
select * from some_table where id=123;
where id is a bigint the query planne
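A hedged illustration of the workaround for this typing problem: on the 7.x planner a bare integer literal is treated as int4 and will not match a bigint index, so casting (or quoting) the constant restores the index scan. The table and column names are taken from the example above:

```sql
-- Bare literal: planner sees an int4 constant vs. a bigint column and
-- may fall back to a sequential scan (pre-8.0 behavior).
SELECT * FROM some_table WHERE id = 123;

-- Either of these lets the planner use the bigint index:
SELECT * FROM some_table WHERE id = 123::bigint;
SELECT * FROM some_table WHERE id = '123';
```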
Stef <[EMAIL PROTECTED]> writes:
> => And what PG version is this, exactly?
> PostgreSQL 7.3.1
Ah, I think I see it: you are getting burnt by a mergejoin estimation
bug that was fixed in 7.3.2. Please update (you might as well go to
7.3.4 while you're at it) and see if the results improve.
On Tue, 2003-08-12 at 13:39, Bruce Momjian wrote:
> OK, I got some hard evidence. Here is a discussion on the Linux kernel
> mailing list with postings from Allen Cox (ac Linux kernels) and Stephen
> Tweedie (ext3 author).
>
> http://www.tux.org/hypermail/linux-kernel/1999week14/subject.htm
Josh Berkus <[EMAIL PROTECTED]> writes:
> Also, there doesn't seem to be any way in 7.2 for me to find out what the
> current statistics target for a column is. What am I missing?
There still isn't a handy command for it --- you have to look at
pg_attribute.attstattarget for the column.
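A sketch of that catalog lookup; 'my_table' and 'my_column' are placeholders:

```sql
-- Find the current statistics target for one column.
SELECT a.attname, a.attstattarget
FROM pg_attribute a
JOIN pg_class c ON a.attrelid = c.oid
WHERE c.relname = 'my_table'
  AND a.attname = 'my_column';
```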
Hi Tom,
Thanks for responding.
I got as much info as I could :
On Mon, 11 Aug 2003 11:43:45 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:
=> Could we see the results of "EXPLAIN ANALYZE", rather than just EXPLAIN,
=> for the un-analyzed case?
Attached the output of this.
=> Also, what do you see
Uh, the ext2 developers say it isn't 100% reliable --- at least that is
what I was told. I don't know any personally, but I mentioned it while I
was visiting Red Hat, and they didn't refute it.
Now, the failure window might be quite small, but I have seen it happen
myself, and have heard it from o
Tom,
> Partly. The numbers suggest that in ANALYZE's default sample of 3000
> rows, it's only finding about a dozen non-null tgroup_ids (yielding the
> 0.996 null_frac value); and that in one case all dozen are different and
> in the other case there are two duplicates. It would help if you
> bo
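If a larger ANALYZE sample is wanted, the statistics target can be raised per column and the table re-analyzed. The table name below is assumed from context (the index is idx_cases_tgroup), so treat it as illustrative:

```sql
-- Raise the sample target for tgroup_id (the 7.x default is 10),
-- then re-sample the table so the new target takes effect.
ALTER TABLE cases ALTER COLUMN tgroup_id SET STATISTICS 100;
ANALYZE cases;
```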
On Wed, Aug 13, 2003 at 09:03:31AM -0500, ingrid martinez wrote:
> I want to know how I can improve the performance of postgres. I
> have a java class that inserts records every 30 min but it is very
> slow.
What does the query do? How is postgres configured?
A
--
Andrew Sullivan
People:
> On Mon, Aug 11, 2003 at 10:58:18PM -0400, Christopher Browne wrote:
> > 1. Nobody has gone through any formal proofs, and there are few
> > systems _anywhere_ that are 100% reliable.
>
> I think the problem is that ext2 is known to be not perfectly crash
> safe. That is, fsck on reboot
On Tue, 12 Aug 2003, Andrew Sullivan wrote:
> On Mon, Aug 11, 2003 at 10:58:18PM -0400, Christopher Browne wrote:
> > 1. Nobody has gone through any formal proofs, and there are few
> > systems _anywhere_ that are 100% reliable.
>
> I think the problem is that ext2 is known to be not perfectly
OK, I got some hard evidence. Here is a discussion on the Linux kernel
mailing list with postings from Allen Cox (ac Linux kernels) and Stephen
Tweedie (ext3 author).
http://www.tux.org/hypermail/linux-kernel/1999week14/subject.html#start
Search for "softupdates and ext2".
Here is the
On Wed, Aug 13, 2003 at 10:17:45AM -0500, ingrid martinez wrote:
> The query that execute is only inserts, I use a batch of 300 and then do
> commit.
>
> insert into FLOWS values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
Are there any foreign keys, &c?
>
> and
>
> postgresql.conf looks like thi
The query that execute is only inserts, I use a batch of 300 and then do
commit.
insert into FLOWS values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
and
postgresql.conf looks like this
#
# PostgreSQL configuration file
# -
#
# This file consists of lines of the form
#
On Wed, 2003-08-13 at 11:17, ingrid martinez wrote:
> The query that execute is only inserts, I use a batch of 300 and then do
> commit.
>
> insert into FLOWS values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
Any foreign keys on this table? Triggers or Rules?
What kind of hardware do you have?
FWIW, Informix can be run using a "cooked" (Unix) file for storing data or it uses
"raw" disk space and bypasses the ordinary (high level) UNIX controllers and does its
own reads/writes. About 10 times faster and safer. Of course, it may have taken a lot
of programmer time to make that solid. But
As I remember, there were clear cases that ext2 would fail to recover,
and it was known to be a limitation of the file system implementation.
Some of the ext2 developers were in the room at Red Hat when I said
that, so if it was incorrect, they would hopefully have spoken up. I
addressed the com
Josh Berkus <[EMAIL PROTECTED]> writes:
> Still, they are differences. Attached.
Actually, it was mainly "cases" that I wanted to know about ---
specifically, whichever columns are in "idx_cases_tgroup".
Also, which of the trial_groups columns is the pkey?
regards, tom la
On Tue, 12 Aug 2003, Neil Conway wrote:
> On Tue, Aug 12, 2003 at 12:52:46AM -0400, Bruce Momjian wrote:
> I don't use Linux and was just repeating what I had heard from others,
> > and read in postings. I don't have any first-hand experience with ext2
> > (except for a laptop I borrowed that wo
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On Friday 08 August 2003 03:28, mixo wrote:
> I have just installed redhat linux 9 which ships with Pg
> 7.3.2. Pg has to be setup so that data inserts (blobs) should
> be able to handle at least 8M at a time. The machine has
> two P III 933MHz CPU's,
> > > Well, yeah. But given the Linux propensity for introducing major
> > > features in "minor" releases (and thereby introducing all the
> > > attendant bugs), I'd think twice about using _any_ Linux feature
> > > until it's been through a major version (e.g. things introduced in
> > > 2.4.x won
On Mon, 11 Aug 2003 14:25:03 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:
=> set enable_mergejoin to off;
=> explain analyze ... query ...
=>
=> If it finishes in a reasonable amount of time, send the explain output.
Hi again,
I did this on the 7.3.1 database, and attached the output.
Tom,
> Partly. The numbers suggest that in ANALYZE's default sample of 3000
> rows, it's only finding about a dozen non-null tgroup_ids (yielding the
> 0.996 null_frac value); and that in one case all dozen are different and
> in the other case there are two duplicates. It would help if you
> bo
Greg,
> FWIW, Informix can be run using a "cooked" (Unix) file for storing data or
> it uses "raw" disk space and bypasses the ordinary (high level) UNIX
> controllers and does its own reads/writes. About 10 times faster and safer.
> Of course, it may have taken a lot of programmer time to make tha
Dear master:
I have been learning PostgreSQL for several days, and now I have met some problems. When I use a TPCC (Transaction Processing Performance Council) test program to test the performance of PostgreSQL, PostgreSQL works very slowly; it needs almost 1 minute to finish a transaction, and the CPU pe
Hi All!
I have installed PostgreSQL 7.3.2 on FreeBSD 4.7, running on PC with
CPU Pentium II 400MHz and 384Mb RAM.
Problem is that SQL statement (see below) is running too long. With
current WHERE clause 'SUBSTR(2, 2) IN ('NL', 'NM') returns 25 records.
With 1 record, SELECT time is about 50
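One possible direction for a filter like this (the column name is hypothetical, and note that general expression indexes only arrived in PostgreSQL 7.4; on 7.3 this would need a wrapper function):

```sql
-- Index the expression the WHERE clause filters on, so the IN list
-- can be resolved by index lookups instead of a sequential scan.
CREATE INDEX idx_codes_substr ON some_table (SUBSTR(code, 2, 2));
```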
On Tue, Aug 12, 2003 at 09:36:21AM -0700, Josh Berkus wrote:
> So it's a tradeoff with loss of performance vs. recovery time. In
> a server room with redundant backup power supplies, "clean room"
> security and fail-over services, I can certainly imagine that data
> journalling would not be neede
On Fri, 8 Aug 2003, Andrew Sullivan wrote:
> On Fri, Aug 08, 2003 at 09:40:20AM -0700, Jonathan Gardner wrote:
> >
> > Redhat puts ext3 on by default. Consider switching to a non-journaling FS
> > (ext2?) with the partition that holds your data and WAL.
>
> I would give you exactly the opposite
On Wed, 6 Aug 2003, Tom Lane wrote:
> "scott.marlowe" <[EMAIL PROTECTED]> writes:
> > On Wed, 6 Aug 2003, Tom Lane wrote:
> >> One obvious question is whether you have your foreign keys set up
> >> efficiently in the first place. As a rule, the referenced and
> >> referencing columns should have
On 11 Aug 2003 at 19:33, [EMAIL PROTECTED] wrote:
> Currently we are using postgresql 7.3 with Redhat linux 9. We find that
> when we try to execute 200,000 update statements through JDBC, the performance
> degraded noticeably for each update statement when compared with fewer update
> stateme
On Mon, Aug 11, 2003 at 10:58:18PM -0400, Christopher Browne wrote:
> 1. Nobody has gone through any formal proofs, and there are few
> systems _anywhere_ that are 100% reliable.
I think the problem is that ext2 is known to be not perfectly crash
safe. That is, fsck on reboot after a crash can
On Wed, Aug 06, 2003 at 12:45:34AM -0400, Tom Lane wrote:
> For core code, the answer would be a big NYET. We do not do feature
> additions in point releases, only bug fixes. While contrib code is more
> under the author's control than the core committee's control, I'd still
> say that you'd be m
> On Tue, 2003-08-05 at 17:40, Christopher Browne wrote:
> > Unfortunately, a "configurable-via-tables" pg_autovacuum is also going
> > to be quite different from the current "unconfigurable" version.
You don't need to create actual tables - just use 'virtual' tables, like the
pg_settings one. T
On Tue, 5 Aug 2003, Trevor Astrope wrote:
> I was wondering if anyone found a sweet spot regarding how many inserts to
> do in a single transaction to get the best performance? Is there an
> approximate number where there isn't any more performance to be had or
> performance may drop off?
>
>
On 11 Aug 2003, Ron Johnson wrote:
> On Mon, 2003-08-11 at 19:50, Christopher Kings-Lynne wrote:
> > > Well, yeah. But given the Linux propensity for introducing major
> > > features in "minor" releases (and thereby introducing all the
> > > attendant bugs), I'd think twice about using _any_ Linu
Yaroslav Mazurak <[EMAIL PROTECTED]> writes:
> Current postgresql.conf settings (some) are:
> max_locks_per_transaction = 16
This strikes me as a really bad idea --- you save little space by
reducing it from the default, and open yourself up to unexpected failures.
> wal_buffers = 256
Tha
Matthew T. O'Connor wrote:
> Fair point, my only concern is that a backend integrated
> pg_autovacuum would be radically different from the current libpq
> based client application.
Unfortunately, a "configurable-via-tables" pg_autovacuum is also going
to be quite different from the current "uncon
Josh,
I'm sure that you've thought of this, but it sounds like you may not have
done an analyze in your new DB.
Thanks,
Peter Darley
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Josh Berkus
Sent: Monday, August 11, 2003 3:48 PM
To: Ron Johnson; PgS
Hi,
I'm running on Redhat 7.2 with postgresql 7.3.2 and I have two schema in
the same database 'db' and 'db_dev'. Both contain a set of >20 tables for
a total of less than 50 Mb of data each (on the order of 50k rows in
total). Once in a while (often these days!), I need to synchronize the
dev
Josh Berkus <[EMAIL PROTECTED]> writes:
> Tom,
>> Okay, here's our problem:
>>
>> live DB: tgroup_id n_distinct = -1
>>
>> test DN: tgroup_id n_distinct = 11
>>
>> The former estimate actually means that it thinks tgroup_id is a unique
>> column, whereas the latter says there are only 11 disti
Hi!
Please send me the test db and the queries; with precise information,
maybe the developers can help.
--
Tomka Gergely
"S most - vajon barbárok nélkül mi lesz velünk?
Ők mégiscsak megoldás voltak valahogy..."
Hi Xin;
PostgreSQL is configured to run on virtually anything out of the box.
The reason for this is that, like Oracle, the database manager will not
start if it cannot allocate sufficient resources. We take the approach
of ensuring that it will start so you can tune it.
I would recommend trying
Sebastien Lemieux <[EMAIL PROTECTED]> writes:
> All the time is taken at the commit of both transaction.
Sounds like the culprit is foreign-key checks.
One obvious question is whether you have your foreign keys set up
efficiently in the first place. As a rule, the referenced and
referencing colu
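A sketch of the usual setup being described (table and column names are illustrative): the referencing column should share the referenced column's type and carry its own index, so each foreign-key check is an index lookup rather than a scan:

```sql
CREATE TABLE customers (id integer PRIMARY KEY);

CREATE TABLE orders (
    cust_id integer REFERENCES customers (id)  -- same type as customers.id
);

-- PostgreSQL does not create this index automatically for the
-- referencing side; without it, FK checks scan the orders table.
CREATE INDEX idx_orders_cust ON orders (cust_id);
```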
On Wed, Aug 06, 2003 at 03:03:41PM -0300, Wilson A. Galafassi Jr. wrote:
> I'm installing Postgresql under linux for better performance and i want to know
> what the best configuration is.
> 1. What is the best linux distribution for better performance?
The Linux distribution itself isn't that i
Try the pg_autovacuum daemon in CVS contrib dir. It works fine with 7.3.
Chris
- Original Message -
From: "Jeff" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, August 05, 2003 8:09 PM
Subject: [PERFORM] Some vacuum & tuning help
> I've been trying to search through the arc
On Tue, 12 Aug 2003 13:09:42 -0700 Josh Berkus <[EMAIL PROTECTED]> wrote:
> This idea has been discussed numerous times on the HACKERS list, and is
> a
> (pretty much) closed issue. While Oracle and SQL Server use their own
> filesystems, PostgreSQL will not because:
...
> 2) The filesystem pro
From: "Christopher Browne" <[EMAIL PROTECTED]>
> Shridhar Daithankar wrote:
> > I agree, specifying per table thresholds would be good in autovacuum..
>
> Which begs the question of what the future direction is for pg_autovacuum.
This is a good question.
> There would be some merit to having pg_
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Rod Taylor wrote:
>> On Fri, 2003-08-08 at 14:53, Andrew Sullivan wrote:
>>> I would give you exactly the opposite advice: _never_ use a
>>> non-journalling fs for your data and WAL. I suppose if you can
>>> afford to lose some transactions, you can do w
Sean Chittenden <[EMAIL PROTECTED]> writes:
>> If you CLUSTER on an index and then ANALYSE, you get a correlation of
>> 1.0 (== optimum) for the first column of the index.
> Correlating of what to what? Of data to nearby data? Of data to
> related data (ie, multi-column index?)? Of related data
Scott Cain <[EMAIL PROTECTED]> writes:
> A few days ago, I asked for advice on speeding up substring queries on
> the GENERAL mailing list. Joe Conway helpfully pointed out the ALTER
> TABLE STORAGE EXTERNAL documentation. After doing the alter,
> the queries got slower! Here is the background:
I've been trying to search through the archives, but it hasn't been
successful.
We recently upgraded from pg7.0.2 to 7.3.4 and things were happy. I'm
trying to fine tune things to get it running a bit better and I'm trying
to figure out how vacuum output correlates to tuning parameters.
Here's th
Josh Berkus <[EMAIL PROTECTED]> writes:
> By everything I can measure, the live database and the test are
> identical; yet the test does not think that idx_caseclients_case is
> very accessable, and the live database knows it is.
Let's see the pg_stats rows for case_clients in both databases. The
> there are advantages to being able to split the database across a slew of
> disk drives. if we accept the notion of using the native OS filesystem on
> each, it would seem that being able to direct various tables and indices
to
> specific drives might be a valuble capability. i know that i could
Bruce Momjian commented:
"Uh, the ext2 developers say it isn't 100% reliable" ... "I mentioned
it while I was visiting Red Hat, and they didn't refute it."
1. Nobody has gone through any formal proofs, and there are few
systems _anywhere_ that are 100% reliable. NASA has occasionally lost
spa
On 5 Aug 2003 at 9:18, Jeff wrote:
> As for the pg_dumping of it. I suppose it would work on this table as it
> is only a couple million rows and not terribly big data-wise. The other
> tables in this db are rather big and a load is not fast. (It is about
> 8GB).
You need to dump only those table