Greg,
> On that note, can I raise the idea again of dropping the default
> value for random_page_cost in postgresql.conf? I think 4 is too
> conservative in this day and age. Certainly the person who will
> be negatively impacted by a default drop of 4 to 3 will be the
> exception and not the rule
Bruce Momjian wrote:
> Agreed. I think we should reduce it at least to 3.
Note that changing it from 4 to 3 or even 2 is unlikely to really change much.
Many of the plans people complain about turn out to have critical points
closer to 1.2 or 1.1.
The only reason things work out better with
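To see why the crossover tends to sit so far below 3, here is a toy sketch of the trade-off. This is a deliberately simplified model with made-up numbers, not the actual planner cost equations:

```python
# Toy model (NOT the real PostgreSQL cost equations) of the
# seq-scan vs index-scan trade-off that random_page_cost controls.
def seq_scan_cost(pages, tuples, cpu_tuple_cost=0.01):
    # sequential page reads cost 1.0 each by definition
    return pages * 1.0 + tuples * cpu_tuple_cost

def index_scan_cost(matches, random_page_cost, cpu_tuple_cost=0.01):
    # pessimistic: one random page fetch per matching tuple
    return matches * random_page_cost + matches * cpu_tuple_cost

# hypothetical table: 10,000 pages, 1,000,000 rows, query matching 15,000
pages, tuples, matches = 10_000, 1_000_000, 15_000
for rpc in (4.0, 3.0, 2.0, 1.2):
    seq = seq_scan_cost(pages, tuples)
    idx = index_scan_cost(matches, rpc)
    print(f"random_page_cost={rpc}: seq={seq:.0f} idx={idx:.0f} "
          f"winner={'index' if idx < seq else 'seq'}")
```

With these invented numbers the plan only flips to the index scan once random_page_cost drops to about 1.3, which matches the observation that going from 4 to 3 rarely changes anything.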
Greg Sabino Mullane wrote:
> > Reducing random_page_cost is usually the best way to get the
> > planner to favor indexscans more.
>
> On that note, can I raise the idea again of dropping the default
>
"Greg Sabino Mullane" <[EMAIL PROTECTED]> writes:
> On that note, can I raise the idea again of dropping the default
> value for random_page_cost in postgresql.conf? I think 4 is too
> conservative in this day and age. Certainly the person who will
> be negatively impacted by a default drop of 4 to
> Reducing random_page_cost is usually the best way to get the
> planner to favor indexscans more.
On that note, can I raise the idea again of dropping the default
value for random_page_cost in postgresql.conf? I think 4 is too
conservative in this
On 3/14/2005 2:26 PM, Bryan Encina wrote:
Note also that the Slony-I replication system has problems
with column
names identical to reserved words. This is rooted in the fact
that the
quote_ident() function doesn't quote reserved words ... as it IMHO is
supposed to do.
Jan
Does this apply to
He doesn't have a RAID controller, it's software RAID...
Alex Turner
netEconomis
On Mon, 14 Mar 2005 16:18:00 -0500, Merlin Moncure
<[EMAIL PROTECTED]> wrote:
> Alex Turner wrote:
> > 35 Trans/sec is pretty slow, particularly if they are only one row at
> a time. I typically get 200-400/sec on
Actually my statistics were off a bit, I realised - the chance of failure
for one drive is 1 in X, the chance of failure in RAID 0 is 7 in X, the
chance of one drive failure in a 14 drive RAID 5 is 14 in X, 13 in X for
the second drive, so the total probability is 182 in X*X, which is much
lower than RAID 0.
Your drive perfor
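That arithmetic can be checked mechanically. A small sketch, with X chosen arbitrarily (the post never fixes a value):

```python
# Back-of-envelope data-loss odds from the post. X is the denominator
# of the per-drive failure chance over some fixed time window.
def raid0_loss(drives, x):
    # RAID 0 loses all data if ANY drive fails (small-p approximation)
    return drives / x

def raid5_loss(drives, x):
    # RAID 5 only loses data if a second drive fails before rebuild
    return (drives / x) * ((drives - 1) / x)

X = 1000.0  # assumed value, purely for illustration
print(round(raid0_loss(7, X), 6))    # the post's "7 in X" RAID 0
print(round(raid5_loss(14, X), 6))   # the post's "182 in X*X" RAID 5
```

For any plausible X the 14-drive RAID 5 figure (182/X^2) is far below the RAID 0 figure (7/X), as the post concludes.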
Alex Turner wrote:
> 35 Trans/sec is pretty slow, particularly if they are only one row at
> a time. I typically get 200-400/sec on our DB server on a bad day. Up
> to 1100 on a fresh database.
Well, don't rule out that his RAID controller is not caching his writes.
His WAL sync method may be ov
Alex Turner wrote:
I would recommend running a bonnie++ benchmark on your array to see if
it's the array/controller/RAID being crap, or whether it's postgres. I
have had some very surprising results from arrays that theoretically
should be fast, but turned out to be very slow.
I would also seriousl
Alex Turner wrote:
a 14 drive stripe will max out the PCI bus long before anything else,
the only reason for a stripe this size is to get a total accessible
size up. A 6 drive RAID 10 on a good controller can get up to
400Mb/sec which is pushing the limit of the PCI bus (taken from
official 3ware 9
All,
I have a 13 disk (250G each) software raid 5 set using 1 16 port adaptec SATA
controller.
I am very happy with the performance. The reason I went with the 13 disk raid 5
set was for the space NOT performance.
I have a single postgresql database that is over 2 TB with about 500 GB free
Alex Turner <[EMAIL PROTECTED]> writes:
> a 14 drive stripe will max out the PCI bus long before anything else,
Hopefully anyone with a 14 drive stripe is using some combination of 64 bit
PCI-X cards running at 66MHz...
> the only reason for a stripe this size is to get a total accessible
> siz
a 14 drive stripe will max out the PCI bus long before anything else,
the only reason for a stripe this size is to get a total accessible
size up. A 6 drive RAID 10 on a good controller can get up to
400Mb/sec which is pushing the limit of the PCI bus (taken from
official 3ware 9500S 8MI benchmarks
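The bus numbers are easy to sanity-check. A quick sketch of theoretical peak bandwidth (real sustained throughput is lower):

```python
# Theoretical peak bandwidth of a parallel bus: width in bytes * clock.
def bus_bw_mb_s(width_bits, clock_mhz):
    return width_bits / 8 * clock_mhz

print(bus_bw_mb_s(32, 33))   # plain 32-bit/33MHz PCI: 132.0 MB/s
print(bus_bw_mb_s(64, 66))   # 64-bit/66MHz PCI-X: 528.0 MB/s
```

So 400MB/s from a 6-drive RAID 10 would indeed saturate plain PCI, but fits comfortably on a 64-bit/66MHz PCI-X bus.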
> Note also that the Slony-I replication system has problems
> with column
> names identical to reserved words. This is rooted in the fact
> that the
> quote_ident() function doesn't quote reserved words ... as it IMHO is
> supposed to do.
>
>
> Jan
>
Does this apply to table names as well?
Jan Wieck <[EMAIL PROTECTED]> writes:
> quote_ident() function doesn't quote reserved words ... as it IMHO is
> supposed to do.
You're right, it probably should. The equivalent code in pg_dump knows
about this, but quote_ident() doesn't.
One thing that's been on my mind with respect to all this
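For illustration, the rule pg_dump follows (and that quote_ident() was missing at the time) can be sketched like this. The keyword list here is a toy subset, not the server's real list:

```python
# Sketch of the quoting rule quote_ident() arguably should follow.
# RESERVED is a toy subset; the real server keyword list is much longer.
import re

RESERVED = {"limit", "select", "table", "order", "group", "user"}

def quote_ident(name):
    # leave the name bare only if it is a safe lowercase identifier
    # AND not a reserved word; otherwise double-quote it
    if re.fullmatch(r"[a-z_][a-z0-9_]*", name) and name not in RESERVED:
        return name
    return '"' + name.replace('"', '""') + '"'

print(quote_ident("foo"))        # foo
print(quote_ident("limit"))      # "limit"  <- the case the 2005 code missed
print(quote_ident("MixedCase"))  # "MixedCase"
```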
Tom Lane wrote:
Miroslav Šulc <[EMAIL PROTECTED]> writes:
seriously, I am far below this level of knowledge. But I can contribute
a test that (maybe) can help. I have rewritten the query so it JOINs the
varchar() fields (in fact all fields except the IDPK) at the last INNE
Tom Lane wrote:
So I have some results. I have tested the query on both PostgreSQL 8.0.1
and MySQL 4.1.8 with LIMIT set to 30 and OFFSET set to 6000. PostgreSQL
result is 11,667.916 ms, MySQL result is 448.4 ms.
That's a fairly impressive discrepancy :-(, and even the slot_getattr()
patch th
Miroslav Šulc <[EMAIL PROTECTED]> writes:
> seriously, I am far below this level of knowledge. But I can contribute
> a test that (maybe) can help. I have rewritten the query so it JOINs the
> varchar() fields (in fact all fields except the IDPK) at the last INNER
> JOIN. Th
On 3/14/2005 1:28 PM, Robert Treat wrote:
Yeah... how come no one told him "don't do that"? LIMIT is an SQL
reserved word, so it's likely to cause trouble in any database you try
to use it on... I'd strongly recommend renaming that column asap. You
can see other reserved words at
http://www.postgr
Josh Berkus wrote:
A,
This is a Sun e450 with dual TI UltraSparc II processors and 2G of RAM.
It is currently running Debian Sarge with a 2.4.27-sparc64-smp custom
compiled kernel. Postgres is installed from the Debian package and uses
all the configuration defaults.
Please read http://www.powerp
Yeah... how come no one told him "don't do that"? LIMIT is an SQL
reserved word, so it's likely to cause trouble in any database you try
to use it on... I'd strongly recommend renaming that column asap. You
can see other reserved words at
http://www.postgresql.org/docs/8.0/interactive/sql-keywords
Miroslav Šulc <[EMAIL PROTECTED]> writes:
> [ concerning a deeply nested LEFT JOIN to get data from a star schema ]
> So I have some results. I have tested the query on both PostgreSQL 8.0.1
> and MySQL 4.1.8 with LIMIT set to 30 and OFFSET set to 6000. PostgreSQL
> result is
Kaloyan Iliev Iliev wrote:
Hi,
I have an idea about your problem. Will it be difficult not to change
the entire code but only the queries? You can change type in the
Postgres to bool. Then, when selecting data, you can use a CASE..WHEN to
return 'Y' or 'N' or even write a little function which accep
Hi,
I have an idea about your problem. Will it be difficult not to change
the entire code but only the queries? You can change type in the
Postgres to bool. Then, when selecting data, you can use a CASE..WHEN to
return 'Y' or 'N' or even write a little function which accepts bool
and returns 'Y' o
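A sketch of that suggestion, generating the CASE expression for the queries; the column and alias names are invented for the example:

```python
# Keep the column as BOOL in Postgres and map it back to the legacy
# 'Y'/'N' text form in the query, as the post suggests.
def yn_select(col):
    # hypothetical helper emitting the CASE expression for a bool column
    return f"CASE WHEN {col} THEN 'Y' ELSE 'N' END AS {col}_yn"

print(yn_select("active"))
# CASE WHEN active THEN 'Y' ELSE 'N' END AS active_yn
```

This keeps the schema change confined to the queries, leaving the application code that expects 'Y'/'N' untouched.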
Hi all,
running 7.4.x I still have a problem
with the select, and I have not found any solution apart from raising
cpu_tuple_cost to 0.7, so I'm reposting it in the hope of discovering a
glitch in the planner.
# explain analyze select * from v_sc_user_request whe
Miroslav Šulc wrote:
PFC wrote:
Your query seems to be of the form:
SELECT FROM main_table LEFT JOIN a lot of tables ORDER BY
sort_key LIMIT N OFFSET M;
I would suggest rewriting it in a simpler way: instead of
generating the whole result set, sorting it, and then grabbing a
slice, gener
Tom Lane wrote:
Just FYI, I did a quick search-and-replace on your dump to replace
varchar(1) by "char", which makes the column fixed-width without any
change in the visible data. This made hardly any difference in the
join speed though :-(. So that is looking like a dead end.
I'll try to chan
Miroslav Šulc <[EMAIL PROTECTED]> writes:
> PFC wrote:
>> Instead of a varchar(1) containing 'y' or 'n' you could use a BOOL
>> or an integer.
> Sure I could. The problem is our project still supports both MySQL and
> PostgreSQL. We used enum('Y','N') in MySQL so there would
Miroslav Šulc <[EMAIL PROTECTED]> writes:
> I think so. I'll change the varchar(1) fields to char(1) where possible
char isn't faster than varchar on postgres. If anything it may be slightly
slower because every comparison first needs to pad both sides with spaces.
--
greg
--
Miroslav Šulc wrote:
Tom Lane wrote:
...
I think the reason this is popping to the top of the runtime is that the
joins are so wide (an average of ~85 columns in a join tuple according
to the numbers above). Because there are lots of variable-width columns
involved, most of the time the fast path
Harald Fuchs wrote:
Sure I could. The problem is our project still supports both MySQL and
PostgreSQL. We used enum('Y','N') in MySQL so there would be a lot of
changes in the code if we would change to the BOOL data type.
Since BOOL is exactly what you want to express and since MySQL also
sup
Tom Lane wrote:
Miroslav Šulc <[EMAIL PROTECTED]> writes:
As there are a lot of varchar(1) in the AdDevicesSites table, wouldn't
be helpful to change them to char(1)? Would it solve the variable-width
problem at least for some fields and speed the query up?
No, becaus
In article <[EMAIL PROTECTED]>,
Miroslav Šulc <[EMAIL PROTECTED]> writes:
>> Instead of a varchar(1) containing 'y' or 'n' you could use a
>> BOOL or an integer.
> Sure I could. The problem is our project still supports both MySQL and
> PostgreSQL. We used enum('Y','N') in My
Miroslav Šulc <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Actually, we already had a pending patch (from Atsushi Ogawa) that
>> eliminates that particular O(N^2) behavior in another way. After
>> applying it, I get about a factor-of-4 reduction in the runtime for
>> Miros
Miroslav Šulc <[EMAIL PROTECTED]> writes:
> As there are a lot of varchar(1) in the AdDevicesSites table, wouldn't
> be helpful to change them to char(1)? Would it solve the variable-width
> problem at least for some fields and speed the query up?
No, because char(1) isn't p
On Wed, 9 Mar 2005, Michael McFarland wrote:
> I continue to be stumped by this. You are right that I should have
> listed the estimates provided by explain... basically for the select where
> bar = 41, it's estimating there will be 40,000 rows instead of 7, out of
> what's actually 5 millio
Jacques Caron wrote:
I'm preparing a set of servers which will eventually need to handle a
high volume of queries (both reads and writes, but most reads are very
simple index-based queries returning a limited set of rows, when not
just one), and I would like to optimize things as much as possibl
PFC wrote:
Instead of a varchar(1) containing 'y' or 'n' you could use a BOOL
or an integer.
Sure I could. The problem is our project still supports both MySQL and
PostgreSQL. We used enum('Y','N') in MySQL so there would be a lot of
changes in the code if we would change to the BOOL data t
1) in PostgreSQL I use 'varchar(1)' for a lot of fields and in MySQL I
use 'enum'
2) in PostgreSQL in some cases I use connection fields that are not of
the same type (smallint <-> integer (SERIAL)), in MySQL I use the same
types
Well both those things will make PostgreSQL slower...
Chris
--
Christopher Kings-Lynne wrote:
1) in PostgreSQL I use 'varchar(1)' for a lot of fields and in MySQL
I use 'enum'
2) in PostgreSQL in some cases I use connection fields that are not
of the same type (smallint <-> integer (SERIAL)), in MySQL I use the
same types
Well both those things will make P
Instead of a varchar(1) containing 'y' or 'n' you could use a BOOL or an
integer.
Your query seems to be of the form:
SELECT FROM main_table LEFT JOIN a lot of tables ORDER BY sort_key LIMIT
N OFFSET M;
I would suggest rewriting it in a simpler way: instead of generating
the whole result s
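A hedged sketch of that rewrite, with invented table and column names: apply LIMIT/OFFSET on the narrow key column first, then join the wide tables only for the selected slice:

```python
# Build the suggested two-step query: sort and slice on the narrow key
# first, then join the wide tables for just those rows.
# All identifiers here are illustrative, not from the original post.
def paged_query(n, m):
    return (
        "SELECT m.* FROM "
        f"(SELECT id FROM main_table ORDER BY sort_key LIMIT {n} OFFSET {m}) AS k "
        "JOIN main_table m USING (id) "
        "LEFT JOIN detail_table d USING (id)"
    )

print(paged_query(30, 6000))
```

The sort and the OFFSET skip then touch only the key column instead of the ~85-column join tuples discussed elsewhere in the thread.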
John Arbash Meinel wrote:
In fact, on MySQL I didn't see any slow reactions so I didn't measure
and inspect it. But I can try it if I figure out how to copy the
database from PostgreSQL to MySQL.
I figured you still had a copy of the MySQL around to compare to. You
probably don't need to spend too
You will still need to use double quotes in 8.0.1...
Chris
Gourish Singbal wrote:
Thanks a lot,
we might be upgrading to 8.0.1 soon.. till then using double quotes
should be fine.
regards
gourish
On Mon, 14 Mar 2005 18:25:22 +1100, Russell Smith <[EMAIL PROTECTED]> wrote:
On Mon, 14 Mar 2005 06:14
Tom Lane wrote:
...
I think the reason this is popping to the top of the runtime is that the
joins are so wide (an average of ~85 columns in a join tuple according
to the numbers above). Because there are lots of variable-width columns
involved, most of the time the fast path for field access does
Thanks a lot,
we might be upgrading to 8.0.1 soon.. till then using double quotes
should be fine.
regards
gourish
On Mon, 14 Mar 2005 18:25:22 +1100, Russell Smith <[EMAIL PROTECTED]> wrote:
> On Mon, 14 Mar 2005 06:14 pm, Gourish Singbal wrote:
> > Guys,
> >
> > I am having a problem firing que
Tom Lane wrote:
I wrote:
Since ExecProject operations within a nest of joins are going to be
dealing entirely with Vars, I wonder if we couldn't speed matters up
by having a short-circuit case for a projection that is only Vars.
Essentially it would be a lot like execJunk.c, except able to cope