[PERFORM] planner with index scan cost way off actual cost, advice to tweak cost constants?

2006-03-17 Thread Guillaume Cottenceau
Hi, I have a problem with the Postgres planner, which assigns a cost to an index scan that is much higher than its actual cost (even in the worst case, i.e. without any previous disk cache), and I am posting here for advice on tweaking cost constants. Because of this problem, the planner typically chooses
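A minimal sketch of how such a mismatch is usually diagnosed and which constants this thread is about, assuming PostgreSQL 8.x; the query is a placeholder and the values are illustrative, not recommendations:

    EXPLAIN ANALYZE SELECT ...;   -- planner's preferred plan: estimated vs. actual
    SET enable_seqscan = off;
    EXPLAIN ANALYZE SELECT ...;   -- force the index scan for comparison
    RESET enable_seqscan;
    -- if the index scan really is cheaper, the usual knobs are
    -- random_page_cost (default 4) and effective_cache_size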

Re: [PERFORM] Background writer configuration

2006-03-17 Thread Evgeny Gridasov
Yesterday we received a new server: 2x AMD64 (2 cores x 2 chips = 4 cores), 8GB RAM and RAID-1 (LSI MegaRAID). I've made some tests with pgbench (scaling 1000, database size ~ 16GB). First of all, I'd like to mention that it was strange to see that the server performance degraded by 1-2% when we changed
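For context, a sketch of the kind of pgbench run described; the flags are standard pgbench options, and the client/transaction counts are assumptions — only the scale factor of 1000 comes from the message:

    pgbench -i -s 1000 bench       # initialize at scale factor 1000 (~16 GB)
    pgbench -c 16 -t 1000 bench    # e.g. 16 clients, 1000 transactions each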

Re: [PERFORM] 1 TB of memory

2006-03-17 Thread Luke Lonergan
Josh, On 3/16/06 9:43 PM, Josh Berkus josh@agliodbs.com wrote: With a single 3 Gbyte/second infiniband connection to the device? Hey, take it easy! Jim's post was tongue-in-cheek. You're right - I insulted his bandwidth, sorry :-) - Luke

Re: [PERFORM] Background writer configuration

2006-03-17 Thread PFC
First of all, I'd like to mention that it was strange to see that the server performance degraded by 1-2% when we changed kernel/userland to x86_64 from default installed i386 userland/amd64 kernel. The operating system was Debian Linux, filesystem ext3. Did you use postgres compiled

Re: [PERFORM] 1 TB of memory

2006-03-17 Thread Rodrigo Madera
For God's sake buy a mainframe! =o) On 3/17/06, Michael Stone [EMAIL PROTECTED] wrote: On Thu, Mar 16, 2006 at 10:44:25PM -0800, Luke Lonergan wrote: You'd be better off with 4 x $10K servers that do 800MB/s from disk each and a Bizgres MPP - then you'd do 3.2GB/s (faster than the SSD) at a

Re: [PERFORM] 1 TB of memory

2006-03-17 Thread Alex Stapleton
We got a quote for one of these (entirely for comedy value of course) and it was in the region of £1,500,000 give or take a few thousand. On 16 Mar 2006, at 18:33, Jim Nasby wrote: PostgreSQL tuned to the max and still too slow? Database too big to fit into memory? Here's the solution!

Re: [PERFORM] 1 TB of memory

2006-03-17 Thread Merlin Moncure
On 3/16/06, Jim Nasby [EMAIL PROTECTED] wrote: PostgreSQL tuned to the max and still too slow? Database too big to fit into memory? Here's the solution! http://www.superssd.com/products/tera-ramsan/ Anyone purchasing one will be expected to post benchmarks! :) I like their approach...ddr

Re: [PERFORM] Background writer configuration

2006-03-17 Thread Evgeny Gridasov
template1=# select version(); PostgreSQL 8.1.3 on x86_64-pc-linux-gnu, compiled by GCC cc (GCC) 3.3.5

Re: [PERFORM] Background writer configuration

2006-03-17 Thread PFC
I got this: template1=# select version(); PostgreSQL 8.1.2 on x86_64-pc-linux-gnu,

Re: [PERFORM] 1 TB of memory

2006-03-17 Thread Merlin Moncure
On 3/17/06, Rodrigo Madera [EMAIL PROTECTED] wrote: I don't know about you databasers that crunch in some selects, updates and deletes, but my personal developer workstation is planned to be a 4x 300GB SATA300 with a dedicated RAID striping controller (no checksums, just speedup) and 4x AMD64

Re: [PERFORM] Background writer configuration

2006-03-17 Thread Kevin Grittner
On Fri, Mar 17, 2006 at 6:24 am, in message [EMAIL PROTECTED], Evgeny Gridasov [EMAIL PROTECTED] wrote: I've made some tests with pgbench. If possible, tune the background writer with your actual application code under normal load. Optimal tuning is going to vary based on usage patterns.
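For reference, the background writer settings being discussed live in postgresql.conf; the values below are the 8.1 defaults and only a starting point, to be tuned under the real workload as suggested above:

    bgwriter_delay = 200          # ms between background writer rounds
    bgwriter_lru_percent = 1.0    # % of recently used buffers scanned per round
    bgwriter_lru_maxpages = 5     # max dirty LRU buffers written per round
    bgwriter_all_percent = 0.333  # % of the whole buffer pool scanned per round
    bgwriter_all_maxpages = 5     # max dirty buffers written per round from the full scan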

Re: [PERFORM] Help optimizing a slow index scan

2006-03-17 Thread Dan Harris
Markus Bertheau wrote: Have you tried using a GIST index on lat long? These things are meant for two-dimensional data, whereas btree doesn't handle two-dimensional data that well. How many rows satisfy either of the long / lat condition? According to the analyze, less than 500 rows

Re: [PERFORM] Help optimizing a slow index scan

2006-03-17 Thread Dan Harris
Dan Harris wrote: Markus Bertheau wrote: Have you tried using a GIST index on lat long? These things are meant for two-dimensional data, whereas btree doesn't handle two-dimensional data that well. How many rows satisfy either of the long / lat condition? According to the analyze, less

Re: [PERFORM] Help optimizing a slow index scan

2006-03-17 Thread Merlin Moncure
On 3/16/06, Dan Harris [EMAIL PROTECTED] wrote: explain analyze select distinct eventmain.incidentid, eventmain.entrydate, eventgeo.long, eventgeo.lat, eventgeo.geox, eventgeo.geoy from eventmain, eventgeo where eventmain.incidentid = eventgeo.incidentid and ( long
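A sketch of the GiST bounding-box approach being suggested; the table and column names come from the query above, but the index name and parameter placeholders are assumptions, and the exact opclass support varies by PostgreSQL version (older releases need contrib modules such as rtree_gist):

    CREATE INDEX eventgeo_coords_gist ON eventgeo
        USING gist ( box(point(long, lat), point(long, lat)) );
    -- then a single overlap test replaces the separate long/lat range conditions:
    --   WHERE box(point(long, lat), point(long, lat)) && box(point($1, $2), point($3, $4))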

Re: [PERFORM] Background writer configuration

2006-03-17 Thread Steve Atkins
On Mar 17, 2006, at 4:24 AM, Evgeny Gridasov wrote: Yesterday we received a new server: 2x AMD64 (2 cores x 2 chips = 4 cores), 8GB RAM and RAID-1 (LSI MegaRAID). I've made some tests with pgbench (scaling 1000, database size ~ 16GB). First of all, I'd like to mention that it was strange to see that

Re: [PERFORM] Background writer configuration

2006-03-17 Thread Steinar H. Gunderson
On Fri, Mar 17, 2006 at 08:56:58AM -0800, Steve Atkins wrote: 64 bit binaries usually run marginally slower than 32 bit binaries. This depends a bit on the application, and what you mean by 64 bit (ie. what architecture). Some specialized applications actually benefit from having a 64-bit native

Re: [PERFORM] 1 TB of memory

2006-03-17 Thread Jim C. Nasby
On Thu, Mar 16, 2006 at 10:44:25PM -0800, Luke Lonergan wrote: Jim, PostgreSQL tuned to the max and still too slow? Database too big to fit into memory? Here's the solution! http://www.superssd.com/products/tera-ramsan/ With a single 3 Gbyte/second infiniband connection to the device?

Re: [PERFORM] Help optimizing a slow index scan

2006-03-17 Thread Bruno Wolff III
On Fri, Mar 17, 2006 at 08:34:26 -0700, Dan Harris [EMAIL PROTECTED] wrote: Markus Bertheau wrote: Have you tried using a GIST index on lat long? These things are meant for two-dimensional data, whereas btree doesn't handle two-dimensional data that well. How many rows satisfy either of the

Re: [PERFORM] Background writer configuration

2006-03-17 Thread 11
On 2006-03-17, at 15:50, Evgeny Gridasov wrote: template1=# select version(); PostgreSQL 8.1.3 on x86_64-pc-linux-gnu, compiled by GCC cc (GCC)

Re: [PERFORM] Help optimizing a slow index scan

2006-03-17 Thread Merlin Moncure
On 3/17/06, Bruno Wolff III [EMAIL PROTECTED] wrote: Have you looked at using the Earth Distance contrib module? If a spherical model of the earth is suitable for your application, then it may work for you and might be easier than trying to create Gist indexes yourself. earth distance = great
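A sketch of the earthdistance approach mentioned here, assuming the cube and earthdistance contrib modules are installed; ll_to_earth() and earth_box() come from that module, the radius is in metres, and the index name and parameter placeholders are assumptions (the containment operator spelling has varied across releases):

    CREATE INDEX eventgeo_earth_idx ON eventgeo
        USING gist ( ll_to_earth(lat, long) );
    SELECT incidentid
      FROM eventgeo
     WHERE earth_box(ll_to_earth($1, $2), $3) @> ll_to_earth(lat, long);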

Re: [PERFORM] Background writer configuration

2006-03-17 Thread Evgeny Gridasov
[EMAIL PROTECTED]:~$ file /usr/lib/postgresql/8.1/bin/postgres /usr/lib/postgresql/8.1/bin/postgres: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.6.0, dynamically linked (uses shared libs), stripped On Fri, 17 Mar 2006 18:56:32 +0100 11 [EMAIL PROTECTED] wrote: On

Re: [PERFORM] 1 TB of memory

2006-03-17 Thread Luke Lonergan
Jim, On 3/17/06 9:36 AM, Jim C. Nasby [EMAIL PROTECTED] wrote: Now what happens as soon as you start doing random I/O? :) Well - given that we've divided the data into 32 separate segments, and that seeking is done in parallel over all 256 disk drives, random I/O rocks hard and scales. Of

Re: [PERFORM] Help optimizing a slow index scan

2006-03-17 Thread Dan Harris
Merlin Moncure wrote: As others will probably mention, effective queries on lat/long, which is a spatial problem, will require r-tree or gist. I don't have a lot of experience with exotic indexes but this may be the way to go. One easy optimization to consider making is to make an index on

Re: [PERFORM] Help optimizing a slow index scan

2006-03-17 Thread Merlin Moncure
On 3/17/06, Dan Harris [EMAIL PROTECTED] wrote: Merlin Moncure wrote: Thanks to everyone for your suggestions. One problem I ran into is that apparently my version doesn't support the GIST index that was mentioned. function 'box' doesn't exist ).. So I'm guessing that both this as well as

Re: [PERFORM] 1 TB of memory

2006-03-17 Thread Scott Marlowe
On Fri, 2006-03-17 at 15:28, Merlin Moncure wrote: On 3/17/06, Luke Lonergan [EMAIL PROTECTED] wrote: Now what happens as soon as you start doing random I/O? :) If you are accessing 3 rows at a time from among billions, the problem you have is mostly access time - so an SSD might be very

[PERFORM] Best OS Configuration for Dual Xeon w/4GB Adaptec RAID 2200S

2006-03-17 Thread Kenji Morishige
About a year ago we decided to migrate our central database that powers various intranet tools from MySQL to PostgreSQL. We have about 130 tables and about 10GB of data that stores various status information for a variety of services for our intranet. We generally have somewhere between 150-200

Re: [PERFORM] Best OS Configuration for Dual Xeon w/4GB

2006-03-17 Thread Scott Marlowe
On Fri, 2006-03-17 at 16:11, Kenji Morishige wrote: About a year ago we decided to migrate our central database that powers various intranet tools from MySQL to PostgreSQL. We have about 130 tables and about 10GB of data that stores various status information for a variety of services for

Re: [PERFORM] Best OS Configuration for Dual Xeon w/4GB Adaptec RAID 2200S

2006-03-17 Thread Claus Guttesen
Here is my current configuration: Dual Xeon 3.06GHz, 4GB RAM, Adaptec 2200S 48MB cache, 4 disks configured in RAID5, FreeBSD 4.11 w/kernel options: SHMMAXPGS=65536, SEMMNI=256, SEMMNS=512, SEMUME=256, SEMMNU=256

Re: [PERFORM] Best OS Configuration for Dual Xeon w/4GB Adaptec RAID 2200S

2006-03-17 Thread Tom Lane
Kenji Morishige [EMAIL PROTECTED] writes: ... We generally have somewhere between 150-200 connections to the database at any given time and probably anywhere between 5-10 new connections being made every second and about 100 queries per second. Most of the queries and transactions are very

Re: [PERFORM] Best OS Configuration for Dual Xeon w/4GB

2006-03-17 Thread Scott Marlowe
On Fri, 2006-03-17 at 17:03, Claus Guttesen wrote: Here is my current configuration: Also, I may be overlooking a postgresql.conf setting. I have attached the config file. You could try to lower shared_buffers from 3 to 16384. Setting this value too high can in some cases be
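The suggested change, as a single postgresql.conf line (16384 buffers of 8kB is 128MB; the number is from the reply above and illustrative rather than universal):

    shared_buffers = 16384    # 8kB pages, i.e. 128MB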

Re: [PERFORM] Best OS Configuration for Dual Xeon w/4GB Adaptec RAID 2200S

2006-03-17 Thread Claus Guttesen
4. Are there any other settings in the conf file I could try to tweak? One more thing :-) I stumbled over this setting; it made the db (PG 7.4.9) use the index rather than doing a sequential scan, and it reduced a query from several minutes to some 20 seconds: random_page_cost = 2
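A minimal sketch of verifying that kind of change per session before making it permanent; the query is a placeholder:

    SET random_page_cost = 2;    -- default is 4
    EXPLAIN ANALYZE SELECT ...;  -- confirm the plan switches to the index scan and is faster
    -- if so, set random_page_cost = 2 in postgresql.conf and reload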

Re: [PERFORM] Best OS Configuration for Dual Xeon w/4GB

2006-03-17 Thread Mark Kirkwood
Scott Marlowe wrote: On Fri, 2006-03-17 at 16:11, Kenji Morishige wrote: About a year ago we decided to migrate our central database that powers various intranet tools from MySQL to PostgreSQL. We have about 130 tables and about 10GB of data that stores various status information for a variety

Re: [PERFORM] Best OS Configuration for Dual Xeon w/4GB Adaptec RAID 2200S

2006-03-17 Thread Kenji Morishige
Thanks guys, I'm studying each of your responses and am going to start to experiment. Unfortunately, I don't have another box with similar specs to do a perfect experiment with, but I think I'm going to go ahead and open a service window to upgrade the box to FBSD 6.0 and apply some other

Re: [PERFORM] Help optimizing a slow index scan

2006-03-17 Thread Tom Lane
Dan Harris [EMAIL PROTECTED] writes: Furthermore, by doing so, I am tying my queries directly to postgres-isms. One of the long term goals of this project is to be able to fairly transparently support any ANSI SQL-compliant back end with the same code base. Unfortunately, there isn't any

Re: [PERFORM] Help optimizing a slow index scan

2006-03-17 Thread Michael Fuhr
On Fri, Mar 17, 2006 at 11:41:11PM -0500, Tom Lane wrote: Dan Harris [EMAIL PROTECTED] writes: Furthermore, by doing so, I am tying my queries directly to postgres-isms. One of the long term goals of this project is to be able to fairly transparently support any ANSI SQL-compliant back