Jim C. Nasby wrote:
> Hrm... I was about to suggest using explain analyze for timing just the
> query (and not output/data transfer time), but then I remembered that
> explain analyze can incur some non-trivial overhead from the timing
> calls. Is there a way to run the query but have psql ignore the output?
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
>> Feel free to propose better cost equations.
> Where would I look in code to see what's used now?
All the gold is hidden in src/backend/optimizer/path/costsize.c.
regards, tom lane
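As a rough illustration of what costsize.c computes for a plain
sequential scan (8.0-era model, ignoring quals), you can redo the
planner's arithmetic yourself; 'mytable' is a placeholder:

    -- seqscan cost ~= relpages * 1.0 + reltuples * cpu_tuple_cost
    SELECT relpages
           + reltuples * current_setting('cpu_tuple_cost')::float8
           AS est_seqscan_cost
    FROM pg_class
    WHERE relname = 'mytable';

The result should closely match the top-line cost that EXPLAIN prints
for "SELECT * FROM mytable".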
On Thu, Apr 21, 2005 at 08:24:15AM -0400, Jeff wrote:
>
> On Apr 21, 2005, at 7:49 AM, Shoaib Burq (VPAC) wrote:
>
> >Now I have not touched the $PGDATA/postgresql.conf (as I know very little
> >about memory tuning). I have run VACUUM & ANALYZE.
> >
> You should really, really bump up shared_buffers a
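For reference only, the knobs in question live in postgresql.conf; the
numbers below are illustrative guesses, not recommendations from this
thread (8.0 measures both settings in 8 kB pages):

    shared_buffers = 16384          # 16384 pages ~= 128 MB; needs restart
    effective_cache_size = 131072   # ~1 GB; hint about the OS page cache

Raising shared_buffers also requires SHMMAX to be large enough, which
the 6 GB setting shown below already covers.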
Shoaib Burq (VPAC) schrieb:
Hi everybody,
One of our clients was using SQL Server and decided to switch to
PostgreSQL 8.0.1.
Hardware: Dual processor Intel(R) Xeon(TM) CPU 3.40GHz
OS: Enterprise Linux with 2.6.9-5 SMP kernel
Filesystem: ext3
SHMMAX: $ cat /proc/sys/kernel/shmmax
6442450944 <--- b
Quoting Bill Chandler <[EMAIL PROTECTED]>:
> Running PostgreSQL 7.4.2, Solaris.
> Client is reporting that the size of an index is
> greater than the number of rows in the table (1.9
> million vs. 1.5 million). Index was automatically
> created from a 'bigserial unique' column.
> We have been ru
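A quick way to see the numbers the client is comparing (names here are
placeholders for the real table and its bigserial index) is to ask
pg_class directly:

    SELECT relname, relkind, reltuples, relpages
    FROM pg_class
    WHERE relname IN ('mytable', 'mytable_id_key');

Note that reltuples for an index (relkind = 'i') counts index entries,
and entries pointing at dead-but-not-yet-vacuumed heap rows still count,
so it can legitimately exceed the table's live row count.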
We have been using PostgreSQL for many years now...
We have always used it with the native OS it was built on, FreeBSD.
FreeBSD is rock solid stable. Very reliable.
With so many rumors about Linux being faster, especially the 2.6.x
kernel, I have decided to give it another try. I have not used Linux
I saw an interesting thought in another thread about placing database data
in a partition that uses cylinders at the outer edge of the disk. I want
to try this. Are the lower number cylinders closer to the edge of a SCSI
disk or is it the other way around? What about ATA?
Cheers,
Rick
--
I just finished testing Postgres, MYSQL, and MSSQL on my machine (2 gigs
internal XP).
I have adjusted the postgres config to what I think is an OK place, and
have mysql and mssql at their defaults.
Using Aqua Studio, a program that hooks to all three, I have found:
Initial exec Second exec R
Hi.
Sometimes, if the random number generator that PostgreSQL uses isn't
good enough, the randomly selected pages for the statistics might not
be random enough.
Solaris is unknown to me. Maybe the random number generator used there
isn't good enough?
Good statistics depend on good random numbers.
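Whatever the generator, a larger sample usually steadies a bad
n_distinct estimate; a sketch, with placeholder table and column names:

    -- sample more pages for this column, then re-gather statistics
    ALTER TABLE web_site_activity_fa
      ALTER COLUMN session_id SET STATISTICS 1000;
    ANALYZE web_site_activity_fa;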
"David Roussel" <[EMAIL PROTECTED]> writes:
> Note there is no reference to iso_pjm_data_update_events_event_id_key
> which is the index that went wacky on us. Does that seem weird to you?
What that says is that that index doesn't belong to that table. You
sure it wasn't a chance coincidence of
Are you using 8.0.2? I hope so because there were some Win32
performance changes related to fsync() in that release.
---
Joel Fradkin wrote:
> I just finished testing Postgres, MYSQL, and MSSQL on my machine (2 gigs
> internal XP).
You would be interested in
http://archives.postgresql.org/pgsql-hackers/2005-04/msg00565.php
On Thu, Apr 21, 2005 at 03:33:05PM -0400, Dave Chapeskie wrote:
> On Thu, Apr 21, 2005 at 11:28:43AM -0700, Josh Berkus wrote:
> > Michael,
> >
> > > Every five minutes, DBCC INDEXDEFRAG will report t
On Fri, Apr 22, 2005 at 10:08:06PM -0400, Tom Lane wrote:
> "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> > I've run some performance tests. The actual test case is at
> > http://stats.distributed.net/~decibel/timing.sql, and the results are at
> > http://stats.distributed.net/~decibel/timing.log. In a nutshell, doing
> > an index scan appears to be about 2x faster than a sequential scan and a
> > sort.
Building a single-column index on a dual Opteron with 4G of memory, data
on a 4-disk SATA RAID10; OS, logs and temp space on a SATA mirror, with
sort_mem set to 2.5G, create index is actually CPU bound for large
portions of time. The postgresql process and system time are accounting
for an entire CPU, and
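For anyone reproducing this, the index-build sort memory can be granted
per session instead of globally; in 8.0 the setting is
maintenance_work_mem, in kB (table and index names below are
placeholders):

    SET maintenance_work_mem = 1048576;   -- 1 GB, this session only
    CREATE INDEX foo_col_idx ON foo (col);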
Anjan,
> This is what I used to find the table size, which probably doesn't
> include the index size. Is there a way to find out size of indexes?
>
> select relpages * 8192 as size_in_bytes from pg_class where relnamespace
> = (select oid from pg_namespace where nspname = 'public') and relname =
>
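Indexes have pg_class rows of their own (relkind = 'i'), so one way to
get their sizes is to join through pg_index; 'mytable' is a placeholder:

    SELECT ci.relname AS index_name,
           ci.relpages * 8192 AS size_in_bytes
    FROM pg_class ct
    JOIN pg_index i ON i.indrelid = ct.oid
    JOIN pg_class ci ON ci.oid = i.indexrelid
    WHERE ct.relname = 'mytable';

As with the table-size query above, relpages is only as fresh as the
last VACUUM or ANALYZE.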
One further question is: is this really a meaningful test? I mean, in
production are you going to query 30 rows regularly?
It is a query snippet, if you will; the view I posted for audit, and the
cases where tables are joined, are more likely to be run.
Josh and I worked over this until we got ex
Hi there. We need to update a table of about 1.2GB (and about 900k rows) in size. I was wondering if I should let the regular cron job take care of cleanup (vacuum db Mon-Sat, vacuum full on Sun, followed by a reindex script), or do this manually on the table, followed by the update. This is what
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> I've run some performance tests. The actual test case is at
> http://stats.distributed.net/~decibel/timing.sql, and the results are at
> http://stats.distributed.net/~decibel/timing.log. In a nutshell, doing
> an index scan appears to be about 2x faster than a sequential scan and a
> sort.
On Fri, Apr 22, 2005 at 05:04:19PM -0400, Joel Fradkin wrote:
> And is the system always going to be used by only one user?
> No we have 400+ concurrent users
>
> I guess the question is if this big select is representative of the load you
> expect in production.
> Yes, we see many times on the tw
I've run some performance tests. The actual test case is at
http://stats.distributed.net/~decibel/timing.sql, and the results are at
http://stats.distributed.net/~decibel/timing.log. In a nutshell, doing
an index scan appears to be about 2x faster than a sequential scan and a
sort.
Something else
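When timing alternatives like this, it helps to force each plan by hand
so the planner's choice doesn't change mid-test; a common pattern (the
query is a stand-in):

    SET enable_seqscan = off;    -- force the index-scan plan
    EXPLAIN ANALYZE SELECT * FROM t ORDER BY id;
    RESET enable_seqscan;        -- restore the planner's own choice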
> > Solaris is unknown to me. Maybe the random number generator used there
> > isn't good enough?
>
> Hmmm. Good point. Will have to test on Linux.
Nope:
Linux 2.4.20:
test=# select tablename, attname, n_distinct from pg_stats where tablename =
'web_site_activity_fa';
tablename |
On Fri, Apr 22, 2005 at 01:51:08PM -0400, Joel Fradkin wrote:
> I just finished testing Postgres, MYSQL, and MSSQL on my machine (2 gigs
> internal XP).
>
> I have adjusted the postgres config to what I think is an OK place, and
> have mysql and mssql at their defaults.
>
> Using Aqua Studio, a program that hooks to all three, I have found:
Marko,
> Sometimes, if the random number generator that PostgreSQL uses isn't
> good enough, the randomly selected pages for the statistics might not
> be random enough.
>
> Solaris is unknown to me. Maybe the random number generator used there
> isn't good enough?
Hmmm. Good point. Will have to test on Linux.
Hrm... I was about to suggest using explain analyze for timing just the
query (and not output/data transfer time), but then I remembered that
explain analyze can incur some non-trivial overhead from the timing
calls. Is there a way to run the query but have psql ignore the output?
If so, you c
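psql can do roughly that: \timing reports the elapsed time while \o
throws the rows away, which removes rendering time from the measurement
(though the data still crosses the wire). The query is a placeholder:

    \timing
    \o /dev/null
    SELECT * FROM bigtable;
    \o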
On Fri, 22 Apr 2005 10:06:33 -0400, "Tom Lane" <[EMAIL PROTECTED]> said:
> David Roussel <[EMAIL PROTECTED]> writes:
> > | dave_data_update_events              | r | 1593600.0 | 40209
> > | dave_data_update_events_event_id_key | i | 1912320.0 | 29271
>
> Hmm ... what PG version is this, and what does VACUUM VERBOSE on
> that table show?
I've also now tried looking at pg_class.relpages. I compared the
results before and after vacuum. The results stayed the same, except
for five rows that increased after the vacuum. Here is the select on
those rows after the vacuum:
relname | relnamespace | reltype
Quoting Alvaro Herrera <[EMAIL PROTECTED]>:
> One further question is: is this really a meaningful test? I mean, in
> production are you going to query 30 rows regularly? And is the
> system always going to be used by only one user? I guess the question
> is if this big select is representative of the load you expect in
> production.
David Roussel <[EMAIL PROTECTED]> writes:
> | dave_data_update_events              | r | 1593600.0 | 40209
> | dave_data_update_events_event_id_key | i | 1912320.0 | 29271
Hmm ... what PG version is this, and what does VACUUM VERBOSE on
that table show?
regards, tom lane
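That is, something along the lines of:

    VACUUM VERBOSE dave_data_update_events;

whose per-index output lines report row-version and page counts that can
be checked against pg_class.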
On Apr 21, 2005, at 11:33 PM, Shoaib Burq (VPAC) wrote:
BTW I guess should mention that I am doing the select count(*) on a
View.
A bit of a silly question...
but are you actually selecting all the rows from this query in
production, or would it be more selective? I.e., select * from bigslowview
w
On 4/21/05, Enrico Weigelt <[EMAIL PROTECTED]> wrote:
>
> > Even if your data never changes it *can* change so the function should
> > be at most stable not immutable.
>
> okay, the planner sees that the table could potentially change,
> but - as the DBA - I'd like to tell him this table *neve
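In function terms the distinction looks like this (names are made up): a
function that reads a table must be declared at most STABLE, because the
table can change between statements even when, as the DBA, you know it
never will:

    CREATE FUNCTION lookup_rate(int) RETURNS numeric AS
      'SELECT rate FROM rate_table WHERE id = $1'
    LANGUAGE sql STABLE;  -- not IMMUTABLE: rate_table *can* change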
On 4/21/05, Enrico Weigelt <[EMAIL PROTECTED]> wrote:
> * Tom Lane <[EMAIL PROTECTED]> wrote:
>
>
> > Yeah, I was actually thinking about a two-step process: inline the
> > function to produce something equivalent to a handwritten scalar
> > sub-SELECT, and then try to convert sub-SELECTs into joins.
Dave Page wrote:
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
Andreas Pflug
Sent: 21 April 2005 14:06
To: Joel Fradkin
Cc: 'John A Meinel'; josh@agliodbs.com;
pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Joel's Performance Issues WAS :
Tom Lane wrote:
> Are the tables in question particularly wide (many columns)?
Yes they are. They both have 20 columns. If I cut down the duration
table to just 1 column of timestamps, the planner uses the index.
Interesting, so I could store just the timestamps in another table (view
doesn't help)
On 22 Apr 2005, at 06:57, Tom Lane wrote:
Bill Chandler <[EMAIL PROTECTED]> writes:
Client is reporting that the size of an index is
greater than the number of rows in the table (1.9
million vs. 1.5 million).
This thread seems to have wandered away without asking the critical
question "what did you
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Andreas Pflug
> Sent: 21 April 2005 14:06
> To: Joel Fradkin
> Cc: 'John A Meinel'; josh@agliodbs.com;
> pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Joel's Performance Issues WAS :