Question: Does anyone have any idea what bottleneck I am hitting? An
index's performance should in theory scale as the log of the number of
rows - what am I missing here?
This information can help people on the list to help you:
- Your hardware config (CPU, RAM, disk)?
Hello,
We have installed postgres 8.2.0
default time zone which postgres server using is
template1=# SHOW timezone;
 TimeZone
-----------
 ETC/GMT-5
(1 row)
But we want to set this timezone parameter to IST.
Our system timezone is also in IST. We are using solaris.
Please provide me some help
soni de wrote:
But we want to set this timezone parameter to IST.
Our system timezone is also in IST. We are using solaris.
This is the performance list, and this is not a performance-related
question. Please use the pgsql-general or pgsql-novice list for this
kind of question.
PostgreSQL
On Tue, 19.06.2007 at 13:12:58 +0530, soni de wrote the following:
Hello,
We have installed postgres 8.2.0
default time zone which postgres server using is
template1=# SHOW timezone;
TimeZone
---
ETC/GMT-5
(1 row)
But we want to set this timezone parameter to
Hi list members,
I have a question regarding hardware issues for an SDI (Spatial Data Infrastructure). It will consist of PostgreSQL with PostGIS and a UMN MapServer/p.mapper setup. At our institute we are currently establishing a small GIS working group. The data storage for vector data should be
Tom Lane wrote:
Karl Wright [EMAIL PROTECTED] writes:
- At any given time, there are up to 100 of these operations going on at
once against the same database.
It sounds like your hardware is far past maxed out. Which is odd
since tables with a million or so rows are pretty small for modern
At our institute we are currently establishing a small GIS working group.
The data storage for vector data should be the central PostGIS system.
Raster data will be held in file system.
Mostly the users access the database in read-only mode. From the
client side there is not much write
On Jun 13, 2007, at 11:43 AM, Markus Schiltknecht wrote:
In the meantime, I've figured out that the box in question peaked
at about 1450 NOTPMs with 120 warehouses with RAID 1+0. I'll try to
compare again to RAID 6.
Is there any place where such results are collected?
There is the
An overnight VACUUM helped things quite a bit. I am now getting
throughput of around 75 transactions per minute, where before I was
getting 30. Also, the CPU is no longer pegged, and the machine's load
average has dropped to an acceptable 6-10 from somewhere above 20.
While this is still
On Mon, Jun 18, 2007 at 04:35:11PM -0700, Steve Atkins wrote:
On Jun 18, 2007, at 4:09 PM, [EMAIL PROTECTED] wrote:
The tricky bits are going to be defining the problem and creating the
algorithm to do the maths from input to output.
Why not methodically discuss the algorithms on
Can we please trim this down to just advocacy?
On Jun 18, 2007, at 1:17 PM, Joshua D. Drake wrote:
Jonah H. Harris wrote:
On 6/18/07, Andreas Kostyrka [EMAIL PROTECTED] wrote:
As a cynic, I might ask, what Oracle is fearing?
As a realist, I might ask, how many times do we have to answer
Karl Wright wrote:
This particular run lasted four days before a VACUUM became essential.
The symptom that indicates that VACUUM is needed seems to be that the
CPU usage of any given postgresql query skyrockets. Is this essentially
correct?
Are you saying you weren't used to running VACUUM
Alvaro Herrera wrote:
Karl Wright wrote:
This particular run lasted four days before a VACUUM became essential.
The symptom that indicates that VACUUM is needed seems to be that the
CPU usage of any given postgresql query skyrockets. Is this essentially
correct?
Are you saying you weren't
Karl Wright [EMAIL PROTECTED] writes:
This particular run lasted four days before a VACUUM became essential. The
symptom that indicates that VACUUM is needed seems to be that the CPU usage of
any given postgresql query skyrockets. Is this essentially correct?
Postgres is designed on the
Karl Wright wrote:
Alvaro Herrera wrote:
Karl Wright wrote:
This particular run lasted four days before a VACUUM became essential.
The symptom that indicates that VACUUM is needed seems to be that the
CPU usage of any given postgresql query skyrockets. Is this essentially
correct?
On Tue, 2007-06-19 at 09:37 -0400, Karl Wright wrote:
Alvaro Herrera wrote:
Karl Wright wrote:
This particular run lasted four days before a VACUUM became essential.
The symptom that indicates that VACUUM is needed seems to be that the
CPU usage of any given postgresql query
Gang,
Hoping you all can help me with a rather bizarre issue that I've run
across. I don't really need a solution, I think I have one, but I'd
really like to run it by everyone in case I'm headed in the wrong
direction.
I'm running a small Slony (v1.1.5)/postgresql 8.0.4 cluster (on
Gregory Stark wrote:
Karl Wright [EMAIL PROTECTED] writes:
This particular run lasted four days before a VACUUM became essential. The
symptom that indicates that VACUUM is needed seems to be that the CPU usage of
any given postgresql query skyrockets. Is this essentially correct?
Postgres
Alvaro Herrera wrote:
Karl Wright wrote:
Alvaro Herrera wrote:
Karl Wright wrote:
This particular run lasted four days before a VACUUM became essential.
The symptom that indicates that VACUUM is needed seems to be that the
CPU usage of any given postgresql query skyrockets. Is this
Kurt Overberg [EMAIL PROTECTED] writes:
In my investigation of this anomaly, I noticed that the data/ dir on
db1 (the master) is around 60 Gigs. The data directory on the slaves
is around 25Gb. After about 3 months of head scratching, someone on
the irc channel suggested that it may be
Kurt Overberg wrote:
In my investigation of this anomaly, I noticed that the data/ dir on db1
(the master) is around 60 Gigs. The data directory on the slaves is
around 25Gb. After about 3 months of head scratching, someone on the
irc channel suggested that it may be due to index bloat.
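One hedged way to chase suspected index bloat is to list the largest indexes on the master and a slave and compare sizes for the same logical contents. This sketch assumes the pg_relation_size()/pg_size_pretty() functions, which are available from 8.1 on:

```sql
-- Run on both master and slave: a bloated index shows up much larger
-- on one side despite holding the same rows.
SELECT c.relname, pg_size_pretty(pg_relation_size(c.oid)) AS on_disk
FROM pg_class c
WHERE c.relkind = 'i'
ORDER BY pg_relation_size(c.oid) DESC
LIMIT 10;
```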
In response to Karl Wright [EMAIL PROTECTED]:
Alvaro Herrera wrote:
Karl Wright wrote:
Alvaro Herrera wrote:
Karl Wright wrote:
This particular run lasted four days before a VACUUM became essential.
The symptom that indicates that VACUUM is needed seems to be that the
CPU usage
Bill Moran wrote:
In response to Karl Wright [EMAIL PROTECTED]:
Alvaro Herrera wrote:
Karl Wright wrote:
Alvaro Herrera wrote:
Karl Wright wrote:
This particular run lasted four days before a VACUUM became essential.
The symptom that indicates that VACUUM is needed seems to be that the
Karl Wright [EMAIL PROTECTED] writes:
Fine - but what if the previous vacuum is still in progress, and does not
finish in 5 minutes?
Yes, well, there are problems with this design but the situation is already
much improved in 8.2 and there are more improvements on the horizon.
But it's likely
A useful utility that I've found is PgFouine. It has an option to
analyze VACUUM VERBOSE logs. It has been instrumental in helping me
figure out what's been going on with my VACUUM that is taking 4+
hours, specifically tracking the tables that are taking the longest.
I highly recommend
Tom Lane wrote:
Karl Wright [EMAIL PROTECTED] writes:
Also, as I said before, I have done extensive query analysis and found
that the plans for the queries that are taking a long time are in fact
very reasonable. Here's an example from the application log of a query
that took way more time
Karl Wright [EMAIL PROTECTED] writes:
[2007-06-18 09:39:49,797]ERROR Plan: Index Scan using i1181764142395 on
intrinsiclink (cost=0.00..14177.29 rows=5 width=253)
[2007-06-18 09:39:49,797]ERROR Plan: Index Cond: ((jobid = $2) AND
((childidhash)::text = ($3)::text))
[2007-06-18
Tom Lane wrote:
Karl Wright [EMAIL PROTECTED] writes:
[2007-06-18 09:39:49,797]ERROR Plan: Index Scan using i1181764142395 on
intrinsiclink (cost=0.00..14177.29 rows=5 width=253)
[2007-06-18 09:39:49,797]ERROR Plan: Index Cond: ((jobid = $2) AND
((childidhash)::text = ($3)::text))
[EMAIL PROTECTED] writes:
sizes etc., I wondered about the hardware. Most things were about the I/O
of hard disks, RAM and the file system. Is the filesystem that relevant?
We want to stay with Ubuntu because of the software support,
especially for the GIS systems. I think we need at least
Gregory Stark writes:
VACUUM doesn't require shutting down the system, it doesn't lock any tables or
otherwise prevent other jobs from making progress. It does add extra i/o but
In addition to what Gregory pointed out, you may want to also consider using
Autovacuum. That may also help.
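For reference, enabling autovacuum on 8.2 takes a few postgresql.conf settings: it ships disabled and depends on the row-level statistics collector. The naptime value below is just illustrative:

```
# postgresql.conf (8.2) -- autovacuum prerequisites and switch
stats_start_collector = on
stats_row_level = on          # autovacuum needs row-level stats in 8.2
autovacuum = on
autovacuum_naptime = 1min     # how often each database is checked
```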
Alvaro Herrera writes:
How large is the database? I must admit I have never seen a database
that took 4 days to vacuum. This could mean that your database is
humongous, or that the vacuum strategy is wrong for some reason.
Specially with 16GB of RAM.
I have a setup with several databases
Karl Wright [EMAIL PROTECTED] writes:
I did an ANALYZE on that table and repeated the explain, and got this:
...
... even more wildly wrong.
Hmm. You might need to increase the statistics target for your larger
tables. It's probably not a big deal for queries like this one, but I'm
worried
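Concretely, the statistics target is raised per column. A sketch using the table and column names from the quoted query; the value 100 is just an illustrative bump from the 8.2 default of 10, not a recommendation:

```sql
ALTER TABLE intrinsiclink ALTER COLUMN jobid SET STATISTICS 100;
ALTER TABLE intrinsiclink ALTER COLUMN childidhash SET STATISTICS 100;
ANALYZE intrinsiclink;   -- re-gather stats at the new target
```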
Karl Wright wrote:
I did an ANALYZE on that table and repeated the explain, and got this:
metacarta=# analyze intrinsiclink;
ANALYZE
metacarta=# explain select count(*) from intrinsiclink where
jobid=1181766706097 and
childidhash='7E130F3B688687757187F1638D8776ECEF3009E0';
Karl Wright writes:
I'm not writing off autovacuum - just the concept that the large tables
aren't the ones that are changing. Unfortunately, they *are* the most
dynamically updated.
Would be possible for you to partition the tables?
By date or some other fashion to try to have some tables
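In 8.2, that kind of date-based partitioning is done with table inheritance plus CHECK constraints and the constraint_exclusion setting. A minimal sketch with hypothetical table names (not from the thread):

```sql
CREATE TABLE log_parent (id bigint, created date, data text);

CREATE TABLE log_2007_06 (
    CHECK (created >= DATE '2007-06-01' AND created < DATE '2007-07-01')
) INHERITS (log_parent);

-- With this on, the planner skips children whose CHECK rules them out:
SET constraint_exclusion = on;
SELECT count(*) FROM log_parent WHERE created = DATE '2007-06-19';
```

VACUUM then works on much smaller per-partition tables, and old partitions can be dropped instead of vacuumed.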
Chris Browne [EMAIL PROTECTED] writes:
[EMAIL PROTECTED] (Kurt Overberg) writes:
In my investigation of this anomaly, I noticed that the data/ dir on
db1 (the master) is around 60 Gigs. The data directory on the slaves
is around 25Gb. After about 3 months of head scratching, someone on
the
Gregory Stark [EMAIL PROTECTED] writes:
Karl Wright [EMAIL PROTECTED] writes:
In this case it looks like the planner is afraid that that's exactly
what will happen --- a cost of 14177 suggests that several thousand row
fetches are expected to happen, and yet it's only predicting 5 rows out
soni de [EMAIL PROTECTED] writes:
But we want to set this timezone parameter to IST.
Which IST are you interested in? Irish, Israel, or Indian Standard Time?
Postgres prefers to use the zic timezone names, which are less
ambiguous. Try this to see likely options:
regression=# select * from
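Tom's query is cut off in the archive; a hedged sketch of the kind of lookup he likely means, using the pg_timezone_names view that shipped with 8.2:

```sql
-- List candidate zone names whose abbreviation is IST, then pick one:
SELECT name, abbrev, utc_offset
FROM pg_timezone_names
WHERE abbrev = 'IST';

-- For Indian Standard Time, the unambiguous zic name in 8.2-era
-- zone databases is Asia/Calcutta:
SET timezone TO 'Asia/Calcutta';
```

To make the change permanent, set timezone in postgresql.conf rather than per session.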
Please read the whole email before replying:
I love the feedback I have received but I feel that somehow I did not
communicate the intent of this mini project very well. So let me
outline a few basics and who the audience was intended for.
Mini project title:
Initial Configuration
Richard,
Thanks for the feedback! I found oid2name and have been mucking
about with it, but haven't really found anything that stands out
yet. Most of the tables/indexes I'm comparing across machines seem
to take up a similar amount of disk space. I think I'm going to have
to get
Campbell, Lance wrote:
Please think simple. I stress the word simple. The real challenge here
is getting the formulas correct. Someone mentioned to not focus on the
values but just get something out there for everyone to help tweak. I
agree!
What questions do you think should be asked in
Campbell, Lance writes:
3) I suggested JavaScript because most people that get started with
PostgreSQL will go to the web in order to find out about issues relating
Why not C?
It could then go into contrib.
Anyways.. language is likely the least important issue..
As someone mentioned.. once
Francisco and Richard,
Why ask about disk or raid? How would that impact any settings in
postgresql.conf?
I did forget the obvious question:
What OS are you using?
Thanks,
Lance Campbell
Project Manager/Software Architect
Web Services at Public Affairs
University of Illinois
217.333.0382
Kurt Overberg wrote:
Richard,
Thanks for the feedback! I found oid2name and have been mucking about
with it, but haven't really found anything that stands out yet. Most of
the tables/indexes I'm comparing across machines seem to take up a
similar amount of disk space. I think I'm going to
Campbell, Lance wrote:
Francisco and Richard,
Why ask about disk or raid? How would that impact any settings in
postgresql.conf?
Well, random_page_cost will depend on how fast your disk system can
locate a non-sequential page. If you have a 16-disk RAID-10 array that's
noticeably less time
What version of PostgreSQL are you using?
I think newbies should be pushed a bit to use the latest versions, maybe
with some advice on how to set up the apt sources (in Debian/Ubuntu) to get
them.
How much memory will be available to PostgreSQL?
How many connections will be made to
Chris,
I took your advice, and I had found that sl_log_1 seems to be causing
some of the problem. Here's the result of a VACUUM VERBOSE
mydb=# vacuum verbose _my_cluster.sl_log_1;
INFO: vacuuming _my_cluster.sl_log_1
INFO: index sl_log_1_idx1 now contains 309404 row versions in
1421785
Campbell, Lance writes:
Francisco and Richard,
Why ask about disk or raid? How would that impact any settings in
postgresql.conf?
If the user has 2 disks and says that he will do a lot of updates he could
put pg_xlog in the second disk.
On 6/19/07, Francisco Reyes [EMAIL PROTECTED] wrote:
Campbell, Lance writes:
Francisco and Richard,
Why ask about disk or raid? How would that impact any settings in
postgresql.conf?
If the user has 2 disks and says that he will do a lot of updates he could
put pg_xlog in the second disk.
Yudhvir,
I completely agree. I was just putting together a similar email.
Thanks,
Lance Campbell
Project Manager/Software Architect
Web Services at Public Affairs
University of Illinois
217.333.0382
http://webservices.uiuc.edu
Francisco Reyes wrote:
I have a setup with several databases (the largest of which is 1TB
database) and I do a nightly vacuum analyze for ALL databases. It takes
about 22 hours. And this is with constant updates to the large 1TB
database. This is with Postgresql 8.1.3
22h nightly? Wow, you
On Tue, 19 Jun 2007 12:58:26 -0400
Francisco Reyes [EMAIL PROTECTED] wrote:
Campbell, Lance writes:
3) I suggested JavaScript because most people that get started with
PostgreSQL will go to the web in order to find out about issues relating
Why not C?
Why not whatever and install it on
D'Arcy,
I wanted to put it on the www.postgresql.org site. That is what I said
in my original email. I don't believe anyone from the actual project
has contacted me.
I am setting up a JavaScript version first. If someone wants to do a
different one feel free. I will have all of the
Below is a link to the HTML JavaScript configuration page I am creating:
http://www.webservices.uiuc.edu/postgresql/
I had many suggestions. Based on the feedback I received, I put
together the initial list of questions. This list of questions can be
changed.
Memory
There are many
On 6/19/07, Campbell, Lance [EMAIL PROTECTED] wrote:
Below is a link to the HTML JavaScript configuration page I am creating:
http://www.webservices.uiuc.edu/postgresql/
I had many suggestions. Based on the feedback I received, I put together
the initial list of questions. This list of
Heikki Linnakangas writes:
On a serious note, the index vacuum improvements in 8.2 might help you
to cut that down. You seem to be happy with your setup, but I thought
I'd mention it..
I am really, really trying.. to go to 8.2.
I have a thread on general going on for about a week.
I am
Campbell, Lance writes:
For the 6) Are your searches:
How about having many simple
On Tue, 19 Jun 2007, Y Sidhu wrote:
On 6/19/07, Francisco Reyes [EMAIL PROTECTED] wrote:
Campbell, Lance writes:
Francisco and Richard,
Why ask about disk or raid? How would that impact any settings in
postgresql.conf?
If the user has 2 disks and says that he will do a lot of
On Tue, 19 Jun 2007, Campbell, Lance wrote:
Memory
There are many different ways to ask about memory. Rather than ask a
series of questions I went with a single question, #2. If it is better
to ask about the memory in a series of questions then please give me the
questions you would ask and
Vivek Khera writes:
no file will ever be larger than 1Gb I didn't need to make any
adjustments to the newfs parameters.
You should consider using newfs -i 65536 for partitions to be used for
postgresql. You will get more usable space and will still have lots of free
inodes.
For my next
There are three categories of memory usage:
1. needed by other software
2. available for postgres
3. needed by the OS
There are actually only two required memory questions:
M1) How much RAM do you have on this machine?
M2) Is this:
() Dedicated PostgreSQL Server?
() Server
On Tue, Jun 19, 2007 at 10:49:01AM -0700, Y Sidhu wrote:
On 6/19/07, Francisco Reyes [EMAIL PROTECTED] wrote:
Campbell, Lance writes:
Francisco and Richard,
Why ask about disk or raid? How would that impact any settings in
postgresql.conf?
If the user has 2 disks and says that he
Now I am at the difficult part, what parameters to calculate and how to
calculate them. Everything below has to do with PostgreSQL version 8.2:
The parameters I would think we should calculate are:
max_connections
shared_buffers
work_mem
maintenance_work_mem
effective_cache_size
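To make the target concrete, here is the sort of output such a calculator might emit for a hypothetical dedicated 8.2 server with 4 GB of RAM; the numbers are purely illustrative, not recommendations:

```
max_connections = 100
shared_buffers = 1024MB        # ~25% of RAM on a dedicated box
work_mem = 8MB                 # per sort/hash node, per backend
maintenance_work_mem = 256MB   # helps VACUUM and index builds
effective_cache_size = 3GB     # RAM the OS will use as file cache
```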
On Mon, 2007-06-18 at 17:55 +0200, David Tokmatchi wrote:
I am a DBA for Oracle and a beginner on Postgres. For a company in
France, I must make a comparative study between Postgres and Oracle.
Can you send any useful documents which can help me?
Scalability ? Performance? Benchmark ?
On Thu, 2007-06-14 at 16:14 -0700, Craig James wrote:
Looking for replication solutions, I find:
Slony-I
Seems good, single master only, master is a single point of failure,
no good failover system for electing a new master or having a failed
master rejoin the cluster. Slave databases
Kurt Overberg [EMAIL PROTECTED] writes:
mydb=# vacuum verbose _my_cluster.sl_log_1;
INFO: sl_log_1: found 455001 removable, 309318 nonremovable row
versions in 13764 pages
DETAIL: 0 dead row versions cannot be removed yet.
Hmm. So you don't have a long-running-transactions problem (else
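For anyone wanting to check the long-running-transaction case Tom alludes to, one hedged way on 8.2 is to look at the oldest entries in pg_stat_activity (column names as of that release):

```sql
-- Backends whose current query started longest ago; an idle-in-transaction
-- session near the top is the classic culprit that keeps VACUUM from
-- removing dead rows.
SELECT procpid, usename, current_query, query_start
FROM pg_stat_activity
ORDER BY query_start
LIMIT 5;
```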
PFC [EMAIL PROTECTED] writes:
What version of PostgreSQL are you using?
I think newbies should be pushed a bit to use the latest versions,
How about pushed *hard* ? I'm constantly amazed at the number of people
who show up in the lists saying they installed 7.3.2 or whatever random
That's the thing thats kinda blowing my mind here, when I look at
that table:
db1=# select count(*) from _my_cluster.sl_log_1 ;
 count
-------
  6788
(1 row)
As far as my DB is concerned, there's only ~7000 rows (on average)
when I look
in there (it does fluctuate, I've seen it go as high as
Lance,
The parameters I would think we should calculate are:
max_connections
shared_buffers
work_mem
maintenance_work_mem
effective_cache_size
random_page_cost
Actually, I'm going to argue against messing with random_page_cost. It's a
cannon being used when a slingshot is called
Kurt Overberg [EMAIL PROTECTED] writes:
That's the thing thats kinda blowing my mind here, when I look at
that table:
db1=# select count(*) from _my_cluster.sl_log_1 ;
count
---
6788
(1 row)
Well, that's real interesting. AFAICS there are only two possibilities:
1. VACUUM sees
On Jun 19, 2007, at 7:26 PM, Tom Lane wrote:
Kurt Overberg [EMAIL PROTECTED] writes:
That's the thing thats kinda blowing my mind here, when I look at
that table:
db1=# select count(*) from _my_cluster.sl_log_1 ;
count
---
6788
(1 row)
Well, that's real interesting. AFAICS there
Kurt Overberg [EMAIL PROTECTED] writes:
Okay, I've grabbed pg_filedump and got it running on the appropriate
server.
I really have No Idea how to read its output though. Where does the
ctid from sl_log_1
appear in the following listing?
ctid is (block number, item number)
Block 0
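As a hedged illustration of mapping ctids onto pg_filedump output:

```sql
-- ctid = (block, item): block is the 8 kB page number counted from 0,
-- item is the line-pointer slot within that page.
SELECT ctid, * FROM _my_cluster.sl_log_1 LIMIT 3;
-- A row at ctid (3,7) corresponds to "Item 7" inside "Block 3"
-- in pg_filedump's listing of the relation's data file.
```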
On Wed, 20 Jun 2007, Mike Benoit wrote:
It would be cool if someone started a generic configuration+benchmark
utility that could be used with virtually any software.
It would be cool. It would also be impossible.
Create a benchmark script for the application that returns relevant
metrics.
Greg Smith [EMAIL PROTECTED] writes:
On Wed, 20 Jun 2007, Mike Benoit wrote:
I don't think something like this would be very difficult at all to
write
Here I just smile and say that proves you've never tried to write one :)
I'm with Greg on this. It's not that easy to optimize in a