Hi,
While monitoring we noticed that there are no details in pg_statistics
for a particular table. Can you let us know what might be the reason? Also,
what steps can be taken to add the statistics?
Note: The queries running on this table are taking longer than all the
Nimesh Satam wrote:
While monitoring we noticed that there are no details in pg_statistics
for a particular table. Can you let us know what might be the reason? Also,
what steps can be taken to add the statistics?
Have you ANALYZEd the table?
--
Heikki Linnakangas
Heikki,
Thank you for replying.
We have already run the ANALYZE command on the table.
We have also run the VACUUM ANALYZE command.
But they are not helping.
Thanks,
Nimesh.
On 6/11/07, Heikki Linnakangas [EMAIL PROTECTED] wrote:
Nimesh Satam wrote:
While monitoring we noticed that there
Actually this one is an opteron, so it looks like it's all good.
Dave
On 8-Jun-07, at 3:41 PM, Guy Rouillier wrote:
Dave Cramer wrote:
It's an IBM x3850 running Red Hat Linux 4.0
I had to look that up, web site says it is a 4-processor, dual-core
(so 8 cores) Intel Xeon system. It also says
On Mon, Jun 11, 2007 at 02:28:32PM +0530, Nimesh Satam wrote:
We have already run the ANALYZE command on the table.
We have also run the VACUUM ANALYZE command.
But they are not helping.
Is there any data in the table? What does ANALYZE VERBOSE or VACUUM
ANALYZE VERBOSE show for this table?
Michael,
Following is the output of VACUUM ANALYZE on the same table:
psql=# VACUUM ANALYZE verbose cam_attr;
INFO: vacuuming "public.cam_attr"
INFO: index "cam_attr_pk" now contains 11829 row versions in 63 pages
DETAIL: 0 index pages have been deleted, 0 are currently reusable.
CPU
On Mon, Jun 11, 2007 at 07:22:24PM +0530, Nimesh Satam wrote:
INFO: analyzing "public.cam_attr"
INFO: "cam_attr": scanned 103 of 103 pages, containing 11829 live rows and
0 dead rows; 6000 rows in sample, 11829 estimated total rows
Looks reasonable.
Also how do we check if the statistics are
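For the archive, a hedged way to check whether ANALYZE actually populated statistics (assuming the cam_attr table from this thread; pg_stats is the standard view over the raw pg_statistic catalog, and the last_analyze columns require 8.2 or later):

```sql
-- Per-column statistics ANALYZE should have produced:
SELECT attname, n_distinct, null_frac, most_common_vals
FROM pg_stats
WHERE schemaname = 'public' AND tablename = 'cam_attr';

-- When the table was last analyzed (PostgreSQL 8.2+):
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'cam_attr';
```

If the first query returns no rows after a successful ANALYZE, the table may simply be empty, or the wrong schema may be in the search path.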
On Jun 4, 2007, at 1:56 PM, Markus Schiltknecht wrote:
Simplistic throughput testing with dd:
dd of=test if=/dev/zero bs=10K count=800000
800000+0 records in
800000+0 records out
8192000000 bytes (8.2 GB) copied, 37.3552 seconds, 219 MB/s
pamonth:/opt/dbt2/bb# dd if=test of=/dev/zero bs=10K
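As a quick sanity check on the figure above (taking the reported 8.2 GB over 37.3552 seconds at face value, and assuming dd's convention of decimal megabytes):

```python
# Sanity-check the dd throughput figure: 8.2 GB written in 37.3552 s.
bytes_copied = 8_192_000_000   # bs=10K (10240 bytes) * 800000 blocks
seconds = 37.3552
mb_per_s = bytes_copied / seconds / 1_000_000  # dd uses 10**6-byte MB
print(round(mb_per_s))  # ~219, matching dd's reported 219 MB/s
```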
On May 29, 2007, at 12:03 PM, Joost Kraaijeveld wrote:
vacuum_cost_delay = 200
vacuum_cost_page_hit = 6
#vacuum_cost_page_miss = 10 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
vacuum_cost_limit = 100
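With cost-based delay, vacuum sleeps for vacuum_cost_delay milliseconds each time its accumulated page costs reach vacuum_cost_limit. A rough sketch of how hard the settings above throttle vacuum for cache-hit pages (this ignores the time vacuum spends doing actual work, so it is an upper bound, not a precise rate):

```python
# Back-of-the-envelope throttle estimate for the quoted settings.
vacuum_cost_delay_ms = 200   # sleep length once the limit is reached
vacuum_cost_limit = 100      # accumulated cost that triggers a sleep
vacuum_cost_page_hit = 6     # cost charged per buffer found in cache

pages_per_cycle = vacuum_cost_limit / vacuum_cost_page_hit  # ~16.7 pages
cycles_per_second = 1000 / vacuum_cost_delay_ms             # 5 sleeps/s
pages_per_second = pages_per_cycle * cycles_per_second
print(round(pages_per_second))  # ~83 cache-hit pages/s
```

At 8 kB per page that is well under 1 MB/s, which is why aggressive cost-delay settings can make vacuum crawl.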
I didn't see anyone else mention this, so...
On Jun 8, 2007, at 11:31 AM, Dave Cramer wrote:
Is it possible that providing 128G of RAM is too much? Will other
systems in the server bottleneck?
Providing to what? PostgreSQL? The OS? My bet is that you'll run into
issues with how shared_buffers are managed if you actually try and
On 10-Jun-07, at 11:11 PM, Jim Nasby wrote:
On Jun 8, 2007, at 11:31 AM, Dave Cramer wrote:
Is it possible that providing 128G of RAM is too much? Will other
systems in the server bottleneck?
Providing to what? PostgreSQL? The OS? My bet is that you'll run
into issues with how
Hi,
Jim Nasby wrote:
I don't think that kind of testing is useful for good raid controllers
on RAID5/6, because the controller will just be streaming the data out;
it'll compute the parity blocks on the fly and just stream data to the
drives as fast as possible.
That's why I called it
On Mon, Jun 11, 2007 at 11:09:42AM -0400, Dave Cramer wrote:
and set them to anything remotely close to 128GB.
Well, we'd give 25% of it to postgres, and the rest to the OS.
Are you quite sure that PostgreSQL's management of the buffers is
efficient with such a large one? In the past, that
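The "25% to postgres" rule of thumb from this thread works out as follows (a sketch only; the 25% figure is the poster's, not a hard rule, and on 8.x-era code very large shared_buffers settings were often counterproductive, as the reply above suggests):

```python
# shared_buffers sizing per the 25% rule of thumb mentioned above.
total_ram_gb = 128
shared_buffers_gb = total_ram_gb * 0.25            # 32 GB for PostgreSQL
num_buffers = int(shared_buffers_gb * 1024**3) // 8192  # 8 kB pages
print(shared_buffers_gb, num_buffers)
```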
Markus Schiltknecht wrote:
For dbt2, I've used 500 warehouses and 90 concurrent connections,
default values for everything else.
500? That's just too much for the hardware. Start from say 70 warehouses
and up it from there 10 at a time until you hit the wall. I'm using 30
connections with
Hi All,
I really hope someone can shed some light on my problem. I'm not sure if
this is a postgres or postgis issue.
Anyway, we have 2 development laptops and one live server; somehow I
managed to get the same query to perform very well on my laptop, but on
both the server and the other laptop
Tyrrill, Ed wrote:
QUERY PLAN
---
Merge Left Join (cost=38725295.93..42505394.70 rows=13799645 width=8)
(actual
On 2007-06-11 Christo Du Preez wrote:
I really hope someone can shed some light on my problem. I'm not sure
if this is a postgres or postgis issue.
Anyway, we have 2 development laptops and one live server; somehow I
managed to get the same query to perform very well on my laptop, but
on both
Hi all,
It seems that I have an issue with the performance of a PostgreSQL server.
I'm running write-intensive, TPC-C like tests. The workload consists of
150 to 200 thousand transactions. The performance varies dramatically,
between 5 and more than 9 hours (I don't have the exact figure for
Vladimir Stankovic wrote:
I'm running write-intensive, TPC-C like tests. The workload consists of
150 to 200 thousand transactions. The performance varies dramatically,
between 5 and more than 9 hours (I don't have the exact figure for the
longest experiment). Initially the server is relatively
Hi Andrew
On 11-Jun-07, at 11:34 AM, Andrew Sullivan wrote:
On Mon, Jun 11, 2007 at 11:09:42AM -0400, Dave Cramer wrote:
and set them to anything remotely close to 128GB.
Well, we'd give 25% of it to postgres, and the rest to the OS.
Are you quite sure that PostgreSQL's management of the
On 2007-06-11 Christo Du Preez wrote:
I really hope someone can shed some light on my problem. I'm not sure
if this is a postgres or postgis issue.
Anyway, we have 2 development laptops and one live server; somehow I
managed to get the same query to perform very well on my laptop, but
on both
Configuration
OS: FreeBSD 6.1 Stable
PostgreSQL: 8.1.4
RAID card 1 with 8 drives. 7200 RPM SATA RAID10
RAID card 2 with 4 drives. 10K RPM SATA RAID10
Besides having pg_xlog on the 10K RPM drives, what else can I do to best use
those drives other than putting some data on them?
Iostat shows