Jean-Pierre Pelletier wrote:
> Here are the explain analyze:
What is the explain analyze if you use "set enable_seqscan to off"?
Also, can you post the output of:
\d supplier
\d price
\d content
Mostly I just want to see what the indexes are, in the case that you
don't want to show us your schema.
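The experiment being suggested can be sketched as follows (the table and column names are taken from the query excerpt later in the thread; the exact query is illustrative):

```sql
-- Temporarily forbid sequential scans for this session, then re-run the
-- slow query to see whether the planner will consider an index plan at all.
SET enable_seqscan TO off;
EXPLAIN ANALYZE
SELECT 0
FROM Content C
LEFT OUTER JOIN Supplier S ON C.SupplierId = S.SupplierId;
-- Restore the default; this setting is a diagnostic, not a fix.
SET enable_seqscan TO on;
```

Comparing the two EXPLAIN ANALYZE outputs shows whether the seq scan is chosen because of cost estimates rather than missing indexes.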
Jean-Pierre Pelletier wrote:
> Hi,
>
> I've got many queries running much slower on 8.1 beta2 than on 8.0.1
> Here is a simplified one that takes 484 ms on 8.1 and 32 ms on 8.0.1.
>
> select
> 0
> from
> Content C
>
> left outer join Supplier S
> on C.SupplierId = S.SupplierId
>
> le
Jeremiah Jahn wrote:
>here's an example standard query. I really have to make the first hit go
>faster. The table is also clustered on full_name. 'Smith%'
>took 87 seconds on the first hit. I wonder if I set up my array wrong.
>I remember seeing something about DMA access versus something e
Jeremiah Jahn wrote:
>On Wed, 2005-08-17 at 21:21 -0500, John A Meinel wrote:
>
>
>>Jeremiah Jahn wrote:
>>
>>
>>>I just put together a system with 6GB of ram on a 14 disk raid 10 array.
>>>When I run my usual big painful queries, I get very little to no
>>>memory usage. My production box
Steinar H. Gunderson wrote:
>On Sun, Aug 14, 2005 at 07:27:38PM -0500, John Arbash Meinel wrote:
>
>
>>My guess is that this is part of a larger query. There isn't really much
>>you can do. If you want all 3.2M rows, then you have to wait for them to
>>be pulled
Stéphane COEZ wrote:
>Hi,
>
>I have a performance issue:
>
>I run PG (8.0.3) and SQLServer2000 on a Windows2000 Server (P4 1.5 GHz, 512 MB)
>I have a table (320 rows) and I run this single query :
>
>select cod from mytable group by cod
>I have an index on cod (char(4) - 88 different values)
>
>P
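For a low-cardinality column like this, GROUP BY and DISTINCT are two equivalent ways of asking for the unique values; a sketch using the names from the post:

```sql
-- Both return the 88 distinct values of cod. On 8.0-era planners neither
-- form can use the index to skip duplicates, so a sort or hash over the
-- whole table is the expected plan.
SELECT cod FROM mytable GROUP BY cod;
SELECT DISTINCT cod FROM mytable;
```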
Petr Kavan wrote:
> I have database of company data, and some of them is table of
> information about employees. I need each employee to have access only
> to his own row. Postgres cannot do this with its privilege system,
> because privileges can only be granted on whole tables.
>
> Possibility is to
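Before built-in row-level security, the usual workaround was a view filtered on current_user, with access granted on the view rather than the table. A minimal sketch (the table and column names are hypothetical):

```sql
-- Hypothetical schema: employees.login stores the database user name.
CREATE VIEW my_employee_row AS
    SELECT * FROM employees WHERE login = current_user;

-- Hide the base table and expose only the filtered view.
REVOKE ALL ON employees FROM PUBLIC;
GRANT SELECT ON my_employee_row TO PUBLIC;
```

Because current_user is evaluated at query time, each session sees only its own row through the view.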
Meetesh Karia wrote:
> Hi all,
>
> We're using 8.0.3 and we're seeing a problem where the planner is
> choosing a seq scan and hash join over an index scan. If I set
> enable_hashjoin to off, then I get the plan I'm expecting and the query
> runs a lot faster. I've also tried lowering the random
John Arbash Meinel wrote:
>Matthew Schumacher wrote:
>
>
>
>>All it's doing is trying the update before the insert to get around the
>>problem of not knowing which is needed. With only 2-3 of the queries
>>implemented I'm already back to running about
Matthew Schumacher wrote:
>Tom Lane wrote:
>
>
>
>>I looked into this a bit. It seems that the problem when you wrap the
>>entire insertion series into one transaction is associated with the fact
>>that the test does so many successive updates of the single row in
>>bayes_vars. (VACUUM VERBOSE
Matthew Schumacher wrote:
>All it's doing is trying the update before the insert to get around the
>problem of not knowing which is needed. With only 2-3 of the queries
>implemented I'm already back to running about the same speed as the
>original SA proc that is going to ship with SA 3.1.0.
>
>A
Josh Berkus wrote:
>Dennis,
>
>
>
>> EXCEPTION WHEN unique_violation THEN
>>
>>
>
>I seem to remember that catching an exception in a PL/pgSQL procedure was a
>large performance cost. It'd be better to do UPDATE ... IF NOT FOUND.
>
>
>
Actually, he was doing an implicit UPDATE IF N
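The UPDATE-then-INSERT pattern under discussion can be sketched in PL/pgSQL (table and column names are hypothetical); it avoids paying for an exception handler in the common case:

```sql
CREATE OR REPLACE FUNCTION upsert_counter(p_key text, p_val integer)
RETURNS void AS $$
BEGIN
    -- Try the cheap path first.
    UPDATE counters SET val = p_val WHERE key = p_key;
    IF NOT FOUND THEN
        -- Row did not exist. A concurrent insert can still race here,
        -- which is why the exception-catching variant exists at all.
        INSERT INTO counters(key, val) VALUES (p_key, p_val);
    END IF;
END;
$$ LANGUAGE plpgsql;
```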
Alex Stapleton wrote:
>
> On 20 Jun 2005, at 15:59, Jacques Caron wrote:
>
...
>> ANALYZE is not a very expensive operation, however VACUUM can
>> definitely be a big strain and take a long time on big tables,
>> depending on your setup. I've found that partitioning tables (at the
>> applica
Josh Close wrote:
>I have some queries that have significantly slowed down in the last
>couple days. It's gone from 10 seconds to over 2 mins.
>
>The cpu has never gone over 35% in the servers lifetime, but the load
>average is over 8.0 right now. I'm assuming this is probably due to
>disk io.
>
>I
Sebastian Hennebrueder wrote:
> I found a solution to improve my query. I do not know why, but the
> statistics for all columns had been 0.
> I changed this to 10 for index columns and to 20 for all foreign key
> columns.
> and to 100 for foreign key columns.
> I set the random page cost to 2
> and
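The changes Sebastian describes are made per column with ALTER TABLE; a sketch with hypothetical names:

```sql
-- Collect more detailed statistics for a column the planner is
-- mis-estimating, then re-analyze so the change takes effect.
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
ANALYZE orders;

-- random_page_cost can be set per session (or in postgresql.conf):
SET random_page_cost = 2;
```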
Joel Fradkin wrote:
Here is the result after putting it back to 4, the original value (I had done
that prior to your suggestion of using 2 or 3), to see what might change.
I also vacuumed and thought I saw records deleted in associate, which I
found odd as this is a test site and no new records were
Joel Fradkin wrote:
shared_buffers = 8000        # min 16, at least max_connections*2, 8KB each
work_mem = 8192              #1024   # min 64, size in KB
max_fsm_pages = 3            # min max_fsm_relations*16, 6 bytes each
effective_cache_size = 4     #1000   # typically 8KB each
random_page_c
Joel Fradkin wrote:
Running this explain on windows box, but production on linux both 8.0.1
The MSSQL is beating me out for some reason on this query.
The linux box is much more powerful; I may have to increase the cache,
but I am pretty sure it's not an issue yet.
It has 8 gig internal memory any r
Florin Andrei wrote:
On Wed, 2005-03-30 at 20:11 -0600, John Arbash Meinel wrote:
Florin Andrei wrote:
For performance reasons, i was thinking to keep the tables append-only,
and simply rotate them out every so often (daily?) and delete those
tables that are too old. Is that a good idea?
If you
Florin Andrei wrote:
On Wed, 2005-03-30 at 19:59 -0600, John Arbash Meinel wrote:
Put pg_xlog onto the same drive as the OS, not the drive with the database.
I forgot to mention: the OS drive is purposefully made very slow - the
write cache is turned off and the FS is Ext3 with data=journal. Is
Florin Andrei wrote:
On Wed, 2005-03-30 at 17:50 -0800, Florin Andrei wrote:
Function: Essentially a logging server. There are two applications (like
syslog) on the same box that are logging to pgsql, each one to its own
database. There are a few tables in one DB, and exactly one table in the
othe
Florin Andrei wrote:
Hardware: relatively modern Intel CPU, OS and database each on its own
IDE hard-drive (separate IDE cables). Enough memory, I think, but I
can't add too much (not beyond 1GB).
Software: Linux-2.6, pgsql-8.0.1
Function: Essentially a logging server. There are two applications (l
Keith Worthington wrote:
Hi All,
I am developing a simple set returning function as my first step towards more
complicated processes. I would like to understand the implications of using
the dynamic query capability.
I have built two forms of an identically performing function. The first uses
a s
Alexander Ranaldi wrote:
Greetings everyone,
I am about to migrate to Postgres from MySQL. My DB isn't enormous (<
1gb), consists mostly of just text, but is accessed quite heavily.
Because size isn't a huge issue, but performance is, I am willing to
normalize as necessary.
Currently I have a table
Miroslav Šulc wrote:
PFC wrote:
Your query seems of the form :
SELECT FROM main_table LEFT JOIN a lot of tables ORDER BY
sort_key LIMIT N OFFSET M;
I would suggest to rewrite it in a simpler way : instead of
generating the whole result set, sorting it, and then grabbing a
slice, gener
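The rewrite PFC is hinting at is keyset pagination: instead of sorting the whole joined result and discarding M rows, remember the last sort key shown and seek past it. A sketch with hypothetical names (:last_key stands for the last value the previous page displayed):

```sql
-- OFFSET-based: produces and discards M rows on every page.
SELECT * FROM main_table ORDER BY sort_key LIMIT 20 OFFSET 1000;

-- Keyset-based: an index on sort_key lets the scan start where the
-- previous page left off instead of re-sorting everything.
SELECT * FROM main_table
WHERE sort_key > :last_key
ORDER BY sort_key
LIMIT 20;
```

The keyset form only works when pages are walked in order, but for "next page" browsing it turns the cost from O(M+N) into O(N).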
Miroslav Šulc wrote:
Tom Lane wrote:
...
I think the reason this is popping to the top of the runtime is that the
joins are so wide (an average of ~85 columns in a join tuple according
to the numbers above). Because there are lots of variable-width columns
involved, most of the time the fast path
Miroslav Šulc wrote:
John Arbash Meinel wrote:
...
Many of the columns are just varchar(1) (because of the migration from
MySQL enum field type) so the record is not so long as it could seem.
These fields are just switches (Y(es) or N(o)). The problem is users
can define their own templates and in
Miroslav Šulc wrote:
Hi John,
thank you for your response.
I will comment on things separately.
John Arbash Meinel wrote:
...
These external tables contain information that is a unique parameter
of the AdDevice (like Position, Region, County, City etc.), in some
containing localized description
Miroslav Šulc wrote:
Hi John,
thank you for your response.
How about a quick side track.
Have you played around with your shared_buffers, maintenance_work_mem,
and work_mem settings?
What version of postgres are you using? The above names changed in 8.0,
and 8.0 also has some performance improvement
Miroslav Šulc wrote:
Hi all,
I am new to PostgreSQL and query optimizations. We have recently moved
our project from MySQL to PostgreSQL and we are having performance
problem with one of our most frequently used queries. On MySQL the speed
was sufficient, but PostgreSQL chooses a time-expensive query plan.
Karim Nassar wrote:
Thanks to all for the tips.
...
In general I would recommend RAID1, because that is the safe bet. If
your db is the bottleneck, and your data isn't all that critical, and
you are read heavy, I would probably go with RAID1, if you are write
Rick Schumeyer wrote:
I have two index questions. The first is about an issue that has been
recently discussed,
and I just wanted to be sure of my understanding. Functions like
count(), max(), etc. will
use sequential scans instead of index scans because the index doesn’t
know which rows
are actual
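The era-specific workaround is to express max() as an ordered, limited scan, which the planner of that time could satisfy from the index; a sketch with hypothetical names:

```sql
-- max() in 7.x/8.0 performs a full sequential scan:
SELECT max(id) FROM t;

-- Equivalent form that can walk an index on id backwards from the top:
SELECT id FROM t ORDER BY id DESC LIMIT 1;
```

8.1 taught the planner to perform this rewrite for min()/max() automatically; count(*) has no comparable shortcut because visibility must still be checked per row.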
Ken Egervari wrote:
Josh,
...
I thought about this, but it's very important since shipment and
shipment_status are both updated in real time 24/7/365. I think I
might be able to cache it within the application for 60 seconds at
most, but it would make little difference since people tend to refresh
Ken wrote:
Richard,
What do you mean by summary table? Basically a cache of the query
into a table with replicated column names of all the joins? I'd
probably have to wipe out the table every minute and re-insert the
data for each carrier in the system. I'm not sure how expensive this
operation
Ken Egervari wrote:
Josh,
I did everything you said and my query does perform a bit better.
I've been getting speeds from 203 to 219 to 234 milliseconds now. I
tried increasing the work mem and the effective cache size from the
values you provided, but I didn't see any more improvement. I've
tri
Jeremiah Jahn wrote:
On Thu, 2005-03-03 at 11:46 -0600, John A Meinel wrote:
...
Not really, about 2% of the returned rows are thrown away for security
reasons based on the current user, security groups they belong to and
different flags in the data itself. So the count for this is generated
on th
Ken Egervari wrote:
First, what version of postgres, and have you run VACUUM ANALYZE
recently?
Also, please attach the result of running EXPLAIN ANALYZE.
(eg, explain analyze select s.* from shipment ...)
I'm using postgres 8.0. I wish I could paste explain analyze, but I
won't be at work for a f
Ken Egervari wrote:
I've tried to use Dan Tow's tuning method and created all the right
indexes from his diagramming method, but the query still performs quite
slow both inside the application and just inside pgadmin III. Can
anyone be kind enough to help me tune it so that it performs better in
po
Markus Schaber wrote:
Hi, John,
John Arbash Meinel schrieb:
I am doing research for a project of mine where I need to store
several billion values for a monitoring and historical tracking system
for a big computer system. My current estimate is that I have to store
(somehow) around 1 billion
Sven Willenberger wrote:
On Tue, 2005-03-01 at 09:48 -0600, John Arbash Meinel wrote:
Sven Willenberger wrote:
Trying to determine the best overall approach for the following
scenario:
Each month our primary table accumulates some 30 million rows (which
could very well hit 60+ million rows per
Vig, Sandor (G/FI-2) wrote:
385 transaction/sec?
fsync = false
risky but fast.
I think with a dedicated RAID10 for pg_xlog (or possibly a battery
backed up ramdisk), and then a good amount of disks in a bulk RAID10 or
possibly a good partitioning of the db across multiple raids, you could
probably
Ramon Bastiaans wrote:
Hi all,
I am doing research for a project of mine where I need to store
several billion values for a monitoring and historical tracking system
for a big computer system. My current estimate is that I have to store
(somehow) around 1 billion values each month (possibly more).
Sven Willenberger wrote:
Trying to determine the best overall approach for the following
scenario:
Each month our primary table accumulates some 30 million rows (which
could very well hit 60+ million rows per month by year's end). Basically
there will end up being a lot of historical data with litt
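The usual approach for this pattern at the time was inheritance-based partitioning by month, so retiring old data is a DROP TABLE rather than a DELETE plus VACUUM. A sketch with hypothetical names:

```sql
CREATE TABLE events (event_time timestamptz, payload text);

-- One child table per month; the CHECK constraint lets 8.1's
-- constraint_exclusion skip irrelevant partitions (before 8.1 the
-- application had to route queries to the right child itself).
CREATE TABLE events_2005_03 (
    CHECK (event_time >= '2005-03-01' AND event_time < '2005-04-01')
) INHERITS (events);

-- Retiring a month is a cheap metadata operation:
DROP TABLE events_2005_03;
```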
John Allgood wrote:
Hello Again
In the below statement you mention putting each database on its own
raid mirror.
"However, sticking with your arrangement, it would seem that you might be
able to get some extra performance if each database is on its own raid,
since you are fairly likely to have 2 t
John Allgood wrote:
Here is a summary of the cluster suite from Red Hat. All 9 databases
will be on the primary server; the secondary server I have is the
failover. They don't actually share the partitions at the same time.
When you have some type of failure the backup server takes over. Once
you
John Allgood wrote:
This some good info. The type of attached storage is a Kingston 14 bay
Fibre Channel Infostation. I have 14 36GB 15,000 RPM drives. I think
the way it is being explained, that I should build a mirror with two
disks for pg_xlog, stripe and mirror the rest, and put
al
John Allgood wrote:
I think maybe I didn't explain myself well enough. At most we will
service 200-250 connections across all 9 databases mentioned. The
database we are building is for a trucking company. Each of the
databases represents a different division. With one master database
that every
werner fraga wrote:
Certain queries on my database get slower after
running a VACUUM ANALYZE. Why would this happen, and
how can I fix it?
I am running PostgreSQL 7.4.2 (I have also seen this
problem on v. 7.3 and 8.0)
Here is a sample query that exhibits this behaviour
(here the query goes from 1 secon
Chris Kratz wrote:
Hello All,
In contrast to what we hear from most others on this list, we find our
database servers are mostly CPU bound. We are wondering if this is because
we have postgres configured incorrectly in some way, or if we really need
more powerful processor(s) to gain more perform
Silke Trissl wrote:
Sorry,
is there a way to tell Postgres which index to use when a query is
issued in 7.4.2?
PostgreSQL adjusts usage through global parameters, statistics, and
periodic ANALYZE. Please post an EXPLAIN ANALYZE (not just EXPLAIN)
for your query and people on this list can help
Gaetano Mendola wrote:
Steven Rosenstein wrote:
DELETE FROM detail JOIN summary ON (summary.id=detail.sum_id) WHERE
collect_date='2005-02-05';
You have to tell it which table you are deleting from. SELECT * FROM A
JOIN B reads from both tables. What you want to do is fix the WHERE clause.
DELETE FROM detai
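In PostgreSQL the join in a DELETE is expressed with USING rather than JOIN, so the corrected statement being led up to would look something like this (whether collect_date belongs to summary or detail is not clear from the excerpt, so the qualification is an assumption):

```sql
-- Delete detail rows whose parent summary row matches the date.
DELETE FROM detail
USING summary
WHERE summary.id = detail.sum_id
  AND summary.collect_date = '2005-02-05';
```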
Joost Kraaijeveld wrote:
Hi all,
>I have a freshly vacuumed table with 1104379 records with an index on zipcode.
Can anyone explain why the queries go as they go, and why the performance
differs so much (1 second versus 64 seconds, or stated differently, 1
records per second versus 1562 recor
SpaceBallOne wrote:
Thanks for the reply John,
There are approximately 800 rows total in our job table (which stays
approximately the same because 'completed' jobs get moved to a
'job_archive' table). The other jobs not shown by the specific query
could be on backorder status, temporarily deleted stat