for reading,
--patrick
__
Do you Yahoo!?
Check out the new Yahoo! Front Page.
www.yahoo.com
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
the VACUUM improves performance. It does not. After
the VACUUM the statement remains slow.
Thanks for your help,
--patrick
--- Tom Lane [EMAIL PROTECTED] wrote:
patrick ~ [EMAIL PROTECTED] writes:
that if I 'createdb' and populate it with the sanitized data the
query in question is quite
Just wanted to know if there were any insights after looking at
requested 'explain analyze select ...'?
Thanks,
--patrick
solution left for me?
2. Am I in anyway screwing the db doing this?
Best regards,
--patrick
pkk_purchase where offer_id in ( 795, 2312
) group by offer_id ;
 offer_id | count
----------+-------
      795 |     4
     2312 |  1015
(2 rows)
Time: 21.118 ms
--patrick
--- Tom Lane [EMAIL PROTECTED] wrote:
patrick ~ [EMAIL PROTECTED] writes:
1. Is this really the only solution left
Sorry for the late reply. Was feeling a bit under the weather
this weekend and didn't get a chance to look at this.
--- Tom Lane [EMAIL PROTECTED] wrote:
patrick ~ [EMAIL PROTECTED] writes:
PREPARE pkk_00 ( integer ) the def of pkk_offer_has_pending_purc( integer )
This is what you
Hi John,
Thanks for your reply and analysis.
--- John Meinel [EMAIL PROTECTED] wrote:
patrick ~ wrote:
[...]
pkk=# explain analyze execute pkk_01(241 );
QUERY PLAN
, but the
larger the recordset the slower the data is returned to the client. I played
around with the cache size on the driver and found that a value between 100
and 200 provided good results.
HTH
Patrick Hatcher
here's the URL:
http://techdocs.postgresql.org/techdocs/pgsqladventuresep2.php
Patrick Hatcher
Macys.Com
Legacy Integration Developer
415-422-1610 office
HatcherPT - AIM
Patrick
Do you have an index on ts.bytes? Josh had suggested this and after I put
it on my summed fields, I saw a speed increase. I can't remember which
article it was that Josh had written about index usage, but maybe he'll chime
in and supply the URL for his article.
hth
Patrick Hatcher
shared_buffers = 2000          # min 16, at least max_connections*2, 8KB each
sort_mem = 12288               # min 64, size in KB
# - Free Space Map -
max_fsm_pages = 10             # min max_fsm_relations*16, 6 bytes each
#max_fsm_relations = 1000      # min 100, ~50 bytes each
TIA
Patrick Hatcher
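For what it's worth, the usual way to sanity-check max_fsm_pages on these releases is to run a database-wide VACUUM VERBOSE and compare its closing free-space-map summary against the setting. A sketch (the numbers in the comments are only placeholders, not output from this system):

```sql
-- Run as superuser; the last lines of the output summarize FSM usage,
-- something like:
--   INFO:  free space map: 123 relations, 45678 pages stored
-- If the stored/needed page count approaches max_fsm_pages, raise it
-- in postgresql.conf (each page slot costs ~6 bytes) and restart.
VACUUM VERBOSE;
```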
544679 pages are or will become empty, including 0 at the end of the table.
692980 pages containing 4433398408 free bytes are potential move destinations.
CPU 29.55s/4.13u sec elapsed 107.82 sec.
TIA
Patrick Hatcher
Patrick Hatcher
[EMAIL PROTECTED]
to very large sites.
I would very much appreciate any advice that some experienced users may have
to offer me for such a situation. TIA
Patrick
: "cdm_email_data": 65869 pages, 3000 rows
sampled, 392333 estimated total rows
#After vacuum full(s)
mdc_oz=# select count(*) from cdm.cdm_email_data;
  count
---------
 5433358
(1 row)
TIA
Patrick
Treat [EMAIL PROTECTED]
To: Patrick Hatcher [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, September 20, 2004 11:12 PM
Subject: Re: [PERFORM] vacuum full max_fsm_pages question
On Tuesday 21 September 2004 00:01, Patrick Hatcher wrote:
Hello.
Couple of questions:
- Q1: Today I decided
I upgraded to 7.4.3 this morning and
did a vacuum full analyze on the problem table and now the indexes show
the correct number of records
Patrick Hatcher
Macys.Com
Josh Berkus [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
09/21/04 10:49 AM
To
Patrick Hatcher
[EMAIL PROTECTED]
cc
. Is there something I should
be looking for in my conf settings?
TIA
Patrick
SQL:
-- Bring back only selected records to run through the update process.
-- Without the function the SQL takes 10 secs to return 90,000 records.
SELECT count(pm.pm_delta_function_amazon(upc.keyp_upc,'amazon'))
FROM
Thanks for the help.
I found the culprit. The user
had created a function within the function (
pm.pm_price_post_inc(prod.keyp_products)).
Once this was fixed the time dropped dramatically.
Patrick Hatcher
Macys.Com
Legacy Integration Developer
415-422-1610 office
HatcherPT - AIM
Patrick
fetching
that data from the database each time. My advice: if you're not a masochist,
use a template engine (or simply parse out a print_r() ) to create these PHP
arrays or SQL functions.
Greg, thanks a lot for the advice. I owe you a beer ;)
On Saturday 18 September 2004 23:07, you wrote:
Patrick
is not dropping easily?
On Tuesday 05 October 2004 10:32, you wrote:
Patrick,
First off, thanks for posting this solution! I love to see a new demo of
The Power of Postgres(tm) and have been wondering about this particular
problem since it came up on IRC.
The array method works quite nicely
Is it all the foreign keys that are stalling the drop? I have done VACUUM
ANALYZE on the entire db. Could anyone offer some insight as to why this
index is not being used or why the index is not dropping easily?
On Tuesday 05 October 2004 10:32, you wrote:
Patrick,
First off, thanks
I do mass inserts daily into PG. I drop all the indexes except my primary key and then use the COPY FROM command. This usually takes less than 30 seconds. I spend more time waiting for indexes to recreate.
Patrick Hatcher
Macys.Com
[EMAIL PROTECTED] wrote: -
To: [EMAIL PROTECTED]
From: Christopher
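That mass-insert recipe can be sketched like this (the table and index names are made up for illustration; only the primary key is kept during the load):

```sql
-- Drop secondary indexes, keep the primary key.
DROP INDEX items_category_idx;
-- Bulk-load; COPY is far faster than row-by-row INSERTs.
COPY items (id, category, price) FROM '/tmp/items.copy';
-- Recreate the indexes afterwards; this is usually the slow part.
CREATE INDEX items_category_idx ON items (category);
ANALYZE items;
```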
Dear,
We have been using PostgreSQL for 4 years now; one can say it is a blessing to
maintain. Our previous database was number one (;-), but it was much harder
to maintain, so labor is a pro for PostgreSQL ...
Kind Regards
Patrick Meylemans
IT Manager
WTCM-CRIF
Celestijnenlaan 300C
3001 Heverlee
At 11
I could create and populate my databases into
?...since the databases do not support updates for this
application.
Sorry for my naive questions and my poor English, but any help or advice
will be greatly appreciated!
Patrick Vedrines
PS (maybe of interest for some users like me) :
I created a partiti
types you mentioned (reiserfs
vs ext3)?
I don't use RAID since security is not a
concern.
Thanks a lot for your help!
Patrick
Hi
Patrick, how is your disk array configured? Do you
have a Perc 4? Tip: use reiserfs instead of ext3, RAID 0+1, and the deadline
I/O scheduler in Linux kernel 2.6.
(and how) to create a RAMdisk ?
Thanks a lot for your help!
Patrick
,
Patrick
security but only about speed and capacity (maybe
this switch was not set properly at this time...).
Thank you for these interesting links: I've sent
them to my system engineer with my two hands!
Kind regards,
Patrick
- Original Message -
From:
Gustavo
Franklin Nóbrega - Planae
# (same)
#cpu_index_tuple_cost = 0.001 # (same)
#cpu_operator_cost = 0.0025 # (same)
Patrick Hatcher
different
streets in 2 different cities may have the same name.
Kind regards,
Patrick
)
Index Cond: (u.upc = outer.item_upc)
(15 rows)
Unfortunately, I need this criteria since it contains the first date of the
order and is used to pull the correct price.
Any suggestions?
TIA
Patrick
Thanks. No foreign keys and I've been bitten by the mismatch datatypes and
checked that before sending out the message :)
Patrick Hatcher
Development Manager Analytics/MIO
Macys.com
Tom Lane
'::date))
Total runtime: 753467.847 ms
Patrick Hatcher
Development Manager Analytics/MIO
Macys.com
415-422-1610
Tom Lane
[EMAIL PROTECTED
We have size and color in the product table itself. It is really an
attribute of the product. If you update the availability of the product
often, I would split out the quantity into a separate table so that you can
truncate and update as needed.
Patrick Hatcher
Development Manager Analytics
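That quantity split might look roughly like this (the table and column names below are hypothetical, not from the actual schema):

```sql
-- Static attributes stay on the product table; fast-changing stock
-- counts live in their own small table keyed by product.
CREATE TABLE product_qty (
    keyp_product integer PRIMARY KEY REFERENCES product,
    qty_on_hand  integer NOT NULL
);
-- Refresh availability without touching the product rows:
TRUNCATE product_qty;
COPY product_qty FROM '/tmp/qty.copy';
```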
this problem. (I am trying this test on another machine with the
same version of PostgreSQL installed on it, and right now,
it is stuck on the first of the two huge tables, and it has
already been going for more than 2 hrs).
I'm open to any ideas and/or suggestions (within reason) :)
Best regards,
--patrick
?
Is this because the -c option drops all foreign keys and
so the restore goes faster? Should this be the preferred,
recommended and documented method to run pg_restore?
Any drawbacks to this method?
Thanks,
--patrick
On 4/12/06, Tom Lane [EMAIL PROTECTED] wrote:
patrick keshishian [EMAIL
On 4/13/06, Jim C. Nasby [EMAIL PROTECTED] wrote:
On Thu, Apr 13, 2006 at 06:26:00PM -0700, patrick keshishian wrote:
$ dropdb dbname
$ createdb dbname
$ pg_restore -vsOd dbname dbname.DUMP
That step is pointless, because the next pg_restore will create the
schema for you anyway.
Yes, I
: (oid 1::oid)
(4 rows)
Time: 27.301 ms
db=# select count(*) from pk_c2 b0 where b0.offer_id=7141 and oid 1;
 count
-------
     1
(1 row)
Time: 1.900 ms
What gives?
This seems just too hokey for my taste.
--patrick
db=# select version();
version
again for your input,
--patrick
On 4/18/06, Tom Lane [EMAIL PROTECTED] wrote:
patrick keshishian [EMAIL PROTECTED] writes:
I've been struggling with some performance issues with certain
SQL queries. I was prepping a long-ish overview of my problem
to submit, but I think I'll start out
to restart Pg. Once restarted we were able to do a VACUUM FULL and this
took care of the issue.
hth
Patrick Hatcher
Development Manager Analytics/MIO
Macys.com
Matteo Sgalaberni
it is joined, even when it is scanned with an index?
I'm pretty sure it is because of the reduced table sizes,
since the server configuration is the same.
Thoughts?
Thanks,
Patrick
-performance/2008-03/msg00371.php
In my case, it was by a factor of 2.
Of course, I can't turn off nested loops in my database;
it would impact performance on small tables too much...
So there is no easy fix for that, it seems,
beside playing with per-column statistics-gathering target maybe?
Patrick
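Raising the per-column statistics target, if that is the route tried, would look something like this (the table and column names are placeholders):

```sql
-- Gather a larger sample for the column whose row-count estimates
-- are off, then re-analyze so the planner sees the new statistics.
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500;
ANALYZE orders;
```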
I'm running 8.4.2 and have noticed a similar heavy preference for
sequential scans and hash joins over index scans and nested loops. Our
database can basically fit in cache 100%, so this may not be
applicable to your situation, but the following params seemed to help
us:
seq_page_cost = 1.0
regards, Patrick
-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Tom Lane
Sent: Monday, March 22, 2010 12:22 PM
To: Christian Brink
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] PostgreSQL upgraded
, lack of readahead,
TLB misses, etc
cpu_tuple_cost = 1.0
cpu_index_tuple_cost = 0.5
cpu_operator_cost = 0.25
effective_cache_size = 1000MB
shared_buffers = 1000MB
-Original Message-
From: Robert Haas [mailto:robertmh...@gmail.com]
Sent: Wednesday, March 24, 2010 5:47 PM
To: Eger, Patrick
rows retrieved.
Anyone have any ideas on where I should start looking to figure this out? I
didn't perform any special steps when moving to v8.4, I just did a pg_dump from
the 8.3 server and restored it on the new 8.4 servers. Maybe that is where I
made a mistake.
Thanks!
Patrick
with me not providing enough
information.
-Patrick
- Original Message -
From: Kevin Grittner kevin.gritt...@wicourts.gov
To: Patrick Donlin pdon...@oaisd.org, pgsql-performance@postgresql.org
Sent: Thursday, July 15, 2010 10:55:19 AM GMT -05:00 US/Canada Eastern
Subject: Re
it down. Running
the query from my webserver yielded much better times, but from a quick look it
seems my 8.4 server is still a bit slower. I will share more details as I dig
into it more tomorrow or Monday.
-Patrick
- Original Message -
From: Merlin Moncure mmonc...@gmail.com
site is responsive and the benchmark result is more or less the same as
FreeBSD with the 'sync' turned off.
3)
For FreeBSD, same setting with Postgresql on UFS:
The performance is between ZFS (default, sync enabled) and ZFS (sync disabled).
Thanks,
Patrick
--- On Mon, 1/7/13, Patrick Dung
Linux or ZFS
without sync.
Best regards,
Patrick
--- On Tue, 1/8/13, k...@rice.edu k...@rice.edu wrote:
From: k...@rice.edu k...@rice.edu
Subject: Re: [PERFORM] Sub optimal performance with default setting of
Postgresql with FreeBSD 9.1 on ZFS
To: Patrick Dung patrick_...@yahoo.com.hk
Cc: pgsql
= 22303MB
I have been struggling to make these types of queries fast because they are
very common (basically fetching all of the metadata for a document, and we
have a lot of metadata and a lot of documents). Any help is appreciated!
Thanks,
Patrick
is. However, given the
accurate statistics, I can't see how.
BTW I tried playing with random_page_cost. If I lower it to 2.0 then
it chooses the fast plan. At 3.0 it chooses the slow plan.
Thanks!
Patrick
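The random_page_cost experiment described above can be reproduced per-session, which avoids changing the server-wide default (the actual query is elided here; substitute the slow one):

```sql
-- Try the same query under both settings and compare the plans.
SET random_page_cost = 2.0;
EXPLAIN ANALYZE SELECT ...;   -- fast (index) plan chosen here
SET random_page_cost = 3.0;
EXPLAIN ANALYZE SELECT ...;   -- slow plan chosen here
RESET random_page_cost;
```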
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make
On Wed, Dec 10, 2014 at 2:44 AM, Maila Fatticcioni
mfatticci...@mbigroup.it wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hello.
I need to tune a postgres installation I've just made to get a better
performance. I use two identical servers with a hot replication
configuration. The
Hi everyone --
I had an issue the other day where a relatively simple query went from
taking about 1 minute to execute to taking 19 hours. It seems that the
planner chooses to use a materialize sometimes [1] and not other times
[2]. I think the issue is that the row count estimate for the result
On Wed, Jun 10, 2015 at 2:08 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Wed, Jun 10, 2015 at 3:55 PM, Patrick Krecker patr...@judicata.com wrote:
OK. Well, fortunately for us, we have a lot of possible solutions to this
problem, and it sounds like actually getting statistics for attributes
? 'reference' is not realistic. I just wanted to make sure it wasn't
some configuration error on our part.
Can anyone explain where exactly the estimate
obvious. This is a common failure pattern with caches.
Patrick B. Kelly
--
http://patrickbkelly.org