[PERFORM] Server misconfiguration???

2005-10-10 Thread Andy
full parameter. If I look at memory allocation, it never goes over 250MB whatever I do with the database. The kernel shmmax is set to 600MB. Database size is around 550MB. Need some advice. Thanks. Andy.
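For context, the ceiling described above is usually set by shared_buffers rather than by shmmax. A minimal sketch of inspecting the relevant settings from psql; the 32000-page figure below is an illustrative assumption, not the poster's actual configuration:

    -- Check the settings that bound PostgreSQL's own memory use.
    SHOW shared_buffers;          -- buffer cache size, counted in 8 KB pages in this era
    SHOW effective_cache_size;    -- planner hint: how much the OS is expected to cache
    SHOW work_mem;                -- per-sort/per-hash memory, used once per concurrent operation
    -- With 8 KB pages, shared_buffers = 32000 works out to roughly 250 MB,
    -- which would match the ~250 MB ceiling reported above.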

Re: [PERFORM] Server misconfiguration???

2005-10-10 Thread Andy
g" service from this system. Andy. - Original Message - From: "Christopher Kings-Lynne" <[EMAIL PROTECTED]> To: "Andy" <[EMAIL PROTECTED]> Cc: Sent: Monday, October 10, 2005 11:55 AM Subject: Re: [PERFORM] Server misconfiguration??? A lot of them a

Re: [PERFORM] Server misconfiguration???

2005-10-10 Thread Andy
gards, Andy. - Original Message - From: "Tom Lane" <[EMAIL PROTECTED]> To: "Andy" <[EMAIL PROTECTED]> Cc: Sent: Monday, October 10, 2005 5:18 PM Subject: Re: [PERFORM] Server misconfiguration??? "Andy" <[EMAIL PROTECTED]> writes: I get t

[PERFORM] Massive delete performance

2005-10-11 Thread Andy
t of data to be deleted. Or is there any other solution for this? DB -> (replication) RE_DB -> (copy) -> COPY_DB -> (Delete unnecessary data) -> CLIENT_DB -> (ISDN connection) -> Data to the client. Regards, Andy.

Re: [PERFORM] Massive delete performance

2005-10-11 Thread Andy
ng any type of vacuum after the whole process? What kind? Full vacuum. (cmd: vacuumdb -f) Is there any configuration parameter for delete speed up? - Original Message - From: "Sean Davis" <[EMAIL PROTECTED]> To: "Andy" <[EMAIL PROTECTED]>; Sent: Tuesday
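A minimal sketch of the usual alternative to a full vacuum after mass deletes: delete in bounded slices and follow with a plain VACUUM ANALYZE. The table and column names here are hypothetical, not taken from the thread:

    -- Hypothetical table.
    CREATE TABLE client_data (id serial PRIMARY KEY, obsolete boolean DEFAULT false, payload text);

    -- Delete in slices so each transaction stays short, then reclaim space
    -- without the exclusive lock that VACUUM FULL (vacuumdb -f) takes.
    DELETE FROM client_data
     WHERE id IN (SELECT id FROM client_data WHERE obsolete LIMIT 10000);
    VACUUM ANALYZE client_data;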

Re: [PERFORM] Massive delete performance

2005-10-11 Thread Andy
<[EMAIL PROTECTED]> To: Sent: Tuesday, October 11, 2005 3:19 PM Subject: Re: [PERFORM] Massive delete performance On Tue, Oct 11, 2005 at 10:47:03AM +0300, Andy wrote: So, I have a replication only with the tables that I need to send, then I make a copy of this replication, and from th

Re: [PERFORM] Massive delete performance

2005-10-11 Thread Andy
".id) Total runtime: 31952.811 ms - Original Message - From: "Tom Lane" <[EMAIL PROTECTED]> To: "Andy" <[EMAIL PROTECTED]> Cc: "Steinar H. Gunderson" <[EMAIL PROTECTED]>; Sent: Tuesday, October 11, 2005 5:17 PM Subject: Re: [PERFORM] Massive d

Re: [PERFORM] Server misconfiguration???

2005-10-14 Thread Andy
Yes I did, and it works better(on a test server). I had no time to put it in production. I will try to do small steps to see what happens. Regards, Andy. - Original Message - From: "Andrew Sullivan" <[EMAIL PROTECTED]> To: Sent: Thursday, October 13, 2005 6:0

[PERFORM] Improving Inner Join Performance

2006-01-05 Thread Andy
s puts in some other search fields on the where then the query runs faster, but in this format it sometimes takes a lot of time (sometimes even 2-3 seconds). Can this be tuned somehow??? Regards, Andy.

Re: [PERFORM] Improving Inner Join Performance

2006-01-06 Thread Andy
Yes I have indexes on all join fields. The tables have around 30 columns each and around 100k rows. The database is vacuumed every hour. Andy. - Original Message - From: "Frank Wiles" <[EMAIL PROTECTED]> To: "Andy" <[EMAIL PROTECTED]> Cc: Sent: Th

Re: [PERFORM] Improving Inner Join Performance

2006-01-06 Thread Andy
Sorry, I should have been more specific. VACUUM ANALYZE is performed every hour. Regards, Andy. - Original Message - From: "Michael Glaesemann" <[EMAIL PROTECTED]> To: "Andy" <[EMAIL PROTECTED]> Cc: Sent: Friday, January 06, 2006 11:45 AM Subject: Re

Re: [PERFORM] Improving Inner Join Performance

2006-01-06 Thread Andy
the user can have. I use this to build pages of results. Andy. - Original Message - From: "Pandurangan R S" <[EMAIL PROTECTED]> To: "Andy" <[EMAIL PROTECTED]> Cc: Sent: Friday, January 06, 2006 11:56 AM Subject: Re: [PERFORM] Improving Inner Join Performance

Re: [PERFORM] Improving Inner Join Performance

2006-01-08 Thread Andy
shared_buffers = 10240, effective_cache_size = 64000. RAM on server: 1Gb. Andy. - Original Message - From: "Frank Wiles" <[EMAIL PROTECTED]> To: "Andy" <[EMAIL PROTECTED]> Sent: Friday, January 06, 2006 7:12 PM Subject: Re: [PERFORM] Improving Inner J
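Both values are counts of 8 KB pages in this release, so, as a rough sketch of the arithmetic:

    -- shared_buffers       = 10240 pages * 8 KB =  80 MB of dedicated buffer cache
    -- effective_cache_size = 64000 pages * 8 KB = 500 MB assumed available as OS cache
    -- On a 1 GB machine the planner is thus told roughly half of RAM acts as cache,
    -- a common rule of thumb. The current values can be checked with:
    SHOW shared_buffers;
    SHOW effective_cache_size;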

[PERFORM] LIKE search and performance

2007-05-23 Thread Andy
eq scan on the whole table and that takes some time. How can this be optimized or made in another way to be faster? I tried to make indexes on the columns but no success. PG 8.2 Regards, Andy.
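For LIKE searches the kind of index matters: an ordinary b-tree only helps anchored patterns, and in a non-C locale it needs the text_pattern_ops operator class. A hedged sketch with hypothetical table and column names; the trigram approach shown last relies on the pg_trgm contrib module and on LIKE support that arrived in releases newer than the 8.2 mentioned above:

    CREATE TABLE docs (id serial PRIMARY KEY, title text);   -- hypothetical

    -- Prefix patterns ('abc%') can use a b-tree built with text_pattern_ops:
    CREATE INDEX docs_title_prefix ON docs (title text_pattern_ops);
    SELECT * FROM docs WHERE title LIKE 'abc%';

    -- Patterns with a leading wildcard ('%abc%') cannot use a b-tree at all.
    -- pg_trgm can index them on newer releases (CREATE EXTENSION is 9.1+ syntax):
    CREATE EXTENSION pg_trgm;
    CREATE INDEX docs_title_trgm ON docs USING gin (title gin_trgm_ops);
    SELECT * FROM docs WHERE title LIKE '%abc%';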

Re: [PERFORM] LIKE search and performance

2007-05-24 Thread Andy
Thank you all for the answers. I will try your suggestions and see what that brings in terms of performance. Andy. > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] On Behalf Of > Rigmor Ukuhe > Sent: Wednesday, May 23, 2007 6:52 PM > Cc:

[PERFORM] Performance improvements/regressions from 8.4 to 9.0?

2010-09-29 Thread Andy
Hi, Are there any significant performance improvements or regressions from 8.4 to 9.0? If yes, which areas (inserts, updates, selects, etc) are those in? In a related question, is there any public data that compares the performances of various Postgresql versions? Thanks -- Sent vi

Re: [PERFORM] UUID performance as primary key

2010-10-15 Thread Andy
Wouldn't UUID PK cause a significant drop in insert performance because every insert is now out of order, which leads to a constant re-arranging of the B+ tree? The amount of random I/O that's going to generate would just kill the performance. --- On Fri, 10/15/10, Craig Ringer wrote: From:
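A small sketch of the contrast being described; the table names are hypothetical and uuid_generate_v4() assumes the uuid-ossp contrib module is installed:

    -- Sequential keys append at the right-hand edge of the primary-key index:
    CREATE TABLE orders_seq (id bigserial PRIMARY KEY, payload text);

    -- Random UUIDs land on arbitrary index pages, so a heavy insert load
    -- touches and dirties far more of the b-tree:
    CREATE TABLE orders_uuid (
        id uuid PRIMARY KEY DEFAULT uuid_generate_v4(),   -- uuid-ossp assumed installed
        payload text
    );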

Re: [PERFORM] Hardware recommendations

2010-12-08 Thread Andy
If you are IO-bound, you might want to consider using SSD. A single SSD could easily give you more IOPS than 16 15k SAS in RAID 10. --- On Wed, 12/8/10, Benjamin Krajmalnik wrote: > From: Benjamin Krajmalnik > Subject: [PERFORM] Hardware recommendations > To: pgsql-performance@postgresql.org

Re: [PERFORM] Hardware recommendations

2010-12-08 Thread Andy
> > If you are IO-bound, you might want to consider using > SSD. > > > > A single SSD could easily give you more IOPS than 16 > 15k SAS in RAID 10. > > Are there any that don't risk your data on power loss, AND > are cheaper > than SAS RAID 10? > Vertex 2 Pro has a built-in supercapacitor to s

Re: [PERFORM] Hardware recommendations

2010-12-10 Thread Andy
> We use ZFS and use SSDs for both the log device and > L2ARC.  All disks > and SSDs are behind a 3ware with BBU in single disk > mode.  Out of curiosity why do you put your log on SSD? Log is all sequential IOs, an area in which SSD is not any faster than HDD. So I'd think putting log on SSD

Re: [PERFORM] Hardware recommendations

2010-12-10 Thread Andy
> The "common knowledge" you based that comment on, may > actually not be very up-to-date anymore. Current > consumer-grade SSD's can achieve up to 200MB/sec when > writing sequentially and they can probably do that a lot > more consistent than a hard disk. > > Have a look here: http://www.anandt

Re: [PERFORM] concurrent IO in postgres?

2010-12-23 Thread Andy
--- On Thu, 12/23/10, John W Strange wrote: > Typically my problem is that the > large queries are simply CPU bound..  do you have a > sar/top output that you see. I'm currently setting up two > FusionIO DUO @640GB in a lvm stripe to do some testing with, > I will publish the results after I'm d

Re: [PERFORM] general hardware advice

2011-02-06 Thread Andy
--- On Sun, 2/6/11, Linos wrote: > I am studying too the possibility of use an OCZ Vertex 2 > Pro with Flashcache or Bcache to use it like a second level > filesystem cache, any comments on that please? > OCZ Vertex 2 Pro is a lot more expensive than other SSD of comparable performances beca

Re: [PERFORM] Intel SSDs that may not suck

2011-03-28 Thread Andy
. Is there any benchmark measuring the performance of these SSD's (the new Intel vs. the new SandForce) running database workloads? The benchmarks I've seen so far are for desktop applications. Andy --- On Mon, 3/28/11, Greg Smith wrote: > From: Greg Smith > Subject: [PERFORM

Re: [PERFORM] Intel SSDs that may not suck

2011-04-06 Thread Andy
--- On Wed, 4/6/11, Scott Carey wrote: > I could care less about the 'fast' sandforce drives.  > They fail at a high > rate and the performance improvement is BECAUSE they are > using a large, > volatile write cache.  The G1 and G2 Intel MLC also use volatile write cache, just like most SandF

[PERFORM] BBU still needed with SSD?

2011-07-17 Thread Andy
AID 1, would that be any slower than 2 SSD in HW RAID 1 with BBU? What are the pros and cons? Thanks. Andy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] BBU still needed with SSD?

2011-07-18 Thread Andy
--- On Mon, 7/18/11, David Rees wrote: > >> In this case is BBU still needed? If I put 2 SSD > in software RAID 1, would > >> that be any slower than 2 SSD in HW RAID 1 with > BBU? What are the pros and > >> cons? > > What will perform better will vary greatly depending on the > exact > SSDs,

Re: [PERFORM] BBU still needed with SSD?

2011-07-18 Thread Andy
> > I'm not comparing SSD in SW RAID with rotating disks > in HW RAID with > > BBU though. I'm just comparing SSDs with or without > BBU. I'm going to > > get a couple of Intel 320s, just want to know if BBU > makes sense for > > them. > > Yes, it certainly does, even if you have a RAID BBU. "ev

Re: [PERFORM] Intel 320 SSD info

2011-08-24 Thread Andy
According to the specs for database storage: "Random 4KB writes: Up to 600 IOPS" Is that for real? 600 IOPS is *atrociously terrible* for an SSD. Not much faster than mechanical disks. Has anyone done any performance benchmark of 320 used as a DB storage? Is it really that slow?

Re: [PERFORM] Suggestions for Intel 710 SSD test

2011-10-01 Thread Andy
Do you have an Intel 320?  I'd love to see tests comparing 710 to 320 and see if it's worth the price premium. From: David Boreham To: PGSQL Performance Sent: Saturday, October 1, 2011 10:39 PM Subject: [PERFORM] Suggestions for Intel 710 SSD test I have a 71

Re: [PERFORM] Intel 710 pgbench write latencies

2011-11-02 Thread Andy
Your results are consistent with the benchmarks I've seen. Intel SSD have much worse write performance compared to SSD that uses Sandforce controllers, which Vertex 2 Pro does. According to this benchmark, at high queue depth the random write performance of Sandforce is more than 5 times that o

Re: [PERFORM] TCP Overhead on Local Loopback

2012-04-01 Thread Andy
You could try using Unix domain socket and see if the performance improves. A relevant link: http://stackoverflow.com/questions/257433/postgresql-unix-domain-sockets-vs-tcp-sockets From: Ofer Israeli To: "pgsql-performance@postgresql.org" Sent: Sunday, Apri

[PERFORM] Slow query, where am I going wrong?

2012-10-30 Thread Andy
have also become extremely slow. I was expecting a drop off when the database grew out of memory, but not this much. Am I really missing the target somewhere? Any help and or suggestions will be very much appreciated. Best regards, Andy. http://explain.depesz.com/s/cfb select distinct

Re: [PERFORM] Speed of exist

2013-02-18 Thread Andy
Limit the sub-queries to 1, i.e.: select 1 from Table2 where Table2.ForeignKey = Table1.PrimaryKey fetch first 1 rows only Andy. On 19.02.2013 07:34, Bastiaan Olij wrote: Hi All, Hope someone can help me a little bit here: I've got a query like the following: -- select Column1, Co
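The same effect can also be written with EXISTS, which the planner likewise stops evaluating at the first matching row; a sketch reusing the table and column names from the quoted query:

    SELECT t1.Column1, t1.Column2
      FROM Table1 t1
     WHERE EXISTS (SELECT 1
                     FROM Table2 t2
                    WHERE t2.ForeignKey = t1.PrimaryKey);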

[PERFORM] 2 phase commit: performance implications?

2005-12-12 Thread Andy Ballingall
t with, easily extensible, allows a close coupling between the apache server responsible for a region and the database it hits. Any insights gratefully received! Andy Ballingall ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings

Re: [PERFORM] 2 phase commit: performance implications?

2005-12-21 Thread Andy Ballingall
as the data. Yes, I'd prefer things to be that way in any event. Regards, Andy ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings

Re: [PERFORM] mysql to postgresql, performance questions

2010-03-20 Thread Andy Colson
ime required mysql database that is running. Its my MythTV box at home, and I have to ask permission from my GF before I take the box down to upgrade anything. And heaven forbid if it crashes or anything. So I do have experience with care and feeding of mysql. And no, I'm not kidding.) And

Re: [PERFORM] Database size growing over time and leads to performance impact

2010-03-27 Thread Andy Colson
n") You need to vacuum way more often than once a week. Just VACUUM ANALYZE, two, three times a day. Or better yet, let autovacuum do its thing. (if you do have autovacuum enabled, then the only problem is the open transaction thing). Dont "VACUUM FULL", its not helping

Re: [PERFORM] Performance regarding LIKE searches

2010-03-29 Thread Andy Colson
" or "fake prepare". It does "real" by default. Try setting: $dbh->{pg_server_prepare} = 0; before you prepare/run that statement and see if it makes a difference. http://search.cpan.org/dist/DBD-Pg/Pg.pm#prepare -Andy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] Database size growing over time and leads to performance impact

2010-03-30 Thread Andy Colson
is different than transaction. The output above looks good, that's what you want to see. (If it had said "idle in transaction" that would be a problem). I dont think you need to change anything. Hopefully just vacuuming more often will help. -Andy -- Sent via pgsql-performance

Re: [PERFORM] REINDEXing database-wide daily

2010-03-30 Thread Andy Colson
ew weeks or once a month). -Andy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] How check execution plan of a function

2010-04-08 Thread Andy Colson
could optimize that one statement. -Andy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] How check execution plan of a function

2010-04-09 Thread Andy Colson
a string, then execute it, like: a := "select junk from aTable where akey = 5"; EXECUE a; (I dont think that's the exact right syntax, but hopefully gets the idea across) -Andy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your
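The exact PL/pgSQL form being gestured at would look roughly like this; the function, table, and column names are made up for illustration:

    CREATE OR REPLACE FUNCTION get_junk(p_key integer) RETURNS text AS $$
    DECLARE
        a      text;
        result text;
    BEGIN
        -- Building the statement as a string and EXECUTE-ing it makes the
        -- planner see the actual key value instead of a generic parameter.
        a := 'select junk from aTable where akey = ' || p_key;
        EXECUTE a INTO result;
        RETURN result;
    END;
    $$ LANGUAGE plpgsql;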

Re: [PERFORM] Slow Bulk Delete

2010-05-08 Thread Andy Colson
could try batching them together: DELETE FROM table1 WHERE table2_id in (11242939, 1,2,3,4,5, 42); Also are you preparing the query? -Andy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mail

Re: [PERFORM] slow query performance

2010-06-03 Thread Andy Colson
On 6/3/2010 12:47 PM, Anj Adu wrote: I cant seem to pinpoint why this query is slow . No full table scans are being done. The hash join is taking maximum time. The table dev4_act_action has only 3 rows. box is a 2 cpu quad core intel 5430 with 32G RAM... Postgres 8.4.0 1G work_mem 20G effective_

Re: [PERFORM] How to insert a bulk of data with unique-violations very fast

2010-06-06 Thread Andy Colson
om addentry('2010-06-06 8:00:00', 130); I do an extra check that if the dates match then the levels match too, but you wouldn't have to. There is a unique index on adate. -Andy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
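A guessed-at sketch of what such an addentry() body could do, relying on the unique index on adate as the real duplicate guard; the table definition and column names are assumptions:

    CREATE TABLE entries (adate timestamp UNIQUE, level int);   -- hypothetical

    -- Insert only when the timestamp is not present yet; under concurrency the
    -- unique index on adate is what actually guarantees no duplicates slip in.
    INSERT INTO entries (adate, level)
    SELECT '2010-06-06 08:00:00'::timestamp, 130
     WHERE NOT EXISTS (SELECT 1 FROM entries
                        WHERE adate = '2010-06-06 08:00:00'::timestamp);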

Re: [PERFORM] Analysis Function

2010-06-10 Thread Andy Colson
'-'||p_day1||''') d1, date(extract(YEAR FROM m.taken)||''-'||p_month2||'-'||p_day2||''') d2 * What is a better way to create those dates (without string concatenation, I presume)? Dave I assume you are doing this in a lo
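Two ways to build those dates without assembling strings, shown with literal values standing in for m.taken, p_month1 and p_day1; make_date() only exists from 9.4 on, while the interval arithmetic works on older releases:

    -- 9.4 and later:
    SELECT make_date(extract(YEAR FROM timestamp '2010-06-10')::int, 3, 15) AS d1;

    -- Older releases: pure date arithmetic from the start of the year.
    SELECT (date_trunc('year', timestamp '2010-06-10')
              + (3  - 1) * interval '1 month'
              + (15 - 1) * interval '1 day')::date AS d1;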

Re: [PERFORM] query tuning help

2010-06-14 Thread Andy Colson
you have indexes on emaildetails(emailid) and vantage_email_track(mailid)? -Andy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] Dead lock

2010-06-14 Thread Andy Colson
nk about it. You start two transactions at the same time. A transaction is defined as "do this set of operations, all of which must succeed or fail atomically". One transaction cannot update the exact same row as another transaction because that would break the second transa
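A minimal sketch of the classic way two otherwise-correct transactions end up deadlocked, using a hypothetical table; the usual fix is to touch rows in one agreed order:

    CREATE TABLE accounts (id int PRIMARY KEY, balance int);   -- hypothetical
    INSERT INTO accounts VALUES (1, 100), (2, 100);

    -- Session A:
    BEGIN;
    UPDATE accounts SET balance = balance - 10 WHERE id = 1;

    -- Session B, concurrently:
    BEGIN;
    UPDATE accounts SET balance = balance - 10 WHERE id = 2;
    UPDATE accounts SET balance = balance - 10 WHERE id = 1;   -- blocks, waiting for A

    -- Session A again:
    UPDATE accounts SET balance = balance - 10 WHERE id = 2;   -- waits for B: deadlock detected

    -- Fix: make every transaction update rows in the same order (e.g. ascending id),
    -- so the later transaction simply waits instead of deadlocking.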

Re: [PERFORM] performance on new linux box

2010-07-07 Thread Andy Colson
augh) I got about 20. I had to go out of my way (way out) to enable the disk caching, and even then only got 50 meg a second. http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm -Andy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes

Re: [PERFORM] Queries with conditions using bitand operator

2010-07-13 Thread Andy Colson
th int's and string's but I couldnt find a way using the & operator. -Andy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] performance on new linux box

2010-07-13 Thread Andy Colson
would help (cuz clustered is assuming sequential reads). or if you seq scan a table, it might help (as long as the table is stored relatively close together). But if you have a big db, that doesnt fit into cache, and you bounce all over the place doing seeks, I doubt it'll help. -Andy -- Sen

Re: [PERFORM] Identical query slower on 8.4 vs 8.3

2010-07-15 Thread Andy Colson
FULL is usually bad. Stick to "vacuum analyze" and drop the full. Do you have indexes on: test.tid, testresult.fk_tid, questionresult.fk_trid and testresult.trid -Andy On 7/15/2010 10:12 AM, Patrick Donlin wrote: I'll read over that wiki entry, but for now here is the

Re: [PERFORM] Using more tha one index per table

2010-07-21 Thread Andy Colson
n it. PG never uses the unique index on id, it always table scans it... because its faster. -Andy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] Simple (hopefully) throughput question?

2010-11-03 Thread Andy Colson
, but when you fire off "select * from bigtable" pg will create the entire resultset in memory (and maybe swap?) and then send it all to the client in one big lump. You might try a cursor and fetch 100-1000 at a time from the cursor. No idea if it would be faster or slower. -And
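A sketch of that cursor approach, using the bigtable name from the message; the batch size of 1000 is arbitrary:

    BEGIN;
    DECLARE big_cur CURSOR FOR SELECT * FROM bigtable;
    FETCH 1000 FROM big_cur;   -- repeat until a fetch returns no rows
    FETCH 1000 FROM big_cur;
    CLOSE big_cur;
    COMMIT;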

Re: [PERFORM] Huge overestimation in rows expected results in bad plan

2010-11-09 Thread Andy Colson
ok. But its doing a sequential scan. Are you missing an index? Also: http://explain.depesz.com/ is magic. -Andy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] MVCC performance issue

2010-11-12 Thread Andy Colson
t, etc). select count(*) for example is always going to be slow... just expect it, lets not destroy what works well about the database just to make it fast. Instead, find a better alternative so you dont have to run it. Just like any database, you have to work within MVCC's good points

Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-15 Thread Andy Colson
and the rest of your data, including the system catalogs, will still be intact. if I am reading this right means: we can run our db safely (with fsync and full_page_writes enabled) except for tables of our choosing? If so, I am very +1 for this! -Andy -- Sent via pgsql-performance mailin

Re: [PERFORM] Compared MS SQL 2000 to Postgresql 9.0 on Windows

2010-12-07 Thread Andy Colson
re about transactions, but PG really does. Make sure all your code is properly starting and committing transactions. -Andy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] Compared MS SQL 2000 to Postgresql 9.0 on Windows

2010-12-07 Thread Andy Colson
On 12/7/2010 1:22 PM, Justin Pitts wrote: Also, as a fair warning: mssql doesn't really care about transactions, but PG really does. Make sure all your code is properly starting and committing transactions. -Andy I do not understand that statement. Can you explain it a bit better? In

Re: [PERFORM] Compared MS SQL 2000 to Postgresql 9.0 on Windows

2010-12-07 Thread Andy Colson
On 12/7/2010 2:10 PM, Kenneth Marshall wrote: On Tue, Dec 07, 2010 at 11:56:51AM -0800, Richard Broersma wrote: On Tue, Dec 7, 2010 at 11:43 AM, Andy Colson wrote: In PG the first statement you fire off (like an "insert into" for example) will start a transaction. If you dont com

Re: [PERFORM] Help with bulk read performance

2010-12-14 Thread Andy Colson
How'd he get along? http://archives.postgresql.org/message-id/4cd1853f.2010...@noaa.gov -Andy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] Help with bulk read performance

2010-12-14 Thread Andy Colson
On 12/14/2010 9:41 AM, Jim Nasby wrote: On Dec 14, 2010, at 9:27 AM, Andy Colson wrote: Is this the same thing Nick is working on? How'd he get along? http://archives.postgresql.org/message-id/4cd1853f.2010...@noaa.gov So it is. The one I replied to stood out because no one had repli

Re: [PERFORM] Compared MS SQL 2000 to Postgresql 9.0 on Windows

2010-12-17 Thread Andy Colson
pace for the imagery. The imagery code uses more cpu than PG does. The database is 98% read, though, so my setup is different than yours. My maps get 100K hits a day. The cpu's never use more than 20%. I'm running on a $350 computer, AMD Dual core, with 4 IDE disks in softwar

Re: [PERFORM] Compared MS SQL 2000 to Postgresql 9.0 on Windows

2010-12-17 Thread Andy Colson
code, one for each database. In the end, can PG be fast? Yes. Very. But only when you treat is as PG. If you try to use PG as if it were mssql, you wont be a happy camper. -Andy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your sub

Re: [PERFORM] queries with lots of UNIONed relations

2011-01-13 Thread Andy Colson
up? *scratches head* Because it all fit in memory and didn't swap to disk? -Andy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] queries with lots of UNIONed relations

2011-01-13 Thread Andy Colson
On 1/13/2011 4:49 PM, Robert Haas wrote: On Thu, Jan 13, 2011 at 5:47 PM, Andy Colson wrote: I don't believe there is any case where hashing each individual relation is a win compared to hashing them all together. If the optimizer were smart enough to be considering the situation as a

Re: [PERFORM] Possible to improve query plan?

2011-01-16 Thread Andy Colson
et an "explain analyze"? It gives more info. (Also, have you seen http://explain.depesz.com/) Last: If you wanted to force the index usage, for a test, you could drop the other indexes. I assume this is on a test box so it should be ok. If its live, you could w
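A less invasive way to run that experiment, instead of dropping indexes, is to discourage sequential scans for the current session only; a sketch with a hypothetical query:

    SET enable_seqscan = off;                         -- session-local, diagnostic only
    EXPLAIN ANALYZE
    SELECT * FROM some_table WHERE some_col = 42;     -- stand-in for the real query
    RESET enable_seqscan;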

Re: [PERFORM] Possible to improve query plan?

2011-01-16 Thread Andy Colson
-Original Message- From: Andy Colson [mailto:a...@squeakycode.net] Sent: Monday, 17 January 2011 5:22 p.m. To: Jeremy Palmer Cc: pgsql-performance@postgresql.org Subject: Re: [PERFORM] Possible to improve query plan? First, wow, those are long names... I had a hard time keeping track

Re: [PERFORM] Migrating to Postgresql and new hardware

2011-01-18 Thread Andy Colson
ge pattern is (70% read, small columns, no big blobs (like photos), etc)... and even then we'd still have to guess. I can tell you, however, having your readers and writers not block each other is really nice. Not only will I not compare apples to oranges, but I really wont compare app

Re: [PERFORM] Migrating to Postgresql and new hardware

2011-01-18 Thread Andy Colson
oops, call them database 'a' and database 'b'. -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] Migrating to Postgresql and new hardware

2011-01-20 Thread Andy Colson
fashionable non-SQL databases, but it's pretty well known in wider circles. -- Craig Ringer Or... PG is just so good we've never had to use more than one database server! :-) -Andy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to you

Re: [PERFORM] Fun little performance IMPROVEMENT...

2011-01-21 Thread Andy Colson
? Is the stress package running niced? -Andy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] Queries becoming slow under heavy load

2011-01-25 Thread Andy Colson
st and one when its slow? Looks to me, in both cases, you are not using much memory at all. (if you happen to have 'free', its output is a little more readable, if you wouldn't mind posting it (only really need it for when the box is slow) -Andy -- Sent via pgsql-perfor

Re: [PERFORM] High load,

2011-01-27 Thread Andy Colson
check some of your sql statements and make sure they are all behaving. You may not notice a table scan when the user count is low, but you will when it gets higher. Have you run each of your queries through explain analyze lately? Have you checked for bloat? You are vacuuming/autovacuum

Re: [PERFORM] High load,

2011-01-27 Thread Andy Colson
On 1/27/2011 9:09 AM, Michael Kohl wrote: On Thu, Jan 27, 2011 at 4:06 PM, Andy Colson wrote: Have you run each of your queries through explain analyze lately? A code review including checking of queries is on our agenda. You are vacuuming/autovacuuming, correct? Sure :-) Thank you

Re: [PERFORM] Get master-detail relationship metadata

2011-02-03 Thread Andy Colson
to the next product lastprodid = prodid ... etc > Is there any better way to do it? And how reliable is this? It makes the sql really easy, but the code complex... so pick your poison. -Andy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] getting the most of out multi-core systems for repeated complex SELECT statements

2011-02-03 Thread Andy Colson
le cores you need multiple database connections. 3) If your jobs are IO bound, then running multiple jobs may hurt performance. Your naive approach is the best. Just spawn off two jobs (or three, or whatever). I think its also the only method. (If there is another method, I dont know what

Re: [PERFORM] getting the most of out multi-core systems for repeated complex SELECT statements

2011-02-03 Thread Andy Colson
ead a paper someplace that said shared cache (L1/L2/etc) multicore cpu's would start getting really slow at 16/32 cores, and that message passing was the way forward past that. If PG started aiming for 128 core support right now, it should use some kinda message passing with queues thing

Re: [PERFORM] getting the most of out multi-core systems for repeated complex SELECT statements

2011-02-03 Thread Andy Colson
On 02/03/2011 10:00 PM, Greg Smith wrote: Andy Colson wrote: Cpu's wont get faster, but HD's and SSD's will. To have one database connection, which runs one query, run fast, it's going to need multi-core support. My point was that situations where people need to

Re: [PERFORM] Performance trouble finding records through related records

2011-03-01 Thread Andy Colson
t in ( select id from details where some set is bad ) and id in ( select anotherid from anothertable where ... ) Its the subselects you need to think about. Find one that gets you a small set that's interesting somehow. Once you get all your little sets, its easy to combine them. -A

Re: [PERFORM] Performance trouble finding records through related records

2011-03-02 Thread Andy Colson
On 03/02/2011 06:12 PM, sverhagen wrote: Thanks for your help already! Hope you're up for some more :-) Andy Colson wrote: First off, excellent detail. Second, your explain analyze was hard to read... but since you are not really interested in your posted query, I wont worry about lo

Re: [PERFORM] Performance trouble finding records through related records

2011-03-03 Thread Andy Colson
On 3/3/2011 3:19 AM, sverhagen wrote: Andy Colson wrote: For your query, I think a join would be the best bet, can we see its explain analyze? Here is a few variations: SELECT events_events.* FROM events_events WHERE transactionid IN ( SELECT transactionid FROM

Re: [PERFORM] Performance issues

2011-03-08 Thread Andy Colson
I have seen really complex geometries cause problems. If you have thousands of points, when 10 would do, try ST_Simplify and see if it doesnt speed things up. -Andy On 3/8/2011 2:42 AM, Andreas Forø Tollefsen wrote: Hi. Thanks for the comments. My data is right, and the result is exactly
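A quick PostGIS sketch of checking how much ST_Simplify reduces a geometry before using it in the expensive operation; the table, column, and the 0.01 tolerance (expressed in the units of the data's SRID) are all illustrative:

    SELECT id,
           ST_NPoints(geom)                    AS points_before,
           ST_NPoints(ST_Simplify(geom, 0.01)) AS points_after
      FROM countries;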

Re: [PERFORM] Performance issues

2011-03-08 Thread Andy Colson
On 3/8/2011 10:58 AM, Andreas Forø Tollefsen wrote: Andy. Thanks. That is a great tips. I tried it but i get the error: NOTICE: ptarray_simplify returned a <2 pts array. Query: SELECT ST_Intersection(priogrid_land.cell, ST_Simplify(cshapeswdate.geom,0.1)) AS geom, priogrid_land.gid AS divi

Re: [PERFORM] Fastest pq_restore?

2011-03-17 Thread Andy Colson
s quick as it can be? Thanks. autovacuum = off fsync = off synchronous_commit = off full_page_writes = off bgwriter_lru_maxpages = 0 -Andy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] Fastest pq_restore?

2011-03-18 Thread Andy Colson
On 3/18/2011 9:38 AM, Kevin Grittner wrote: Andy Colson wrote: On 03/17/2011 09:25 AM, Michael Andreasen wrote: I've been looking around for information on doing a pg_restore as fast as possible. bgwriter_lru_maxpages = 0 I hadn't thought much about that last one -- d

Re: [PERFORM] Performance on AIX

2011-03-19 Thread Andy Colson
more than happy to benchmark it and send it back :-) Or, more seriously, even remote ssh would do. -Andy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance

[PERFORM] Opteron/FreeBSD/PostgreSQL performance poor

2006-07-05 Thread andy rost
top indicates a significant number of sblock states and occasional smwai states; e) ps auxww | grep postgres doesn't show anything abnormal; f) ESQL applications are very slow. We VACUUM ANALYZE user databases every four hours. We VACUUM template1 every 4 hours. We make a copy of the c

Re: [PERFORM] Opteron/FreeBSD/PostgreSQL performance poor

2006-07-05 Thread andy rost
Hi Stephen, Thanks for your input. My follow ups are interleaved below ... Stephen Frost wrote: * andy rost ([EMAIL PROTECTED]) wrote: We're in the process of porting from Informix 9.4 to PostgreSQL 8.1.3. Our PostgreSQL server is an AMD Opteron Dual Core 275 with two 2.2 Ghz 6

Re: [PERFORM] Opteron/FreeBSD/PostgreSQL performance poor

2006-07-07 Thread andy rost
es in excess of 5 may hamper performance). Thanks again ... Andy Mark Kirkwood wrote: andy rost wrote: effective_cache_size = 27462# typically 8KB each This seems like it might be a little low... How much memory do you have in the system? Then again, with your shared_me

Re: [PERFORM] Opteron/FreeBSD/PostgreSQL performance poor

2006-07-07 Thread andy rost
Hi Merlin, Thanks for the input. Please see below ... Merlin Moncure wrote: On 7/5/06, andy rost <[EMAIL PROTECTED]> wrote: fsync = on # turns forced synchronization have you tried turning this off and measuring performance? No, not yet. We'

[PERFORM] Working on huge RAM based datasets

2004-07-08 Thread Andy Ballingall
, with time, be more and more DB applications that would want to capitalise on the potential speed improvements that come with not having to work hard to get the right bits in the right bit of memory all the time? And finally, am I worrying too much, and actually this problem is common to all dat

Re: [PERFORM] Working on huge RAM based datasets

2004-07-09 Thread Andy Ballingall
enting with stuff the way it works at the moment. Many thanks, Andy ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org

Re: [PERFORM] Working on huge RAM based datasets

2004-07-09 Thread Andy Ballingall
just a few years. The disk system gets relegated to a data preload on startup and servicing the writes as the server does its stuff. Regards, Andy ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])

Re: [PERFORM] Working on huge RAM based datasets

2004-07-13 Thread Andy Ballingall
p://www.sgi.com/servers/altix/ (This won lots of awards recently) The nice thing about the two things above is that they run linux in a single address space NUMA setup, and in theory you can just bolt on more CPUs and more RAM as your needs grow. Thanks, Andy - Original Message - F

Re: [PERFORM] Working on huge RAM based datasets

2004-07-25 Thread Andy Ballingall
io, without breaking the existing usage scenarios of PG in the traditional 'DB > RAM' scenario? The answer isn't "undermine the OS". The answer is "make the postmaster able to build and operate with persistent, query optimised representations of

Re: [PERFORM] Used computers?

2009-07-20 Thread Andy Colson
versystems.com/used-ibm-servers.htm You could check with them and see what they are selling for. (And maybe what they'd buy for) Also, there is always ebay. -Andy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.p

Re: [PERFORM] PG 8.3 and server load

2009-08-19 Thread Andy Colson
knees by the sheer number of connections. check "ps ax|grep http|wc --lines" and make sure its not too big. (perhaps less than 100) -Andy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] PG 8.3 and server load

2009-08-19 Thread Andy Colson
ate && vmstat Wed Aug 19 10:01:23 CDT 2009 procs ---memory-- ---swap-- -io --system-- cpu r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 20920 106376 59220 75310160074 1530 3 10 5 74 12 On Wed, Aug 19, 2
