Re: [PERFORM] How to access data of SQL server database from PostgreSQL

2017-08-01 Thread Joshua D. Drake

On 07/31/2017 09:25 PM, Daulat Ram wrote:

Hi team,

I need to connect to MS-SQL server 2008/2012 from PostgreSQL 9.5 in a
Windows 7 environment to fetch the tables of SQL server.


Please help on this.


https://github.com/tds-fdw/tds_fdw
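
For reference, a minimal sketch of wiring it up once the extension is
installed (server address, database, credentials, and the table
definition below are all made up for illustration):

  CREATE EXTENSION tds_fdw;

  CREATE SERVER mssql_svr
    FOREIGN DATA WRAPPER tds_fdw
    OPTIONS (servername '192.168.1.25', port '1433', database 'SalesDB');

  CREATE USER MAPPING FOR postgres
    SERVER mssql_svr
    OPTIONS (username 'sa', password 'secret');

  CREATE FOREIGN TABLE mssql_orders (
    id integer,
    amount numeric
  ) SERVER mssql_svr
    OPTIONS (table_name 'dbo.orders');

  SELECT * FROM mssql_orders LIMIT 10;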

JD



Regards,

Daulat







--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc

PostgreSQL Centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://pgconf.us
* Unless otherwise stated, opinions are my own.   *




Re: [PERFORM] Very poor read performance, query independent

2017-07-11 Thread Joshua D. Drake

On 07/11/2017 04:15 PM, Merlin Moncure wrote:

On Mon, Jul 10, 2017 at 9:03 AM, Charles Nadeau wrote:

I’m running PostgreSQL 9.6.3 on Ubuntu 16.10 (kernel 4.4.0-85-generic).
Hardware is:

*2x Intel Xeon E5550

*72GB RAM

*Hardware RAID10 (4 x 146GB SAS 10k) P410i controller with 1GB FBWC (80%
read/20% write) for Postgresql data only:

The problem I have is very poor read performance. When I benchmark my array
with fio I get random reads of about 200MB/s and 1100 IOPS and sequential
reads of about 286MB/s and 21000 IOPS. But when I watch my queries using
pg_activity, I get at best 4MB/s. Also using dstat I can see that iowait
time is at about 25%. This problem is not query-dependent.


Stop right there. 1100 IOPS * 8kB = ~8.8MB/s raw, which might
reasonably translate to 4MB/s to the client. 200MB/s of random
reads on spinning media is simply not plausible;


Sure it is, if he had more than 4 disks ;) but he also isn't going to
get 1100 IOPS from four 10k disks. The average 10k disk is going to get
around 130 IOPS, so with only four of them there is no way he is getting
1100 IOPS from the spindles alone.


Using the above specs (4x146GB), the best he can reasonably hope for from
the drives themselves is about 50MB/s. Add in the 1GB FBWC and that is
how he is getting those high IOPS numbers, but that is because of
caching.


He may need to adjust his readahead as well as his kernel scheduler. At 
a minimum he should be able to saturate the drives without issue.
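
For example, something along these lines (the device name and value are
assumptions; the right number depends on the array):

  blockdev --getra /dev/sda        # show current readahead (512-byte sectors)
  blockdev --setra 4096 /dev/sda   # raise it to 2MB for sequential-heavy loads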


JD



--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc

PostgreSQL Centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://pgconf.us
* Unless otherwise stated, opinions are my own.   *




Re: [PERFORM] [ADMIN] Monitoring tool for Postgres Database

2017-05-26 Thread Joshua D. Drake

On 05/25/2017 07:15 PM, Scott Mead wrote:


Thanks

ravi



We use Zabbix.

JD









--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.




Re: [PERFORM] Questionaire: Common WAL write rates on busy servers.

2017-04-27 Thread Joshua D. Drake

On 04/27/2017 09:34 AM, Andres Freund wrote:

On 2017-04-27 09:31:34 -0700, Joshua D. Drake wrote:

On 04/27/2017 08:59 AM, Andres Freund wrote:




I would agree it isn't yet a widespread issue.


I'm not yet sure about that actually.  I suspect a large percentage of
people with such workloads aren't lingering lots on the lists.


That would probably be true. I was thinking of it more as: most new
users are in the cloud, and in the cloud it is going to be rare that a
user can hit that level of writes (at least not without spending LOTS
of money).






The only people that are likely going to see this are going to be on bare
metal. We should definitely plan on that issue for, say, 11.


"plan on that issue" - heh. We're talking about major engineering
projects here ;)


Sorry, wasn't trying to make light of the effort. :D





I do have a question though: where you have seen this issue, is it with
synchronous_commit on or off?


Both. Whether that matters or not really depends on the workload. If you
have bulk writes, it doesn't really matter much.


Sure, o.k.

Thanks,

Andres



- Andres




--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.




Re: [PERFORM] Questionaire: Common WAL write rates on busy servers.

2017-04-27 Thread Joshua D. Drake

On 04/27/2017 08:59 AM, Andres Freund wrote:



Ok, based on the (few) answers I've got so far, my experience is indeed
skewed.  A number of the PG users I interacted with over the last couple
of years had WAL write rates somewhere in the range of 500MB/s to 2.2GB/s
(max I've seen).  At that point WAL insertion became a major bottleneck,
even if storage was more than fast enough to keep up.  To address these
we'd need some changes, but the feedback so far suggests that it's not
yet a widespread issue...


I would agree it isn't yet a widespread issue.

The only people that are likely going to see this are going to be on
bare metal. We should definitely plan on that issue for, say, 11. I do
have a question though: where you have seen this issue, is it with
synchronous_commit on or off?


Thanks,

JD



--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.




Re: [PERFORM] How can I find the source of postgresql per-connection memory leaks?

2017-01-12 Thread Joshua D. Drake

On 01/12/2017 09:08 AM, Eric Jensen wrote:

I'm using postgresql 9.5.4 on Amazon RDS with ~1300 persistent
connections from rails 4.2 with "prepared_statements: false". Over the
[remainder of the quoted message truncated; attached graph omitted]


PostgreSQL on RDS is a closed product. My recommendation would be to 
contact Amazon support. They are likely to be able to provide you with 
better support.


Sincerely,

JD



--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.




Re: [PERFORM] Isolation of tx logs on VMware

2016-12-13 Thread Joshua D. Drake

On 12/13/2016 12:16 PM, ProPAAS DBA wrote:

Hi All;




I'm not a VMware expert, however I thought VMware would allow the
creation of multiple disk volumes and attach them via separate mount
points. Is this not true? If it is an option can someone point me to a
how to...


Yes, it is possible to do this, and then you will be able to use standard
OS tools to determine the I/O utilization.





Also, if we cannot do multiple VMDK volumes then what is everyone's
thoughts about relocating pg_xlog to an NFS mount?



I personally wouldn't do it but it would depend on the implementation.

JD



Thanks in advance








--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.




Re: [PERFORM] Big Memory Boxes and pgtune

2016-10-28 Thread Joshua D. Drake

On 10/28/2016 08:44 AM, Warner, Gary, Jr wrote:

I've recently been blessed to move one of my databases onto a huge IBM P8
computer. It's a POWER architecture with 20 8-way cores (so Postgres SHOULD
believe there are 160 cores available) and 1TB of RAM.

I've always done my postgres tuning with a copy of "pgtune" which says in the 
output:

# WARNING
# this tool not being optimal
# for very high memory systems

So . . . what would I want to do differently based on the fact that I have a "very 
high memory system"?


The most obvious is that you are going to want to have (depending on
PostgreSQL version):

* A very high shared_buffers. In newer releases it is not uncommon to
have many, many GB of shared_buffers.
* Use that work_mem, baby. You have 1TB available? Take your average
data set return and make work_mem at least that (see the sketch below).
* IIRC (and this may be old advice), maintenance_work_mem up to 4GB. As
I recall it won't effectively use more than that, but I could be wrong.
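
As a hedged starting point only (every number below is an assumption to
be validated by testing on the actual workload, not a recommendation):

  shared_buffers = 64GB          # very high by historical standards
  effective_cache_size = 768GB   # most of that 1TB acts as OS cache
  work_mem = 1GB                 # per sort/hash node per query; watch concurrency
  maintenance_work_mem = 4GB     # per the (possibly dated) 4GB note above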


Lastly but most importantly, test test test.

JD

--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.




Re: [PERFORM] Indexes for hashes

2016-06-15 Thread Joshua D. Drake

On 06/15/2016 07:20 AM, Ivan Voras wrote:

Hi,

Just for testing... is there a fast (i.e. written in C) crc32 or a
similar small hash function for PostgreSQL?


https://www.postgresql.org/docs/9.5/static/pgcrypto.html

We also have a builtin md5().
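
A quick sketch of both (the urls table and column are made up):

  CREATE EXTENSION pgcrypto;

  SELECT md5('http://example.com/a/long/url');              -- builtin
  SELECT digest('http://example.com/a/long/url', 'sha256'); -- pgcrypto

  -- index the hash of a long value instead of the value itself:
  CREATE INDEX urls_md5_idx ON urls (md5(url));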

JD
--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.




Re: [PERFORM] checkpoints, proper config

2015-12-10 Thread Joshua D. Drake

On 12/10/2015 10:35 AM, Tory M Blue wrote:



This is valid regardless of the workload?


Yes.



Seems that I would be storing a
ton of data and writing it once an hour, so would have potential perf
hits on the hour. I guess I'm not too up to date on the checkpoint
configuration.


No, that isn't how it works.

http://www.postgresql.org/docs/9.4/static/wal-configuration.html



My settings on this particular DB

fsync = off


This will cause data corruption in the event of improper shutdown.



#synchronous_commit = on



I would turn that off and turn fsync back on.
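
In other words, something like this (a sketch of the trade-off, not a
blanket recommendation):

  fsync = on                # required to survive an improper shutdown
  synchronous_commit = off  # may lose the last few commits in a crash,
                            # but will not corrupt the database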

JD

--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




Re: [PERFORM] checkpoints, proper config

2015-12-10 Thread Joshua D. Drake

On 12/10/2015 01:12 AM, Tory M Blue wrote:


checkpoint_timeout = 5min

checkpoint_completion_target = 0.9



The above is your problem. Make checkpoint_timeout = 1h. Also, consider
turning synchronous_commit off.


JD



--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




Re: [PERFORM] checkpoints, proper config

2015-12-10 Thread Joshua D. Drake

On 12/10/2015 12:58 PM, Tory M Blue wrote:


synchronous is commented out, is it on by default?


Yes it is on by default.



This is a slony slave node, so I'm not too worried about this particular
host losing its data, thus fsync is off,

thanks again sir

Tory




--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




Re: [PERFORM] Hmmm... why does CPU-intensive pl/pgsql code parallelise so badly when queries parallelise fine? Anyone else seen this?

2015-07-08 Thread Joshua D. Drake


On 07/08/2015 10:48 AM, Craig James wrote:


I admit that I haven't read this whole thread but:

Using Apache Fast-CGI, you are going to fork a process for each
instance of the function being executed and that in turn will use
all CPUs up to the max available resource.

With PostgreSQL, that isn't going to happen unless you are running
(at least) 8 functions across 8 connections.


Well, right, which is why I mentioned "even with dozens of clients."
Shouldn't that scale to at least all of the CPUs in use if the function
is CPU intensive (which it is)?


In theory, but it isn't PostgreSQL that does that; it will be the
kernel scheduler. Although (and I am grasping at straws):

I wonder if the execution is taking place outside of the backend proper,
or... are you using a pooler?


JD




Craig




JD

--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing I'm offended is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




--
-
Craig A. James
Chief Technology Officer
eMolecules, Inc.
-



--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing I'm offended is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




Re: [PERFORM] Hmmm... why does CPU-intensive pl/pgsql code parallelise so badly when queries parallelise fine? Anyone else seen this?

2015-07-07 Thread Joshua D. Drake


On 07/07/2015 08:05 PM, Craig James wrote:



No ideas, but I ran into the same thing. I have a set of C/C++ functions
that put some chemistry calculations into Postgres as extensions (things
like, calculate the molecular weight of this molecule). As SQL
functions, the whole thing bogged down, and we never got the scalability
we needed. On our 8-CPU setup, we couldn't get more than 2 CPUs busy at
the same time, even with dozens of clients.

When I moved these same functions into an Apache fast-CGI HTTP service
(exact same code, same network overhead), I could easily scale up and
use the full 100% of all eight CPUs.

I have no idea why, and never investigated further. The convenience of
having the functions in SQL wasn't that important.


I admit that I haven't read this whole thread but:

Using Apache Fast-CGI, you are going to fork a process for each instance 
of the function being executed and that in turn will use all CPUs up to 
the max available resource.


With PostgreSQL, that isn't going to happen unless you are running (at 
least) 8 functions across 8 connections.


JD

--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing I'm offended is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




Re: [PERFORM] New server: SSD/RAID recommendations?

2015-07-06 Thread Joshua D. Drake


On 07/06/2015 09:56 AM, Steve Crawford wrote:

On 07/02/2015 07:01 AM, Wes Vaske (wvaske) wrote:



For what it's worth, in my most recent iteration I decided to go with
the Intel Enterprise NVMe drives and no RAID. My reasoning was thus:

1. Modern SSDs are so fast that even if you had an infinitely fast RAID
card you would still be severely constrained by the limits of SAS/SATA.
To get the full speed advantages you have to connect directly into the bus.


Correct. What we have done in the past is use smaller drives with RAID 
10. This isn't for the performance but for the longevity of the drive. 
We obviously could do this with Software RAID or Hardware RAID.




2. We don't typically have redundant electronic components in our
servers. Sure, we have dual power supplies and dual NICs (though
generally to handle external failures) and ECC-RAM but no hot-backup CPU
or redundant RAM banks and...no backup RAID card. Intel Enterprise SSD
already have power-fail protection so I don't need a RAID card to give
me BBU. Given the MTBF of good enterprise SSD I'm left to wonder if
placing a RAID card in front merely adds a new point of failure and
scheduled-downtime-inducing hands-on maintenance (I'm looking at you,
RAID backup battery).


That's an interesting question. It definitely adds yet another component.
I can't believe how often we need to hotfix a RAID controller.


JD


--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing I'm offended is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




Re: [PERFORM] Do work_mem and shared buffers have 1g or 2g limit on 64 bit linux?

2015-06-13 Thread Joshua D. Drake


On 06/13/2015 10:27 AM, Kaijiang Chen wrote:

Hi, I am using postgresql 9.2.10 on centos 6.2, 64 bit version. The
server has 512 GB mem.

The jobs are mainly OLAP like. So I need larger work_mem and shared
buffers. From the source code, there is a constant MaxAllocSize==1GB.
So, I wonder whether work_mem and shared buffers can exceed 2GB in the
64 bit Linux server?


shared_buffers is not limited.

work_mem IIRC can go past 2GB but has never been proven to be effective
beyond that.

It does depend on the version you are running.
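
If you want to experiment, work_mem can be raised per session for the
big OLAP queries rather than globally (the value is illustrative):

  SET work_mem = '4GB';  -- this session only
  SHOW shared_buffers;   -- confirm the server-wide setting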

JD




Thanks and regards,
Kaijiang




--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing I'm offended is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




Re: [PERFORM] Are there tuning parameters that don't take effect immediately?

2015-06-12 Thread Joshua D. Drake


On 06/12/2015 01:37 PM, Michael Nolan wrote:

Last night I was doing some tuning on a database. The longest query I
was running was taking around 160 seconds. I didn't see much change in
the running time for that query, even after restarting PG.

Today, with roughly the same system load (possibly even a bit heavier
load), that query is running about 40 seconds.


Sounds like some of the relations are cached versus not.

JD


--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing I'm offended is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




Re: [PERFORM] How to reduce writing on disk ? (90 gb on pgsql_tmp)

2015-06-03 Thread Joshua D. Drake


On 06/03/2015 03:16 PM, Tomas Vondra wrote:


What is more important, though, is the amount of memory. OP reported the
query writes ~95GB of temp files (and dies because of full disk, so
there may be more). The on-disk format is usually more compact than the
in-memory representation - for example on-disk sort often needs 3x less
space than in-memory qsort. So we can assume the query needs 95GB of
data. Can you explain how that's going to fit into the 64GB RAM?


Cache is free memory. If you think of it any other way when you're
looking at memory usage and pressure on things like swap, you're
gonna make some bad decisions.


Cache is not free memory - it's there for a purpose and usually plays a
significant role in performance. Sure, it may be freed and used for
other purposes, but that has consequences - e.g. it impacts performance
of other queries etc. You generally don't want to do that on production.


Exactly. If your cache is reduced, your performance is reduced because
fewer things are in cache. It is not free memory. Also, the command free
is not very useful in this scenario; it is almost always better to use
sar so you can see the actual data points that free is summarizing.
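
For example (interval and count are arbitrary):

  sar -r 5 3   # memory utilization, three 5-second samples
  sar -B 5 3   # paging activity over the same window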


Sincerely,

JD



--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing I'm offended is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




Re: [PERFORM] PostgreSQL disk fragmentation causes performance problems on Windows

2015-04-29 Thread Joshua D. Drake


On 04/29/2015 01:08 AM, Andres Freund wrote:


Which OS and filesystem is this done on? Because many halfway modern
systems, like e.g ext4 and xfs, implement this in the background as
'delayed allocation'.


Oh, it's in the subject. Stupid me, sorry for that. I'd consider testing
how much better this behaves under a different operating system, as a
shorter term relief.


This is a known issue on the Windows platform. It is part of the 
limitations of that environment. Linux/Solaris/FreeBSD do not suffer 
from this issue in nearly the same manner.


jD




--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing I'm offended is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




Re: [PERFORM] [ADMIN] Hardware Configuration and other Stuff

2015-03-18 Thread Joshua D. Drake


On 03/18/2015 12:07 PM, Vivekanand Joshi wrote:



Here are few questions:

1.) I don't need a load balancing solution. It must be high availability
server and I can work with asynchronous replication. The most important
thing here would be recovery should be as fast as possible.

What approach would you recommend?


LinuxHA + Corosync/Pacemaker etc...



2.) Recommendations on indexes, WAL, table spaces. I am not asking about
on which key I need to make indexes, but an high level approach about
how to keep them? This might come out as a weird question to many but
please excuse me for being a novice.


This is too broad of a question without understanding the hardware it 
will be on.





*Most Important Question:*

3.) What would be the ideal hardware configuration for this requirement?
I know there is not a one-stop answer for this, but let's take it is a
starting point. We can come to a proper conclusion after a discussion.

What are the best on-line resources/books which can tell us about the
hardware requirements?


And see above. You need a consultant. I am sure you will get some decent 
responses but this isn't just about PostgreSQL, this is about 
architecture of a rather complex solution and a migration.


Sincerely,

Joshua D. Drake


--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, @cmdpromptinc

Now I get it: your service is designed for a customer
base that grew up with Facebook, watches Japanese seizure
robot anime, and has the attention span of a gnat.
I'm not that user., Tyler Riddle





Re: [PERFORM] MusicBrainz postgres performance issues

2015-03-15 Thread Joshua D. Drake


On 03/15/2015 09:43 AM, Scott Marlowe wrote:


* Consider installing perf (linux-utils-$something) and doing a
   systemwide profile.

3.2 isn't the greatest kernel around, efficiency wise. At some point you
might want to upgrade to something newer. I've seen remarkable
differences around this.


Not at some point, now. 3.2 - 3.8 are undeniably broken for PostgreSQL.



That is an understatement. Here's a nice article on why it's borked:

http://www.databasesoup.com/2014/09/why-you-need-to-avoid-linux-kernel-32.html

Had a 32 core machine with big RAID BBU and 512GB memory that was
dying using the 3.2 kernel. Went to 3.11 and it went from a load of
20-40 to a load of 5.


Yep, I can confirm this behavior.




You really should upgrade postgres to a newer major version one of these
days. Especially 9.2. can give you a remarkable improvement in
performance with many connections in a read mostly workload.


Seconded.

JD

--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, @cmdpromptinc

Now I get it: your service is designed for a customer
base that grew up with Facebook, watches Japanese seizure
robot anime, and has the attention span of a gnat.
I'm not that user., Tyler Riddle





Re: [PERFORM] MusicBrainz postgres performance issues

2015-03-15 Thread Joshua D. Drake


On 03/15/2015 05:08 AM, Robert Kaye wrote:




On Mar 15, 2015, at 12:41 PM, Andreas Kretschmer akretsch...@spamfence.net 
wrote:

just a wild guess: raid-controller BBU faulty


We don’t have a BBU in this server, but at least we have redundant power 
supplies.

In any case, how would a faulty battery possibly cause this?


The controller would turn off the write cache and fall back to
write-through, so every write has to wait on the disks.

JD



--

--ruaok

Robert Kaye -- r...@musicbrainz.org --http://musicbrainz.org






--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, @cmdpromptinc

Now I get it: your service is designed for a customer
base that grew up with Facebook, watches Japanese seizure
robot anime, and has the attention span of a gnat.
I'm not that user., Tyler Riddle





Re: [PERFORM] Turn off Hyperthreading! WAS: 60 core performance with 9.3

2014-08-21 Thread Joshua D. Drake


On 08/21/2014 04:29 PM, Josh Berkus wrote:


On 08/21/2014 04:08 PM, Steve Crawford wrote:

On 08/21/2014 03:51 PM, Josh Berkus wrote:

On 08/21/2014 02:26 PM, Scott Marlowe wrote:

I'm running almost the exact same setup in production as a spare. It
has 4 of those CPUs, 256G RAM, and is currently set to use HT. Since
it's a spare node I might be able to do some testing on it as well.
It's running a 3.2 kernel right now. I could probably get a later
model kernel on it even.

You know about the IO performance issues with 3.2, yes?


Were those 3.2 only and since fixed or are there issues persisting in
3.2+? The 12.04 LTS release of Ubuntu Server was 3.2 but the 14.04 is 3.13.


The issues I know of were fixed in 3.9.



Correct. If you run trusty backports you are good to go.

JD

--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, @cmdpromptinc
If we send our children to Caesar for their education, we should
 not be surprised when they come back as Romans.




Re: [PERFORM] pg_dump vs pg_basebackup

2014-03-25 Thread Joshua D. Drake


On 03/25/2014 05:05 AM, Claudio Freire wrote:


On Tue, Mar 25, 2014 at 4:39 AM, David Johnston pol...@yahoo.com wrote:

Hi,

Can anyone tell me the difference and performance between pg_dump and
pg_basebackup if I want to back up a large database.



Honestly,

Neither is particularly good at backing up large databases. I would look 
into PITR with rsync.
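
A minimal sketch of that approach using the old exclusive-backup API
(paths, hosts, and the label are made up; WAL archiving via
archive_command must also be configured so the copy can be rolled
forward):

  psql -c "SELECT pg_start_backup('nightly', true);"
  rsync -a --delete /var/lib/pgsql/data/ backup-host:/backups/pgdata/
  psql -c "SELECT pg_stop_backup();"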


JD


--
Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
Political Correctness is for cowards.




Re: [PERFORM] pg_dump vs pg_basebackup

2014-03-25 Thread Joshua D. Drake


On 03/25/2014 08:18 AM, Ilya Kosmodemiansky wrote:


Joshua,

that is a really good point: an alternative is to use pg_basebackup
through an ssh tunnel with compression, but rsync is much simpler.


Or rsync over ssh. The advantage is that you can create backups that 
don't have to be restored, just started. You can also use the 
differential portions of rsync to do it multiple times a day without 
much issue.


JD

--
Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
Political Correctness is for cowards.




Re: [PERFORM] pg_dump vs pg_basebackup

2014-03-25 Thread Joshua D. Drake


On 03/25/2014 08:21 AM, Magnus Hagander wrote:

I would say that's the one thing that rsync is *not*. pg_basebackup
takes care of a lot of things under the hood. rsync is a lot more
complicated, in particular in failure scenarios, since you have to
manually deal with pg_start/stop_backup().

There are definitely reasons you'd prefer rsync over pg_basebackup, but
I don't believe simplicity is one of them.

//Magnus


Good God man... since when do you top post!

Well there are tools that use rsync to solve those issues :P. We even 
have one that does multi-threaded rsync so you can pull many Terabytes 
in very little time (relatively).


JD


--
Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
Political Correctness is for cowards.




Re: [PERFORM] One huge db vs many small dbs

2013-12-05 Thread Joshua D. Drake




One of the many questions we have is about performance of the db if we
work with only one (using a ClientID to separate the clients' info) or
thousands of separate dbs. The management of the dbs is not a huge
concern as we have an automated tool.


If you are planning on using persisted connections, the large number of DB 
approach is going to have a significant disadvantage.  You cannot pool 
connections between databases.  So if you have 2000 databases, you are going to 
need a minimum of 2000 connections to service those database (assuming you want 
to keep at least one active connection open per client at a time).


That isn't exactly true. You could run multiple poolers.

JD



Brad.






--
Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
For my dreams of your image that blossoms
   a rose in the deeps of my heart. - W.B. Yeats




Re: [PERFORM] How clustering for scale out works in PostgreSQL

2013-08-29 Thread Joshua D. Drake


On 08/29/2013 07:59 AM, Richard Huxton wrote:


On 29/08/13 13:14, bsreejithin wrote:


I am *expecting 1000+ hits to my PostgreSQL DB* and I doubt my
standalone DB
will be able to handle it.


We are going to need a little more detail here. In a normal environment 
1000+ hits isn't that much, even if the hit is generating a dozen 
queries per page.


A more appropriate action would be to consider the number of transactions
per second and the type of queries the machine will be doing. You will
want to look into replication and hot standby, as well as read-only
scaling with pgpool-II.





OMG! 1000 hits every year! And hits too - not just any type of
query :-)

Seriously, if you try describing your setup, what queries make up your
hits and what you mean by 1000 then there are people on this list who
can tell you what sort of setup you'll need.

While you're away googling though, replication is indeed the term you
want. In particular hot standby which lets you run read-only queries
on the replicas.


Sarcasm with new recruits to the community is not the way to go.


JD

--
Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
For my dreams of your image that blossoms
   a rose in the deeps of my heart. - W.B. Yeats




Re: [PERFORM] How clustering for scale out works in PostgreSQL

2013-08-29 Thread Joshua D. Drake


On 08/29/2013 09:42 AM, bsreejithin wrote:


The performance test that was conducted was for 1 Hour.

There are 6 transactions. 2 DB inserts and 4 SELECTs.
Every 2 minutes there will be 4 SELECTs. And every 3 minutes there will be 2
DB inserts.


This shouldn't be a problem with proper hardware and a connection
pooler. The concern isn't the 1000 sessions, it is the rapid creation
and destruction of 1000 connections. A connection pooler will resolve
that issue.
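
For example, with PgBouncer in front (all names and sizes below are
illustrative):

  [databases]
  appdb = host=127.0.0.1 port=5432 dbname=appdb

  [pgbouncer]
  listen_port = 6432
  pool_mode = transaction
  max_client_conn = 1000   ; the 1000 client sessions land here
  default_pool_size = 20   ; only ~20 real backend connections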


Sincerely,

JD

--
Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
For my dreams of your image that blossoms
   a rose in the deeps of my heart. - W.B. Yeats




Re: [PERFORM] Performance autovaccum

2013-07-09 Thread Joshua D. Drake


On 07/09/2013 03:14 PM, Josh Berkus wrote:


On 07/08/2013 09:14 AM, Jeison Bedoya wrote:

Hi, I have a postgresql 9.2.2 and I don't use autovacuum, but I want to
begin to use it. Any recommendation about the optimal configuration, or
some link that explains it?


Initial configuration:

autovacuum = on

There, you're done.  You only do something else if the default
configuration is proven not to work for you.



Well, and a reload of PostgreSQL. It should also be noted that
autovacuum is on by default. You can check whether it is currently
enabled by issuing the following command from psql:


show autovacuum;

Other than that, JoshB is correct. The default settings for autovacuum
work for 95% of users out there.


JD



--
Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
For my dreams of your image that blossoms
   a rose in the deeps of my heart. - W.B. Yeats




Re: [PERFORM] Reliability with RAID 10 SSD and Streaming Replication

2013-05-22 Thread Joshua D. Drake


On 05/22/2013 11:06 AM, Greg Smith wrote:


I have some moderately fast SSD based transactional systems that are
still using traditional drives with battery-backed cache for the
sequential writes of the WAL volume, where the data volume is on Intel
710 disks.  WAL writes really burn through flash cells, too, so keeping
them on traditional drives can be cost effective in a few ways.  That
approach is lucky to hit 10K TPS though, so it can't compete against
what a PCI-E card like the FusionIO drives are capable of.


Greg, can you elaborate on the SSD + Xlog issue? What type of burn 
through are we talking about?


JD



--
Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
For my dreams of your image that blossoms
   a rose in the deeps of my heart. - W.B. Yeats




Re: [PERFORM] Reliability with RAID 10 SSD and Streaming Replication

2013-05-22 Thread Joshua D. Drake


On 05/22/2013 01:57 PM, Merlin Moncure wrote:


On Wed, May 22, 2013 at 3:06 PM, Greg Smith g...@2ndquadrant.com wrote:

You bet, and I haven't recommended anyone buy a 710 since the announcement.
However, hit the street is still an issue.  No one has been able to keep
DC S3700 drives in stock very well yet.  It took me three tries through
Newegg before my S3700 drive actually shipped.


Well, let's look at the facts:
*) 2x write endurance vs 710 (500x 320)
*) 2-10x performance depending on workload specifics
*) much better worst case/average latency
*) half the cost of the 710!?


I am curious how the 710 or S3700 stacks up against the new M500 from
Crucial? I know Intel is kind of the go-to for these things, but the M500
is power-off protected and rated at 72TB total bytes written (TBW) of
endurance, equal to 40GB per day for 5 years.


Granted, it isn't the fastest pig in the poke, but it sure seems like a
very reasonable drive for the price:


http://www.newegg.com/Product/Product.aspx?Item=20-148-695&ParentOnly=1&IsVirtualParent=1

Sincerely,

Joshua D. Drake

--
Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
For my dreams of your image that blossoms
   a rose in the deeps of my heart. - W.B. Yeats




Re: [PERFORM] Reliability with RAID 10 SSD and Streaming Replication

2013-05-22 Thread Joshua D. Drake


On 05/22/2013 04:37 PM, Merlin Moncure wrote:


On Wed, May 22, 2013 at 5:42 PM, Joshua D. Drake j...@commandprompt.com wrote:

I am curious how the 710 or S3700 stacks up against the new M500 from
Crucial? I know Intel is kind of the goto for these things but the m500 is
power off protected and rated at: Endurance: 72TB total bytes written (TBW),
equal to 40GB per day for 5 years .


I don't think the m500 is power safe (nor is any drive at the 1$/gb
price point).


According to the data sheet it is power safe.

http://investors.micron.com/releasedetail.cfm?ReleaseID=732650
http://www.micron.com/products/solid-state-storage/client-ssd/m500-ssd

Sincerely,

JD




--
Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
For my dreams of your image that blossoms
   a rose in the deeps of my heart. - W.B. Yeats




Re: [PERFORM] Reliability with RAID 10 SSD and Streaming Replication

2013-05-22 Thread Joshua D. Drake


On 05/22/2013 07:17 PM, Merlin Moncure wrote:


  According the the data sheet it is power safe.
 
  http://investors.micron.com/releasedetail.cfm?ReleaseID=732650
  http://www.micron.com/products/solid-state-storage/client-ssd/m500-ssd

Wow, that seems like a pretty good deal then assuming it works and
performs decently.


Yeah that was my thinking. Sure it isn't an S3700 but for the money it 
is still faster than the comparable spindle configuration.


JD



merlin






Re: [PERFORM] Setup of four 15k SAS disk with LSI raid controller

2013-03-13 Thread Joshua D. Drake


On 03/13/2013 11:45 AM, Vasilis Ventirozos wrote:

It's better to split WAL segments and data just because these two have
different I/O requirements, and because it's easier to measure and tune
things if you have them on different disks.


Generally speaking you are correct but we are talking about RAID 0 here.

JD

--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC
@cmdpromptinc - 509-416-6579




Re: [PERFORM] Optimize SELECT * from table WHERE foreign_key_id IN (key1,key2,key3,key4...)

2013-03-05 Thread Joshua D. Drake


On 03/05/2013 03:51 PM, Niels Kristian Schjødt wrote:


3ms isn't slow


Sorry, it's 3323ms!


Can I do anything to optimize that query or maybe the index or something?


your index is already used


Okay this leaves me with - get better hardware or?


What does EXPLAIN ANALYZE say versus just EXPLAIN?

JD
--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC
@cmdpromptinc - 509-416-6579




Re: [PERFORM] Request for help with slow query

2012-10-29 Thread Joshua D. Drake


On 10/29/2012 12:25 PM, Woolcock, Sean wrote:


I thought that an index was implicitly created for foreign keys, but I see that 
that's not true. I've just created one now and re-ran the query but it did not 
change the query plan or run time.


1. Explain analyze, not explain please

Check to see if estimated rows differs wildly from actual.

2. Seriously... 8.1? That is not supported. Please upgrade to a 
supported version of PostgreSQL.


http://www.postgresql.org/support/versioning/

3. Simple things:

 A. Have you run analyze on the two tables?
 B. What is your default_statistics_target?

Joshua D. Drake

--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC
@cmdpromptinc - 509-416-6579




Re: [PERFORM] auto-vacuum vs. full table update

2012-04-26 Thread Joshua D. Drake


On 04/26/2012 12:49 PM, Craig James wrote:


An update to our system means I'm going to be rewriting every row of
some large tables (20 million rows by 15 columns).  In a situation
like this, can auto-vacuum take care of it, or should I plan on
vacuum-full/reindex to clean up?



If you rewrite the whole table, you will end up with a table twice the
size; it will not be compacted, but as the table grows the old space
will be reused.


jD


This is 8.4.4.

Thanks,
Craig




--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
The PostgreSQL Conference - http://www.postgresqlconference.org/
@cmdpromptinc - @postgresconf - 509-416-6579



Re: [PERFORM] Performance

2011-04-13 Thread Joshua D. Drake

On 04/13/2011 05:03 PM, Tom Lane wrote:

That 4:1 ratio is based on some rather extensive experimentation that
I did back in 2000.  In the interim, disk transfer rates have improved
quite a lot more than disk seek times have, and the CPU cost to process
a page's worth of data has also improved compared to the seek time.

My experience is that at least a 1:1 ratio is more appropriate.

JD



Re: [PERFORM] Linux: more cores = less concurrency.

2011-04-11 Thread Joshua D. Drake
On Mon, 11 Apr 2011 13:09:15 -0500, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
 Glyn Astill glynast...@yahoo.co.uk wrote:
  
 The new server uses 4 x 8 core Xeon X7550 CPUs at 2Ghz
  
 Which has hyperthreading.
  
 our current servers are 2 x 4 core Xeon E5320 CPUs at 2Ghz.
  
 Which doesn't have hyperthreading.
  
 PostgreSQL often performs worse with hyperthreading than without. 
 Have you turned HT off on your new machine?  If not, I would start
 there.

And then make sure you aren't running CFQ.
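
For example (the device name is an assumption):

  cat /sys/block/sda/queue/scheduler              # e.g. noop deadline [cfq]
  echo deadline > /sys/block/sda/queue/scheduler  # switch off CFQ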

JD

  
 -Kevin

-- 
PostgreSQL - XMPP: jdrake(at)jabber(dot)postgresql(dot)org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997



Re: [PERFORM] Performance Test for PostgreSQL9

2011-03-02 Thread Joshua D. Drake
On Wed, 2011-03-02 at 13:19 -0500, Robert Haas wrote:
 On Sun, Feb 27, 2011 at 10:26 PM, Selva manickaraja mavle...@gmail.com 
 wrote:
  We have installed PostgreSQL9 and set up standby(s). Now we have to test the
  performance before we migrate all the data from Informix. The PostgreSQL9
  that we installed is the Linux version from EnterpriseDB which runs on Red
  Hat. The documentation on the PostgreSQL website shows that we have to gmake
  from source. So for that purpose we downloaded the source into an Ubuntu
  machine to gmake and install it. But Ubuntu on the other hand complains that
  it can't find gmake. So it looks like we are stuck here.
 
 I am a bit confused.  Why would you need to install from source
 instead of using an installer (either from EnterpriseDB or installing
 via apt-get)?

To be rude but honest. If you can't solve that problem you really should
contract with someone to help you with your performance tests because
you are not going to be able to adequately tune PostgreSQL for a proper
test.

That said, the reason you can't find make is that you don't have the
proper development tools installed.
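
On Ubuntu that would be something like the following (package names are
the usual PostgreSQL build dependencies; note that Ubuntu ships GNU make
as plain make, so there is no gmake binary):

  sudo apt-get install build-essential libreadline-dev zlib1g-dev
  ./configure && make && sudo make install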

+1 to what Robert said.

Sincerely,

Joshua D. Drake


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt




Re: [PERFORM] Performance Test for PostgreSQL9

2011-03-02 Thread Joshua D. Drake
On Thu, 2011-03-03 at 13:16 +0800, Selva manickaraja wrote:
 Thanks for the enlightenment. I will then look into other tools that help
 with performance testing. Is pgbench really useful? We need to produce the
 reports and statistics to our management as we are planning to migrate one
 system at a time from Informix. This is to ensure that we do not overload
 the database with all the systems eventually. So can pgbench help us here?

If you have an existing system, your best bet is to migrate your schema
and a data snapshot from that system to PostgreSQL. Then take a portion
of your more expensive queries, port them to PostgreSQL, and compare
from there.

A vanilla pgbench or other workload generator will do nothing to help you
with a real-world metric to provide to those that wear ties.

JD

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt




Re: [PERFORM] Write-heavy pg_stats_collector on mostly idle server

2011-02-07 Thread Joshua D. Drake
On Mon, 2011-02-07 at 14:58 -0800, Josh Berkus wrote:
  Anyone seen anything like this before?
 
  
  it is the expected behavior, IIRC
 
 OK.  It just seems kind of pathological for stats file writing to be 10X
 the volume of data writing.  I see why it's happening, but I think it's
 something we should fix.

I don't think it is expected. As I recall, it is something we fixed a
couple of major versions back (8.2?). It used to be that stats would be
written every 500ms. We changed that to write only when the stats are
asked for (via a select from a stats view or something), specifically
because it could cause this type of problem.

Am I thinking of something else?

I remember going back and forth with tgl about this, tgl?

JD

 
 -- 
   -- Josh Berkus
  PostgreSQL Experts Inc.
  http://www.pgexperts.com
 

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt




Re: [HACKERS] [PERFORM] Slow count(*) again...

2011-02-03 Thread Joshua D. Drake
On Thu, 2011-02-03 at 18:33 -0500, Mladen Gogala wrote:
  
   
  Exactly what we don't want.

 Who is we?

The majority of long term hackers.

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt




Re: [HACKERS] [PERFORM] Slow count(*) again...

2011-02-03 Thread Joshua D. Drake
On Thu, 2011-02-03 at 18:12 -0800, Conor Walsh wrote:
  I can't remember
  anyone ever complaining ANALYZE took too long to run.  I only
  remember complaints of the form I had to remember to manually run it
  and I wish it had just happened by itself.
 
 Robert,
 
 This sounds like an argument in favor of an implicit ANALYZE after all
 COPY statements, and/or an implicit autoanalyze check after all
 INSERT/UPDATE statements.

Well, that already happens. Assuming you insert, update, or copy more
rows than the threshold derived from

autovacuum_analyze_scale_factor

then autovacuum is going to analyze the table on its next run. The
default is 0.1, so it certainly doesn't take much.
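
Per the documentation, the trigger point works out to:

  analyze threshold = autovacuum_analyze_threshold
                    + autovacuum_analyze_scale_factor * number of tuples

so with the defaults (50 and 0.1), a 1,000,000-row table gets analyzed
after roughly 100,050 rows have changed.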

JD

 
 -Conor
 

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt




Re: [PERFORM] SELECT INTO large FKyed table is slow

2010-11-30 Thread Joshua D. Drake
On Sun, 2010-11-28 at 12:46 +0100, Mario Splivalo wrote:
 The database for monitoring certain drone statuses is quite simple:
 

 This is the slow part:
 INSERT INTO drones_history (sample_id, drone_id, drone_log_notice, 
 drone_temperature, drone_pressure)
 SELECT * FROM tmpUpdate;
 
 For 100 rows this takes around 2 seconds. For 1000 rows this takes 
 around 40 seconds. For 5000 rows this takes around 5 minutes.
 For 50k rows this takes around 30 minutes! Now this is where I start to lag,
 because I get a new CSV every 10 minutes or so.

Have you considered making the foreign key check deferrable?
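
A sketch using the table names from the message above (the referenced
drones table and its key are assumptions about the schema):

  ALTER TABLE drones_history
    ADD CONSTRAINT drones_history_drone_id_fkey
    FOREIGN KEY (drone_id) REFERENCES drones (drone_id)
    DEFERRABLE INITIALLY IMMEDIATE;

  BEGIN;
  SET CONSTRAINTS ALL DEFERRED;
  INSERT INTO drones_history (sample_id, drone_id, drone_log_notice,
                              drone_temperature, drone_pressure)
  SELECT * FROM tmpUpdate;
  COMMIT;  -- the FK is checked once here instead of row by row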

JD

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt




Re: [PERFORM] partitioning question 1

2010-10-28 Thread Joshua D. Drake
On Thu, 2010-10-28 at 09:36 -0700, Ben wrote:
 hello --
 
 my last email was apparently too long to respond to so i'll split it up into 
 shorter pieces.  my first question :
 
 my understanding of how range partitioning and constraint exclusion works 
 leads me to believe that it does not buy any query performance that a 
 clustered index doesn't already give you -- the advantages are all in 
 maintainability.  an index is able to eliminate pages just as well as 
 constraint exclusion is able to eliminate table partitions.  the I/O 
 advantages of having queries target small subtables are the same as the I/O 
 advantages of clustering the index : result pages in a small range are very 
 close to each other on disk.

Not entirely true. For one, a clustered index will not stay clustered
if you are still updating data that is in the partition. You shouldn't
underestimate the benefit of smaller relations in terms of maintenance
either.

 
 finally, since constraint exclusion isn't as flexible as indexing (i've seen
 old mailing list posts that say that constraint exclusion only works with
 static constants in where clauses, and only works with simple operators like
 < and >, which basically forces btree indexes when i want to use gist) it is
 indeed likely that partitioning can be slower than one big table with a
 clustered index.

Yes the constraints have to be static. Not sure about the operator
question honestly.


 is my intuition completely off on this?

You may actually want to look into expression indexes, not clustered
ones.

Sincerely,

Joshua D. Drake

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] partitioning question 1

2010-10-28 Thread Joshua D. Drake
On Thu, 2010-10-28 at 11:44 -0700, Ben wrote:

  Yes the constraints have to be static. Not sure about the operator
  question honestly.
 
 this seems to severely restrict their usefulness -- our queries are data 
 warehouse analytical -type  queries, so the constraints are usually 
 data-driven (come from joining against other tables.)

Well it does and it doesn't. Keep in mind that the constraint can be:

date >= '2010-10-01' and date <= '2010-10-31'

What it can't be is something that contains date_part() or extract() (as
an example) 

 
  is my intuition completely off on this?
  
  You may actually want to look into expression indexes, not clustered
  ones.

Take a look at the docs:

http://www.postgresql.org/docs/8.4/interactive/indexes-expressional.html

It could be considered partitioning without breaking up the table,
just the indexes.
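A quick sketch of that idea (names made up; strictly speaking these are
partial indexes, the nearest index-only analogue of range partitioning,
while the link above covers expression indexes):

CREATE INDEX foo_i_20_30 ON foo (i) WHERE i >= 20 AND i < 30;

Queries that constrain i into that range get a small index to scan,
while the table itself stays in one piece.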

Sincerely,

Joshua D. Drake


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] partitioning question 1

2010-10-28 Thread Joshua D. Drake
On Thu, 2010-10-28 at 12:25 -0700, Ben wrote:

 i think we are talking about two different things here: the constraints on 
 the table, and the where-clause constraints in a query which may or may not 
 trigger constraint exclusion.  i understand that table constraints have to be 
 constants -- it doesn't make much sense otherwise.  what i am wondering about 
 is, will constraint exclusion be triggered for queries where the column that 
 is being partitioned on is being constrained things that are not static 
 constants, for instance, in a join.  (i'm pretty sure the answer is no, 
 because i think constraint exclusion happens before real query planning.)  a 
 concrete example :
 
 create table foo (i integer not null, j float not null);
 create table foo_1 (check ( i >= 0 and i < 10) ) inherits (foo);
 create table foo_2 (check ( i >= 10 and i < 20) ) inherits (foo);
 create table foo_3 (check ( i >= 20 and i < 30) ) inherits (foo);
 etc..
 
 create table bar (i integer not null, k float not null);
 
 my understanding is that a query like
 
 select * from foo join bar using (i);
 
 can't use constraint exclusion, even if the histogram of i-values on table 
 bar says they only live in the range 0-9, and so the query will touch all of 
 the tables.  i think this is not favorable compared to a single foo table 
 with a well-maintained btree index on i.
 

My tests show you are incorrect:


part_test=# explain analyze select * from foo join bar using (i) where
i=9;
                               QUERY PLAN
--------------------------------------------------------------------
 Nested Loop  (cost=34.26..106.76 rows=200 width=20) (actual
time=0.004..0.004 rows=0 loops=1)
   ->  Append  (cost=0.00..68.50 rows=20 width=12) (actual
time=0.004..0.004 rows=0 loops=1)
         ->  Seq Scan on foo  (cost=0.00..34.25 rows=10 width=12)
(actual time=0.001..0.001 rows=0 loops=1)
               Filter: (i = 9)
         ->  Seq Scan on foo_1 foo  (cost=0.00..34.25 rows=10 width=12)
(actual time=0.000..0.000 rows=0 loops=1)
               Filter: (i = 9)
   ->  Materialize  (cost=34.26..34.36 rows=10 width=12) (never
executed)
         ->  Seq Scan on bar  (cost=0.00..34.25 rows=10 width=12) (never
executed)
               Filter: (i = 9)
 Total runtime: 0.032 ms
(10 rows)



-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] partitioning question 1

2010-10-28 Thread Joshua D. Drake
On Thu, 2010-10-28 at 12:59 -0700, Ben wrote:
 On Oct 28, 2010, at 12:44 PM, Joshua D. Drake wrote:
  
  My tests show you are incorrect:
  
  
  part_test=# explain analyze select * from foo join bar using (i) where
  i=9;
                                QUERY PLAN
  --------------------------------------------------------------------
  Nested Loop  (cost=34.26..106.76 rows=200 width=20) (actual
  time=0.004..0.004 rows=0 loops=1)
    ->  Append  (cost=0.00..68.50 rows=20 width=12) (actual
  time=0.004..0.004 rows=0 loops=1)
          ->  Seq Scan on foo  (cost=0.00..34.25 rows=10 width=12)
  (actual time=0.001..0.001 rows=0 loops=1)
                Filter: (i = 9)
          ->  Seq Scan on foo_1 foo  (cost=0.00..34.25 rows=10 width=12)
  (actual time=0.000..0.000 rows=0 loops=1)
                Filter: (i = 9)
    ->  Materialize  (cost=34.26..34.36 rows=10 width=12) (never
  executed)
          ->  Seq Scan on bar  (cost=0.00..34.25 rows=10 width=12) (never
  executed)
                Filter: (i = 9)
  Total runtime: 0.032 ms
  (10 rows)
 
 strange.  my tests don't agree with your tests :

Do you have constraint_exclusion turned on? You should verify with show
constraint_exclusion (I saw what you wrote below).
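That is:

SHOW constraint_exclusion;
SET constraint_exclusion = on;  -- session-only; edit postgresql.conf
                                -- to make it permanent

(On 8.4 and later, 'partition' is the recommended value.)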

JD

P.S. Blatant plug, you coming to http://www.postgresqlconference.org ?


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Postgres insert performance and storage requirement compared to Oracle

2010-10-25 Thread Joshua D. Drake
On Mon, 2010-10-25 at 11:36 -0700, Divakar Singh wrote:
  
 68 Rows inserted: 100,000 
 Above results show good INSERT performance of PG when using SQL
 procedures. But 
 performance when I use C++ lib is very bad. I did that test some time
 back so I 
 do not have data for that right now.

This is interesting, are you using libpq or libpqXX?

Joshua D. Drake


 
 
-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] XFS vs Ext3, and schedulers, for WAL

2010-10-11 Thread Joshua D. Drake
On Mon, 2010-10-11 at 10:50 -0700, Josh Berkus wrote:
  There's a number of blog tests floating around comparing XFS and Ext3,
  and the various Linux schedulers, for PGDATA or for an all-in-one mount.
  
  However, the WAL has a rather particular write pattern, and it's
  reasonable to assume that it shouldn't be optimized the same way as
  PGDATA.  Has anyone done any head-to-heads for WAL drive configuration
  changes?
 
 That would be a no, then.  Looks like I have my work cut out for me ...

The only thing I have done is:

http://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/

It doesn't cover XFS but it provides a decent and simple comparison on
ext2/ext3 etc...

Remember xlog is sequential, so pushing it off to its own device is useful.

JD

 
 -- 
   -- Josh Berkus
  PostgreSQL Experts Inc.
  http://www.pgexperts.com
 

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Issue for partitioning with extra check constriants

2010-10-04 Thread Joshua D. Drake
On Mon, 2010-10-04 at 11:34 -0700, Josh Berkus wrote:
  And your point is?  The design center for the current setup is maybe 5
  or 10 partitions.  We didn't intend it to be used for more partitions
  than you might have spindles to spread the data across.
 
 Where did that come from? 

Yeah that is a bit odd. I don't recall any discussion in regards to such
a weird limitation.

  It certainly wasn't anywhere when the feature
 was introduced.  Simon intended for this version of partitioning to
 scale to 100-200 partitions (and it does, provided that you dump all
 other table constraints), and partitioning has nothing to do with
 spindles.  I think you're getting it mixed up with tablespaces.

Great! That would be an excellent addition.


 
 The main reason for partitioning is ease of maintenance (VACUUM,
 dropping partitions, etc.) not any kind of I/O optimization.

Well that is certainly a main reason but it is not the main reason.
We have lots of customers using it to manage very large amounts of data
using the constraint exclusion features (and gaining from the smaller
index sizes).


JD

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Query much faster with enable_seqscan=0

2010-09-21 Thread Joshua D. Drake
On Tue, 2010-09-21 at 14:02 -0500, Ogden wrote:
 How odd, I set the following:
 
 seq_page_cost = 1.0 
 random_page_cost = 2.0
 
 And now the query runs in milliseconds as opposed to 14 seconds. Could this 
 really be the change? I am running ANALYZE now - how often is it recommended 
 to do this?

PostgreSQL's defaults are based on extremely small and some would say
(non production) size databases. As a matter of course I always
recommend bringing seq_page_cost and random_page_cost more in line.

However, you may want to try moving random_page_cost back to 4 and try
increasing cpu_tuple_cost instead.
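A convenient way to experiment is per-session, so nothing has to be
reloaded (the SELECT is whatever query you were timing):

SET seq_page_cost = 1.0;
SET random_page_cost = 4.0;
SET cpu_tuple_cost = 0.05;  -- default is 0.01; a value to try, not a recipe
EXPLAIN ANALYZE SELECT ...;

Once the plan and timings look right, put the winning values in
postgresql.conf.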

Sincerely,

Joshua D. Drake

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Held idle connections vs use of a Pooler

2010-09-14 Thread Joshua D. Drake
On Tue, 2010-09-14 at 10:10 -0600, mark wrote:
 Hello,
 
 I am relatively new to postgres (just a few months) so apologies if
 any of you are bearing with me.
 
 I am trying to get a rough idea of the amount of bang for the buck I
 might see if I put in a connection pooling service into the enviroment
 vs our current methodology of using persistent open connections.

Well, what a pooler does is provide persistent open connections that can
be reused. What tech are you using for these persistent open connections?


 Most of the connections from the various apps hold idle connections
 until they need to execute a query once done go back to holding an
 open idle connection.  (there are ~600 open connections at any given
 time, and most of the time most are idle)

Sounds like each app is holding its own pool?


 I think from reading this list for a few weeks the answer is move to
 using connection pooling package elsewhere to better manage incoming
 connections, with a lower number to the db.

Correct, because each connection is overhead. If you have 600
connections, of which really only 20 are currently executing, that is
highly inefficient. 

A pooler would have, say, 40 connections open, with 20 currently
executing and a max pool of 600.

 
 I am told this will require some re-working of some app code as I
 understand pg-pool was tried a while back in our QA environment and
 several parts of various in-house apps/scripts/etc. started to 
 experience show stopping problems.

Use pgbouncer. It is what Skype uses.

Sincerely,

Joshua D. Drake


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Where does data in pg_stat_user_tables come from?

2010-09-13 Thread Joshua D. Drake
On Mon, 2010-09-13 at 16:06 -0700, Josh Berkus wrote:
 All,
 
 I've been looking at pg_stat_user_tables (in 8.3, because of a project I
 have), and it appears that autovacuum, and only autovaccum, updates the
 data for this view.  This means that one can never have data in
 pg_stat_user_tables which is completely up-to-date, and if autovacuum is
 off, the view is useless.

As I recall its kept in shared_buffers (in some kind of counter) and
updated only when it is requested or when autovacuum fires. This was
done because we used to write stats every 500ms and it was a bottleneck.
(IIRC)
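Related to that: within a transaction the view is read from a snapshot
taken at first access, so if you are polling it inside one long
transaction, pg_stat_clear_snapshot() (8.3+) discards the cached
snapshot and a re-read shows newer numbers:

SELECT pg_stat_clear_snapshot();
SELECT relname, n_tup_ins, n_tup_upd, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_tup_upd DESC LIMIT 5;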

Joshua D. Drake


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread Joshua D. Drake
On Sat, 2010-07-24 at 16:21 -0400, Greg Smith wrote:
 Greg Smith wrote:
  Note that not all of the Sandforce drives include a capacitor; I hope 
  you got one that does!  I wasn't aware any of the SF drives with a 
  capacitor on them were even shipping yet, all of the ones I'd seen 
  were the chipset that doesn't include one still.  Haven't checked in a 
  few weeks though.
 
 Answer my own question here:  the drive Yeb got was the brand spanking 
 new OCZ Vertex 2 Pro, selling for $649 at Newegg for example:  
 http://www.newegg.com/Product/Product.aspx?Item=N82E16820227535 and with 
 the supercapacitor listed right in the main production specifications 
 there.  This is officially the first inexpensive (relatively) SSD with a 
 battery-backed write cache built into it.  If Yeb's test results prove 
 it works as it's supposed to under PostgreSQL, I'll be happy to finally 
 have a moderately priced SSD I can recommend to people for database 
 use.  And I fear I'll be out of excuses to avoid buying one as a toy for 
 my home system.

That is quite the toy. I can get 4 SATA-II with RAID Controller, with
battery backed cache, for the same price or less :P

Sincerely,

Joshua D. Drake

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Pooling in Core WAS: Need help in performance tuning.

2010-07-23 Thread Joshua D. Drake
On Thu, 2010-07-22 at 20:56 +0100, Hannu Krosing wrote:
  
  Let's extend this shall we:
  
  Avoid adding yet another network hop
 
 postgreSQL is multi-process, so you either have a separate pooler
 process or need to put pooler functionality in postmaster, bothw ways
 you still have a two-hop scenario for connect. you may be able to pass
 the socket to child process and also keep it, but doing this for both
 client and db sides seems really convoluted. 

Which means, right now there are three hops. Eliminating one is good.

 Or is there a prortable way to pass sockets back and forth between
 parent and child processes ?
 
 If so, then pgbouncer could use it as well.
 
  Remove of a point of failure
 
 rather move the point of failure from external pooler to internal
 pooler ;)

Yes but at that point, it doesn't matter. 

 
  Reduction of administrative overhead
 
 Possibly. But once you start actually using it, you still need to
 configure and monitor it and do other administrator-y tasks.

Yes, but it is inclusive.

 
  Integration into our core authentication mechanisms
 
 True, although for example having SSL on client side connection will be
 so slow that it hides any performance gains from pooling, at least for
 short-lived connections.

Yes, but right now you can't use *any* pooler with LDAP for example. We
could if pooling was in core. Your SSL argument doesn't really work
because its true with or without pooling.

  Greater flexibility in connection control
 
 Yes, poolers can be much more flexible than default postgresql. See for
 example pgbouncers PAUSE , RECONFIGURE and RESUME commands 

:D

 
  And, having connection pooling in core does not eliminate the use of an
  external pool where it makes sense.
 
 Probably the easiest way to achieve pooling in core would be adding an
 option to start pgbouncer under postmaster control.

Yeah but that won't happen. Also I think we may have a libevent
dependency that we have to work out.

 
 You probably can't get much leaner than pgbouncer.

Oh don't get me wrong. I love pgbouncer. It is my recommended pooler but
even it has limitations (such as auth).

Sincerely,

Joshua D. Drake

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Strange explain on partitioned tables

2010-07-23 Thread Joshua D. Drake
On Fri, 2010-07-23 at 15:03 -0700, Gerald Fontenay wrote:
  The PostgreSQL partitioning system is aimed to support perhaps a 
  hundred inherited tables.  You can expect to get poor performance on 
  queries if you create 1000 of them. 
 
 Hi,
 
 Why is it that you would expect poor performance for, say, 1000 or more? I 
 have ~1000 inherited tables and I don't see any significant slowdowns. 
 I only ever access a single inherited table at a time though in this 
 situation. I suppose I am using inheritance only for organization in 
 this case...

It is variable based on workload and as I recall has to do with the
planning time. As the number of children increases, so does the planning
time.

Joshua D. Drake


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Pooling in Core WAS: Need help in performance tuning.

2010-07-22 Thread Joshua D. Drake
On Thu, 2010-07-22 at 14:36 -0400, Robert Haas wrote: 
 On Mon, Jul 12, 2010 at 6:58 AM, Craig Ringer
 cr...@postnewspapers.com.au wrote:
  So rather than asking should core have a connection pool perhaps
  what's needed is to ask what can an in-core pool do that an external
  pool cannot do?
 
 Avoid sending every connection through an extra hop.

Let's extend this shall we:

Avoid adding yet another network hop
Remove of a point of failure
Reduction of administrative overhead
Integration into our core authentication mechanisms
Greater flexibility in connection control

And, having connection pooling in core does not eliminate the use of an
external pool where it makes sense.

JD

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt



-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Identical query slower on 8.4 vs 8.3

2010-07-15 Thread Joshua D. Drake
On Thu, 2010-07-15 at 10:41 -0400, Patrick Donlin wrote:

 Results when running on the v8.3.7 server
 Total query runtime: 32185 ms.
 700536 rows retrieved.
 
 Results when running on the v8.4.4 server
 Total query runtime: 164227 ms.
 700536 rows retrieved.
 

 
 Anyone have any ideas on where I should start looking to figure this
 out? I didn't perform any special steps when moving to v8.4, I just
 did a pg_dump from the 8.3 server and restored it on the new 8.4
 servers. Maybe that is where I made a mistake.

Three immediate things come to mind:

1. One had relations in file or shared buffer cache, the other didn't
2. One is running ext4 versus ext3; when you end up spilling to disk
because you overran work_mem, the ext4 machine is faster, but without
knowing which machine is which it is a bit tough to diagnose.
3. You didn't run ANALYZE on one of the machines
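On point 3: a pg_dump/restore moves the data but none of the planner
statistics, so the cheap first thing to try on the slow box is:

ANALYZE VERBOSE;

(database-wide; trivial next to a 164-second query).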

Sincerely,

Joshua D. Drake

 
 Thanks!
 Patrick
 

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] partition queries hitting all partitions even though check key is specified

2010-07-12 Thread Joshua D. Drake
On Mon, 2010-07-12 at 22:01 -0500, Josh Berkus wrote:
 On 9/2/09 10:05 AM, Kevin Kempter wrote:
  On Wednesday 02 September 2009 09:02:27 Scott Marlowe wrote:
  On Wed, Sep 2, 2009 at 8:52 AM, Kevin Kempter kev...@consistentstate.com 
  wrote:
  Hi all;
 
  I cant figure out why we're scanning all of our partitions.
 
 I don't think extract() is immutable, which would pretty much invalidate
 your check constraints as far as CE is concerned.

Correct.
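extract()/date_part() on a timestamptz is only STABLE (the result
depends on the TimeZone setting), so the planner cannot use a check
constraint built on it for exclusion. A CE-friendly constraint compares
the raw column against constants, e.g. (hypothetical names):

ALTER TABLE events_2009_09 ADD CONSTRAINT events_2009_09_ck
  CHECK (created_at >= '2009-09-01' AND created_at < '2009-10-01');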

Joshua D. Drake


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] performance on new linux box

2010-07-08 Thread Joshua D. Drake
On Thu, 2010-07-08 at 09:31 -0700, Ryan Wexler wrote:
 The raid card the server has in it is:
 3Ware 4 Port 9650SE-4LPML RAID Card
 
 Looking it up, it seems to indicate that it has BBU 

No. It supports a BBU; it doesn't necessarily have one.

You need to go into your RAID BIOS. It will tell you.

Sincerely,

Joshua D. Drake


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] PostgreSQL as a local in-memory cache

2010-06-24 Thread Joshua D. Drake
On Thu, 2010-06-24 at 21:14 +0200, Pavel Stehule wrote:
 2010/6/24 Josh Berkus j...@agliodbs.com:
 
  And I'm also planning to implement unlogged tables, which have the
  same contents for all sessions but are not WAL-logged (and are
  truncated on startup).
 
 this is similar to MySQL's memory tables. Personally, I don't see any
 practical sense in doing the same work on PostgreSQL now, when memcached exists.

Because memcache is yet another layer and increases overhead for the
application developers by adding yet another layer to work with.
Non-logged tables would rock.

SELECT * FROM foo;

:D

JD




-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Occasional giant spikes in CPU load

2010-06-24 Thread Joshua D. Drake
On Thu, 2010-06-24 at 17:50 -0700, Craig James wrote:
 I'm reviving this question because I never figured it out.  To summarize: At 
 random intervals anywhere from a few times per hour to once or twice a day, 
 we see a huge spike in CPU load that essentially brings the system to a halt 
 for up to a minute or two.  Previous answers focused on what is it doing, 
 i.e. is it really Postgres or something else?
 
 Now the question has narrowed down to this: what could trigger EVERY postgres 
 backend to do something at the same time?  See the attached output from top 
 -b, which shows what is happening during one of the CPU spikes.

checkpoint causing IO Wait.

What does sar say about these times?

Joshua D. Drake


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] slow index lookup

2010-06-22 Thread Joshua D. Drake
On Tue, 2010-06-22 at 18:00 -0700, Anj Adu wrote:
 i have several partitions like this (similar size ...similar data
 distribution)..these partitions are only inserted..never updated.
 Why would I need to vacuum..
 

An explain analyze is what is in order for further diagnosis.

JD


 I can reindex..just curious what can cause the index to go out of whack.
 
 On Tue, Jun 22, 2010 at 4:44 PM, Alvaro Herrera
 alvhe...@commandprompt.com wrote:
  Excerpts from Anj Adu's message of mar jun 22 17:44:39 -0400 2010:
  This query seems unreasonable slow on a well-indexed table (13 million
  rows). Separate indexes are present on guardid_id , from_num and
  targetprt columns.
 
  Maybe you need to vacuum or reindex?
 
  --
  Álvaro Herrera alvhe...@commandprompt.com
  The PostgreSQL Company - Command Prompt, Inc.
  PostgreSQL Replication, Consulting, Custom Development, 24x7 support
 
 

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Joshua D. Drake
On Tue, 2010-04-20 at 10:39 -0700, David Kerr wrote:
 Howdy all,
 
 I've got a huge server running just postgres. It's got 48 cores and 256GB of 
 ram. Redhat 5.4, Postgres 8.3.9.
 64bit OS. No users currently.
 
 I've got a J2EE app that loads data into the DB, it's got logic behind it so 
 it's not a simple bulk load, so
 i don't think we can use copy.
 
 Based on the tuning guides, it set my effective_cache_size to 128GB (1/2 the 
 available memory) on the box.
 
 When I ran my load, it took aproximately 15 hours to do load 20 million 
 records. I thought this was odd because
 on a much smaller machine I was able to do that same amount of records in 6 
 hours.
 
 My initial thought was hardware issues so we got sar, vmstat, etc all running 
 on the box and they didn't give
 any indication that we had resource issues.
 
 So I decided to just make the 2 PG config files look the same. (the only 
 change was dropping effective_cache_size 
 from 128GB to 2GB).
 
 Now the large box performs the same as the smaller box. (which is fine).
 
 incidentally, both tests were starting from a blank database.
 
 Is this expected? 

Without a more complete picture of the configuration, this post doesn't
mean a whole lot. Further, effective_cache_size is not likely to affect a
bulk load at all.

Joshua D. Drake



 
 Thanks!
 
 Dave
 


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering



-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Occasional giant spikes in CPU load

2010-04-07 Thread Joshua D. Drake
On Wed, 2010-04-07 at 14:37 -0700, Craig James wrote:
 Most of the time Postgres runs nicely, but two or three times a day we get a 
 huge spike in the CPU load that lasts just a short time -- it jumps to 10-20 
 CPU loads.  Today it hit 100 CPU loads.  Sometimes days go by with no spike 
 events.  During these spikes, the system is completely unresponsive (you 
 can't even login via ssh).
 
 I managed to capture one such event using top(1) with the batch option as a 
 background process.  See output below - it shows 19 active postgress 
 processes, but I think it missed the bulk of the spike.

What does iostat 5 say during the jump?

Joshua D. Drake



-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
Respect is earned, not gained through arbitrary and repetitive use of Mr. or 
Sir.


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Occasional giant spikes in CPU load

2010-04-07 Thread Joshua D. Drake
On Wed, 2010-04-07 at 14:45 -0700, Craig James wrote:
 On 4/7/10 2:40 PM, Joshua D. Drake wrote:
  On Wed, 2010-04-07 at 14:37 -0700, Craig James wrote:
  Most of the time Postgres runs nicely, but two or three times a day we get 
  a huge spike in the CPU load that lasts just a short time -- it jumps to 
  10-20 CPU loads.  Today it hit 100 CPU loads.  Sometimes days go by with 
  no spike events.  During these spikes, the system is completely 
  unresponsive (you can't even login via ssh).
 
  I managed to capture one such event using top(1) with the batch option 
  as a background process.  See output below - it shows 19 active postgress 
  processes, but I think it missed the bulk of the spike.
 
  What does iostat 5 say during the jump?
 
 It's very hard to say ... I'll have to start a background job to watch for a 
 day or so.  While it's happening, you can't login, and any open windows 
 become unresponsive.  I'll probably have to run it at high priority using 
 nice(1) to get any data at all during the event.

Do you have sar running? Say a sar -A ?

 
 Would vmstat be informative?

Yes.

My guess is that it is not CPU, it is IO and your CPU usage is all WAIT
on IO.

To have your CPUs so flooded that they are the cause of an inability to
log in is pretty suspect.

Joshua D. Drake


 
 Thanks,
 Craig
 


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
Respect is earned, not gained through arbitrary and repetitive use of Mr. or 
Sir.


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] disk space usage unexpected

2010-02-15 Thread Joshua D. Drake
On Mon, 2010-02-15 at 14:59 -0500, Rose Zhou wrote:
 Good day,
 
 I have a PostgreSQL 8.4 database installed on WinXP x64 with very heavy
 writing and updating on a partitioned table. Sometimes within one minute,
 there are tens of file with size=1,048,576kb (such as
 filenode.1,filenode.2,...filenode.43) created in the database subdirectory
 within PGDATA/base. 
 
 This caused the disk space to be quickly used up. Is this expected?

Yes. Especially if autovacuum is not keeping up with the number of
updates.
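A quick way to check (8.3+ statistics columns):

SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC LIMIT 10;

If n_dead_tup dwarfs n_live_tup on the hot partitions, autovacuum needs
to be made more aggressive for those tables.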

Joshua D. Drake


 
 Thanks for any information
 
 
 
 Best Regards
 
 Rose Zhou 
 
 


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
Respect is earned, not gained through arbitrary and repetitive use of Mr. or 
Sir.


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Dell PERC H700/H800

2010-02-11 Thread Joshua D. Drake
On Thu, 2010-02-11 at 12:39 +, Matthew Wakeling wrote:
 Just a heads up - apparently the more recent Dell RAID controllers will no 
 longer recognise hard discs that weren't sold through Dell.
 
 http://www.channelregister.co.uk/2010/02/10/dell_perc_11th_gen_qualified_hdds_only/
 
 As one of the comments points out, that kind of makes them no longer SATA 
 or SAS compatible, and they shouldn't be allowed to use those acronyms any 
 more.

That's interesting. I know that IBM at least on some of their models
have done the same. Glad I use HP :)

Joshua D. Drake

 
 Matthew
 
 -- 
  An optimist sees the glass as half full, a pessimist as half empty,
  and an engineer as having redundant storage capacity.
 


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
Respect is earned, not gained through arbitrary and repetitive use of Mr. or 
Sir.


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] New server to improve performance on our large and busy DB - advice?

2010-01-14 Thread Joshua D. Drake
On Thu, 14 Jan 2010 14:17:13 -0500, Carlo Stonebanks
stonec.regis...@sympatico.ca wrote:
 My client just informed me that new hardware is available for our DB
 server.
 
 . Intel Core 2 Quads Quad
 . 48 GB RAM
 . 4 Disk RAID drive (RAID level TBD)
 
 I have put the ugly details of what we do with our DB below, as well as
 the 
 postgres.conf settings. But, to summarize: we have a PostgreSQL 8.3.6 DB

 with very large tables and the server is always busy serving a constant 
 stream of single-row UPDATEs and INSERTs from parallel automated
processes.
 
 There are less than 10 users, as the server is devoted to the KB
 production 
 system.
 
 My questions:
 
 1) Which RAID level would you recommend

10

 2) Which Windows OS would you recommend? (currently 2008 x64 Server)

If you have to run Windows... that works.

 3) If we were to port to a *NIX flavour, which would you recommend?
(which 
 support trouble-free PG builds/makes please!)

Community driven:
Debian Stable
CentOS 5

Commercial:
Ubuntu LTS
RHEL 5

 4) Is this the right PG version for our needs?

You want to run at least the latest stable 8.3 series which I believe is
8.3.9.
With the imminent release of 8.5 (6 months), it may be time to move to
8.4.2 instead.


Joshua D. Drake


 
 Thanks,
 
 Carlo
 
 The details of our use:
 
 . The DB hosts is a data warehouse and a knowledgebase (KB) tracking the

 professional information of 1.3M individuals.
 . The KB tables related to these 130M individuals are naturally also
large
 . The DB is in a perpetual state of serving TCL-scripted Extract,
 Transform 
 and Load (ETL) processes
 . These ETL processes typically run 10 at-a-time (i.e. in parallel)
 . We would like to run more, but the server appears to be the bottleneck
 . The ETL write processes are 99% single row UPDATEs or INSERTs.
 . There are few, if any DELETEs
 . The ETL source data are import tables
 . The import tables are permanently kept in the data warehouse so that
we 
 can trace the original source of any information.
 . There are 6000+ and counting
 . The import tables number from dozens to hundreds of thousands of rows.

 They rarely require more than a pkey index.
 . Linking the KB to the source import date requires an audit table of
 500M 
 rows, and counting.
 . The size of the audit table makes it very difficult to manage,
 especially 
 if we need to modify the design.
 . Because we query the audit table different ways to audit the ETL
 processes 
 decisions, almost every column in the audit table is indexed.
 . The maximum number of physical users is 10 and these users RARELY
 perform 
 any kind of write
 . By contrast, the 10+ ETL processes are writing constantly
 . We find that internal stats drift, for whatever reason, causing row
seq 
 scans instead of index scans.
 . So far, we have never seen a situation where a seq scan has improved 
 performance, which I would attribute to the size of the tables
 . We believe our requirements are exceptional, and we would benefit 
 immensely from setting up the PG planner to always favour index-oriented

 decisions - which seems to contradict everything that PG advice suggests
 as 
 best practice.
 
 Current non-default conf settings are:
 
 autovacuum = on
 autovacuum_analyze_scale_factor = 0.1
 autovacuum_analyze_threshold = 250
 autovacuum_naptime = 1min
 autovacuum_vacuum_scale_factor = 0.2
 autovacuum_vacuum_threshold = 500
 bgwriter_lru_maxpages = 100
 checkpoint_segments = 64
 checkpoint_warning = 290
 datestyle = 'iso, mdy'
 default_text_search_config = 'pg_catalog.english'
 lc_messages = 'C'
 lc_monetary = 'C'
 lc_numeric = 'C'
 lc_time = 'C'
 log_destination = 'stderr'
 log_line_prefix = '%t '
 logging_collector = on
 maintenance_work_mem = 16MB
 max_connections = 200
 max_fsm_pages = 204800
 max_locks_per_transaction = 128
 port = 5432
 shared_buffers = 500MB
 vacuum_cost_delay = 100
 work_mem = 512MB

-- 
PostgreSQL - XMPP: jdrake(at)jabber(dot)postgresql(dot)org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] 8.4.1 ubuntu karmic slow createdb

2009-12-11 Thread Joshua D. Drake
On Thu, 2009-12-10 at 20:38 -0500, Nikolas Everett wrote:
 In my limited experience ext4 as presented by Karmic is not db
 friendly.  I had to carve my swap partition into a swap partition and
 an xfs partition to get better db performance.  Try fsync=off first,
 but if that doesn't work then try a mini xfs.

Do not turn fsync off. That is bad advice. I would not suggest ext4 at
this point for database operations. Use ext3. It is backward compatible.

Joshua D. Drake


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
If the world pushes look it in the eye and GRR. Then push back harder. - 
Salamander


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] 8.4.1 ubuntu karmic slow createdb

2009-12-11 Thread Joshua D. Drake
On Fri, 2009-12-11 at 15:43 -0500, Nikolas Everett wrote:
 Turning fsync off on a dev database is a bad idea?  Sure you might
 kill it and have to start over, but thats kind of the point in a dev
 database.

My experience is that bad dev practices turn into bad production
practices, whether intentionally or not.

Joshua D. Drake


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
If the world pushes look it in the eye and GRR. Then push back harder. - 
Salamander


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] truncate in transaction blocks read access

2009-11-30 Thread Joshua D. Drake
On Mon, 2009-11-30 at 10:50 -0800, Craig James wrote:
 I have a million-row table (two text columns of ~25 characters each plus two 
 integers, one of which is PK) that is replaced every week.  Since I'm doing 
 it on a live system, it's run inside a transaction.  This is the only time 
 the table is modified; all other access is read-only.
 
 I wanted to use truncate table for efficiency, to avoid vacuum and index 
 bloat, etc.  But when I do truncate inside a transaction, all clients are 
 blocked from read until the entire transaction is complete.  If I switch to 
 delete from ..., it's slower, but other clients can continue to use the old 
 data until the transaction commits.
 
 The only work-around I've thought of is to create a brand new table, populate 
 it and index it, then start a transaction that drops the old table and 
 renames the new one.
 
 Any thoughts?

Use partitioning so you can roll off data.

http://www.postgresql.org/docs/8.3/interactive/ddl-partitioning.html
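With, say, a weekly child table, the swap you describe becomes a pair
of cheap catalog operations instead of a long transaction (a sketch;
the names are made up):

-- words_new has already been populated and indexed outside the
-- transaction
BEGIN;
ALTER TABLE words_new INHERIT words;
ALTER TABLE words_old NO INHERIT words;
COMMIT;
DROP TABLE words_old;

Readers of the parent keep seeing the old child right up to the commit.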

Joshua D. Drake


 
 Thanks,
 Craig
 
 


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
If the world pushes look it in the eye and GRR. Then push back harder. - 
Salamander


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] database size growing continously

2009-10-29 Thread Joshua D. Drake
On Thu, 2009-10-29 at 17:00 +0100, Ludwik Dylag wrote:
 2009/10/29 Peter Meszaros p...@prolan.hu
 Hi All,
 
 I use postgresql 8.3.7 as a huge queue. There is a very simple
 table
 with six columns and two indices, and about 6 million records
 are
 written into it in every day continously commited every 10
 seconds from
 8 clients. The table stores approximately 120 million records,
 because a
 cron job daily deletes those ones are older than 20 day.
 Autovacuum is
 on and every settings is the factory default except some
 unrelated ones
 (listen address, authorization). But my database is growing,
 characteristically ~600MByte/day, but sometimes much slower
 (eg. 10MB,
 or even 0!!!).
 
 I've also tried a test on another server running the same
 postgresql,
 where 300 million record was loaded into a freshly created
 database,
 and 25 million was deleted with single DELETE command.  The
 'vacuum
 verbose phaseangle;' command seems to be running forever for
 hours:
 
 
 Try increasing max_fsm_pages and shared_buffers
 These changes did speed up vacuum full on my database.
 With shared_buffers remember to increase max shm in your OS.

If you overran your max_fsm_pages you are going to have indexes that are
not properly cleaned up, even after a vacuum full. You will need to
cluster or reindex.
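That is, something along these lines (phaseangle is the table from your
test; the index name is a guess):

REINDEX TABLE phaseangle;
-- or, to rewrite the heap and rebuild all indexes in one pass:
CLUSTER phaseangle USING phaseangle_pkey;

The tail of a database-wide VACUUM VERBOSE will also warn you when
max_fsm_pages is set too low.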

Joshua D. Drake






-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
If the world pushes look it in the eye and GRR. Then push back harder. - 
Salamander


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] dump time increase by 1h with new kernel

2009-10-08 Thread Joshua D. Drake
On Thu, 2009-10-08 at 10:44 -0700, Justin T Pryzby wrote:
 Hi Everyone


Did your scheduler change between the kernel versions? 

 Not sure if sar can provide other data included by vmstat: IO merged
 in/out, {,soft}irq ticks?
 
 Thanks,
 Justin
 
-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
If the world pushes look it in the eye and GRR. Then push back harder. - 
Salamander


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Bottleneck?

2009-08-06 Thread Joshua D. Drake
On Thu, 2009-08-06 at 11:57 -0400, Kenneth Cox wrote:
 I wasn't able to compile dtrace on either CentOS 5.3 or Fedora 11.  But  
 the author is responsive and the problem doesn't look hard to fix.  It  
 sits in my inbox awaiting some hacking time...

Why aren't you using systemtap again? As I recall it uses the same
interface as dtrace. The front end is just different.

Joshua D. Drake

-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Bottleneck?

2009-08-06 Thread Joshua D. Drake
On Thu, 2009-08-06 at 12:38 -0400, Ray Stell wrote:
 On Thu, Aug 06, 2009 at 09:12:22AM -0700, Joshua D. Drake wrote:
  Why aren't you using systemtap again? 
 
 1. significant solaris responsibilites

There is your problem right there ;)

 2. significant linux responsibilities
 3. tool consolidation delusions

Hah! I know this one.

 
 Can you drive dtace toolkit via systemtap?
 

I don't know. Tom?

Joshua D. Drake

-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Postgres replication: dump/restore, PITR, Slony,...?

2009-06-11 Thread Joshua D. Drake

 
 Then there are Slony-I, Buchardo, Mamoth Replicator from CMO, simple
 replication in Postgres 8.4 and other projects...

CMO? :)

Joshua D. Drake
 
 Suggestions?
 Thanks,
 
 -- Shaul
 
-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Postgres replication: dump/restore, PITR, Slony,...?

2009-06-11 Thread Joshua D. Drake
On Thu, 2009-06-11 at 16:30 +, Greg Sabino Mullane wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: RIPEMD160
 
  Then there are Slony-I, Buchardo, Mamoth Replicator from CMO, simple
  replication in Postgres 8.4 and other projects...
 
  CMO? :)
 
 Buchardo? :)

A new dessert, Buchardo CMO:

Two shots of brandy
One shot of rum
Vanilla Ice cream
Cherries

Blend to perfection.

Joshua D. Drake

-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Hosted servers with good DB disk performance?

2009-05-27 Thread Joshua D. Drake
On Tue, 2009-05-26 at 19:52 -0600, Scott Marlowe wrote:
 On Tue, May 26, 2009 at 7:41 PM, Scott Carey sc...@richrelevance.com wrote:
 
  On 5/26/09 6:17 PM, Greg Smith gsm...@gregsmith.com wrote:
 
  On Tue, 26 May 2009, Joshua D. Drake wrote:
 
  CMD doesn't rent hardware you would have to provide that, Rack Space
  does.
 
  Part of the idea was to avoid buying a stack of servers, if this were just
  a where do I put the boxes at? problem I'd have just asked you about it
  already.

Heh. Well, on another consideration, any rental will outlive its cost
effectiveness in 6 months or less. At least if you own the box, it's
useful for a long period of time.

Heck I got a quad opteron, 2 gig of memory with 2 6402 HP controllers
and 2 fully loaded MSA30s for 3k. Used of course but still.

The equivalent machine brand new is 10k and the same machine from Rack
Space is going to be well over $1,200.00 a month.


Joshua D. Drake

-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Hosted servers with good DB disk performance?

2009-05-26 Thread Joshua D. Drake
On Tue, 2009-05-26 at 17:51 -0400, Greg Smith wrote:
 I keep falling into situations where it would be nice to host a server 
 somewhere else.  Virtual host solutions and the mysterious cloud are no 
 good for the ones I run into though, as disk performance is important for 
 all the applications I have to deal with.
 
 What I'd love to have is a way to rent a fairly serious piece of dedicated 
 hardware, ideally with multiple (at least 4) hard drives in a RAID 
 configuration and a battery-backed write cache.  The cache is negotiable. 
 Linux would be preferred, FreeBSD or Solaris would also work; not Windows 
 though (see good DB performance).
 
 Is anyone aware of a company that offers such a thing?

Sure, CMD will do it, and so will Rack Space and a host of others. If you
are willing to go with a VPS, SliceHost are decent folk. CMD doesn't rent
hardware (you would have to provide that); Rack Space does.

Joshua D. Drake

-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] raid10 hard disk choice

2009-05-21 Thread Joshua D. Drake
On Thu, 2009-05-21 at 10:25 -0400, Merlin Moncure wrote:
 On Thu, May 21, 2009 at 8:47 AM, Linos i...@linos.es wrote:
  Hello,
 i have to buy a new server and in the budget i have (small) i have to
  select one of this two options:
 
  -4 sas 146gb 15k rpm raid10.
  -8 sas 146gb 10k rpm raid10.
 
  The server would not be only dedicated to postgresql but to be a file
  server, the rest of options like plenty of ram and battery backed cache raid
  card are done but this two different hard disk configuration have the same
  price and i am not sure what it is better.
 
  If the best option it is different for postgresql that for a file server i
  would like to know too, thanks.
 
 I would say go with the 10k drives.  more space, flexibility (you can
 dedicate a volume to WAL), and more total performance on paper.  I
 would also, if you can afford it and they fit, get two small sata
 drives, mount raid 1 and put the o/s on those.

+1 on that.

Joshua D. Drake


 
 merlin
 
-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Any better plan for this query?..

2009-05-12 Thread Joshua D. Drake
On Tue, 2009-05-12 at 17:22 +0200, Dimitri wrote:
 Robert, what I'm testing now is 256 users max. The workload is growing
 progressively from 1, 2, 4, 8 ... to 256 users. Of course the Max
 throughput is reached on the number of users equal to 2 * number of
 cores, but what's important for me here - database should continue to
 keep the workload! - response time regressing, but the troughput
 should remain near the same.
 
 So, do I really need a pooler to keep 256 users working??  - I don't
 think so, but please, correct me.

If they disconnect and reconnect, yes. If they keep the connections live,
then no.

Joshua D. Drake

-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Any better plan for this query?..

2009-05-12 Thread Joshua D. Drake
On Tue, 2009-05-12 at 20:34 -0400, Aidan Van Dyk wrote:
 * Joshua D. Drake j...@commandprompt.com [090512 19:27]:
  
  Apache solved this problem back when it was still called NSCA HTTPD. Why
  aren't we preforking again?
 
 Of course, preforking and connection pooling are totally different
 beast...
 

Yes and no. They both solve similar problems, and preforking solves more
problems when you look at the picture in its entirety (namely
authentication integration, etc.).

 But, what really does preforking give us?  A 2 or 3% improvement?

It depends on the problem we are solving. We can test it but I would bet
it is more than that especially in a high velocity environment.

   The
 forking isn't the expensive part,

It is expensive but not as expensive as the below.

  the per-database setup that happens is
 the expensive setup...  All pre-forking would save us is a tiny part of
 the initial setup, and in turn make our robust postmaster controller no
 longer have control.

I don't buy this. Properly coded we aren't going to lose any control.

Joshua D. Drake

-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] PostgreSQL with PostGIS on embedded hardware

2009-05-08 Thread Joshua D. Drake
On Fri, 2009-05-08 at 18:06 +0200, Paolo Rizzi wrote:
 Hi all,
 recently I came across a question from a customer of mine, asking me if 
 it would feasible to run PostgreSQL along with PostGIS on embedded hardware.
 They didn't give me complete information, but it should be some kind of 
 industrial PC with a 600MHz CPU. Memory should be not huge nor small, 
 maybe a couple of GBytes, hard disk should be some type of industrial 
 Compact Flash of maybe 16 GBytes.
 

Well the CPU is slow but the rest isn't so bad.

 They are thinking about using this setup on-board of public buses and 
 trams, along with a GPS receiver, for self-localization. So that when 
 the bus or tram enters defined zones or passes near defined points, 
 events are triggered.
 The database could probably be used completely read-only or almost that.
 
 What performances do you think would be possible for PostgreSQL+PostGIS 
 on such hardware???
 

If you aren't doing a lot of writing I don't see a huge barrier to this.

Sincerely,

Joshua D. Drake


-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Joshua D. Drake
On Wed, 2009-04-22 at 21:53 +0100, James Mansion wrote:
 Stephen Frost wrote:
  You're re-hashing things I've already said.  The big win is batching the
  inserts, however that's done, into fewer transactions.  Sure, multi-row
  inserts could be used to do that, but so could dropping begin/commits in
  right now which probably takes even less effort.

 Well, I think you are seriously underestimating the cost of the 
 round-trip compared

The breakdown is this:

1. Eliminate single inserts
2. Eliminate round trips

Yes, round trips are hugely expensive.

 
  No, as was pointed out previously already, you really just need 2.  A

 And I'm disagreeing with that.  Single row is a given, but I think 
 you'll find it pays to have one

My experience shows that you are correct. Even if you do a single BEGIN;
with 1000 inserts you are still getting a round trip for every insert
until you commit. Based on a 20ms round-trip time, you are talking 20
seconds of additional overhead.
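
To make that concrete (a sketch, not from the original thread, using a
hypothetical log table): the first form below costs one network round
trip per row even though it is a single transaction, while the second
costs one round trip for the whole batch:

psql app <<'SQL'
CREATE TABLE log (msg text);   -- hypothetical table

-- 1000 round trips: one per INSERT, even inside one transaction
BEGIN;
INSERT INTO log (msg) VALUES ('line 1');
INSERT INTO log (msg) VALUES ('line 2');
-- ... and so on, one round trip each ...
COMMIT;

-- 1 round trip for the whole batch: multi-row VALUES (8.2+)
INSERT INTO log (msg) VALUES ('line 1'), ('line 2'), ('line 3');
SQL

COPY takes the same idea further still for genuinely bulk loads.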

Joshua D. Drake


-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Using IOZone to simulate DB access patterns

2009-04-10 Thread Joshua D. Drake
On Fri, 2009-04-03 at 17:09 -0700, Josh Berkus wrote:
 On 4/3/09 4:12 PM, Josh Berkus wrote:
  All,
 
  I've been using Bonnie++ for ages to do filesystem testing of new DB
  servers. But Josh Drake recently turned me on to IOZone.
 
 Related to this: is IOZone really multi-threaded?  I'm doing a test run 
 right now, and only one CPU is actually active.  While there are 6 
 IOZone processes, most of them are idle.

In order to test real interactivity (AFAIK) with iozone you have to
launch multiple iozone instances. You also need to run them from separate
directories, otherwise they all end up writing the same file. The work I
did here:

http://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/

Was actually with multiple bash scripts firing separate instances. The
interesting options here are -s 1000m and -r8k: use a 1000 MB file (like
our data files) written in 8k chunks (like our pages).
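
Roughly what those scripts amount to (a sketch; the mount point and the
instance count of four are illustrative):

#!/bin/sh
# launch 4 iozone instances, each in its own directory so they
# don't all write the same file
for i in 1 2 3 4; do
  mkdir -p /mnt/pgdata/iozone$i
  ( cd /mnt/pgdata/iozone$i && \
    iozone -s 1000m -r 8k -i 0 -i 1 > ../iozone$i.log ) &
done
wait   # block until all four finish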

Based on your partitioning scheme, what is the break out? Can you
reasonably expect all partitions to be used equally?

Sincerely,

Joshua D. Drake


 
 --Josh
 
 
-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Performance of archive logging in a PITR restore

2009-03-16 Thread Joshua D. Drake
On Mon, 2009-03-16 at 12:11 -0400, Mark Steben wrote:
 First of all, I did pose this question first on the pgsql-admin
 mailing list.


 The issue is that during a restore on a remote site, (Postgres 8.2.5) 
 
 archived logs are taking an average of 35 – 40 seconds apiece to
 restore.  

Archived logs are replayed in a serialized manner (recovery is
single-threaded), so they will be slower to restore in general.

Joshua D. Drake



-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] suggestions for postgresql setup on Dell 2950 , PERC6i controller

2009-02-05 Thread Joshua D. Drake
On Thu, 2009-02-05 at 12:40 +, Matt Burke wrote:
 Arjen van der Meijden wrote:
 

 Are there any reasonable choices for bigger (3+ shelf) direct-connected
 RAID10 arrays, or are hideously expensive SANs the only option? I've
 checked out the latest Areca controllers, but the manual available on
 their website states there's a limitation of 32 disks in an array...
 

HP P800.

-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Using multiple cores for index creation?

2009-01-29 Thread Joshua D. Drake
On Thu, 2009-01-29 at 18:09 -0500, Robert Haas wrote:
 On Thu, Jan 29, 2009 at 3:21 PM, henk de wit henk53...@hotmail.com wrote:
  Hi,
  When I try to restore a database dump on PostgreSQL 8.3
  that's approximately 130GB in size and takes about 1 hour, I noticed index
  creation makes up the bulk of that time. I'm using a very fast I/O subsystem
  (16 Mtron Pro 7535 SSDs using a dual 1.2Ghz IOP/4GB cache RAID controller),
  fast CPUs (2 quad core C2Q's at 2.6Ghz) and 32GB RAM. From monitoring the
  restore process, I learned that only 10 minutes is spend doing IO, while the
  rest of the time is spend on creating the indexes. Index creation seems to
  be completely CPU bound.
  The problem is that only 1 CPU core is used. My other 7 cores are just
  sitting there doing nothing. It seems to me that creating each index,
  especially for different tables, is something that can be done
  independently.
  Is there some way I can let PostgreSQL use multiple cores for creating the
  indexes?
 
 Andrew Dunstan has been working on this problem.  His latest parallel
 restore patch can be found here:
 
 http://archives.postgresql.org/message-id/4977e070.6070...@dunslane.net


Yeah, but that isn't useful for 8.3. What can be done in this specific
situation is to make sure you dump with the -Fc option. You can then pull
a TOC out with pg_restore and break that apart. Reading the TOC is pretty
self-evident. Once you get down to index creation you can create multiple
files, each with a group of indexes to create. Then call pg_restore
multiple times in a script against the individual TOC files and you will
use all cores.
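
In script form that works out to something like this (a sketch; the file
names and the two-way split are illustrative):

pg_dump -Fc mydb > mydb.dump
pg_restore -l mydb.dump > full.toc

# hand-edit: copy everything up to the indexes into pre.toc, then
# split the INDEX entries between idx1.toc and idx2.toc

pg_restore -L pre.toc  -d mydb mydb.dump      # schema + data first
pg_restore -L idx1.toc -d mydb mydb.dump &    # then indexes,
pg_restore -L idx2.toc -d mydb mydb.dump &    # one core each
wait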

Joshua D. Drake

P.S. Increasing maintenance_work_mem can help too.
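
One session-level way to do that, without touching postgresql.conf, is
to pass it through libpq (the 1GB figure is just an example):

PGOPTIONS='-c maintenance_work_mem=1GB' pg_restore -L idx1.toc -d mydb mydb.dump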


 
 ...Robert
 
-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] [PERFORMANCE] Buying hardware

2009-01-26 Thread Joshua D. Drake
On Mon, 2009-01-26 at 14:58 -0500, Jeff wrote:

 voila. I have 2 full copies of the db.  You could even expand it a bit  
 and after the rsync & friends have it fire up the instance and run  
 pg_dump against it for a pg_restore compatible dump just in case.
 
 It takes a long time to restore a 300GB db, even if you cheat and  
 parallelify some of it. 8.4 may get a  pg_restore that can load in  
 parallel - which will help somewhat.

Somewhat? Just to be clear, if you have the hardware for it, parallel
restore can take a 500GB restore in 2.5 hours (versus 15). IMO, that is
a *little* more than somewhat. Maybe, a bit? ;)

Joshua D. Drake

-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] postgresql 8.3 tps rate

2009-01-22 Thread Joshua D. Drake
On Thu, 2009-01-22 at 17:47 +0200, Ibrahim Harrani wrote:
 Hi,
 
 I am running postgresql 8.3.5 on FreeBSD with Dual core Intel(R)
 Xeon(R) CPU 3065  @ 2.33GHz, 2GB RAM and Seagate Technology -
 Barracuda 7200.10 SATA 3.0Gb/s (RAID 1).
 
 I made several benchmark tests with pgbench; the TPS rate is almost 40 +/- 5.
 $ pgbench -i pgbench -s 50 -U pgsql
 
 [pg...@$ pgbench -c 200 -t 2 -U pgsql -d pgbench
 
 transaction type: TPC-B (sort of)
 scaling factor: 10
 number of clients: 200
 number of transactions per client: 2
 number of transactions actually processed: 400/400
 tps = 39.044088 (including connections establishing)
 tps = 41.528795 (excluding connections establishing)
 
 [pg...@$ pgbench -c 100 -t 5 -U pgsql -d pgbench
 
 transaction type: TPC-B (sort of)
 scaling factor: 10
 number of clients: 100
 number of transactions per client: 5
 number of transactions actually processed: 500/500
 tps = 30.162271 (including connections establishing)
 tps = 30.643256 (excluding connections establishing)
 
 Is this rate is normal or not? What can I do to improve tps and insert
 performance?

Run a real benchmark. Running 400/500 transactions doesn't give you any
real indication of what is going on. Run 5 or so and see how it
looks.
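
For example (illustrative numbers; scale the client count and the
transaction count to your hardware):

pgbench -i -s 100 -U pgsql pgbench        # initialize at scale 100
pgbench -c 10 -t 10000 -U pgsql pgbench   # 100,000 transactions total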

Joshua D. Drake

-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Need help with 8.4 Performance Testing

2008-12-13 Thread Joshua D. Drake
On Sat, 2008-12-13 at 07:44 -0800, da...@lang.hm wrote:
 On Sat, 13 Dec 2008, Robert Haas wrote:

  This may be a little off-topic, but I'd be interested in hearing more
  details about how you (or others) would do this...  manufacturer,
  model, configuration?  How many hard drives do you need to get 25
  spindles?  And where can you get that many 15K hard drives for under
  $10K?  My lack of experience in this area is showing here, but,
  seriously, any suggestions appreciated.
 

http://h71016.www7.hp.com/ctoBases.asp?oi=E9CED&BEID=19701&SBLID=&ProductLineId=450&FamilyId=2570&LowBaseId=15222&LowPrice=$1,899.00&familyviewgroup=757&viewtype=Matrix

Retail cost for 15k, 36GB drives: ~ $12k. A phone call and the threat of
buying Dell gets it for ~ $10k.

Joshua D. Drake


-- 
PostgreSQL
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Need help with 8.4 Performance Testing

2008-12-13 Thread Joshua D. Drake
On Sat, 2008-12-13 at 12:45 -0700, Scott Marlowe wrote:
 On Sat, Dec 13, 2008 at 11:37 AM, Joshua D. Drake j...@commandprompt.com 
 wrote:
  On Sat, 2008-12-13 at 07:44 -0800, da...@lang.hm wrote:
  On Sat, 13 Dec 2008, Robert Haas wrote:

  http://h71016.www7.hp.com/ctoBases.asp?oi=E9CED&BEID=19701&SBLID=&ProductLineId=450&FamilyId=2570&LowBaseId=15222&LowPrice=$1,899.00&familyviewgroup=757&viewtype=Matrix
 
  Retail cost for 15k, 36GB drives: ~ $12k. A phone call and the threat of
  buying Dell gets it for ~ $10k.
 
 I prefer to deal with companies that I don't have to horse trade with
 to get a good deal.  You can threaten dell and get good deals, but if

And what company would that be? There is no major server manufacturer
that doesn't do the "Oh, you have a competitive bid... let's just lower
that quote for you" routine.

Note: HP can beat Dell every time on an apples-to-apples quote. At
least when I have done it.

Joshua D. Drake





 
-- 
PostgreSQL
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Need help with 8.4 Performance Testing

2008-12-13 Thread Joshua D. Drake
On Sat, 2008-12-13 at 12:57 -0700, Scott Marlowe wrote:
 On Sat, Dec 13, 2008 at 12:47 PM, Joshua D. Drake j...@commandprompt.com 
 wrote:
  On Sat, 2008-12-13 at 12:45 -0700, Scott Marlowe wrote:
  On Sat, Dec 13, 2008 at 11:37 AM, Joshua D. Drake j...@commandprompt.com 
  wrote:
   On Sat, 2008-12-13 at 07:44 -0800, da...@lang.hm wrote:
   On Sat, 13 Dec 2008, Robert Haas wrote:

  Note: HP can beat Dell, every time on an apples to apples quote. At
  least when I have done it.
 
 Aberdeen inc, the one I listed in my previous response. 

Sorry didn't see it.

  Their on site
 price for a single quad core xeon, 8 gig 800MHZ ram and 24 15k5 SAS
 drives is $10,080 or so.  I've never had to tell them I was getting a
 better price anywhere else.  They just give me a great quote each
 time, they have very fast and efficient customer service, and they
 give a 5 year warranty on everything they custom build for you.  I'm a
 very satisfied customer.

Well, that would work for CMD, but CMD's customers would say, "Who the
heck is Aberdeen?"


 Few places have made me so happy with a decision to spend $22k on
 servers as these guys have.
 

Well, that is definitely solid testimony. I don't have any experience
with them, but I know that smaller companies always provide better
service, so I have no reason to doubt you. The larger the company gets,
the harder it is to get through the muck of bureaucracy.

I have to be honest though: unless the customer explicitly states "we
don't have a problem with white box," it's going to be HP, Dell or IBM,
occasionally Sun, but once they see how ridiculous the initial quotes
from Sun are they generally don't want to deal with them anymore.

Sincerely,

Joshua D. Drake



-- 
PostgreSQL
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Need help with 8.4 Performance Testing

2008-12-13 Thread Joshua D. Drake
On Sat, 2008-12-13 at 19:16 -0700, Scott Marlowe wrote:

 Isn't it amazing how many small businesses won't buy from other small
 businesses?  They'd much rather give their money to a company they
 don't like because they'll be around a while (the big company).
 

True enough!

Joshua D. Drake


-- 
PostgreSQL
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Experience with HP Smart Array P400 and SATA drives?

2008-12-09 Thread Joshua D. Drake
On Tue, 2008-12-09 at 13:10 +0100, Mario Weilguni wrote:
 Scott Marlowe schrieb:
  On Tue, Dec 2, 2008 at 2:22 AM, Mario Weilguni [EMAIL PROTECTED] wrote:

 I still think we must be doing something wrong here, I googled the 
 controller and Linux, and did not find anything indicating a problem. 
 The HP SmartArray series is quite common, so a lot of users would have 
 the same problem.

Yes, the SmartArray series is quite common and is actually known to
perform reasonably well in RAID 10. You still appear to be using RAID 5.

Joshua D. Drake


 
 Thanks!
 
-- 
PostgreSQL
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Experience with HP Smart Array P400 and SATA drives?

2008-12-09 Thread Joshua D. Drake
On Tue, 2008-12-09 at 18:27 +0200, Peter Eisentraut wrote:
 Aidan Van Dyk wrote:
  * Joshua D. Drake [EMAIL PROTECTED] [081209 11:01]:
   
  Yes the SmartArray series is quite common and actually know to perform
  reasonably well, in RAID 10. You still appear to be trying RAID 5.
  
  *boggle* 
  
  Are people *still* using raid5?
  
  /me gives up!
 
 What do you suggest when there is not enough room for a RAID 10?

RAID 1.

Joshua D. Drake


 
-- 
PostgreSQL
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

