Re: [PERFORM] Risk of data corruption/loss?

2013-03-13 Thread Joshua Berkus
Neils,

> - Master server with battery-backed RAID controller with 4 SAS disks in
>   a RAID 0 - so NO mirroring here, due to max performance requirements.
> - Slave server setup with streaming replication on 4 HDDs in RAID 10.
>   The setup will be done with synchronous_commit=off and synchronous_s
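
A minimal sketch of the primary-side settings this thread is discussing, assuming a 9.1-era setup; the values are illustrative, not the poster's actual config:

```
# postgresql.conf on the master (illustrative sketch, PostgreSQL 9.1+)
wal_level = hot_standby               # required for streaming replication
max_wal_senders = 3                   # allow the standby to connect
synchronous_commit = off              # as described: favor speed over durability
synchronous_standby_names = 'slave1'  # 'slave1' is an assumed standby name
```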

Re: [PERFORM] PostgreSQL 9.2.3 performance problem caused Exclusive locks

2013-03-13 Thread Joshua Berkus
Emre,

> > LOG: process 4793 acquired ExclusiveLock on extension of relation
> > 305605 of database 16396 after 2348.675 ms

The reason you're seeing that message is that you have log_lock_waits turned on. That message says that some process waited for 2.3 seconds to get a lock for expanding the
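
The quoted message comes from the log_lock_waits machinery; a sketch of the two settings involved (both are real parameters, values illustrative):

```
log_lock_waits = on    # log a message when a lock wait exceeds deadlock_timeout
deadlock_timeout = 1s  # the default; the quoted 2348.675 ms wait was well past it
```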

Re: [PERFORM] Increasing WAL usage followed by sudden drop

2012-08-17 Thread Joshua Berkus
> We are not doing anything to postgres that would cause the rise and drop.
> Database activity is pretty consistent, nor are we doing any kind of
> purge. This week the drop occurred after 6 days. We are thinking it must
> be some kind of internal postgres activity but we can't track it
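
One way to check whether internal activity (checkpoints, autovacuum) lines up with the WAL rise and drop is to log it; a postgresql.conf sketch using settings that exist in 9.x:

```
log_checkpoints = on              # logs when and why each checkpoint ran
log_autovacuum_min_duration = 0   # logs every autovacuum/autoanalyze action
```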

Re: [PERFORM] Linux machine aggressively clearing cache

2012-03-28 Thread Joshua Berkus
> This may just be a typo, but if you really did create write (dirty)
> block device cache by writing the pg_dump file somewhere, then that
> is what it's supposed to do ;)

The pg_dump was across the network, so the only caching on the machine was read caching.

> Read cache of course does not

[PERFORM] Linux machine aggressively clearing cache

2012-03-27 Thread Joshua Berkus
I've run across some memory behavior on Linux I've never seen before. Server running RHEL6 with 96GB of RAM, kernel 2.6.32, PostgreSQL 9.0, and a 208GB database with fairly random accesses over 50% of the database. Now, here's the weird part: even after a week of uptime, only 21 to 25GB of cache is ev
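
To see how the kernel is actually using memory on a box like this, the counters in /proc/meminfo are the place to start; a Linux-only sketch:

```shell
# Linux-only sketch: page-cache and dirty-memory counters (values in kB)
grep -E '^(MemTotal|MemFree|Cached|Dirty):' /proc/meminfo

# On large NUMA machines, vm.zone_reclaim_mode=1 is one common reason the
# kernel declines to fill the cache; 0 is usually what you want for a database
cat /proc/sys/vm/zone_reclaim_mode 2>/dev/null || true
```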

Re: [PERFORM] Determining working set size

2012-03-27 Thread Joshua Berkus
Peter,

Check out pg_fincore. Still kind of risky on a production server, but does an excellent job of measuring page access on Linux.

- Original Message -
> Baron Swartz's recent post [1] on working set size got me to thinking.
> I'm well aware of how I can tell when my database's wor
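
For reference, a minimal pg_fincore invocation might look like the following; 'my_table' is a placeholder, and the extension must already be installed on the server:

```sql
-- sketch: report which OS pages of a relation are resident in the page cache
CREATE EXTENSION pgfincore;
SELECT * FROM pgfincore('my_table');
```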

Re: [PERFORM] random_page_cost = 2.0 on Heroku Postgres

2012-02-12 Thread Joshua Berkus
> Is there an easy and unintrusive way to get such a metric as the
> aggregated query times? And to normalize it for how much work happens
> to have been going on at the time?

You'd pretty much need to do large-scale log harvesting combined with samples of query concurrency taken several time
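
Short of full log harvesting, the pg_stat_statements contrib module gives aggregated per-query times; a sketch assuming 9.1+ with shared_preload_libraries = 'pg_stat_statements' already set:

```sql
-- top queries by total execution time (total_time is in milliseconds)
CREATE EXTENSION pg_stat_statements;
SELECT query, calls, total_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
```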

Re: [PERFORM] Performance

2011-04-28 Thread Joshua Berkus
All,

> The easiest place to start is by re-using the work already done by the
> TPC for benchmarking commercial databases. There are ports of the TPC
> workloads to PostgreSQL available in the DBT-2, DBT-3, and DBT-5
> tests;

Also EAStress, which I think the project still has a license for. The

Re: [PERFORM] Shouldn't we have a way to avoid "risky" plans?

2011-03-25 Thread Joshua Berkus
> mergejoinscansel doesn't currently try to fix up the histogram bounds
> by consulting indexes. At the time I was afraid of the costs of doing
> that, and I still am; but it would be a way to address this issue.

Oh? Hmmm. I have a ready-made test case for the benefit case on this. However