Neils,
> - Master server with a battery-backed RAID controller and 4 SAS disks
> in RAID 0 - so NO mirroring here, due to max performance requirements.
> - Slave server setup with streaming replication on 4 HDDs in RAID 10.
> The setup will be done with synchronous_commit=off and
> synchronous_s
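One general aside on the synchronous_commit=off part (my own sketch, not
something from the quoted mail): synchronous_commit is an ordinary
user-settable parameter, so the durability trade-off doesn't have to be
cluster-wide; it can be limited to the sessions or transactions that
actually need the speed. Roughly:

    -- synchronous_commit can be changed without a restart, per session or
    -- per transaction; with it off, COMMIT returns before the WAL has been
    -- flushed to disk (and, with synchronous replication, before the
    -- standby has confirmed).
    SET synchronous_commit = off;          -- whole session

    BEGIN;
    SET LOCAL synchronous_commit = off;    -- just this transaction
    -- ... work that can tolerate losing the last few commits on a crash ...
    COMMIT;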
Emre,
> > LOG: process 4793 acquired ExclusiveLock on extension of relation
> > 305605 of database 16396 after 2348.675 ms
The reason you're seeing that message is that you have log_lock_waits turned on.
That message says that some process waited for 2.3 seconds to get a lock for
expanding the relation, i.e. for adding new pages to the end of it.
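For anyone else following along, a quick illustration (mine, not from Emre's
setup): log_lock_waits only reports waits that last longer than
deadlock_timeout, so the 2.3-second wait above was well past the default
threshold. Both settings can be inspected from any session:

    -- log_lock_waits logs a message whenever a lock wait exceeds
    -- deadlock_timeout
    SHOW log_lock_waits;     -- 'on' in this case, hence the message
    SHOW deadlock_timeout;   -- default 1s; shorter waits are never logged

    -- turning the logging off again requires superuser (or a config change)
    SET log_lock_waits = off;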
> We are not doing anything to postgres that would cause the rise and
> drop. Database activity is pretty consistent, nor are we doing any
> kind of purge. This week the drop occurred after 6 days. We are
> thinking it must be some kind of internal postgres activity, but we
> can't track it
> This may just be a typo, but if you really did create write (dirty)
> block device cache by writing the pg_dump file somewhere, then that
> is what it's supposed to do ;)
The pg_dump was across the network, so the only caching on the machine was
read caching.
> Read cache of course does not
I've run across some memory behavior on Linux I've never seen before.
Server running RHEL6 with 96GB of RAM.
Kernel 2.6.32
PostgreSQL 9.0
208GB database with fairly random accesses over 50% of the database.
Now, here's the weird part: even after a week of uptime, only 21 to 25GB of
cache is ever used.
Peter,
Check out pg_fincore. Still kind of risky on a production server, but does an
excellent job of measuring page access on Linux.
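For example, a hypothetical usage sketch (table name made up, and the exact
output columns vary by pgfincore version): it reports how much of a
relation's underlying files is sitting in the OS page cache.

    -- pgfincore uses mincore() to inspect the OS page cache for a
    -- relation's files.  On PostgreSQL 9.1+ it installs as an extension;
    -- on 9.0 and older its SQL script has to be loaded by hand.
    CREATE EXTENSION pgfincore;

    -- one row per segment of the (hypothetical) table, showing how many
    -- of its OS pages are currently resident in cache
    SELECT * FROM pgfincore('my_table');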
----- Original Message -----
> Baron Schwartz's recent post [1] on working set size got me to
> thinking.
> I'm well aware of how I can tell when my database's wor
> Is there an easy and unintrusive way to get such a metric as the
> aggregated query times? And to normalize it for how much work happens
> to have been going on at the time?
You'd pretty much need to do large-scale log harvesting combined with samples
of query concurrency taken several time
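To give an idea of the concurrency-sampling half (my own sketch, not an
existing tool; it assumes 9.2+, where pg_stat_activity has a "state" column,
while older releases would look at current_query instead), something like
this run every few seconds and stored somewhere gives you data points to
correlate against the timings harvested from the logs:

    -- sample the number of backends actively running a query right now;
    -- run on a schedule (cron + psql) and keep the results for later
    -- correlation with the per-query timings from the logs
    SELECT now()   AS sample_time,
           count(*) AS active_backends
    FROM pg_stat_activity
    WHERE state = 'active';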
All,
> The easiest place to start is by re-using the work already done by the
> TPC for benchmarking commercial databases. There are ports of the TPC
> workloads to PostgreSQL available in the DBT-2, DBT-3, and DBT-5
> tests;
Also EAStress, which I think the project still has a license for.
The
> mergejoinscansel doesn't currently try to fix up the histogram bounds
> by consulting indexes. At the time I was afraid of the costs of doing
> that, and I still am; but it would be a way to address this issue.
Oh? Hmmm. I have a ready-made test case for the benefit case on this.
However