On Jan 20, 2010, at 5:32 AM, fka...@googlemail.com wrote:
Bulk inserts into an indexed table are always significantly
slower than inserting unindexed and then indexing later.
Agreed. However, shouldn't this be included in the disk-time
counters? If so, it should be near 100%.
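A minimal sketch of the drop-index-then-reload pattern mentioned above (the table, index, and file path here are hypothetical, purely to show the ordering of the steps):

```sql
-- Hypothetical table and index; the point is the order of operations.
BEGIN;
DROP INDEX IF EXISTS orders_customer_idx;     -- drop secondary indexes first
COPY orders FROM '/tmp/orders.csv' WITH CSV;  -- bulk load with no index maintenance
CREATE INDEX orders_customer_idx
    ON orders (customer_id);                  -- one sorted index build at the end
COMMIT;
ANALYZE orders;                               -- refresh planner statistics
```

Each row inserted into an indexed table pays for an index update as it goes; a single CREATE INDEX at the end does one sorted build instead, which is why the reload variant is usually much faster.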
Well,
Scott Carey wrote:
On Jan 20, 2010, at 5:32 AM, fka...@googlemail.com wrote:
In the attachment you'll find 2 screenshots, perfmon34.png
and perfmon35.png (I hope 2x14 kB is o.k. for the mailing
list).
I don't think they made it to the list? I didn't see it, presumably
Scott got a
Both of those refer to the *drive* cache.
greg
On 21 Jan 2010 05:58, Greg Smith g...@2ndquadrant.com wrote:
Greg Stark wrote:That doesn't sound right. The kernel having 10% of
memory dirty doesn't mean...
Most safe ways ext3 knows how to initiate a write-out on something that must
go
On Wed, 20 Jan 2010, Greg Smith wrote:
Basically, to an extent, that's right. However, when you get 16 drives or
more into a system, then it starts being an issue.
I guess if I test a system with *only* 16 drives in it one day, maybe I'll
find out.
*Curious* What sorts of systems have you
On Thu, 21 Jan 2010, Greg Smith wrote:
In the attachment you'll find 2 screenshots, perfmon34.png
and perfmon35.png (I hope 2x14 kB is o.k. for the mailing
list).
I don't think they made it to the list?
No, it seems that no emails with image attachments ever make it through
the list server.
* Greg Smith g...@2ndquadrant.com [100121 00:58]:
Greg Stark wrote:
That doesn't sound right. The kernel having 10% of memory dirty
doesn't mean there's a queue you have to jump at all. You don't get
into any queue until the kernel initiates write-out which will be
based on the usage
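The dirty-memory thresholds being discussed are the Linux VM writeback knobs. A sketch of where they live (the values shown are illustrative of common defaults of that era, not recommendations):

```
# /etc/sysctl.conf (Linux) -- illustrative values, not tuning advice
vm.dirty_background_ratio = 10   # kernel starts background write-out at 10% of RAM dirty
vm.dirty_ratio = 40              # processes doing writes are throttled above 40% dirty
```

Until write-out is initiated (by crossing the background threshold, by timer expiry, or by an explicit fsync), dirty pages just sit in the page cache; there is no queue to jump.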
* Greg Smith:
Note the comment from the first article saying those delays can be 30
seconds or more. On multiple occasions, I've measured systems with
dozens of disks in a high-performance RAID1+0 with battery-backed
controller that could grind to a halt for 10, 20, or more seconds in
this
Aidan Van Dyk wrote:
Sure, if your WAL is on the same FS as your data, you're going to get
hit, and *especially* on ext3...
But, I think that's one of the reasons people usually recommend putting
WAL separate.
Separate disks can actually concentrate the problem. The writes to the
data disk
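A sketch of the usual way WAL was put on a separate filesystem in the 8.x era: move pg_xlog and leave a symlink behind. Real use requires stopping the server and pointing at the actual data directory; the paths below are scratch directories so the sketch is self-contained.

```shell
# Sketch: relocate pg_xlog to a dedicated WAL disk and leave a symlink
# behind (the usual 8.x-era approach). Scratch dirs stand in for the
# real data directory and WAL volume -- in real use, stop the server first.
DATADIR="$(mktemp -d)/data"; WALDISK="$(mktemp -d)"
mkdir -p "$DATADIR/pg_xlog"
mv "$DATADIR/pg_xlog" "$WALDISK/pg_xlog"
ln -s "$WALDISK/pg_xlog" "$DATADIR/pg_xlog"
ls -ld "$DATADIR/pg_xlog"    # now a symlink pointing at the WAL disk
```

Whether separate spindles help or, as noted above, concentrate the problem depends on whether the data-disk flush storms still stall the WAL writes behind them.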
* Greg Smith g...@2ndquadrant.com [100121 09:49]:
Aidan Van Dyk wrote:
Sure, if your WAL is on the same FS as your data, you're going to get
hit, and *especially* on ext3...
But, I think that's one of the reasons people usually recommend putting
WAL separate.
Separate disks can actually
Hi,
I'm not sure this is the right place to ask my question, so if it is
not, please let me know where I can get an answer.
I'm using PostgreSQL 8.4 on a Linux machine with 1.5 GB RAM, and I'm issuing
an update query with a where clause that updates approximately 100,000 rows
in a table
elias ghanem e.gha...@acteos.com wrote:
I'm not sure this is the right place to ask my question
Yes it is. You gave a lot of good information, but we'd have a
better shot at diagnosing the issue with a bit more. Please read
the following and resubmit with as much of the requested
Scott Marlowe wrote:
On Thu, Jan 14, 2010 at 12:17 PM, Carlo Stonebanks
stonec.regis...@sympatico.ca wrote:
4) Is this the right PG version for our needs?
8.3 is very stable. Update to the latest. 8.4 seems good, but I've
had, and still am having, problems with it crashing in
Aidan Van Dyk ai...@highrise.ca wrote:
But, I think that's one of the reasons people usually recommend
putting WAL separate. Even if it's just another partition on the
same (set of) disk(s), you get the benefit of not having to wait
for all the dirty ext3 pages from your whole database FS to
* Matthew Wakeling:
The data needs to be written first to the WAL, in order to provide
crash-safety. So you're actually writing 1600MB, not 800.
In addition to that, PostgreSQL 8.4.2 seems to pre-fill TOAST files (or
all relations?) with zeros when they are first written, which adds
another 400
Now, with ext4 moving to full barrier/fsync support, we could get to the
point where WAL in the main data FS can mimic the state where WAL is
separate, namely that WAL writes can jump the queue and be written
without waiting for the data pages to be flushed down to disk, but also
that you'll
On Thu, Jan 21, 2010 at 8:51 AM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
Scott Marlowe wrote:
On Thu, Jan 14, 2010 at 12:17 PM, Carlo Stonebanks
stonec.regis...@sympatico.ca wrote:
4) Is this the right PG version for our needs?
8.3 is very stable. Update to the latest. 8.4
Scott Marlowe wrote:
On Thu, Jan 21, 2010 at 8:51 AM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
Scott Marlowe wrote:
On Thu, Jan 14, 2010 at 12:17 PM, Carlo Stonebanks
stonec.regis...@sympatico.ca wrote:
4) Is this the right PG version for our needs?
8.3 is very
On Thu, Jan 21, 2010 at 9:44 AM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
Scott Marlowe wrote:
On Thu, Jan 21, 2010 at 8:51 AM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
Scott Marlowe wrote:
On Thu, Jan 14, 2010 at 12:17 PM, Carlo Stonebanks
Hi,
Thanks for your help, here's more details as you requested:
-The version of postgres is 8.4 (by the way, select pg_version() is not
working, but let's concentrate on the query issue)
Here's the full definition of the table with its indices:
-- Table: in_sortie
-- DROP TABLE in_sortie;
elias ghanem e.gha...@acteos.com wrote:
here's more details as you requested
You didn't include an EXPLAIN ANALYZE of the UPDATE statement.
-The version of postgres is 8.4 (by the way select pg_version() is
not working but let's concentrate on the query issue)
As far as I know, there is
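The requested plan capture would look roughly like this; the table name in_sortie is from the poster's definition above, but the columns and the where clause are hypothetical placeholders for the real statement. (Incidentally, the server version is reported by SELECT version(); there is no pg_version() function.)

```sql
-- Hypothetical shape of the requested EXPLAIN ANALYZE; substitute the
-- real UPDATE statement. Column names here are placeholders.
EXPLAIN ANALYZE
UPDATE in_sortie
   SET some_column = 'new value'
 WHERE some_filter_column = 'x';
```

Unlike plain EXPLAIN, EXPLAIN ANALYZE actually executes the statement, so on an UPDATE it should be run inside a transaction that is rolled back if the data must not change.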
On Thu, 2010-01-21 at 13:44 -0300, Alvaro Herrera wrote:
I think Devrim publishes debuginfo packages which you need to install
separately.
Right.
--
Devrim GÜNDÜZ, RHCE
Command Prompt - http://www.CommandPrompt.com
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
The issue we are seeing, beyond the reports taking over 26 hours, is that
the query seems to be CPU bound: it consumes an entire CPU and quite often
is sitting at 65%-90% WAIT. Now this is not
iowait; the disks are fine, 5000-6000 tps, 700K reads etc., with maybe
Hi Jeff,
Are you running VACUUM (without FULL) regularly? And if so, is that
insufficient?
Unfortunately, we have not run vacuumlo as often as we would like, and that
has caused a lot of garbage blobs to get generated by our application.
You can always expect some degree of bloat. Can you
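The tool for cleaning up those orphaned large objects is vacuumlo, shipped in PostgreSQL contrib. A sketch of an invocation (host, user, and database names are placeholders):

```shell
# vacuumlo (PostgreSQL contrib): delete large objects no longer
# referenced from any table. Connection parameters are placeholders.
vacuumlo -v -h dbhost -U appuser appdb
# afterwards, a regular VACUUM lets pg_largeobject's space be reused
```

Running it regularly keeps pg_largeobject from accumulating garbage faster than ordinary vacuuming can reclaim it.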
Does anyone know if there exists an open source implementation of the
TPC-C benchmark for postgresql somewhere?
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
tmp wrote:
Does anyone know if there exists an open source implementation of the
TPC-C benchmark for postgresql somewhere?
http://osdldbt.sourceforge.net/
http://wiki.postgresql.org/wiki/DBT-2
Pierre Frédéric Caillaud wrote:
Does postgres write something to the logfile whenever an fsync()
takes a suspiciously long amount of time?
Not specifically. If you're logging statements that take a while, you
can see this indirectly, via commits that take much longer than usual.
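That indirect approach is enabled with a single postgresql.conf setting; the threshold below is illustrative:

```
# postgresql.conf -- log any statement (including COMMIT) that runs
# longer than 1 second; -1 disables, 0 logs everything
log_min_duration_statement = 1000   # milliseconds
```

A COMMIT that normally returns in milliseconds but occasionally shows up in the log at multiple seconds is a reasonable hint that an fsync stalled behind a flush storm.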
elias ghanem wrote:
Actually this query is inside a function and this function is called
from a .sh file using the following syntax: psql -h $DB_HOST -p $DB_PORT
-d $DB_NAME -U $DB_USER -c SELECT testupdate()
(the function is called 100 times with a vacuum analyze after each call
for the
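The harness described above would look roughly like this as a shell loop; the environment variables are the poster's own placeholders and the function name is theirs:

```shell
# Sketch of the test harness described above: call the function 100
# times, vacuuming between calls. DB_* variables are the poster's
# placeholders for real connection settings.
for i in $(seq 1 100); do
    psql -h "$DB_HOST" -p "$DB_PORT" -d "$DB_NAME" -U "$DB_USER" \
         -c "SELECT testupdate()"
    psql -h "$DB_HOST" -p "$DB_PORT" -d "$DB_NAME" -U "$DB_USER" \
         -c "VACUUM ANALYZE"
done
```

One caveat with this structure: each psql -c runs in its own transaction, so timings include connection setup on every iteration.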
Tory M Blue wrote:
Any assistance would be appreciated; don't worry about slapping me
around, I need to figure this out. Otherwise I'm buying new hardware
where it may not be required.
What is the reporting query that takes 26 hours? You didn't seem to
include it, or any query plan information