Re: [HACKERS] Load Distributed Checkpoints test results

2007-06-20 Thread Greg Smith

On Wed, 20 Jun 2007, Bruce Momjian wrote:


I don't expect this patch to be perfect when it is applied.  I do expect
it to be a best effort, and it will get continual real-world testing during
beta and we can continue to improve this.


This is completely fair.  Consider my suggestions something that people 
might want to look out for during beta rather than a task Heikki should worry 
about before applying the patch.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [HACKERS] Load Distributed Checkpoints test results

2007-06-20 Thread Greg Smith

On Wed, 20 Jun 2007, Heikki Linnakangas wrote:

You mean the shift and "flattening" of the graph to the right in the delivery 
response time distribution graph?


Right, that's what ends up happening during the problematic cases.  To 
pick numbers out of the air, instead of 1% of the transactions getting 
nailed really hard, by spreading things out you might have 5% of them get 
slowed considerably but not awfully.  For some applications, that might be 
considered a step backwards.



I'd like to understand the underlying mechanism


I had to capture regular snapshots of the buffer cache internals via 
pg_buffercache to figure out where the breakdown was in my case.
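A minimal sketch of that kind of monitoring loop (assuming the contrib pg_buffercache module is installed, the 8.3-era usagecount column, and a placeholder connection string):

# Snapshot the shared buffer cache every few seconds so the dirty/clean mix
# and usage counts can be correlated with checkpoint activity afterwards.
# Requires contrib/pg_buffercache in the target database.
import time
import psycopg2

conn = psycopg2.connect("dbname=test")   # placeholder connection string
conn.set_isolation_level(0)              # autocommit: one query per snapshot

QUERY = """
    SELECT usagecount, isdirty, count(*) AS buffers
    FROM pg_buffercache
    GROUP BY usagecount, isdirty
    ORDER BY usagecount, isdirty
"""

while True:
    cur = conn.cursor()
    cur.execute(QUERY)
    stamp = time.strftime("%H:%M:%S")
    for usagecount, isdirty, buffers in cur.fetchall():
        print("%s usage=%s dirty=%s buffers=%d" % (stamp, usagecount, isdirty, buffers))
    cur.close()
    time.sleep(10)   # snapshot interval, seconds

Logging the output alongside the test's response time data is what makes it possible to see which buffer-cache state the slowdowns line up with.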


I don't have any good simple ideas on how to make it better in the 8.3 timeframe, 
so I don't think there's much to learn from repeating these tests.


Right now, it's not clear which of the runs represent normal behavior and 
which might be anomalies.  That's the thing you might learn if you had 10 
at each configuration instead of just 1.  The goal for the 8.3 timeframe 
in my mind would be to perhaps have enough data to give better guidelines 
for defaults and a range of useful settings in the documentation.


The only other configuration I'd be curious to see is pushing the number 
of warehouses even more to see if the 90% numbers spread further from 
current behavior.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [HACKERS] Load Distributed Checkpoints test results

2007-06-20 Thread Heikki Linnakangas

Joshua D. Drake wrote:
The only comment I have is that it could be useful to 
be able to turn this feature off via GUC. Other than that, I think it is 
great.


Yeah, you can do that.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Load Distributed Checkpoints test results

2007-06-20 Thread Joshua D. Drake

Bruce Momjian wrote:

Greg Smith wrote:



I don't expect this patch to be perfect when it is applied.  I do expect
it to be a best effort, and it will get continual real-world testing during
beta and we can continue to improve this.  Right now, we know we have a
serious issue with checkpoint I/O, and this patch is going to improve
that in most cases.  I don't want to see us reject it or greatly delay
beta as we try to make it perfect.

My main point is that we should keep trying to make the patch better, but
the patch doesn't have to be perfect to get applied.  I don't want us to
get into a death-by-testing spiral.


Death by testing? The only comment I have is that it could be useful to 
be able to turn this feature off via GUC. Other than that, I think it is 
great.


Joshua D. Drake







--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [HACKERS] Load Distributed Checkpoints test results

2007-06-20 Thread Heikki Linnakangas

Greg Smith wrote:
While it shows up in the 90% figure, what happens is most obvious in the 
response time distribution graphs.  Someone who is currently getting a 
run like #295 right now: http://community.enterprisedb.com/ldc/295/rt.html


Might be really unhappy if they turn on LDC expecting to smooth out 
checkpoints and get the shift of #296 instead: 
http://community.enterprisedb.com/ldc/296/rt.html


You mean the shift and "flattening" of the graph to the right in the 
delivery response time distribution graph? Looking at the other runs, 
that graph looks sufficiently different between the two baseline runs 
and the patched runs that I really wouldn't draw any conclusion from that.


In any case you *can* disable LDC if you want to.

That is of course cherry-picking the most extreme examples.  But it 
illustrates my concern about the possibility for LDC making things worse 
on a really overloaded system, which is kind of counter-intuitive 
because you might expect that would be the best case for its improvements.


Well, it is indeed cherry-picking, so I still don't see how LDC could 
make things worse on a really overloaded system. I grant you there might 
indeed be one, but I'd like to understand the underlying mechanism, or 
at least see one.


Since there is so much variability in results 
when you get into this territory, you really need to run a lot of these 
tests to get a feel for the spread of behavior.


I think that's the real lesson from this. In any case, at least LDC 
doesn't seem to hurt much in any of the test configurations tested so 
far, and smooths the checkpoints a lot in most configurations.


 I spent about a week of 
continuously running tests stalking this bugger before I felt I'd mapped 
out the boundaries with my app.  You've got your own priorities, but I'd 
suggest you try to find enough time for a more exhaustive look at this 
area before nailing down the final form for the patch.


I don't have any good simple ideas on how to make it better in the 8.3 
timeframe, so I don't think there's much to learn from repeating these 
tests.


That said, running tests is easy and doesn't take much effort. If you 
have suggestions for configurations or workloads to test, I'll be happy 
to do that.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Load Distributed Checkpoints test results

2007-06-20 Thread Bruce Momjian
Greg Smith wrote:
> I think it does a better job of showing how LDC can shift the top 
> percentile around under heavy load, even though there are runs where it's 
> a clear improvement.  Since there is so much variability in results when 
> you get into this territory, you really need to run a lot of these tests 
> to get a feel for the spread of behavior.  I spent about a week of 
> continuously running tests stalking this bugger before I felt I'd mapped 
> out the boundaries with my app.  You've got your own priorities, but I'd 
> suggest you try to find enough time for a more exhaustive look at this 
> area before nailing down the final form for the patch.

OK, I have hit my limit on people asking for more testing.  I am not
against testing, but I don't want to get into a situation where we just
keep asking for more tests and not move forward.  I am going to rely on
the patch submitters to suggest when enough testing has been done and
move on.

I don't expect this patch to be perfect when it is applied.  I do expect
it to be a best effort, and it will get continual real-world testing during
beta and we can continue to improve this.  Right now, we know we have a
serious issue with checkpoint I/O, and this patch is going to improve
that in most cases.  I don't want to see us reject it or greatly delay
beta as we try to make it perfect.

My main point is that we should keep trying to make the patch better, but
the patch doesn't have to be perfect to get applied.  I don't want us to
get into a death-by-testing spiral.

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Load Distributed Checkpoints test results

2007-06-20 Thread Greg Smith

On Wed, 20 Jun 2007, Heikki Linnakangas wrote:

Another series with 150 warehouses is more interesting. At that # of 
warehouses, the data disks are 100% busy according to iostat. The 90th 
percentile response times are somewhat higher with LDC, though the 
variability in both the baseline and LDC test runs seems to be pretty high.


Great, this is exactly the behavior I had observed and wanted someone 
else to independently run into.  When you're in 100% disk busy land, LDC 
can shift the distribution of bad transactions around in a way that some 
people may not be happy with, and that might represent a step backward 
from the current code for them.  I hope you can understand now why I've 
been so vocal that it must be possible to pull this new behavior out so 
the current form of checkpointing is still available.


While it shows up in the 90% figure, what happens is most obvious in the 
response time distribution graphs.  Someone who is currently getting a run 
like #295 right now: http://community.enterprisedb.com/ldc/295/rt.html


Might be really unhappy if they turn on LDC expecting to smooth out 
checkpoints and get the shift of #296 instead: 
http://community.enterprisedb.com/ldc/296/rt.html


That is of course cherry-picking the most extreme examples.  But it 
illustrates my concern about the possibility for LDC making things worse 
on a really overloaded system, which is kind of counter-intuitive because 
you might expect that would be the best case for its improvements.


When I summarize the percentile behavior from your results with 150 
warehouses in a table like this:


Test    LDC %   90%
295     None    3.703
297     None    4.432
292     10      3.432
298     20      5.925
296     30      5.992
294     40      4.132

I think it does a better job of showing how LDC can shift the top 
percentile around under heavy load, even though there are runs where it's 
a clear improvement.  Since there is so much variability in results when 
you get into this territory, you really need to run a lot of these tests 
to get a feel for the spread of behavior.  I spent about a week of 
continuously running tests stalking this bugger before I felt I'd mapped 
out the boundaries with my app.  You've got your own priorities, but I'd 
suggest you try to find enough time for a more exhaustive look at this 
area before nailing down the final form for the patch.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [HACKERS] Load Distributed Checkpoints test results

2007-06-20 Thread Heikki Linnakangas
I've uploaded the latest test results to the results page at 
http://community.enterprisedb.com/ldc/


The test results on the index page are not in a completely logical 
order, sorry about that.


I ran a series of tests with 115 warehouses, and no surprises there. LDC 
smooths the checkpoints nicely.


Another series with 150 warehouses is more interesting. At that # of 
warehouses, the data disks are 100% busy according to iostat. The 90th 
percentile response times are somewhat higher with LDC, though the 
variability in both the baseline and LDC test runs seems to be pretty 
high. Looking at the response time graphs, even with LDC there are clear 
checkpoint spikes, but they're much less severe than without.


Another series was with 90 warehouses, but without think times, driving 
the system to full load. LDC seems to smooth the checkpoints very nicely 
in these tests.


Heikki Linnakangas wrote:

Gregory Stark wrote:

"Heikki Linnakangas" <[EMAIL PROTECTED]> writes:
Now that the checkpoints are spread out more, the response times are 
very

smooth.


So obviously the reason the results are so dramatic is that the 
checkpoints
used to push the i/o bandwidth demand up over 100%. By spreading it 
out you
can see in the io charts that even during the checkpoint the i/o busy 
rate

stays just under 100% except for a few data points.

If I understand it right Greg Smith's concern is that in a busier 
system where
even *with* the load distributed checkpoint the i/o bandwidth demand 
during t
he checkpoint was *still* being pushed over 100% then spreading out 
the load

would only exacerbate the problem by extending the outage.

To that end it seems like what would be useful is a pair of tests with 
and
without the patch with about 10% larger warehouse size (~ 115) which 
would

push the i/o bandwidth demand up to about that level.


I still don't see how spreading the writes could make things worse, but 
running more tests is easy. I'll schedule tests with more warehouses 
over the weekend.


It might even make sense to run a test with an outright overloaded to 
see if
the patch doesn't exacerbate the condition. Something with a warehouse 
size of
maybe 150. I would expect it to fail the TPCC constraints either way 
but what
would be interesting to know is whether it fails by a larger margin 
with the

LDC behaviour or a smaller margin.


I'll do that as well, though experiences with tests like that in the 
past have been that it's hard to get repeatable results that way.




--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Load Distributed Checkpoints test results

2007-06-18 Thread Greg Smith

On Mon, 18 Jun 2007, Simon Riggs wrote:

Smoother checkpoints mean smaller resource queues when a burst coincides 
with a checkpoint, so anybody with throughput-maximised or bursty apps 
should want longer, smooth checkpoints.


True as long as two conditions hold:

1) Buffers needed to fill allocation requests are still being written fast 
enough.  The buffer allocation code starts burning a lot of CPU+lock 
resources when many clients are all searching the pool looking for 
buffers and there aren't many clean ones to be found.  The way the current 
checkpoint code starts at the LRU point and writes everything dirty in the 
order new buffers will be allocated in, as fast as possible, means it's 
doing the optimal procedure to keep this from happening.  It's being 
presumed that making the LRU writer active will mitigate this issue; my 
experience suggests that may not be as effective as hoped--unless it gets 
changed so that it's allowed to decrement usage_count.
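A toy model of that allocation dynamic (a simplified sketch of a clock sweep, not the actual bufmgr.c code): when nearly every buffer is dirty with a non-zero usage count, each allocation sweeps a long way and frequently ends up writing a dirty page itself.

# Simplified clock-sweep: an allocation decrements usage counts as it sweeps
# and can only evict a buffer whose usage_count is 0; if that victim is dirty
# the backend has to write it out itself, which is the expensive case.
import random

class Buffer:
    def __init__(self):
        self.usage_count = random.randint(0, 5)
        self.dirty = random.random() < 0.85   # mostly dirty under heavy write load

pool = [Buffer() for _ in range(1000)]
hand = 0

def allocate():
    """Return (buffers inspected, whether a dirty page had to be written)."""
    global hand
    inspected = 0
    while True:
        buf = pool[hand]
        hand = (hand + 1) % len(pool)
        inspected += 1
        if buf.usage_count > 0:
            buf.usage_count -= 1        # the sweep itself costs CPU and locking
            continue
        backend_write = buf.dirty       # usage_count 0: evict; dirty means we write it
        buf.dirty = False
        buf.usage_count = 1             # freshly loaded page starts recently used
        return inspected, backend_write

results = [allocate() for _ in range(200)]
swept = sum(r[0] for r in results)
writes = sum(1 for r in results if r[1])
print("average buffers swept per allocation: %.1f" % (swept / 200.0))
print("allocations that wrote a dirty page:  %d / 200" % writes)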


To pick one example of a direction I'm a little concerned about related to 
this, Itagaki's sorted writes results look very interesting.  But as his 
test system is such that the actual pgbench TPS numbers are 1/10 of the 
ones I was seeing when I started having ugly buffer allocation issues, I'm 
real sure the particular test he's running isn't sensitive to issues in 
this area at all; there's just not enough buffer cache churn if you're 
only doing a couple of hundred TPS for this to happen.


2) The checkpoint still finishes in time.

The thing you can't forget about when dealing with an overloaded system is 
that there's no such thing as lowering the load of the checkpoint such 
that it doesn't have a bad impact.  Assume new transactions are being 
generated by an upstream source such that the database itself is the 
bottleneck, and you're always filling 100% of I/O capacity.  All I'm 
trying to get everyone to consider is that if you have a large pool of 
dirty buffers to deal with in this situation, it's possible (albeit 
difficult) to get into a situation where if the checkpoint doesn't write 
out the dirty buffers fast enough, the client backends will evacuate them 
instead in a way that makes the whole process less efficient than the 
current behavior.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [HACKERS] Load Distributed Checkpoints test results

2007-06-18 Thread Simon Riggs
On Sun, 2007-06-17 at 01:36 -0400, Greg Smith wrote:

> The last project I was working on, any checkpoint that caused a 
> transaction to slip for more than 5 seconds would cause a data loss.  One 
> of the defenses against that happening is that you have a wicked fast 
> transaction rate to clear the buffer out when things are going well, but by 
> no means is that rate the important thing--never having the response time 
> halt for so long that transactions get lost is.

You would want longer checkpoints in that case.

You're saying you don't want long checkpoints because they cause an
effective outage. The current situation is that checkpoints are so
severe that they cause an effective halt to processing, even though
checkpoints allow processing to continue. Checkpoints don't hold any
locks that prevent normal work from occurring but they did cause an
unthrottled burst of work to occur that raised expected service times
dramatically on an already busy server.

There were a number of effects contributing to the high impact of
checkpointing. Heikki's recent changes reduce the impact of checkpoints
so that they do *not* halt other processing. Longer checkpoints do *not*
mean longer halts in processing, they actually reduce the halt in
processing. Smoother checkpoints mean smaller resource queues when a
burst coincides with a checkpoint, so anybody with throughput-maximised
or bursty apps should want longer, smooth checkpoints.

You're right to ask for a minimum write rate, since this allows very
small checkpoints to complete in reduced times. There's no gain from
having long checkpoints per se, just the reduction in peak write rate
they typically cause.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com





Re: [HACKERS] Load Distributed Checkpoints test results

2007-06-16 Thread Heikki Linnakangas

Josh Berkus wrote:
Where is the most current version of this patch?  I want to test it on TPCE, 
but there seem to be  4-5 different versions floating around, and the patch 
tracker hasn't been updated.


It would be the ldc-justwrites-2.patch:
http://archives.postgresql.org/pgsql-patches/2007-06/msg00149.php

Thanks in advance for the testing!

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Load Distributed Checkpoints test results

2007-06-16 Thread Greg Smith

On Fri, 15 Jun 2007, Gregory Stark wrote:


But what you're concerned about is not OLTP performance at all.


It's an OLTP system most of the time that periodically gets unexpectedly 
high volume.  The TPC-E OLTP test suite actually has a MarketFeed 
component in it that has similar properties to what I was fighting 
with.  In a real-world Market Feed, you spec the system to survive a very 
high volume day of trades.  But every now and then there's some event that 
causes volumes to spike way outside of anything you would ever be able to plan 
for, and much data ends up getting lost as a result of systems not being 
able to keep up.  A look at the 1987 "Black Monday" crash is informative 
here: http://en.wikipedia.org/wiki/Black_Monday_(1987)


But the point is you're concerned with total throughput and not response 
time. You don't have a fixed rate imposed by outside circumstances with 
which you have to keep up all the time. You just want to have the 
highest throughput overall.


Actually, I think I care about response time more than you do.  In a 
typical data logging situation, there is some normal rate at which you 
expect transactions to arrive.  There's usually something memory-based 
upstream that can buffer a small amount of delay, so an occasional short 
checkpoint blip can be tolerated.  But if there's ever a really extended 
one, you actually start losing data when the buffers overflow.


The last project I was working on, any checkpoint that caused a 
transaction to slip for more than 5 seconds would cause a data loss.  One 
of the defenses against that happening is that you have a wicked fast 
transaction rate to clear the buffer out when things are going well, but by 
no means is that rate the important thing--never having the response time 
halt for so long that transactions get lost is.
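To make that failure mode concrete, a toy model (purely illustrative; the arrival rate and the 5 seconds of headroom are placeholders matching the description above):

# Toy model of the data-logging pipeline: events arrive at a fixed rate, an
# in-memory queue upstream absorbs short commit stalls, and anything that
# arrives while the queue is full is lost.
ARRIVAL_RATE = 1000        # events per second from the upstream source
QUEUE_CAPACITY = 5000      # about 5 seconds of headroom, per the constraint above

def lost_events(stall_seconds):
    """Events dropped if the database stops accepting commits for stall_seconds."""
    queued = 0
    lost = 0
    for _ in range(int(stall_seconds * ARRIVAL_RATE)):
        if queued < QUEUE_CAPACITY:
            queued += 1            # buffered; flushed once the stall ends
        else:
            lost += 1              # buffer full: this is the data loss
    return lost

for stall in (1, 5, 10, 60):
    print("%3d s checkpoint stall -> %d events lost" % (stall, lost_events(stall)))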


The good news is that this should be pretty easy to test though. The 
main competitor for DBT2 is BenchmarkSQL whose main deficiency is 
precisely the lack of support for the think times.


Maybe you can get something useful out of that one.  I found that the 
JDBC layer in the middle lowered overall throughput and distanced me from 
what was happening enough that it blurred what was going on.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [HACKERS] Load Distributed Checkpoints test results

2007-06-16 Thread Josh Berkus
All,

Where is the most current version of this patch?  I want to test it on TPCE, 
but there seem to be  4-5 different versions floating around, and the patch 
tracker hasn't been updated.

-- 
Josh Berkus
PostgreSQL @ Sun
San Francisco



Re: [HACKERS] Load Distributed Checkpoints test results

2007-06-15 Thread PFC
On Fri, 15 Jun 2007 22:28:34 +0200, Gregory Maxwell <[EMAIL PROTECTED]>  
wrote:



On 6/15/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
While in theory spreading out the writes could have a detrimental effect I
think we should wait until we see actual numbers. I have a pretty strong
suspicion that the effect would be pretty minimal. We're still doing the same
amount of i/o total, just with a slightly less chance for the elevator
algorithm to optimize the pattern.


..and the sort patching suggests that the OS's elevator isn't doing a
great job for large flushes in any case. I wouldn't be shocked to see
load distributed checkpoints cause an unconditional improvement since
they may do better at avoiding the huge burst behavior that is
overrunning the OS elevator in any case.


	...also consider that if someone uses RAID5, sorting the writes may  
produce more full-stripe writes, which don't need the read-then-write  
RAID5 performance killer...




Re: [HACKERS] Load Distributed Checkpoints test results

2007-06-15 Thread Gregory Maxwell

On 6/15/07, Gregory Stark <[EMAIL PROTECTED]> wrote:

While in theory spreading out the writes could have a detrimental effect I
think we should wait until we see actual numbers. I have a pretty strong
suspicion that the effect would be pretty minimal. We're still doing the same
amount of i/o total, just with a slightly less chance for the elevator
algorithm to optimize the pattern.


..and the sort patching suggests that the OS's elevator isn't doing a
great job for large flushes in any case. I wouldn't be shocked to see
load distributed checkpoints cause an unconditional improvement since
they may do better at avoiding the huge burst behavior that is
overrunning the OS elevator in any case.



Re: [HACKERS] Load Distributed Checkpoints test results

2007-06-15 Thread Gregory Stark
"Greg Smith" <[EMAIL PROTECTED]> writes:

> On Fri, 15 Jun 2007, Gregory Stark wrote:
>
>> If I understand it right Greg Smith's concern is that in a busier system
>> where even *with* the load distributed checkpoint the i/o bandwidth demand
>> during the checkpoint was *still* being pushed over 100%, then spreading out
>> the load would only exacerbate the problem by extending the outage.
>
> Thank you for that very concise summary; that's exactly what I've run into.
> DBT2 creates a heavy write load, but it's not testing a real burst behavior
> where something is writing as fast as it's possible to.

Ah, thanks, that's precisely the distinction that I was missing. It's funny,
something that was so counter-intuitive initially has become so ingrained in
my thinking that I didn't even notice I was assuming it any more.

DBT2 has "think times" which it uses to limit the flow of transactions. This
is critical to ensuring that you're forced to increase the scale of the
database if you want to report larger transaction rates which of course is
what everyone wants to brag about.

Essentially this is what makes it an OLTP benchmark. You're measuring how well
you can keep up with a flow of transactions which arrive at a fixed speed
independent of the database.

But what you're concerned about is not OLTP performance at all. It's a kind of
DSS system -- perhaps there's another TLA that's more precise. But the point
is you're concerned with total throughput and not response time. You don't
have a fixed rate imposed by outside circumstances with which you have to keep
up all the time. You just want to have the highest throughput overall.

The good news is that this should be pretty easy to test though. The main
competitor for DBT2 is BenchmarkSQL whose main deficiency is precisely the
lack of support for the think times. We can run BenchmarkSQL runs to see if
the patch impacts performance when it's set to run as fast as possible with no
think times.

While in theory spreading out the writes could have a detrimental effect I
think we should wait until we see actual numbers. I have a pretty strong
suspicion that the effect would be pretty minimal. We're still doing the same
amount of i/o total, just with a slightly less chance for the elevator
algorithm to optimize the pattern.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com




Re: [HACKERS] Load Distributed Checkpoints test results

2007-06-15 Thread Heikki Linnakangas

Gregory Stark wrote:

"Heikki Linnakangas" <[EMAIL PROTECTED]> writes:

Now that the checkpoints are spread out more, the response times are very
smooth.


So obviously the reason the results are so dramatic is that the checkpoints
used to push the i/o bandwidth demand up over 100%. By spreading it out you
can see in the io charts that even during the checkpoint the i/o busy rate
stays just under 100% except for a few data points.

If I understand it right Greg Smith's concern is that in a busier system where
even *with* the load distributed checkpoint the i/o bandwidth demand during
the checkpoint was *still* being pushed over 100%, then spreading out the load
would only exacerbate the problem by extending the outage.

To that end it seems like what would be useful is a pair of tests with and
without the patch with about 10% larger warehouse size (~ 115) which would
push the i/o bandwidth demand up to about that level.


I still don't see how spreading the writes could make things worse, but 
running more tests is easy. I'll schedule tests with more warehouses 
over the weekend.



It might even make sense to run a test with an outright overloaded system to see if
the patch doesn't exacerbate the condition. Something with a warehouse size of
maybe 150. I would expect it to fail the TPCC constraints either way but what
would be interesting to know is whether it fails by a larger margin with the
LDC behaviour or a smaller margin.


I'll do that as well, though experiences with tests like that in the 
past have been that it's hard to get repeatable results that way.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Load Distributed Checkpoints test results

2007-06-15 Thread Greg Smith

On Fri, 15 Jun 2007, Gregory Stark wrote:

If I understand it right Greg Smith's concern is that in a busier system 
where even *with* the load distributed checkpoint the i/o bandwidth 
demand during the checkpoint was *still* being pushed over 100%, then 
spreading out the load would only exacerbate the problem by extending 
the outage.


Thank you for that very concise summary; that's exactly what I've run 
into.  DBT2 creates a heavy write load, but it's not testing a real burst 
behavior where something is writing as fast as it's possible to.


I've been involved in applications that are more like a data logging 
situation, where periodically you get some data source tossing 
transactions in as fast as it will hit disk--the upstream source 
temporarily becomes faster at generating data during these periods than 
the database itself can be.  Under normal conditions, the LDC smoothing 
would be a win, as it would lower the number of times the entire flow of 
operations got stuck.  But at these peaks it will, as you say, extend the 
outage.


It might even make sense to run a test with an outright overloaded system to 
see if the patch doesn't exacerbate the condition.


Exactly.  I expect that it will make things worse, but I'd like to keep an 
eye on making sure the knobs are available so that it's only slightly 
worse.


I think it's important to at least recognize that someone who wants LDC 
normally might occasionally have a period where they're completely 
overloaded, and that this new feature doesn't have an unexpected breakdown 
when that happens.  I'm still struggling with creating a simple test case 
to demonstrate what I'm concerned about.  I'm not familiar enough with the 
TPC testing to say whether your suggestions for adjusting warehouse size 
would accomplish that (because the flow is so different I had to abandon 
working with that a while ago as not being representative of what I was 
doing), but I'm glad you're thinking about it.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [HACKERS] Load Distributed Checkpoints test results

2007-06-15 Thread Gregory Stark
"Heikki Linnakangas" <[EMAIL PROTECTED]> writes:

> I ran another series of tests, with a less aggressive bgwriter_delay setting,
> which also affects the minimum rate of the writes in the WIP patch I used.
>
> Now that the checkpoints are spread out more, the response times are very
> smooth.

So obviously the reason the results are so dramatic is that the checkpoints
used to push the i/o bandwidth demand up over 100%. By spreading it out you
can see in the io charts that even during the checkpoint the i/o busy rate
stays just under 100% except for a few data points.

If I understand it right Greg Smith's concern is that in a busier system where
even *with* the load distributed checkpoint the i/o bandwidth demand during
the checkpoint was *still* being pushed over 100%, then spreading out the load
would only exacerbate the problem by extending the outage.

To that end it seems like what would be useful is a pair of tests with and
without the patch with about 10% larger warehouse size (~ 115) which would
push the i/o bandwidth demand up to about that level.

It might even make sense to run a test with an outright overloaded system to see if
the patch doesn't exacerbate the condition. Something with a warehouse size of
maybe 150. I would expect it to fail the TPCC constraints either way but what
would be interesting to know is whether it fails by a larger margin with the
LDC behaviour or a smaller margin.

Even just the fact that we're passing at 105 warehouses -- and apparently with
quite a bit of headroom too -- whereas previously we were failing at that
level on this hardware is a positive result as far as the TPCC benchmark
methodology is concerned.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com




Re: [HACKERS] Load Distributed Checkpoints test results

2007-06-15 Thread Heikki Linnakangas

Heikki Linnakangas wrote:
Here's results from a batch of test runs with LDC. This patch only 
spreads out the writes, fsyncs work as before. This patch also includes 
the optimization that we don't write buffers that were dirtied after 
starting the checkpoint.


http://community.enterprisedb.com/ldc/

See tests 276-280. 280 is the baseline with no patch attached, the 
others are with load distributed checkpoints with different values for 
checkpoint_write_percent. But after running the tests I noticed that the 
spreading was actually controlled by checkpoint_write_rate, which sets 
the minimum rate for the writes, so all those tests with the patch 
applied are effectively the same; the writes were spread over a period 
of 1 minute. I'll fix that setting and run more tests.


I ran another series of tests, with a less aggressive bgwriter_delay 
setting, which also affects the minimum rate of the writes in the WIP 
patch I used.


Now that the checkpoints are spread out more, the response times are 
very smooth.


With the 40% checkpoint_write_percent setting, the checkpoints last ~3 
minutes. About 85% of the buffer cache is dirty at the beginning of 
checkpoints, and thanks to the optimization of not writing pages dirtied 
after checkpoint start, only ~47% of those are actually written by the 
checkpoint. That explains why the checkpoints only last ~3 minutes, and 
not checkpoint_timeout*checkpoint_write_percent, which would be 6 
minutes. The estimation of how much progress has been made and how much 
is left doesn't take the gain from that optimization into account.
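Spelling out that arithmetic (the 15-minute checkpoint_timeout is implied by the stated 40% = 6 minute figure; the pacing model here is a simplification):

# Back-of-the-envelope check of the ~3 minute write phase reported above.
# The write phase paces itself to spread the work estimated at checkpoint
# start over checkpoint_timeout * checkpoint_write_percent, but pages dirtied
# after the start are skipped, so it runs out of work early.
checkpoint_timeout_min = 15.0      # implied by 0.40 * timeout = 6 minutes
checkpoint_write_percent = 0.40
fraction_actually_written = 0.47   # of the buffers dirty at checkpoint start

planned_window = checkpoint_timeout_min * checkpoint_write_percent
actual_duration = planned_window * fraction_actually_written

print("planned write window: %.1f min" % planned_window)     # 6.0
print("estimated duration:   %.1f min" % actual_duration)    # ~2.8, i.e. ~3 min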


The sync phase only takes ~5 seconds. I'm very happy with these results.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Load Distributed Checkpoints test results

2007-06-14 Thread ITAGAKI Takahiro

Heikki Linnakangas <[EMAIL PROTECTED]> wrote:

> Here's results from a batch of test runs with LDC. This patch only 
> spreads out the writes, fsyncs work as before.

I saw similar results in my tests. Spreading only the writes is enough
for OLTP, at least on Linux with a middle- or high-grade storage system.
It also works well on a desktop-grade Windows machine.

However, I don't know how it works on other OSes, including Solaris
and FreeBSD, that have different I/O policies. Would anyone test it
in those environments?

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center





Re: [HACKERS] Load Distributed Checkpoints test results

2007-06-13 Thread Josh Berkus
Greg,

> However TPC-E has even more stringent requirements:

I'll see if I can get our TPCE people to test this, but I'd say that the 
existing patch is already good enough to be worth accepting based on the TPCC 
results.

However, I would like to see some community testing on oddball workloads (like 
huge ELT operations and read-only workloads) to see if the patch imposes any 
extra overhead on non-OLTP databases.

-- 
Josh Berkus
PostgreSQL @ Sun
San Francisco



Re: [HACKERS] Load Distributed Checkpoints test results

2007-06-13 Thread Gregory Stark

"Heikki Linnakangas" <[EMAIL PROTECTED]> writes:

> The response time graphs show that the patch reduces the max (new-order)
> response times during checkpoints from ~40-60 s to ~15-20 s. 

I think that's the headline number here. The worst-case response time is
reduced from about 60s to about 17s. That's pretty impressive on its own. It
would be worth knowing if that benefit goes away if we push the machine again
to the edge of its i/o bandwidth.

> The change in overall average response times is also very significant. 1.5s
> without patch, and ~0.3-0.4s with the patch for new-order transactions. That
> also means that we pass the TPC-C requirement that 90th percentile of response
> times must be < average.

Incidentally this is backwards: the 90th percentile response time must be
greater than the average response time for that transaction.

This isn't actually a very stringent test given that most of the data points
in the 90th percentile are actually substantially below the maximum. It's
quite possible to achieve it even with maximum response times above 60s.

However TPC-E has even more stringent requirements:

During Steady State the throughput of the SUT must be sustainable for the
remainder of a Business Day started at the beginning of the Steady State.

Some aspects of the benchmark implementation can result in rather
insignificant but frequent variations in throughput when computed over
somewhat shorter periods of time. To meet the sustainable throughput
requirement, the cumulative effect of these variations over one Business
Day must not exceed 2% of the Reported Throughput.

Comment 1: This requirement is met when the throughput computed over any
period of one hour, sliding over the Steady State by increments of ten
minutes, varies from the Reported Throughput by no more than 2%.

Some aspects of the benchmark implementation can result in rather
significant but sporadic variations in throughput when computed over some
much shorter periods of time. To meet the sustainable throughput
requirement, the cumulative effect of these variations over one Business
Day must not exceed 20% of the Reported Throughput.

Comment 2: This requirement is met when the throughput level computed over
any period of ten minutes, sliding over the Steady State by increments of one
minute, varies from the Reported Throughput by no more than 20%.
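Those two comments boil down to a sliding-window check that is easy to express; a sketch over a per-minute throughput log (illustrative only, not official TPC-E tooling):

# Steady-state check from the comments above: average throughput over every
# 60-minute window (sliding by 10 minutes) must stay within 2% of the Reported
# Throughput, and over every 10-minute window (sliding by 1 minute) within 20%.
def window_violations(per_minute_tx, reported_tpm, window, step, tolerance):
    """Return (start_minute, average) for each window that deviates too much."""
    bad = []
    for start in range(0, len(per_minute_tx) - window + 1, step):
        avg = sum(per_minute_tx[start:start + window]) / float(window)
        if abs(avg - reported_tpm) > tolerance * reported_tpm:
            bad.append((start, avg))
    return bad

def passes_steady_state(per_minute_tx, reported_tpm):
    hourly = window_violations(per_minute_tx, reported_tpm, 60, 10, 0.02)
    ten_minute = window_violations(per_minute_tx, reported_tpm, 10, 1, 0.20)
    return not hourly and not ten_minute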


-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com




[HACKERS] Load Distributed Checkpoints test results

2007-06-13 Thread Heikki Linnakangas
Here's results from a batch of test runs with LDC. This patch only 
spreads out the writes, fsyncs work as before. This patch also includes 
the optimization that we don't write buffers that were dirtied after 
starting the checkpoint.


http://community.enterprisedb.com/ldc/

See tests 276-280. 280 is the baseline with no patch attached, the 
others are with load distributed checkpoints with different values for 
checkpoint_write_percent. But after running the tests I noticed that the 
spreading was actually controlled by checkpoint_write_rate, which sets 
the minimum rate for the writes, so all those tests with the patch 
applied are effectively the same; the writes were spread over a period 
of 1 minute. I'll fix that setting and run more tests.


The response time graphs show that the patch reduces the max (new-order) 
response times during checkpoints from ~40-60 s to ~15-20 s. The change 
in minute by minute average is even more significant.


The change in overall average response times is also very significant. 
1.5s without patch, and ~0.3-0.4s with the patch for new-order 
transactions. That also means that we pass the TPC-C requirement that 
90th percentile of response times must be < average.



All that said, there are still significant checkpoint spikes present, even 
though they're much less severe than without the patch. I'm willing to 
settle for this for 8.3. Does anyone want to push for more testing and 
thinking about spreading the fsyncs as well, and/or adding a delay between 
writes and fsyncs?
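For reference, spreading all three phases with the percent-of-interval knobs documented in the attached patch would look roughly like this (an illustrative sketch; the attached patch itself only spreads the writes):

# Rough shape of a three-phase checkpoint: spread the buffer writes over
# checkpoint_write_percent of the checkpoint interval, nap so the kernel can
# start flushing on its own, then spread the fsyncs over checkpoint_sync_percent.
import time

checkpoint_timeout = 300.0   # seconds; the interval the percentages refer to
write_percent = 0.50         # checkpoint_write_percent default in the patch
nap_percent   = 0.10         # checkpoint_nap_percent default
sync_percent  = 0.20         # checkpoint_sync_percent default

def spread_checkpoint(dirty_buffers, files_to_sync, write_page, fsync_file):
    # Phase 1: pace the page writes across the write window.
    window = checkpoint_timeout * write_percent
    for buf in dirty_buffers:
        write_page(buf)
        time.sleep(window / max(1, len(dirty_buffers)))
    # Phase 2: nap, giving the OS a chance to write back dirty pages itself.
    time.sleep(checkpoint_timeout * nap_percent)
    # Phase 3: pace the fsyncs across the sync window.
    window = checkpoint_timeout * sync_percent
    for f in files_to_sync:
        fsync_file(f)
        time.sleep(window / max(1, len(files_to_sync)))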


Attached is the patch used in the tests. It still needs some love..

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
Index: doc/src/sgml/config.sgml
===
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/doc/src/sgml/config.sgml,v
retrieving revision 1.126
diff -c -r1.126 config.sgml
*** doc/src/sgml/config.sgml	7 Jun 2007 19:19:56 -	1.126
--- doc/src/sgml/config.sgml	12 Jun 2007 08:16:55 -
***
*** 1565,1570 
--- 1565,1619 

   
  
+     <varlistentry id="guc-checkpoint-write-percent" xreflabel="checkpoint_write_percent">
+      <term><varname>checkpoint_write_percent</varname> (<type>floating point</type>)</term>
+      <indexterm>
+       <primary><varname>checkpoint_write_percent</varname> configuration parameter</primary>
+      </indexterm>
+      <listitem>
+       <para>
+        To spread works in checkpoints, each checkpoint spends the specified
+        time and delays to write out all dirty buffers in the shared buffer
+        pool. The default value is 50.0 (50% of <varname>checkpoint_timeout</varname>).
+        This parameter can only be set in the <filename>postgresql.conf</filename>
+        file or on the server command line.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-checkpoint-nap-percent" xreflabel="checkpoint_nap_percent">
+      <term><varname>checkpoint_nap_percent</varname> (<type>floating point</type>)</term>
+      <indexterm>
+       <primary><varname>checkpoint_nap_percent</varname> configuration parameter</primary>
+      </indexterm>
+      <listitem>
+       <para>
+        Specifies the delay between writing out all dirty buffers and flushing
+        all modified files. Make the kernel's disk writer to flush dirty buffers
+        during this time in order to reduce works in the next flushing phase.
+        The default value is 10.0 (10% of <varname>checkpoint_timeout</varname>).
+        This parameter can only be set in the <filename>postgresql.conf</filename>
+        file or on the server command line.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-checkpoint-sync-percent" xreflabel="checkpoint_sync_percent">
+      <term><varname>checkpoint_sync_percent</varname> (<type>floating point</type>)</term>
+      <indexterm>
+       <primary><varname>checkpoint_sync_percent</varname> configuration parameter</primary>
+      </indexterm>
+      <listitem>
+       <para>
+        To spread works in checkpoints, each checkpoint spends the specified
+        time and delays to flush all modified files.
+        The default value is 20.0 (20% of <varname>checkpoint_timeout</varname>).
+        This parameter can only be set in the <filename>postgresql.conf</filename>
+        file or on the server command line.
+       </para>
+      </listitem>
+     </varlistentry>
+
   
checkpoint_warning (integer)

Index: src/backend/access/transam/xlog.c
===
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/backend/access/transam/xlog.c,v
retrieving revision 1.272
diff -c -r1.272 xlog.c
*** src/backend/access/transam/xlog.c	31 May 2007 15:13:01 -	1.272
--- src/backend/access/transam/xlog.c	12 Jun 2007 08:16:55 -
***
*** 398,404 
  static void exitArchiveRecovery(TimeLineID endTLI,
  	uint32 endLogId, uint32 endLogSeg);
  static bool recoveryStopsHere(XLogRecord *record, bool *includeThis);
! static void CheckPointGuts(XLogRecPtr checkPointRedo);
  
  static bool XLogCheckBuffer(XLogRecData *rdata, bool doPageWrites,
  XLogRecPtr *lsn, BkpBlock *bkpb);
--- 398,404 
  static void exitArchiveRecovery(TimeLineID endTLI,
  	uint32 endLogId, uint32 endLogSeg);
  static bool recoveryStopsHere(XLogRecord *record, bool *includeThis);
! static void CheckPointGuts(XLogRecPtr checkPointRedo, bool immediate);
  
  static bool XLogCheckBuffer(XLogRecData *rdata, bool doPageWrites,
  XLogRecPtr *lsn, BkpBlock *bkpb);
***
*** 5319,5324 
--- 5319,5341 
  }
  
  /*
+  * GetInsertRecPtr -- Returns the current insert posit