It is only run when the build is via XML/XSLT, not via SGML/DSSSL
(because the SGML/DSSSL build already checks the validity anyway).
I'm confused. I thought the point of this change was mostly that the
SGML toolchain is less strict than the XML toolchain, and you wanted
to have the more-strict
ISTM that it may be more helpful to do:
ifndef JADE
$(error jade was not found on your system, cannot generate the documentation)
endif
We could use $(missing) for that, which is already used for bison, flex,
and perl.
I'm fine with "$(missing)" or whatever else that provides a clear message.
Add --limit to limit latency under throttling
Under throttling, transactions are scheduled for execution at certain times.
Transactions may be far behind schedule and the system may catch up with the
load later. This option allows changing this behavior by skipping
transactions which are too far behind schedule.
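The skip decision described above can be sketched as follows; the names and the microsecond unit are my assumptions for illustration, not pgbench's actual code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Under throttling, each transaction has a scheduled start time.  With
 * a latency limit (converted to microseconds), a transaction that would
 * start more than the limit past its schedule is skipped rather than
 * executed, so the run does not try to catch up on old work. */
static bool
should_skip(int64_t now_us, int64_t scheduled_us, int64_t limit_us)
{
    return limit_us > 0 && (now_us - scheduled_us) > limit_us;
}
```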
Find a small documentation patch attached:
- show the valid range for segment_timeout
- remove one spurious empty line (compared to other descriptions)
--
Fabien.

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index f23e5dc..49547ee 100644
--- a/doc/src/sgml/config.sgml
+++ b
Hello pgdevs,
I've been playing with pg for some time now to try to reduce the maximum
latency of simple requests, to have a responsive server under small to
medium load.
On an old computer with a software RAID5 HDD attached, pgbench
simple update script run for some time (scale 100, fill
Hello Josh,
So I think that you're confusing the roles of bgwriter vs. spread
checkpoint. What you're experiencing above is pretty common for
nonspread checkpoints on slow storage (and RAID5 is slow for DB updates,
no matter how fast the disks are), or for attempts to do spread
checkpoint on f
[oops, wrong from, resent...]
Hello Jeff,
The culprit I found is "bgwriter", which is basically doing nothing to
prevent the coming checkpoint IO storm, even though there would be ample
time to write the accumulating dirty pages so that checkpoint would find a
clean field and pass in a blink.
Hello Amit,
I think another thing to know here is why exactly checkpoint
storm is causing tps to drop so steeply.
Yep. Actually it is not strictly 0, but a "few" tps that I rounded to 0.
progress: 63.0 s, 47.0 tps, lat 2.810 ms stddev 5.194, lag 0.354 ms
progress: 64.1 s, 11.9 tps, lat 8
Hello again,
I have not found any mean to force bgwriter to send writes when it can.
(Well, I have: create a process which sends "CHECKPOINT" every 0.2
seconds... it works more or less, but this is not my point:-)
There is scan_whole_pool_milliseconds, which currently forces bgwriter to
circl
Hello Andres,
checkpoint when the segments are full... the server is unresponsive about
10% of the time (one in ten transaction is late by more than 200 ms).
That's ext4 I guess?
Yes!
Did you check whether xfs yields a, err, more predictable performance?
No. I cannot test that easily wi
What are the other settings here? checkpoint_segments,
checkpoint_timeout, wal_buffers?
They simply are the defaults:
checkpoint_segments = 3
checkpoint_timeout = 5min
wal_buffers = -1
I did some test checkpoint_segments = 1, the problem is just more frequent
but shorter. I also reduc
Hello Rukh,
I have reviewed this patch.
Thanks!
[...] I get: pgbench: invalid option -- L
Which appears to be caused by the fact that the call to getopt_long()
has not been updated to reflect the new parameter.
Indeed, I only tested/used it with the --limit= syntax.
Also this part:
+
Uh. I'm not surprised you're facing utterly horrible performance with
this. Did you try using a *large* checkpoints_segments setting? To
achieve high performance
I do not seek "high performance" per se, I seek "lower maximum latency".
I think that the current settings and parameters are desig
Marking Waiting for Author until these small issues have been fixed.
I've put it back to "Needs review". Feel free to set it to "Ready" if it
is ok for you.
--
Fabien.
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.po
Hello Jeff,
The culprit I found is "bgwriter", which is basically doing nothing to
prevent the coming checkpoint IO storm, even though there would be ample
time to write the accumulating dirty pages so that checkpoint would find a
clean field and pass in a blink. Indeed, at the end of the 500 s
Hello Andres,
[...]
I think you're misunderstanding how spread checkpoints work.
Yep, definitely:-) On the other hand I thought I was seeking something
"simple", namely correct latency under small load, that I would expect out
of the box.
What you describe is reasonable, and is more or less
[...] What's your evidence the pacing doesn't work? Afaik it's the fsync
that causes the problem, not the writes themselves.
Hmmm. My (poor) understanding is that fsync would work fine if everything
was already written beforehand:-) that is it has nothing to do but assess
that all is alr
Hello Heikki,
I find the definition of the latency limit a bit strange. It's a limit on how
late a transaction can *start* compared to its scheduled starting time, not
how long a query is allowed to last.
Yes. This is what can be done easily with pgbench under throttling. Note
that if tran
Hello Heikki,
[...]
With a latency limit on when the query should finish, as opposed to how
late it can start, it's a lot easier to give a number. For example, your
requirements might state that a user must always get a response to a click on
a web page in 200 ms, so you set the limit to 200
Hello Amit,
I see there is some merit in your point which is to make bgwriter more
useful than its current form. I could see 3 top level points to think
about whether improvement in any of those can improve the current
situation:
a. Scanning of buffer pool to find the dirty buffers that ca
Hello,
If all you want is to avoid the write storms when fsyncs start happening on
slow storage, can you not just adjust the kernel vm.dirty* tunables to
start making the kernel write out dirty buffers much sooner instead of
letting them accumulate until fsyncs force them out all at once?
I c
As for an actual "latency limit" under throttling, this is significantly
more tricky and invasive to implement... ISTM that it would mean:
[...]
Yeah, something like that. I don't think it would be necessary to set
statement_timeout, you can inject that in your script or postgresql.conf if
off:
$ pgbench -p 5440 -h /tmp postgres -M prepared -c 16 -j16 -T 120 -R 180 -L 200
number of skipped transactions: 1345 (6.246 %)
on:
$ pgbench -p 5440 -h /tmp postgres -M prepared -c 16 -j16 -T 120 -R 180 -L 200
number of skipped transactions: 1 (0.005 %)
That machine is far from idle ri
[...]
Yeah, something like that. I don't think it would be necessary to set
statement_timeout, you can inject that in your script or postgresql.conf if
you want. I don't think aborting a transaction that's already started is
necessary either. You could count it as LATE, but let it finish fi
Hello Aidan,
If all you want is to avoid the write storms when fsyncs start happening on
slow storage, can you not just adjust the kernel vm.dirty* tunables to
start making the kernel write out dirty buffers much sooner instead of
letting them accumulate until fsyncs force them out all at once?
I tried that by setting:
vm.dirty_expire_centisecs = 100
vm.dirty_writeback_centisecs = 100
So it should start writing returned buffers at most 2s after they are
returned, if I understood the doc correctly, instead of at most 35s.
The result is that with a 5000s 25tps pretty small load (th
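As a quick sanity check of the 2 s and 35 s figures above (assuming the usual kernel defaults of dirty_expire_centisecs = 3000 and dirty_writeback_centisecs = 500 for the 35 s case; units are centiseconds per the kernel docs):

```c
#include <assert.h>

/* Worst-case delay before the kernel starts writing a dirty page: the
 * page must age past dirty_expire_centisecs, then wait at most
 * dirty_writeback_centisecs for the next periodic writeback pass. */
static int
worst_case_writeback_s(int expire_cs, int writeback_cs)
{
    return (expire_cs + writeback_cs) / 100;    /* centiseconds -> seconds */
}
```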
Hello Heikki,
This now begs the question:
In --rate mode, shouldn't the reported transaction latency also be calculated
from the *scheduled* start time, not the time the transaction actually
started? Otherwise we're using two different definitions of "latency", one
for the purpose of the li
Hello Heikki,
[...] I would be fine with both.
After giving it some thought, ISTM better to choose consistency over
intuition, and have latency under throttling always defined wrt the
scheduled start time and not the actual start time, even if having a
latency of 1 ms for an OLTP load
+ if (latency_limit)
+ printf("number of transactions above the %.1f ms latency limit: "
INT64_FORMAT "\n",
+ latency_limit / 1000.0, latency_late);
+
Any reason not to report a percentage here?
Yes: I had not thought of it.
Here is a v7, with a
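A guarded percentage computation along the lines suggested above, avoiding the 0/0 = NaN case when no transactions ran; a sketch, not the committed code:

```c
#include <assert.h>
#include <stdint.h>

/* Percentage of transactions above the latency limit; returns 0 rather
 * than NaN when the total is zero (e.g. a script that never ran). */
static double
late_percent(int64_t late, int64_t total)
{
    return (total > 0) ? 100.0 * (double) late / (double) total : 0.0;
}
```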
Hello Heikki,
For the kicks, I wrote a quick & dirty patch for interleaving the fsyncs, see
attached. It works by repeatedly scanning the buffer pool, writing buffers
belonging to a single relation segment at a time.
I tried this patch on the same host I used with the same "-R 25 -L 200 -T
Hello Peter,
Here is a review:
The version 2 of the patch applies cleanly on current head.
The ability to generate and reuse a temporary installation for different
tests looks quite useful, thus putting install out of pg_regress and in
make seems reasonable.
However I'm wondering whether
# actual new tmp installation
.tmp_install:
$(RM) ./.tmp_install.*
$(RM) -r ./tmp_install
# create tmp installation...
touch $@
# tmp installation for the nonce
.tmp_install.$(MAKE_NONCE): .tmp_install
touch $@
Oops, I got it wrong, the install woul
Hello Marko,
I've changed the loop slightly. Do you find this more readable than the way
the loop was previously written?
It is 50% better:-)
It is no big deal, but I still fail to find the remaining continue as
useful in this case. If you remove the "continue" line and invert the
condition
There is scan_whole_pool_milliseconds, which currently forces bgwriter to
circle the buffer pool at least once every 2 minutes. It is currently
fixed, but it should be trivial to turn it into an experimental guc that
you could use to test your hypothesis.
I recompiled with the variable coldly
That model might make some sense if you think e.g. of a web application,
where the web server has a timeout for how long it waits to get a
database connection from a pool, but once a query is started, the
transaction is considered a success no matter how long it takes. The
latency limit would b
Hello Mutsumara-san,
#3. Documentation
I think the modulo operator explanation should be put last in the doc,
because the others are more frequently used.
So I like patch3, which is simple and practical.
Ok.
If you agree with or reply to my comment, I will mark it ready for committer.
Please find
The attached seems to have no problem. But I'd like to comment on the
order of explanation in the five formulas.
The fixed version is here. Please confirm, and I will mark it ready for
committer.
I'm ok with this version.
--
Fabien.
Hello Heikki,
I think we have to reconsider what we're reporting in 9.4, when --rate
is enabled, even though it's already very late in the release cycle.
It's a bad idea to change the definition of latency between 9.4 and 9.5,
so let's get it right in 9.4.
Indeed.
As per the attached patch.
Hello Robert,
Writing a simple expression parser for pgbench using flex and bison
would not be an inordinately difficult problem.
Sure. Note that there is not only the parsing but also the evaluating to
think of, which means a data structure to represent the expressions which
would be more c
Hello Robert,
Sure, and I would have looked at that patch and complained that you
were implementing a modulo operator with different semantics than C.
And then we'd be right back where we are now.
[...]
Probably.
To be clear about my intent, which is a summary of what you already know:
I wo
Hello Heikki,
I looked closer at this, and per Jan's comments, realized that we don't
log the lag time in the per-transaction log file. I think that's a serious
omission; when --rate is used, the schedule lag time is important information
to make sense of the result. I think we have to ap
Hello Robert,
I am not objecting to the functionality; I'm objecting to bolting on
ad-hoc operators one at a time. I think an expression syntax would
let us do this in a much more scalable way. If I had time, I'd go do
that, but I don't. We could add abs(x) and hash(x) and it would all
be gr
Hello Heikki,
Now that I've finished the detour and committed and backpatched the changes
to the way latency is calculated, we can get back to this patch. It needs to
be rebased.
Before rebasing, I think that there are a few small problems with the
modification applied to switch from an int
(3) I wish that the maximum implied multiplier could be explicitly
documented in the source code. From pg_rand48 source code, I think
that it is 33.27106466687737
Small possibly buggy code attached, to show how I computed the above
figure.
--
Fabien.

#include <math.h>
#include <stdio.h>
/* -log(2^-48) = 48 * ln(2) */
int main(void) { printf("%.17g\n", 48 * log(2.0)); return 0; }
Hello Heikki
Now that I've finished the detour and committed and backpatched the changes
to the way latency is calculated, we can get back to this patch. It needs to
be rebased.
Here is the rebase, which seems ok.
See also the small issues raised about getPoissonRand in another email.
--
F
How should skipped transactions be taken into account in the log file
output, with and without aggregation? I assume we'll want to have some trace
of skipped transactions in the logs.
The problem with this point is that how to report something "not done" is
unclear, especially as the
However, that would not diminish nor change much the amount and kind of
code necessary to add an operator or a function
That's not really true. You can't really add abs(x) or hash(x) right
now because the current code only supports this syntax:
\set varname operand1 [ operator operand2 ]
Th
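To illustrate why an expression tree scales where the fixed "\set varname operand1 [ operator operand2 ]" form does not, here is a minimal evaluator sketch; all names are invented for the example (the real pgbench parser was later built with flex and bison):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

typedef enum { EXPR_CONST, EXPR_UNARY, EXPR_BINARY } ExprKind;

typedef struct Expr
{
    ExprKind    kind;
    int64_t     value;                      /* EXPR_CONST */
    int64_t   (*unary)(int64_t);            /* EXPR_UNARY, applied to left */
    int64_t   (*binary)(int64_t, int64_t);  /* EXPR_BINARY */
    struct Expr *left;
    struct Expr *right;
} Expr;

/* Recursive evaluation: arbitrary nesting comes for free. */
static int64_t
eval(const Expr *e)
{
    switch (e->kind)
    {
        case EXPR_CONST:  return e->value;
        case EXPR_UNARY:  return e->unary(eval(e->left));
        case EXPR_BINARY: return e->binary(eval(e->left), eval(e->right));
    }
    abort();                                /* unreachable */
}

static int64_t i_abs(int64_t x) { return x < 0 ? -x : x; }
static int64_t i_sub(int64_t a, int64_t b) { return a - b; }

/* abs(a - b): impossible in the old two-operand syntax, trivial as a tree. */
static int64_t
example_abs_diff(int64_t a, int64_t b)
{
    Expr ea   = { EXPR_CONST, a, NULL, NULL, NULL, NULL };
    Expr eb   = { EXPR_CONST, b, NULL, NULL, NULL, NULL };
    Expr diff = { EXPR_BINARY, 0, NULL, i_sub, &ea, &eb };
    Expr absd = { EXPR_UNARY, 0, i_abs, NULL, &diff, NULL };
    return eval(&absd);
}
```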
(1) ISTM that the + 0.5 which remains in the PoissonRand computation comes
from the previous integer approach and is not needed here. If I'm not
mistaken the formula should be plain:
-log(uniform) * center
No. The +0.5 is to round the result to the nearest integer, instead of
truncating.
The output would look something like this (modified from the manual's example
by hand, so the numbers don't add up):
0 199 2241 0 1175850568 995598 1020
0 200 2465 0 1175850568 998079 1010
0 201 skipped 1175850569 608 3011
0 202 skipped 1175850569 608 2400
0 203 skipped 1175850569 608 1000
0 2
[about logging...]
Here is an attempt at updating the log features, including the aggregate
and sampling stuff, with skipped transactions under throttling.
I moved the logging stuff into a function which is called when a
transaction is skipped or finished.
From a log file format perspect
Attached is an updated version of the patch.
Ok.
I notice that you decided against adding tags around function and type
names.
It's really not about the IEEE changing something, but about someone
changing the Wikipedia page. The way I linked it makes sure it always
displays the same version.
Of course a general rule how to link to WP would be nice ...
I'm afraid that the current implicit rule is more or less "no links", at
least there are very few of them but in the glossary, and when I submitted
docs with them they were removed before committing.
Ideally if external links were
Hello Peter,
I've committed the $(missing) use separately,
That was simple and is a definite improvement.
Tiny detail: the new DBTOEPUB macro definition in "src/Makefile.global.in"
lacks another tab to be nicely aligned with the other definitions.
and rebased this patch on top of that.
I'm not sure I like the idea of printing a percentage. It might be
unclear what the denominator was if somebody feels the urge to work
back to the actual number of skipped transactions. I mean, I guess
it's probably just the value you passed to -R, so maybe that's easy
enough, but then why bot
Hello Stephen,
But this is not convincing. Adding a unary function with a clean
syntax indeed requires doing something with the parser.
Based on the discussion so far, it sounds like you're coming around to
agree with Robert (as I'm leaning towards also) that we'd be better off
building a rea
So my opinion is that this small modulo operator patch is both useful and
harmless, so it should be committed.
You've really failed to make that case --- in particular, AFAICS there is
not even consensus on the exact semantics that the operator should have.
There is. Basically whatever with
Hello Heikki,
If you reject it, you can also remove the gaussian and exponential random
distribution which is near useless without a means to add a minimal
pseudo-random stage, for which "(x * something) % size" is a reasonable
approximation, hence the modulo submission.
I'm confused. The gaus
No, it depends totally on the application. For financial and
physical inventory purposes where I have had occasion to use it,
the properties which were important were:
[...]
Hmmm. Probably I'm biased towards my compiler with an integer linear
flavor field, where C-like "%" is always a pain,
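The semantic disagreement mentioned above is the classic truncated-vs-floored modulo; a sketch of the floored variant, the one that keeps results in [0, b) for positive b and is therefore handy for "(x * something) % size":

```c
#include <assert.h>
#include <stdint.h>

/* C's % truncates toward zero, so -7 % 3 == -1; the floored (Knuth)
 * modulo instead returns a result with the divisor's sign. */
static int64_t
floored_mod(int64_t a, int64_t b)
{
    int64_t r = a % b;
    if (r != 0 && ((r < 0) != (b < 0)))
        r += b;             /* shift into the divisor's sign range */
    return r;
}
```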
The idea of a modulo operator was not rejected, we'd just like to have the
infrastructure in place first.
Sigh.
How to transform a trivial 10 lines patch into a probably 500+ lines
project involving flex & bison & some non trivial data structures, and
which may get rejected on any ground...
Hello Robert,
I think you're making a mountain out of a molehill.
Probably. I tend to try the minimum effort first.
I implemented this today in about three hours. The patch is attached.
Great!
Your patch is 544 lines, my size evaluation was quite good:-)
Note that I probably spent 10 m
I think you're making a mountain out of a molehill. I implemented this
today in about three hours.
I think you're greatly understating your own efficiency at shift/reducing
parser mountains down to molehills. Fabien even guessed the LOC size of the
resulting patch with less than a 9% error.
Hello Heikki,
Here are new patches, again the first one is just refactoring, and the second
one contains this feature. I'm planning to commit the first one shortly, and
the second one later after people have had a chance to look at it.
I looked at it. It looks ok, but for a few spurious spaces
One thing bothers me with the log format. Here's an example:
0 81 4621 0 1412881037 912698 3005
0 82 6173 0 1412881037 914578 4304
0 83 skipped 0 1412881037 914578 5217
0 83 skipped 0 1412881037 914578 5099
0 83 4722 0 1412881037 916203 3108
0 84 4142 0 1412881037 918023 2333
0 85 2465
Hmmm...
Maybe I'm a little bit too optimistic here, because it seems that I'm
suggesting to create a deadlock if the checkpointer has both buffers to
flush in waiting and wishes to close the very same file that holds them.
So on wanting to close the file the checkpointer should rather flush
-Do not update pgbench_tellers and
-pgbench_branches.
-This will avoid update contention on these tables, but
-it makes the test case even less like TPC-B.
+Shorthand for -b simple-update@1.
I don't think it is a good idea to entirely remove the description
"sum" is a double so count is converted to 0.0, 0.0/0.0 == NaN, hence the
comment.
PG code usually avoids that, and I recall static analysis tools like
Coverity complaining that this may lead to undefined behavior. While I
agree that this would lead to NaN...
Hmmm. In this case that is what i
On Wed, Dec 16, 2015 at 6:10 AM, Robert Haas wrote:
It looks fine to me except that I think we should spell out "param" as
"parameter" throughout, instead of abbreviating.
Fine for me. I have updated the first patch as attached (still looking
at the second).
This doc update threshold -> pa
It seems also that it would be a good idea to split the patch into two
parts:
1) Refactor the code so as the existing test scripts are put under the
same umbrella with addScript, adding at the same time the new option
-b.
2) Add the weight facility and its related statistics.
Sigh. The patch &
Here is a two part v12, which:
part a (refactoring of scripts and their stats):
- fix option checks (-i alone)
- s/repleacable/replaceable/ in doc
- keep small description in doc and help for -S & -N
- fix 2 comments for pg style
- show builtin list if not found
part b (weight)
- check th
Hello Tomas,
I'm planning to do some thorough benchmarking of the patches proposed in this
thread, on various types of hardware (10k SAS drives and SSDs). But is that
actually needed? I see Andres did some testing, as he posted summary of the
results on 11/12, but I don't see any actual results
Hello Michael,
Thanks for your remarks.
+ double constants such as 3.14156,
You meant perhaps 3.14159 :)
Indeed!
+ max(i,
...)
+ integer
Such function declarations are better with optional arguments listed
within brackets.
Why not. I did it that way because this is the s
(2a) remove double stuff, just keep integer functions.
I would rather keep min/max, though.
(2a) sounds like a fine plan to get something committable. We could keep
min/max/abs, and remove sqrt/pi. What's actually the use case for debug?
I cannot wrap my mind around one.
It was definitely
Hello Michael,
It was definitely useful to debug the double/int type stuff within
expressions when writing a non trivial pgbench script. It is probably less
interesting if there are only integers.
After looking again at the code, I remembered why doubles are useful: they
are needed for rand
After looking again at the code, I remembered why doubles are useful: they
are needed for random exponential & gaussian because the last parameter is a
double.
I do not care about the sqrt, but double must be allowed to keep that, and
the randoms are definitely useful for a pgbench script. Now
Hello Heikki,
The reason I didn't commit this back then was lack of performance testing.
I'm fairly confident that this would be a significant improvement for some
workloads, and shouldn't hurt much even in the worst case. But I did only a
little testing on my laptop. I think Simon was in fav
Hello Michael,
I'm not sure whether we are talking about the same thing:
- there a "double" type managed within expressions, but not variables
- there is a double() function, which takes an int and casts to double
I understood that you were suggesting to remove all "double" expressions,
but
Hello Robert,
I think that the 1.5 value somewhere in the patch is much too high for the
purpose because it shifts the checkpoint load quite a lot (50% more load at
the end of the checkpoint) just for the purpose of avoiding a spike which
lasts a few seconds (I think) at the beginning. A much s
Hello Robert & Tatsuo,
Some paraphrasing and additional comments.
$ pgbench -p 11002 --rate 2 --latency-limit 1 -c 10 -T 10 test
You are targetting 2 tps over 10 connections, so that is about one
transaction every 5 seconds for each connection, the target is about 20
transactions in 10 seconds
Hello Robert,
On a pgbench test, and probably many other workloads, the impact of
FPWs declines exponentially (or maybe geometrically, but I think
exponentially) as we get further into the checkpoint.
Indeed. If the probability of hitting a page is uniform, I think that the
FPW probability i
Probably no skips though, because the response time needed is below 5
*seconds*, not ms : 2 tps on 10 connections, 1 transaction every 5 seconds
for each connection.
Oops. Right. But why did this test only run 16 transactions in total
instead of 20?
Because the schedule is based on a stoch
[...]
Because the schedule is based on a stochastic process, transactions are not
set regularly (that would induce patterns and is not representative of
real-life load) but randomly.
The long term average is expected to converge to 2 tps, but on a short run
it may differ significantly.
Hmm.
AFAICR with xlog-triggered checkpoints, the checkpointer progress is
measured with respect to the size of the WAL file, which does not grow
linearly in time for the reason you pointed above (a lot of FPW at the
beginning, less in the end). As the WAL file is growing quickly, the
checkpointer thi
Hello Michaël,
If I read you correctly, I should cut it out into a new file and
include it. Is it correct?
Not really, I meant to see if it would be possible to include this set
of routines directly in libpqcommon (as part of OBJS_FRONTEND). This
way any client applications could easily reuse
Hello Michaël,
And then I also had a look at src/port/snprintf.c, where things get
actually weird when no transactions are run for a script (emulated
with 2 scripts, one with @1 and the second with @1):
- 0 transactions (0.0% of total, tps = 0.00)
- latency average = -1.#IO ms
- late
Hello Michaël,
Based on that all the final results of a \set command will have an
integer format, still after going through this patch, allowing double
as return type for nested function calls (first time "nested" is
written on this thread) is actually really useful, and that's what
makes sense
Hello,
I read your patch and I know what I want to try to have a small and simple
fix. I must admit that I have not really understood in which condition the
checkpointer would decide to close a file, but that does not mean that the
potential issue should not be addressed.
There's a trivial ex
Hello Andres,
I thought of adding a pointer to the current flush structure at the vfd
level, so that on closing a file with a flush in progress the flush can be
done and the structure properly cleaned up, hence later the checkpointer
would see a clean thing and be able to skip it instead of gen
Hello Andres,
One of the point of aggregating flushes is that the range flush call cost
is significant, as shown by preliminary tests I did, probably up in the
thread, so it makes sense to limit this cost, hence the aggregation. These
removed some performance regressions I had in some cases.
Hello Andres,
Hmmm. What I understood is that the workloads that have some performance
regressions (regressions that I have *not* seen in the many tests I ran) are
not due to checkpointer IOs, but rather in settings where most of the writes
is done by backends or bgwriter.
As far as I can see
Hello Andres,
Hm. New theory: The current flush interface does the flushing inside
FlushBuffer()->smgrwrite()->mdwrite()->FileWrite()->FlushContextSchedule(). The
problem with that is that at that point we (need to) hold a content lock
on the buffer!
You are worrying that FlushBuffer is holdi
Hello Michaël,
1) When precising a negative parameter in the gaussian and exponential
functions an assertion is triggered:
Assertion failed: (parameter > 0.0), function getExponentialRand, file
pgbench.c, line 494.
Abort trap: 6 (core dumped)
An explicit error is better.
Ok for assert -> error
Hello,
Please find attached a small patch to add a throttling capability to
pgbench, that is pgbench aims at a given client transaction rate instead
of maximizing the load. The throttling relies on Poisson-distributed
delays inserted after each transaction.
I wanted that to test the impact
Hello Tom,
I'm having a hard time understanding the use-case for this feature.
Surely, if pgbench is throttling its transaction rate, you're going
to just end up measuring the throttle rate.
Indeed, I do not want to measure the tps if I throttle it.
The point is to generate a continuous but
Hello Jeff,
While I don't understand the part about his laptop battery, I think that
there is a good use case for this. If you are looking at latency
distributions or spikes, you probably want to see what they are like with a
load which is like the one you expect having, not the load which is
I'm having a hard time understanding the use-case for this feature.
Here is an example functional use case I had in mind.
Let us say I'm teaching a practice session about administrating
replication. Students have a desktop computer on which they can install
several instances of postgresql,
It does seem to me that we should Poissonize the throttle time, then
subtract the average overhead, rather than Poissonizing the difference.
After thinking again about Jeff's point and failing to sleep, I think that
doing exactly that is better because:
- it is "right"
- the code is simple
Add --throttle to pgbench
Each client is throttled to the specified rate, which can be expressed in
tps or in time (s, ms, us). Throttling is achieved by scheduling
transactions along a Poisson-distribution.
This is an update of the previous proposal which fixes a typo in the sgml
documentation
Hello devs,
I've given a try to the PostgreSQL documentation in epub format.
I must admit that there is a bit of a disappointement as far as the user
experience is concerned: the generated file is barely usable on an iPad2
with the default iBooks reader, which was clearly not designed for
ha
Hello Greg,
If you add this to
https://commitfest.postgresql.org/action/commitfest_view?id=18 I'll review it
next month.
Ok. Thanks. I just did that.
I have a lot of use cases for a pgbench that doesn't just run at 100%
all the time. I had tried to simulate something with simple sleep
ca
Please find attached a small patch submission, for reference to the next
commit fest.
Each thread reports its progress about every the number of seconds
specified with the option. May be particularly useful for long running
pgbench invocations, which should always be the case.
shell> ./pg