This minor patch shows the expected drawing percent in multi-script
reports, next to the relative weight.
--
Fabien.
diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index 4196b0e..3b63d69 100644
--- a/src/bin/pgbench/pgbench.c
+++ b/src/bin/pgbench/pgbench.c
@@ -3080,10 +3080,
Remove pgbench clientDone unused "ok" parameter.
I cannot see the point of keeping a useless parameter, which is probably
there because at some point in the past it was used. If it is needed some
day it can always be reinserted.
--
Fabien.
diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbe
This is non-trivial, basically some kind of cipher
function which does not operate on power-of-two sizes...
* if all necessary features are available, being able to run a strict
tpc-b bench (this means adapting the init phase, and creating a
new builtin script which matches the real spec):
More advanced features, but with much more impact on the code, would be to
be able to change the size at database/table level.
Any thoughts?
--
Fabien.
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
This patch is pretty trivial.
Add modulo operator to pgbench.
This is useful to compute a permutation for tests with non-uniform
accesses (exponential or Gaussian), so as to avoid trivial correlations
between neighbouring keys.
--
Fabien.
diff --git a/contrib/pgbench/pgbench.c b/contrib/pgbench
Here is a patch submission for reference to the next commitfest.
Improve pgbench measurements & progress report
- Use the progress option both under init & bench.
Activate the progress report by default, every 5 seconds.
When initializing, --quiet reverts to the old report every 100,000 insertions.
Split 2 of the initial submission
pgbench: reduce and compensate throttling underestimation bias.
This is a consequence of relying on an integer random generator,
which allows ensuring that the inserted delays stay reasonably within
range of the target average delay.
The bias was about 0.5% with 1000
Split 3 of the initial submission, which actually deals with data measured
and reported on stderr under various options.
This version currently takes into account many comments by Noah Misch. In
particular, the default "no report" behavior under benchmarking is not
changed, although I really t
This patch adds per-script statistics & other improvements to pgbench
Rationale: Josh asked for the per-script stats:-)
Some restructuring is done so that all stats (-l --aggregate-interval
--progress --per-script-stats, latency & lag...) share the same structures
and functions to accumulate
ng charts. The rest
usually shows about the same thing (or nothing).
Overall, I'm not quite sure the patches actually achieve the intended goals.
On the 10k SAS drives I got better performance, but apparently much more
variable behavior. On SSDs, I get a bit worse results.
Indeed.
--
Fabien.
Hello Aleksander,
Thanks for the look at the patch.
time pgbench -T 5 -R 0.1 -P 1 -c 2 -j 2
On my laptop this command executes 25 seconds instead of 5.
I'm pretty sure it IS a bug. Probably a minor one though.
Sure.
[...] you should probably write:
if(someint > 0)
Ok.
if(somebool ==
Hello Alvaro,
I looked at 19.d and I think the design has gotten pretty convoluted. I
think we could simplify with the following changes:
struct script_t gets a new member, of type Command **, which is
initially null.
function process_builtin receives the complete script_t (not individual
me
ice is "keep it simple".
If this is a blocker, I can sure write such an algorithm, when I have some
spare time, but I'm not sure that the purpose is worth it.
--
Fabien.
*-21.patch does what you suggested above, some hidden awkwardness
but much less than the previous one.
Yeah, I think this is much nicer, don't you agree?
Yep, I said "less awkwardness than previous", a pretty contrived way to say
"better":-)
However, this is still a bit broken -- you
The bug can be kept instead, and it can be called a feature.
I will leave it alone for the time being.
Maybe you could consider pushing the first part of the patch, which stops
if a transaction is scheduled after the end of the run? Or is this part
bothering you as well?
--
Fabien.
--
That is why the "fs" variable in process_file is declared "static", and why
I wrote "some hidden awkwardness".
I did want to avoid a malloc because then who would free the struct?
addScript cannot do it systematically because builtins are static. Or it
would have to create an on-purpose struct,
end of the run, so the second is just a latent bug that cannot be
encountered.
I'm not sure whether I'm very clear:-)
--
Fabien.
Committed with a few tweaks, including running pgindent over some of it.
Thanks!
--
Fabien.
Attached is the fixed patch for the array method.
Committed with a few tweaks, including running pgindent over some of it.
Thanks. So the first set of functions is in, and the operators are
executed as functions as well. Fabien, are you planning to send
rebased versions of the rest? By that
- 32-b: add double functions, including double variables
- 32-c: remove \setrandom support (advice to use \set + random instead)
Here is a rebased version after Tom's updates, 33-b & 33-c. I also
extended the floating point syntax to accept signed exponents, and
changed the regexpr st
impact.
So the logical conclusion for me is that without further experimental data
it is better to have one context per table space.
If you have hardware with plenty of disks available for testing, that would
provide better data, obviously.
--
Fabien.
imental data, I still think that the one context per
table space is the reasonable choice.
--
Fabien.
But for the shared version buffers are more equally
distributed on table spaces, hence reducing sequential write
effectiveness, and for the other the dirty buffers are grouped more
clearly per table space, so it should get better sequential write
performance.
--
Fabien.
return false;
+ }
Now, if rval is out of range of an integer, that is going to overflow
while trying to see whether it should divide by zero. Please work a
little harder here and in similar cases.
Ok.
Maybe add a helper function
checkIntegerEquality(PgBenchV
Hello Robert.
Here is a v34 b & c.
// comments are not allowed. I'd just remove the two you have.
Back to the eighties!
It makes no sense to exit(1) and then return 0, so don't do that. I
might write this code as:
This would get rid of the internal-error case here altogether in favor
of t
Hello Robert,
Here is a v35 b & c.
This is not acceptable:
+ /* guess double type (n for "inf", "-inf" and "nan") */
+ if (strchr(var, '.') != NULL || strchr(var, 'n') != NULL)
+ {
+
On Wed, Jan 27, 2016 at 2:31 PM, Fabien COELHO wrote:
- when a duration (-T) is specified, ensure that pgbench ends at that
time (i.e. do not wait for a transaction beyond the end of the run).
Every other place where doCustom() returns false is implemented as
return clientDone(...). I
Hello Robert,
[...] With your patch, you get different behavior depending on exactly
how the input is malformed.
I understand that you require only one possible error message on malformed
input, instead of failing when converting to double if the input looked
like a double (there was a '.'
Alvaro objected to
the proposed method of fixing it as ugly, and I think he's right.
Unless you can come up with a nicer-looking fix, I think that part is
going to stay unfixed.
A bug kept on aesthetic grounds, that's a first!
--
Fabien.
I'm not sure I've seen these performance... If you have hard evidence,
please feel free to share it.
--
Fabien.
)..."
The text could say something about sequential write performance because
pages are sorted, but that it is lost for large bases and/or short
checkpoints?
--
Fabien.
that it can work the other way around.
I look forward to seeing these benchmarks later on, when you have them.
So all is well, and hopefully will be even better later on.
--
Fabien.
e limits of its effect where large bases will converge to
random io performance. But maybe that is not the right place.
--
Fabien
the implementation.
There have been a lot of presentations over the years, and blog posts.
--
Fabien.
I just pushed the two major remaining patches in this thread.
Hurray! Nine months to get this baby out:-)
--
Fabien.
Hello David,
Any takers to review this updated patch?
I intend to have a look at it, I had a look at a previous instance, but
I'm ok if someone wants to proceed.
--
Fabien.
I intend to have a look at it, I had a look at a previous instance, but
I'm ok if someone wants to proceed.
There's not exactly a long line of reviewers at the moment so if you
could do a followup review that would be great.
Ok. It is in the queue, not for right now, though.
Hello Alvaro,
If somebody specifies thousands of -f switches, they will waste a few
bytes with each, but I'm hardly concerned about a few dozen kilobytes
there ...
Ok, so you prefer a memory leak. I hate it on principle.
I don't "prefer" memory leaks -- I prefer interfaces that make sense.
Hello Tomas,
while learning about format of the transaction log produced by pgbench, I've
noticed this sentence in the section describing format of the per-transaction
log:
The last field skipped_transactions reports the number of
transactions skipped because they were too far behind sch
. Why not for
extended query cases?
Probably it can be made to work, but it is much less useful to prepare a
statement which is known to be needed just once, so I think it would be
fine to simply forbid "-M prepared" and "-C" together.
--
Fabien.
the system could not match
the target tps, even if it can handle much more on average, so the tps
achieved when discarding late transactions would be under 4000 tps.
--
Fabien.
was much lower - presumably due to a lot of slow
transactions.
Yep. That is what is measured with the latency limit option, by counting
the dropped transactions that were not processed in a timely manner.
--
Fabien.
Hello Álvaro,
I pushed your 25, with some additional minor tweaks. I hope I didn't
break anything; please test.
I've made a few tests and all looks well. I guess the build farm will say
if it does not like it.
Thanks,
--
Fabien.
I've created an entry in the next commit fest so that it is not lost, but
I hope it will be committed before then.
--
Fabien.
Here is a v36 which inspects very carefully the string to decide whether it is
an int or a double. You may, or may not, find it to your taste, I can't say.
Here is a v37 which is mostly a rebase after recent changes. Also I
noticed that I was double counting errors in the previous version, so
seconds anymore.
Yep, but they should be filtered out, "sorry, too late", so that would
count as unresponsiveness, at least for a large class of applications.
Thanks a lot for these interesting tests!
--
Fabien.
inter. However, I do not have "editor privilege" on this
wiki, maybe Tomas has?
Now, the documentation issue Tomas reported is in 9.5 already and should
be backported.
Another minor part of the patch is entirely 9.6 specific (script & -b
reference).
Not sure how to proceed...
--
Fabien.
fest entry from this wiki page, and a
note about partial 9.5 backporting.
--
Fabien.
v38 is a simple rebase, trying to keep up-to-date with Tom's work.
--
Fabien.
diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml
index dd3fb1d..cf9c1cd 100644
--- a/doc/src/sgml/ref/pgbench.sgml
+++ b/doc/src/sgml/ref/pgbench.sgml
@@ -802,9 +802,10 @@ pgbench options dbn
something like "warning, script #%d weight is zero, will be ignored".
- the documentation should be updated:-)
--
Fabien
v38 is a simple rebase, trying to keep up-to-date with Tom's work.
v39 is yet another rebase: 42 is in sight!
--
Fabien.
diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml
index c6d1454..4ceddae 100644
--- a/doc/src/sgml/ref/pgbench.sgml
+++ b/doc/src/sgml/ref/pgbench.s
the Universe and
Everything", computed by the supercomputer "Deep Thought" in 7.5 million
years.
--
Fabien.
Ok, I added a reference to the commitfest entry from this wiki page, and a
note about partial 9.5 backporting.
Please split the patch into one part for backporting and one part for
master-only and post both patches, clearly indicating which is which.
Attached are the full patch for head and
showing how much later
the transactions were scheduled. Again, the new code is winning.
No brainer again. I infer from this figure that with the initial version
60% of transactions have trouble being processed on time, while this is
maybe about 35% with the new version.
--
Fabien.
+- # up to 100%
| / ___ # cut short
| | /
| | |
| _/ /
|/__/
+->
--
Fabien.
ly on distinct
dedicated disks with VMs, but this is the idea.
To emphasize potential bad effects without having to build too large a
host and involve too many table spaces, I would suggest significantly
reducing the "checkpoint_flush_after" setting while running these
tests.
--
Fabien.
ction
number, not a percent.
Anyway, these are just details, your figures show that the patch is a very
significant win on SSDs, all is well!
--
Fabien.
My impression is that we actually know what we need to know anyway?
Sure, the overall summary is "it is much better with the patch" on this
large SSD test, which is good news because the patch was really designed
to help with HDDs.
--
Fabien.
lative results with 4 disks, 4 table
spaces and 4 buffers per bucket, so it is an alternative and less
expensive testing strategy.
This just shows that I usually work on a tight (negligible?) budget:-)
--
Fabien.
v40 is yet another rebase.
--
Fabien.
diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml
index c6d1454..4ceddae 100644
--- a/doc/src/sgml/ref/pgbench.sgml
+++ b/doc/src/sgml/ref/pgbench.sgml
@@ -815,9 +815,10 @@ pgbench options dbname
- Sets variable
- that it does work:-) I'm not sure what happens in the script selection
process; it should be checked carefully because it was not designed
to allow a zero weight, and it may depend on its/their positions.
It may already work, but it really needs checking.
Hmmm, it seems ok.
Attac
o the \set syntax is pretty easy, see attached script
- custom scripts are short, they are used by few but
advanced users, for whom updating would not be an issue
- the parsing & execution codes are lengthy, repetitive...
--
Fabien.
#! /usr/bin/perl -wp
s/^\\setrandom\s+(\S+)\s+(
Hello Robert,
If we don't nuke it, it'll never die.
Hearing no objections, BOOM.
FIZZ! :-)
Thanks for the commits, and apologies for the portability bugs.
--
Fabien.
Hello,
In doing this, I noticed that the latency output is wrong if you use -T
instead of -t; it always says the latency is zero because "duration" is
zero. I suppose it should be like in the attached instead.
Indeed, I clearly overlooked option -t (transactions) which I never use.
Patch a
itment.
If you feel like removing the stddev line from the doc because it is not
there with usual options, fine with me.
--
Fabien.
The last hunk of that is overkill, so I did not push that.
Ok.
--
Fabien.
ression is that the patch is really
designed to make a difference for HDDs, so as to advise not activating it
on SSDs if there is a regression in such a case.
Now this is a little disappointing as on paper sorted writes should also
be slightly better on SSDs, but if the bench says the contrary, I hav
Hello Michaël,
ISTM that if pgbench is to be stopped, the simplest option is just to abort
with a nicer error message from the get*Rand function; there is no need to
change the function signature and transfer the error management upwards.
That's fine to me, as long as the solution is elegant.
p output should be in brackets, as
FILE[@W], right?
Why not.
--
Fabien.
54775807)))
debug(script=0,command=1): int -9223372036854775808
Hmmm. You mean just to check the double -> int conversion for overflow,
as in:
SELECT (9223372036854775807::INT8 +
9223372036854775807::DOUBLE PRECISION)::INT8;
Ok.
--
Fabien.
ed on a 4 disk raid10
of 4 disks, and a raid0 of 20 disks.
I guess similar but with a much lower tps. Anyway I can try that.
--
Fabien.
Hello Michaël,
Here is a v19:
- avoid noisy changes
- abort on double->int overflow
- implement operators as functions
There is still \setrandom, that I can remove easily with a green light.
--
Fabien.
diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml
index 541d17b.
Hi Fabien,
Hello Tomas.
On 2016-01-11 14:45:16 +0100, Andres Freund wrote:
I measured it in a number of different cases, both on SSDs and spinning
rust. I just reproduced it with:
postgres-ckpt14 \
-D /srv/temp/pgdev-dev-800/ \
-c maintenance_work_mem=2GB \
-c
he
"synchronous_commit=off" chosen above.
"found" -> "fond". I confirm this opinion. If you have BBU on you
disk/raid system probably playing with some of these options is safe,
though. Not the case with my basic hardware.
--
Fabien.
lts to show with a setting more or less similar
to yours.
--
Fabien.
Hello Michaël,
+ uniformly-distributed random integer in [lb,ub]
Nitpick: when defining an interval like that, you may want to add a
space after the comma.
Why not.
+ /* beware that the list is reverse in make_func */
s/reverse/reversed/?
Indeed.
+
#ifdef DEBUG
Some noise.
writing or time, and their
behavior is not the same, so this should be taken into account.
My conclusion is that there is no simple static fix to this issue, as
proposed in the submitted patch. The problem needs thinking and maths.
--
Fabien.
is a one-in-2**128 probability case which stops pgbench. It is obviously
possible to add a check to catch it, and then generate an error message,
but I would rather just ignore it and let pgbench stop on that.
--
Fabien.
OK, so I had an extra look at this patch and I am marking it as ready
for committer.
Ok.
- INT64_MIN / -1 throws a core dump, and errors on HEAD. I think this
should be fixed, Fabien does not.
Yep. Another point about this one is that it is not related to this patch
about functions
ts on HDDs?
both before/after patch are higher) if I disable full_page_writes,
thereby eliminating a lot of other IO.
Maybe this is an explanation.
--
Fabien.
with this (unreasonable) figure to check whether I
really get a regression.
Other tests I ran with "reasonable" settings on a large (scale=800) db
did not show any significant performance regression, up to now.
--
Fabien.
s the
bill as I think it fits in memory, so the load is mostly write and no/very
few reads. I'll also try with scale 1000.
--
Fabien.
Hello Alvaro,
I'm looking at this part of your patch and I think it's far too big to
be a simple refactoring. Would you split it up please?
You know how delighted I am to split patches...
Here is a 5-part ordered patch series:
a) add -b option for cumulating builtins and rework internal scri
kept just in case.
After finally nailing down the performance regression due to the wal writer,
things are looking good. I plan to post an updated version soon.
Good. The last version sent (14?) does not apply cleanly. I'm looking
forward to having another look at an updated version.
--
Fabien.
text
field on the "Attach thread" dialogue with the description or
giving the exact message-id gave me nothing to choose.
Strange.
You could try taking the old entry and selecting state "move to next CF"?
--
Fabien.
OK, so I had an extra look at this patch and I am marking it as ready
for committer.
Ok.
Attached is a rebase after recent changes in pgbench code & doc.
--
Fabien.
diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml
index 42d0667..d42208a 100644
--- a/doc/src/sgml/ref
s ignored depending on the third. Do as you feel.
I renamed a couple of your functionettes, for instance doSimpleStats to
addToSimpleStats and appendSimpleStats to mergeSimpleStats.
Fine with me.
--
Fabien.
Hello again,
If you want to implement real non-ambiguous-prefix code (i.e. have "se"
for "select-only", but reject "s" as ambiguous) be my guest.
I'm fine with filtering out ambiguous cases (i.e. just the "s" case).
Attached a small patch for that.
--
Fabien.
diff --git a/doc/src/sgml/ref/p
Hello again,
Here's part b rebased, pgindented and with some minor additional tweaks
(mostly function comments and the function renames I mentioned).
Patch looks ok to me; various tests were ok as well.
Still concerned about the unlocked stat accums.
See my arguments in other mail. I can
Hello again,
Obviously this would work. I did not think the special case was worth the
extra argument. This one has some oddity too, because the second argument is
ignored depending on the third. Do as you feel.
Actually my question was whether keeping the original start_time was the
intended
While testing for something else I encountered two small bugs under very
low rate (--rate=0.1). The attached patches fix these.
- when a duration (-T) is specified, ensure that pgbench ends at that
time (i.e. do not wait for a transaction beyond the end of the run).
- when there is a p
sh> cat div.sql
\set i -9223372036854775807
\set i :i - 1
\set i :i / -1
sh> pgbench -f div.sql
starting vacuum...end.
Floating point exception (core dumped)
I do not think that it is really worth fixing, but I will not prevent
anyone from fixing it.
--
Fabien.
computed with
far fewer clients than you asked for.
Pgbench is a bench tool, not a production tool.
--
Fabien.
s in the user
script and this is clearly reported by pgbench.
However, your argument may be relevant for avoiding fatal signals such as
the one generated by INT64_MIN / -1, because on that one the error message
is terse, so how to fix the issue is not clear to the user.
--
Fabien.
that would also be a win.
/* these would raise an arithmetic error */
if (lval == INT64_MIN && rval == -1)
{
fprintf(stderr, "cannot divide or modulo INT64_MIN by -1\n");
return false;
}
This may be backpatched to old supported versions.
--
Fabien.
So I'm arguing that exiting, with an error message, is better than handling
user errors.
I'm not objecting to exiting with an error message, but I think
letting ourselves be killed by a signal is no good.
Ok, I understand this point for this purpose.
--
Fabien.
v22 compared to previous:
- remove the short macros (although IMO it is a code degradation)
- try not to remove/add blanks lines
- let some assert "as is"
- still exit on float to int overflow, see arguments in other mails
- check for INT64_MIN / -1 (although I think it is useless)
--
Fabien.
Hello Michaël,
v23 attached, which does not change the message but does the other fixes.
+if (coerceToInt(&lval) == INT64_MIN && coerceToInt(&rval) == -1)
+{
+ fprintf(stderr, "cannot divide INT64_MIN by -1\n");
+ return false;
+}
Bike-sheddi
needed to explain why such a bizarre
condition is used/needed for just the INT64_MIN case.
--
Fabien.
+ * zero in any case.
+ */
How would you reformulate that à-la-Fabien?
This one about modulo is fine.
I was referring to this other one in the division case:
+/* overflow check (needed for INT64_MIN) */
+if (lval != 0 && (*retval < 0