Michaël-san,
I think that a clear warning not to run any cluster command in parallel,
under pain of possible cluster corruption, and possibly other caveats about
replication, should appear in the documentation.
I still have the following extra documentation in my notes:
Ok, it should have b
Michaël-san,
In short, you keep the main feature with:
- No tweaks with postmaster.pid.
- Rely just on the control file indicating an instance was shut down
cleanly.
- No tweaks with the system ID.
- No renaming of the control file.
Hmmm… so nothing:-)
I think that this feature is useful, in comp
Hallo Andres,
[...]
pg_upgrade in link mode intentionally wants to *permanently* disable a
cluster. And it explicitly writes a log message about it. That's not a
case to draw inference from here.
Ok. My light knowledge of pg_upgrade's inner workings does not extend to this
level of prec
Hallo Andres,
And you're basically adding it because Fabien doesn't like
postmaster.pid and wants to invent another lockout mechanism in this
thread.
I did not suggest to rename the control file, but as it is already done by
another command it did not look like a bad idea in itself, or at le
Ok, this might not work, because of the following, less likely, race
condition:
postmaster opens control file RW
pg_checksums moves control file, postmaster's open file handle follows
...
So ISTM that we really need some locking to have something clean.
We are talking about complicating a m
Bonjour Michaël,
Please find attached an updated patch set, I have rebased that stuff
on top of my recent commits to refactor the control file updates.
Patch applies cleanly, compiles, make check-world seems ok, doc build ok.
It would help if the patch includes a version number. I assume tha
I have rebased it now.
Thanks. Will look at it.
If all of the above are involved, the line would look like the
following.
[=== ] ( 63% of 12.53 GB, 179 MB/s, ETC 26s)
# Note that this is just an opinion.
(pg_checksum runs fast at the beginning so ETC behaves s
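The quoted progress line can be sketched as a small formatting helper (a hypothetical Python transcription for illustration only; the tool itself does this in C, and the function name here is invented):

```python
def progress_line(current_bytes, total_bytes, rate_mb_s, width=20):
    # Bar, integer percent done, total size in GB, throughput, and
    # estimated time of completion (ETC) for the remaining bytes.
    pct = current_bytes * 100 // total_bytes
    bar = "=" * (width * pct // 100)
    remaining = total_bytes - current_bytes
    etc_s = remaining // (rate_mb_s * 1024 * 1024)   # seconds left
    return "[%-*s] (%3d%% of %.2f GB, %d MB/s, ETC %ds)" % (
        width, bar, pct, total_bytes / 1024 ** 3, rate_mb_s, etc_s)
```

As the snippet notes, ETC is only as good as the current rate estimate, so it jumps around while the rate settles.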
Hallo Peter,
Updated patch. I have squashed the two previously separate patches
together in this one.
Ok.
I do not understand the value of the SAVEPOINT in the tests.
The purpose of the SAVEPOINT in the test is to exercise
different switch cases in CommitTransactionCommand() an
Bonjour Michaël,
Here are my notes about the fixes:
Thanks for the fixes.
- pg_resetwal got broken because the path to the control file was
incorrect. Running tests of pg_upgrade or the TAP tests of
pg_resetwal showed the failure.
Hmmm… I thought I had done that with "make check-world":-
You have one error at the end of update_controlfile(), where close()
could issue a frontend-like error for the backend, calling exit() on the
way. That's not good. (No need to send a new patch, I'll fix it
myself.)
Indeed. I meant to merge the "if (close(fd))", but ended up merging the error
Michaël-san,
The issue here is that trying to embed directly the fsync routines
from file_utils.c into pg_resetwal.c messes up the inclusions because
pg_resetwal.c includes backend-side includes, which themselves touch
fd.h :(
In short your approach avoids some extra mess with the include
depe
I'm not sure that prefixing the two lines with the comment line is very
elegant, I'd suggest to put spaces, and would still try to shorten the
second sentence, maybe:
I suggest to keep two lines, and only prefix the first one.
As you feel. For me the shorter the better, though, if the infor
Bonjour Michaël-san,
Yes, that would be nice, for now I have focused. For pg_resetwal yes
we could do it easily. Would you like to send a patch?
Here is a proposal for "pg_resetwal".
The implementation basically removes a lot of copy paste and calls the
new update_controlfile function ins
Something like "%s: database folder is incompatible" for the first
line sounds kind of better per the feedback gathered. And then on the
second line:
"The database cluster was initialized with block size %u, but
pg_checksums was compiled with block size %u."
Ok. "%s" progname instead of "pg_
Hello Tomas,
So I guess this is a bug in 12788ae49e1933f463bc59a6efe46c4a01701b76, or
one of the other commits touching this part of the code.
I could not reproduce this issue on head, but I confirm on 11.2.
AFAICS on head it's fixed by 3bac77c48f166b9024a5ead984df73347466ae12
Thanks for
Hello,
Fabien, I was wondering whether you can apply TCP_USER_TIMEOUT patch and
continue discussion about 'socket_timeout'?
I can apply nothing, I'm just a small-time reviewer.
Committers on the thread are Michaël-san and Robert, however I'm not sure
that they are very sensitive to "please
Bonjour Michaël,
If the block size the tool is compiled with does not match the data
folder block size, then users would get incorrect checksums failures,
Or worse, incorrect checksum writing under "enabling"?
Initial proposal:
"%s: data directory block size %d is different to compiled-in
echo 'select 1' > select.sql
while /bin/true; do
pgbench -n -f select.sql -R 1000 -j 8 -c 8 -T 1 > /dev/null 2>&1;
date;
done;
Indeed. I'll look at it over the weekend.
So I guess this is a bug in 12788ae49e1933f463bc59a6efe46c4a01701b76, or
one of the other commits touching this part of
I think that your patch is responsible for the added option, at least.
I agree that the underlying issue (that other parameters should probably
also be reused, which would be a bug fix) does not belong in this thread.
This doesn't seem to be a bug. \connect just establishes a new connection,
Hello Fabien-san.
The 2nd patch is 700 KB, I think that there is an inadvertent file copy.
If the user reconnects, eg "\c db", the setting is lost. The
re-connection handling should probably take care of this parameter, and maybe
others.
I think your opinion is reasonable, but it seems not
Hello Robert,
Also, I do not see the downside of sending a cancel query before severing
the connection. If it is not processed, too bad, but if it is then it is
for the better.
If the network connection is dead, which is the situation the patch
intends to detect,
Hmmm... ISTM that we are n
Hallo Michael,
I propose we re-read the control file for the enable case after we
finished operating on all files and (i) check the instance is still
offline and (ii) update the checksums version from there. That should be
a small but worthwhile change that could be done anyway.
That looks li
Hello,
Yep. That is the issue I think is preventable by fsyncing updated data
*then* writing & syncing the control file, and that should be done by
pg_checksums.
Well, pg_rewind works similarly: control file gets updated and then
the whole data directory gets flushed.
So it is basically pro
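The ordering argued for above — flush the updated data files first, and only then write and sync the control file — can be sketched as follows (a hypothetical Python helper for illustration, not the actual pg_checksums or pg_rewind code):

```python
import os

def flush_data_then_control(data_paths, control_path, control_bytes):
    # 1. fsync every updated data file first...
    for path in data_paths:
        fd = os.open(path, os.O_RDONLY)
        try:
            os.fsync(fd)
        finally:
            os.close(fd)
    # 2. ...then write and sync the control file, so it never claims a
    # state (e.g. "checksums enabled") that unsynced data contradicts.
    with open(control_path, "wb") as f:
        f.write(control_bytes)
        f.flush()
        os.fsync(f.fileno())
```

The point of the ordering is crash safety: if the machine dies between the two steps, the control file still describes the old, fully-synced state.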
For whatever it is worth, the patch looks good to me.
A minor nitpick would be to use a verb in the part:
`cost when the parameter in (0, 1)`
maybe:
`cost when the parameter's value is in (0, 1)` or similar.
Looks ok.
Apart from that, I would suggest that the patch could be moved to
I do not think it is a good thing that two commands can write to the data
directory at the same time, really.
We don't prevent a pg_resetwal and a pg_basebackup from running in
parallel either. That would be... interesting.
Yep, I'm trying again to suggest that this kind of thing should be
pre
Michaël-san,
Now the set of patches is:
- 0001, add --enable and --disable. I have tweaked the patch a bit so
that "action" is replaced by "mode", which is more consistent with other
tools like pg_ctl. pgindent was also complaining about one of the
new enum structures.
Patch applies cleanly,
Hallo Michael,
I would not bother rounding down > 100% to 100, because then you would get
1560/1492 MB (100%, X MB/s)
which is kind of silly.
No, we cap the total_size to current_size so you won't see that (but
total_size will potentially gradually increase). pg_basebackup has the
same beha
Bonjour Michaël,
Yes, that would be nice, for now I have focused. For pg_resetwal yes
we could do it easily. Would you like to send a patch?
I probably can do that before next Monday. I'll prioritize reviewing the
latest instance of this patch, though.
This seems contradictory to me: you
Hello Robert,
wrote:
The main purpose of this parameter is to avoid the client waiting for the DB
server indefinitely, not to reduce the server's burden.
The result is that the end user does not wait, which is most important.
+1. If the server fails to detect that the client has gone away,
that's the serv
Bonjour Michaël,
Here is a partial review:
- 0001 is a patch to refactor the routine for the control file
update. I have made it backend-aware, and we ought to be careful with
error handling, use of fds and such, something that v4 was not very
careful about.
This refactoring patch is ok for
Attached is a rebase.
--
Fabien.
diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index 5df54a8e57..1db6b75823 100644
--- a/src/bin/pgbench/pgbench.c
+++ b/src/bin/pgbench/pgbench.c
@@ -6032,6 +6032,96 @@ main(int argc, char **argv)
return exit_code;
}
+/* display the progr
Hello Karl,
I registered as a reviewer in the CF app.
"The string and the binary encode and decode functions..." sentence
looks strange to me, especially with the English article that I do
not really master, so maybe it is ok. I'd have written something more
straightforward, eg: "Functions en
Hello Ryohei-san,
About socket_timeout:
From: Fabien COELHO
are actually finished. I cannot say that I'm thrilled by that behavior. ISTM
that libpq
should at least attempt to cancel the query
Would you please tell me why you think so?
As I understand the "client-side timeout
Hello David,
This patch hasn't received any review in a while and I'm not sure if that's
because nobody is interested or the reviewers think it does not need any more
review.
It seems to me that this patch as implemented does not quite satisfy anyone.
I think we need to hear something from
Attached: doc_base64_v7.patch
Patch applies cleanly, doc compiles, navigation tested and ok.
"... section 6.8" -> "... Section 6.8" (capital S).
"The string and the binary encode and decode functions..." sentence looks
strange to me, especially with the English article that I do not really
Hello Karl,
Attached: doc_base64_v3.patch
I'm ok with referencing the historical MIME RFC.
"RFC2045 section 6.8" -> "RFC 2045 Section 6.8"
you can link to the RFC directly with:
https://tools.ietf.org/html/rfc2045#section-6.8 (RFC 2045 Section 6.8)
--
Fabien.
Hello Karl,
Doc patch, against master. Documents encode() and decode() base64
format.
It is already documented. This enhances the documentation, though.
Builds for me.
For me as well. Looks ok.
Attached: doc_base64_v1.patch
References RFC2045 section 6.8 to define base64.
Did you consider re
Hello Kirk,
so I tried to apply the patch, but part of the chunk failed,
because of the unused line below which was already removed in the
recent related commit.
PGresult *res;
I removed the line and fixed the other trailing whitespaces.
Indeed. Thanks for the fix.
See t
Bonjour Michaël,
I agree that having a server function (extension?) to do a full checksum
verification, possibly bandwidth-controlled, would be a good thing. However
it would have side effects, such as interfering deeply with the server page
cache, which may or may not be desirable.
In what i
Indeed, the patch needs a rebase & conflict resolution. I'll do it. Later.
Here is an update:
- take advantage of pg_bitutils (although I noted that the "slow"
popcount there could be sped up and shortened with a bitwise
implementation that I just removed from pgbench).
- ad
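The "bitwise implementation" of popcount alluded to above is presumably the classic clear-lowest-set-bit trick; a sketch (hypothetical, not the pg_bitutils code itself):

```python
def popcount(x):
    # Kernighan's trick: x & (x - 1) clears the lowest set bit, so the
    # loop runs once per set bit rather than once per bit position.
    count = 0
    while x:
        x &= x - 1
        count += 1
    return count
```

For sparse inputs this does far fewer iterations than a naive shift-and-test over every bit.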
About the tcp_user_timeout libpq parameter v8.
Basically same thing about the tcp_user_timeout guc v8, especially: do you
have any advice about how I can test the feature, i.e. trigger a timeout?
Patch applies & compiles cleanly. Global check is ok, although there are no
specific tests.
Bonjour Michaël,
I gotta say, my conclusion from this debate is that it's simply a
mistake to do this without involvement of the server that can use
locking to prevent these kinds of issues. It seems pretty absurd to me
to have hacky workarounds around partial writes of a live server, around
tr
Hello again Ryohei-san,
About the tcp_user_timeout libpq parameter v8.
Patch applies & compiles cleanly. Global check is ok, although there are
no specific tests.
Documentation English can be improved. Could a native speaker help,
please?
ISTM that the documentation both states that it w
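One way to understand (and test) the libpq parameter being reviewed: on Linux it maps onto the TCP_USER_TIMEOUT socket option, the time in milliseconds that transmitted data may remain unacknowledged before the kernel drops the connection. A minimal sketch of setting it directly (the helper name is invented; triggering a timeout then amounts to silently dropping the peer's packets, e.g. with a firewall rule, while data is in flight):

```python
import socket

def set_tcp_user_timeout(sock, timeout_ms):
    # Linux-specific option: unacknowledged-transmit time limit in ms.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, timeout_ms)
    return sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT)
```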
Hello Ryohei-san,
There are three independent patches in this thread.
About the socket timeout patch v7:
Patch applies & compiles cleanly. "make check" is ok, although there are
no specific tests, which is probably ok. Doc build is ok.
I'd simplify the doc first sentence to:
""" Number of
Hello David & Surafel,
I think this can be marked as ready for committer now, but I'll defer
to Fabien to see if he's any other comments.
Patch v16 applies and compiles cleanly, local and global "make check"
are ok. Doc build is ok.
I did some manual testing with limit cases which did wor
Hello Andres,
I think putting this into the control file is a seriously bad
idea. Postmaster interlocks against other postmasters running via
postmaster.pid.
Having a second interlock mechanism, in a different file, doesn't make
any sort of sense. Nor does it seem sane to have external tool
Hallo Mickael,
So I have now changed the behaviour so that short writes count as skipped
files and pg_verify_checksums no longer bails out on them. When this
occurs, a warning is written to stderr and their overall count is also
reported at the end. However, unless there are other blocks with bad
c
Hello Andres,
Note that my concern is not about the page size, but rather that as more
commands may change the cluster status by editing the control file, it would
be better that a postmaster does not start while a pg_rewind or enable
checksum or whatever is in progress, and currently there is
However, while at it, there is also the question of whether the control file
should be locked when updated, eg with flock(2) to avoid race conditions
between concurrent commands. ISTM that there is currently no such thing in
the code, but that it would be desirable.
Shouldn't be necessary -
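The flock(2) idea floated above could look roughly like this (a sketch only — PostgreSQL does not actually do this, the helper name is invented, and advisory-lock semantics vary across platforms and filesystems):

```python
import fcntl, os

def update_control_locked(control_path, new_bytes):
    # Take an exclusive advisory lock on the control file so two
    # offline tools serialize their read-modify-write cycles.
    fd = os.open(control_path, os.O_RDWR)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)   # blocks until any holder exits
        os.write(fd, new_bytes)
        os.fsync(fd)
    finally:
        os.close(fd)                      # closing releases the lock
```

Being advisory, this only protects against other cooperating tools that also take the lock, which is exactly the coordination gap the thread is discussing.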
Bonjour Michaël,
Thoughts?
None.
However, while at it, there is also the question of whether the control
file should be locked when updated, eg with flock(2) to avoid race
conditions between concurrent commands. ISTM that there is currently no
such thing in the code, but that it would be
Hallo Michael,
- * src/bin/pg_verify_checksums/pg_verify_checksums.c
+ * src/bin/pg_checksums/pg_checksums.c
That's lacking a rename, or this comment is incorrect.
Right, I started the rename, but then backed off pending further
discussion whether I should submit that or whether the committ
Hello Simon,
pgbench has a strange restriction to only allow 10 arguments, which is too
low for some real world uses.
Since MAX_ARGS is only used for an array of pointers and an array of
integers increasing this should not be a problem, so increase value to 255.
Fine with me on principle.
Hello Tom,
For quite some years now there's been dissatisfaction with our List
data structure implementation. Because it separately palloc's each
list cell, it chews up lots of memory, and it's none too cache-friendly
because the cells aren't necessarily adjacent. Moreover, our typical
usage
Hello Tom,
However, it does not discuss that an IP can (and should, IMHO) be given
through "host" if the point is to specify the target by its IP rather than
a lookup shortcut.
Ah, that's the crux of the problem.
Yep!
There are two ways that you could consider to be "best practice" for us
I also noticed that i is int in this function, but n is int64. That
seems like an oversight.
Indeed, that is a bug!
Here is a v2 with hopefully better wording, comments and a fix for the bug
you pointed out.
--
Fabien.
diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.s
There are pretty good approximations for s > 1.0 using Riemann zeta
function and Euler derived a formula for the s = 1 case.
I believe that's what random_zipfian() already uses, because for s > 1.0
it refers to "Non-Uniform Random Variate Generation" by Luc Devroye, and
the text references th
Hello Tom,
My only complaint about this is that it makes it sound like you *must*
provide "host", even when giving "hostaddr":
That is the idea: "hostaddr" is mostly never needed.
It is the initial misleading issue I've been complaining about: one can
specify an IP *both* in host (although
Hello Robert,
(2) you are not against improving the documentation, although you find it
clear enough already, but you agree that some people could get confused.
The attached patch v4 only improves the documentation so that it reflects
what the implementation really does. I think it too bad to
Hallo Michael,
New patch attached.
Patch applies cleanly. Compiles, "make check" ok. doc build is also ok.
There are no tests, which is probably fine for such an interactive
feature.
Docs look okay to me. Clear and to the point.
About :
total_percent = total_size ? (int64) ((current_s
Hello Peter,
My 0.02€: I'm not quite interested in maintaining a tool for *one*
benchmark, whatever the benchmark, its standardness or quality.
What I like in "pgbench" is that it is both versatile and simple so that
people can benchmark their own data with their own load and their own
quer
Hello Tom,
Unfortunately, there was no activity over the last few commitfests and the
proposed patch pgbench-tap-progress-6 can't be applied anymore without
conflicts. Fabien, what are your plans about it, could you please post a
rebased version?
Here it is.
I'm confused about the intende
I am getting an error while applying the patch. I think the patch needs to be
redone on the latest code in master, as there are some commits in master
after the patch was created.
Possibly. Here is a v5 with a simplified text.
The key point is to tell that "host" expects names, IPs or directories,
and
Hello Tomas,
I'm trying to use random_zipfian() for benchmarking of skewed data sets,
and I ran head-first into an issue with rather excessive CPU costs.
[...] This happens because generalizedHarmonicNumber() does this:
for (i = n; i > 1; i--)
ans += pow(i, -s);
wher
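The quoted loop makes the cost easy to see: every call is O(n). A direct Python transcription of the C shown above (hypothetical, for illustration):

```python
def generalized_harmonic(n, s):
    # H(n, s) = sum_{i=1..n} i^(-s); the i = 1 term contributes 1, and
    # the loop below mirrors the C "for (i = n; i > 1; i--)" exactly.
    ans = 0.0
    i = n
    while i > 1:
        ans += pow(i, -s)
        i -= 1
    return ans + 1.0
```

With a large n, e.g. n in the billions for a big table, summing term by term on every draw is where the excessive CPU cost reported in the thread comes from.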
Hello Kyotaro-san,
From that standpoint, the first hunk in the patch caught my eye.
host
-Name of host to connect to.host
name
-If a host name begins with a slash, it specifies Unix-domain
-communication rather than TCP/IP communication; the val
On 2018-10-26 09:21:51 +0200, Fabien COELHO wrote:
(1) you are somehow against changing the current implementation, eg erroring
out on possibly misleading configurations, because you do not think it is
really useful to help users in those cases.
I find this formulation somewhat passive
Hello Andres,
+# PGAC_C_BUILTIN_CLZLL
I think this has been partially superseded by
commit 711bab1e4d19b5c9967328315a542d93386b1ac5
Author: Alvaro Herrera
Date: 2019-02-13 16:10:06 -0300
Indeed, the patch needs a rebase & conflict resolution. I'll do it. Later.
+Function pr_pe
Hallo Michael,
I'm wondering (possibly again) about the existing early exit if one block
cannot be read on retry: the command should count this as a kind of bad
block, proceed on checking other files, and obviously fail in the end, but
having checked everything else and generated a report. I d
Hello David,
I do not understand why dump_inserts declaration has left the "flags for
options" section.
I moved that because it's no longer just a flag. It now stores an int value.
Hmmm. Indeed, all the "int"s of this section should be "bool" instead. Now,
some "flags" do not appear althou
Hola Alvaro,
In reply to https://postgr.es/m/alpine.DEB.2.21.1901102211350.27692@lancre
wherein Fabien wrote:
I'm not very happy with the resulting syntax, but IMO the feature is useful.
My initial design was to copy PL/pgSQL "into" with some "\into" orthogonal
to \; and ;, but the implementa
Hello David,
Wondering if you have anything else here? I'm happy for the v13
version to be marked as ready for committer.
I still have a few comments.
Patch applies cleanly, compiles, global & local make check are ok.
Typos and style in the doc:
"However, since, by default this op
Hello Tom,
Fixing the problem involves deciding what the right behavior is, and
updating the documentation and the implementation accordingly. Currently the
documentation suggests that :ERROR is about SQL error (subject to
interpretation), and the implementation is more or less consistent with
t
Bonsoir Daniel,
Sure. As there are several bugs (doubtful features) uncovered, a first
patch could fix "COPY ... TO STDOUT \g file", but probably replicate ERROR's
current behavior, however debatable it is (i.e. your patch without the
ERROR change, which is unrelated to the bug being fixed), and t
Hello David,
I thought about this and looked into it, but I decided it didn't look
like a smart thing to do. The reason is that if --inserts sets
dump_inserts to 1 then --rows-per-insert sets it to something else
that's likely fine, but if that happens in the opposite order then the
--rows-pe
Hello David & Surafel,
About this v10:
Patch applies and compiles cleanly, local & global "make check" ok.
A few comments, possibly redundant with some already in the thread.
The rows-per-inserts option is out of alphabetical order in the getopt struct.
The help string does not specify the expected argument.
I st
An assorted set of comment typos is fixed in the attached patch.
--
Fabien.
diff --git a/contrib/pgcrypto/pgp-decrypt.c b/contrib/pgcrypto/pgp-decrypt.c
index 7d31e5354b..96c344c30b 100644
--- a/contrib/pgcrypto/pgp-decrypt.c
+++ b/contrib/pgcrypto/pgp-decrypt.c
@@ -132,7 +132,7 @@ pgp_parse_pkt_
Hello Tom,
Here is a POC which defines an internal interface for a PRNG, and use it
within pgbench, with several possible implementations which default to
rand48.
I seriously dislike this patch. pgbench's random support is quite
overengineered already IMO, and this proposes to add a whole b
It's not demonstrably slower than 2.5 either. (1.1 is measurably slower;
probably not by enough for anyone to care, but 1.5 is good enough for me.)
Good, if it fails quickly enough for you.
Attached a patch with the zipf doc update & the TAP test parameter change.
--
Fabien.
diff --git a/doc/s
The first value is taken about 75% of the time for N=1000 and s=2.5, which
means that a non-deterministic implementation would succeed about 0.75² ~
56% of the time on that one.
Right, that's about what we've been seeing on OpenBSD.
Also, the drawing procedure is less efficient when the para
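The figures above are easy to verify: under the zipfian PMF p(i) ∝ i^(-s), the probability of drawing the first value is 1/H(N, s). A quick sanity check (illustrative only, not pgbench's own code):

```python
def zipf_first_value_prob(n, s):
    # p(1) = 1 / H(n, s), with H(n, s) = sum_{i=1..n} i^(-s).
    h = sum(i ** -s for i in range(1, n + 1))
    return 1.0 / h
```

For N=1000 and s=2.5 this gives about 0.745, matching the ~75% quoted, and squaring it gives the ~56% two-draw figure.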
Hello Tom,
Given these results, I do not think that it is useful to change
random_zipfian TAP test parameter from 2.5 to something else.
I'm not following this argument. The test case is basically useless
for its intended purpose with that parameter, because it's highly
likely that the failu
Hello Surafel,
Okay, thank you for explaining. I attach a patch corrected as such.
About this v9: applies cleanly, compiles, global and local "make check"
ok.
The option is not exercised in the TAP tests. I'd suggest that it should be
tested on a small table with zero, 1, more than the val
BEGIN;
UPDATE pgbench_branches SET bbalance = bbalance + 1 RETURNING * \g /bad
// the update is performed, the transaction is not rolled back,
// *but* the output file was not written; "COMMIT" saves the changes.
if PQexec() could not store the results for lack of memory, it would
yield a P
- works around pgbench command splitting on spaces,
if postgres sources are in a strangely named directory…
I tested within a directory named "pg .* dir".
I've also tested it with the initial case (including a +) and it works.
Good, thanks for checking!
--
Fabien.
Hello Tom,
BTW, did you look at the question of the range of zipfian?
Yep.
I confirmed here that as used in the test case, it's generating a range way
smaller than the other ones: repeating the insertion snippet 1000x produces
stats like this: [...]
I have no idea whether that indicates
Hello Tom,
I did a little bit of googling on this subject last night, and it
seems that at least some people believe that the answer is to not
use glob, period, but read the directory for yourself.
As a short-term move to un-break the buildfarm, I'm just going to
revert that patch altogether.
Bonsoir Daniel,
sql> SELECT 1 \g /BAD
/BAD: Permission denied
sql> \echo :ERROR
false
That seems broken, because it's pointless to leave out a class of errors
from ERROR.
Yep. That is my point, but fixing it requires deciding whether it is okay
to change ERROR documentation and
Hello Tom,
Well, glob does have some metacharacters, so not doing any quoting
at all still leaves us with failure modes.
I did a little bit of googling on this subject last night, and it
seems that at least some people believe that the answer is to not
use glob, period, but read the directory
Hello Andrew,
About windows-specific issues, from File::Glob man page:
""" On DOSISH systems, backslash is a valid directory separator
character. In this case, use of backslash as a quoting character (via
GLOB_QUOTE) interferes with the use of backslash as a directory
separator. ... """
It se
Hello Tom,
Hm, so bowerbird (a Windows machine) has been failing the pgbench tests
since this went in, cf
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bowerbird&dt=2019-01-20%2004%3A57%3A01
I'm just guessing, but I suspect that bowerbird is using a path spec that
includes a backsl
I understand from the code that the COPY is really executed, so the ERROR
and so ROW_COUNT about the SQL should reflect that. Basically the change
makes the client believe that there is an SQL error whereas the error is
on the client.
Right, but whether COPY fails because psql can't write the
Bonjour Daniel,
:ROW_COUNT is incorrect after COPY TO STDOUT but this is a PG11-only
bug, :ROW_COUNT being a recent addition, so maybe we should deal with
this separately, since the current patch is supposed to address all versions?
ISTM that the patch is considered a bug fix, so it should be
Hello Tom,
Maybe on OpenBSD pg should switch srandom to srandom_deterministic?
Dunno. I'm fairly annoyed by their idea that they're smarter than POSIX.
Hmmm. I'm afraid that is not that hard.
However, for most of our uses of srandom, this behavior isn't awful;
it's only pgbench that has
Bonjour Daniel,
I took a quick look at this patch.
PFA an updated patch addressing your comments and Fabien's.
About this v2.
Applies cleanly, compiles cleanly, "make check" ok.
No tests, but Tom suggests this does not belong here: the committer is
right:-)
I tested the feature manual
Maybe on OpenBSD pg should switch srandom to srandom_deterministic?
Dunno. I'm fairly annoyed by their idea that they're smarter than POSIX.
However, for most of our uses of srandom, this behavior isn't awful;
it's only pgbench that has an expectation that the platform random()
can be made t
BTW, if you're wondering why curculio is still failing the pgbench
test,
Hmmm, that is interesting! It shows that at least some TAP tests are
useful.
all is explained here:
https://man.openbsd.org/srandom
Or at least most is explained there.
Yep. They try to be more serious than other
I'd rather keep it by simply adding the "|unknown" alternative. 30 years
of programming have taught me that testing limit & error cases is useful,
although you never know when it will be proven so.
Sorry, I don't buy this line of argument.
Reasonable test design requires making cost/benefit
Hello Tom,
Although we've got a few NetBSD and OpenBSD buildfarm critters,
none of them are doing --enable-tap-tests. If they were, we'd
have noticed the pgbench regression tests falling over:
[...]
I am, TBH, inclined to fix this by removing that test case rather
than teaching it another s
Hello Alvaro,
There's a lot of the new code in pgbench that can be simplified if we
remove \cset.
I'm not very happy with the resulting syntax, but IMO the feature is
useful. My initial design was to copy PL/pgSQL "into" with some "\into"
orthogonal to \; and ;, but the implementation was
Hello Tom,
So who needs that? Just merge the queries, if it's so important that
you avoid multiple round trips.
Pgbench is about testing (measuring) performance in various settings and
realistic scenarios: queries prepared or not, possibly combined, and so on.
As postgres allows to send se