Hello Masahiko,
So I would suggest to:
- fix the compilation issue
- leave -l/--log as it is, i.e. use "pgbench_log" as a prefix
- add --log-prefix=... (long option only) for changing this prefix
I agree. It's better to add a separate option to specify the prefix
of the log file instead of
[ ... v4 ]
I checked. It works as expected. I have no more comments.
If it's okay with Fabien, we can mark it ready for committer.
Patch applies, compiles (including the documentation), make check ok and
the feature works for me. Code could be a little simpler, but it is okay.
I've switched the
Hello Rafia,
Please keep copying to the list.
I find the first version of this patch acceptable, in that it would be
helpful in enhancing the readability of expressions as you mentioned.
However, the second version is not clear; as I mentioned before
also, there has to be detailed d
+1. My vote is for backslash continuations.
I'm fine with that!
--
Fabien.
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Hello Julian,
I've updated my patch to work with the changes introduced to libpq by
allowing multiple hosts.
Ok. Patch applies cleanly, compiles & checks (although yet again the
feature is not tested).
Feature tested and works for me, although I'm not sure how the multi-host
warning about
Hello Magnus,
It turns out the "c2" class is added by tidy. The reason is this:
http://api.html-tidy.org/tidy/quickref_5.0.0.html#clean
I've removed the flag for the devel docs build for now (or - for any XML
based docs build). I've also forced another docs load, so the results can
be check
-f
filename
The filename in the newer html appears much larger under chrome, seemingly
because of a code element nested within another code element. Maybe a bug
in chrome CSS interpretation, because the CSS on code seems to indicate
"font-size: 1.3em", but it seems to do 1.3**2 instead for "filename"... However it
Hello Aleksander,
```
xloginsert.c:742:18: warning: implicit conversion from 'int' to 'char'
changes value from 253 to -3 [-Wconstant-conversion]
```
There is a bunch of these in "xlog.c" as well, and the same code is used
in "pg_resetwal.c".
Patch that fixes these warnings is attached to
Welcome to v15, highlights:
Files "conditional.h" and "conditional.c" are missing from the patch.
Also, is there a particular reason why the tap tests have been removed?
--
Fabien.
Hello Daniel,
Ah, I see why *that* wants to know about it ... I think. I suppose you're
arguing that variable expansion shouldn't be able to insert, say, an \else
in a non-active branch? Maybe, but if it can insert an \else in an active
branch, then why not non-active too? Seems a bit incons
Hello Corey,
v16 is everything v15 promised to be.
My 0.02€:
Patch applies, make check ok, psql make check ok as well.
Welcome to v15, highlights:
- all conditional data structure management moved to conditional.h and
conditional.c
Indeed.
I cannot say that I find it better, but (1) T
About v17:
Patch applies, "make check" & psql "make check" ok.
... '@' [...] I noticed that it takes precedence over '!'. [...]
My reasoning was this: if you're in a false block, and you're not connected
to a db, the \c isn't going to work for you until you get out of the false
block, so rig
It undoubtedly would make pg_dump smaller, though I'm not sure how much
that's worth since if you care at all about that you'll gzip it.
But, the other thing it might do is speed up COPY, especially on input. Some
performance tests of that might be interesting.
For what it is worth:
Ascii8
Hello Corey,
About v18: Patch applies, make check ok, psql tap tests ok.
ISTM that contrary to the documentation "\elif something" is not evaluated
in all cases, and the resulting code is harder to understand with a nested
switch and condition structure:
switch
default
read
if
Hello Corey,
It doesn't strike me as much cleaner, but it's no worse, either.
Hmmm.
The "if (x) { x = ... ; if (x) {" does not help much to improve
readability and understandability...
My 0.02€ about v19:
If there are two errors, I do not care which one is shown, both will have
to be fi
Hello Peter,
I wrote a few lines of perl to move replaceable out of option and did some
manual editing in special cases; the resulting 359 simple changes are
attached.
If the stylesheet produces unpleasant output, then the stylesheet should
be changed.
Sure.
I'm not sure whether it is a sty
Hello Corey,
on elif:
    if misplaced elif:
        misplaced elif error
    else:
        eval expression
        => possible eval error
        set new status if eval fine
Currently it is really:
switch (state) {
    case NONE:
    case ELSE_TRUE:
    case ELSE_FALSE:
        success = false;
        show some error
        de
Hello Corey,
That is accurate. The only positive it has is that the user only
experiences one error, and it's the first error that was encountered if
reading top-to-bottom, left to right. It is an issue of which we prioritize
- user experience or simpler code.
Hmmm. The last simpler structure
Hello Corey,
Tom was pretty adamant that invalid commands are not executed. So in a case
like this, with ON_ERROR_STOP off:
\if false
\echo 'a'
\elif true
\echo 'b'
\elif invalid
\echo 'c'
\endif
Both 'b' and 'c' should print, because "\elif invalid" should not execute.
The code I had before
I'm not sure whether it is a stylesheet issue: it is the stylesheet as
interpreted by chrome... all is fine with firefox. Whether the bug is in
chrome or the stylesheet or elsewhere is well beyond my HTML/CSS skills, but
I can ask around.
After asking around, and testing with a colleague, w
After asking around, and testing with a colleague, we got the same strange
size behavior on firefox mac as well.
There are 2 possible solutions:
(1) do not nest in the first place (i.e. apply the patch I sent).
(2) do use absolute sizes in the CSS, not relative ones like "1.3em"
which acc
Hello Corey,
v20: attempt at implementing the switch-on-all-states style.
For the elif I think it is both simpler and better like that. Whether
committer will agree is an unknown, as always.
For endif, I really exaggerated, "switch { default: " is too much, please
accept my apology. Maybe ju
For endif, I really exaggerated, "switch { default: " is too much, please
accept my apology. Maybe just do the pop & error reporting?
Or maybe be more explicit:
switch (current state)
    case NONE:
        error no matching if;
    case ELSE_FALSE:
    case ELSE_TRUE:
    case ...:
        pop;
        Asse
About v21:
Patch applies with some offset, make check ok, psql tap tests ok.
I also did some interactive tests which behaved as I was expecting.
I'm ok with this patch. I think that the very simple automaton code
structure achieved is worth the very few code duplications. It is also
signific
Hello Peter,
I think what you are looking at is the web site stylesheet.
Yep.
The whole thing looks fine to me using the default stylesheet. On the
web site, it looks wrong to me too. I don't know what the rationale for
using 1.3em for code elements is, but apparently it's not working correctly.
In
Starting to poke at this... the proposal to add prove checks for psql
just to see whether \if respects ON_ERROR_STOP seems like an incredibly
expensive way to test a rather minor point. On my machine, "make check"
in bin/psql goes from zero time to close to 8 seconds. I'm not really
on board
Hello Tom,
* Daniel Verite previously pointed out the desirability of disabling
variable expansion while skipping script. That doesn't seem to be here,
ISTM that it is still there, but for \elif conditions which are currently
always checked.
fabien=# \if false
fabien@# \echo `echo B
Hello Rafia,
I was reviewing v7 of this patch, to start with I found following white
space errors when applying with git apply,
/home/edb/Desktop/patches/others/pgbench-into-7.patch:66: trailing
whitespace.
Yep.
I do not know why "git apply" sometimes complains. All is fine for me both
with
Hello David,
This patch applies cleanly and compiles at cccbdde with some whitespace
issues.
$ patch -p1 < ../other/pgbench-more-ops-funcs-9.patch
(Stripping trailing CRs from patch.)
My guess is that your mailer changed the eol-style of the file when saving
it:
sh> sha1sum pg-patches/
Hello David,
Repost from bugs.
This patch does not apply at cccbdde:
Indeed. It should not. The fix is for the 9.6 branch. The issue has been
fixed by some heavy but very welcome restructuring in master.
Marked as "Waiting for Author".
I put it back to "Needs review".
--
Fabien.
--
Hello Corey & Tom,
What is not done:
- skipped slash commands still consume the rest of the line
That last part is big, to quote Tom:
* More generally, I do not think that the approach of having exec_command
simply fall out immediately when in a false branch is going to work,
because it ignor
Hello Tom,
ISTM that I've tried to suggest working around that complexity by:
- document that \if-related commands should only occur at line start
(and extend to eol).
- detect and complain when this is not the case.
I think this is a lousy definition, and would never be considered if
Hello Tom,
I also fear that there are corner cases where the behavior would still
be inconsistent. Consider
\if ...
\set foo `echo \endif should not appear here`
In this instance, ISTM that there is no problem. On "\if true", set is
executed, all is well. On "\if false", the whole line wou
Hello Corey,
v24 highlights:
The v24 patch is twice as large as the previous submission. Sigh.
If I'm reading headers correctly, it seems that it adds an
"expected/psql-on-error-stop.out" file without a corresponding test source
in "sql/". Is this file to be simply ignored, or is a source
Hello Tom,
I'm not entirely convinced that function-per-command is an improvement
though. [...]
I don't have a definite opinion on that core question yet, since I've not
read this version of the patch. Anybody else want to give an opinion?
My 0.02€:
I've already provided my view...
Pers
Add missing support for new node fields
Commit b6fb534f added two new node fields but neglected to add copy and
comparison support for them. Mea culpa, I should have checked for that.
I've been annoyed by these stupid functions and forgetting to update them
since I run into them while trying t
Andres said during the unconference last month that there was a way to
get `make check` to work with PGXS. The idea is that it would initialize
a temporary cluster, start it on an open port, install an extension, and
run the extension's test suite. I think the pg_regress --temp-install,
maybe
That does not mean that it starts a new cluster on a port. It means it
will test it against an existing cluster after you have installed into
that cluster.
Yes, that is what I was saying.
It invokes "psql" which is expected to work directly. Note that there
is no temporary installation, it
I would suggest to add that to https://wiki.postgresql.org/wiki/Todo.
I may look into it when I have time, over the summer. The key point is
that there is no need for a temporary installation, but only of a
temporary cluster, and to trick this cluster into loading the
uninstalled extension,
Hello Mitsumasa-san,
And I'm also interested in your "decile percents" output, like the
following:
decile percents: 39.6% 24.0% 14.6% 8.8% 5.4% 3.3% 2.0% 1.2% 0.7% 0.4%
Sure, I'm really fine with that.
I think that it is easier than before. The sum of the decile percents is just 100%.
That's a
I have just updated the wording so that it may be clearer:
Oops, I have sent the wrong patch, without the wording fix. Here is the
real updated version, which I tested.
probability of first/last percent of the range: 11.3% 0.0%
--
Fabien.

diff --git a/contrib/pgbench/pgbench.c b/contrib/p
Hello Gavin,
decile percents: 69.9% 21.0% 6.3% 1.9% 0.6% 0.2% 0.1% 0.0% 0.0% 0.0%
probability of first/last percent of the range: 11.3% 0.0%
I would suggest that probabilities should NEVER be expressed in percentages!
As a percentage probability looks weird, and is never used for serious
s
Yea. I certainly disagree with the patch in its current state because
it copies the same 15 lines several times with a two word difference.
Independent of whether we want those options, I don't think that's going
to fly.
I liked a simple static string for the different variants, which means
Hello Robert,
Well, I think the feedback has been pretty clear, honestly. Here's
what I'm unhappy about: I can't understand what these options are
actually doing.
We can try to improve the documentation, once more!
However, ISTM that it is not the purpose of pgbench documentation to be a
p
pgbench with gaussian & exponential, part 1 of 2.
This patch is a subset of the previous patch which only adds the two
new \setrandom gaussian and exponantial variants, but not the
adapted pgbench test cases, as suggested by Fujii Masao.
There is no new code nor code changes.
The corresponding
However, ISTM that it is not the purpose of pgbench documentation to be a
primer about what is an exponential or gaussian distribution, so the idea
would yet be to have a relatively compact explanation, and that the
interested but clueless reader would document h..self from wikipedia or a
text b
For example, when we set the number of transaction 10,000 (-t 1),
range of aid is 100,000,
and --exponential is 10, decile percents is under following as you know.
decile percents: 63.2% 23.3% 8.6% 3.1% 1.2% 0.4% 0.2% 0.1% 0.0% 0.0%
highest/lowest percent of the range: 9.5% 0.0%
They mean
Please find attached 2 patches, which are a split of the patch discussed
in this thread.
(A) add gaussian & exponential options to pgbench \setrandom
the patch includes sql test files.
There is no change in the *code* from previous already reviewed
submissions, so I do not think that it
Hello devs,
I noticed that my pg_stat_statements is cluttered with hundreds of entries
like "DEALLOCATE dbdpg_p123456_7", each occurring only once.
Here is a patch and sql test file to:
* normalize DEALLOCATE utility statements in pg_stat_statements
Some drivers such as DBD:Pg generate proce
Hello Andres,
Why isn't the driver using the extended query protocol? Sending
PREPARE/EXECUTE/DEALLOCATE wastes roundtrips...
It seems to me that it would be more helpful if these similar entries were
aggregated together, that is if the query "normalization" could ignore the
name of the descr
That's because PREPARE isn't executed as its own statement, but done on
the protocol level (which will need noticeably fewer messages). There's
no builtin logic to ignore actual PREPARE statements.
ISTM that there is indeed a special handling in function
pgss_ProcessUtility for PREPARE and E
[...]. If we do something we should go for the && !IsA(parsetree,
DeallocateStmt), not the normalization.
Ok.
The latter is pretty darn bogus.
Yep:-) I'm fine with ignoring DEALLOCATE altogether.
--
Fabien.
If you do not like my normalization hack (I do not like it much either:-), I
have suggested to add "&& !IsA(parsetree, DeallocateStmt)" to the condition
above, which would ignore DEALLOCATE as PREPARE and EXECUTE are currently
and rightfully ignored.
Well, EXECUTE isn't actually ignored, but
Currently \pset is supported without any argument also, so same is updated in
documentation.
\pset option [ value ]
Changed to
\pset [ option [ value ] ]
This patch does update the documentation as stated, and makes it consistent
with the reality and the embedded psql help. This is an impro
This patch does update the documentation as stated, and makes it
consistent with the reality and the embedded psql help. This is an
improvement and I recommend its inclusion.
I would also suggest to move the sentence at the end of the description:
"\pset without any arguments displays the curre
Please find attached 2 patches, which are a split of the patch discussed in
this thread.
Please find attached a very minor improvement to apply a code (variable
name) simplification directly in patch A so as to avoid a change in patch
B. The cumulated patch is the same as previous.
(A) ad
Hello Alvaro,
ISTM that a desirable and reasonably simple to implement feature
would be to be able to set the blocksize at "initdb" time, and
"postgres" could use the value found in the database instead of a
compile-time one.
I think you will find it more difficult to implement than it seems
Hello Robert,
Some review comments:
Thanks a lot for your return.
Please find attached two new parts of the patch (A for setrandom
extension, B for pgbench embedded test case extension).
1. I suggest that getExponentialrand and getGaussianrand be renamed to
getExponentialRand and getGaus
Resent: previous message was stalled because of a bad "From:".
ISTM that a desirable and reasonably simple to implement feature
would be to be able to set the blocksize at "initdb" time, and
"postgres" could use the value found in the database instead of a
compile-time one.
I think you will f
Note that I was more asking about the desirability of the feature,
the implementation is another, although also relevant, issue. To me
it is really desirable given the potential performance impact, but
maybe we should not care about 10%?
10% performance improvement sounds good, no doubt. What
Thank you for your great documentation and fixing work!!!
It becomes very helpful for understanding our feature.
Hopefully it will help make it, or part of it, pass through.
I add two feature in gauss_B_4.patch.
1) Add gaussianProbability() function
It is same as exponentialProbability(). A
As I was investigating, playing around with blocksize, I noticed that some test
cases under "make check" vary depending on compilation parameters, as
they:
- do not order the result of queries, thus are not deterministic
[join, with]
- output query plans which differ depending on some parame
Hello Andres,
The default blocksize is currently 8k, which is not necessary optimal for
all setup, especially with SSDs where the latency is much lower than HDD.
I don't think that really follows.
The rationale, which may be proven false, is that with a SSD the latency
penalty for reading
As I was investigating, playing around with blocksize, I noticed that some test
cases under "make check" vary depending on compilation parameters, as
they:
There has never been any expectation that the regression tests would
pass exactly no matter what the environment. If we tried to make them
d
The rationale, which may be proven false, is that with a SSD the
latency penalty for reading and writing randomly vs sequentially is
much lower than for HDD, so there is less incentive to group stuff in
larger chunks on that account.
A higher number of blocks has overhead unrelated to this t
The basic claim that I'm making wrt to this benchmark is that there may
be a significant impact on performance with changing the block size,
thus this is worth investigating. I think this claim is quite safe,
even if the benchmark is not the best possible.
Well, you went straight to making i
Hello Robert,
I wish to agree, but my interpretation of the previous code is that
they were ignored before, so ISTM that we are stuck with keeping the
same unfortunate behavior.
I don't agree. I'm not in a huge hurry to fix all the places where
pgbench currently lacks error checks just bec
Hello Robert,
3. Similarly, I suggest that the use of gaussian or uniform be an
error when argc < 6 OR argc > 6. I also suggest that the
parenthesized distribution type be dropped from the error message in
all cases.
I wish to agree, but my interpretation of the previous code is that they
we
Attached B patch does turn incorrect setrandom syntax into errors instead of
ignoring extra parameters.
First A patch is repeated to help commitfest references.
Oops, I applied the change on the wrong part:-(
Here is the change on part A which checks setrandom syntax, and B for
completenes
double dx = 0.0, dy = 0.0;

if (point->x < box->low.x)
    dx = box->low.x - point->x;
if (point->x > box->high.x)
    dx = point->x - box->high.x;
if (point->y < box->low.y)
    dy = box->low.y - point->y;
if (point->y > box->high.y)
    dy = point->y - box->high.y;

return HYPOT(dx, dy);
I feel my
ISTM that you miss the projection on the segment if dx=0 or dy=0.
I don't need to find projection itself, I need only distance. When dx = 0
then nearest point is on horizontal line of box, so distance to it is dy.
Same when dy = 0. When both of them are 0 then point is in the box.
Indeed. I
Hello Robert,
I've committed the changes to pgbench.c and the documentation changes
with some further wordsmithing.
Ok, thanks a lot for your reviews and your help with improving the
documentation.
I don't think including the other changes in patch A is a good idea,
Fine. It was mostly
Hello Robert,
[...]
One of the concerns that I have about the proposal of simply slapping a
gaussian or exponential modifier onto \setrandom aid 1 :naccounts is
that, while it will allow you to make part of the relation hot and
another part of the relation cold, you really can't get any more
Hello,
Version one is "k' = 1 + (a * k + b) modulo n" with "a" prime with
respect to "n", "n" being the number of keys. This is nearly possible,
but for the modulo operator which is currently missing, and that I'm
planning to submit for this very reason, but probably another time.
That's pr
This patch is pretty trivial.
Another slightly less trivial but more useful version.
The issue is that there are 3 definitions of modulo, two of which are fine
(Knuth floored division and Euclidian), and the last one much less useful.
Alas, C (%) & SQL (MOD) choose the bad definition:-( I
Hello Robert,
The issue is that there are 3 definitions of modulo, two of which are fine
(Knuth floored division and Euclidian), and the last one much less useful.
Alas, C (%) & SQL (MOD) choose the bad definition:-( I really need any of
the other two. The attached patch adds all versions, with
Hello Alvaro,
I wonder if it would be necessary to offer the division operator
semantics corresponding to whatever additional modulo operator we choose
to offer. That is, if we add emod, do we need "ediv" as well?
It would make sense; however I do not need it, and I'm not sure of a use
case
For example, if we had reason to be concerned about *adversarial*
inputs, I think that there is a good chance that our qsort() actually
would be problematic to the point of driving us to prefer some generally
slower alternative.
That is an interesting point.
Indeed, a database in general of
If so, adding some randomness in the decision process would suffice to
counter the adversarial input argument you raised.
This is specifically addressed by the paper. Indeed, randomly choosing
a pivot is a common strategy. It won't fix the problem.
Too bad. I must admit that I do not see how
Included is the patch to enhance the behavior of pgbench in this regard
IMO. Here is a sample session after patching:
$ ./pgbench -c 10 -T 300 -S -i test
some parameters cannot be used in initialize mode
I have not tested, but the patch looks ok in principle.
I'm not sure of the variable na
Three different modulo operators seems like a lot for a language that
doesn't even have a real expression syntax, but I'll yield to whatever
the consensus is on this one.
Here is a third simpler patch which only implements the Knuth's modulo,
where the remainder has the same sign as the divis
Random pivot selection will certainly result in more frequent lopsided
partitions without any malicious intent.
Yep. It makes "adversary input" attacks more or less impossible, at the
price of higher average cost. Maybe a less randomized version would do,
i.e. select randomly one of the "3"
Maybe we ought to break down and support a real expression syntax.
Sounds like that would be better all around.
Adding operators is more or less orthogonal with providing a new
expression syntax. I'm not sure that there is currently a crying need for
it (a syntax). It would be a significant
IMHO, while worst case performance is a very useful tool for analyzing
algorithms (particularly their worst case time complexity), a worst
case should be put in its practical context. For example, if we had
reason to be concerned about *adversarial* inputs, I think that there
is a good chance th
Hello John,
[...]
In fact, the mentioned paper says this about the subject "Moreover, if
worst-case performance is important, Quicksort is the wrong algorithm."
I fully agree with this conclusion.
--
Fabien
Hello Tatsuo-san,
Thanks for the review. I have registered it to Aug Commit fest.
https://commitfest.postgresql.org/action/patch_view?id=1532
I'm not sure of the variable name "is_non_init_parameter_set". I would
suggest "benchmarking_option_set"?
Ok, I will replace the variable name as you
Hello Andres,
But further benchmarks sound like a good idea.
I've started running some benchmarks with pgbench, with varying block &
WAL block sizes. I've done a blog post on a small subset of results,
focussing on block size with SSDs and to validate the significance of the
figures found,
Attached patch removes spurious brackets from pgbench documentation.
--
Fabien.

diff --git a/doc/src/sgml/pgbench.sgml b/doc/src/sgml/pgbench.sgml
index b7d88f3..1551686 100644
--- a/doc/src/sgml/pgbench.sgml
+++ b/doc/src/sgml/pgbench.sgml
@@ -748,7 +748,7 @@ pgbench options dbname
Hello Marko,
Here's a patch for making PL/PgSQL throw an error during compilation (instead
of runtime) if the number of parameters passed to RAISE doesn't match the
number of placeholders in the error message. I'm sure people can see the pros of
doing it this way.
Patch scanned, applied & tested
Hello,
- I would suggest to avoid "continue" within a loop so that the code is
simpler to understand, at least for me.
I personally find the code easier to read with the continue.
Hmmm. I had to read the code to check it, and I did it twice. The point is
that there are 3 exit points instead
one note: this patch can introduce compatibility issues - partially
broken functions, where some badly written RAISE statement was never
executed.
I am not against this patch, but it should be in extra check probably ??
I'm not sure about what you mean by "it should be in extra check".
Yet another very minor typo in pgbench doc.
I'm not sure of the best way to show formula in the doc.
--
Fabien.

diff --git a/doc/src/sgml/pgbench.sgml b/doc/src/sgml/pgbench.sgml
index 1551686..7d09a2d 100644
--- a/doc/src/sgml/pgbench.sgml
+++ b/doc/src/sgml/pgbench.sgml
@@ -782,7 +782,7 @@ pgb
Add --limit to limit latency under throttling
Under throttling, transactions are scheduled for execution at certain
times. Transactions may be far behind schedule and the system may catch up
with the load later. This option allows changing this behavior by
skipping transactions which are too
After publishing some test results with pgbench on SSD with varying page
size, Josh Berkus pointed out that pgbench uses small 100-bytes tuples,
and that results may be different with other tuple sizes.
This patch adds an option to change the default tuple size, so that this
can be tested ea
Hello Andres,
This patch adds an option to change the default tuple size, so that this can
be tested easily.
I don't think it's beneficial to put this into pgbench. There really
isn't a relevant benefit over using a custom script here.
The scripts to run are the standard ones. The differenc
I don't think it's beneficial to put this into pgbench. There really
isn't a relevant benefit over using a custom script here.
The scripts to run are the standard ones. The difference is in the
*initialization* phase (-i), namely the filler attribute size. There is no
custom script for initial
I'm not sure about the implication of ALTER on the table storage,
Should be fine in this case. But if that's what you're concerned about -
understandably -
Indeed, my (long) experience with benchmarks is that it is a much more
complicated that it looks if you want to really understand what
Hmmm. This would mean much more changes than the pretty trivial patch I
submitted
FWIW, I find that patch really ugly. Adding the filler's width in a
printf, after the actual DDL declaration. Without so much as a
comment. Brr.
Indeed. I'm not too proud of that very point either:-) You are rig
The custom initialization is to run a manual ALTER after the
initialization.
Sure, it can be done this way.
I'm not sure about the implication of ALTER on the table storage,
Should be fine in this case.
After some testing and laughing, my conclusion is "not fine at all". The
"filler" att