Mark Dilger writes:
> I use CREATE RULE within startup files in the fork that I maintain. I have
> lots of them, totaling perhaps 50k lines of rule code. I don't think any of that
> code would have a problem with the double-newline separation you propose,
> which seems a more elegant solution
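The double-newline separation rule under discussion can be modeled in a few lines. This is an illustrative Python sketch of the splitting rule only (a semicolon at end of line followed by a blank line ends a command), not the actual InteractiveBackend() code:

```python
import re

def split_commands(script: str):
    """Split a standalone-backend script on a semicolon at end of line
    followed by a blank line (the double-newline separation rule).
    Semicolons inside a command body, e.g. inside a CREATE RULE action
    list, survive as long as no blank line follows them.  Illustrative
    sketch only, not the real backend logic."""
    parts = re.split(r';[ \t]*\n[ \t]*\n', script)
    # Re-attach a single terminating semicolon to each non-empty command.
    return [p.strip().rstrip(';').rstrip() + ';' for p in parts if p.strip()]
```

Under this rule a CREATE RULE whose body contains internal semicolons parses as one command, since none of its semicolons is followed by a blank line.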
Craig Ringer writes:
> On 13 December 2015 at 06:31, Tom Lane wrote:
>> I'm not particularly wedded to this rule. In principle we could go so
>> far as to import psql's code that parses commands and figures out which
>> semicolons are command terminators --- but that is a pretty large chunk
>> of code, and I think it'd really be
> On Dec 12, 2015, at 9:40 PM, Tom Lane wrote:
>
> Mark Dilger writes:
>>> On Dec 12, 2015, at 3:42 PM, Tom Lane wrote:
>>> ... In general, though, I'd rather not try to
>>> teach InteractiveBackend() such a large amount about SQL syntax.
>
> I use CREATE RULE within startup files in the fork that I maintain.
On 13 December 2015 at 06:31, Tom Lane wrote:
> I'm not particularly wedded to this rule. In principle we could go so
> far as to import psql's code that parses commands and figures out which
> semicolons are command terminators --- but that is a pretty large chunk
> of code, and I think it'd really be
Mark Dilger writes:
>> On Dec 12, 2015, at 3:42 PM, Tom Lane wrote:
>> ... In general, though, I'd rather not try to
>> teach InteractiveBackend() such a large amount about SQL syntax.
> I use CREATE RULE within startup files in the fork that I maintain. I have
> lots of them, totaling perhaps 50k lines of rule code.
> On Dec 12, 2015, at 3:42 PM, Tom Lane wrote:
>
> Joe Conway writes:
>> On 12/12/2015 02:31 PM, Tom Lane wrote:
>>> I'm not particularly wedded to this rule. In principle we could go so
>>> far as to import psql's code that parses commands and figures out which
>>> semicolons are command terminators --- but that is a pretty large chunk
Andres Freund writes:
> That's cool too. Besides processing the .bki files, and there largely
> reg*_in, the many restarts are the most expensive parts of initdb.
BTW, in case anyone is doubting it, I did a little bit of "perf" tracing
and confirmed Andres' comment here: more than 50% of the runtime
Joe Conway writes:
> On 12/12/2015 02:31 PM, Tom Lane wrote:
>> I'm not particularly wedded to this rule. In principle we could go so
>> far as to import psql's code that parses commands and figures out which
>> semicolons are command terminators --- but that is a pretty large chunk
>> of code, and I think it'd really be
Andres Freund writes:
> On 2015-12-12 17:31:49 -0500, Tom Lane wrote:
>> Does anyone know of people using standalone mode other than
>> for initdb?
> Unfortunately yes. There's docker instances around that configure users
> and everything using it.
Hm, that means that we *do* have to worry about
On 12/12/2015 02:31 PM, Tom Lane wrote:
> I'm not particularly wedded to this rule. In principle we could go so
> far as to import psql's code that parses commands and figures out which
> semicolons are command terminators --- but that is a pretty large chunk
> of code, and I think it'd really be
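The heavier alternative quoted above, psql-style detection of which semicolons actually terminate commands, amounts to lexing the input. A toy Python version that only tracks quoted strings (psql's real lexer also handles dollar quoting, comments, and more):

```python
def find_terminators(sql: str):
    """Return the indexes of semicolons that terminate commands, skipping
    any inside single- or double-quoted strings.  A toy stand-in for
    psql's far more complete lexer; dollar quoting and comments are
    deliberately omitted."""
    terms = []
    in_squote = in_dquote = False
    i = 0
    while i < len(sql):
        c = sql[i]
        if in_squote:
            if c == "'":
                # '' is an escaped quote inside a string literal
                if i + 1 < len(sql) and sql[i + 1] == "'":
                    i += 1
                else:
                    in_squote = False
        elif in_dquote:
            if c == '"':
                in_dquote = False
        elif c == "'":
            in_squote = True
        elif c == '"':
            in_dquote = True
        elif c == ';':
            terms.append(i)
        i += 1
    return terms
```

Even this toy version shows why the idea is a "large chunk of code": every quoting construct the lexer ignores is a way to mis-split a command.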
On 2015-12-12 17:31:49 -0500, Tom Lane wrote:
> I thought this sounded like a nice lazy-Saturday project, so I started
> poking at it, and attached is a WIP patch.
Not bad, not bad at all.
> After some experimentation, I came up with the idea of executing any
> time that a semicolon followed by
On 2015-12-12 13:28:28 -0500, Tom Lane wrote:
> BTW, there's another thing I'd like to see improved in this area, which is
> a problem already but will get a lot worse if we push more work into the
> post-bootstrap phase of initdb. That is that the post-bootstrap phase is
> both inefficient and impossible to debug.
I wrote:
> BTW, there's another thing I'd like to see improved in this area, which is
> a problem already but will get a lot worse if we push more work into the
> post-bootstrap phase of initdb. That is that the post-bootstrap phase is
> both inefficient and impossible to debug. If you've ever ha
Mark Dilger writes:
>> On Dec 11, 2015, at 2:54 PM, Caleb Welton wrote:
>> Compare:
>> CREATE FUNCTION lo_export(oid, text) RETURNS integer LANGUAGE internal
>> STRICT AS 'lo_export' WITH (OID=765);
>>
>> DATA(insert OID = 765 ( lo_export PGNSP PGUID 12 1 0 0 0 f f f
>> f t f v
> On Dec 11, 2015, at 2:54 PM, Caleb Welton wrote:
>
> The current semantic level is pretty low level, somewhat cumbersome, and
> requires filling in values that most of the time the system has a pretty good
> idea how to fill in default values.
>
> Compare:
> CREATE FUNCTION lo_export(oid, text) RETURNS integer LANGUAGE internal
> STRICT AS 'lo_export' WITH (OID=765);
Andres Freund writes:
> On 2015-12-11 19:26:38 -0500, Tom Lane wrote:
>> I believe it's soluble, but it's going to take something more like
>> loading up all the data at once and then doing lookups as we write
>> out the .bki entries for each catalog. Fortunately, the volume of
>> bootstrap data
On 2015-12-11 19:26:38 -0500, Tom Lane wrote:
> I believe it's soluble, but it's going to take something more like
> loading up all the data at once and then doing lookups as we write
> out the .bki entries for each catalog. Fortunately, the volume of
> bootstrap data is small enough that that won
Mark Dilger writes:
> Now, if we know that pg_type.dat will be processed before pg_proc.dat,
> we can replace all the Oids representing datatypes in pg_proc.dat with the
> names for those types, given that we already have a name <=> oid
> mapping for types.
I don't think this is quite as simple a
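For illustration, the name-to-OID substitution Mark describes could look roughly like this. The field names and values here are invented, and the sketch assumes pg_type data is available before pg_proc is processed, which is exactly the ordering question at issue:

```python
# Hypothetical miniature of the scheme: pg_type entries define the
# name <=> OID mapping, so pg_proc entries can carry type names that
# get swapped for OIDs when the .bki file is written out.
# Field names are illustrative, not the real catalog layout.
type_data = [
    {"oid": 23, "typname": "int4"},
    {"oid": 25, "typname": "text"},
]
proc_data = [
    {"oid": 765, "proname": "lo_export", "prorettype": "int4"},
]

# name -> oid lookup built from the type entries
typoid = {t["typname"]: t["oid"] for t in type_data}

def resolve(proc):
    """Return a copy of a proc entry with its return type name resolved
    to the corresponding OID, as a .bki generator could do."""
    out = dict(proc)
    out["prorettype"] = typoid[out["prorettype"]]
    return out
```

The catch the thread keeps circling is that real entries reference catalogs in both directions, so a single fixed processing order is not quite enough.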
On 2015-12-11 18:12:16 -0500, Tom Lane wrote:
> I think what Mark is proposing is to do the lookups while preparing the
> .bki file, which would eliminate the circularity ... at the cost of having
> to, essentially, reimplement regprocedure_in and friends in Perl.
FWIW, I did that, when this came
> On Dec 11, 2015, at 3:02 PM, Tom Lane wrote:
>
> Mark Dilger writes:
>>> On Dec 11, 2015, at 2:40 PM, Tom Lane wrote:
>>> Huh? Those files are the definition of that mapping, no? Isn't what
>>> you're proposing circular?
>
>> No, there are far more references to Oids than there are definitions of them.
Caleb Welton writes:
> ... but there is some circularity especially with respect to type
> definitions and the functions that define those types. If you changed the
> definition of prorettype into a regtype then bootstrap would try to lookup
> the type before the pg_type entry exists and throw a
Mark Dilger writes:
>> On Dec 11, 2015, at 2:40 PM, Tom Lane wrote:
>> Huh? Those files are the definition of that mapping, no? Isn't what
>> you're proposing circular?
> No, there are far more references to Oids than there are definitions of them.
Well, you're still not being very clear, but
Yes, that alone without any other changes would be a marked improvement and
could be implemented in many places, pg_operator is a good example.
... but there is some circularity especially with respect to type
definitions and the functions that define those types. If you changed the
definition of prorettype into a regtype then bootstrap would try to lookup
the type before the pg_type entry exists and throw a
The current semantic level is pretty low level, somewhat cumbersome, and
requires filling in values that most of the time the system has a pretty
good idea how to fill in default values.
Compare:
CREATE FUNCTION lo_export(oid, text) RETURNS integer LANGUAGE internal
STRICT AS 'lo_export' WITH (OID=765);
> On Dec 11, 2015, at 2:40 PM, Tom Lane wrote:
>
> Mark Dilger writes:
>>> On Dec 11, 2015, at 1:46 PM, Tom Lane wrote:
>>> That's an interesting proposal. It would mean that the catalog files
>>> stay at more or less their current semantic level (direct representations
>>> of bootstrap catalog contents), but it does sound like a more attractive
I took a look at a few of the most recent bulk edit cases for pg_proc.h:
There were two this year:
* The addition of proparallel [1]
* The addition of protransform [2]
And prior to that the most recent seems to be from 2012:
* The addition of proleakproof [3]
Quick TLDR - the changes needed to
Mark Dilger writes:
>> On Dec 11, 2015, at 1:46 PM, Tom Lane wrote:
>> That's an interesting proposal. It would mean that the catalog files
>> stay at more or less their current semantic level (direct representations
>> of bootstrap catalog contents), but it does sound like a more attractive
>>
> On Dec 11, 2015, at 1:46 PM, Tom Lane wrote:
>
> Alvaro Herrera writes:
>> Crazy idea: we could just have a CSV file which can be loaded into a
>> table for mass changes using regular DDL commands, then dumped back from
>> there into the file. We already know how to do these things, using
>>
Alvaro Herrera writes:
> Crazy idea: we could just have a CSV file which can be loaded into a
> table for mass changes using regular DDL commands, then dumped back from
> there into the file. We already know how to do these things, using
> \copy etc. Since CSV uses one line per entry, there woul
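The round-trip idea above is easy to model: load the rows, mass-edit with ordinary tools, dump them back in a stable one-line-per-entry form. A Python sketch with invented column names (the real workflow would use \copy and a scratch table in a live server):

```python
import csv
import io

# Invented sample of a catalog-as-CSV file; columns are illustrative.
src = "oid,proname,provolatile\n765,lo_export,v\n764,lo_import,v\n"

# Load, mass-edit, dump back -- the whole round trip.
rows = list(csv.DictReader(io.StringIO(src)))
for r in rows:
    r["provolatile"] = "s"          # the hypothetical bulk change

buf = io.StringIO()
w = csv.DictWriter(buf, fieldnames=["oid", "proname", "provolatile"])
w.writeheader()
w.writerows(rows)
```

Because CSV keeps one line per entry, a bulk edit like this produces a diff touching exactly the changed rows, which is the property being argued for.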
Makes sense.
During my own prototyping what I did was generate the sql statements via sql
querying the existing catalog. Way easier than hand writing 1000+ function
definitions and not difficult to modify for future changes. As affirmed that
it was very easy to adapt my existing sql to acco
Caleb Welton wrote:
> I'm happy working these ideas forward if there is interest.
>
> Basic design proposal is:
> - keep a minimal amount of bootstrap to avoid intrusive changes to core
> components
> - Add capabilities of creating objects with specific OIDs via DDL during
> initdb
> - Update the caching/resolution mechanism
I'm happy working these ideas forward if there is interest.
Basic design proposal is:
- keep a minimal amount of bootstrap to avoid intrusive changes to core
components
- Add capabilities of creating objects with specific OIDs via DDL during
initdb
- Update the caching/resolution mechanism f
Hello Hackers,
Reviving an old thread on simplifying the bootstrap process.
I'm a developer from the GPDB / HAWQ side of the world where we did some
work a while back to enable catalog definition via SQL files and we have
found it valuable from a dev perspective. The mechanism currently in t
On 2015-03-07 18:09:36 -0600, Jim Nasby wrote:
> How often does a normal user actually initdb? I don't think it's that
> incredibly common. Added time to our development cycle certainly is a
> concern though.
There are many shops that run initdb as part of their test/CI systems.
Greetings,
Andres
On Sun, Mar 8, 2015 at 12:35 PM, Andres Freund wrote:
> On 2015-03-04 10:25:58 -0500, Robert Haas wrote:
>> Another advantage of this is that it would probably make git less
>> likely to fumble a rebase. If there are lots of places in the file
>> where we have the same 10 lines in a row with occasional variations,
On 03/08/2015 10:11 PM, Alvaro Herrera wrote:
> Tom Lane wrote:
> > Andres Freund writes:
> > > And even if it turns out to actually be bothersome, you can help
> > > yourself by passing -U 5/setting diff.context = 5 or something like
> > > that.
> > Um. Good luck with getting every patch submitter to do that.
> Can we do it centrally somehow?
Tom Lane wrote:
> Andres Freund writes:
> > And even if it turns out to actually be bothersome, you can help
> > yourself by passing -U 5/setting diff.context = 5 or something like
> > that.
>
> Um. Good luck with getting every patch submitter to do that.
Can we do it centrally somehow?
Stephen Frost writes:
> * Andrew Dunstan (and...@dunslane.net) wrote:
>> On 03/07/2015 05:46 PM, Andres Freund wrote:
>>> On 2015-03-07 16:43:15 -0600, Jim Nasby wrote:
>>>> Semi-related... if we put some special handling in some places for bootstrap
>>>> mode, couldn't most catalog objects
Andres Freund writes:
> On 2015-03-04 10:25:58 -0500, Robert Haas wrote:
>> Another advantage of this is that it would probably make git less
>> likely to fumble a rebase. If there are lots of places in the file
>> where we have the same 10 lines in a row with occasional variations,
>> rebasing a
On 2015-03-04 10:25:58 -0500, Robert Haas wrote:
> Another advantage of this is that it would probably make git less
> likely to fumble a rebase. If there are lots of places in the file
> where we have the same 10 lines in a row with occasional variations,
> rebasing a patch could easily pick the
On 3/7/15 6:02 PM, Stephen Frost wrote:
> * Andrew Dunstan (and...@dunslane.net) wrote:
> > On 03/07/2015 05:46 PM, Andres Freund wrote:
> > > On 2015-03-07 16:43:15 -0600, Jim Nasby wrote:
> > > > Semi-related... if we put some special handling in some places for bootstrap
> > > > mode, couldn't most catalog objects be
* Andrew Dunstan (and...@dunslane.net) wrote:
> On 03/07/2015 05:46 PM, Andres Freund wrote:
> >On 2015-03-07 16:43:15 -0600, Jim Nasby wrote:
> >>Semi-related... if we put some special handling in some places for bootstrap
> >>mode, couldn't most catalog objects be created using SQL, once we got
>
On 03/07/2015 05:46 PM, Andres Freund wrote:
> On 2015-03-07 16:43:15 -0600, Jim Nasby wrote:
> > Semi-related... if we put some special handling in some places for bootstrap
> > mode, couldn't most catalog objects be created using SQL, once we got
> > pg_class, pg_attributes and pg_type created? That would
On 2015-03-07 16:43:15 -0600, Jim Nasby wrote:
> Semi-related... if we put some special handling in some places for bootstrap
> mode, couldn't most catalog objects be created using SQL, once we got
> pg_class, pg_attributes and pg_type created? That would theoretically allow
> us to drive much more
On 3/4/15 9:07 AM, Stephen Frost wrote:
> * Robert Haas (robertmh...@gmail.com) wrote:
> > On Wed, Mar 4, 2015 at 9:42 AM, Tom Lane wrote:
> > >>> and make it harder to compare entries by grepping out some common
> > >>> substring.
> > >> Could you give an example of the sort of thing you wish to do?
> > > On that angle, I'm dubious that a format that allows omission of fields is
> > > going to be easy
Robert Haas wrote:
> On Wed, Mar 4, 2015 at 2:27 PM, Alvaro Herrera
> wrote:
> > BTW one solution to the merge problem is to have unique separators for
> > each entry. For instance, instead of
> Speaking from entirely too much experience, that's not nearly enough.
> git only needs 3 lines of context
On Wed, Mar 4, 2015 at 2:27 PM, Alvaro Herrera wrote:
> Andrew Dunstan wrote:
>> On 03/04/2015 09:51 AM, Robert Haas wrote:
>> >On Wed, Mar 4, 2015 at 9:06 AM, Peter Eisentraut wrote:
>> >>>and make it harder to compare entries by grepping out some common
>> >>>substring.
>> >>Could you give an example of the sort of thing you wish to do?
On 3/4/15 9:51 AM, Robert Haas wrote:
> On Wed, Mar 4, 2015 at 9:06 AM, Peter Eisentraut wrote:
>>> and make it harder to compare entries by grepping out some common
>>> substring.
>>
>> Could you give an example of the sort of thing you wish to do?
>
> e.g. grep for a function name and check that all the matches have the
> same volatility.
Andrew Dunstan wrote:
>
> On 03/04/2015 09:51 AM, Robert Haas wrote:
> >On Wed, Mar 4, 2015 at 9:06 AM, Peter Eisentraut wrote:
> >>>and make it harder to compare entries by grepping out some common
> >>>substring.
> >>Could you give an example of the sort of thing you wish to do?
> >e.g. grep for a function name and check that all the matches have the
> >same volatility.
Robert Haas writes:
> Another advantage of this is that it would probably make git less
> likely to fumble a rebase. If there are lots of places in the file
> where we have the same 10 lines in a row with occasional variations,
> rebasing a patch could easily pick the wrong place to reapply t
On Wed, Mar 4, 2015 at 10:04 AM, Andrew Dunstan wrote:
> Is it necessarily an all or nothing deal?
>
> Taking a previous example, we could have something like:
>
> {
> oid => 2249, oiddefine => 'CSTRINGOID', typname => 'cstring',
> typlen => -2, typbyval => 1,
> ..
On 2015-03-04 09:55:01 -0500, Robert Haas wrote:
> On Wed, Mar 4, 2015 at 9:42 AM, Tom Lane wrote:
> I wonder if we should have a tool in our repository to help people
> edit the file. So instead of going in there yourself and changing
> things by hand, or writing your own script, you can do:
>
On 03/04/2015 09:51 AM, Robert Haas wrote:
> On Wed, Mar 4, 2015 at 9:06 AM, Peter Eisentraut wrote:
> >> and make it harder to compare entries by grepping out some common
> >> substring.
> > Could you give an example of the sort of thing you wish to do?
> e.g. grep for a function name and check that all the matches have the
> same volatility.
* Robert Haas (robertmh...@gmail.com) wrote:
> On Wed, Mar 4, 2015 at 9:42 AM, Tom Lane wrote:
> >>> and make it harder to compare entries by grepping out some common
> >>> substring.
> >
> >> Could you give an example of the sort of thing you wish to do?
> >
> > On that angle, I'm dubious that a format that allows omission of fields is
> > going to be easy
On 03/04/2015 09:42 AM, Tom Lane wrote:
> Peter Eisentraut writes:
> > On 3/3/15 9:49 PM, Robert Haas wrote:
> >> Even this promises to vastly increase the number of lines in the file,
> > I think lines are cheap. Columns are much harder to deal with.
> Yeah. pg_proc.h is already impossible to work with in a standard
> 80-column window.
Robert Haas writes:
> On Wed, Mar 4, 2015 at 9:06 AM, Peter Eisentraut wrote:
>> Could you give an example of the sort of thing you wish to do?
> e.g. grep for a function name and check that all the matches have the
> same volatility.
Well, grep is not going to work too well anymore, but extrac
On Wed, Mar 4, 2015 at 9:42 AM, Tom Lane wrote:
>>> and make it harder to compare entries by grepping out some common
>>> substring.
>
>> Could you give an example of the sort of thing you wish to do?
>
> On that angle, I'm dubious that a format that allows omission of fields is
> going to be easy
On Wed, Mar 4, 2015 at 9:06 AM, Peter Eisentraut wrote:
>> and make it harder to compare entries by grepping out some common
>> substring.
>
> Could you give an example of the sort of thing you wish to do?
e.g. grep for a function name and check that all the matches have the
same volatility.
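That kind of check is awkward against the positional DATA() format, but here is a toy Python version of it. The sample lines are fabricated, far shorter than real pg_proc.h entries, and the volatility letter is found by a deliberately naive regex:

```python
import re

# Fabricated stand-ins for old-style pg_proc.h DATA lines; real entries
# have dozens of positional fields.
data_lines = [
    "DATA(insert OID = 765 ( lo_export ... v ... ));",
    "DATA(insert OID = 766 ( lo_export ... v ... ));",
]

def volatilities(name, lines):
    """Collect the volatility letters (i/s/v) found on every DATA line
    for the given function name.  A result set of size one means all
    matches agree -- the consistency check described above."""
    pat = re.compile(r'\(\s*%s\b.*?\b([isv])\b' % re.escape(name))
    found = set()
    for line in lines:
        m = pat.search(line)
        if m:
            found.add(m.group(1))
    return found
```

The fragility of the regex is part of the point being argued in the thread: with unlabeled positional fields, even this simple question requires guessing which token is the volatility marker.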
Peter Eisentraut writes:
> On 3/3/15 9:49 PM, Robert Haas wrote:
>> Even this promises to vastly increase the number of lines in the file,
> I think lines are cheap. Columns are much harder to deal with.
Yeah. pg_proc.h is already impossible to work with in a standard
80-column window. I don't
On 3/3/15 9:49 PM, Robert Haas wrote:
>> Yeah. One thought though is that I don't think we need the "data" layer
>> in your proposal; that is, I'd flatten the representation to something
>> more like
>>
>> {
>> oid => 2249,
>> oiddefine => 'CSTRINGOID',
>> typname => 'cstring',
On 2015-03-04 08:47:44 -0500, Robert Haas wrote:
> >> Even this promises to vastly increase the number of lines in the file,
> >> and make it harder to compare entries by grepping out some common
> >> substring. I agree that the current format is a pain in the tail, but
> >> pg_proc.h is >5k lines
>> Even this promises to vastly increase the number of lines in the file,
>> and make it harder to compare entries by grepping out some common
>> substring. I agree that the current format is a pain in the tail, but
>> pg_proc.h is >5k lines already. I don't want it to be 100k lines
>> instead.
>
On 2015-03-03 21:49:21 -0500, Robert Haas wrote:
> On Sat, Feb 21, 2015 at 11:34 AM, Tom Lane wrote:
> > Andres Freund writes:
> >> On 2015-02-20 22:19:54 -0500, Peter Eisentraut wrote:
> >>> On 2/20/15 8:46 PM, Josh Berkus wrote:
> >>>> Or what about just doing CSV?
> >
> >>> I don't think that
On Sat, Feb 21, 2015 at 11:34 AM, Tom Lane wrote:
> Andres Freund writes:
>> On 2015-02-20 22:19:54 -0500, Peter Eisentraut wrote:
>>> On 2/20/15 8:46 PM, Josh Berkus wrote:
>>>> Or what about just doing CSV?
>
>>> I don't think that would actually address the problems. It would just
>>> be the
On Sat, Feb 21, 2015 at 11:08 PM, Andres Freund wrote:
> The changes in pg_proc.h are just to demonstrate that using names
> instead of oids works.
Fwiw I always thought it was strange how much of our bootstrap was
done in a large static text file. Very little of it is actually needed
for bootstrap
On 2015-02-21 17:43:09 +0100, Andres Freund wrote:
> One thing I was considering was to do the regtype and regproc lookups
> directly in the tool. That'd have two advantages: 1) it'd make it
> possible to refer to typenames in pg_proc, 2) It'd be much faster. Right
> now most of initdb's time is do
On February 21, 2015 7:20:04 PM CET, Andrew Dunstan wrote:
>
>On 02/21/2015 11:43 AM, Tom Lane wrote:
>
>> {
>> oid => 2249,
>> oiddefine => 'CSTRINGOID',
>> typname => 'cstring',
>> typlen => -2,
>> typbyval => 1,
>> ...
>> }
>
>
>which
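The defaulting that would make the flattened per-row format compact can be modeled simply: each entry lists only the fields that differ from per-catalog defaults, and expansion fills in the rest. The field names follow the quoted example, but the default values here are invented for illustration:

```python
# Invented per-catalog defaults; a real scheme would declare these once
# per catalog header.
defaults = {"typbyval": 0, "typtype": "b"}

# Entries list only non-default fields, per the flattened keyed format
# quoted above.
entries = [
    {"oid": 2249, "oiddefine": "CSTRINGOID", "typname": "cstring",
     "typlen": -2, "typbyval": 1},
]

def expand(entry):
    """Merge an entry over the defaults to recover the full row."""
    row = dict(defaults)
    row.update(entry)
    return row
```

Because unset fields simply fall through to the defaults, adding a new column with a sensible default would not require touching thousands of existing entries, which is one of the recurring complaints about the DATA() format.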
On 02/21/2015 11:43 AM, Tom Lane wrote:
> Andrew Dunstan writes:
> > On 02/21/2015 09:39 AM, Andrew Dunstan wrote:
> >> Personally, I think I would prefer that we use JSON (and yes, there's
> >> a JSON::Tiny module, which definitely lives up to its name).
> >> For one thing, we've made a feature of supporting JSON, so arguably we
> >> should eat
On 2015-02-21 11:34:09 -0500, Tom Lane wrote:
> Andres Freund writes:
> > On 2015-02-20 22:19:54 -0500, Peter Eisentraut wrote:
> >> On 2/20/15 8:46 PM, Josh Berkus wrote:
> >>> Or what about just doing CSV?
>
> >> I don't think that would actually address the problems. It would just
> >> be the
Andrew Dunstan writes:
> On 02/21/2015 09:39 AM, Andrew Dunstan wrote:
>> Personally, I think I would prefer that we use JSON (and yes, there's
>> a JSON::Tiny module, which definitely lives up to its name).
>> For one thing, we've made a feature of supporting JSON, so arguably we
>> should eat
Andres Freund writes:
> On 2015-02-20 22:19:54 -0500, Peter Eisentraut wrote:
>> On 2/20/15 8:46 PM, Josh Berkus wrote:
>>> Or what about just doing CSV?
>> I don't think that would actually address the problems. It would just
>> be the same format as now with different delimiters.
> Yea, we ne
On 02/21/2015 09:39 AM, Andrew Dunstan wrote:
> On 02/21/2015 05:04 AM, Andres Freund wrote:
> > Yes, that's a good point. I have zero desire to open-code a format
> > though, I think that's a bad idea. We could say we just include
> > Yaml::Tiny, that's what it's made for.
> Personally, I think I would prefer that we use JSON (and yes, there's a
> JSON::Tiny module, which definitely lives up to its name).
On 02/21/2015 05:04 AM, Andres Freund wrote:
> Yes, that's a good point. I have zero desire to open-code a format
> though, I think that's a bad idea. We could say we just include
> Yaml::Tiny, that's what it's made for.
Personally, I think I would prefer that we use JSON (and yes, there's a
JSON::Tiny module, which definitely lives up to its name).
On 2015-02-20 22:19:54 -0500, Peter Eisentraut wrote:
> On 2/20/15 8:46 PM, Josh Berkus wrote:
> > What about YAML? That might have been added somewhat earlier.
>
> YAML isn't included in Perl, but there is
>
> Module::Build::YAML - Provides just enough YAML support so that
> Module::Build works even if YAML.pm is not installed
On 21/02/15 04:22, Peter Eisentraut wrote:
> I violently support this proposal.
> > Maybe something roughly like:
> >
> > # pg_type.data
> > CatalogData(
> >   'pg_type',
> >   [
> >     {
> >       oid => 2249,
> >       data => {typname => 'cstring', typlen => -2, typbyval => 1, fake => '...'},
> >       oiddefine => 'CSTRINGOID'
I violently support this proposal.
> Maybe something roughly like:
>
> # pg_type.data
> CatalogData(
> 'pg_type',
> [
> {
>oid => 2249,
>data => {typname => 'cstring', typlen => -2, typbyval => 1, fake =>
> '...'},
>oiddefine => 'CSTRINGOID'
> }
> ]
>
On 2/20/15 8:46 PM, Josh Berkus wrote:
> What about YAML? That might have been added somewhat earlier.
YAML isn't included in Perl, but there is
Module::Build::YAML - Provides just enough YAML support so that
Module::Build works even if YAML.pm is not installed
which might work.
> Or what about just doing CSV?
On 02/20/2015 03:41 PM, Andres Freund wrote:
> What I think we should do is to add pg_.data files that contain
> the actual data that are automatically parsed by Catalog.pm. Those
> contain the rows in some to-be-decided format. I was considering using
> json, but it turns out only perl 5.14 started
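For illustration, a pg_*.data file in the JSON variant floated here might be consumed roughly like this. Both the file layout and the .bki output line shown are simplified guesses, not the real formats:

```python
import json

# A guess at what a pg_type.data file in the proposed JSON format might
# contain; the layout is invented for illustration.
pg_type_data = '''
[
  {"oid": 2249, "oiddefine": "CSTRINGOID",
   "typname": "cstring", "typlen": -2, "typbyval": 1}
]
'''

# Parse the rows and emit simplified .bki-style insert lines, the way a
# Catalog.pm-like tool could.
rows = json.loads(pg_type_data)
bki = ["insert OID = %d ( %s %d %d )" %
       (r["oid"], r["typname"], r["typlen"], r["typbyval"]) for r in rows]
```

The appeal of a structured, parser-backed format is visible even in this sketch: fields are named rather than positional, so adding or reordering columns is a tooling change, not a bulk edit of every row.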