Re: psql \set variables in crosstab queries?
On Sat, Mar 4, 2023 at 10:05 PM Ron wrote:
> Ugh. It's a long and hairy query that would be a nightmare in a format
> statement.

Assuming you can pass this thing into the crosstab function in the first
place, you must already have put it into a string. Changing "crosstab" to
"format", plopping in the placeholders, and adding a couple more passed-in
arguments is not materially more effort or hairier.

You could create a SRF for the pre-crosstab data and then do:

SELECT * FROM crosstab(format('SELECT * FROM table_func(%L, %L, %L)', :'dd', ...));

The table_func itself would then just use pl/pgsql variables (or SQL
ones...) in a normal query instead of a dynamic query (presuming you are
only substituting values anyway).

David J.
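A rough sketch of the SRF approach described above (the function body, table, and column names are illustrative, not from the thread; crosstab comes from the tablefunc extension):

```sql
CREATE EXTENSION IF NOT EXISTS tablefunc;

-- Wrap the long, hairy query in a set-returning function; the psql
-- variables become ordinary function parameters.
CREATE FUNCTION table_func(p_from date, p_to date)
RETURNS TABLE (rowid text, category text, val numeric)
LANGUAGE sql AS $$
    SELECT s.rowid, s.category, s.val
    FROM   my_source_table s            -- illustrative table
    WHERE  s.created BETWEEN p_from AND p_to;
$$;

-- The string handed to crosstab() is now short enough for format();
-- %L quotes each psql variable as a SQL literal.
SELECT *
FROM crosstab(
    format('SELECT * FROM table_func(%L, %L)', :'from_date', :'to_date')
) AS ct(rowid text, cat_a numeric, cat_b numeric);
```

The dynamic part shrinks to a one-line format() call, while the hairy query stays a plain static query inside the function.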
Re: psql \set variables in crosstab queries?
On 3/4/23 19:32, David G. Johnston wrote:
> On Sat, Mar 4, 2023 at 5:20 PM Ron wrote:
>> But crosstab takes text strings as parameters. How then do you use \set
>> variables in crosstab queries?
>
> You need to dynamically write the textual query you want to send to the
> crosstab function. In particular that means writing it using "format()"
> and then substituting the values into the query via placeholders.
> Roughly like:
>
> SELECT * FROM crosstab(format('SELECT * FROM %I WHERE %I = %L', :'tbl_name', :'col_name', :'compare_value'));
>
> David J.

Ugh. It's a long and hairy query that would be a nightmare in a format
statement.

--
Born in Arizona, moved to Babylonia.
Re: psql \set variables in crosstab queries?
On 3/4/23 19:22, Tom Lane wrote:
> Ron writes:
>> According to
>> https://www.postgresql.org/docs/12/app-psql.html#APP-PSQL-VARIABLES and
>> experience, variables don't interpolate inside of string literals:
>> "
>> Variable interpolation will not be performed within quoted SQL literals
>> and identifiers. Therefore, a construction such as ':foo' doesn't work
>> to produce a quoted literal from a variable's value.
>> "
>> But crosstab takes text strings as parameters. How then do you use \set
>> variables in crosstab queries?
>
> If you read a little further, you'll find out the syntax for converting
> the value of a psql variable to a SQL string literal:
>
> regression=# \set foo BAR
> regression=# select ':foo';
>  ?column?
> ----------
>  :foo
> (1 row)
>
> regression=# select :'foo';
>  ?column?
> ----------
>  BAR
> (1 row)
>
> What the server got in the last case was "select 'BAR';".

postgres=# \set foo BAR
postgres=# select :'foo';
 ?column?
----------
 BAR
(1 row)

postgres=# select $$ :foo $$;
 ?column?
----------
  :foo
(1 row)

postgres=# select $$ :'foo' $$;
 ?column?
----------
  :'foo'
(1 row)

--
Born in Arizona, moved to Babylonia.
Re: psql \set variables in crosstab queries?
On Sat, Mar 4, 2023 at 5:20 PM Ron wrote:
> But crosstab takes text strings as parameters. How then do you use \set
> variables in crosstab queries?

You need to dynamically write the textual query you want to send to the
crosstab function. In particular that means writing it using "format()"
and then substituting the values into the query via placeholders.
Roughly like:

SELECT * FROM crosstab(format('SELECT * FROM %I WHERE %I = %L', :'tbl_name', :'col_name', :'compare_value'));

David J.
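Spelled out as a psql session, the suggestion might look roughly like this (the table and column names are illustrative, not from the thread; %I quotes identifiers, %L quotes literals):

```sql
-- Set the psql variables, then let format() do the quoting safely.
\set tbl_name sales
\set col_name region
\set compare_value 'EMEA'

SELECT *
FROM crosstab(
    format('SELECT product, quarter, amount FROM %I WHERE %I = %L ORDER BY 1, 2',
           :'tbl_name', :'col_name', :'compare_value')
) AS ct(product text, q1 numeric, q2 numeric, q3 numeric, q4 numeric);
```

The :'var' syntax expands the variable as a properly quoted string literal before the statement is sent, so format() receives plain text arguments.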
Re: psql \set variables in crosstab queries?
Ron writes:
> According to
> https://www.postgresql.org/docs/12/app-psql.html#APP-PSQL-VARIABLES and
> experience, variables don't interpolate inside of string literals:
> "
> Variable interpolation will not be performed within quoted SQL literals
> and identifiers. Therefore, a construction such as ':foo' doesn't work
> to produce a quoted literal from a variable's value.
> "
> But crosstab takes text strings as parameters. How then do you use \set
> variables in crosstab queries?

If you read a little further, you'll find out the syntax for converting
the value of a psql variable to a SQL string literal:

regression=# \set foo BAR
regression=# select ':foo';
 ?column?
----------
 :foo
(1 row)

regression=# select :'foo';
 ?column?
----------
 BAR
(1 row)

What the server got in the last case was "select 'BAR';".

regards, tom lane
psql \set variables in crosstab queries?
According to
https://www.postgresql.org/docs/12/app-psql.html#APP-PSQL-VARIABLES and
experience, variables don't interpolate inside of string literals:
"
Variable interpolation will not be performed within quoted SQL literals
and identifiers. Therefore, a construction such as ':foo' doesn't work to
produce a quoted literal from a variable's value.
"

$ psql12 -v BOM=2023-02-01 -af foo.sql

select :'BOM'::timestamp + interval '6 month';
      ?column?
---------------------
 2023-08-01 00:00:00
(1 row)

select $$ :BOM $$;
 ?column?
----------
  :BOM
(1 row)

But crosstab takes text strings as parameters. How then do you use \set
variables in crosstab queries?

--
Born in Arizona, moved to Babylonia.
Re: Dropping behavior for unique CONSTRAINTs
On 3/4/23 05:51, Peter J. Holzer wrote:
> On 2023-03-04 02:34:02 -0600, Ron wrote:
>> On 3/4/23 02:03, Peter J. Holzer wrote:
>> [snip]
>>> So your plan is to create a unique constraint (backed by a unique
>>> index) and then to drop the index and keep the constraint?
>>>
>>> That doesn't work. A unique constraint can't exist without a (unique)
>>> index. Think about it: With a unique constraint PostgreSQL needs to
>>> check for every insert whether the value already exists in the table.
>>> Without an index this would mean a full table scan.
>>
>> I cut my teeth on an RDBMS which didn't automagically create a backing
>> index. You had to do it yourself...
>
> Just curious: Which RDBMS was that?

Rdb/VMS (the DEC product for OpenVMS, which has been owned by Oracle for
the past 25 years).

> Speaking of foreign key constraints: Neither Oracle nor PostgreSQL
> automatically add an index on a foreign key. That bit me hard back in
> the day ...

Us too. (Well, the developer, from before I arrived.)

--
Born in Arizona, moved to Babylonia.
Re: DISTINCT *and* ORDER BY in aggregate functions on expressions(!)y
On Sat, 4 Mar 2023, Alban Hertroys wrote:

>> But isn't that the same as with a regular LEFT JOIN?
>
> Similar, but not the same, I'd say.
>
> I do now notice that I made some copying errors there, I was a bit
> nauseous at that time.

np, I'm under pollen attack currently so also not at my best.

> That should have read:
>
>>> select jsonb_build_object('opening_times',
>>>     jsonb_agg(obj
>>>         ORDER BY
>>>             obj->>'weekday',
>>>             obj->>'from_hour',
>>>             obj->>'to_hour')
>>>     )
>>> from cot
>>> cross join lateral jsonb_build_object(
>>>     'weekday', cot.weekday,
>>>     'from_hour', cot.from_hour,
>>>     'to_hour', cot.to_hour) obj
>
> The lateral join applies the function to each row returned from the
> left side of the join and enriches that row with the function result. I
> used a cross join because there is no join condition to apply to the
> lateral, otherwise you could also use an inner join on true.

Okay, so I got it now. But that's still identical to the LEFT JOIN which
I'm using in that example, because…

> A left join wouldn't make much sense here, unless the function could

… I have a LEFT JOIN already and can just use the CTE there, so I don't
have to add an extra lateral join. But good to know for the future/when
I don't have that.

> return NULL - for example if it were a function marked as STRICT and
> some of the input parameter values (from the table) could be NULL.

OK.

> You need a sub-select, which in turn creates its own result set. It's
> up to the planner whether the left or the right side gets executed
> first, after which the results of the other side of the join get merged
> to this, or whether this can all be collected in one go. That's up to
> the query planner to decide though, and it could be right.

OK, but that shouldn't make a difference here as it needs to run over
all rows of the cgwa table anyway (possibly reduced by filtering on
users).
While not the case here, I see that for other entries the lateral join
would cause more work: for the "qualification" kinds of tables, for
example, the individual qualification table has very few rows (these are
entered by the site admin), but the m:n connection table (whatever the
correct name for these is) has a lot, because many of the users have
many of these qualifications.

If I use a CTE to add a JSON object to the individual qualification
table first, it doesn't run on each qualification multiple times; if I
use a lateral join, it possibly, modulo planner optimisations, runs the
jsonb_build_object function many times per qualification despite them
all giving the same result. And, even if the optimisations catch that,
it's mentally not the same.

> In my experience, lateral joins go well with the jsonb functions. They
> tend to reduce code repetition when referencing object members, such as
> in your case.

Right. In my case I can get the same by adding a CTE instead though, and
it's hard to see which is better, performance-wise.

This is a lot to take in, and I r̲e̲a̲l̲l̲y̲ appreciate the detailed
explanations given alongside ☻

bye,
//mirabilos
--
15:41⎜ Somebody write a testsuite for helloworld :-)
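The CTE-then-join pattern discussed in this exchange might be sketched roughly like this, building on the cgwaj example quoted earlier in the thread (the users and join-table names are illustrative assumptions, not from the thread):

```sql
-- Build the JSON object once per row of the small table in a CTE,
-- then reuse it through an ordinary LEFT JOIN, so jsonb_build_object
-- runs once per availability row, not once per m:n pairing.
WITH cgwaj AS (
    SELECT cgwa.id,
           jsonb_build_object(
               'weekday',   cgwa.weekday,
               'forenoon',  cgwa.forenoon,
               'afternoon', cgwa.afternoon,
               'evening',   cgwa.evening) AS obj
    FROM core_generalworkavailability cgwa
)
SELECT u.id,
       jsonb_agg(cgwaj.obj ORDER BY cgwaj.obj->>'weekday') AS availability
FROM users u                                       -- illustrative
LEFT JOIN user_workavailability cppwt              -- illustrative m:n table
       ON cppwt.user_id = u.id
LEFT JOIN cgwaj ON cgwaj.id = cppwt.generalworkavailability_id
GROUP BY u.id;
```

Whether this actually beats a lateral join depends on the planner; as noted above, it is hard to predict without testing both against real data.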
Re: Dropping behavior for unique CONSTRAINTs
Ah, my apologies for missing that in the docs. I had previously noticed
the CONCURRENTLY option on DROP INDEX, but I misread and incorrectly
thought that unique indexes themselves could not be dropped concurrently,
rather than that being true only for unique indexes backing constraints.
Apologies for my misunderstanding!

Thanks greatly for your help!

Best,
CSB

On Mar 3, 2023, at 5:54 AM, David Rowley wrote:
> On Fri, 3 Mar 2023 at 23:17, Conner Bean wrote:
>> I wanted to avoid using a unique index since dropping them requires an
>> exclusive lock and cannot be done concurrently. My thought was to then
>> use a unique constraint, since I've read unofficial docs[0] that say
>> these can be dropped safely with no lock.
>
> You should try the official documents. You won't find any wording in
> those that say that a unique constraint can be dropped without any
> locking.
>
> If you look at https://www.postgresql.org/docs/current/sql-altertable.html
> you'll see "Note that the lock level required may differ for each
> subform. An ACCESS EXCLUSIVE lock is acquired unless explicitly
> noted.", and if you look at DROP CONSTRAINT that it mentions nothing
> about any lower-level locks, so you can assume that DROP CONSTRAINT
> obtains an access exclusive lock on the table being altered.
>
> If you have a look at
> https://www.postgresql.org/docs/15/sql-dropindex.html check out the
> CONCURRENTLY option. That option allows an index to be dropped without
> blocking concurrent reads and writes to the table. It seems like just
> having a unique index without the constraint is likely your best bet
> if you can't afford to block any traffic for the brief moment it would
> take to drop the constraint.
>
> David
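David's recommendation, sketched out (the table and index names are illustrative):

```sql
-- Enforce uniqueness with a plain unique index, with no constraint
-- object on top of it. Both creation and removal can then happen
-- without blocking concurrent reads and writes on the table.
CREATE UNIQUE INDEX CONCURRENTLY my_table_email_uniq
    ON my_table (email);

-- Later, when uniqueness is no longer wanted:
DROP INDEX CONCURRENTLY my_table_email_uniq;
```

A standalone unique index enforces uniqueness exactly like the constraint would; the trade-off is that it is not visible as a table constraint (e.g. to ON CONFLICT ON CONSTRAINT or to tools that inspect constraints).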
Re: shp2pgsql error under windows
## Raivo Rebane (raivor...@gmail.com):

> Raivo@DESKTOP-69FUU49 /cygdrive/f/3D-data/Kataster
> $ cat kataster.sql | less
>
> S^@E^@T^@ ^@C^@L^@I^@E^@N^@T^@_^@E^@N^@C^@O^@D^@I^@N^@G^@ ^@T^@O^@

That's a BOM and the rest looks like UTF-16 (or UCS-2). You can use
recode (also available in Cygwin) to convert that. You could also use
shp2pgsql in Cygwin and be done with that (Windows is a quite bloated
bootloader for Cygwin, but you do you).

Regards,
Christoph

--
Spare Space
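A minimal conversion sketch using iconv (also available in Cygwin; recode works similarly). The first line only simulates the UTF-16 file that shp2pgsql produced on Windows, so the example is self-contained; the filename is the one from the thread:

```shell
# Simulate the UTF-16-with-BOM file shp2pgsql wrote on Windows.
printf 'SET CLIENT_ENCODING TO UTF8;\n' | iconv -f UTF-8 -t UTF-16 > kataster.sql

# Convert it back to UTF-8 so psql can load it.
iconv -f UTF-16 -t UTF-8 kataster.sql > kataster-utf8.sql

cat kataster-utf8.sql
```

With recode instead, something like `recode UTF-16..UTF-8 kataster.sql` (in-place) should achieve the same result.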
shp2pgsql error under windows
Hi

I am using

PS F:\3D-data\Kataster> shp2pgsql -g piirid SHP_KATASTRIYKSUS kataster > kataster.sql

under Windows 10, and its output is full of zeroes. Cygwin helps to find
them out:

Raivo@DESKTOP-69FUU49 /cygdrive/f/3D-data/Kataster
$ cat kataster.sql | less

S^@E^@T^@ ^@C^@L^@I^@E^@N^@T^@_^@E^@N^@C^@O^@D^@I^@N^@G^@ ^@T^@O^@ ^@U^@T^@F^@8^@;^@^M^@
^@S^@E^@T^@ ^@S^@T^@A^@N^@D^@A^@R^@D^@_^@C^@O^@N^@F^@O^@R^@M^@I^@N^@G^@_^@S^@T^@R^@I^@N^@G^@S^@ ^@T^@O^@ ^@O^@N^@;^@^M^@
^@B^@E^@G^@I^@N^@;^@^M^@
^@C^@R^@E^@A^@T^@E^@ ^@T^@A^@B^@L^@E^@ ^@"^@k^@a^@t^@a^@s^@t^@e^@r^@"^@ ^@(^@g^@i^@d^@ ^@s^@e^@r^@i^@a^@l^@,^@^M^@
^@"^@t^@u^@n^@n^@u^@s^@"^@ ^@v^@a^@r^@c^@h^@a^@r^@(^@2^@5^@)^@,^@^M^@
^@"^@h^@k^@o^@o^@d^@"^@ ^@v^@a^@r^@c^@h^@a^@r^@(^@4^@)^@,^@^M^@
[more columns in the same style]

How to generate proper sql output?

Regards,
Raivo
Re: Dropping behavior for unique CONSTRAINTs
On 2023-03-04 02:34:02 -0600, Ron wrote:
> On 3/4/23 02:03, Peter J. Holzer wrote:
> [snip]
>> So your plan is to create a unique constraint (backed by a unique
>> index) and then to drop the index and keep the constraint?
>>
>> That doesn't work. A unique constraint can't exist without a (unique)
>> index. Think about it: With a unique constraint PostgreSQL needs to
>> check for every insert whether the value already exists in the table.
>> Without an index this would mean a full table scan.
>
> I cut my teeth on an RDBMS which didn't automagically create a backing
> index. You had to do it yourself...

Just curious: Which RDBMS was that?

I'm pretty sure Oracle did that automatically when I first used it
(version 8.0.5). Not sure about MySQL, but if it didn't have an index it
probably didn't enforce the constraint either (it definitely didn't
enforce foreign key constraints).

Speaking of foreign key constraints: Neither Oracle nor PostgreSQL
automatically add an index on a foreign key. That bit me hard back in
the day ...

hp
--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | h...@hjp.at        |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"
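The foreign-key gotcha mentioned above, as a small sketch (table and index names are illustrative): PostgreSQL indexes the referenced side (the primary key) but not the referencing column, so you add that index yourself.

```sql
CREATE TABLE parent (id int PRIMARY KEY);

CREATE TABLE child (
    id        int PRIMARY KEY,
    parent_id int REFERENCES parent (id)
);

-- The FK above gets no index automatically. Without one, each
-- DELETE/UPDATE on parent must scan child to verify the constraint,
-- and joins on parent_id cannot use an index either.
CREATE INDEX child_parent_id_idx ON child (parent_id);
```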
Re: DISTINCT *and* ORDER BY in aggregate functions on expressions(!)y
> On 3 Mar 2023, at 20:32, Thorsten Glaser wrote:
>
> On Fri, 3 Mar 2023, Alban Hertroys wrote:
>
>> You can rewrite that into something like this:
>>
>> select jsonb_build_object('opening_times',
>>     obj
>>     ORDER BY
>>         obj->>'weekday',
>>         obj->>'from_hour',
>>         obj->>'to_hour')
>>     )
>> from cot
>> cross join lateral jsonb_agg(jsonb_build_object(
>>     'weekday', cot.weekday,
>>     'from_hour', cot.from_hour,
>>     'to_hour', cot.to_hour) obj
>
> But isn't that the same as with a regular LEFT JOIN?

Similar, but not the same, I'd say.

I do now notice that I made some copying errors there, I was a bit
nauseous at that time. That should have read:

>> select jsonb_build_object('opening_times',
>>     jsonb_agg(obj
>>         ORDER BY
>>             obj->>'weekday',
>>             obj->>'from_hour',
>>             obj->>'to_hour')
>>     )
>> from cot
>> cross join lateral jsonb_build_object(
>>     'weekday', cot.weekday,
>>     'from_hour', cot.from_hour,
>>     'to_hour', cot.to_hour) obj

The lateral join applies the function to each row returned from the left
side of the join and enriches that row with the function result. I used
a cross join because there is no join condition to apply to the lateral,
otherwise you could also use an inner join on true.

I think you could also have used an implicit Cartesian product (using
',' for the join), and that in that case the lateral would be implied. I
prefer explicit notation though.

A left join wouldn't make much sense here, unless the function could
return NULL - for example if it were a function marked as STRICT and
some of the input parameter values (from the table) could be NULL.

>>> cgwaj AS (
>>>     SELECT cgwa.id AS id, jsonb_build_object(
>>>         'weekday', cgwa.weekday,
>>>         'forenoon', cgwa.forenoon,
>>>         'afternoon', cgwa.afternoon,
>>>         'evening', cgwa.evening) AS obj
>>>     FROM core_generalworkavailability cgwa
>
> plus

There are some differences. You need a sub-select, which in turn creates
its own result set. It's up to the planner whether the left or the right
side gets executed first, after which the results of the other side of
the join get merged to this, or whether this can all be collected in one
go. That's up to the query planner to decide though, and it could be
right.

>>> LEFT JOIN cgwaj ON cgwaj.id=cppwt.generalworkavailability_id
>
> With the addition that I can aggregate…

You can do so in both situations, but I guess that confusion stems from
my copy/paste mistake.

In my experience, lateral joins go well with the jsonb functions. They
tend to reduce code repetition when referencing object members, such as
in your case.

Regards,

Alban Hertroys
--
There is always an exception to always.
Re: Dropping behavior for unique CONSTRAINTs
On 3/4/23 02:03, Peter J. Holzer wrote:
> [snip]
> So your plan is to create a unique constraint (backed by a unique
> index) and then to drop the index and keep the constraint?
>
> That doesn't work. A unique constraint can't exist without a (unique)
> index. Think about it: With a unique constraint PostgreSQL needs to
> check for every insert whether the value already exists in the table.
> Without an index this would mean a full table scan.

I cut my teeth on an RDBMS which didn't automagically create a backing
index. You had to do it yourself... (Autocommit and the default
transaction mode not being SERIALIZABLE were also a shock when I started
using other systems.)

--
Born in Arizona, moved to Babylonia.
Re: Dropping behavior for unique CONSTRAINTs
On 2023-03-04 13:50:28 +1300, David Rowley wrote:
> On Sat, 4 Mar 2023 at 10:55, Ron wrote:
>> On 3/3/23 04:54, David Rowley wrote:
>>> If you have a look at
>>> https://www.postgresql.org/docs/15/sql-dropindex.html check out the
>>> CONCURRENTLY option. That option allows an index to be dropped without
>>> blocking concurrent reads and writes to the table. It seems like just
>>> having a unique index without the constraint is likely your best bet
>>> if you can't afford to block any traffic for the brief moment it would
>>> take to drop the constraint.
>>
>> That doc page says this about CONCURRENTLY:
>> "
>> There are several caveats to be aware of when using this option.
>> Only one index name can be specified, and the CASCADE option is not
>> supported. (Thus, an index that supports a UNIQUE or PRIMARY KEY
>> constraint cannot be dropped this way.)
>> "
>
> I'm not sure which one of these you think applies to the
> recommendation I mentioned

The OP asked specifically about dropping the index backing a unique
constraint, so Ron is pointing out that CONCURRENTLY cannot be used for
that purpose. (I realize that your idea is not to create the constraint
in the first place.)

hp
--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | h...@hjp.at        |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"
Re: Dropping behavior for unique CONSTRAINTs
On 2023-03-02 20:30:41 -, Conner Bean wrote:
> Hi folks,
>
> I'm curious if there are any docs supporting the functionality behind
> dropping unique constraints. For context, I am interested in enforcing
> uniqueness on a column. This table is heavily used, and I plan on
> dropping the constraint in the future. I wanted to avoid using a unique
> index since dropping them requires an exclusive lock and cannot be done
> concurrently. My thought was to then use a unique constraint, since
> I've read unofficial docs[0] that say these can be dropped safely with
> no lock.
>
> However, since a unique index would be the backing index to the unique
> constraint, I'm curious how this would work in practice (or if it even
> does!).

So your plan is to create a unique constraint (backed by a unique index)
and then to drop the index and keep the constraint?

That doesn't work. A unique constraint can't exist without a (unique)
index. Think about it: With a unique constraint PostgreSQL needs to
check for every insert whether the value already exists in the table.
Without an index this would mean a full table scan.

hp
--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | h...@hjp.at        |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"
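A quick sketch of why the plan fails in practice — the backing index cannot be dropped out from under the constraint (names are illustrative; the error text is paraphrased from typical PostgreSQL behavior):

```sql
CREATE TABLE t (email text);
ALTER TABLE t ADD CONSTRAINT t_email_uniq UNIQUE (email);

-- Trying to drop the backing index directly is rejected, roughly:
--   DROP INDEX t_email_uniq;
--   ERROR:  cannot drop index t_email_uniq because constraint
--           t_email_uniq on table t requires it

-- Dropping the constraint removes its index too, but this form takes
-- an ACCESS EXCLUSIVE lock on the table:
ALTER TABLE t DROP CONSTRAINT t_email_uniq;
```

This is why the thread converges on using a plain unique index without a constraint when lock-free removal matters.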