Re: RHEL repo package crc mismatches

2023-05-03 Thread Brainmue
Hello Devrim,

The problem is fixed in most of the repositories I synchronise, but in one I 
now see a new mismatch, with the package 
postgresql13-odbc-13.00.-1PGDG.rhel7.x86_64.rpm.

For the pgdg13 RHEL 7 repository:

[MIRROR] postgresql13-odbc-13.00.-1PGDG.rhel7.x86_64.rpm: Downloading 
successful, but checksum doesn't match. Calculated: 
2fa1642932c950ca5597d64a129fc78d2fb3909c898ade5f9bff4db73fb39ae5(sha256)  
Expected: 
9ed5b91c12e072d871314bfa5e8ec991bb312f360f7d1e3af8ece78945931900(sha256) 

It would be great if you could correct that too.

Thank you very much.

Greetings
Michael


4 May 2023 00:23, "Devrim Gündüz"  wrote:

> Hi again,
> 
> On Tue, 2023-05-02 at 12:38 -0700, Evan Rempel wrote:
> 
>> At our site we use reposync to copy the postgresql repositories to a
>> local repository.
>> 
>> When doing this on April 28 (and since) I experience the following
>> package checksum matching errors.
> 
> 
> 
> I can confirm that this was caused by signing previously unsigned packages
> last week, while rsync failed to update the main server(s). So this is *not*
> a security issue.
> 
> However, as a precaution, I removed problematic packages from the
> repository. They were too old anyway. I did not want to push updated
> checksums for the same packages.
> 
> Please let me know if this solves your problem.
> 
> Again, thanks for the report.
> 
> Regards,
> --
> Devrim Gündüz
> Open Source Solution Architect, PostgreSQL Major Contributor
> Twitter: @DevrimGunduz , @DevrimGunduzTR




Re: RHEL repo package crc mismatches

2023-05-03 Thread Evan Rempel

On 2023-05-03 15:23, Devrim Gündüz wrote:

Hi again,

On Tue, 2023-05-02 at 12:38 -0700, Evan Rempel wrote:

At our site we use reposync to copy the postgresql repositories to a
local repository.

When doing this on April 28 (and since) I experience the following
package checksum matching errors.




I can confirm that this was caused by signing previously unsigned packages
last week, while rsync failed to update the main server(s). So this is *not*
a security issue.

However, as a precaution, I removed problematic packages from the
repository. They were too old anyway. I did not want to push updated
checksums for the same packages.

Please let me know if this solves your problem.

Again, thanks for the report.

Regards,
--
Devrim Gündüz



Thank you. That does solve my problem.

Evan.





Re: [EXT] Re: Why using a partial index is doing slightly more logical I/O than a normal index

2023-05-03 Thread Jeff Janes
On Wed, May 3, 2023 at 2:00 PM Dirschel, Steve <
steve.dirsc...@thomsonreuters.com> wrote:

> Thanks for the reply Jeff.  Yes, more of an academic question.  Regarding
> this part:
>
>
>
>Index Cond: ((workflow_id = 1070) AND ((status)::text = ANY
> ('{NOT_STARTED,PAUSED,PENDING,RUNNING}'::text[])))
>
>Filter: (deleted_millis <= 0)
>
>Buffers: shared hit=24
>
>
>
> For this usage, the =ANY is applied as an "in-index filter".  It only
> descends the index once, to where workflow_id=1070, and then scans forward
> applying the =ANY to each index-tuple until it exhausts the =1070
> condition.  As long as all the =1070 entries fit into just a few buffers,
> the count of buffers accessed by doing this is fewer than doing the
> re-descents.  (Stepping from tuple to tuple in the same index page doesn't
> count as a new access, while a re-descent releases and reacquires the
> buffer.)
>
>
>
> There are 2,981,425 rows where workflow_id = 1070.  Does that change your
> theory of using an “in-index filter” for that plan?
>

Yes.  There is no way that that many index tuples will fit in just 24 index
leaf pages, so I think it must be using the re-descending method for both
plans.  Then I don't know why there is a difference in the number of
buffer accesses.  But the difference seems trivial, so I wouldn't put much
effort into investigating it.



> When you say there was a bit of speculation on the "boundary condition" vs
> "in-index filter", is the speculation on whether Postgres has 2 different
> ways of processing a =ANY filter, or is it that one way is being used by
> one plan and the other by the other plan?
>

The speculation was that this applied to your query.  But going back to
some of my original tests, I see that I remembered some of the details
wrong on the broader topic as well.  When it voluntarily doesn't use the
=ANY as a boundary condition, the condition is evicted from the "Index
Cond" line and instead shows up in a "Filter" line, at least in my one
test case (which means it is no longer an in-index filter, as it jumps to
the table and verifies visibility before applying the filter).  So what
the plans really don't distinguish is when it merely chooses not to use
the extra index column for cost reasons versus when it thinks it is
unable to use it for correctness/safety reasons.

Cheers,

Jeff


Re: Invoking SQL function while doing CREATE OR REPLACE on it

2023-05-03 Thread Tom Lane
Erik Wienhold  writes:
> On 03/05/2023 20:17 CEST Nagendra Mahesh (namahesh)  
> wrote:
>> I have a Postgres 14.4 cluster (AWS Aurora) to which I connect from my
>> application using JDBC.

Aurora, I believe, uses an entirely different storage engine than
community Postgres does.

>> Only in this tiny time window, few transactions fail with the following 
>> error:
>> ERROR: function bar(arg1 => text, arg2 => text) does not exist
>> Hint: No function matches the given name and argument types. You might need 
>> to add explicit type casts.

> There's also a race condition bug in v14.4 that may be relevant.  It got fixed
> in v14.5.  See "Fix race condition when checking transaction visibility" in
> https://www.postgresql.org/docs/14/release-14-5.html.

That race could easily explain this symptom: during the update, there
are two versions of the function's pg_proc row, and it could be that
both of them appear "dead" to an onlooker transaction if one of them
is inspected during the race window.  Then the onlooker would find
no live version of the row and report that it doesn't exist.
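
If you want to watch that happen, a quick sketch is to run this before and
after the CREATE OR REPLACE and compare (the xmin system column shows which
transaction created the visible row version):

    -- Sketch: observe the pg_proc row for bar() being replaced
    SELECT oid, xmin, proname, pronargs
    FROM pg_proc
    WHERE proname = 'bar';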

But having said that, it's not clear to me whether Aurora's storage
engine shares this bug with community PG, or you're seeing some
independent bug of theirs that happens to have a similar symptom.
It's even less clear whether AWS would have applied the fix yet if it
is a shared bug.  You really need to discuss this with AWS support.

regards, tom lane




Re: RHEL repo package crc mismatches

2023-05-03 Thread Devrim Gündüz


Hi again,

On Tue, 2023-05-02 at 12:38 -0700, Evan Rempel wrote:
> At our site we use reposync to copy the postgresql repositories to a 
> local repository.
> 
> When doing this on April 28 (and since) I experience the following 
> package checksum matching errors.
> 



I can confirm that this was caused by signing previously unsigned packages
last week, while rsync failed to update the main server(s). So this is *not*
a security issue.

However, as a precaution, I removed problematic packages from the
repository. They were too old anyway. I did not want to push updated
checksums for the same packages.

Please let me know if this solves your problem.

Again, thanks for the report.

Regards,
-- 
Devrim Gündüz
Open Source Solution Architect, PostgreSQL Major Contributor
Twitter: @DevrimGunduz , @DevrimGunduzTR




Re: Invoking SQL function while doing CREATE OR REPLACE on it

2023-05-03 Thread Erik Wienhold
> On 03/05/2023 20:17 CEST Nagendra Mahesh (namahesh)  
> wrote:
>
> I have a Postgres 14.4 cluster (AWS Aurora) to which I connect from my
> application using JDBC.
>
> I use liquibase for schema management - not only tables, but also a bunch of
> SQL stored procedures and functions. Basically, there is one liquibase
> changeSet that runs last and executes a set of SQL files which contain stored
> procedures and functions.
>
> CREATE OR REPLACE FUNCTION bar(arg1 text, arg2 text) RETURNS record LANGUAGE 
> "plpgsql" AS '
> BEGIN
> -- function body
> END;
> ';
>
> These functions / procedures are replaced ONLY when there is a change in one /
> more SQL files which are part of this changeSet. (runOnChange: true).
>
> Whenever I do a rolling deployment of my application (say, with a change in
> the function body of bar()), liquibase will execute the CREATE OR REPLACE 
> FUNCTION bar()
> as part of a transaction.
>
> In the few milliseconds while bar() is being replaced, there are other ongoing
> transactions (from other replicas of my application) which are continuously
> trying to invoke bar().
>
> Only in this tiny time window, few transactions fail with the following error:
>
> ERROR: function bar(arg1 => text, arg2 => text) does not exist
>   Hint: No function matches the given name and argument types. You might need 
> to add explicit type casts.
> Position: 4 : errorCode = 42883

CREATE OR REPLACE FUNCTION should be atomic and cannot change the function
signature.  I don't see how the function could be missing at any point in
this case.

Are you sure that Liquibase is not dropping the function before re-creating it?
If Liquibase drops and re-creates the function in separate transactions, the
transactions trying to execute that function may find it dropped when using the
read committed isolation level.
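
If they are separate transactions, one mitigation (an untested sketch, reusing
the bar() signature from your post) is to make the drop and re-create a single
atomic step, since other sessions keep seeing the old definition until commit:

    -- Separate transactions (autocommit): there is a window in which
    -- concurrent callers see no function at all.
    DROP FUNCTION IF EXISTS bar(text, text);
    CREATE FUNCTION bar(arg1 text, arg2 text) RETURNS record
    LANGUAGE plpgsql AS $$ BEGIN /* function body */ END $$;

    -- One transaction: concurrent callers see the old definition until COMMIT.
    BEGIN;
    DROP FUNCTION IF EXISTS bar(text, text);
    CREATE FUNCTION bar(arg1 text, arg2 text) RETURNS record
    LANGUAGE plpgsql AS $$ BEGIN /* function body */ END $$;
    COMMIT;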

There's also a race condition bug in v14.4 that may be relevant.  It got fixed
in v14.5.  See "Fix race condition when checking transaction visibility" in
https://www.postgresql.org/docs/14/release-14-5.html.

--
Erik




Re: [EXT] Re: Why using a partial index is doing slightly more logical I/O than a normal index

2023-05-03 Thread Tom Lane
"Dirschel, Steve"  writes:
> There are 2,981,425 rows where workflow_id = 1070.  Does that change your 
> theory of using an “in-index filter” for that plan?  When you say there was a 
> bit of speculation on the "boundary condition" vs "in-index filter", is the 
> speculation on whether Postgres has 2 different ways of processing a =ANY 
> filter, or is it that one way is being used by one plan and the other by 
> the other plan?

I don't believe the intelligence Jeff is postulating actually exists.
I see only one code path for this in nbtree.c, and what it's doing is
what he called the "boundary condition" implementation.  That is, it runs
one index search for "workflow_id = 1070 AND status = 'NOT_STARTED'",
then one for "workflow_id = 1070 AND status = 'PAUSED'", etc,
re-descending the index tree at the start of each of those four scans.
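
In other words, the scan behaves roughly as if the query had been written as
four separate searches (an illustrative rewrite only, ignoring the
deleted_millis filter; it is not what the executor literally runs):

    SELECT * FROM workflow_execution_test
      WHERE workflow_id = 1070 AND status = 'NOT_STARTED'
    UNION ALL
    SELECT * FROM workflow_execution_test
      WHERE workflow_id = 1070 AND status = 'PAUSED'
    UNION ALL
    SELECT * FROM workflow_execution_test
      WHERE workflow_id = 1070 AND status = 'PENDING'
    UNION ALL
    SELECT * FROM workflow_execution_test
      WHERE workflow_id = 1070 AND status = 'RUNNING';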

I'm not inclined to ascribe any great significance to the varying numbers
of buffer hits you observed.  I think that's likely explained by chance
layout of the two indexes' contents, so that some of these searches cross
different numbers of index pages even though they visit the same number of
index entries overall.  In particular, it doesn't look like the partial
index is buying anything for this specific test case.  The index's
constraint on "status" matters not at all, because in neither index will
we ever visit any regions of the index where other values of "status"
appear (save the one following entry where we detect that the status value
no longer matches in each scan; but it doesn't really matter what that
entry is).  The constraint on "deleted_millis" could help, but your second
EXPLAIN result shows that it didn't actually eliminate any rows:

>> Index Scan using test_workflow_execution_initial_ui_tabs on 
>> workflow_execution_test  (cost=0.56..15820.19 rows=4335 width=1309) (actual 
>> time=0.049..0.106 rows=56 loops=1)
>>Index Cond: ((workflow_id = 1070) AND ((status)::text = ANY 
>> ('{NOT_STARTED,PAUSED,PENDING,RUNNING}'::text[])))
>>Filter: (deleted_millis <= 0)

(note the lack of any "Rows Removed by Filter" line).  We can therefore
conclude that the index regions satisfying the workflow_id+status
conditions had no entries with deleted_millis <= 0 either.  So these
two test cases visited exactly the same number of index entries, and
any differences in "buffers hit" had to be chance layout effects, or
possibly the result of different index entry sizes.  How large is
that "result" column in reality?

regards, tom lane




Invoking SQL function while doing CREATE OR REPLACE on it

2023-05-03 Thread Nagendra Mahesh (namahesh)
I have a Postgres 14.4 cluster (AWS Aurora) to which I connect from my 
application using JDBC.

I use liquibase for schema management - not only tables, but also a bunch of 
SQL stored procedures and functions. Basically, there is one liquibase 
changeSet that runs last and executes a set of SQL files which contain stored 
procedures and functions.

CREATE OR REPLACE FUNCTION bar(arg1 text, arg2 text) RETURNS record LANGUAGE 
"plpgsql" AS '
BEGIN
   -- function body
END;
';

These functions / procedures are replaced ONLY when there is a change in one / 
more SQL files which are part of this changeSet. (runOnChange: true).

Whenever I do a rolling deployment of my application (say, with a change in the 
function body of bar()), liquibase will execute the CREATE OR REPLACE FUNCTION 
bar() as part of a transaction.

In the few milliseconds while bar() is being replaced, there are other ongoing 
transactions (from other replicas of my application) which are continuously 
trying to invoke bar().

Only in this tiny time window, few transactions fail with the following error:

ERROR: function bar(arg1 => text, arg2 => text) does not exist
  Hint: No function matches the given name and argument types. You might need 
to add explicit type casts.
Position: 4 : errorCode = 42883

I don't want any of these transactions to fail (we do not have any proper way 
of re-trying them from the application layer).
This is seen as affecting availability.
However, it is acceptable for these transactions to BLOCK (for a few hundred 
ms) while the SQL function body is being replaced, and then proceed with 
invocation.

Is there a way to safely modify a stored function / procedure in PostgreSQL 
while that function / procedure is being invoked continuously by multiple 
transactions?

Thanks!



RE: [EXT] Re: Why using a partial index is doing slightly more logical I/O than a normal index

2023-05-03 Thread Dirschel, Steve
Thanks for the reply Jeff.  Yes, more of an academic question.  Regarding this 
part:

   Index Cond: ((workflow_id = 1070) AND ((status)::text = ANY 
('{NOT_STARTED,PAUSED,PENDING,RUNNING}'::text[])))
   Filter: (deleted_millis <= 0)
   Buffers: shared hit=24

For this usage, the =ANY is applied as an "in-index filter".  It only descends 
the index once, to where workflow_id=1070, and then scans forward applying the 
=ANY to each index-tuple until it exhausts the =1070 condition.  As long as all 
the =1070 entries fit into just a few buffers, the count of buffers accessed by 
doing this is fewer than doing the re-descents.  (Stepping from tuple to tuple 
in the same index page doesn't count as a new access, while a re-descent 
releases and reacquires the buffer.)

There are 2,981,425 rows where workflow_id = 1070.  Does that change your 
theory of using an “in-index filter” for that plan?  When you say there was a 
bit of speculation on the "boundary condition" vs "in-index filter", is the 
speculation on whether Postgres has 2 different ways of processing a =ANY 
filter, or is it that one way is being used by one plan and the other by the 
other plan?

Thanks again for your reply.  It is helpful.
Steve


Re: Why using a partial index is doing slightly more logical I/O than a normal index

2023-05-03 Thread Jeff Janes
Because both the actual times and the expected costs are so similar to each
other, I am assuming you are asking this as more of an academic question
than a practical one.  If it is actually a practical matter, you should
find a better example to present to us.

On Wed, May 3, 2023 at 9:17 AM Dirschel, Steve <
steve.dirsc...@thomsonreuters.com> wrote:



>  Index Cond: ((workflow_id = 1070) AND ((status)::text = ANY
> ('{NOT_STARTED,PAUSED,PENDING,RUNNING}'::text[])))
>
>   Buffers: shared hit=33
>

For this usage, the =ANY is a "boundary condition".  It re-descends the
index for each value in the array, and each of those re-descents incurs
buffer accesses.  They cost very little, as the planner thinks they will
mostly be cached already (and indeed, they are), but the difference still
shows up in the Buffers tabulation.

>    Index Cond: ((workflow_id = 1070) AND ((status)::text = ANY
> ('{NOT_STARTED,PAUSED,PENDING,RUNNING}'::text[])))
>
>Filter: (deleted_millis <= 0)
>
>Buffers: shared hit=24
>

For this usage, the =ANY is applied as an "in-index filter".  It only
descends the index once, to where workflow_id=1070, and then scans forward
applying the =ANY to each index-tuple until it exhausts the =1070
condition.  As long as all the =1070 entries fit into just a few buffers,
the count of buffers accessed by doing this is fewer than doing the
re-descents.  (Stepping from tuple to tuple in the same index page doesn't
count as a new access, while a re-descent releases and reacquires the
buffer.)

This is a bit of speculation on my part, as nothing in the plan output
distinguishes boundary condition usages from in-index-filter usages.  This
speculation is based on the fact that I was recently investigating a
similar situation and did extensive work on it with a debugger and by
adding new experimental log messages.  Also, it isn't clear to me why it
chooses one usage for one plan and the other usage for the other one in
your case, as it seems that both would be eligible for the "boundary
condition" treatment.  But presumably for some reason invisible to us it
just thinks one approach is faster for one index and the other approach for
the other index.


> One other thing to note: when using the partial index the cost is .43 ..
> 15824.82.  When using the other index the cost is .56 .. 15820.19.  So the
> lower end cost (I believe the cost to find the first row) is slightly lower
> for the partial index but the higher end cost (I believe to find the last
> row) is higher for the partial index.  Since there is no LIMIT clause why
> wouldn’t the optimizer use the lowest cost to find all rows (which in this
> case would be to use the non-partial index)?
>

There is some code in the planner which deems plans to be tied if their
costs are within a small difference (1%, I think it is).  This allows some
branches of the tree of all possible plans to be pruned off early, which
can save a lot of time in planning.

Cheers,

Jeff



Re: RHEL repo package crc mismatches

2023-05-03 Thread Brainmue
Hello Bruce,

Thanks for the update. Let's see what comes of it.

Greetings
   Michael


3 May 2023 18:57, "Bruce Momjian"  wrote:

> The packagers are researching this problem now.
> 
> ---
> 
> On Wed, May 3, 2023 at 07:33:02AM +, Brainmue wrote:
> 
>> Hello Evan,
>> 
>> We have exactly the same problem and don't feel comfortable with it at the 
>> moment either.
>> We synchronise several versions, and this problem occurs with all of them.
>> Can anyone confirm that the packages themselves have not been changed 
>> inadvertently and that only the metadata is wrong?
>> Here are the mismatches we see.
>> 
>> For the pgdg11 RHEL 7 repository:
>> 
>> [MIRROR] ogr_fdw_11-1.1.0-1.rhel7.x86_64.rpm: Downloading successful, but 
>> checksum doesn't match.
>> Calculated: 
>> c61d0bb8cdc2c386b57d8968b509f9fe7bf7693b3f86af730128797d087c0caa(sha256) 
>> Expected:
>> a963ae2eb874da055db63953cf0eb0d62e24d16abd6e8d4dab615ba4fadaefd8(sha256)
>> [MIRROR] ogr_fdw_11-llvmjit-1.1.0-1.rhel7.x86_64.rpm: Downloading 
>> successful, but checksum doesn't
>> match. Calculated: 
>> 1be687c8721e7683f7efbfe51b9bd9532f7c7326d344e83e8928667cbc524cd3(sha256)
>> Expected: 
>> 52aa7c905fd802bfea5cf7e89b80b7523b2a16309575cdbe9d68df4179ec1f6b(sha256)
>> [MIRROR] pg_auto_failover_11-1.6.3-1.rhel7.x86_64.rpm: Downloading 
>> successful, but checksum doesn't
>> match. Calculated: 
>> abd1ede633fe8dc7721e1e09783e300c8d5a5e9b226257c67969e2bfbf7ce4f9(sha256)
>> Expected: 
>> 0b29fc748639210c76af4b1870772780ba13a04698886e78514e7fb1baac9781(sha256)
>> 
>> For the pgdg13 RHEL 7 repository:
>> 
>> [MIRROR] ogr_fdw_13-1.1.0-1.rhel7.x86_64.rpm: Downloading successful, but 
>> checksum doesn't match.
>> Calculated: 
>> d2ea23dc8b866c09eb620187e40147daae1a60f2a31370a88fd119b08a5f8816(sha256) 
>> Expected:
>> a39bc56ebc34de96321af69f99862819fe36516775cb155f599c839c098a0030(sha256)
>> [MIRROR] ogr_fdw_13-llvmjit-1.1.0-1.rhel7.x86_64.rpm: Downloading 
>> successful, but checksum doesn't
>> match. Calculated: 
>> f2d981ba5ae5e54ac420f881c27eaba3af6b506638feed9f686273272083b479(sha256)
>> Expected: 
>> 5e6baa1e8169da8251f4a3c47c8db0ab4344977c0ed4a8f1042d353a50e4e304(sha256)
>> [MIRROR] pg_auto_failover_13-1.6.3-1.rhel7.x86_64.rpm: Downloading 
>> successful, but checksum doesn't
>> match. Calculated: 
>> 01ce463c8487d52986e347025266167135f0a866c37590c784e7e3e5d8e43817(sha256)
>> Expected: 
>> e35c32a27f5c97596d74fca03e416cb743bf188fdc0dfaf736cc68a20801a5c9(sha256)
>> 
>> For the pgdg14 RHEL 7 repository:
>> 
>> [MIRROR] pg_auto_failover_14-1.6.3-1.rhel7.x86_64.rpm: Downloading 
>> successful, but checksum doesn't
>> match. Calculated: 
>> 7b72deadb029a8752717c832cde2e23d87e341037765086d88ac6d96816ebe89(sha256)
>> Expected: 
>> 55de94cebb1967c4f1edb1a0be14246173c05168261a76d141e819f607e83ee3(sha256)
>> 
>> Thank you for checking.
>> 
>> Greetings
>> Michael
>> 
>> 3 May 2023 09:00, "Evan Rempel"  wrote:
>> 
>> At our site we use reposync to copy the postgresql repositories to a local 
>> repository.
>> 
>> When doing this on April 28 (and since) I experience the following package 
>> checksum matching errors.
>> 
>> For the pgdg13 RHEL 8 repository
>> 
>> [MIRROR] pg_auto_failover_13-1.6.3-1.rhel8.x86_64.rpm: Downloading 
>> successful, but checksum doesn't
>> match. Calculated: 
>> 5196edcfe1d6af6c0e90ad9a25667613bdfa0731a84fa9a1dbaa7080b4a3caac(sha256)
>> Expected: 
>> 8d4527c96e9c8a3ff86d75aa85c166899ee895e9522c6720223f0f93b658f8d6(sha256)
>> 
>> [MIRROR] e-maj_13-4.0.1-1.rhel8.x86_64.rpm: Downloading successful, but 
>> checksum doesn't match.
>> Calculated: 
>> f7576cb1cd22303cb3dbb2a86911ad3f9e57afa8472a31f1a6a1f176f708fa1d(sha256) 
>> Expected:
>> 8c56cacb99771c4f06be2551988e553a70ea5e5459202e12e0e92fdeb7371621(sha256)
>> 
>> For the pgdg12 RHEL 8 repository
>> 
>> [MIRROR] pg_auto_failover_12-llvmjit-1.6.3-1.rhel8.x86_64.rpm: Downloading 
>> successful, but checksum
>> doesn't match. Calculated: 
>> 9bfdaccc3a151fd847bbb5e622a9384648cf963faacd90dc9b31cd433e23a3c0(sha256)
>> Expected: 
>> aa5e3dc99cabfe22839ed0b9501a0099af139bf8551344a3b198ac048218ceee(sha256)
>> 
>> I think it is just metadata information, but it sounds scary.
>> 
>> Can anyone comment?
>> 
>> --
>> Evan
> 
> --
> Bruce Momjian  https://momjian.us
> EDB https://enterprisedb.com
> 
> Embrace your flaws. They make you human, rather than perfect,
> which you will never be.




Re: RHEL repo package crc mismatches

2023-05-03 Thread Bruce Momjian


The packagers are researching this problem now.

---

On Wed, May  3, 2023 at 07:33:02AM +, Brainmue wrote:
> Hello Evan,
> 
> We have exactly the same problem and don't feel comfortable with it at the 
> moment either.
> We synchronise several versions, and this problem occurs with all of them.
> Can anyone confirm that the packages themselves have not been changed 
> inadvertently and that only the metadata is wrong?
> Here are the mismatches we see.
> 
> For the pgdg11 RHEL 7 repository:
> 
> [MIRROR] ogr_fdw_11-1.1.0-1.rhel7.x86_64.rpm: Downloading successful, but 
> checksum doesn't match.
> Calculated: 
> c61d0bb8cdc2c386b57d8968b509f9fe7bf7693b3f86af730128797d087c0caa(sha256) 
> Expected:
> a963ae2eb874da055db63953cf0eb0d62e24d16abd6e8d4dab615ba4fadaefd8(sha256) 
> [MIRROR] ogr_fdw_11-llvmjit-1.1.0-1.rhel7.x86_64.rpm: Downloading successful, 
> but checksum doesn't
> match. Calculated: 
> 1be687c8721e7683f7efbfe51b9bd9532f7c7326d344e83e8928667cbc524cd3(sha256)
> Expected: 
> 52aa7c905fd802bfea5cf7e89b80b7523b2a16309575cdbe9d68df4179ec1f6b(sha256) 
> [MIRROR] pg_auto_failover_11-1.6.3-1.rhel7.x86_64.rpm: Downloading 
> successful, but checksum doesn't
> match. Calculated: 
> abd1ede633fe8dc7721e1e09783e300c8d5a5e9b226257c67969e2bfbf7ce4f9(sha256)
> Expected: 
> 0b29fc748639210c76af4b1870772780ba13a04698886e78514e7fb1baac9781(sha256) 
> 
> For the pgdg13 RHEL 7 repository:
> 
> [MIRROR] ogr_fdw_13-1.1.0-1.rhel7.x86_64.rpm: Downloading successful, but 
> checksum doesn't match.
> Calculated: 
> d2ea23dc8b866c09eb620187e40147daae1a60f2a31370a88fd119b08a5f8816(sha256) 
> Expected:
> a39bc56ebc34de96321af69f99862819fe36516775cb155f599c839c098a0030(sha256) 
> [MIRROR] ogr_fdw_13-llvmjit-1.1.0-1.rhel7.x86_64.rpm: Downloading successful, 
> but checksum doesn't
> match. Calculated: 
> f2d981ba5ae5e54ac420f881c27eaba3af6b506638feed9f686273272083b479(sha256)
> Expected: 
> 5e6baa1e8169da8251f4a3c47c8db0ab4344977c0ed4a8f1042d353a50e4e304(sha256) 
> [MIRROR] pg_auto_failover_13-1.6.3-1.rhel7.x86_64.rpm: Downloading 
> successful, but checksum doesn't
> match. Calculated: 
> 01ce463c8487d52986e347025266167135f0a866c37590c784e7e3e5d8e43817(sha256)
> Expected: 
> e35c32a27f5c97596d74fca03e416cb743bf188fdc0dfaf736cc68a20801a5c9(sha256) 
> 
> For the pgdg14 RHEL 7 repository:
> 
> [MIRROR] pg_auto_failover_14-1.6.3-1.rhel7.x86_64.rpm: Downloading 
> successful, but checksum doesn't
> match. Calculated: 
> 7b72deadb029a8752717c832cde2e23d87e341037765086d88ac6d96816ebe89(sha256)
> Expected: 
> 55de94cebb1967c4f1edb1a0be14246173c05168261a76d141e819f607e83ee3(sha256)
> 
> Thank you for checking.
> 
> Greetings
> Michael
> 
> 3 May 2023 09:00, "Evan Rempel"  wrote:
> 
> > At our site we use reposync to copy the postgresql repositories to a local 
> > repository.
> > 
> > When doing this on April 28 (and since) I experience the following package 
> > checksum matching errors.
> > 
> > For the pgdg13 RHEL 8 repository
> > 
> > [MIRROR] pg_auto_failover_13-1.6.3-1.rhel8.x86_64.rpm: Downloading 
> > successful, but checksum doesn't
> > match. Calculated: 
> > 5196edcfe1d6af6c0e90ad9a25667613bdfa0731a84fa9a1dbaa7080b4a3caac(sha256)
> > Expected: 
> > 8d4527c96e9c8a3ff86d75aa85c166899ee895e9522c6720223f0f93b658f8d6(sha256)
> > 
> > [MIRROR] e-maj_13-4.0.1-1.rhel8.x86_64.rpm: Downloading successful, but 
> > checksum doesn't match.
> > Calculated: 
> > f7576cb1cd22303cb3dbb2a86911ad3f9e57afa8472a31f1a6a1f176f708fa1d(sha256) 
> > Expected:
> > 8c56cacb99771c4f06be2551988e553a70ea5e5459202e12e0e92fdeb7371621(sha256)
> > 
> > For the pgdg12 RHEL 8 repository
> > 
> > [MIRROR] pg_auto_failover_12-llvmjit-1.6.3-1.rhel8.x86_64.rpm: Downloading 
> > successful, but checksum
> > doesn't match. Calculated: 
> > 9bfdaccc3a151fd847bbb5e622a9384648cf963faacd90dc9b31cd433e23a3c0(sha256)
> > Expected: 
> > aa5e3dc99cabfe22839ed0b9501a0099af139bf8551344a3b198ac048218ceee(sha256)
> > 
> > I think it is just metadata information, but it sounds scary.
> > 
> > Can anyone comment?
> > 
> > --
> > Evan
> 
> 

-- 
  Bruce Momjian  https://momjian.us
  EDB  https://enterprisedb.com

  Embrace your flaws.  They make you human, rather than perfect,
  which you will never be.




Re: Can one user log in on multiple machines?

2023-05-03 Thread Israel Brewster
> On May 2, 2023, at 10:23 PM, Wen Yi  wrote:
> 
> Hi team,
> can I use the same user's credentials to log in from multiple machines when 
> connecting to Postgres?

In short, yes. A given user can log in from any number of machines, I believe 
even simultaneously (though I haven’t tried that specifically).
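
If you want to verify on your own cluster, a quick sketch against
pg_stat_activity lists the live sessions per user and client address:

    SELECT usename, client_addr, count(*) AS sessions
    FROM pg_stat_activity
    GROUP BY usename, client_addr
    ORDER BY usename;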

---
Israel Brewster
Software Engineer
Alaska Volcano Observatory 
Geophysical Institute - UAF 
2156 Koyukuk Drive 
Fairbanks AK 99775-7320
Work: 907-474-5172
cell:  907-328-9145

> Thanks in advance!
> 
> Yours,
> Wen Yi.



Re: unknown postgres ssl error "could not accept SSL connection: Success" and timeout

2023-05-03 Thread Jeff Janes
On Wed, May 3, 2023 at 9:54 AM Sergey Cherevko wrote:

> Ubuntu 18.04.6 LTS (GNU/Linux 4.15.0-167-generic x86_64)
>
> OpenSSL 1.1.1 11 Sep 2018
>
> Sometimes I see this in the Postgres logs
>
So, your system is working normally most of the time?  Or is it working
normally all of the time, and you just get occasional extra log messages of
unknown origin?

It could be informative to add %r to the log_line_prefix, so you would know
where these connections are coming from.
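
For example, something like this (a sketch; ALTER SYSTEM writes
postgresql.auto.conf, and log_line_prefix only needs a reload, not a restart):

    -- %r adds the remote host and port to every log line
    ALTER SYSTEM SET log_line_prefix = '%m [%p] %r %q%u@%d ';
    SELECT pg_reload_conf();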



> CRITICAL
> PDOException
> SQLSTATE[08006] [7] timeout expired
> PDOException
> /var/log/web/hub2.log
>
>
I don't recognize the above (client?) log format.  What is it from?  Does
it correspond exactly to the entries in the PostgreSQL log?  The time
stamps seem off, the wording doesn't suggest to me that they are just
different ends of the same error, and the count of errors in one log
doesn't agree with the count in the other (6 vs 3).


> Should I update my OpenSSL to see the real error? How can I do that if I use 
> Ubuntu 18.04?

I wouldn't expect that to help.  But how did you install it in the first
place?  Ubuntu 18.04 generally provides PostgreSQL 10, not 12, so you must
have done something non-default to get where you are.

In my experience, this (poorly worded) error comes from the client
initiating an ssl connection, but then not finishing the ssl negotiations.
For example, PGSSLMODE is set to verify-ca or above, but the client has no
root.crt file present.

Cheers,

Jeff


Re: libpq and multi-threading

2023-05-03 Thread Geoff Winkless
On Wed, 3 May 2023 at 12:11, Michael J. Baars <
mjbaars1977.pgsql.hack...@gmail.com> wrote:

> The shared common address space is controlled by the clone(2) CLONE_VM
> option. Indeed this results in an environment in which both the parent and
> the child can read / write each other's memory, but dynamic memory being
> allocated using malloc(3) from two different threads simultaneously will
> result in internal interference.
>

There's an interesting note here

https://stackoverflow.com/a/45285877

TL;DR: glibc malloc does not cope well with threads created with clone().
Use pthread_create if you wish to use glibc malloc.

Geoff


Re: libpq and multi-threading

2023-05-03 Thread Peter J. Holzer
On 2023-05-03 06:35:26 -0600, Michael Loftis wrote:
> That is not a thread. Linux man clone right at the start …
> 
> “clone, __clone2, clone3 - create a child process”
> 
> What you want is pthread_create (or similar)

clone is the system call which is used to create both processes and
threads (in the early days of Linux that generalization was thought to
be beneficial, but POSIX has all kinds of special rules for processes
and threads so it may actually have made stuff more complicated.)

I do agree that pthread_create (or the C11 thrd_create) is the way to
go. It will just call clone behind the scenes, but it will do so with
the right flags and possibly set up some other stuff expected by the
rest of the C library, too.

There may be good reasons to use the low level function in some cases.
But I'd say that in that case you should better know what that means
exactly.

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: libpq and multi-threading

2023-05-03 Thread Michael J. Baars
Hi Michael,

Are pthread_* functions really such an improvement over clone? Do they make
'freely passing around' PGresult objects possible? As if it matters whether
it is a process or a thread.

We were talking about the documentation and this 'freely passing around' of
PGresult objects. I just don't think it is as simple as the documentation
makes you believe.

On Wed, 3 May 2023, 14:35 Michael Loftis,  wrote:

>
> That is not a thread. Linux man clone right at the start …
>
> “clone, __clone2, clone3 - create a child process”
>
> What you want is pthread_create (or similar)
>
> There’s a bunch of not well documented dragons if you’re trying to treat a
> child process as a thread. Use POSIX Threads, as pretty much anytime PG or
> anything else Linux based says thread they’re talking about a POSIX Thread
> environment.
>
>
> On Wed, May 3, 2023 at 05:12 Michael J. Baars <
> mjbaars1977.pgsql.hack...@gmail.com> wrote:
>
>> Hi Peter,
>>
>> The shared common address space is controlled by the clone(2) CLONE_VM
>> option. Indeed this results in an environment in which both the parent and
>> the child can read / write each other's memory, but dynamic memory being
>> allocated using malloc(3) from two different threads simultaneously will
>> result in internal interference.
>>
>> Because libpq makes use of malloc to store results, you will come to find
>> that the CLONE_VM option was not the option you were looking for.
>>
>> On Tue, 2 May 2023, 19:58 Peter J. Holzer,  wrote:
>>
>>> On 2023-05-02 17:43:06 +0200, Michael J. Baars wrote:
>>> > I don't think it is, but let me shed some more light on it.
>>>
>>> One possibly quite important information you haven't told us yet is
>>> which OS you use.
>>>
>>> Or how you create the threads, how you pass the results around, what
>>> else you are possibly doing between getting the result and trying to use
>>> it ...
>>>
>>> A short self-contained test case might shed some light on this.
>>>
>>>
>>> > After playing around a little with threads and memory, I now know that
>>> the
>>> > PGresult is not read-only, it is read-once. The child can only read
>>> that
>>> > portion of parent memory, that was written before the thread started.
>>> Read-only
>>> > is not strong enough.
>>> >
>>> > Let me correct my first mail. Making libpq use mmap is not good enough
>>> either.
>>> > Shared memory allocated by the child can not be accessed by the parent.
>>>
>>> Are you sure you are talking about threads and not processes? In the OSs
>>> I am familiar with, threads (of the same process) share a common address
>>> space. You don't need explicit shared memory and there is no such thing
>>> as "parent memory" (there is thread-local storage, but that's more a
>>> compiler/library construct).
>>>
>>> hp
>>>
>>> --
>>>_  | Peter J. Holzer| Story must make more sense than reality.
>>> |_|_) ||
>>> | |   | h...@hjp.at |-- Charles Stross, "Creative writing
>>> __/   | http://www.hjp.at/ |   challenge!"
>>>
>> --
>
> "Genius might be described as a supreme capacity for getting its possessors
> into trouble of all kinds."
> -- Samuel Butler
>


Re: PL/pgSQL doesn't support variables in queries?

2023-05-03 Thread Adrian Klaver

On 5/3/23 07:34, J.A. wrote:

Ah - I think I'm starting to follow.

what I was _trying_ to do is this:

get a value from a column and stick it into a variable.

now select * from a _number_ of tables and return multiple recordsets 
from this single query. I'm not sure if that is the same terminology, in 
pgsql?


So is this possible?


Many things are possible and they are covered in the documentation:

https://www.postgresql.org/docs/current/plpgsql.html

Read through that and you will find your questions answered.



-JA-







--
Adrian Klaver
adrian.kla...@aklaver.com





Re: PL/pgSQL doesn't support variables in queries?

2023-05-03 Thread David G. Johnston
The convention on these lists is to inline or, at worst, bottom-post.

On Wed, May 3, 2023 at 7:34 AM J.A. 
wrote:

> now select * from a _number_ of tables and return multiple recordsets
> from this single query. I'm not sure if that is the same terminology, in
> pgsql?
>
> So is this possible?
>

The output of a set-returning function (srf) is a single tabular result.

If you want to produce multiple tabular results you would need to either
serialize them (say into jsonb) or assign a cursor name to each and then
reference them by name.
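
A rough sketch of the cursor approach (untested; application_bar is a made-up
second table, the rest follows the earlier example):

CREATE FUNCTION app_details(p_code text)
RETURNS SETOF refcursor
LANGUAGE plpgsql AS $$
DECLARE
  v_application_id uuid;
  c_foo refcursor := 'c_foo';  -- fixed names so the caller can FETCH by name
  c_bar refcursor := 'c_bar';
BEGIN
  SELECT application_id INTO v_application_id
    FROM applications WHERE code = p_code;

  OPEN c_foo FOR SELECT * FROM application_foo
    WHERE application_id = v_application_id;
  RETURN NEXT c_foo;

  OPEN c_bar FOR SELECT * FROM application_bar  -- hypothetical second table
    WHERE application_id = v_application_id;
  RETURN NEXT c_bar;
END $$;

-- The cursors only live until end of transaction, so fetch inside one:
BEGIN;
SELECT app_details('pg-test-cc');
FETCH ALL FROM c_foo;
FETCH ALL FROM c_bar;
COMMIT;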

David J.


Re: PL/pgSQL doesn't support variables in queries?

2023-05-03 Thread J.A.
Ah - I think I'm starting to follow.

what I was _trying_ to do is this:

get a value from a column and stick it into a variable.

now select * from a _number_ of tables and return multiple recordsets from
this single query. I'm not sure if that is the same terminology, in pgsql?

So is this possible?

-JA-



On Wed, 3 May 2023 at 23:29, Tom Lane  wrote:

> "J.A."  writes:
> > I must admit, I did try doing something like you suggested Erik. I tried
> > things like:
>
> > DO $$
> > DECLARE
> >   v_application_id uuid;
> > BEGIN
> >   SELECT application_id INTO v_application_id FROM applications
> > WHERE code = 'pg-test-cc';
>
> >   SELECT * FROM application_foo WHERE application_id =
> > v_application_id;
> >   -- more SELECT * FROM child tables
>
> > END $$;
>
> > but that never worked, with warning:
>
> > ERROR: query has no destination for result data
> > HINT: If you want to discard the results of a SELECT, use PERFORM instead.
> > CONTEXT: PL/pgSQL function inline_code_block line 7 at SQL statement
> > SQL state: 42601
>
> Note that that is complaining about your second try, not your first.
> You need to put the result of the SELECT somewhere.  INTO is fine
> if it's a single-row result.  Otherwise, consider looping through
> the result with a FOR loop.  Again, there are plenty of examples
> in the manual.
>
> regards, tom lane
>


unknown postgres ssl error "could not accept SSL connection: Success" and timeout

2023-05-03 Thread Sergey Cherevko
Ubuntu 18.04.6 LTS (GNU/Linux 4.15.0-167-generic x86_64)

OpenSSL 1.1.1 11 Sep 2018

Sometimes I see this in the Postgres logs

Found some similar threads, but not sure

https://github.com/modoboa/modoboa/issues/1702

https://postgrespro.com/list/thread-id/2585389

How can I debug this?

2023-02-22 00:43:38.570 MSK [27636] [unknown]@[unknown] LOG:  could
not accept SSL connection: Success
2023-02-22 00:43:38.570 MSK [27631] [unknown]@[unknown] LOG:  could
not accept SSL connection: Success
2023-02-22 00:43:38.570 MSK [27639] [unknown]@[unknown] LOG:  could
not accept SSL connection: Success
2023-02-22 00:43:38.570 MSK [27642] [unknown]@[unknown] LOG:  could
not accept SSL connection: Success
2023-02-22 00:43:38.570 MSK [27643] [unknown]@[unknown] LOG:  could
not accept SSL connection: Success
2023-02-22 00:43:38.570 MSK [27627] [unknown]@[unknown] LOG:  could
not accept SSL connection: Connection reset by peer

CRITICAL
PDOException
SQLSTATE[08006] [7] timeout expired
PDOException
/var/log/web/hub2.log
Feb 21, 2023 @ 23:43:38.001
CRITICAL
PDOException
SQLSTATE[08006] [7] timeout expired
PDOException
/var/log/web/hub2.log
Feb 21, 2023 @ 23:43:38.001
CRITICAL
PDOException
SQLSTATE[08006] [7] timeout expired
PDOException
/var/log/web/hub2.log

here is my postgres config

cat /etc/postgresql/12/main/postgresql.conf

data_directory = '/var/lib/postgresql/12/main'
hba_file = '/etc/postgresql/12/main/pg_hba.conf'
ident_file = '/etc/postgresql/12/main/pg_ident.conf'
external_pid_file = '/var/run/postgresql/12-main.pid'

listen_addresses = '*'
port = 5432
max_connections = 1
unix_socket_directories = '/var/run/postgresql'
ssl = on
ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'
ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key'

shared_buffers = 4GB
temp_buffers = 8MB
work_mem = 128MB
maintenance_work_mem = 128MB
dynamic_shared_memory_type = posix

bgwriter_lru_maxpages = 0
effective_io_concurrency = 3

wal_level = hot_standby

checkpoint_completion_target = 0.9

archive_mode = on
archive_command = '/bin/true'
archive_timeout = 300

max_standby_archive_delay = 21600
max_standby_streaming_delay = 21600



Should I update my OpenSSL to see the real error? How can I do that if
I use Ubuntu 18.04?


Re: PL/pgSQL doesn't support variables in queries?

2023-05-03 Thread Tom Lane
"J.A."  writes:
> I must admit, I did try doing something like you suggested Erik. I tried
> things like:

> DO $$
> DECLARE
>   v_application_id uuid;
> BEGIN
>   SELECT application_id INTO v_application_id FROM applications
> WHERE code = 'pg-test-cc';

>   SELECT * FROM application_foo WHERE application_id =
> v_application_id;
>   -- more SELECT * FROM child tables

> END $$;

> but that never worked, with warning:

> ERROR: query has no destination for result data
> HINT: If you want to discard the results of a SELECT, use PERFORM instead.
> CONTEXT: PL/pgSQL function inline_code_block line 7 at SQL statement
> SQL state: 42601

Note that that is complaining about your second try, not your first.
You need to put the result of the SELECT somewhere.  INTO is fine
if it's a single-row result.  Otherwise, consider looping through
the result with a FOR loop.  Again, there are plenty of examples
in the manual.

regards, tom lane




Re: PL/pgSQL doesn't support variables in queries?

2023-05-03 Thread Erik Wienhold
> On 03/05/2023 14:51 CEST J.A.  wrote:
>
> Oh wow folks! I totally misunderstood the docs then. (I also tried to Read The
> Manual before I posted here, too :blush:)
>
> I must admit, I did try doing something like you suggested Erik. I tried
> things like:
>
> DO $$
>  DECLARE
>  v_application_id uuid;
>  BEGIN
>  SELECT application_id INTO v_application_id FROM applications WHERE code = 
> 'pg-test-cc';
>
> SELECT * FROM application_foo WHERE application_id = v_application_id;
>  -- more SELECT * FROM child tables
>
>  END $$;
>
> but that never worked, with warning:
>
> ERROR: query has no destination for result data
> HINT: If you want to discard the results of a SELECT, use PERFORM instead.
> CONTEXT: PL/pgSQL function inline_code_block line 7 at SQL statement
> SQL state: 42601
>
> Which is why i (incorrectly?) thought this cannot be done?

plpgsql requires you to either store query results in variables or discard them
as the hint in the error message says.  PERFORM is mainly used to execute
functions for their side-effects only, e.g. PERFORM pg_reload_conf(), or execute
a query where you only want to tell if rows were found by checking special
variable  FOUND  afterwards.
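
For example, a small sketch using the applications table from your post:

DO $$
BEGIN
  PERFORM 1 FROM applications WHERE code = 'pg-test-cc';
  IF NOT FOUND THEN
    RAISE NOTICE 'no application with that code';
  END IF;
END $$;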

> So is there another trick to doing this instead? Is it maybe via the v_record
> "record" variable instead?

Depends on what you want to do with those application_foo rows.  SELECT INTO
only considers the first row.  I assume you want to loop over the entire result
set.  Then you must use  FOR v_rec IN ... LOOP:

DO $$
DECLARE
  v_application_id uuid;
  v_rec record;
BEGIN
  SELECT application_id INTO v_application_id FROM applications WHERE 
code = 'pg-test-cc';

  FOR v_rec IN
SELECT * FROM application_foo WHERE application_id = 
v_application_id
  LOOP
RAISE NOTICE 'v_rec = %', v_rec;  -- Prints each result.
  END LOOP;
END $$;

--
Erik




Why using a partial index is doing slightly more logical I/O than a normal index

2023-05-03 Thread Dirschel, Steve
Table definition:

workflow_db=> \d workflow_execution_test
                        Table "public.workflow_execution_test"
              Column              |           Type           | Collation | Nullable | Default
----------------------------------+--------------------------+-----------+----------+---------
 execution_id                     | bigint                   |           |          |
 state_machine_id                 | bigint                   |           |          |
 workflow_id                      | bigint                   |           |          |
 started_datetime                 | timestamp with time zone |           |          |
 completed_datetime               | timestamp with time zone |           |          |
 status                           | character varying(50)    |           |          |
 execution_context_s3_arn         | character varying(200)   |           |          |
 ol_version                       | integer                  |           |          |
 created_datetime                 | timestamp with time zone |           |          |
 updated_datetime                 | timestamp with time zone |           |          |
 deleted_millis                   | bigint                   |           |          |
 acquisition_channel_id           | bigint                   |           |          |
 correlation_id                   | character varying(36)    |           |          |
 result                           | character varying(50)    |           |          |
 state_machine_execution_arn      | character varying(200)   |           |          |
 created_by_id                    | bigint                   |           |          |
 updated_by_id                    | bigint                   |           |          |
 acquired_gcs_s3_object           | text                     |           |          |
 sqs_trigger_id                   | bigint                   |           |          |
 trigger_message                  | text                     |           |          |
 acquired_gcs_s3_object_uuid      | character varying(36)    |           |          |
 api_trigger_id                   | bigint                   |           |          |
 scheduled_trigger_id             | bigint                   |           |          |
 notification_trigger_workflow_id | bigint                   |           |          |
 acquired_object_name             | text                     |           |          |
 subscription_guid                | uuid                     |           |          |
 processing_class_code            | character varying(50)    |           |          |
 root_execution_guid              | uuid                     |           |          |
Indexes:
    "test_workflow_execution_active_pending_ordered_by_created_date_" btree 
        (workflow_id, status, deleted_millis, created_datetime) 
        WHERE (status::text = ANY (ARRAY['ACTION_NEEDED'::character varying, 
        'NOT_STARTED'::character varying, 'PAUSED'::character varying, 
        'PENDING'::character varying, 'RUNNING'::character varying]::text[])) 
        AND deleted_millis <= 0
    "test_workflow_execution_initial_ui_tabs" btree 
        (workflow_id, status, result, completed_datetime DESC NULLS LAST)

I created/populated this table with 22 million rows.  Afterwards I created 
the 2 indexes, so those indexes are packed tightly.

As is, the optimizer decides to use the partial index.  Below shows it did 33 
logical reads.  This index should be very good for this query: the leading 3 
columns of the index cover the 3 criteria in the WHERE clause, and the partial 
index only stores rows that match the status and deleted_millis filters.

explain (analyze, buffers)
select * from workflow_execution_test
where workflow_id = 1070
AND status in ('NOT_STARTED','PAUSED','PENDING','RUNNING')
and deleted_millis <= 0;

                                 QUERY PLAN
-----------------------------------------------------------------------------
 Index Scan using test_workflow_execution_active_pending_ordered_by_created_date_ 
   on workflow_execution_test  (cost=0.43..15824.82 rows=4335 width=1309) 
   (actual time=0.040..0.095 rows=56 loops=1)
   Index Cond: ((workflow_id = 1070) AND ((status)::text = ANY 
     ('{NOT_STARTED,PAUSED,PENDING,RUNNING}'::text[])))
   Buffers: shared hit=33
 Planning:
   Buffers: shared hit=2
 Planning Time: 0.321 ms
 Execution Time: 0.117 ms

If I hint the query to use the other index it does less work: 24 logical 
reads vs 33 using the partial index.

/*+ IndexScan(workflow_execution_test test_workflow_execution_initial_ui_tabs) 
*/
explain (analyze, buffers)
select * from workflow_execution_test
where workflow_id = 1070
AND status in ('NOT_STARTED','PAUSED','PENDING','RUNNING')
and deleted_millis <= 0;

Re: RHEL repo package crc mismatches

2023-05-03 Thread Devrim Gündüz


Hi,

On Tue, 2023-05-02 at 12:38 -0700, Evan Rempel wrote:
> At our site we use reposync to copy the postgresql repositories to a 
> local repository.
> 
> When doing this on April 28 (and since) I experience the following 
> package checksum matching errors.



Thanks for the report.

This definitely does not look like a security issue, but I need to run
further checks. I think it is an rsync issue.

I'll reply again as soon as I'm done.

Regards,

-- 
Devrim Gündüz
Open Source Solution Architect, PostgreSQL Major Contributor
Twitter: @DevrimGunduz , @DevrimGunduzTR




Re: PL/pgSQL doesn't support variables in queries?

2023-05-03 Thread J.A.
Oh wow folks! I totally misunderstood the docs then. (I also tried to Read
The Manual before I posted here, too :blush:)

I must admit, I did try doing something like you suggested Erik. I tried
things like:

DO $$
DECLARE
  v_application_id uuid;
BEGIN
  SELECT application_id INTO v_application_id FROM applications
WHERE code = 'pg-test-cc';

  SELECT * FROM application_foo WHERE application_id =
v_application_id;
  -- more SELECT * FROM child tables

END $$;

but that never worked, with warning:

ERROR: query has no destination for result data
HINT: If you want to discard the results of a SELECT, use PERFORM instead.
CONTEXT: PL/pgSQL function inline_code_block line 7 at SQL statement
SQL state: 42601

Which is why i (incorrectly?) thought this cannot be done?

So is there another trick to doing this instead? Is it maybe via the
v_record "record" variable instead?

-JA-

On Wed, 3 May 2023 at 22:39, Erik Wienhold  wrote:

> > On 03/05/2023 14:25 CEST J.A. 
> wrote:
> >
> > ms-sql person here migrating over to pgsql. One of the first things I
> > noticed
> > with pgsql (or more specifically, PL/pgSQL) is that it doesn't support
> > "variables" in a query?
> >
> > for example, here's some T-SQL:
> >
> > DECLARE @fkId INTEGER
> >
> > SELECT @fkId = fkId FROM SomeTable WHERE id = 1
> >
> > -- and then do something with that value..
> >
> > SELECT * FROM AnotherTable WHERE Id = @fkId
> > SELECT * FROM YetAnotherTable WHERE FKId = @fkId
> > -- etc..
>
> plpgsql does support variable declarations [0] but does not use any special
> notation like T-SQL.  An equivalent to your example would be:
>
> DO $$
> DECLARE
>   v_fkid int;
>   v_rec record;
> BEGIN
>   SELECT fkid INTO v_fkid FROM SomeTable WHERE id = 1;
>   SELECT * INTO v_rec FROM AnotherTable WHERE Id = v_fkid;
>   -- Do something with v_rec ...
> END $$;
>
> Prefixing variable names with v_ is just a convention to avoid ambiguous
> column
> references (assuming that column names are not prefixed with v_) [1].
>
> [0] https://www.postgresql.org/docs/current/plpgsql-declarations.html
> [1]
> https://www.postgresql.org/docs/current/plpgsql-implementation.html#PLPGSQL-VAR-SUBST
>
> --
> Erik
>


Re: PL/pgSQL doesn't support variables in queries?

2023-05-03 Thread Erik Wienhold
> On 03/05/2023 14:25 CEST J.A.  wrote:
>
> ms-sql person here migrating over to pgsql. One of the first things I noticed
> with pgsql (or more specifically, PL/pgSQL) is that it doesn't support
> "variables" in a query?
>
> for example, here's some T-SQL:
>
> DECLARE @fkId INTEGER
>
> SELECT @fkId = fkId FROM SomeTable WHERE id = 1
>
> -- and then do something with that value..
>
> SELECT * FROM AnotherTable WHERE Id = @fkId
> SELECT * FROM YetAnotherTable WHERE FKId = @fkId
> -- etc..

plpgsql does support variable declarations [0] but does not use any special
notation like T-SQL.  An equivalent to your example would be:

DO $$
DECLARE
  v_fkid int;
  v_rec record;
BEGIN
  SELECT fkid INTO v_fkid FROM SomeTable WHERE id = 1;
  SELECT * INTO v_rec FROM AnotherTable WHERE Id = v_fkid;
  -- Do something with v_rec ...
END $$;

Prefixing variable names with v_ is just a convention to avoid ambiguous column
references (assuming that column names are not prefixed with v_) [1].

[0] https://www.postgresql.org/docs/current/plpgsql-declarations.html
[1] 
https://www.postgresql.org/docs/current/plpgsql-implementation.html#PLPGSQL-VAR-SUBST

--
Erik




Re: libpq and multi-threading

2023-05-03 Thread Michael Loftis
That is not a thread. Linux man clone right at the start …

“clone, __clone2, clone3 - create a child process”

What you want is pthread_create (or similar)

There’s a bunch of not well documented dragons if you’re trying to treat a
child process as a thread. Use POSIX Threads, as pretty much anytime PG or
anything else Linux based says thread they’re talking about a POSIX Thread
environment.


On Wed, May 3, 2023 at 05:12 Michael J. Baars <
mjbaars1977.pgsql.hack...@gmail.com> wrote:

> Hi Peter,
>
> The shared common address space is controlled by the clone(2) CLONE_VM
> option. Indeed this results in an environment in which both the parent and
> the child can read / write each other's memory, but dynamic memory being
> allocated using malloc(3) from two different threads simultaneously will
> result in internal interference.
>
> Because libpq makes use of malloc to store results, you will come to find
> that the CLONE_VM option was not the option you were looking for.
>
> On Tue, 2 May 2023, 19:58 Peter J. Holzer,  wrote:
>
>> On 2023-05-02 17:43:06 +0200, Michael J. Baars wrote:
>> > I don't think it is, but let me shed some more light on it.
>>
>> One possibly quite important information you haven't told us yet is
>> which OS you use.
>>
>> Or how you create the threads, how you pass the results around, what
>> else you are possibly doing between getting the result and trying to use
>> it ...
>>
>> A short self-contained test case might shed some light on this.
>>
>>
>> > After playing around a little with threads and memory, I now know that
>> the
>> > PGresult is not read-only, it is read-once. The child can only read that
>> > portion of parent memory, that was written before the thread started.
>> Read-only
>> > is not strong enough.
>> >
>> > Let me correct my first mail. Making libpq use mmap is not good enough
>> either.
>> > Shared memory allocated by the child can not be accessed by the parent.
>>
>> Are you sure you are talking about threads and not processes? In the OSs
>> I am familiar with, threads (of the same process) share a common address
>> space. You don't need explicit shared memory and there is no such thing
>> as "parent memory" (there is thread-local storage, but that's more a
>> compiler/library construct).
>>
>> hp
>>
>> --
>>_  | Peter J. Holzer| Story must make more sense than reality.
>> |_|_) ||
>> | |   | h...@hjp.at |-- Charles Stross, "Creative writing
>> __/   | http://www.hjp.at/ |   challenge!"
>>
> --

"Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds."
-- Samuel Butler


Re: PL/pgSQL doesn't support variables in queries?

2023-05-03 Thread hubert depesz lubaczewski
On Wed, May 03, 2023 at 10:25:55PM +1000, J.A. wrote:
> Heya folks :)
> 
> ms-sql person here migrating over to pgsql. One of the first things I
> noticed with pgsql (or more specifically, PL/pgSQL) is that it doesn't
> support "variables" in a query?
> 
> for example, here's some T-SQL:
> 
> DECLARE @fkId INTEGER

Sure it does.

There is nothing relating to the "@" character, though.

You can easily find examples in docs:
https://www.postgresql.org/docs/current/plpgsql-structure.html

depesz




PL/pgSQL doesn't support variables in queries?

2023-05-03 Thread J.A.
Heya folks :)

ms-sql person here migrating over to pgsql. One of the first things I
noticed with pgsql (or more specifically, PL/pgSQL) is that it doesn't
support "variables" in a query?

for example, here's some T-SQL:

DECLARE @fkId INTEGER

SELECT @fkId = fkId FROM SomeTable WHERE id = 1

-- and then do something with that value..

SELECT * FROM AnotherTable WHERE Id = @fkId
SELECT * FROM YetAnotherTable WHERE FKId = @fkId
-- etc..

If I have this information correct, has this concept ever been
discussed before or considered for inclusion in PL/pgSQL?

Thank you kindly for any help/conversations on this topic.

Sincere apologies if this is not the correct forum/list to ask this
question.

Regards,

JA.


Re: libpq and multi-threading

2023-05-03 Thread Michael J. Baars
Hi Peter,

The shared common address space is controlled by the clone(2) CLONE_VM
option. Indeed this results in an environment in which both the parent and
the child can read / write each other's memory, but dynamic memory being
allocated using malloc(3) from two different threads simultaneously will
result in internal interference.

Because libpq makes use of malloc to store results, you will come to find
that the CLONE_VM option was not the option you were looking for.

On Tue, 2 May 2023, 19:58 Peter J. Holzer,  wrote:

> On 2023-05-02 17:43:06 +0200, Michael J. Baars wrote:
> > I don't think it is, but let me shed some more light on it.
>
> One possibly quite important information you haven't told us yet is
> which OS you use.
>
> Or how you create the threads, how you pass the results around, what
> else you are possibly doing between getting the result and trying to use
> it ...
>
> A short self-contained test case might shed some light on this.
>
>
> > After playing around a little with threads and memory, I now know that
> the
> > PGresult is not read-only, it is read-once. The child can only read that
> > portion of parent memory, that was written before the thread started.
> Read-only
> > is not strong enough.
> >
> > Let me correct my first mail. Making libpq use mmap is not good enough
> either.
> > Shared memory allocated by the child can not be accessed by the parent.
>
> Are you sure you are talking about threads and not processes? In the OSs
> I am familiar with, threads (of the same process) share a common address
> space. You don't need explicit shared memory and there is no such thing
> as "parent memory" (there is thread-local storage, but that's more a
> compiler/library construct).
>
> hp
>
> --
>_  | Peter J. Holzer| Story must make more sense than reality.
> |_|_) ||
> | |   | h...@hjp.at |-- Charles Stross, "Creative writing
> __/   | http://www.hjp.at/ |   challenge!"
>


Re: Can one user log in on multiple machines?

2023-05-03 Thread Ron

On 5/3/23 01:23, Wen Yi wrote:

Hi team,
can I use the same user's credentials to log in from multiple machines when 
connecting to Postgres?


Your question is a bit ambiguous, since:
1. "login in (to) multiple machines" typically refers logging in to the OS, 
and that has nothing to do with postgresql, and

2. a specific postgresql database can only be on one server.

Having said that... when doing streaming replication, role details 
(including authentication credentials) will be automatically replicated to 
the secondary just like every other bit of "the postgres".


--
Born in Arizona, moved to Babylonia.

Re: RHEL repo package crc mismatches

2023-05-03 Thread Brainmue
Hello Evan,

We have exactly the same problem and don't feel comfortable with it at the 
moment either.
We synchronise several versions, and this problem occurs with all of them.
Can anyone confirm that the packages themselves have not been changed 
inadvertently and that only the metadata is wrong?
Here are the mismatches we see.

For the pgdg11 RHEL 7 repository:

[MIRROR] ogr_fdw_11-1.1.0-1.rhel7.x86_64.rpm: Downloading successful, but 
checksum doesn't match.
Calculated: 
c61d0bb8cdc2c386b57d8968b509f9fe7bf7693b3f86af730128797d087c0caa(sha256) 
Expected:
a963ae2eb874da055db63953cf0eb0d62e24d16abd6e8d4dab615ba4fadaefd8(sha256) 
[MIRROR] ogr_fdw_11-llvmjit-1.1.0-1.rhel7.x86_64.rpm: Downloading successful, 
but checksum doesn't
match. Calculated: 
1be687c8721e7683f7efbfe51b9bd9532f7c7326d344e83e8928667cbc524cd3(sha256)
Expected: 
52aa7c905fd802bfea5cf7e89b80b7523b2a16309575cdbe9d68df4179ec1f6b(sha256) 
[MIRROR] pg_auto_failover_11-1.6.3-1.rhel7.x86_64.rpm: Downloading successful, 
but checksum doesn't
match. Calculated: 
abd1ede633fe8dc7721e1e09783e300c8d5a5e9b226257c67969e2bfbf7ce4f9(sha256)
Expected: 
0b29fc748639210c76af4b1870772780ba13a04698886e78514e7fb1baac9781(sha256) 

For the pgdg13 RHEL 7 repository:

[MIRROR] ogr_fdw_13-1.1.0-1.rhel7.x86_64.rpm: Downloading successful, but 
checksum doesn't match.
Calculated: 
d2ea23dc8b866c09eb620187e40147daae1a60f2a31370a88fd119b08a5f8816(sha256) 
Expected:
a39bc56ebc34de96321af69f99862819fe36516775cb155f599c839c098a0030(sha256) 
[MIRROR] ogr_fdw_13-llvmjit-1.1.0-1.rhel7.x86_64.rpm: Downloading successful, 
but checksum doesn't
match. Calculated: 
f2d981ba5ae5e54ac420f881c27eaba3af6b506638feed9f686273272083b479(sha256)
Expected: 
5e6baa1e8169da8251f4a3c47c8db0ab4344977c0ed4a8f1042d353a50e4e304(sha256) 
[MIRROR] pg_auto_failover_13-1.6.3-1.rhel7.x86_64.rpm: Downloading successful, 
but checksum doesn't
match. Calculated: 
01ce463c8487d52986e347025266167135f0a866c37590c784e7e3e5d8e43817(sha256)
Expected: 
e35c32a27f5c97596d74fca03e416cb743bf188fdc0dfaf736cc68a20801a5c9(sha256) 

For the pgdg14 RHEL 7 repository:

[MIRROR] pg_auto_failover_14-1.6.3-1.rhel7.x86_64.rpm: Downloading successful, 
but checksum doesn't
match. Calculated: 
7b72deadb029a8752717c832cde2e23d87e341037765086d88ac6d96816ebe89(sha256)
Expected: 
55de94cebb1967c4f1edb1a0be14246173c05168261a76d141e819f607e83ee3(sha256)

Thank you for checking.

Greetings
Michael

3 May 2023 09:00, "Evan Rempel"  wrote:

> At our site we use reposync to copy the postgresql repositories to a local 
> repository.
> 
> When doing this on April 28 (and since) I experience the following package 
> checksum matching errors.
> 
> For the pgdg13 RHEL 8 repository
> 
> [MIRROR] pg_auto_failover_13-1.6.3-1.rhel8.x86_64.rpm: Downloading 
> successful, but checksum doesn't
> match. Calculated: 
> 5196edcfe1d6af6c0e90ad9a25667613bdfa0731a84fa9a1dbaa7080b4a3caac(sha256)
> Expected: 
> 8d4527c96e9c8a3ff86d75aa85c166899ee895e9522c6720223f0f93b658f8d6(sha256)
> 
> [MIRROR] e-maj_13-4.0.1-1.rhel8.x86_64.rpm: Downloading successful, but 
> checksum doesn't match.
> Calculated: 
> f7576cb1cd22303cb3dbb2a86911ad3f9e57afa8472a31f1a6a1f176f708fa1d(sha256) 
> Expected:
> 8c56cacb99771c4f06be2551988e553a70ea5e5459202e12e0e92fdeb7371621(sha256)
> 
> For the pgdg12 RHEL 8 repository
> 
> [MIRROR] pg_auto_failover_12-llvmjit-1.6.3-1.rhel8.x86_64.rpm: Downloading 
> successful, but checksum
> doesn't match. Calculated: 
> 9bfdaccc3a151fd847bbb5e622a9384648cf963faacd90dc9b31cd433e23a3c0(sha256)
> Expected: 
> aa5e3dc99cabfe22839ed0b9501a0099af139bf8551344a3b198ac048218ceee(sha256)
> 
> I think it is just metadata information, but it sounds scary.
> 
> Can anyone comment?
> 
> --
> Evan




Can one user log in on multiple machines?

2023-05-03 Thread Wen Yi
Hi team,
can I use the same user's credentials to log in from multiple machines when 
connecting to Postgres?
Thanks in advance!

Yours,
Wen Yi.