backup_manifest rename to backup_manifest.old after successful postgres start up

2023-07-17 Thread Gert Cuykens
Hi, I suggest automatically renaming backup_manifest to backup_manifest.old,
like backup_label, when postgres starts up successfully, because it is of no
use to pg_verifybackup anymore once postgres has been started.


Re: Is anyone using db_user_namespace?

2023-07-17 Thread Nathan Bossart
For future reference, this parameter was removed in v17 via commit 884eee5.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




Re: Query take a long time and use no index

2023-07-17 Thread basti

Thanks a lot Tomas, I will try it.

I have found out that there is an 'aggregation' function in the frontend.
But it is MySQL-specific and I have no idea how to transform it to Postgres.

It looks like:
'REPLACE INTO aggregate (channel_id, type, timestamp, value, count)
 SELECT channel_id, ? AS type, MAX(agg.timestamp) AS timestamp,
        COALESCE( SUM(agg.val_by_time) / (MAX(agg.timestamp) - MIN(agg.prev_timestamp)),
                  AVG(agg.value)) AS value,
        COUNT(agg.value) AS count
 FROM ( SELECT channel_id, timestamp, value,
               value * (timestamp - @prev_timestamp) AS val_by_time,
               COALESCE(@prev_timestamp, 0) AS prev_timestamp,
               @prev_timestamp := timestamp
        FROM data
        CROSS JOIN (SELECT @prev_timestamp := NULL) AS vars
        WHERE channel_id = ?
          AND timestamp < UNIX_TIMESTAMP(DATE_FORMAT(NOW(), "%Y-%m-%d")) * 1000 ) AS agg
 GROUP BY channel_id,
          DATE_TRUNC('day', TIMESTAMP 'epoch' + timestamp * INTERVAL '1 millisecond')'
with params [3, 5]:







  SQLSTATE[42601]: Syntax error: 7 ERROR:  syntax error at or near "REPLACE"
  LINE 1: REPLACE INTO aggregate (channel_id, type, timestamp, value, ...
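
For what it's worth, a rough (untested) sketch of how that statement might be
expressed in Postgres: REPLACE INTO becomes INSERT ... ON CONFLICT ... DO
UPDATE (which assumes a suitable unique constraint on the aggregate table),
and the @prev_timestamp session variable becomes a lag() window function; the
two ? parameters are written out as the literals from params [3, 5]:

INSERT INTO aggregate (channel_id, type, timestamp, value, count)
SELECT channel_id,
       3 AS type,                    -- first ? parameter
       MAX(agg.timestamp) AS timestamp,
       COALESCE(SUM(agg.val_by_time)
                / NULLIF(MAX(agg.timestamp) - MIN(agg.prev_timestamp), 0),
                AVG(agg.value)) AS value,
       COUNT(agg.value) AS count
FROM (SELECT channel_id, timestamp, value,
             value * (timestamp - lag(timestamp) OVER w) AS val_by_time,
             COALESCE(lag(timestamp) OVER w, 0) AS prev_timestamp
      FROM data
      WHERE channel_id = 5           -- second ? parameter
        AND timestamp < extract(epoch FROM date_trunc('day', now())) * 1000
      WINDOW w AS (ORDER BY timestamp)) AS agg
GROUP BY channel_id,
         DATE_TRUNC('day', TIMESTAMP 'epoch' + timestamp * INTERVAL '1 millisecond')
ON CONFLICT (channel_id, type, timestamp) DO UPDATE
   SET value = EXCLUDED.value, count = EXCLUDED.count;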



Am 17.07.23 um 13:21 schrieb Tomas Vondra:

On 7/17/23 13:20, Tomas Vondra wrote:

...

It's always going to be slow with the COUNT(DISTINCT), I'm afraid.

Not sure how much you can modify the query / database, and how accurate
results you need. If you're OK with estimates, you can try postgres-hll
extension [1] which estimates count(distinct). For exact results, I
wrote count_distinct extension [2] that uses a hashtable. Might be worth a
try, I guess.



Damn, I forgot to add the links:

[1] https://github.com/citusdata/postgresql-hll

[2] https://github.com/tvondra/count_distinct

regards






Re: Installation Issue

2023-07-17 Thread Carl Erik Eriksson
I too had problems with 7.4. They went away when I downgraded to 7.2

C E Eriksson
Sent from iPhone


> On 17 Jul 2023, at 18:31, Erik Wienhold  wrote:
> 
> 
>> 
>> On 17/07/2023 16:30 CEST Adrian Klaver  wrote:
>> 
>> 
>> The other screenshot is a pgAdmin issue. I have seen quite a few
>> questions about this on Stack Overflow recently. Seems to be related to
>> version 7.4. My suggestion would be to file an issue here:
>> 
>> https://github.com/pgadmin-org/pgadmin4/issues
> 
> Let me guess (without knowing the mentioned screenshot), something about a 
> lost
> connection to pgAdmin server?  That's a known issue in 7.4 which also bugged
> some of my colleagues.  They downgraded to 7.3.
> 
> https://github.com/pgadmin-org/pgadmin4/issues/6524
> 
> --
> Erik
> 
> 




Re: Installation Issue

2023-07-17 Thread Erik Wienhold
> On 17/07/2023 16:30 CEST Adrian Klaver  wrote:
>
>
> The other screenshot is a pgAdmin issue. I have seen quite a few
> questions about this on Stack Overflow recently. Seems to be related to
> version 7.4. My suggestion would be to file an issue here:
>
> https://github.com/pgadmin-org/pgadmin4/issues

Let me guess (without knowing the mentioned screenshot), something about a lost
connection to pgAdmin server?  That's a known issue in 7.4 which also bugged
some of my colleagues.  They downgraded to 7.3.

https://github.com/pgadmin-org/pgadmin4/issues/6524

--
Erik




Re: How to grant read only functions execute permission to read only user

2023-07-17 Thread David G. Johnston
On Mon, Jul 17, 2023, 08:44 Tomas Vondra 
wrote:

>
>
> On 7/17/23 16:11, Yi Sun wrote:
> > Hello guys,
> >
> > Our read only user is okay to be granted read only permission of tables.
> >
> > How to grant read only functions execute permission to read only user,
> > is there a simple way to do it please? If not, how to get the list of
> > read only functions please? Then can grant one by one based on the list,
> > thanks
>
> What is read-only function? I don't think Postgres has anything like
> that. Functions inherit the privileges of the user that executes them by
> default. So if the user is read-only (i.e. has just SELECT privilege),
> then the function can't do any writes either.
>
>
>
By definition any function marked stable or immutable is read-only though
the system doesn't enforce that user-specified label.
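
One way to enumerate those functions and generate the grants could be
something like this (a sketch only; readonly_user and the public schema are
placeholders):

   SELECT format('GRANT EXECUTE ON FUNCTION %s TO readonly_user;',
                 p.oid::regprocedure)
   FROM pg_proc p
   JOIN pg_namespace n ON n.oid = p.pronamespace
   WHERE n.nspname = 'public'
     AND p.provolatile IN ('s', 'i')   -- 's' = STABLE, 'i' = IMMUTABLE
     AND p.prokind = 'f';              -- plain functions only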

David J.


Re: How to grant read only functions execute permission to read only user

2023-07-17 Thread Tomas Vondra



On 7/17/23 16:11, Yi Sun wrote:
> Hello guys,
> 
> Our read only user is okay to be granted read only permission of tables.
> 
> How to grant read only functions execute permission to read only user,
> is there a simple way to do it please? If not, how to get the list of
> read only functions please? Then can grant one by one based on the list,
> thanks

What is a read-only function? I don't think Postgres has anything like
that. Functions inherit the privileges of the user that executes them by
default. So if the user is read-only (i.e. has just SELECT privilege),
then the function can't do any writes either.
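
To illustrate (a sketch only; readonly_user and the counters table are made-up
names):

   -- the default is SECURITY INVOKER, i.e. the function runs with the
   -- caller's own privileges
   CREATE FUNCTION bump_counter() RETURNS void LANGUAGE sql
   AS $$ UPDATE counters SET n = n + 1 $$;

   GRANT EXECUTE ON FUNCTION bump_counter() TO readonly_user;

   -- connected as readonly_user (SELECT privileges only):
   SELECT bump_counter();
   -- ERROR:  permission denied for table counters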


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Installation Issue

2023-07-17 Thread Adrian Klaver

On 7/16/23 23:47, Baskar Muthu wrote:

Reply to list also.
Ccing list.

I tried many times but they showed the same things and it ran for more 
than 3 hours. how to resolve this issue. If you have any idea please let 
me know.Thanking you..


OS version - Windows 11

Postgre sql version - 14.3

I downloaded it from the postgresql.org  website.

Yes, but it's not connecting with the server.

Version 14.3 parameter i used for connecting with psql server.


The Stackbuilder screenshot is showing a failed connection to a download 
server at EDB, not to Postgres server.


The other screenshot is a pgAdmin issue. I have seen quite a few 
questions about this on Stack Overflow recently. Seems to be related to 
version 7.4. My suggestion would be to file an issue here:


https://github.com/pgadmin-org/pgadmin4/issues

In meantime try using psql to connect to the server.



No.

It's not showing an error but it's not connected with the server. That's 
only my issue. Please resolve this issue.


On Mon, Jul 17, 2023 at 12:10 PM Baskar Muthu > wrote:


Screenshot 2023-07-17 120353.png, Screenshot 2023-07-17 120441.png

On Sun, Jul 16, 2023 at 9:05 PM Adrian Klaver <adrian.kla...@aklaver.com> wrote:

On 7/15/23 07:30, Baskar Muthu wrote:
 > Hi sir/mam
 >
 > I installed postre - sql but it is not connected with the
server. I
 > tried many times but it's not working and it shows a
'connecting server
 > error'.

This needs more information:

1) OS and version.

2) Postgresql version.

3) How was Postgresql installed?

4) Is the Postgresql server running?

5) What client are you connecting with and the connection
parameters?

6) Is the client remote to the server?

7) The complete error message.


 >
 > Please resolve this issue and alternate ideas please let me know.
 >
 > Thanks & Regards
 > Baskar Muthu

-- 
Adrian Klaver

adrian.kla...@aklaver.com 



--
Adrian Klaver
adrian.kla...@aklaver.com





How to grant read only functions execute permission to read only user

2023-07-17 Thread Yi Sun
Hello guys,

Our read only user is okay to be granted read only permission of tables.

How to grant read only functions execute permission to read only user, is
there a simple way to do it please? If not, how to get the list of read
only functions please? Then can grant one by one based on the list, thanks

Best regards
Dennis Sun


Re: Query take a long time and use no index

2023-07-17 Thread Ron

On 7/17/23 04:13, basti wrote:
[snip]


The Indexes:

volkszaehler=# SELECT tablename,indexname,indexdef FROM pg_indexes WHERE 
tablename LIKE 'data%' ORDER BY tablename,indexname;


 tablename |           indexname            |                                            indexdef
-----------+--------------------------------+---------------------------------------------------------------------------------------------------
 data      | idx_16391_idx_adf3f36372f5a1aa | CREATE INDEX idx_16391_idx_adf3f36372f5a1aa ON volkszaehler.data USING btree (channel_id)
 data      | idx_16391_primary              | CREATE UNIQUE INDEX idx_16391_primary ON volkszaehler.data USING btree (channel_id, "timestamp")
 data      | idx_data_timestamp             | CREATE INDEX idx_data_timestamp ON volkszaehler.data USING btree ("timestamp")
(3 rows)


The Query:

volkszaehler=# explain analyze SELECT COUNT(DISTINCT DATE_TRUNC('day', 
TIMESTAMP 'epoch' + timestamp * INTERVAL '1 millisecond')) FROM data WHERE 
channel_id = 5 AND timestamp >= 0;


 QUERY PLAN
 

 Aggregate  (cost=590793.68..590793.69 rows=1 width=8) (actual 
time=15449.536..15449.539 rows=1 loops=1)
   ->  Seq Scan on data  (cost=0.00..382037.82 rows=16700469 width=8) 
(actual time=247.092..3833.495 rows=16669429 loops=1)

 Filter: (("timestamp" >= 0) AND (channel_id = 5))
 Rows Removed by Filter: 1215163
 Planning Time: 0.374 ms
 JIT:
   Functions: 5
   Options: Inlining true, Optimization true, Expressions true, Deforming 
true
   Timing: Generation 1.201 ms, Inlining 158.833 ms, Optimization 59.816 
ms, Emission 28.472 ms, Total 248.322 ms

 Execution Time: 15451.093 ms
(10 rows)

Round about 16 sec is too long, the frontend run in timeout or other erros.


What fraction of the rows in the table meets the "WHERE channel_id = 5 AND
timestamp >= 0" qualification?

If it's high (and "high" can be a seemingly low value), then PostgreSQL will
decide it's cheaper to sequentially scan the table.
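
One quick way to check that fraction directly (just a sketch against the data
table shown above):

   SELECT (count(*) FILTER (WHERE channel_id = 5 AND "timestamp" >= 0))::numeric
          / count(*) AS matching_fraction
   FROM data;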




A simple count look like

volkszaehler=# explain analyze SELECT count (channel_id) FROM data;
QUERY PLAN
- 

 Finalize Aggregate  (cost=208089.76..208089.77 rows=1 width=8) (actual 
time=3514.293..3523.842 rows=1 loops=1)
   ->  Gather  (cost=208089.55..208089.76 rows=2 width=8) (actual 
time=3514.247..3523.800 rows=3 loops=1)

 Workers Planned: 2
 Workers Launched: 2
 ->  Partial Aggregate  (cost=207089.55..207089.56 rows=1 width=8) 
(actual time=3427.139..3427.141 rows=1 loops=3)
   ->  Parallel Seq Scan on data (cost=0.00..188457.44 
rows=7452844 width=8) (actual time=0.137..2121.695 rows=5962263 loops=3)

 Planning Time: 0.247 ms
 JIT:
   Functions: 11
   Options: Inlining false, Optimization false, Expressions true, 
Deforming true
   Timing: Generation 2.665 ms, Inlining 0.000 ms, Optimization 1.612 ms, 
Emission 48.580 ms, Total 52.857 ms

 Execution Time: 3525.656 ms
(12 rows)

Why are the indexes not used ?
Is it a Problem of the DATE_TRUNC ?
How can I optimize?

Best regards









--
Born in Arizona, moved to Babylonia.




Re: Query take a long time and use no index

2023-07-17 Thread Tomas Vondra
On 7/17/23 13:20, Tomas Vondra wrote:
> ...
> 
> It's always going to be slow with the COUNT(DISTINCT), I'm afraid.
> 
> Not sure how much you can modify the query / database, and how accurate
> results you need. If you're OK with estimates, you can try postgres-hll
> extension [1] which estimates count(distinct). For exact results, I
> wrote count_distinct extension [2] that uses a hashtable. Might be worth a
> try, I guess.
> 

Damn, I forgot to add the links:

[1] https://github.com/citusdata/postgresql-hll

[2] https://github.com/tvondra/count_distinct
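
Alternatively (not either of the extensions above, just a plain-SQL sketch),
rewriting the COUNT(DISTINCT ...) as a GROUP BY in a subquery sometimes helps,
because the planner can then use hash or parallel aggregation for the grouping
instead of the sorted DISTINCT aggregate:

SELECT count(*)
FROM (SELECT DATE_TRUNC('day',
             TIMESTAMP 'epoch' + timestamp * INTERVAL '1 millisecond') AS day
      FROM data
      WHERE channel_id = 5 AND timestamp >= 0
      GROUP BY 1) AS days;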

regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Query take a long time and use no index

2023-07-17 Thread Tomas Vondra



On 7/17/23 11:13, basti wrote:
> Hello,
> 
> I use volkszaehler.org (a DIY Smartmeter) on postgres.
> 
> The hardware is a Dual-core 2,4 GHz amd64-CPU and 6 GB RAM.
> The databse is approx. 1.5 GB on SSD
> 
> my local config looks like:
> 
> root@vz:~# cat /etc/postgresql/15/main/conf.d/local.conf
> # DB Version: 15
> # OS Type: linux
> # Total Memory (RAM): 6 GB
> # CPUs num: 2
> # Connections num: 20
> # Data Storage: sdd
> 
> max_connections = 20
> shared_buffers = 2GB
> effective_cache_size = 4608MB
> maintenance_work_mem = 384MB
> checkpoint_completion_target = 0.9
> wal_buffers = 16MB
> default_statistics_target = 100
> random_page_cost = 0.9
> effective_io_concurrency = 200
> min_wal_size = 1GB
> max_wal_size = 4GB
> work_mem = 512MB
> temp_buffers = 512MB
> 
> # log
> log_directory = '/var/log/postgresql/'  # directory where log files are
> written,
>     # can be absolute or relative to
> PGDATA
> log_filename = 'postgresql-15-main.log'
> #log_statement = 'all'
> #logging_collector = on
> log_temp_files = 0
> session_preload_libraries = 'auto_explain'
> auto_explain.log_min_duration = '3s'
> 
> 
> The Values:
> 
> volkszaehler=# SELECT *  FROM data limit 10;
>    timestamp   | channel_id | value
> ---++---
>  1666008690865 |  5 |   232
>  1666008691867 |  5 |   236
>  1666008692865 |  5 |   237
>  1666008693867 |  5 |   232
>  1666008694865 |  5 |   227
>  1666008695867 |  5 |   229
>  1666008698866 |  5 |   228
>  1666008699868 |  5 |   229
>  1666008700864 |  5 |   228
>  1666008701868 |  5 |   230
> 
> 
> volkszaehler=# SELECT COUNT(value) FROM data;
>   count
> --
>  17884247
> (1 row)
> 
> 
> The Indexes:
> 
> volkszaehler=# SELECT tablename,indexname,indexdef FROM pg_indexes WHERE
> tablename LIKE 'data%' ORDER BY tablename,indexname;
> 
>  tablename |   indexname    |   indexdef
> ---++--
>  data  | idx_16391_idx_adf3f36372f5a1aa | CREATE INDEX
> idx_16391_idx_adf3f36372f5a1aa ON volkszaehler.data USING btree
> (channel_id)
>  data  | idx_16391_primary  | CREATE UNIQUE INDEX
> idx_16391_primary ON volkszaehler.data USING btree (channel_id,
> "timestamp")
>  data  | idx_data_timestamp | CREATE INDEX
> idx_data_timestamp ON volkszaehler.data USING btree ("timestamp")
> (3 rows)
> 
> 
> The Query:
> 
> volkszaehler=# explain analyze SELECT COUNT(DISTINCT DATE_TRUNC('day',
> TIMESTAMP 'epoch' + timestamp * INTERVAL '1 millisecond')) FROM data
> WHERE channel_id = 5 AND timestamp >= 0;
> 
>  QUERY PLAN
> 
>  Aggregate  (cost=590793.68..590793.69 rows=1 width=8) (actual
> time=15449.536..15449.539 rows=1 loops=1)
>    ->  Seq Scan on data  (cost=0.00..382037.82 rows=16700469 width=8)
> (actual time=247.092..3833.495 rows=16669429 loops=1)
>  Filter: (("timestamp" >= 0) AND (channel_id = 5))
>  Rows Removed by Filter: 1215163
>  Planning Time: 0.374 ms
>  JIT:
>    Functions: 5
>    Options: Inlining true, Optimization true, Expressions true,
> Deforming true
>    Timing: Generation 1.201 ms, Inlining 158.833 ms, Optimization 59.816
> ms, Emission 28.472 ms, Total 248.322 ms
>  Execution Time: 15451.093 ms
> (10 rows)
> 
> Round about 16 sec is too long, the frontend run in timeout or other erros.
> 
> A simple count look like
> 
> volkszaehler=# explain analyze SELECT count (channel_id) FROM data;
>  QUERY PLAN
> -
>  Finalize Aggregate  (cost=208089.76..208089.77 rows=1 width=8) (actual
> time=3514.293..3523.842 rows=1 loops=1)
>    ->  Gather  (cost=208089.55..208089.76 rows=2 width=8) (actual
> time=3514.247..3523.800 rows=3 loops=1)
>  Workers Planned: 2
>  Workers Launched: 2
>  ->  Partial Aggregate  (cost=207089.55..207089.56 rows=1
> width=8) (actual time=3427.139..3427.141 rows=1 loops=3)
>    ->  Parallel Seq Scan on data  (cost=0.00..188457.44
> rows=7452844 width=8) (actual time=0.137..2121.695 rows=5962263 loops=3)
>  Planning Time: 0.247 ms
>  JIT:
>    Functions: 11
>    Options: Inlining false, Optimization false, Expressions true,
> Deforming true
>    Timing: Generation 2.665 ms, Inlining 0.000 ms, Optimization 1.612
> ms, Emission 48.580 ms, Total 52.857 ms
>  Execution Time: 3525.656 ms
> (12 rows)
> 
> Why are the indexes not used ?

It's a common misconception that indexes necessarily improve 

Re: [Beginner Question]What is query jumbling?

2023-07-17 Thread Tomas Vondra



On 7/17/23 05:23, Wen Yi wrote:
> Hi community,
> I am reading the 'Generate code for query jumbling through
> gen_node_support.pl'
> (https://www.postgresql.org/message-id/E1pMk51-000puf-55%40gemulon.postgresql.org)
>  to understand the principle of the postgres's node, but I really don't know 
> the meaning of the word 'query jumbling'.
> 
> Can someone provide me some advice?
> Thanks in advance!
> 

Stuff like this is usually explained somewhere in the code base, so "git
grep" is your friend. In this case it's probably good to read the
comment in src/backend/nodes/queryjumblefuncs.c which says:

 * Normalization is implemented by fingerprinting queries, selectively
 * serializing those fields of each query tree's nodes that are judged
 * to be essential to the query. This is referred to as a query jumble.

So "jumbling" is a process to generate a "normalized" query.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Query take a long time and use no index

2023-07-17 Thread basti

Hello,

I use volkszaehler.org (a DIY Smartmeter) on postgres.

The hardware is a Dual-core 2,4 GHz amd64-CPU and 6 GB RAM.
The database is approx. 1.5 GB on an SSD

my local config looks like:

root@vz:~# cat /etc/postgresql/15/main/conf.d/local.conf
# DB Version: 15
# OS Type: linux
# Total Memory (RAM): 6 GB
# CPUs num: 2
# Connections num: 20
# Data Storage: sdd

max_connections = 20
shared_buffers = 2GB
effective_cache_size = 4608MB
maintenance_work_mem = 384MB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 0.9
effective_io_concurrency = 200
min_wal_size = 1GB
max_wal_size = 4GB
work_mem = 512MB
temp_buffers = 512MB

# log
log_directory = '/var/log/postgresql/'  # directory where log files are written,
                                        # can be absolute or relative to PGDATA

log_filename = 'postgresql-15-main.log'
#log_statement = 'all'
#logging_collector = on
log_temp_files = 0
session_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = '3s'


The Values:

volkszaehler=# SELECT *  FROM data limit 10;
   timestamp   | channel_id | value
---++---
 1666008690865 |  5 |   232
 1666008691867 |  5 |   236
 1666008692865 |  5 |   237
 1666008693867 |  5 |   232
 1666008694865 |  5 |   227
 1666008695867 |  5 |   229
 1666008698866 |  5 |   228
 1666008699868 |  5 |   229
 1666008700864 |  5 |   228
 1666008701868 |  5 |   230


volkszaehler=# SELECT COUNT(value) FROM data;
  count
--
 17884247
(1 row)


The Indexes:

volkszaehler=# SELECT tablename,indexname,indexdef FROM pg_indexes WHERE 
tablename LIKE 'data%' ORDER BY tablename,indexname;


 tablename |           indexname            |                                            indexdef
-----------+--------------------------------+---------------------------------------------------------------------------------------------------
 data      | idx_16391_idx_adf3f36372f5a1aa | CREATE INDEX idx_16391_idx_adf3f36372f5a1aa ON volkszaehler.data USING btree (channel_id)
 data      | idx_16391_primary              | CREATE UNIQUE INDEX idx_16391_primary ON volkszaehler.data USING btree (channel_id, "timestamp")
 data      | idx_data_timestamp             | CREATE INDEX idx_data_timestamp ON volkszaehler.data USING btree ("timestamp")
(3 rows)


The Query:

volkszaehler=# explain analyze SELECT COUNT(DISTINCT DATE_TRUNC('day', 
TIMESTAMP 'epoch' + timestamp * INTERVAL '1 millisecond')) FROM data 
WHERE channel_id = 5 AND timestamp >= 0;


                                                          QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=590793.68..590793.69 rows=1 width=8) (actual time=15449.536..15449.539 rows=1 loops=1)
   ->  Seq Scan on data  (cost=0.00..382037.82 rows=16700469 width=8) (actual time=247.092..3833.495 rows=16669429 loops=1)
         Filter: (("timestamp" >= 0) AND (channel_id = 5))
         Rows Removed by Filter: 1215163
 Planning Time: 0.374 ms
 JIT:
   Functions: 5
   Options: Inlining true, Optimization true, Expressions true, Deforming true
   Timing: Generation 1.201 ms, Inlining 158.833 ms, Optimization 59.816 ms, Emission 28.472 ms, Total 248.322 ms
 Execution Time: 15451.093 ms
(10 rows)

Roughly 16 seconds is too long; the frontend runs into a timeout or other errors.

A simple count looks like:

volkszaehler=# explain analyze SELECT count (channel_id) FROM data;
                                                                QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=208089.76..208089.77 rows=1 width=8) (actual time=3514.293..3523.842 rows=1 loops=1)
   ->  Gather  (cost=208089.55..208089.76 rows=2 width=8) (actual time=3514.247..3523.800 rows=3 loops=1)
         Workers Planned: 2
         Workers Launched: 2
         ->  Partial Aggregate  (cost=207089.55..207089.56 rows=1 width=8) (actual time=3427.139..3427.141 rows=1 loops=3)
               ->  Parallel Seq Scan on data  (cost=0.00..188457.44 rows=7452844 width=8) (actual time=0.137..2121.695 rows=5962263 loops=3)
 Planning Time: 0.247 ms
 JIT:
   Functions: 11
   Options: Inlining false, Optimization false, Expressions true, Deforming true
   Timing: Generation 2.665 ms, Inlining 0.000 ms, Optimization 1.612 ms, Emission 48.580 ms, Total 52.857 ms
 Execution Time: 3525.656 ms
(12 rows)

Why are the indexes not used?
Is it a problem of the DATE_TRUNC?
How can I optimize?

Best regards









Re: Reset Postgresql users password

2023-07-17 Thread Chris Travers
You can use a DO block or write a function to do this.

It takes some practice (and you need to use EXECUTE FORMAT()).
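
For the original question (same password for every login role at once), a DO
block along these lines should work; this is only a sketch, to be run as a
superuser, and 'changeme' is a placeholder:

DO $$
DECLARE
   r record;
BEGIN
   FOR r IN SELECT rolname FROM pg_roles WHERE rolcanlogin LOOP
      EXECUTE format('ALTER ROLE %I WITH PASSWORD %L', r.rolname, 'changeme');
   END LOOP;
END;
$$;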

If users need to be able to change their own users, something like this
works:

CREATE FUNCTION change_my_password(in_password text)
returns void language plpgsql as
$$
begin
   -- session_user is the role calling the function, so each user can only
   -- change their own password
   EXECUTE FORMAT($F$ALTER USER %I WITH PASSWORD %L$F$, session_user,
                  in_password);
end;
$$ SECURITY DEFINER;
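
Possible usage (again just a sketch; some_readonly_role is a placeholder):

REVOKE ALL ON FUNCTION change_my_password(text) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION change_my_password(text) TO some_readonly_role;
-- then, connected as that role:
SELECT change_my_password('new-secret');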

On Mon, Jul 17, 2023 at 9:28 AM Ron  wrote:

> On 7/12/23 14:28, Johnathan Tiamoh wrote:
> > Hello,
> >
> > I wish to find out if there is a way to reset all users in Postgresql
> > password to the same password at once.
>
> To the same value??
>
> --
> Born in Arizona, moved to Babylonia.
>
>
>

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Reset Postgresql users password

2023-07-17 Thread Wen Yi
I think maybe you can check pg_hba.conf and change the method to 'trust'.
(Allow the connection unconditionally. This method allows anyone that can
connect to the PostgreSQL database server to log in as any PostgreSQL user they
wish, without the need for a password or any other authentication. See
Section 21.4 for details.)
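
For example, a temporary pg_hba.conf line for local connections might look like
this (a sketch; remember to revert it and reload the configuration afterwards):

# TYPE  DATABASE  USER  ADDRESS  METHOD
local   all       all            trust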



And then log in as a superuser, and after that do the things you wish.



https://www.postgresql.org/docs/current/auth-pg-hba-conf.html





--Original--
From: "Ron"


[Beginner Question]What is query jumbling?

2023-07-17 Thread Wen Yi
Hi community,
I am reading the 'Generate code for query jumbling through gen_node_support.pl' 
(https://www.postgresql.org/message-id/E1pMk51-000puf-55%40gemulon.postgresql.org)
 to understand the principle of the postgres's node, but I really don't know 
the meaning of the word 'query jumbling'.


Can someone provide me some advice?
Thanks in advance!


Yours,
Wen Yi

Re: create a temp table in SPI

2023-07-17 Thread 黄宁
Thanks for your explanation.

Laurenz Albe  wrote on Thu, Jul 13, 2023 at 17:48:

> On Thu, 2023-07-13 at 13:12 +0800, 黄宁 wrote:
> > I want to create some temporary tables in SPI, but after I created the
> table and inserted the data, I can’t query any data, why?
> >
> > the postgres version:
> > PostgreSQL 13.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 7.3.0,
> 64-bit
> > code:
> >
> >
> >  int ret = 0;
> > SPI_connect();
> > ret = SPI_execute("CREATE GLOBAL TEMP TABLE temp_table (id int,
> value text)", false, 0);
> >
> > ret = SPI_execute("INSERT INTO temp_table VALUES (1, 'a'), (2,
> 'b')", false, 0);
> >
> > ret = SPI_execute("SELECT * FROM temp_table", true, 0);
> >
> > if(SPI_processed > 0)
>  ...
>
> That's because you set "read_only" to "true" when you executed the query,
> so that the command counter is not incremented, and the query cannot see
> the results from the previous statement.
>
> The documentation is quite clear here:
>
>   It is generally unwise to mix read-only and read-write commands within
>   a single function using SPI; that could result in very confusing
> behavior,
>   since the read-only queries would not see the results of any database
>   updates done by the read-write queries.
>
> Yours,
> Laurenz Albe
>