Re: inner join elimination

2018-06-07 Thread pinker
As far as I know, PostgreSQL only does OUTER JOIN elimination; with inner joins it doesn't work. -- Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
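A minimal sketch of the behaviour described (table names are made up): a LEFT JOIN to a table whose columns are never referenced can be removed when the join is on a unique key, while the equivalent inner join stays in the plan.

```sql
CREATE TABLE a (id int PRIMARY KEY);
CREATE TABLE b (id int PRIMARY KEY);

-- b contributes nothing and is joined on its primary key,
-- so the planner can drop it from the plan:
EXPLAIN SELECT a.* FROM a LEFT JOIN b USING (id);

-- The same query with an inner join still scans b:
EXPLAIN SELECT a.* FROM a JOIN b USING (id);
```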

Plans cache eviction from process local Memory

2018-06-07 Thread pinker
Hi! I would like to know how Postgres behaves with a big amount of cached plans from prepared statements on one connection. Let's say there is application-level connection pooling and one connection can last for many weeks. Many plans are cached from many prepared statements; is there any …
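For context, a sketch of the mechanism (the `users` table is made up): each PREPARE caches a plan in backend-local memory, and a session or pooler can release that memory explicitly.

```sql
-- Each PREPARE in a session caches a plan in backend-local memory:
PREPARE fetch_user (int) AS SELECT * FROM users WHERE id = $1;
EXECUTE fetch_user(42);

-- Inspect what this session has prepared:
SELECT name, statement FROM pg_prepared_statements;

-- A pooler can reclaim the memory between clients:
DEALLOCATE ALL;   -- drop all prepared statements
DISCARD PLANS;    -- drop only the cached plans; statements are replanned
```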

Re: except all & WITH - syntax error?

2018-07-03 Thread pinker
Thank you for the answer; I had no idea about the "syntactic precedence" thing.
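For reference, the precedence issue can be worked around by parenthesizing the branch of the set operation that carries the WITH clause (a sketch with made-up table names):

```sql
-- Attaching WITH directly to the right-hand branch of EXCEPT ALL is a
-- syntax error; wrapping that branch in parentheses is accepted:
SELECT x FROM a
EXCEPT ALL
(WITH b AS (SELECT 1 AS x)
 SELECT x FROM b);
```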

Re: How Many Partitions are Good Performing

2018-01-10 Thread pinker
I once ran a test on my laptop because I was curious as well. From my results (laptop: 16 GB RAM, 4 cores) the upper limit was 12k partitions. Above that, the planning time was unbearably high - much higher than the execution time. It was tested on 9.5.
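A rough way to reproduce this kind of measurement (a sketch using 9.5-era inheritance partitioning; table names are made up):

```sql
-- Create a parent and many children, then compare the planning and
-- execution times that EXPLAIN (ANALYZE) reports separately.
CREATE TABLE measurements (id int, ts date);
DO $$
BEGIN
  FOR i IN 1..1000 LOOP
    EXECUTE format(
      'CREATE TABLE measurements_p%s (CHECK (id = %s)) INHERITS (measurements)',
      i, i);
  END LOOP;
END $$;

EXPLAIN (ANALYZE) SELECT * FROM measurements WHERE id = 500;
```

Raising the loop bound towards 12k should show planning time overtaking execution time.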

Re: pg_basebackup is taking more time than expected

2018-01-12 Thread pinker
It will not solve the problem, but maybe try the --checkpoint=fast option.
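A sketch of the invocation (host, user and paths are made up); --checkpoint=fast asks the server for an immediate checkpoint instead of a spread one, so the backup starts without waiting:

```shell
pg_basebackup -h primary.example.com -U replicator \
    -D /var/lib/pgsql/backup \
    --checkpoint=fast --wal-method=stream --progress
```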

Re: dirty_ratio & dirty_background_ratio settings with huge memory

2018-03-06 Thread pinker
Laurenz Albe wrote
> Yes, you should set vm.dirty_background_bytes and vm.dirty_bytes
> and not use the *_ratio settings.
>
> 2 GB for vm.dirty_background_bytes and 1 GB for vm.dirty_bytes sounds
> fine.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com

Thank you …
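Applying byte-based settings looks roughly like this (a sketch; note that the kernel expects vm.dirty_background_bytes to be the lower of the two thresholds, so the background limit below is the smaller value):

```shell
# Switch from ratio-based to byte-based dirty-page limits at runtime:
sudo sysctl -w vm.dirty_background_bytes=1073741824   # 1 GiB
sudo sysctl -w vm.dirty_bytes=2147483648              # 2 GiB

# Persist across reboots (file name is made up):
printf 'vm.dirty_background_bytes = 1073741824\nvm.dirty_bytes = 2147483648\n' \
  | sudo tee /etc/sysctl.d/99-postgres-dirty.conf
```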

Re: dirty_ratio & dirty_background_ratio settings with huge memory

2018-03-06 Thread pinker
Andres Freund wrote
> With a halfway modern PG I'd suggest to rather tune postgres settings
> that control flushing. That leaves files like temp sorting in memory for
> longer, while flushing things controlledly for other sources of
> writes. See *_flush_after settings.
>
> Greetings,
>
> Andres …
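The settings Andres refers to look like this in postgresql.conf (a sketch showing the Linux defaults; tune to your workload):

```
checkpoint_flush_after = 256kB   # flush checkpoint writes in batches
bgwriter_flush_after   = 512kB   # background writer
wal_writer_flush_after = 1MB     # WAL writer
backend_flush_after    = 0       # off by default; e.g. 256kB to enable
```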

Re: SELECT .. FOR UPDATE: find out who locked a row

2018-03-15 Thread pinker
Check out here: https://wiki.postgresql.org/wiki/Lock_Monitoring
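In the spirit of that wiki page, a minimal query to see which session holds the row lock (assumes PostgreSQL 9.6+ for pg_blocking_pids):

```sql
SELECT blocked.pid    AS blocked_pid,
       blocked.query  AS blocked_query,
       blocking.pid   AS blocking_pid,
       blocking.query AS blocking_query
FROM pg_stat_activity blocked
JOIN pg_stat_activity blocking
  ON blocking.pid = ANY (pg_blocking_pids(blocked.pid));
```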

dirty_ratio & dirty_background_ratio settings with huge memory

2018-03-06 Thread pinker
Hi, I've got a customer with really huge RAM; currently it's:

             total       used       free     shared    buffers     cached
Mem:       3102111    3052596      49515      20880       1992    2961185
-/+ buffers/cache:      90418    3011693
Swap:         8191          1       8190

(free -m) and …

Re: Need followup setup instructions to postgresql-10.3-1-linux-x64.run

2018-03-14 Thread pinker
You probably need to change pg_hba.conf: set the authentication method to trust for your user, reload the server with pg_ctl, go into psql and change the password. You will find more details here: https://www.postgresql.org/docs/current/static/auth-pg-hba-conf.html
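The steps above, sketched out (paths and user name are made up; remember to revert the trust entry afterwards):

```shell
# 1. In pg_hba.conf, temporarily trust local connections for your user:
#      local   all   postgres   trust
# 2. Reload the configuration and reset the password:
pg_ctl reload -D /var/lib/pgsql/10/data
psql -U postgres -c "ALTER USER postgres PASSWORD 'new-password'"
# 3. Change the pg_hba.conf entry back (e.g. to md5) and reload again.
```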

Re: Best options for new PG instance

2018-03-14 Thread pinker
Bugzilla from scher...@proteus-tech.com wrote
> Oh - and lots of memory is always good no matter what as others have said.

I'm probably "the others" here. I have already seen really large installations, like with 6 TB of RAM. Dealing with them is like a completely different universe of problems, because of …

Re: decompose big queries

2018-04-06 Thread pinker
Edson Carlos Ericksson Richter wrote
> I don't know if there are best practices (each scenario requires its own
> solution), but for plain complex SELECT queries, I do use "WITH"
> queries... They work really well.

Be cautious with CTEs. They weren't meant to be an alternative to subqueries …
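The caveat in a nutshell (a sketch with a made-up `orders` table): before PostgreSQL 12, a CTE is an optimization fence, so it is materialized in full and predicates are not pushed into it.

```sql
-- Pre-v12: the CTE is materialized before the outer WHERE applies:
WITH big AS (SELECT * FROM orders)
SELECT * FROM big WHERE id = 42;

-- The equivalent subquery lets the planner push the filter down:
SELECT * FROM (SELECT * FROM orders) s WHERE id = 42;

-- Since PostgreSQL 12 the behaviour can be chosen explicitly:
WITH big AS NOT MATERIALIZED (SELECT * FROM orders)
SELECT * FROM big WHERE id = 42;
```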

unreliable behaviour of track_functions

2018-03-31 Thread pinker
Hi All! I've been experimenting with the track_functions option and what I've seen really puzzles me. The documentation says: /SQL-language functions that are simple enough to be "inlined" into the calling query will not be tracked, regardless of this setting./ But it turned out it depends on much …
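A small sketch of the documented behaviour (function names are made up): a trivial SQL function that can be inlined may never appear in the statistics, while one whose declaration blocks inlining is tracked.

```sql
SET track_functions = 'all';

CREATE FUNCTION add_one(i int) RETURNS int
  LANGUAGE sql AS 'SELECT i + 1';          -- simple enough to be inlined

CREATE FUNCTION add_one_vol(i int) RETURNS int
  LANGUAGE sql VOLATILE AS 'SELECT i + 1'; -- VOLATILE usually blocks inlining

SELECT add_one(1), add_one_vol(1);

-- Typically only the non-inlined function shows up here:
SELECT funcname, calls FROM pg_stat_user_functions;
```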

Re: unreliable behaviour of track_functions

2018-04-01 Thread pinker
To be clear: I was experimenting with pg_stat_clear_snapshot() after your answer, but it doesn't change the behaviour of track_functions.

Re: unreliable behaviour of track_functions

2018-04-01 Thread pinker
I mean this part describing track_functions: https://www.postgresql.org/docs/10/static/runtime-config-statistics.html "Enables tracking of function call counts and time used. Specify pl to track only procedural-language functions, all to also track SQL and C language functions. The default is …"

Re: pg_sleep() inside plpgsql block - pro & cons

2018-10-02 Thread pinker
Francisco Olarte wrote
> I do some similar things, but I sleep outside of the
> database, is there a reason this can not be done?
>
> Francisco Olarte.

Yes, I do try to convince them to do it outside the db; that's the reason I'm looking for some support here :) I'm not sure those 2 reasons are …

pg_sleep() inside plpgsql block - pro & cons

2018-10-02 Thread pinker
Hi! This is the second time I've seen somebody use the pg_sleep function inside a plpgsql block. This case is quite similar to the last one - it's some kind of wait for data to be loaded. After pg_sleep there is a check whether some condition is true; if not, the procedure goes to sleep again. As a result an …
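The pattern in question looks roughly like this (a sketch; the `load_status` table is made up). The main con is visible in the code itself: the whole wait happens inside one transaction, which keeps its snapshot, locks, and connection pinned.

```sql
CREATE OR REPLACE FUNCTION wait_for_data(p_batch int) RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
  WHILE NOT EXISTS (SELECT 1 FROM load_status
                    WHERE batch_id = p_batch AND done) LOOP
    PERFORM pg_sleep(5);   -- holds the transaction open while sleeping
  END LOOP;
END $$;
```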

Re: Why the sql is not executed in parallel mode

2018-09-26 Thread pinker
Which version are you running?

Re: Postgresql

2018-09-25 Thread pinker
You might find this comparison useful: https://en.wikipedia.org/wiki/Comparison_of_relational_database_management_systems

1GB of maintenance work mem

2020-04-21 Thread pinker
Hi, is this limit on maintenance_work_mem still there, or has it been patched? https://www.postgresql-archive.org/Vacuum-allow-usage-of-more-than-1GB-of-work-mem-td5919221i180.html

COPY blocking \dt+?

2020-05-04 Thread pinker
Hi, I'm running a standard pgbench and, which is kind of strange, "copy pgbench_accounts from stdin" is blocking my other query, \dt+. Does COPY hold any exclusive lock, or is there something wrong with my system? I'm using: pgbench=> SELECT version(); -[ RECORD 1 …
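A diagnostic sketch for a situation like this: check which lock the COPY session actually holds on the table and whether the other session's request is granted.

```sql
SELECT l.pid, l.mode, l.granted, a.query
FROM pg_locks l
JOIN pg_stat_activity a USING (pid)
WHERE l.relation = 'pgbench_accounts'::regclass;
```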

Re: COPY blocking \dt+?

2020-05-04 Thread pinker
Thank you, David. So it would need to run inside a single transaction to cause the lock, right? Do you know whether pgbench opens a transaction?

Loading 500m json files to database

2020-03-23 Thread pinker
Hi, do you maybe have an idea how to make the loading process faster? I have 500 million JSON files (1 JSON per file) that I need to load into the db. My test set is "only" 1 million files. What I came up with so far is:

time for i in datafiles/*; do
  psql -c "\copy json_parts(json_data) FROM $i" &
done
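One way to cut the per-file overhead (a sketch; it assumes each file holds exactly one JSON document on a single line, with no embedded newlines or unescaped backslashes) is to feed all files through a single connection and a single COPY stream:

```shell
find datafiles -type f -print0 \
  | xargs -0 cat \
  | psql -c "\copy json_parts(json_data) FROM STDIN"
```

This replaces one connection, backend, and transaction per file with one of each for the whole batch.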

Re: Loading 500m json files to database

2020-03-23 Thread pinker
Hi, json_parts is just a single table with 2 columns:

Table "public.json_parts"
 Column | Type | Collation | Nullable | Default | Storage | Stats target | Description …

Re: Loading 500m json files to database

2020-03-23 Thread pinker
There are no indexes, foreign keys, or any other constraints.

Re: Loading 500m json files to database

2020-03-23 Thread pinker
Christopher Browne-3 wrote
> Well, you're paying for a lot of overhead in that, as you're
> establishing a psql command, connecting to a database, spawning a backend
> process, starting a transaction, committing a transaction, closing the
> backend process, disconnecting from the database, and …

Re: Loading 500m json files to database

2020-03-23 Thread pinker
It's in blob storage in Azure. I'm testing with the 1m files that I have locally.

Re: Loading 500m json files to database

2020-03-23 Thread pinker
Ertan Küçükoğlu wrote
> However, if possible, you may think of using a local physical computer to
> do all uploading and after do backup/restore on cloud system.
>
> Compressed backup will be far less internet traffic compared to direct
> data inserts.

I was thinking about that, but the data source …

Re: Loading 500m json files to database

2020-03-23 Thread pinker
It's a cloud environment and the plpythonu extension is not available, unfortunately.

Re: Loading 500m json files to database

2020-03-23 Thread pinker
Hmm, now I'm thinking: maybe setting up pgbouncer in front of Postgres in statement mode would help?

insert on conflict postgres returning distinction

2020-08-12 Thread pinker
Is there any way to distinguish between updated and inserted rows in the RETURNING clause when ON CONFLICT DO UPDATE was used?
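A commonly used trick (a sketch with a made-up table; note it relies on an implementation detail, not documented behaviour): in the RETURNING clause of an upsert, xmax is 0 for freshly inserted rows and nonzero for rows taken through the DO UPDATE path.

```sql
INSERT INTO t (id, name)
VALUES (1, 'a')
ON CONFLICT (id) DO UPDATE SET name = excluded.name
RETURNING id, (xmax = 0) AS inserted;
```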

Re: insert on conflict postgres returning distinction

2020-08-12 Thread pinker
Thank you, Adrian. The background is that I have already written a Python script that translates Oracle's MERGE clause to a Postgres INSERT ... ON CONFLICT, but in order to be able to add the DELETE part of MERGE I need to distinguish those operations. Thank you for the idea with the trigger; I …

Re: insert on conflict postgres returning distinction

2020-08-12 Thread pinker
How about this solution? Does it have any caveats?

WITH upsert AS (
    INSERT INTO GUCIO (ID, NAZWA)
    SELECT A.ID, A.NAZWA FROM ALA A
    ON CONFLICT (ID) DO UPDATE SET nazwa = excluded.nazwa
    RETURNING xmax, xmin, *
)
SELECT xmax AS xmax_value INTO txmaxu FROM upsert;
DELETE FROM …

Re: insert on conflict postgres returning distinction

2020-08-12 Thread pinker
Of course, inside a transaction block.

rollback previous commit if the current one fails

2021-04-13 Thread pinker
Hi, I need to emulate Oracle's savepoint behaviour inside a plpgsql function. The function is able to insert all the rows that weren't caught by the exception, but I also need to roll back the insert that happened before the exception. So let's say the exception is thrown when j=3, so I need …
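One way to get savepoint-like scoping in plpgsql (a sketch; the `target` table and the failure condition are made up): a BEGIN ... EXCEPTION block is an implicit savepoint, so grouping the two inserts in one inner block rolls them back together when the second one fails.

```sql
DO $$
DECLARE j int;
BEGIN
  FOR j IN 1..5 LOOP
    BEGIN
      INSERT INTO target VALUES (j - 1);   -- the "previous" insert
      INSERT INTO target VALUES (j);       -- may raise, e.g. when j = 3
    EXCEPTION WHEN others THEN
      -- both inserts of this iteration are rolled back together
      RAISE NOTICE 'rolled back pair for j = %', j;
    END;
  END LOOP;
END $$;
```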

Re: rollback previous commit if the current one fails

2021-04-13 Thread pinker
Thank you, Luis, but this is not supported in plpgsql.

Re: List user who have access to schema

2018-08-03 Thread pinker
I think this one will give you the report you need:

SELECT schema_name,
       roleid::regrole,
       string_agg(member::regrole::text, ',' ORDER BY member::regrole::text) AS users
FROM information_schema.schemata s,
     pg_user u
     JOIN pg_auth_members a
       ON u.usename::text = a.roleid::regrole::text
WHERE s.schema_name not …
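An alternative sketch using the built-in privilege-inquiry function, which also catches access granted via PUBLIC or default privileges rather than only role membership:

```sql
-- Which roles can use each non-system schema:
SELECT n.nspname AS schema_name, r.rolname AS role_name
FROM pg_namespace n
CROSS JOIN pg_roles r
WHERE n.nspname NOT LIKE 'pg_%'
  AND n.nspname <> 'information_schema'
  AND has_schema_privilege(r.rolname, n.nspname, 'USAGE')
ORDER BY 1, 2;
```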

Re: FK v.s unique indexes

2018-07-05 Thread pinker
David Rowley-3 wrote
> I don't think there were any actual roadblocks, it was more of just
> not enough time in the cycle to make it work due to a few technical
> details that required extra effort to make work.
>
> Alvaro managed to simplify the problem and allow foreign keys to be
> defined on …

Re: Possible bug: could not open relation with OID [numbers] SQL State: XX000

2018-01-19 Thread pinker
I would like to refresh the topic and add another report about the issue, which just happened to me. I'm sure it's the TOAST table that cannot be opened inside the function. I have added the following RAISE NOTICE clauses to it and run ANALYZE inside the function: analyze verbose temp_table; …