Happened again even with max_wal_size=32GB on the 128GB WAL partition. At
this point I'm quite sure what happens:
+ a checkpoint is going on for several minutes (because of I/O contention
on the archival drive)
+ meanwhile data keeps coming in through several processes of pg_restore
+
Hello list,
last night I got ENOSPACE on the WAL partition while running a huge
pg_restore on an empty and idle database.
The checkpoint that started 7 minutes earlier never finished:
04:31:09 LOG: checkpoint starting: wal
...
04:38:04 PANIC: could not write to file "pg_wal/xlo
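A minimal sketch of how one might watch WAL growth and checkpointer progress while such a restore runs (assumes psql on the server and a PostgreSQL 17+ catalog; on older versions the checkpointer counters live in pg_stat_bgwriter, and the WAL mount point below is a placeholder):

```shell
# Total WAL ever written, as a rough pressure indicator during the restore.
psql -d postgres -Atc "SELECT pg_size_pretty(
    pg_wal_lsn_diff(pg_current_wal_lsn(), '0/0'))"

# Checkpointer activity (PostgreSQL 17+; see pg_stat_bgwriter before that).
psql -d postgres -c "SELECT num_timed, num_requested, write_time, sync_time
                     FROM pg_stat_checkpointer"

# Free space left on the dedicated WAL partition.
df -h /path/to/pg_wal   # substitute the actual mount point
```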
Forgot an important setting I have during pg_restore:
On Thu, 10 Apr 2025, Dimitrios Apostolou wrote:
max_wal_size=64GB
max_replication_slots = 0
max_logical_replication_workers = 0
max_wal_senders = 0
wal_level = minimal
autovacuum = off
checkpoint_completion_target = 0
so the checkpoint
m out of ideas on this one.
You'd have to create and load the table in the same transaction,
that is,
you'd have to run pg_restore with --single-transaction.
That would restore the schema from the dump, while I want to create the
schema from the SQL code in version control.
On 4/2/25 10:39 AM, Adrian Klaver wrote:
--clean will drop the object entirely not TRUNCATE.
I'm guessing that this is being done by you per:
https://www.postgresql.org/message-id/53760c70-4a87-a453-9e02-57abc9cb2e54%40gmx.net
"After each failed attempt, I need to issue a TRUNCATE table
s is not really needed.
> Dbowner has CREATEROLE and is the one who creates all the roles (WITH
> SET TRUE), and their private schemata in the specific database. Things
> would work if pg_restore did "SET ROLE" instead of "SET SESSION
> AUTHORIZATION" to switch
tion, that is,
you'd have to run pg_restore with --single-transaction.
That would restore the schema from the dump, while I want to create the
schema from the SQL code in version control.
I am not following, from your original post:
"
... create a
clean database by running the SQL s
On 3/20/25 15:48, Dimitrios Apostolou wrote:
Rationale:
When restoring a backup in an emergency situation, it's fine to run
pg_restore as superuser and get an exact replica of the dumped db.
AFAICT pg_restore (without --data-only) is optimised for such case.
But pg_dump/restore can be us
e my own function to ALTER all
tables to UNLOGGED, but failed with "could not change table T to
unlogged because it references logged table". I'm out of ideas on this
one.
You'd have to create and load the table in the same transaction, that is,
you'd hav
Hello list.
My database includes one table with 1000 partitions, all of them rather
sizeable. I run:
pg_restore -j12 --no-tablespaces --disable-triggers --exit-on-error
--no-owner --no-privileges -n public -d newdb custom_format_dump.pgdump
Right now after 24h of restore, I notice weird
On Wed, 2 Apr 2025, Dimitrios Apostolou wrote:
Hello list.
My database includes one table with 1000 partitions, all of them rather
I was not clear here: my database dump has all that, and the database is
brand new and empty.
sizeable. I run:
pg_restore -j12 --no-tablespaces --disable
On 4/4/25 9:45 AM, Ron Johnson wrote:
TRUNCATE statements inside of "toc.dat" files? I'm skeptical.
See my post here:
https://www.postgresql.org/message-id/7be2dcc6-3ba4-4e3f-a154-8d13d816aa9b%40aklaver.com
--
Adrian Klaver
adrian.kla...@aklaver.com
On Wed, Apr 2, 2025 at 1:32 PM Dimitrios Apostolou wrote:
> Hello list.
>
> My database includes one table with 1000 partitions, all of them rather
> sizeable. I run:
>
>pg_restore -j12 --no-tablespaces --disable-triggers --exit-on-error
> --no-owner --no-privileg
e Postgres version being restored to.
4) The complete pg_restore command.
5) The error messages.
Also have you tried a schema only dump/restore?
Dimitris
--
Adrian Klaver
adrian.kla...@aklaver.com
On Fri, 4 Apr 2025, Adrian Klaver wrote:
On 4/4/25 06:13, Dimitrios Apostolou wrote:
On Wed, 2 Apr 2025, Dimitrios Apostolou wrote:
Bug? This happened on a postgres compiled from last week's master branch.
Are you talking about the dev version?
In this thread, yes. My mistake I didn't
On 4/4/25 06:13, Dimitrios Apostolou wrote:
On Wed, 2 Apr 2025, Dimitrios Apostolou wrote:
Bug? This happened on a postgres compiled from last week's master branch.
Are you talking about the dev version?
The dump I'm trying to restore is from postgres 17.4.
Thanks
Dimitris
On Wed, 2 Apr 2025, Adrian Klaver wrote:
On 4/2/25 10:39 AM, Adrian Klaver wrote:
--clean will drop the object entirely not TRUNCATE.
I'm guessing that this is being done by you per:
https://www.postgresql.org/message-id/53760c70-4a87-a453-9e02-57abc9cb2e54%40gmx.net
"After each fa
On Wed, 2 Apr 2025, Adrian Klaver wrote:
On 4/2/25 10:32 AM, Dimitrios Apostolou wrote:
Hello list.
My database includes one table with 1000 partitions, all of them rather
sizeable. I run:
pg_restore -j12 --no-tablespaces --disable-triggers --exit-on-error
--no-owner --no-privileges
On 4/2/25 10:32 AM, Dimitrios Apostolou wrote:
Hello list.
My database includes one table with 1000 partitions, all of them rather
sizeable. I run:
pg_restore -j12 --no-tablespaces --disable-triggers --exit-on-error
--no-owner --no-privileges -n public -d newdb custom_format_dump.pgdump
estoration of
multi-TB tables to databases with lots of cruft?
Fixing that would go a long way towards eliminating your problems with
pg_restore.
I don't have emergencies yet. I'm testing the process of restoring the
database dump, and it takes more than 24 hours currently. A success
Transaction 1: manually issuing all of CREATE TABLE etc.
>
> Transaction 2: pg_restore --single-transaction --data-only
>
> The COPY command in transaction 2 would still need to write to WAL, since
> it's separate from the CREATE TABLE.
>
> Am I wrong somewhere?
No, that is correct.
Yours,
Laurenz Albe
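The distinction confirmed above can be sketched end to end: COPY can only skip WAL when its target table was created in the same transaction, and only under wal_level = minimal (table and file names here are illustrative):

```shell
# Requires wal_level = minimal (and max_wal_senders = 0) in postgresql.conf.
psql -d newdb <<'SQL'
BEGIN;
CREATE TABLE big_t (id int, payload text);      -- created in this transaction...
COPY big_t FROM '/tmp/big_t.csv' (FORMAT csv);  -- ...so this COPY can skip WAL
COMMIT;
SQL
```

Splitting the CREATE TABLE and the COPY into two transactions, as in the scenario above, forfeits the optimization.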
On Mon, 2025-03-24 at 15:24 +0100, Dimitrios Apostolou wrote:
> By the way do you see potential problems with using --single-transaction
> to restore billion-rows tables?
No.
Yours,
Laurenz Albe
th lots of cruft?
Fixing that would go a long way towards eliminating your problems with
pg_restore.
I don't have emergencies yet. I'm testing the process of restoring the
database dump, and it takes more than 24 hours currently. A successful
test is vital to approve the process.
But
needed.
Dbowner has CREATEROLE and is the one who creates all the roles (WITH
SET TRUE), and their private schemata in the specific database. Things
would work if pg_restore did "SET ROLE" instead of "SET SESSION
AUTHORIZATION" to switch user. Is this a strai
Why are you regularly having emergencies requiring the restoration of
multi-TB tables to databases with lots of cruft?
Fixing that would go a long way towards eliminating your problems with
pg_restore.
On Mon, Mar 24, 2025 at 11:51 AM Dimitrios Apostolou wrote:
> On Mon, 24 Mar 2025, Adr
d have to create and load the table in the same transaction, that is,
you'd have to run pg_restore with --single-transaction.
That would restore the schema from the dump, while I want to create the
schema from the SQL code in version control.
I am not following, from your original post:
ction to ALTER all
tables to UNLOGGED, but failed with "could not change table T to
unlogged because it references logged table". I'm out of ideas on this
one.
You'd have to create and load the table in the same transaction, that is,
you'd have to run pg_restore
; tables to UNLOGGED, but failed with "could not change table T to
> unlogged because it references logged table". I'm out of ideas on this
> one.
You'd have to create and load the table in the same transaction, that is,
you'd have to run pg_restore with --single
Rationale:
When restoring a backup in an emergency situation, it's fine to run
pg_restore as superuser and get an exact replica of the dumped db.
AFAICT pg_restore (without --data-only) is optimised for such case.
But pg_dump/restore can be used as a generic data-copying utility, and in
On Thu, 20 Mar 2025, Dimitrios Apostolou wrote:
Rationale:
When restoring a backup in an emergency situation, it's fine to run
pg_restore as superuser and get an exact replica of the dumped db.
AFAICT pg_restore (without --data-only) is optimised for such case.
But pg_dump/restore can be
On Fri, Mar 21, 2025 at 2:36 PM Dimitrios Apostolou wrote:
> On Thu, 20 Mar 2025, Dimitrios Apostolou wrote:
>
> > Rationale:
> >
> > When restoring a backup in an emergency situation, it's fine to run
> > pg_restore as superuser and get an exact replica of
on't know
(yet) why it does not work at work.
Here's what I tried on my own clusters. Note the access privileges for "paul".
So, "paul" again has CTc after pg_restore. That's what does not work at work.
I'll have to figure out what's wrong the
Hi Adrian,
> On 13 Feb 2025, at 19:05, Adrian Klaver wrote:
>
> Then run pg_restore -s -f db_name.sql against whatever is the dump file
> produced by pg_dump -Fc -Z1 ...
>
> It will create a plain text version of the schema definitions, no data in the
> file db_name.sql
database object itself, not any of its
contained objects.
In the original database are you executing explicit GRANTs on the database
object?
Do:
pg_restore -s -f db_name.sql ${PGDATABASE}.out
This will create a text version restore of the schema objects in the dump file.
Then search the file
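For example (archive name and grep pattern are illustrative):

```shell
# Extract a plain-text schema from the custom-format archive...
pg_restore -s -f db_name.sql "${PGDATABASE}.out"
# ...then look for explicit GRANTs on the database object.
grep -n 'GRANT .* ON DATABASE' db_name.sql
```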
ual for the field to be blank.
>
> 2) \l only lists the privileges for the database object itself, not any of
> its contained objects.
>
> In the original database are you executing explicit GRANTs on the database
> object?
>
> Do:
>
> pg_restore -s -f db_
On 2/13/25 08:17, Paul Foerster wrote:
Hi Adrian,
sorry for the late answer. I'm just too busy.
On 7 Feb 2025, at 17:19, Adrian Klaver wrote:
With create database being "template template0", this is what my
script does. But I need the -cC options for pg_restore to get ACL
Hi Adrian,
sorry for the late answer. I'm just too busy.
> On 7 Feb 2025, at 17:19, Adrian Klaver wrote:
>
>> With create database being "template template0", this is what my
>> script does. But I need the -cC options for pg_restore to get ACLs back.
>&
Hi Daniel,
> On 7 Feb 2025, at 14:29, Daniel Verite wrote:
>
> Still, you may check it with pg_amcheck [1] or try rebuilding it
> just in case.
Thanks. I guess this is good advice. I will try that on Monday.
Cheers,
Paul
On 2/6/25 22:32, Paul Foerster wrote:
Hi Adrian,
On 6 Feb 2025, at 17:31, Adrian Klaver wrote:
1) Log into postgres database and do:
a) DROP DATABASE mydb;
b) CREATE DATABASE mydb ;
2) pg_restore -d mydb mydb.dump.gz
With create database being "template template0", this
On 2/7/25 06:53, Greg Sabino Mullane wrote:
I'm not sure why we are focused on the other errors - the database fails
to get dropped (or created), so future errors are to be expected.
It is already created before the pg_restore is run.
The OP says it works for ~40 other databases the
I'm not sure why we are focused on the other errors - the database fails to
get dropped (or created), so future errors are to be expected.
pg_restore should be run with the --exit-on-error flag, and handle the
errors one by one as someone mentioned upthread.
I would use the --section=pre
Paul Foerster wrote:
> >> pg_restore: error: could not execute query: ERROR: insert or update on
> >> table "table_1" violates foreign key constraint "..._fk"
> >> DETAIL: Key (dokument_id)=(133680) is not present in table "...&
Hi Adrian,
> On 6 Feb 2025, at 19:44, Adrian Klaver wrote:
>
> By ACL do you mean roles?
>
> If so roles are global to the cluster not the database, so I am not seeing -C
> being relevant.
>
> If not you will need to be more specific about what you are referring to.
I did a "pg_dumpall -r >r
Hi Adrian,
> On 6 Feb 2025, at 17:31, Adrian Klaver wrote:
>
> 1) Log into postgres database and do:
>
> a) DROP DATABASE mydb;
> b) CREATE DATABASE mydb ;
>
> 2) pg_restore -d mydb mydb.dump.gz
With create database being "template template0", this is wh
o you use the -c, -C, and --if-exists options? Try without
them. On a new database, you should only need:
pg_restore -d mydb mydb.dump.gz
I need -C because I need ACLs to be recreated too. I tried with -C only, i.e.
no -c but that doesn't work for some reason. The --if-exists is a script
r
it. Why do you use the -c, -C, and --if-exists options? Try
> without them. On a new database, you should only need:
>
> pg_restore -d mydb mydb.dump.gz
I need -C because I need ACLs to be recreated too. I tried with -C only, i.e.
no -c but that doesn't work for some reason. The -
en-US | |
$ pg_restore -cC --if-exists --disable-triggers -d mydb mydb.dump.gz
I would start by doing:
1) Log into postgres database and do:
a) DROP DATABASE mydb;
b) CREATE DATABASE mydb ;
2) pg_restore -d mydb mydb.dump.gz
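As a script, with the "template template0" detail from elsewhere in the thread folded in (database and file names are illustrative):

```shell
# Recreate the database cleanly, then restore without -c/-C.
psql -d postgres -c 'DROP DATABASE IF EXISTS mydb'
psql -d postgres -c 'CREATE DATABASE mydb TEMPLATE template0'
pg_restore -d mydb mydb.dump.gz
```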
pg_restore: error: could not execute query: ERROR: inse
ently open database
pg_restore can't drop the database because it's connected to the database. When
you use -c and -C options, you can't connect to the database you want to
restore to. You have to connect to another database, such as postgres, so that
it can do the drop and the cr
Hi Guillaume,
> On 6 Feb 2025, at 11:13, Guillaume Lelarge
> wrote:
>
> You probably don't need --disable-triggers. You should fix errors in the
> order they appear. The first one is on the drop of the database:
>
> ERROR: cannot drop the currently open database
en-US | |
$ pg_restore -cC --if-exists --disable-triggers -d mydb mydb.dump.gz
pg_restore: error: could not execute query: ERROR: cannot drop the currently
open database
Command was: DROP DATABASE IF EXISTS mydb;
pg_restore: error: could not execute query: ERROR: database "mydb" already
exi
ocale | ICU Rules | Access privileges
--+--+--+-+-+-++---+---
mydb | postgres | UTF8 | icu | en_US.UTF-8 | en_US.UTF-8 |
en-US | |
$ pg_restore -cC --if-exists --disable-triggers -d
irectory is filled with abandoned table files which
>> postgresql know nothing about
>>
>> Playground:
>>
>> Terminal 1:
>> Start server: `docker run --rm -it -e POSTGRES_HOST_AUTH_METHOD=trust -v
>> $PWD/postgres:/var/lib/postgresql/data postgres:17.2`
>>
&
RES_HOST_AUTH_METHOD=trust -v
> $PWD/postgres:/var/lib/postgresql/data postgres:17.2`
>
> Terminal 2:
> 1. Start container with recent pg_restore: `docker run --rm -it -v
> $PWD:/app -w /app postgres:17.2 bash`
> 2. Start dump recovery: `pg_restore -h 172.17.0.2 -U postgres -d post
:
Start server: `docker run --rm -it -e POSTGRES_HOST_AUTH_METHOD=trust -v
$PWD/postgres:/var/lib/postgresql/data postgres:17.2`
Terminal 2:
1. Start container with recent pg_restore: `docker run --rm -it -v
$PWD:/app -w /app postgres:17.2 bash`
2. Start dump recovery: `pg_restore -h 172.17.0.2 -U
On Tue, Sep 03, 2024 at 10:28:44PM -0400, Tom Lane wrote:
> raf writes:
> > On Tue, Sep 03, 2024 at 06:43:22PM -0700, "David G. Johnston"
> > wrote:
> >> You specified “-1” so I don’t get why you believe pg_restore should be
> >> continuing to execu
raf writes:
> On Tue, Sep 03, 2024 at 06:43:22PM -0700, "David G. Johnston"
> wrote:
>> You specified “-1” so I don’t get why you believe pg_restore should be
>> continuing to execute in the face of the SQL error.
> The reason I believe pg_restore should be conti
-E utf8 -O admin
> > payroll_tst
>
> Given the following command
>
> > gpg --decrypt 20240904-011254-payroll_tst.pgdump.gpg.aps24 | pg_restore
> > -1 -h payroll -p 5433 -U postgres -d payroll_tst -Fc
>
> And this error
>
> > pg_restore: [archiver (d
:
>
> dropdb -h payroll -p 5433 -U postgres payroll_tst
> createdb -h payroll -p 5433 -U postgres -T template0 -E utf8 -O admin
> payroll_tst
Given the following command
> gpg --decrypt 20240904-011254-payroll_tst.pgdump.gpg.aps24 | pg_restore
> -1 -h payroll -p 5433 -U postgres -d
E utf8 -O admin
payroll_tst
gpg --decrypt 20240904-011254-payroll_tst.pgdump.gpg.aps24 | pg_restore -1 -h
payroll -p 5433 -U postgres -d payroll_tst -Fc
(The ".aps24" is a label to indicate which gpg key was used)
Below is the output from the gpg | pg_restore pipeline:
gpg: encrypted
ds for single transaction
in pg_restore. As -l does not execute anything, nothing was logged in
the postgres server log and no error was shown anywhere.
-l does indeed execute something per:
https://www.postgresql.org/docs/current/app-pgrestore.html
"
-l
--list
List the table o
I didn't use pg_dump/restore until today and finally found my mistake
which led to the "problem" described below.
The output "depends on" comes from the -l (l as Lima) flag; what I wanted
was the -1 (number one) flag, which stands for single transaction in
pg_restor
On 3/28/24 08:57, Fire Emerald wrote:
Am 28. März 2024 15:00:06 schrieb Tom Lane :
Fire Emerald writes:
Then I did a pg_restore -d target --verbose -Fc file.dump and saw in the
output this:
5145 0 730750 TABLE subpartitions backends_y2024w03 userA
; depends on: 237
and so on
Fire Emerald writes:
> The partitioning must be the problem somehow.
[ shrug... ] You're still not providing any details that would
let somebody else reproduce or diagnose the problem.
https://wiki.postgresql.org/wiki/Guide_to_reporting_problems
regards, tom lane
Am 28. März 2024 15:00:06 schrieb Tom Lane :
Fire Emerald writes:
Then I did a pg_restore -d target --verbose -Fc file.dump and saw in the
output this:
5145 0 730750 TABLE subpartitions backends_y2024w03 userA
;depends on: 237
and so on ...
That is not an error, it's
Fire Emerald writes:
> Then I did a pg_restore -d target --verbose -Fc file.dump and saw in the
> output this:
> 5145 0 730750 TABLE subpartitions backends_y2024w03 userA
> ;depends on: 237
> and so on ...
That is not an error, it's just verbose display of one
Hello everyone,
I created a database dump in postgres 'custom' format using: pg_dump -d
origin --data-only -Fc > file.dump
Then I did a pg_restore -d target --verbose -Fc file.dump and saw in the
output this:
5145 0 730750 TABLE subpartitions backends_y2024w03 userA
;
's some optimization
> concerned with not writing WAL if COPY's target table was created in
> the current transaction. WAL will still be made for the catalog
> changes, but usually the bulk of the WAL for a pg_restore run comes
> from loading data, and this recipe eliminates that. (Of course,
> you cannot use it on a replication primary.)
>
> regards, tom lane
>
be made for the catalog
changes, but usually the bulk of the WAL for a pg_restore run comes
from loading data, and this recipe eliminates that. (Of course,
you cannot use it on a replication primary.)
regards, tom lane
On Thu, Nov 23, 2023 at 3:37 AM Laurenz Albe
wrote:
[snip]
>
> You can avoid writing WAL if you set "wal_level = minimal", restart
> PostgreSQL
> and restore the dump with the --single-transaction option.
>
Why does "--single-transaction" prevent WAL writes? I'd expect _more_
pg_wal growth from
On Wed, 2023-11-22 at 16:55 +, Efrain J. Berdecia wrote:
> Thanks, the issue we've run into, which I guess could be really a setup
> issue, with running a COPY command while executing pg_restore,
> is that if we are restoring a large table (bigger than 500GB) our WAL
> dire
On Wed, Nov 22, 2023 at 2:28 PM Tom Lane wrote:
> "Efrain J. Berdecia" writes:
> > Thanks, the issue we've run into, which I guess could be really a setup
> issue, with running a COPY command while executing pg_restore, is that if
> we are restoring a large tab
"Efrain J. Berdecia" writes:
> Thanks, the issue we've run into, which I guess could be really a setup
> issue, with running a COPY command while executing pg_restore, is that if we
> are restoring a large table (bigger than 500GB) our WAL directory can grow to
> be
Thanks, the issue we've run into, which I guess could be really a setup issue,
with running a COPY command while executing pg_restore, is that if we are
restoring a large table (bigger than 500GB) our WAL directory can grow to be
very large.
I would think that if the pg_restore or COPY co
On 11/22/23 05:25, Efrain J. Berdecia wrote:
After working for a site where we are constantly doing logical pg_dump
to refresh environments I've come to miss features available in other
RDBMS' refresh/restore utilities.
Someone could point me in the right direction otherwise, but
Thanks, I'm trying to gauge the interest in such a feature enhancement.
Up to now I have not actively contributed to the Postgres Project but this is
itching my rusty programming fingers lol
Thanks, Efrain J. Berdecia
On Wednesday, November 22, 2023 at 08:28:18 AM EST, David G. Johnston
wro
On Wednesday, November 22, 2023, Efrain J. Berdecia
wrote:
>
> Thanks in advance for any suggestions or the green light to post this to
> the PG-developer group :-)
>
If you aren’t offering up a patch for these it isn’t developer material and
belongs right here.
David J.
After working for a site where we are constantly doing logical pg_dump to
refresh environments I've come to miss features available in other RDBMS'
refresh/restore utilities.
Someone could point me in the right direction otherwise, but pg_restore seems
to be lacking the ability t
On 11/3/23 05:09, Marc Millas wrote:
Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com
Testing pg_restore with different --jobs= values will be easier.
pg_dump is what's going to be reading from a constantly varying system.
Hello,
eac
Testing pg_restore with different --jobs= values will be easier. pg_dump
> is what's going to be reading from a constantly varying system.
>
> Hello,
>
each time I do a replatforming of this kind, with DB up to 2 TB,
Nov 2023 at 02:20 Ron wrote:
> On 11/1/23 20:05, Brad White wrote:
>
>
>
> --
> *From:* Ron
> *Sent:* Thursday, November 2, 2023 3:01:47 AM
> *To:* pgsql-general@lists.postgresql.org
>
> *Subject:* Re: pg_dump/pg_restore --jobs prac
On 11/1/23 20:05, Brad White wrote:
*From:* Ron
*Sent:* Thursday, November 2, 2023 3:01:47 AM
*To:* pgsql-general@lists.postgresql.org
*Subject:* Re: pg_dump/pg_restore --jobs practical limit?
On 11/1/23 15:42
From: Ron
Sent: Thursday, November 2, 2023 3:01:47 AM
To: pgsql-general@lists.postgresql.org
Subject: Re: pg_dump/pg_restore --jobs practical limit?
On 11/1/23 15:42, Laurenz Albe wrote:
On Wed, 2023-11-01 at 13:09 -0500, Ron wrote:
I will "soon&quo
nproc count on some is 16 and on others
is 32.
Does anyone have experience as to the point of diminishing returns?
IOW, can I crank them processes up to --jobs=30, will I see no gain -- or
even degradation -- after, for example, --jobs=24?
This would be for both pg_dump and pg_restore (which woul
is 16 and on others
> is 32.
>
> Does anyone have experience as to the point of diminishing returns?
>
> IOW, can I crank them processes up to --jobs=30, will I see no gain -- or
> even degradation -- after, for example, --jobs=24?
>
> This would be for both pg_dump and pg_re
e as to the point of diminishing returns?
IOW, can I crank them processes up to --jobs=30, will I see no gain -- or
even degradation -- after, for example, --jobs=24?
This would be for both pg_dump and pg_restore (which would be run on the
RHEL 8 VM).
--
Born in Arizona, moved to Babylonia.
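One hedged way to find that point of diminishing returns empirically: time the same restore at a few --jobs values into a scratch database (dump file name and GNU time are placeholders/assumptions):

```shell
for j in 8 16 24 30; do
    dropdb --if-exists scratch
    createdb -T template0 scratch
    # %E prints elapsed wall-clock time (GNU time).
    /usr/bin/time -f "jobs=$j elapsed=%E" \
        pg_restore -j "$j" -d scratch custom_format_dump.pgdump
done
```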
On Wed, 2023-10-11 at 11:34 +, Zwettler Markus (OIZ) wrote:
> if you want to retrieve data by means of pg_dump / pg_restore from a database
> with
> locale de_CH.UTF8 into a database with locale en_US.UTF8 are there any other
> things
> to consider than the behavior implica
hi,
if you want to retrieve data by means of pg_dump / pg_restore from a database
with locale de_CH.UTF8 into a database with locale en_US.UTF8 are there any
other things to consider than the behavior implications mentioned in the
documentation:
https://www.postgresql.org/docs/current
On 9/7/23 23:29, Les wrote:
I'm trying to migrate a database from version 11 to version 15.
I have created a dump file on v11:
pg_dump --dbname=not_telling -Fc --no-owner > dump
--no-owner is ignored for non-text outputs. It is handled on the
pg_restore end.
Then I was trying t
I'm trying to migrate a database from version 11 to version 15.
I have created a dump file on v11:
pg_dump --dbname=not_telling -Fc --no-owner > dump
Then I was trying to import it on v15:
pg_restore -v --dbname=not_telling --no-user /restore/dump
pg_restore: error: could not read fr
Hi, thanks for your answer !
Sorry for being late...
I'm sure :-)
But I made more research and find the problem : the function I use...
I'm using the gen_random_uuid() function which has been included in
PostgreSQL core since v13 and was originally in the pgcrypto extension.
So when you don't prefi
> On 31/08/2023 17:08 CEST Arthur Bazin wrote:
>
> Consider that we have a function in the public schema which is named
> my_function_in_public.
>
> In PG11 this table:
> CREATE TABLE public.test_dump (
> id TEXT DEFAULT my_function_in_public()
> );
> When you dump this table with the pg11 binar
Hi everyone !
I have a good question on pg_dump/pg_restore and the search_path.
Consider that we have a function in the public schema which is named
my_function_in_public.
In PG11 this table:
CREATE TABLE public.test_dump (
id TEXT DEFAULT my_function_in_public()
);
When you dump this
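The usual defensive fix for this class of problem is to schema-qualify the function inside the column default, since pg_restore runs with a restricted search_path; a sketch using the thread's own (illustrative) names:

```shell
psql -d mydb <<'SQL'
-- A schema-qualified default survives dump/restore regardless of search_path.
CREATE TABLE public.test_dump (
    id text DEFAULT public.my_function_in_public()
);
SQL
```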
Boris Sagadin writes:
> restoring a 1.5TB database with about 800k tables on i3.4xlarge AWS
> instance, PgSQL V12.15 on Ubuntu.
> Running pg_restore with -j 16, I noticed the pg_restore is busy for an hour
> or so with IO at 80%+ and then most of processes start idling and only a
>
On 7/13/23 02:41, Boris Sagadin wrote:
Hi,
restoring a 1.5TB database with about 800k tables on i3.4xlarge AWS
instance, PgSQL V12.15 on Ubuntu.
Running pg_restore with -j 16, I noticed the pg_restore is busy for an
hour or so with IO at 80%+ and then most of processes start idling and
only
Hi,
restoring a 1.5TB database with about 800k tables on i3.4xlarge AWS
instance, PgSQL V12.15 on Ubuntu.
Running pg_restore with -j 16, I noticed the pg_restore is busy for an hour
or so with IO at 80%+ and then most of processes start idling and only a
few doing some work, disk IO at 1-2
On Sat, 3 Jun 2023 at 00:14, Jonathan S. Katz wrote:
> Typically once a release announcement is out, we'll only edit it if it's
> inaccurate. I don't think the statement in the release announcement is
> inaccurate, as it specifies that concurrent bulk loading is faster.
Understood. I had thought
On 5/30/23 10:05 PM, David Rowley wrote:
My understanding had been that concurrency was required, but I see the
commit message for 00d1e02be mentions:
Even single threaded
COPY is measurably faster, primarily due to not dirtying pages while
extending, if supported by the operating system (see
stgreSQL 16 can also improve the performance of concurrent bulk
> > > loading of data using COPY up to 300%."
> > >
> > > Since pg_dump & pg_restore use COPY (or something very similar), will the
> > > speed increase translate to higher speeds for those util
On Tue, May 30, 2023 at 10:28:58PM -0400, Bruce Momjian wrote:
> On Wed, May 31, 2023 at 02:18:25PM +1200, David Rowley wrote:
> > On Wed, 31 May 2023 at 14:11, Bruce Momjian wrote:
> > >
> > > On Wed, May 31, 2023 at 02:05:10PM +1200, David Rowley wrote:
> > > > "Relation extensions have been imp
On Wed, May 31, 2023 at 02:18:25PM +1200, David Rowley wrote:
> On Wed, 31 May 2023 at 14:11, Bruce Momjian wrote:
> >
> > On Wed, May 31, 2023 at 02:05:10PM +1200, David Rowley wrote:
> > > "Relation extensions have been improved allowing faster bulk loading
> > > of data using COPY. These improv