On Mon, May 03, 2021 at 11:10:44AM -0400, Jan Wieck wrote:
> Not yet, but I will enter it so that we can get it into 15 for sure.

I may be missing something, but this is not listed:
https://commitfest.postgresql.org/33/
Could you add it to the CF app please? There are so many patches and [...]
Hi Jan,

On Fri, Mar 12, 2021 at 06:13:33PM -0500, Jan Wieck wrote:
> One of the things in my way is that when using pg_resetwal to put the
> NextXID way into the future (to push the old cluster close to wraparound for
> example), the postmaster won't start because it doesn't have the pg_xact
> [...]
Resurrecting an old thread.

We (AWS) have seen this wraparound during pg_upgrade more often recently
with customers who have millions of large objects in their databases.

On 6/11/18 1:14 PM, Tom Lane wrote:
> Andres Freund writes:
> > I suspect the issue is that pg_resetwal does: [...]
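One way to gauge exposure to this, assuming catalog access on the old
cluster, is to count the large objects directly (an illustrative query,
not one from the thread):

    -- Each row in pg_largeobject_metadata becomes a separate TOC entry
    -- (and a separate restore step) in pg_dump/pg_restore, so millions of
    -- rows mean millions of per-object operations during pg_upgrade.
    SELECT count(*) AS large_objects
    FROM pg_largeobject_metadata;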
Alexander Shutyaev wrote:
> Is there any hope the issue with pg_upgrade can be resolved? If not,
> could you give me some hints as to how can I decrease time needed
> for pg_dumpall | psql?

Not sure about the larger problem, but for the part about having more
than 1 million large objects [...]
Hello again,

I've performed another test - I've migrated to the new cluster using dump
restore: pg_dumpall | psql. It went well, although it took 6 days while
pg_upgrade usually took a night.

Is there any hope the issue with pg_upgrade can be resolved? If not, could
you give me some hints as to how can I decrease time needed for
pg_dumpall | psql?
Back again,

>> Alexander, could you hack things up so autovacuum logging is enabled
>> (log_autovacuum_min_duration=0), and see whether it's triggered?

I've changed this config setting in both 9.6 and 10.4 postgresql.conf, then
I ran pg_upgrade once more. However, I'm not sure how I can see [...]
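For reference (my note, not part of the original exchange): the same
setting can also be flipped without editing postgresql.conf, assuming a
superuser session:

    -- Log every autovacuum run, regardless of its duration.
    ALTER SYSTEM SET log_autovacuum_min_duration = 0;
    SELECT pg_reload_conf();  -- the setting is reloadable, no restart needed

Autovacuum activity then shows up in the server log as "automatic vacuum
of table ..." entries.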
Andres Freund wrote:
> I'm not entirely clear why pg_restore appears to use a separate
> transaction for each large object, surely exacerbating the problem.

To make sure that per-object locks don't fill up the shared
lock table? There might be hundreds of thousands of large objects.
If [...]
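To put numbers on that (my aside, not from the thread): the shared lock
table has roughly max_locks_per_transaction * (max_connections +
max_prepared_transactions) slots, so the headroom can be estimated with:

    -- Approximate capacity of the shared lock table.
    SELECT current_setting('max_locks_per_transaction')::int *
           (current_setting('max_connections')::int +
            current_setting('max_prepared_transactions')::int)
           AS approx_lock_slots;

With the default max_locks_per_transaction = 64 and max_connections = 100,
that is only about 6400 slots - far fewer than a million large objects.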
>> From the query below I am going to say the above query was done on the
>> bof database. Is that correct?

Yes, it is.

>> Can you run the table_name query in template0 in the 9.6 cluster?

At first I couldn't. There was an error:

psql: FATAL: database "template0" is not currently accepting connections
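The usual way past that error (illustrative, not quoted from the thread;
it requires a superuser) is to temporarily allow connections to template0:

    -- Unlock template0, run the query there, then lock it again.
    UPDATE pg_database SET datallowconn = true WHERE datname = 'template0';
    -- ... connect to template0 and run the query ...
    UPDATE pg_database SET datallowconn = false WHERE datname = 'template0';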
I'm back with more details.

First, I've deleted the smaller sslentry database, since I don't need it,
just so that it doesn't somehow spoil the picture. Now there is only 1 user
database - bof (OID=16400). After that I ran pg_upgrade on a clean
10.4 cluster and it failed in the same way.
Andres Freund writes:
> I suspect the issue is that pg_resetwal does:
>
>     if (set_xid != 0)
>     {
>         ControlFile.checkPointCopy.nextXid = set_xid;
>
>         /*
>          * For the moment, just set oldestXid to a value that will force
>          * immediate autovacuum-for-wraparound. [...]
>          */
>         ControlFile.checkPointCopy.oldestXid = set_xid - 2000000000;
>         [...]
>         ControlFile.checkPointCopy.oldestXidDB = InvalidOid;
>     }
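That last assignment is where the report's "database with OID 0" comes
from: InvalidOid is 0. On 9.6 and later, the control-file fields that
pg_resetwal rewrites can be inspected from SQL (illustrative, not from
the thread):

    -- Show the checkpoint fields that pg_resetwal -x rewrites.
    SELECT next_xid, oldest_xid, oldest_xid_dbid
    FROM pg_control_checkpoint();
    -- After "pg_resetwal -x <xid>", oldest_xid_dbid reads 0 (InvalidOid),
    -- so the wraparound warning names a database with OID 0.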
>> Is this the regular Postgres log or the pg_upgrade log which should be
>> something like pg_upgrade_server.log?

This is the pg_upgrade_dump_16400.log.

>> How did you get into the 10 cluster to report on the database OID's and
>> names?

After the pg_upgrade failed I was able to start both [...]
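As an aside (mine, not from the thread): pg_upgrade names its per-database
dump logs after the database OID, so the database behind
pg_upgrade_dump_16400.log can be identified directly:

    -- Map the OID from the log file name back to a database.
    SELECT datname FROM pg_database WHERE oid = 16400;
    -- In this thread, that is the "bof" database.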
On 06/10/2018 02:45 PM, Alexander Shutyaev wrote:
Comments inline.

> The error log is like this. Here's its tail:

Is this the regular Postgres log or the pg_upgrade log which should be
something like pg_upgrade_server.log?

> pg_restore: [archiver (db)] could not execute query: ERROR: [...]
The error log is like this. Here's its tail:

pg_restore: executing BLOB 1740737401
pg_restore: WARNING: database with OID 0 must be vacuumed within 103
transactions
HINT: To avoid a database shutdown, execute a database-wide VACUUM in that
database.
You might also need to commit or roll back old prepared transactions.
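For context (my addition, not from the thread), the normal response to
that warning is to check XID age and freeze the oldest database:

    -- How close each database is to wraparound (~2 billion is the ceiling).
    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY xid_age DESC;

    -- Then, connected to the database with the highest age:
    VACUUM FREEZE;

The catch here is that the warning names OID 0 (InvalidOid), which no real
database has, so there is nothing the hint can actually be applied to.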
Some more notes on databases.

Although the pg_upgrade failed, I've decided to check the databases in the
new cluster (10.4). There is no database with oid 0 either. It should also
be noted that some system databases changed their oids while others
retained them.

And one of my databases - sslentry. It had a [...]
No database with oid 0 in 9.6 cluster:

postgres=# select oid, datname from pg_database;
    oid     |  datname
------------+-----------
          1 | template1
      12438 | template0
      16400 | bof
      12439 | postgres
 1016305714 | sslentry
(5 rows)
>> 1) OS and version you are using?

I can provide you with more info. Please tell me what you need. I really
hope someone here can help me somehow solve or work around this, because I
really need to migrate to v10 for its features.

Thanks in advance,
Alexander
Adrian Klaver writes:
> On 06/09/2018 03:46 AM, Alexander Shutyaev wrote:
>> The upgrade operation failed after several hours with the following error:
>> database is not accepting commands to avoid wraparound data loss in
>> database with OID 0
> Do you know which database has an OID of 0? [...]
On 06/09/2018 03:46 AM, Alexander Shutyaev wrote:
Hello!

I've been trying to upgrade a postgresql cluster from 9.6 to 10. I've
executed pg_upgrade with the following options:

/usr/lib/postgresql/10/bin/pg_upgrade -b /usr/lib/postgresql/9.6/bin/
-B /usr/lib/postgresql/10/bin/ -d [...]