Re: pg_basebackup: could not receive data from WAL stream

2018-09-01 Thread greigwise
I should also add that when it fails, it's always right at the very end of
the backup, when it's very nearly done or maybe even after it's done.

Thanks again.

Greig






pg_basebackup: could not receive data from WAL stream

2018-09-01 Thread greigwise
Hello.

On postgresql 10.5, my pg_basebackup is failing with this error:

pg_basebackup: could not receive data from WAL stream: server closed the
connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request

In the postgres log files, I'm seeing:

2018-09-02 00:57:32 UTC bkp_user 5b8b278c.11c3f [unknown] LOG:  terminating
walsender process due to replication timeout

I'm running the following command right on the database server itself:

pg_basebackup -U repl -D /var/tmp/pg_basebackup_20180901 -Ft -z

It seems to be an intermittent problem... I've had it fail or succeed about
50/50.  I even bumped up the wal_sender_timeout to 2000.  One notable thing
is that I'm running on an EC2 instance on AWS.
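
As a hedged aside: wal_sender_timeout is measured in milliseconds when set
without a unit, so a bare 2000 means 2 seconds, well below the 60-second
default, which by itself could explain the walsender being terminated. A
minimal sketch of raising it explicitly, with units, run as superuser on the
server:

ALTER SYSTEM SET wal_sender_timeout = '5min';  -- '0' disables the timeout
SELECT pg_reload_conf();   -- the setting is reloadable, no restart needed
SHOW wal_sender_timeout;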

Any advice would be helpful.

Greig Wise






Re: error in vacuum

2018-09-01 Thread Dave Peticolas
On Sat, Sep 1, 2018 at 7:24 PM Andres Freund  wrote:

> On 2018-09-01 19:02:18 -0700, Dave Peticolas wrote:
> > On Sat, Sep 1, 2018 at 6:56 PM Tom Lane  wrote:
> >
> > > Dave Peticolas  writes:
> > > >>> I'm seeing this error on a table repeatedly:
> > > >>> found update xid 3433096759 from before
> > > relfrozenxid 3498422955
> > >
> > > >> This is a primary, 9.6.8 to be exact.
> > >
> > > This message is known to be associated with several data-corruption
> bugs
> > > we've fixed recently.  You should update to current (9.6.10) sooner not
> > > later.  I don't think that will cure existing corruption for you, but
> > > it should prevent things from getting worse.
> > >
> >
> > Ok, thank you for that information. I will do so. The dump/truncate/restore
> > I mentioned in a later post seems to fix the corruption in a restored DB.
> > Given the nature of the bug, do you think that would fix that table?
>
> Yes, it's fairly likely that it does. You might even get around it by
> just doing a no-op update of the whole table.  Nevertheless you need to
> upgrade, or more of that corruption can occur.
>

Indeed, that does seem to fix the issue and is much simpler, thank you. I'm
going to schedule this upgrade with all due haste.
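
As a hedged illustration of the no-op whole-table update suggested above
(table and column names are hypothetical): rewriting every row gives each
tuple a fresh xmin, after which vacuum can freeze it normally.

UPDATE mytable SET id = id;  -- any column works; values are unchanged
VACUUM FREEZE mytable;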


Re: error in vacuum

2018-09-01 Thread Andres Freund
On 2018-09-01 19:02:18 -0700, Dave Peticolas wrote:
> On Sat, Sep 1, 2018 at 6:56 PM Tom Lane  wrote:
> 
> > Dave Peticolas  writes:
> > >>> I'm seeing this error on a table repeatedly:
> > >>> found update xid 3433096759 from before
> > relfrozenxid 3498422955
> >
> > >> This is a primary, 9.6.8 to be exact.
> >
> > This message is known to be associated with several data-corruption bugs
> > we've fixed recently.  You should update to current (9.6.10) sooner not
> > later.  I don't think that will cure existing corruption for you, but
> > it should prevent things from getting worse.
> >
> 
> Ok, thank you for that information. I will do so. The dump/truncate/restore
> I mentioned in a later post seems to fix the corruption in a restored DB.
> Given the nature of the bug, do you think that would fix that table?

Yes, it's fairly likely that it does. You might even get around it by
just doing a no-op update of the whole table.  Nevertheless you need to
upgrade, or more of that corruption can occur.

Greetings,

Andres Freund



Re: error in vacuum

2018-09-01 Thread Dave Peticolas
On Sat, Sep 1, 2018 at 6:56 PM Tom Lane  wrote:

> Dave Peticolas  writes:
> >>> I'm seeing this error on a table repeatedly:
> >>> found update xid 3433096759 from before
> relfrozenxid 3498422955
>
> >> This is a primary, 9.6.8 to be exact.
>
> This message is known to be associated with several data-corruption bugs
> we've fixed recently.  You should update to current (9.6.10) sooner not
> later.  I don't think that will cure existing corruption for you, but
> it should prevent things from getting worse.
>

Ok, thank you for that information. I will do so. The dump/truncate/restore
I mentioned in a later post seems to fix the corruption in a restored DB.
Given the nature of the bug, do you think that would fix that table?


Re: error in vacuum

2018-09-01 Thread Tom Lane
Dave Peticolas  writes:
>>> I'm seeing this error on a table repeatedly:
>>> found update xid 3433096759 from before relfrozenxid 3498422955

>> This is a primary, 9.6.8 to be exact.

This message is known to be associated with several data-corruption bugs
we've fixed recently.  You should update to current (9.6.10) sooner not
later.  I don't think that will cure existing corruption for you, but
it should prevent things from getting worse.

regards, tom lane



Re: error in vacuum

2018-09-01 Thread Dave Peticolas
On Sat, Sep 1, 2018 at 5:28 PM Dave Peticolas  wrote:

> On Sat, Sep 1, 2018, 5:10 PM Adrian Klaver 
> wrote:
>
>> On 09/01/2018 04:47 PM, Dave Peticolas wrote:
>> > I'm seeing this error on a table repeatedly:
>> >
>> > found update xid 3433096759 from before
>> > relfrozenxid 3498422955
>>
>> Where is this, your primary or the instance you are running from the
>> backup?
>>
>
> This is a primary, 9.6.8 to be exact.
>
> >
>> > What can I do to repair this table?
>>
>
One thing I tried on a database restored from a primary backup that
exhibits the same "found update xid" error is this:

1. pg_dump the table and its referencing tables.
2. truncate the table and its referencing tables.
3. Load the dump file.

Does that seem reasonable?
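
A hedged sketch of those steps, with hypothetical database and table names.
From a shell:

  pg_dump -t mytable -t referencing_table -f fix.sql mydb

Then, in psql, truncating both tables in one statement so the foreign keys
don't block it:

TRUNCATE mytable, referencing_table;
\i fix.sql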


Re: locate DB corruption

2018-09-01 Thread Dave Peticolas
On Sat, Sep 1, 2018 at 5:09 PM Adrian Klaver 
wrote:

> On 09/01/2018 04:45 PM, Dave Peticolas wrote:
>
> > Well restoring from a backup of the primary does seem to have fixed the
> > issue with the corrupt table.
>
> Pretty sure it was not that the table was corrupt but that transaction
> information was missing from pg_clog.
>
> In a previous post you mentioned you ran tar to do the snapshot of
> $PG_DATA.
>
> Was there any error when tar ran the backup that caused you problems?
>

Well the interesting thing about that is that although the bad table was
originally discovered in a DB restored from a snapshot, I subsequently
discovered it in the real-time clone of the primary from which the backups
are made. So somehow the clone's table became corrupted. The same table was
not corrupt on the primary, but I have discovered an error on the primary
-- it's in the thread I posted today. These events seem correlated in time;
I'll have to mine the logs some more.


Re: error in vacuum

2018-09-01 Thread Dave Peticolas
On Sat, Sep 1, 2018, 5:10 PM Adrian Klaver 
wrote:

> On 09/01/2018 04:47 PM, Dave Peticolas wrote:
> > I'm seeing this error on a table repeatedly:
> >
> > found update xid 3433096759 from before
> > relfrozenxid 3498422955
>
> Where is this, your primary or the instance you are running from the
> backup?
>

This is a primary, 9.6.8 to be exact.

>
> > What can I do to repair this table?
> >
> > thanks,
> > dave
> >
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: error in vacuum

2018-09-01 Thread Adrian Klaver

On 09/01/2018 04:47 PM, Dave Peticolas wrote:

I'm seeing this error on a table repeatedly:

found update xid 3433096759 from before
relfrozenxid 3498422955


Where is this, your primary or the instance you are running from the backup?



What can I do to repair this table?

thanks,
dave





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: locate DB corruption

2018-09-01 Thread Adrian Klaver

On 09/01/2018 04:45 PM, Dave Peticolas wrote:

Well restoring from a backup of the primary does seem to have fixed the 
issue with the corrupt table.


Pretty sure it was not that the table was corrupt but that transaction 
information was missing from pg_clog.


In a previous post you mentioned you ran tar to do the snapshot of 
$PG_DATA.


Was there any error when tar ran the backup that caused you problems?


--
Adrian Klaver
adrian.kla...@aklaver.com



Re: error in vacuum

2018-09-01 Thread Dave Peticolas
On Sat, Sep 1, 2018 at 4:47 PM Dave Peticolas  wrote:

> I'm seeing this error on a table repeatedly:
>
> found update xid 3433096759 from before relfrozenxid
> 3498422955
>
> What can I do to repair this table?
>
> This is version 9.6.


error in vacuum

2018-09-01 Thread Dave Peticolas
I'm seeing this error on a table repeatedly:

found update xid 3433096759 from before relfrozenxid
3498422955

What can I do to repair this table?

thanks,
dave


Re: locate DB corruption

2018-09-01 Thread Dave Peticolas
On Fri, Aug 31, 2018 at 8:48 PM Dave Peticolas  wrote:

> On Fri, Aug 31, 2018 at 5:19 PM Adrian Klaver 
> wrote:
>
>> On 08/31/2018 08:51 AM, Dave Peticolas wrote:
>> > On Fri, Aug 31, 2018 at 8:14 AM Adrian Klaver <
>> adrian.kla...@aklaver.com
>> > > wrote:
>> >
>> > On 08/31/2018 08:02 AM, Dave Peticolas wrote:
>> >  > Hello, I'm running into the following error running a large query
>> > on a
>> >  > database restored from WAL replay:
>> >  >
>> >  > could not access status of transaction 330569126
>> >  > DETAIL: Could not open file "pg_clog/0C68": No such file or
>> directory
>> >
>> >
>> > Postgres version?
>> >
>> >
>> > Right! Sorry, that original email didn't have a lot of info. This is
>> > 9.6.9 restoring a backup from 9.6.8.
>> >
>> > Where is the replay coming from?
>> >
>> >
>> >  From a snapshot and WAL files stored in Amazon S3.
>>
>> Seems the process is not creating a consistent backup.
>>
>
> This time, yes. This setup has been working for almost two years with
> probably hundreds of restores in that time. But nothing's perfect I guess :)
>
>
>> How are they being generated?
>>
>
> The snapshots are sent to S3 via a tar process after calling the start
> backup function. I am following the postgres docs here. The WAL files are
> just copied to S3.
>
>
>>
>> > Are you sure you are not working across versions?
>> >
>> >
>> > I am sure, they are all 9.6.
>> >
>> > If not do pg_clog/ and 0C68 actually exist?
>> >
>> >
>> > pg_clog definitely exists, but 0C68 does not. I think I have
>> > subsequently found the precise row in the specific table that seems to
>> > be the problem. Specifically I can select * from TABLE where id = BADID
>> > - 1 or id = BADID + 1 and the query returns. I get the error if I
>> select
>> > the row with the bad ID.
>> >
>> > Now what I'm not sure of is how to fix.
>>
>> One thing I can think of is to rebuild from a later version of your S3
>> data and see if it has all the necessary files.
>>
>
> Yes, I think that's a good idea, I'm trying that.
>
>
>> There is also pg_resetxlog:
>>
>> https://www.postgresql.org/docs/9.6/static/app-pgresetxlog.html
>>
>> I have not used it, so I can not offer much in the way of tips. Just
>> from reading the docs I would suggest stopping the server and then
>> creating a backup of $PG_DATA(if possible) before using pg_resetxlog.
>>
>
> Thanks, I didn't know about that. The primary DB seems OK so hopefully it
> won't be needed.
>

Well restoring from a backup of the primary does seem to have fixed the
issue with the corrupt table.
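
For reference, a hedged sketch of the low-level backup API described in the
quoted exchange, as it exists in 9.6; the label, bucket, and paths are
hypothetical. In non-exclusive mode (third argument false) the session must
stay open until pg_stop_backup, and the labelfile it returns must be saved as
backup_label inside the backup:

SELECT pg_start_backup('nightly', false, false);
-- from a shell, while the session above stays open:
--   tar -czf - "$PGDATA" | aws s3 cp - s3://my-bucket/base.tar.gz
SELECT * FROM pg_stop_backup(false);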


Re: very slow largeobject transfers through JDBC

2018-09-01 Thread Dave Cramer
On Fri, 31 Aug 2018 at 10:15, Mate Varga  wrote:

> I see -- we could try that, though we're mostly using an ORM (Hibernate)
> to do this. Thanks!
>
> On Fri, Aug 31, 2018 at 3:57 PM Dmitry Igrishin  wrote:
>
>> On Fri, 31 Aug 2018 at 16:35, Mate Varga :
>> >
>> > Hi,
>> >
>> > we're fetching binary data from pg_largeobject table. The data is not
>> very large, but we ended up storing it there. If I'm copying the data to a
>> file from the psql console, then it takes X time (e.g. a second), fetching
>> it through the JDBC driver takes at least 10x more. We don't see this
>> difference between JDBC and 'native' performance for anything except
>> largeobjects (and bytea columns, for the record).
>> >
>> > Does anyone have any advice about whether this can be tuned or what the
>> cause is?
>> I don't know the reason for that, but I think it's reasonable and
>> quite simple to call lo_import()/lo_export() via JNI.
>>
>
Can't imagine that's any faster. The driver simply implements the protocol.

Do you have any code to share? Any other information?

Is the JDBC connection significantly further away network-wise?


Dave Cramer

da...@postgresintl.com
www.postgresintl.com
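
As a hedged sketch of the lo_import()/lo_export() route suggested above, done
server-side in SQL rather than via JNI; the OID and path are hypothetical, and
the server-side functions require superuser since they touch the server's
filesystem. psql also has client-side \lo_import / \lo_export equivalents.

SELECT lo_export(16385, '/tmp/blob.bin');  -- write large object 16385 to a server file
SELECT lo_import('/tmp/blob.bin');         -- returns the OID of a new large object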


Re: Rules and decision logic triggered by / acting on JSONB nodes in Postgres

2018-09-01 Thread Guyren Howe
On Sep 1, 2018, at 7:14 , Amit Kothari  wrote:
> 
> Hi all,
> 
> I’m sorry if this question is off-topic for postgres-core, but I didn’t know
> where else to ask.
> 
> We’ve built a workflow and BPM (business process management) tool with pg as 
> the database, and we’d like to explore conditional logic within steps in a 
> workflow being triggered as functions when a certain JSONB node changes.
> 
> An example -> If (form field contains “bob”) then (change state of another 
> object)
> 
> In effect, since all our objects are JSONB - we wanted to “listen” to a 
> change in one JSONB node in order to make a corresponding change in another 
> JSONB node. 
> 
> Can this be done natively in postgres, or even as a pg function?
> 
> In general - how do people approach rules and decision logic in Postgres at
> very large scale, e.g. 100,000 rules that must be evaluated on each commit?
> 
> If it makes more sense - please see the example screenshot on our features 
> page at https://tallyfy.com/features - which visualizes conditional branching.

That *looks* like each team gets their own workflows, so you could restrict 
your rules search to the rules for the team that created the change. Anything 
you could do to restrict the number of rules would help, of course.

One option would be to express rules as an (antecedent, consequent) pair, where 
the antecedent is the argument to json_extract_path and the consequent is the 
rest of the arguments to a function that carries out the change — something 
along those lines, in any event.

Iterating through a small set of such rules would be tolerable. If you really 
do have 100,000 rules that have to be applied to each change, my first thought 
is to use a SAT solver. But even there, you would have to compute the results 
of all your antecedents, so it would depend how many of those you have.

You might use a map-reduce farm, but all the machines are going to be hitting 
your database pretty hard.

You might be able to reduce your 100,000 antecedents to something a bit more 
tractable by finding the overlaps between them and so only computing the shared 
parts of antecedents once.

You might want to use LISTEN-NOTIFY so an external logic engine — perhaps using 
map-reduce, perhaps you just have a set of logic engines that each takes some 
part of that 100,000 rules and just brute forces them — gets to know about the 
changes.

I’d need to know some more details about what you’re up to, to give you a more 
specific answer, but that’s what occurs to me given what you’ve told us.
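
A hedged sketch of the LISTEN-NOTIFY idea: a row trigger that notifies an
external rules engine only when one JSONB path changes. The table (docs),
column (body), and path are all hypothetical.

CREATE OR REPLACE FUNCTION notify_jsonb_change() RETURNS trigger AS $$
BEGIN
  -- fire only when the watched node actually changed
  IF NEW.body #> '{form,field}' IS DISTINCT FROM OLD.body #> '{form,field}' THEN
    PERFORM pg_notify('jsonb_changes',
                      json_build_object('id', NEW.id,
                                        'val', NEW.body #> '{form,field}')::text);
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER docs_jsonb_change
  AFTER UPDATE ON docs
  FOR EACH ROW EXECUTE PROCEDURE notify_jsonb_change();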



RE: TR: redundant constraint_schema

2018-09-01 Thread Olivier Leprêtre
id #15361. Done, thanks!

-----Original Message-----
From: Andreas Kretschmer [mailto:andr...@a-kretschmer.de]
Sent: Saturday, September 1, 2018 19:01
To: pgsql-general@lists.postgresql.org
Subject: Re: TR: redundant constraint_schema



On 01.09.2018 at 18:52, Adrian Klaver wrote:
> On 09/01/2018 09:47 AM, Olivier Leprêtre wrote:
>> Mine is 9.6
>
> I would submit a bug report here:
>
> https://www.postgresql.org/account/login/?next=/account/submitbug/

Olivier, please do that!


Regards, Andreas

-- 
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: TR: redundant constraint_schema

2018-09-01 Thread Andreas Kretschmer




On 01.09.2018 at 18:52, Adrian Klaver wrote:

On 09/01/2018 09:47 AM, Olivier Leprêtre wrote:

Mine is 9.6


I would submit a bug report here:

https://www.postgresql.org/account/login/?next=/account/submitbug/


Olivier, please do that!


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: TR: redundant constraint_schema

2018-09-01 Thread Adrian Klaver

On 09/01/2018 09:47 AM, Olivier Leprêtre wrote:

Mine is 9.6


I would submit a bug report here:

https://www.postgresql.org/account/login/?next=/account/submitbug/



-----Original Message-----
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: Saturday, September 1, 2018 18:45
To: Andreas Kretschmer; pgsql-general@lists.postgresql.org
Subject: Re: TR: redundant constraint_schema

On 09/01/2018 09:27 AM, Andreas Kretschmer wrote:



On 01.09.2018 at 17:50, Olivier Leprêtre wrote:

I notice that a new constraint "table1_col2_fkeyxxx" is created each
time the previous ALTER TABLE ADD COLUMN is called


smells like a bug.

Regards, Andreas



Forgot to include Postgres version, 10.5.

--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com



RE: TR: redundant constraint_schema

2018-09-01 Thread Olivier Leprêtre
Mine is 9.6

-----Original Message-----
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: Saturday, September 1, 2018 18:45
To: Andreas Kretschmer; pgsql-general@lists.postgresql.org
Subject: Re: TR: redundant constraint_schema

On 09/01/2018 09:27 AM, Andreas Kretschmer wrote:
> 
> 
> On 01.09.2018 at 17:50, Olivier Leprêtre wrote:
>> I notice that a new constraint "table1_col2_fkeyxxx" is created each 
>> time the previous ALTER TABLE ADD COLUMN is called
> 
> smells like a bug.
> 
> Regards, Andreas
> 

Forgot to include Postgres version, 10.5.

--
Adrian Klaver
adrian.kla...@aklaver.com




Re: TR: redundant constraint_schema

2018-09-01 Thread Andreas Kretschmer




On 01.09.2018 at 18:44, Adrian Klaver wrote:
Forgot to include Postgres version, 10.5. 


also 9.6 and 11beta1

Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: TR: redundant constraint_schema

2018-09-01 Thread Adrian Klaver

On 09/01/2018 09:27 AM, Andreas Kretschmer wrote:



On 01.09.2018 at 17:50, Olivier Leprêtre wrote:
I notice that a new constraint "table1_col2_fkeyxxx" is created each 
time the previous ALTER TABLE ADD COLUMN is called


smells like a bug.

Regards, Andreas



Forgot to include Postgres version, 10.5.

--
Adrian Klaver
adrian.kla...@aklaver.com



Re: TR: redundant constraint_schema

2018-09-01 Thread Adrian Klaver

On 09/01/2018 09:27 AM, Andreas Kretschmer wrote:



On 01.09.2018 at 17:50, Olivier Leprêtre wrote:
I notice that a new constraint "table1_col2_fkeyxxx" is created each 
time the previous ALTER TABLE ADD COLUMN is called


smells like a bug.


Yeah, a quick test on a database where I have an event trigger:

create table fk_parent(col2 varchar primary key);
NOTICE:  Table public.fk_parent created
NOTICE:  caught CREATE TABLE event on 'public.fk_parent'
NOTICE:  caught CREATE INDEX event on 'public.fk_parent_pkey'


create table fk_child(col1 varchar references fk_parent(col2));
NOTICE:  Table public.fk_child created
NOTICE:  caught CREATE TABLE event on 'public.fk_child'
NOTICE:  caught ALTER TABLE event on 'public.fk_child'

\d fk_child
              Table "public.fk_child"
 Column |       Type        | Collation | Nullable | Default
--------+-------------------+-----------+----------+---------
 col1   | character varying |           |          |
Foreign-key constraints:
    "fk_child_col1_fkey" FOREIGN KEY (col1) REFERENCES fk_parent(col2)

alter table fk_child add column if not exists col1 varchar references
fk_parent(col2);
NOTICE:  column "col1" of relation "fk_child" already exists, skipping
NOTICE:  caught ALTER TABLE event on 'public.fk_child'
ALTER TABLE

\d fk_child
              Table "public.fk_child"
 Column |       Type        | Collation | Nullable | Default
--------+-------------------+-----------+----------+---------
 col1   | character varying |           |          |
Foreign-key constraints:
    "fk_child_col1_fkey" FOREIGN KEY (col1) REFERENCES fk_parent(col2)
    "fk_child_col1_fkey1" FOREIGN KEY (col1) REFERENCES fk_parent(col2)


Regards, Andreas




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: TR: redundant constraint_schema

2018-09-01 Thread Andreas Kretschmer




On 01.09.2018 at 17:50, Olivier Leprêtre wrote:
I notice that a new constraint "table1_col2_fkeyxxx" is created each 
time the previous ALTER TABLE ADD COLUMN is called


smells like a bug.

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




TR: redundant constraint_schema

2018-09-01 Thread Olivier Leprêtre
Hi,

 

Please find a question that didn't get an answer in the pgsql-sql &
pgsql-hackers lists. I hope I'll get an answer here.

 

Thanks,

 

Olivier

 

 

From: Olivier Leprêtre [mailto:o.lepre...@gmail.com]
Sent: Wednesday, August 29, 2018 15:49
To: 'pgsql-...@lists.postgresql.org'
Subject: redundant constraint_schema

 

Hi,

 

Can someone explain why, when a column is not created (ADD COLUMN IF NOT
EXISTS), a redundant constraint is still created from the REFERENCES part?

 

 

I have a patching script that is supposed to add a column if it does not
already exist:

 

ALTER TABLE myschem.table1
  ADD COLUMN IF NOT EXISTS col1 VARCHAR(254) REFERENCES myschem.table2(col2);

 

When col1 already exists, I expected that nothing would happen. But, when
applying the previous query and then querying:

 

select constraint_name from information_schema.key_column_usage where
constraint_schema='myschem'

 

I notice that a new constraint "table1_col2_fkeyxxx" is created each time
the previous ALTER TABLE ADD COLUMN is called (with xxx being a new number
each time)

 

It seems strange to have the second part of the statement executed (the
REFERENCES clause) when the first part (ADD COLUMN) was not. Could it be that
this sort of query executes the REFERENCES clause first?

 

Thanks,

 

Olivier
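
A hedged workaround sketch until the behavior is resolved: add the column
without the inline REFERENCES, then add the foreign key only if it is
missing. Naming the constraint explicitly makes the existence check reliable;
names here follow the original example.

ALTER TABLE myschem.table1 ADD COLUMN IF NOT EXISTS col1 VARCHAR(254);
DO $$
BEGIN
  IF NOT EXISTS (SELECT 1 FROM information_schema.table_constraints
                 WHERE constraint_schema = 'myschem'
                   AND table_name = 'table1'
                   AND constraint_name = 'table1_col1_fkey') THEN
    ALTER TABLE myschem.table1
      ADD CONSTRAINT table1_col1_fkey
      FOREIGN KEY (col1) REFERENCES myschem.table2(col2);
  END IF;
END $$;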

 



Rules and decision logic triggered by / acting on JSONB nodes in Postgres

2018-09-01 Thread Amit Kothari
Hi all,

I’m sorry if this question is off-topic for postgres-core, but I didn’t know
where else to ask.

We’ve built a workflow and BPM (business process management) tool with pg as 
the database, and we’d like to explore conditional logic within steps in a 
workflow being triggered as functions when a certain JSONB node changes.

An example -> If (form field contains “bob”) then (change state of another 
object)

In effect, since all our objects are JSONB - we wanted to “listen” to a change 
in one JSONB node in order to make a corresponding change in another JSONB 
node. 

Can this be done natively in postgres, or even as a pg function?

In general - how do people approach rules and decision logic in Postgres at
very large scale, e.g. 100,000 rules that must be evaluated on each commit?

If it makes more sense - please see the example screenshot on our features page 
at https://tallyfy.com/features - which visualizes conditional branching.

Thanks,
Amit
https://tallyfy.com