Re: [GENERAL] question about replication slots
Michael Paquier wrote:
> On Sat, Dec 5, 2015 at 2:35 AM, Andreas Kretschmer wrote:
> > imagine a streaming replication using physical replication slots. And
> > sometime a fail over. All okay. I take a basebackup and rebuild the old
> > master as slave.
> >
> > Is there a risk that the new slave contains active replication slots but
> > no listener on it?
>
> Yes. This would retain WAL on the standby indefinitely if you are not
> careful.
>
> > What have i to consider?
>
> You had better remove the contents of pg_replslot/ in the backup
> taken. Even if you include them, there are high chances that those
> will be useless at the end.
>
> So, quoting the docs
> (http://www.postgresql.org/docs/devel/static/continuous-archiving.html):

Thx for the answer!

Andreas
--
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect. (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly." (unknown)
Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] question about replication slots
On Sat, Dec 5, 2015 at 2:35 AM, Andreas Kretschmer wrote:
> imagine a streaming replication using physical replication slots. And
> sometime a fail over. All okay. I take a basebackup and rebuild the old
> master as slave.
>
> Is there a risk that the new slave contains active replication slots but
> no listener on it?

Yes. This would retain WAL on the standby indefinitely if you are not careful.

> What have i to consider?

You had better remove the contents of pg_replslot/ in the backup
taken. Even if you include them, there are high chances that those
will be useless at the end.

So, quoting the docs
(http://www.postgresql.org/docs/devel/static/continuous-archiving.html):

    It is often a good idea to also omit from the backup dump the files
    within the cluster's pg_replslot/ directory, so that replication slots
    that exist on the master do not become part of the backup. Otherwise,
    the subsequent use of the backup to create a standby may result in
    indefinite retention of WAL files on the standby, and possibly bloat on
    the master if hot standby feedback is enabled, because the clients that
    are using those replication slots will still be connecting to and
    updating the slots on the master, not the standby. Even if the backup
    is only intended for use in creating a new master, copying the
    replication slots isn't expected to be particularly useful, since the
    contents of those slots will likely be badly out of date by the time
    the new master comes on line.

--
Michael
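[Editorial note: to complement removing pg_replslot/ from the backup, any slots that were copied over anyway can also be dropped from SQL once the rebuilt standby is running. A minimal sketch, assuming the leftover slots have no active consumers:]

```sql
-- On the rebuilt standby: inspect slots carried over in the base backup
SELECT slot_name, active, restart_lsn FROM pg_replication_slots;

-- Drop the inactive ones so they stop retaining WAL indefinitely
SELECT pg_drop_replication_slot(slot_name)
FROM pg_replication_slots
WHERE NOT active;
```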
Re: [GENERAL] bdr manual cleanup required
Did you try this: https://github.com/2ndQuadrant/bdr/issues/127 ?

BEGIN;
SET LOCAL bdr.skip_ddl_locking = on;
SET LOCAL bdr.permit_unsafe_ddl_commands = on;
SET LOCAL bdr.skip_ddl_replication = on;
SECURITY LABEL FOR bdr ON DATABASE mydb IS NULL;
DELETE FROM bdr.bdr_connections;
DELETE FROM bdr.bdr_nodes;
SELECT bdr.bdr_connections_changed();
COMMIT;

SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = current_database()
  AND application_name LIKE '%): perdb';

So far I have never run into a situation where I had to destroy all the
databases on all nodes.

Sylvain

2015-12-04 20:40 GMT+01:00 Selim Tuvi:
> I am trying to repair a broken bdr cluster setup and so far everything I
> tried failed. Under the original node that ran bdr.bdr_group_create I am
> getting the following error:
>
> 2015-12-04 19:34:29.063 UTC,,,22991,,5661eac4.59cf,1,,2015-12-04 19:34:28
> UTC,3/0,0,ERROR,55000,"previous init failed, manual cleanup is
> required","Found bdr.bdr_nodes entry for bdr (6224504646761731677,1,16389,)
> with state=i in remote bdr.bdr_nodes","Remove all replication identifiers
> and slots corresponding to this node from the init target node then drop
> and recreate this database and try again",,,"bdr
> (6224504646761731677,1,16389,): perdb"
>
> Is there a way to get the cluster in a correct state without having to
> drop the db?
>
> Thanks
> -Selim
Re: [GENERAL] Table with invalid page blocks
On 12/4/15 11:34 AM, Gerdan Rezende dos Santos wrote:
> Does someone have a way of identifying all invalid blocks of a PostgreSQL
> table? A plpgsql function, a tool, somehow? I found one solution on
> http://www.postgresql.org/message-id/1184245756.24101.178.ca...@coppola.muc.ecircle.de,
> but I can not change it in order to identify all defective blocks at once.

If your question is "How can I modify that function to report ALL invalid
CTIDs?" then you probably need to use a cursor and wrap the FETCH in a
BEGIN/END block with an exception handler. Something like:

DECLARE
    curs refcursor;
    rec record;
    last_good tid;
    bad boolean := false;
BEGIN
    OPEN curs NO SCROLL FOR EXECUTE 'SELECT ctid FROM ' || table_name;
    LOOP
        BEGIN
            FETCH curs INTO rec;
            EXIT WHEN NOT FOUND;
            IF bad THEN
                RAISE WARNING 'Next good CTID %', rec.ctid;
                bad := false;
            END IF;
            last_good := rec.ctid;
        EXCEPTION WHEN OTHERS THEN
            RAISE WARNING E'Error %: %\nLast good CTID %',
                SQLSTATE, SQLERRM, last_good;
            bad := true;
        END;
    END LOOP;
END;

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
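[Editorial note: a sketch of the same idea wrapped in a callable function. The function name find_bad_ctids and the use of format() are additions for illustration, not part of the original fragment:]

```sql
CREATE OR REPLACE FUNCTION find_bad_ctids(table_name regclass)
RETURNS void LANGUAGE plpgsql AS $$
DECLARE
    curs refcursor;
    rec record;
    last_good tid;
    bad boolean := false;
BEGIN
    -- Scan the table row by row; each FETCH runs in its own subtransaction
    -- so a damaged tuple raises a warning instead of aborting the scan.
    OPEN curs NO SCROLL FOR EXECUTE format('SELECT ctid FROM %s', table_name);
    LOOP
        BEGIN
            FETCH curs INTO rec;
            EXIT WHEN NOT FOUND;
            IF bad THEN
                RAISE WARNING 'Next good CTID %', rec.ctid;
                bad := false;
            END IF;
            last_good := rec.ctid;
        EXCEPTION WHEN OTHERS THEN
            RAISE WARNING E'Error %: %\nLast good CTID %',
                SQLSTATE, SQLERRM, last_good;
            bad := true;
        END;
    END LOOP;
END;
$$;

-- Usage: SELECT find_bad_ctids('mytable');
```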
Re: [GENERAL] FATAL: unable to read data from DB node 0
On 11/27/15 5:45 PM, DrakoRod wrote:
> Hi everybody
>
> I have the following architecture with pgpool (streaming replication
> mode), 4 nodes:
>
> 1 master node
> 2 standby nodes
> 1 pgpool node
>
> I've disabled load balancing, because some clients reported problems with
> it: the load balancer sometimes sent queries to standby nodes that had not
> yet recovered the data, so the queries failed. But that topic is for
> another thread.
>
> When I run a stress test with hammerdb I see the following errors on the
> pgpool node:
>
> 2015-11-27 16:48:21: pid 20190: FATAL: unable to read data from DB node 0
> 2015-11-27 16:48:21: pid 20190: DETAIL: EOF encountered with backend
> 2015-11-27 16:48:21: pid 19182: LOG: child process with pid: 20190 exits with status 256
> 2015-11-27 16:48:21: pid 19182: LOG: fork a new child process with pid: 20298
> 2015-11-27 16:48:21: pid 20163: FATAL: unable to read data from DB node 0
> 2015-11-27 16:48:21: pid 20163: DETAIL: EOF encountered with backend
> 2015-11-27 16:48:21: pid 19182: LOG: child process with pid: 20163 exits with status 256
> 2015-11-27 16:48:21: pid 19182: LOG: fork a new child process with pid: 20299
>
> This happens in all children, so hammerdb stops the test because all
> connections are lost.

I've never run pgPool myself, but maybe this means the connection was
broken for some reason. Are there related errors in the Postgres server log
for that node?

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Re: [GENERAL] BDR: ALTER statement hanging
If you're not sure what's going on on a node, look at its logs. The
background worker API and PostgreSQL's lack of autonomous transactions make
it quite challenging for BDR workers to capture logs and expose them to
users at the SQL level. So always, if in doubt, examine the log files.
Re: [GENERAL] plperlu stored procedure seems to freeze for a minute
On 12/4/15 4:15 AM, Peter J. Holzer wrote:
> On 2015-12-03 10:02:18 -0500, Tom Lane wrote:
> > "Peter J. Holzer" writes:
> > > Can those signals be safely ignored? Just blocking them (so that they
> > > are delivered after the UDF finishes) might be safer. But even that
> > > may be a problem: If the UDF then executes some SQL, could that rely
> > > on signals being delivered? I have no idea.
> >
> > The minute you start fooling with a backend's signal behavior, we're
> > going to politely refuse to support whatever breakage you run into.
>
> As I understood Jim he was talking about possible changes to postgresql
> to shield UDFs from those signals, not something the author of a UDF
> should do.

No, I was suggesting that:

1) It might be possible for Postgres error handling to detect that a signal
had been received since entering a UDF if the UDF reported an error, and to
make that information available as part of the standard error report.

2) If there are alarms being set specifically for authentication, they
should probably be cancelled after authentication is successful.

I certainly do NOT think it would be wise to disable all signal handling
during UDF execution. Just for starters, that means that if a UDF went into
an infinite loop your only way to recover would be to PANIC the entire
database. It would probably create a bunch of other problems as well. In
other words, UDFs *must* be capable of handling an interrupt.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Re: [GENERAL] Convert from hex to string
On 11/26/15 1:12 AM, Francisco Olarte wrote:
> Anyway, I think people should be told to respect types, and people should
> be taught that strings are sequences of characters; to do what people
> think of as 'hex encoding' (two digits per *byte*, no delimiter), you
> need to first transform the string to bytes, then hex-encode that.

BTW, it wouldn't be that difficult to create a hex data type that worked
like bytea but accepted only hex strings. Likewise for a base64 type. The
only downside is you'd probably have to do it in C, because no other
language can handle cstring and I don't think there are any built-in
conversion functions with the correct parameters.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
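[Editorial note: the "string → bytes → hex" round trip Francisco describes can be done in Postgres with the built-in convert_to/encode and decode/convert_from functions. A sketch; the sample text is arbitrary:]

```sql
-- text -> bytea (with an explicit encoding) -> hex string
SELECT encode(convert_to('Grüezi', 'UTF8'), 'hex');
-- returns '4772c3bc657a69' (note the two bytes c3 bc for the single char ü)

-- and back: hex string -> bytea -> text
SELECT convert_from(decode('4772c3bc657a69', 'hex'), 'UTF8');
-- returns 'Grüezi'
```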
Re: [GENERAL] Generating an XSD file from an existing database
> On Dec 6, 2015, at 2:30 PM, Blake McBride wrote:
>
> Greetings,
>
> I need an XML specification for my existing schema - an XSD file.
> Reading the docs I see schema_to_xml but I do not understand:
>
> A. Will that produce the XSD file I seek?
>
> B. I do not understand the documentation. Can I do this through psql?
> I need an exact example if possible.
>
> Thank you.
>
> Blake McBride

Yes, those functions are available through psql.

select schema_to_xml()
and
select schema_to_xmlschema()

are what you're after. Redirect the output of each into separate files.
Re: [GENERAL] Generating an XSD file from an existing database
On 12/06/2015 01:59 PM, Blake McBride wrote:
> On Sun, Dec 6, 2015 at 3:47 PM, Rob Sargent wrote:
> > select schema_to_xml() and select schema_to_xmlschema() are what
> > you're after. Redirect the output of each into separate files.
>
> Thanks. My problem is understanding the arguments. I am hoping to get an
> exact example.
>
> Server: localhost
> Database: wmco
> Schema: public
>
> Once I am in psql, exactly what do I type? (I just don't understand the
> docs.)

This will produce a big file. On a relatively small database the below
created a 5.6 GB file:

\o schema_file.xml
select schema_to_xml('public', 't', 't', 'text');
\o

Explanation of the above:

\o redirects further output to the file specified.
Then run the xml function.
\o on its own resets output to stdout.

If you want to use the table form, then:

\o cell_per.xml
select table_to_xml('cell_per'::regclass, 't', 't', 'text');
\o

> Thank you! Blake

--
Adrian Klaver
adrian.kla...@aklaver.com
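[Editorial note: since the original question asked for an XSD, note that schema_to_xml exports the schema's data as XML; the companion function schema_to_xmlschema is the one that emits the XML Schema (XSD) document. A sketch using the same arguments as the example above; the output filename is arbitrary:]

```sql
\o wmco_schema.xsd
select schema_to_xmlschema('public', 't', 't', 'text');
\o
```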
[GENERAL] Generating an XSD file from an existing database
Greetings,

I need an XML specification for my existing schema - an XSD file. Reading
the docs I see schema_to_xml but I do not understand:

A. Will that produce the XSD file I seek?

B. I do not understand the documentation. Can I do this through psql? I
need an exact example if possible.

Thank you.

Blake McBride
Re: [GENERAL] Generating an XSD file from an existing database
On Sun, Dec 6, 2015 at 3:47 PM, Rob Sargent wrote:
> > On Dec 6, 2015, at 2:30 PM, Blake McBride wrote:
> >
> > Greetings,
> >
> > I need an XML specification for my existing schema - an XSD file.
> > Reading the docs I see schema_to_xml but I do not understand:
> >
> > A. Will that produce the XSD file I seek?
> >
> > B. I do not understand the documentation. Can I do this through psql?
> > I need an exact example if possible.
> >
> > Thank you.
> >
> > Blake McBride
>
> Yes, those functions are available through psql
> select schema_to_xml()
> and
> select schema_to_xmlschema() are what you're after.
>
> Redirect the output of each into separate files.

Thanks. My problem is understanding the arguments. I am hoping to get an
exact example.

Server: localhost
Database: wmco
Schema: public

Once I am in psql, exactly what do I type? (I just don't understand the
docs.)

Thank you!

Blake