Re: [GENERAL] cast issue in WITH RECURSION
> when i create a recursive query and try to add the distances i get a message:
> ERROR: recursive query "edges" column 3 has type numeric(7,3) in
> non-recursive term but type numeric overall.
> My exercise is almost identical to the example in the docs:
> WITH RECURSIVE search_graph(id, link, data, depth, path, cycle) AS (
>     SELECT g.id, g.link, g.data, 1,
>         ARRAY[g.id],
>         false
>     FROM graph g
>   UNION ALL
>     SELECT g.id, g.link,
>         sg.data + g.data, -- altered section, data is numeric(7,3)
>         sg.depth + 1,
>         path || g.id,
>         g.id = ANY(path)
>     FROM graph g, search_graph sg
>     WHERE g.id = sg.link AND NOT cycle
> )
> SELECT * FROM search_graph;

I believe the solution is rather simple; just cast(sg.data + g.data to numeric(7,3))

Alban Hertroys

--

Tried that: cast(sg.data + g.data AS numeric(7,3)), but the same error is produced, even if I cast g.data in the non-recursive section. Same issue if I cast each column individually.

Karl

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] hot standby questions
2017-08-04 5:58 GMT+12:00 Jeff Janes:
> On Wed, Aug 2, 2017 at 8:19 PM, armand pirvu wrote:
>>
>> Hi
>>
>> Just trying to put together the hot_standby setup.
>> All docs I read point to pg_basebackup as the preferred method to set the base.
>> So far so good. But:
>>
>> psql postgres -c "select pg_start_backup('backup')"
>> pg_basebackup -D /var/lib/pgsql/sample -Ft -z -P
>> psql postgres -c "select pg_stop_backup()"
>
> pg_basebackup does the equivalent of pg_start_backup and pg_stop_backup
> for you. It is not helpful, and might even sometimes be harmful, to do
> them yourself when using pg_basebackup.
>
>> Pretty much everywhere I looked, -x is not mentioned to be used.
>>
>> So what gives? What did I miss? It's gotta be something.
>
> That certainly isn't my experience. If you find sites that don't mention
> -x, -X, or --xlog-method, then I would be reluctant to take any of that
> site's other advice seriously.
>
> But note that in version 10, -x will go away and the default will be
> changed so that not specifying anything will be the same as -X stream.
> Perhaps you are reading advice aimed at a future version.
>
> Cheers,
>
> Jeff

I use pg_basebackup every day and the way I do it is:

pg_basebackup -D /destination --checkpoint=fast --xlog-method=stream

The --xlog-method=stream option copies the WAL files as well (to the pg_xlog directory). That works pretty well for me. You can also add the compress option.

Hope this was helpful.

Cheers,
Lucas
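Putting Jeff's and Lucas's advice together, a hedged sketch of seeding a standby in one step might look like the following (host, user, and path are hypothetical; this needs a running primary, so it is untested here). The -R flag writes a minimal recovery.conf so the copy can start directly as a standby, and --xlog-method=stream copies the WAL needed for consistency alongside the data, with no manual pg_start_backup/pg_stop_backup calls:

```
pg_basebackup -h primary.example.com -U replication_user \
    -D /var/lib/pgsql/9.6/standby_data \
    --checkpoint=fast --xlog-method=stream -R -P
```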
[GENERAL] Setting Variables within the PostgreSQL Service Process on AWS Linux
Hi All,

I was hoping someone could point me in the right direction. I want to use Python 3.4 with AWS Linux and PostgreSQL 9.4. By default the AWS Linux OS uses Python 2.7. I am trying to set the following variable in the PostgreSQL startup service:

PYTHONPATH=/usr/lib64/python3.4

Below is a snippet from the service:

# Set defaults for configuration variables
PGENGINE=/usr/pgsql-9.1/bin
PGPORT=5432
PGDATA=/var/lib/pgsql/9.1/data
PGLOG=/var/lib/pgsql/9.1/pgstartup.log

lockfile="/var/lock/subsys/${NAME}"
pidfile="/var/run/${NAME}.pid"
PYTHONPATH=/usr/lib64/python3.4

# Override defaults from /etc/sysconfig/pgsql if file is present
[ -f /etc/sysconfig/pgsql/${NAME} ] && . /etc/sysconfig/pgsql/${NAME}

export PGDATA
export PGPORT
export PYTHONPATH

---

The variable is never loaded into the process environment. When I do the same thing on CentOS 7 it works fine. The only way I can get the PostgreSQL PID to see the variable is to set a global one in the /etc/profile.d directory. This unfortunately breaks AWS Linux utilities that use Python 2.7.

Any help would be appreciated.

Regards,
Mark Street
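Since the init script quoted above sources /etc/sysconfig/pgsql/${NAME} after setting its defaults, one thing worth ruling out is whether that override file exists and resets PYTHONPATH. As a hedged sketch (the file name is hypothetical, whatever ${NAME} expands to on that box), the variable can also be set in the override file itself, where it wins over the script's defaults and still gets picked up by the later export:

```shell
# /etc/sysconfig/pgsql/postgresql-9.4 -- hypothetical override file name;
# the init script sources this after setting its defaults, so a value
# here takes precedence, and the script's "export PYTHONPATH" publishes it.
PYTHONPATH=/usr/lib64/python3.4
export PYTHONPATH
```

Another possibility to check: if the script ultimately launches the server via `su - postgres` (a login shell), the environment is reset at that point and the variable would have to survive that boundary too.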
Re: [GENERAL] cast issue in WITH RECURSION
> On 3 Aug 2017, at 20:22, k b wrote:
>
> when i create a recursive query and try to add the distances i get a message:
> ERROR: recursive query "edges" column 3 has type numeric(7,3) in
> non-recursive term but type numeric overall.
> My exercise is almost identical to the example in the docs:
> WITH RECURSIVE search_graph(id, link, data, depth, path, cycle) AS (
>     SELECT g.id, g.link, g.data, 1,
>         ARRAY[g.id],
>         false
>     FROM graph g
>   UNION ALL
>     SELECT g.id, g.link,
>         sg.data + g.data, -- altered section, data is numeric(7,3)
>         sg.depth + 1,
>         path || g.id,
>         g.id = ANY(path)
>     FROM graph g, search_graph sg
>     WHERE g.id = sg.link AND NOT cycle
> )
> SELECT * FROM search_graph;

I believe the solution is rather simple; just cast(sg.data + g.data to numeric(7,3))

Alban Hertroys
--
If you can't see the forest for the trees, cut the trees and you'll find there is no forest.
Re: [GENERAL] cast issue in WITH RECURSION
On Thu 2017-08-03, k b wrote:
Subject: cast issue in WITH RECURSION
To: pgsql-general@postgresql.org
Date: Thursday 3 August 2017 20:22

Hi.

I use PostgreSQL 9.6.3. I have made a small graph with nodes and edges. Each edge has a distance attribute of type numeric(7,3).

When I create a recursive query and try to add the distances I get a message:

ERROR: recursive query "edges" column 3 has type numeric(7,3) in non-recursive term but type numeric overall.

If I alter the column in the edges table to integer, this message does not occur. But if I try to cast all references to the distance to the type numeric(7,3), it still does not work.

It is an almost identical case to https://www.postgresql.org/message-id/E1UEqGY-Qp-Po%40wrigleys.postgresql.org; the author there writes it is solvable, but it seems not, to me.

Any idea how to solve it? If it is a known bug, please add it to the documentation in section https://www.postgresql.org/docs/9.6/static/queries-with.html

My exercise is almost identical to the example in the docs:

WITH RECURSIVE search_graph(id, link, data, depth, path, cycle) AS (
    SELECT g.id, g.link, g.data, 1,
        ARRAY[g.id],
        false
    FROM graph g
  UNION ALL
    SELECT g.id, g.link,
        sg.data + g.data, -- altered section, data is numeric(7,3)
        sg.depth + 1,
        path || g.id,
        g.id = ANY(path)
    FROM graph g, search_graph sg
    WHERE g.id = sg.link AND NOT cycle
)
SELECT * FROM search_graph;

only the column data would be called distance and be of type numeric(7,3).

Many thanks in advance,
Karl

COMMENT ADDED: changing the column to real or double precision allows the WITH RECURSIVE to run just fine with no casting at all. I can live with this, but I would prefer to use exact data types over inexact ones.
[GENERAL] cast issue in WITH RECURSION
Hi.

I use PostgreSQL 9.6.3. I have made a small graph with nodes and edges. Each edge has a distance attribute of type numeric(7,3).

When I create a recursive query and try to add the distances I get a message:

ERROR: recursive query "edges" column 3 has type numeric(7,3) in non-recursive term but type numeric overall.

If I alter the column in the edges table to integer, this message does not occur. But if I try to cast all references to the distance to the type numeric(7,3), it still does not work.

It is an almost identical case to https://www.postgresql.org/message-id/E1UEqGY-Qp-Po%40wrigleys.postgresql.org; the author there writes it is solvable, but it seems not, to me.

Any idea how to solve it? If it is a known bug, please add it to the documentation in section https://www.postgresql.org/docs/9.6/static/queries-with.html

My exercise is almost identical to the example in the docs:

WITH RECURSIVE search_graph(id, link, data, depth, path, cycle) AS (
    SELECT g.id, g.link, g.data, 1,
        ARRAY[g.id],
        false
    FROM graph g
  UNION ALL
    SELECT g.id, g.link,
        sg.data + g.data, -- altered section, data is numeric(7,3)
        sg.depth + 1,
        path || g.id,
        g.id = ANY(path)
    FROM graph g, search_graph sg
    WHERE g.id = sg.link AND NOT cycle
)
SELECT * FROM search_graph;

only the column data would be called distance and be of type numeric(7,3).

Many thanks in advance,
Karl
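The error says the non-recursive term carries the typmod numeric(7,3) while the union as a whole resolves to plain numeric (the type of the sum in the recursive term). One workaround sometimes suggested for this class of error, sketched here untested, is the opposite of casting the recursive term: strip the typmod from the non-recursive term instead, so both branches agree on plain numeric. Note the cast is deliberately to bare numeric, without the (7,3):

```sql
WITH RECURSIVE search_graph(id, link, data, depth, path, cycle) AS (
    SELECT g.id, g.link,
        g.data::numeric,   -- cast away the (7,3) typmod so both terms match
        1,
        ARRAY[g.id],
        false
    FROM graph g
  UNION ALL
    SELECT g.id, g.link,
        sg.data + g.data,
        sg.depth + 1,
        path || g.id,
        g.id = ANY(path)
    FROM graph g, search_graph sg
    WHERE g.id = sg.link AND NOT cycle
)
SELECT * FROM search_graph;
```

Values stored in the column keep their numeric(7,3) precision; only the declared type of the CTE column loosens.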
Re: [GENERAL] hot standby questions
On Wed, Aug 2, 2017 at 8:19 PM, armand pirvu wrote:
>
> Hi
>
> Just trying to put together the hot_standby setup.
> All docs I read point to pg_basebackup as the preferred method to set the base.
> So far so good. But:
>
> psql postgres -c "select pg_start_backup('backup')"
> pg_basebackup -D /var/lib/pgsql/sample -Ft -z -P
> psql postgres -c "select pg_stop_backup()"

pg_basebackup does the equivalent of pg_start_backup and pg_stop_backup for you. It is not helpful, and might even sometimes be harmful, to do them yourself when using pg_basebackup.

> Pretty much everywhere I looked, -x is not mentioned to be used.
>
> So what gives? What did I miss? It's gotta be something.

That certainly isn't my experience. If you find sites that don't mention -x, -X, or --xlog-method, then I would be reluctant to take any of that site's other advice seriously.

But note that in version 10, -x will go away and the default will be changed so that not specifying anything will be the same as -X stream. Perhaps you are reading advice aimed at a future version.

Cheers,

Jeff
Re: [GENERAL] select md5 result set
On Wed, Aug 2, 2017 at 4:25 PM, David G. Johnston <david.g.johns...@gmail.com> wrote:
> On Wed, Aug 2, 2017 at 3:42 PM, Peter Koukoulis wrote:
>>
>> SQL> select dbms_sqlhash.gethash('select x,y from test1',2) as md5_value
>> from dual;
>>
>> MD5_VALUE
>> 9FDA7FA725B783172CA371DA04AD5754
>>
>> Can I do something similar in PostgreSQL?
>
> Similar.
>
> SELECT md5(string_agg(vals::text, ''))
> FROM ( VALUES (1, 2), (2, 3) ) vals (x, y)

That is going to build up the entire string in memory, so it will fail if the text representation of the entire table doesn't fit in 1GB. I don't see any feature in PostgreSQL for calculating hashes over streaming data, but it wouldn't be too hard to create something in plperl, for example, to do that. You would have to make sure the query always returns rows in the same order (I don't know if Oracle's function handles that for you) and that things like datestyle and timezone don't cause differences.

You could use something like:

\copy (select * from blah order by something) to program 'md5sum' binary

but I don't know how you would get the output back into your program once it shows up on your screen.

Cheers, Jeff
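The streaming-hash idea Jeff describes can also live on the client side. A minimal sketch in Python (the function name and the row text format are my own choices, not anything PostgreSQL defines): hash each row's text form incrementally with hashlib, so the full result set never has to be concatenated in memory the way string_agg does.

```python
import hashlib

def md5_of_rows(rows):
    """Incrementally hash the text form of each row, so the whole
    result set never has to sit in memory at once."""
    h = hashlib.md5()
    for row in rows:
        # Build a row literal like "(1,2)" and feed it to the running hash.
        h.update(("(" + ",".join(str(c) for c in row) + ")").encode())
    return h.hexdigest()

# Fed from a list here; in practice the rows could come from a
# server-side (named) cursor so they stream from the database
# instead of accumulating client-side.
print(md5_of_rows([(1, 2), (2, 3)]))
```

As with Jeff's caveats, the result is only reproducible if the query orders its rows deterministically and the text rendering of each value is stable across sessions.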
Re: [GENERAL] Check if prepared statement exist?
Igor Korot writes:
> Is there a way to do such a check from the libpq?

I think the pg_prepared_statements view will help you.

regards, tom lane
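As a hedged sketch of Tom's suggestion (the statement name 'my_stmt' is hypothetical): the view can be queried over the same connection as any ordinary query, and it only lists statements prepared in the current session, which is exactly the scope PQprepare operates in.

```sql
-- True if a statement named 'my_stmt' has already been prepared
-- in this session ('my_stmt' is a placeholder name).
SELECT EXISTS (
    SELECT 1
    FROM pg_prepared_statements
    WHERE name = 'my_stmt'
);
```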
[GENERAL] Check if prepared statement exist?
Hi, ALL,

Is there a way to do such a check from libpq? I'm trying to call a function from 2 different places in the application and want to see if I have already made the call to PQprepare().

Thank you.
Re: [GENERAL] unexpected pageaddr
Well, this reproduces with pg_basebackup. Disturbing. I wonder what else is lurking around.

> On Aug 2, 2017, at 10:34 PM, armand pirvu wrote:
>
> Hi
>
> Setting up the hot_standby, the only way I could get the base remote
> cluster running was by including pg_xlog.
> Now, using rsync as opposed to pg_basebackup seems ok, aka:
>
> psql postgres -c "select pg_start_backup('backup')"
> rsync -a /var/lib/pgsql/9.5/data postgres@192.168.5.252:/var/lib/pgsql/9.5
> psql postgres -c "select pg_stop_backup()"
>
> But:
>
> cp: cannot stat ‘/var/lib/pgsql/pg_xlog_archive/00010078’: No such file or directory
> < 2017-08-02 22:30:40.949 CDT >LOG: unexpected pageaddr 0/3D00 in log segment 00010078, offset 0
>
> and later:
>
> cp: cannot stat ‘/var/lib/pgsql/pg_xlog_archive/00010079’: No such file or directory
> < 2017-08-02 22:30:40.949 CDT >LOG: unexpected pageaddr 0/3D00 in log segment 00010079, offset 0
> cp: cannot stat ‘/var/lib/pgsql/pg_xlog_archive/00010079’: No such file or directory
>
> Things seem okay, but the unexpected pageaddr error is a bit unsettling.
>
> Any thoughts? Something to be really worried about? Did I miss it again? :))
>
> Thanks
> Armand
Re: [GENERAL] Would you add a --dry-run to pg_restore?
On Wed, Aug 2, 2017 at 11:49 AM, Tom Lane wrote:
> Edmundo Robles writes:
>> I mean, to verify the integrity of a backup i do:
>> gunzip -c backup_yesterday.gz | pg_restore -d my_database && echo "backup_yesterday is OK"
>>
>> but my_database's size, uncompressed, is too big, more than 15G, and
>> sometimes i have no space to restore it, so i must always declutter my disk first.
>>
>> It would be great to have a dry run option, because the time to verify
>> would be reduced a lot and it would save space on disk, because it would
>> just execute with no writes to disk.
>
> What do you imagine a dry run option would do?
>
> If you just want to see if the file contains obvious corruption,
> you could do
>
> pg_restore file >/dev/null
>
> and see if it prints any complaints on stderr. If you want to have
> confidence that the file would actually restore (and that there aren't
> e.g. unique-index violations or foreign-key violations in the data),
> I could imagine a mode where pg_restore wraps its output in "begin" and
> "rollback". But that's not going to save any disk space, or time,
> compared to doing a normal restore into a scratch database.
>
> I can't think of any intermediate levels of verification that wouldn't
> involve a huge amount of work to implement ... and they'd be unlikely
> to catch interesting problems in practice. For instance, I doubt that
> syntax-checking but not executing the SQL coming out of pg_restore would
> be worth the trouble. If an archive is corrupt enough that it contains
> bad SQL, it probably has problems that pg_restore would notice anyway.
> Most of the restore failures that we hear about in practice would not be
> detectable without actually executing the commands, because they involve
> problems like issuing commands in the wrong order.

The vast majority of my restore issues are dependency problems (for example, the postgis extension not being present). A distant second place would be pg_restore's inability to do things in the proper order, or gaps in the dump feature itself (for example, a cast between two built-in types, at least back in the day).

A good reasonable test for all of those cases with the current tools is to do a schema-only restore (which should not take long in most cases). If you get past that step, there is an exceptionally high probability that the restore will succeed, aside from some controllable factors like running out of space. There are some rare known considerations that could cause a data load to fail. For example, a unique index on a floating point column can dump but not load if two binary-distinct values render to the same string. I've never seen this in practice, however.

So I'd argue to just use that (schema-only) feature for pre-load verification if you're paranoid.

merlin
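Merlin's schema-only pre-flight can be sketched as a short pipeline (database and file names are hypothetical, and this needs a reachable server, so it is an untested sketch rather than a recipe):

```
# Restore only the schema into a throwaway database first; if this
# succeeds, a full restore is very likely to as well.
createdb scratch_check
gunzip -c backup_yesterday.gz | pg_restore --schema-only -d scratch_check \
    && echo "schema restored OK"
dropdb scratch_check
```

The schema is tiny compared to the data, so this sidesteps the 15G disk-space problem that motivated the thread.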
Re: [GENERAL] Would you add a --dry-run to pg_restore?
On 02/08/17, Steve Atkins (st...@blighty.com) wrote:
> > On Aug 2, 2017, at 9:02 AM, Edmundo Robles wrote:
> >
> > I mean, to verify the integrity of a backup i do:
> >
> > gunzip -c backup_yesterday.gz | pg_restore -d my_database && echo
> > "backup_yesterday is OK"
> >
> > but my_database's size, uncompressed, is too big, more than 15G, and
> > sometimes i have no space to restore it, so i must always
> > declutter my disk first. ...
>
> If the gunzip completes successfully then the backups weren't
> corrupted and the disk is readable. They're very likely to be "good"
> unless you have a systematic problem with your backup script.
>
> You could then run that data through pg_restore, redirecting the
> output to /dev/null, to check that the compressed file actually came
> from pg_dump. (gunzip backup_yesterday.gz | pg_restore >/dev/null)

A couple of extra steps you can add to avoid a full restore (which is best) are to do a file hash check as part of the verification, and to do something like adding a token to the database just before dumping, then verifying that. We do something like this:

rory:~/db$ gpg -d dump_filename.sqlc.gpg | \
    pg_restore -Fc --data-only --schema audit | \
    grep -A 1 "COPY audit"

output

> COPY audit (tdate) FROM stdin;
2017-04-25

Cheers
Rory
Re: [GENERAL] Would you add a --dry-run to pg_restore?
On Wed, Aug 2, 2017 at 7:10 PM, Edmundo Robles wrote:
>
> I imagine pg_restore could execute the instructions in the dump but not
> write to disk, just like David said: "tell me what is going to happen but
> don't actually do it".

IIRC pg_restore does not execute SQL fully. It just passes the commands to the server when in text mode, like psql, and builds some commands and sends them to the server to execute when in custom/tar mode. I doubt it has the ability to validate the contents of the dump.

>> Edmundo Robles writes:
>> > I mean, to verify the integrity of a backup i do:
>> > gunzip -c backup_yesterday.gz | pg_restore -d my_database && echo
>> > "backup_yesterday is OK"

I also think that if he is piping the dump, it must be text mode; I seem to recall the custom format needs seekable files, though I'm not too sure about tar (it should not). In this case, as previously suggested, a simple gunzip -t is enough to verify backup file integrity, but checking internal correctness is very difficult (as it may even depend on server configuration, i.e., needing some predefined users / locales / encodings).

Francisco Olarte.
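Francisco's gunzip -t check needs no database at all. A self-contained demonstration (the file here is a throwaway stand-in for the real backup_yesterday.gz): -t tests the archive's internal checksums without extracting anything to disk, so it answers "is the file readable and uncorrupted?" for free.

```shell
# Make a throwaway gzip file and verify its integrity without extracting it.
tmp=/tmp/backup_demo_$$.gz
echo "pretend dump contents" | gzip > "$tmp"

if gunzip -t "$tmp"; then
    echo "archive is intact"
else
    echo "archive is corrupt"
fi

rm -f "$tmp"
```

For the real backup this becomes simply `gunzip -t backup_yesterday.gz`; it validates the compression layer only, not whether the contents are a restorable pg_dump archive.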
Re: [GENERAL] Do not INSERT if UPDATE fails
On Wed, Aug 2, 2017 at 6:23 PM, Scott Marlowe wrote:
> Does insert's "on conflict" clause not work for this usage?

Did you even bother to read the queries? He is using two different tables.

Francisco Olarte.
[GENERAL] standby database crash
++ Forwarding to pgsql-general group.

-- Forwarded message --
From: Seong Son (US)
Date: Thu, Aug 3, 2017 at 12:48 AM
Subject: standby database crash
To: "pgadmin-supp...@lists.postgresql.org" <pgadmin-supp...@lists.postgresql.org>

Hello,

I've posted this to the legacy list but learned that there's a new list, so here it is.

I have a client who has streaming replication set up, with the primary in one city and the standby in another, both identical servers running PostgreSQL 9.6 on Windows Server 2012. They have some network issues, which cause the connection from the primary to the standby to drop sometimes. Recently the standby crashed with the following log, and it could not be restarted:

2017-07-18 09:21:13 UTC FATAL: invalid memory alloc request size 4148830208
2017-07-18 09:21:14 UTC LOG: startup process (PID 5608) exited with exit code 1
2017-07-18 09:21:14 UTC LOG: terminating any other active server processes
2017-07-18 09:21:14 UTC LOG: database system is shut down

The last entry from pg_xlogdump shows the following:

pg_xlogdump: FATAL: error in WAL record at D5/D1BD5FD0: unexpected pageaddr D1/E7BD6000 in log segment 00D500D1, offset 12410880

So my questions are: could an old WAL segment being resent through the network cause a crash like this? Shouldn't PostgreSQL be able to handle out-of-order WAL segments instead of just crashing? And what would be the best way to recover the standby server? Resynching the entire database seems too time consuming.

Thanks in advance for any info.

-Seong