Re: [GENERAL] SSO Windows-to-unix
I think pgAdmin is just a client that uses libpq and does not specifically help with SSO.

From: Raghu Ram [mailto:raghuchenn...@gmail.com]
Sent: Tuesday, 13 January 2015 10:22 p.m.
To: Jeremy Palmer
Subject: Re: [GENERAL] SSO Windows-to-unix

On Tue, Jan 13, 2015 at 5:15 AM, Jeremy Palmer jpal...@linz.govt.nz wrote:

Hi All,

I'm investigating options for configuring SSO for Windows clients connecting to a PostgreSQL 9.3 server installed on Ubuntu 14.04. Our Windows environment uses a Windows 2012 domain controller. The best information I could find on this subject was http://www.hagander.net/talks/Deploying%20PostgreSQL%20in%20a%20Windows%20Enterprise.pdf, but I see it's dated 2008. Could someone confirm that this is still the best how-to guide for this subject, and whether there are any other considerations with newer versions of PostgreSQL?

Many thanks,
Jeremy

You can use a graphical user interface tool, i.e. pgAdmin III, to connect to a PostgreSQL database running on an Ubuntu server. http://www.pgadmin.org/

Thanks & Regards,
Raghu Ram

This message contains information which may be in confidence and may be subject to legal privilege. If you are not the intended recipient, you must not peruse, use, disseminate, distribute or copy this message. If you have received this message in error, please notify us immediately (Phone 0800 665 463 or i...@linz.govt.nz) and destroy the original message. LINZ accepts no responsibility for changes to this email, or for any attachments, after its transmission from LINZ. Thank You.
Re: [GENERAL] Simple Atomic Relationship Insert
Thanks John. I've been seeing a lot of examples like this lately. Does the following approach have any advantages over traditional approaches?

WITH sel AS (
    SELECT id FROM hometowns WHERE name = 'Portland'
), ins AS (
    INSERT INTO hometowns(name)
    SELECT 'Portland' WHERE NOT EXISTS (SELECT 1 FROM sel)
    RETURNING id
)
INSERT INTO users(name, hometown_id)
VALUES ('Robert', (SELECT id FROM ins UNION ALL SELECT id FROM sel));

On Tue, Jan 13, 2015 at 8:50 AM, John McKown john.archie.mck...@gmail.com wrote:

On Tue, Jan 13, 2015 at 10:42 AM, Robert DiFalco robert.difa...@gmail.com wrote:

Let's say I have two tables like this (I'm leaving stuff out for simplicity):

CREATE SEQUENCE HOMETOWN_SEQ_GEN START 1 INCREMENT 1;
CREATE TABLE hometowns (
    id   INTEGER DEFAULT nextval('HOMETOWN_SEQ_GEN'),
    name VARCHAR,
    PRIMARY KEY (id),
    UNIQUE (name)
);

CREATE SEQUENCE USER_SEQ_GEN START 1 INCREMENT 1;
CREATE TABLE users (
    id          BIGINT DEFAULT nextval('USER_SEQ_GEN'),
    hometown_id INTEGER,
    name        VARCHAR NOT NULL,
    PRIMARY KEY (id),
    FOREIGN KEY (hometown_id) REFERENCES hometowns(id)
);

The hometowns table is populated as users are created. For example, a client may submit {name: "Robert", hometown: "Portland"}. The hometowns table will never be updated, only queried or inserted into.

So given this, I need to INSERT a row into users and either SELECT the hometowns.id that matches Portland or, if it doesn't exist, INSERT it, returning the hometowns.id. Normally I would do this by first doing a SELECT on hometowns. If I don't get anything, I do an INSERT into hometowns RETURNING the id. If THAT throws an error, I do the SELECT again. Now I'm finally ready to INSERT into users using the hometowns.id from the above steps.

But wow, that seems like a lot of code for a simple "add if it doesn't exist" foreign key relationship -- but this is how I've always done it. So my question: is there a simpler, more performant, or thread-safe way to do this? Thanks!
What occurs to me is to simply do an INSERT into the hometowns table and just ignore the "already exists" return indication. Then do a SELECT to get the hometowns id, which now exists, then INSERT the users row. But I could easily be overlooking some reason why this wouldn't work properly.

--
While a transcendent vocabulary is laudable, one must be eternally careful so that the calculated objective of communication does not become ensconced in obscurity. In other words, eschew obfuscation. 111,111,111 x 111,111,111 = 12,345,678,987,654,321 Maranatha! John McKown
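A minimal sketch of this insert-then-select idea, in Python with SQLite standing in for PostgreSQL: SQLite's INSERT OR IGNORE plays the role of swallowing the unique-violation error (in PostgreSQL you would trap the error in a savepoint or, from 9.5 on, use INSERT ... ON CONFLICT DO NOTHING). The schema is the one from the thread, simplified.

```python
import sqlite3

# In-memory stand-in for the hometowns/users schema from the thread.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE hometowns (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL,
                        hometown_id INTEGER REFERENCES hometowns(id));
""")

def create_user(name, hometown):
    # Insert the hometown, silently ignoring a duplicate-name conflict...
    conn.execute("INSERT OR IGNORE INTO hometowns(name) VALUES (?)", (hometown,))
    # ...after which the row is guaranteed to exist, so SELECT its id.
    (town_id,) = conn.execute(
        "SELECT id FROM hometowns WHERE name = ?", (hometown,)).fetchone()
    conn.execute("INSERT INTO users(name, hometown_id) VALUES (?, ?)",
                 (name, town_id))
    return town_id

first = create_user("Robert", "Portland")
second = create_user("Alice", "Portland")   # reuses the existing row
print(conn.execute("SELECT COUNT(*) FROM hometowns").fetchone()[0])  # -> 1
```

Note this only illustrates the control flow; in PostgreSQL the ignored insert still consumes a sequence value, and inside a larger transaction the failed insert must be wrapped in a savepoint or it aborts the whole transaction.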
[GENERAL] Re: [HACKERS] Check that streaming replica received all data after master shutdown
On 01/13/2015 12:11 PM, Vladimir Borodin wrote:

On 05 Jan 2015, at 18:15, Vladimir Borodin r...@simply.name wrote:

Hi all.

I have a simple script for planned switchover of a PostgreSQL (9.3 and 9.4) master to one of its replicas. This script checks a lot of things before doing it, and one of them is that all data from the master has been received by the replica that is going to be promoted. Right now the check is done like below.

On the master:

postgres@pgtest03d ~ $ psql -t -A -c 'select pg_current_xlog_location();'
0/3390
postgres@pgtest03d ~ $ /usr/pgsql-9.3/bin/pg_ctl stop -m fast
waiting for server to shut down done
server stopped
postgres@pgtest03d ~ $ /usr/pgsql-9.3/bin/pg_controldata | head
pg_control version number: 937
Catalog version number: 201306121
Database system identifier: 6061800518091528182
Database cluster state: shut down
pg_control last modified: Mon 05 Jan 2015 06:47:57 PM MSK
Latest checkpoint location: 0/3428
Prior checkpoint location: 0/3328
Latest checkpoint's REDO location: 0/3428
Latest checkpoint's REDO WAL file: 001B0034
Latest checkpoint's TimeLineID: 27

On the replica (after shutdown of master):

postgres@pgtest03g ~ $ psql -t -A -c "select pg_xlog_location_diff(pg_last_xlog_replay_location(), '0/3428');"
104

These 104 bytes seem to be the size of the shutdown checkpoint record (as far as I can understand from pg_xlogdump output).
postgres@pgtest03g ~/9.3/data/pg_xlog $ /usr/pgsql-9.3/bin/pg_xlogdump -s 0/3390 -t 27
rmgr: XLOG len (rec/tot): 0/32, tx: 0, lsn: 0/3390, prev 0/3328, bkp: , desc: xlog switch
rmgr: XLOG len (rec/tot): 72/104, tx: 0, lsn: 0/3428, prev 0/3390, bkp: , desc: checkpoint: redo 0/3428; tli 27; prev tli 27; fpw true; xid 0/6010; oid 54128; multi 1; offset 0; oldest xid 1799 in DB 1; oldest multi 1 in DB 1; oldest running xid 0; shutdown
pg_xlogdump: FATAL: error in WAL record at 0/3428: record with zero length at 0/3490

I'm not sure that these 104 bytes will always be 104 bytes, so as to have a strict equality while checking. Could it change in the future? Or is there a better way to understand that a streaming replica received all data after master shutdown? The check that pg_xlog_location_diff returns 104 bytes seems a bit strange.

Don't rely on it being 104 bytes. It can vary across versions, and across different architectures. You could simply check that the standby's pg_last_xlog_replay_location() >= the master's "Latest checkpoint location", and not care about the exact difference.

- Heikki

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
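Heikki's suggested check is easy to script. Below is a sketch of the comparison; the LSN values are hypothetical, and the parsing assumes the textual LSN format "X/Y", a pair of hex halves where the part before the slash is the high 32 bits of the WAL position.

```python
def parse_lsn(lsn: str) -> int:
    """Convert a textual LSN like '0/33000428' into a comparable integer."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def standby_caught_up(replay_lsn: str, master_checkpoint_lsn: str) -> bool:
    # Heikki's rule: standby replay location >= master's "Latest checkpoint
    # location" from pg_controldata, taken after the master shut down.
    return parse_lsn(replay_lsn) >= parse_lsn(master_checkpoint_lsn)

# Hypothetical values: the standby has replayed past the shutdown checkpoint.
print(standby_caught_up("0/33000490", "0/33000428"))  # -> True
print(standby_caught_up("0/32FFFFFF", "0/33000428"))  # -> False
```

In a real switchover script the first value would come from `psql -t -A -c 'select pg_last_xlog_replay_location();'` on the standby, and the second from parsing the "Latest checkpoint location" line of pg_controldata output on the stopped master.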
Re: [GENERAL] Simple Atomic Relationship Insert
On Tue, Jan 13, 2015 at 11:45 AM, Robert DiFalco robert.difa...@gmail.com wrote:

Thanks John. I've been seeing a lot of examples like this lately. Does the following approach have any advantages over traditional approaches?

WITH sel AS (
    SELECT id FROM hometowns WHERE name = 'Portland'
), ins AS (
    INSERT INTO hometowns(name)
    SELECT 'Portland' WHERE NOT EXISTS (SELECT 1 FROM sel)
    RETURNING id
)
INSERT INTO users(name, hometown_id)
VALUES ('Robert', (SELECT id FROM ins UNION ALL SELECT id FROM sel));

Oh, that is very clever. I've not seen such a thing before. Thanks. I've added it to my stable of tricks. Which aren't really tricks, just really nice new methods of doing something.

The main advantage that I can see is that it is a single SQL statement to send to the server. That makes it self-contained, so that it would be more difficult for someone to accidentally mess it up. On the other hand, CTEs are still a bit new (at least to me), and so why it works might not be very obvious to other programmers who might need to maintain the application. To many, this lack of obviousness is a detriment. To me, it means: update your knowledge. But then, I am sometimes an arrogant BOFH. Add that to my being a surly old curmudgeon, and you can end up with some bad advice in a corporate environment.

The minus, at present, is that it is clever, and so may violate corporate coding standards due to complexity. Or maybe I just work for a staid company.

-- John McKown
Re: [GENERAL] Simple Atomic Relationship Insert
I don't like loops to catch a failure condition... can you possibly fail to stop? In a stored procedure (or with auto-commit turned off in any transaction), you can avoid any race condition by using a semaphore: you lock something for the duration of the critical part of your processing, so that anything that would update that data just waits. [ http://www.postgresql.org/docs/9.4/static/applevel-consistency.html#NON-SERIALIZABLE-CONSISTENCY ]

When you have a sequence of steps that need to be serialized across processes, choose or even create a table to use for locking, and SELECT FOR UPDATE a row in that table. Every process trying to access that row then waits until the first transaction to get the exclusive lock commits or rolls back (no actual update need be executed). How atomic the lock is depends on what you use for your lock; poor levels of atomicity will lead to performance problems that are hard to diagnose.

For the most recent version of this I've done, we used the unique business key in a project-based audit table (the table was totally unrelated to the work being done, other than that it had the same business-key values, so locking wouldn't interfere with other processing). So, if you had to create something to lock:
-- Uniqueness is not required, but allowed for if you need it.
CREATE TABLE lockit (tbl VARCHAR, biz_key VARCHAR);
INSERT INTO lockit (tbl, biz_key) VALUES ('hometowns', hometown_name);

INSERT INTO users (name, hometown_id)
VALUES ('Robert', select_hometown_id(hometown_name));

CREATE OR REPLACE FUNCTION select_hometown_id(hometown_name VARCHAR)
RETURNS INTEGER AS $$
DECLARE
    hometown_id INTEGER;
BEGIN
    -- Wait here until no other transaction holds the lock row.
    PERFORM 1 FROM lockit
     WHERE tbl = 'hometowns' AND biz_key = hometown_name
       FOR UPDATE;

    WITH sel AS (
        SELECT id FROM hometowns WHERE name = hometown_name
    ), ins AS (
        INSERT INTO hometowns(name)
        SELECT hometown_name WHERE NOT EXISTS (SELECT 1 FROM sel)
        RETURNING id
    )
    SELECT id INTO hometown_id FROM ins
    UNION ALL
    SELECT id FROM sel;

    RETURN hometown_id;
END;
$$ LANGUAGE plpgsql;

Only one process will be able to execute the CTE at a time -- always -- and no looping required.

Roxanne

On 1/13/2015 6:52 PM, Robert DiFalco wrote:

Good points. I guess my feeling is that if there can be a race condition on INSERT, then the CTE version is not truly atomic, hence the LOOP.

On Tue, Jan 13, 2015 at 3:11 PM, Brian Dunavant br...@omniti.com wrote:

A very good point, but it does not apply, as here (and in my article) we are not using updates, only insert and select.

On Tue, Jan 13, 2015 at 6:03 PM, Thomas Kellerer spam_ea...@gmx.net wrote:

Brian Dunavant wrote on 13.01.2015 22:33:

What issue are you having? I'd imagine you have a race condition on the insert into hometowns, but you'd have that same race condition in your app code using a more traditional 3-query version as well. I often use CTEs like this to make things atomic. It allows me to remove transactional code from the app and also to increase performance by reducing the back-and-forth to the db.
http://omniti.com/seeds/writable-ctes-improve-performance

Craig Ringer explained some of the pitfalls of this approach here: http://dba.stackexchange.com/questions/78510/why-is-cte-open-to-lost-updates which is a follow-up question based on this: http://stackoverflow.com/a/8702291/330315

Thomas

--
[At other schools] I think the most common fault in general is to teach students how to pass exams instead of teaching them the science. Donald Knuth
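The serialization Roxanne describes can be sketched outside the database: the lock row plays the role of a mutex around the get-or-create critical section. A Python analogue (hypothetical names; a threading.Lock stands in for SELECT ... FOR UPDATE on the lock row):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

hometowns = {}                 # name -> id, stand-in for the hometowns table
next_id = [1]
table_lock = threading.Lock()  # plays the role of the lock row in LOCKIT

def select_hometown_id(name: str) -> int:
    # Everything between acquire and release is serialized, exactly like
    # transactions queuing behind SELECT ... FOR UPDATE on the lock row.
    with table_lock:
        if name not in hometowns:
            hometowns[name] = next_id[0]
            next_id[0] += 1
        return hometowns[name]

with ThreadPoolExecutor(max_workers=8) as pool:
    ids = list(pool.map(select_hometown_id, ["Portland"] * 100))

print(hometowns)  # -> {'Portland': 1}: one row created, every caller saw it
```

Without the lock, two threads could both see the name missing and both "insert" it; the database version has the same race, which is what the lock row (or a unique constraint plus retry) prevents.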
Re: [GENERAL] SSO Windows-to-unix
On 14 Jan 2015 at 05:22, Jeremy Palmer jpal...@linz.govt.nz wrote:

I think PgAdmin is just a client that uses libpq and does not specifically help with SSO.

You're definitely right about that.
Re: [GENERAL] How to analyze a slowdown in 9.3.5?
For what it's worth, this week's run covered even more months than last week's did, and ran in about 5 1/2 hours, with no slowdowns, under a similar system load. So it could have been a one-time thing, or some combination of factors that will be difficult to reproduce.

-- Mike Nolan
Re: [GENERAL] Simple Atomic Relationship Insert
John McKown wrote:

On Tue, Jan 13, 2015 at 10:42 AM, Robert DiFalco robert.difa...@gmail.com wrote:

Let's say I have two tables like this (I'm leaving stuff out for simplicity):

CREATE SEQUENCE HOMETOWN_SEQ_GEN START 1 INCREMENT 1;
CREATE TABLE hometowns (
    id   INTEGER DEFAULT nextval('HOMETOWN_SEQ_GEN'),
    name VARCHAR,
    PRIMARY KEY (id),
    UNIQUE (name)
);

CREATE SEQUENCE USER_SEQ_GEN START 1 INCREMENT 1;
CREATE TABLE users (
    id          BIGINT DEFAULT nextval('USER_SEQ_GEN'),
    hometown_id INTEGER,
    name        VARCHAR NOT NULL,
    PRIMARY KEY (id),
    FOREIGN KEY (hometown_id) REFERENCES hometowns(id)
);

The hometowns table is populated as users are created. For example, a client may submit {name: "Robert", hometown: "Portland"}. The hometowns table will never be updated, only queried or inserted into. So given this, I need to INSERT a row into users and either SELECT the hometowns.id that matches Portland or, if it doesn't exist, INSERT it, returning the hometowns.id.

Normally I would do this by first doing a SELECT on hometowns. If I don't get anything, I do an INSERT into hometowns RETURNING the id. If THAT throws an error, I do the SELECT again. Now I'm finally ready to INSERT into users using the hometowns.id from the above steps. But wow, that seems like a lot of code for a simple "add if it doesn't exist" foreign key relationship -- but this is how I've always done it. So my question: is there a simpler, more performant, or thread-safe way to do this? Thanks!

What occurs to me is to simply do an INSERT into the hometowns table and just ignore the "already exists" return indication. Then do a SELECT to get the hometowns id, which now exists, then INSERT the users row. But I could easily be overlooking some reason why this wouldn't work properly.
And one more approach:

CREATE SEQUENCE HOMETOWN_SEQ_GEN START 1 INCREMENT 1;
CREATE TABLE hometowns (
    id   INTEGER DEFAULT nextval('HOMETOWN_SEQ_GEN'),
    name VARCHAR,
    PRIMARY KEY (id),
    UNIQUE (name)
);

CREATE SEQUENCE USER_SEQ_GEN START 1 INCREMENT 1;
CREATE TABLE users (
    id          BIGINT DEFAULT nextval('USER_SEQ_GEN'),
    hometown_id INTEGER,
    name        VARCHAR NOT NULL,
    PRIMARY KEY (id),
    FOREIGN KEY (hometown_id) REFERENCES hometowns(id)
);

create or replace view user_town as
    select users.name as username, hometowns.name as hometownname
    from users
    join hometowns on hometowns.id = users.hometown_id;

create rule user_town_exists as on insert to user_town
    where exists (select id from hometowns
                  where hometowns.name = new.hometownname)
    do insert into users (name, hometown_id)
       values (new.username,
               (select id from hometowns
                where hometowns.name = new.hometownname));

create rule user_town_not_exists as on insert to user_town
    where not exists (select id from hometowns
                      where hometowns.name = new.hometownname)
    do (
        insert into hometowns (name) values (new.hometownname);
        insert into users (name, hometown_id)
        values (new.username,
                (select id from hometowns
                 where hometowns.name = new.hometownname));
    );

create rule user_town_nothing as on insert to user_town
    do instead nothing;
[GENERAL] Need advice for handling big data in postgres
Hi,

I am quite new to handling a lot of data in Postgres and I would be happy to get as much advice from experienced data jugglers as possible.

THE SCENARIO: I have a big table `client_log`. It is the main gate for clients (and there are a lot of them) to talk to the backend. Each client sends more or fewer log lines, resulting in the table being written to many times every second, producing a lot of data. Here is the table definition: http://stackoverflow.com/questions/25542010/postgres-huge-table-with-delayed-read-and-write-access

*code* determines the nature of the log line. Example: code=20 denotes a location log line (the important information will be in *latitude* and *longitude*); code=8999 carries a comment that just needs to get stored (in the *comment* field).

Just storing the data is useless; I need to look at it. At some log lines quite regularly, at some just once, at others every now and then. I need indexes on nearly every single column for quick access. So what is the best way of storing the data?

OPTION 1 - PARTITIONING: For each query only a few columns are interesting, and I could partition the table (as was suggested on SO) by *created* and by *code*. There are roughly 10 different codes, and I would keep data for the last two months (partitioned by day). So I would end up having 10 * 60 = 600 partitions. For every partition I could create the indexes that are relevant to that partition (examples: *created*, *created_on_server*, or *latitude* and *longitude*).

OPTION 2 - MULTIPLE TABLES: I could create the tables myself: one for location log lines, one for comment log lines, etc., and store rows via Python in the correct table (depending on *code*). Each of these tables would have only the columns and indexes needed.

OUTCOME: I expect partitioning to be faster because Postgres selects the correct partition for me automatically. I can easily get rid of old data by dropping the corresponding partition.
The downside of the partition approach is that all partitions inherit all columns of the master table, which is unnecessary (and consumes disc space?). Which option should I go for? Is there anything else that I haven't considered? Any help and comments appreciated.

PS: Does partitioning work trouble-free with master-slave replication?
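Option 2's application-side routing is straightforward to sketch. The table names below are hypothetical; in the real system the returned name would be the target of an INSERT issued over psycopg2 or similar.

```python
# Map each log-line code to the table that stores it (hypothetical names;
# the codes 20 and 8999 are the examples from the question).
CODE_TO_TABLE = {
    20: "log_location",    # latitude/longitude lines
    8999: "log_comment",   # free-text comment lines
}

def route(line: dict) -> str:
    """Return the INSERT target table for one client_log line."""
    try:
        return CODE_TO_TABLE[line["code"]]
    except KeyError:
        return "log_misc"  # fall-through table for the remaining codes

print(route({"code": 20, "latitude": 45.5, "longitude": -122.6}))  # -> log_location
print(route({"code": 8999, "comment": "hello"}))                   # -> log_comment
```

Partitioning moves exactly this dispatch into the database (constraint exclusion picks the partition), which is why the two options are largely equivalent in write-path logic and differ mainly in maintenance and schema shape.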
Re: [GENERAL] Re: Stuck trying to backup large database - best practice? How about a cloud service?
Many worthwhile things cost money. I never suggested you wouldn't have to pay.

- Bob

On Tue, Jan 13, 2015 at 12:27 AM, Michael Nolan htf...@gmail.com wrote:

On Mon, Jan 12, 2015 at 7:46 PM, Bob Futrelle bob.futre...@gmail.com wrote:

You should be able to find a cloud provider that could give you many TB. Or so they like to claim.

Nope, but you can probably find one willing to SELL you access to many TB.

-- Mike Nolan
Re: [GENERAL] Check that streaming replica received all data after master shutdown
On 05 Jan 2015, at 18:15, Vladimir Borodin r...@simply.name wrote:

Hi all.

I have a simple script for planned switchover of a PostgreSQL (9.3 and 9.4) master to one of its replicas. This script checks a lot of things before doing it, and one of them is that all data from the master has been received by the replica that is going to be promoted. Right now the check is done like below.

On the master:

postgres@pgtest03d ~ $ psql -t -A -c 'select pg_current_xlog_location();'
0/3390
postgres@pgtest03d ~ $ /usr/pgsql-9.3/bin/pg_ctl stop -m fast
waiting for server to shut down done
server stopped
postgres@pgtest03d ~ $ /usr/pgsql-9.3/bin/pg_controldata | head
pg_control version number: 937
Catalog version number: 201306121
Database system identifier: 6061800518091528182
Database cluster state: shut down
pg_control last modified: Mon 05 Jan 2015 06:47:57 PM MSK
Latest checkpoint location: 0/3428
Prior checkpoint location: 0/3328
Latest checkpoint's REDO location: 0/3428
Latest checkpoint's REDO WAL file: 001B0034
Latest checkpoint's TimeLineID: 27

On the replica (after shutdown of master):

postgres@pgtest03g ~ $ psql -t -A -c "select pg_xlog_location_diff(pg_last_xlog_replay_location(), '0/3428');"
104

These 104 bytes seem to be the size of the shutdown checkpoint record (as far as I can understand from pg_xlogdump output).
postgres@pgtest03g ~/9.3/data/pg_xlog $ /usr/pgsql-9.3/bin/pg_xlogdump -s 0/3390 -t 27
rmgr: XLOG len (rec/tot): 0/32, tx: 0, lsn: 0/3390, prev 0/3328, bkp: , desc: xlog switch
rmgr: XLOG len (rec/tot): 72/104, tx: 0, lsn: 0/3428, prev 0/3390, bkp: , desc: checkpoint: redo 0/3428; tli 27; prev tli 27; fpw true; xid 0/6010; oid 54128; multi 1; offset 0; oldest xid 1799 in DB 1; oldest multi 1 in DB 1; oldest running xid 0; shutdown
pg_xlogdump: FATAL: error in WAL record at 0/3428: record with zero length at 0/3490

I'm not sure that these 104 bytes will always be 104 bytes, so as to have a strict equality while checking. Could it change in the future? Or is there a better way to understand that a streaming replica received all data after master shutdown? The check that pg_xlog_location_diff returns 104 bytes seems a bit strange.

+hackers. Could anyone help? Thanks.

--
May the force be with you...
http://simply.name
Re: [GENERAL] BDR Error restarted
Hi Craig, any insight on this issue? :)

Thanks,
Dean
[GENERAL] Timestamp to JSON conversion issue
Hi,

I have migrated my PostgreSQL 9.2 schemas to PostgreSQL 9.4 (only the table schema migrated, not the data). I have used the following SQL to convert table output to JSON in 9.2:

select array_to_json(array_agg(row_to_json(R.*)))::text
from ( select ID, TIME from SN_TestTable ) R;

In 9.2, I used to get this result:

[{"id":1,"time":"2015-01-13 12:09:45.348"}]

But the same code in 9.4 produces this result:

[{"id":1,"time":"2015-01-13T12:09:45.348"}]

A "T" separator is added between date and time. It seems the JSON conversion now follows ISO 8601 for timestamps. This issue is resolved by sending the TIME column as text instead of timestamp without time zone. But how do I fix this problem without converting to text?

With best regards,
Ramkumar Yelai

Siemens Technology and Services Private Limited
CT DC AA I HOUSE DEV GL4
84, Hosur Road, Bengaluru 560100, Indien
Tel.: +91 80 33136494 Fax: +91 80 33133389 Mobil: +91 9886182031
mailto:ramkumar.ye...@siemens.com
http://www.siemens.co.in/STS

Registered Office: 130, Pandurang Budhkar Marg, Worli, Mumbai 400 018. Telephone +91 22 39677000. Fax +91 22 39677075. Other Offices: Bangalore, Chennai, Gurgaon, Noida, Kolkata, Pune. Corporate Identity number: U9MH1986PLC093854
Re: [GENERAL] Timestamp to JSON conversion issue
On 01/12/2015 10:45 PM, Yelai, Ramkumar IN BLR STS wrote:

I have migrated my PostgreSQL 9.2 schemas to PostgreSQL 9.4 (only the table schema migrated, not the data). I have used the following SQL to convert table output to JSON in 9.2:

select array_to_json(array_agg(row_to_json(R.*)))::text
from ( select ID, TIME from SN_TestTable ) R;

In 9.2, I used to get [{"id":1,"time":"2015-01-13 12:09:45.348"}], but the same code in 9.4 produces [{"id":1,"time":"2015-01-13T12:09:45.348"}]: a "T" separator is added between date and time. It seems the JSON conversion now follows ISO 8601 for timestamps. This issue is resolved by sending the TIME column as text instead of timestamp without time zone. But how do I fix this problem without converting to text?

You have already found the fix :) It is a change in 9.4: http://www.postgresql.org/docs/9.4/interactive/release-9-4.html

"When converting values of type date, timestamp or timestamptz to JSON, render the values in a format compliant with ISO 8601 (Andrew Dunstan). Previously such values were rendered according to the current DateStyle setting; but many JSON processors require timestamps to be in ISO 8601 format. If necessary, the previous behavior can be obtained by explicitly casting the datetime value to text before passing it to the JSON conversion function."
--
Adrian Klaver
adrian.kla...@aklaver.com
Re: [GENERAL] Timestamp to JSON conversion issue
Yelai, Ramkumar IN BLR STS ramkumar.ye...@siemens.com writes:

In 9.2, I used to get [{"id":1,"time":"2015-01-13 12:09:45.348"}], but the same code in 9.4 produces [{"id":1,"time":"2015-01-13T12:09:45.348"}]: a "T" separator is added between date and time.

This is an intentional change. As far as we know, any JSON processor that reads timestamps should be happy with the "T", because that syntax is required by the JSON RFC.

regards, tom lane
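The two renderings differ only in the date/time separator, as a quick Python check illustrates (the timestamp value is the one from the thread):

```python
from datetime import datetime

ts = datetime(2015, 1, 13, 12, 9, 45, 348000)

# 9.4's JSON output: ISO 8601, with the "T" date/time separator.
print(ts.isoformat(timespec="milliseconds"))           # -> 2015-01-13T12:09:45.348

# 9.2-style output (what casting the column to text also yields under the
# default DateStyle): the same instant with a space separator.
print(ts.isoformat(sep=" ", timespec="milliseconds"))  # -> 2015-01-13 12:09:45.348
```

So a consumer that insists on the old format can either cast to text on the server, as the release note suggests, or replace the "T" with a space client-side after parsing.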
[GENERAL] Simple Atomic Relationship Insert
Let's say I have two tables like this (I'm leaving stuff out for simplicity):

CREATE SEQUENCE HOMETOWN_SEQ_GEN START 1 INCREMENT 1;
CREATE TABLE hometowns (
    id   INTEGER DEFAULT nextval('HOMETOWN_SEQ_GEN'),
    name VARCHAR,
    PRIMARY KEY (id),
    UNIQUE (name)
);

CREATE SEQUENCE USER_SEQ_GEN START 1 INCREMENT 1;
CREATE TABLE users (
    id          BIGINT DEFAULT nextval('USER_SEQ_GEN'),
    hometown_id INTEGER,
    name        VARCHAR NOT NULL,
    PRIMARY KEY (id),
    FOREIGN KEY (hometown_id) REFERENCES hometowns(id)
);

The hometowns table is populated as users are created. For example, a client may submit {name: "Robert", hometown: "Portland"}. The hometowns table will never be updated, only queried or inserted into.

So given this, I need to INSERT a row into users and either SELECT the hometowns.id that matches Portland or, if it doesn't exist, INSERT it, returning the hometowns.id. Normally I would do this by first doing a SELECT on hometowns. If I don't get anything, I do an INSERT into hometowns RETURNING the id. If THAT throws an error, I do the SELECT again. Now I'm finally ready to INSERT into users using the hometowns.id from the above steps.

But wow, that seems like a lot of code for a simple "add if it doesn't exist" foreign key relationship -- but this is how I've always done it. So my question: is there a simpler, more performant, or thread-safe way to do this? Thanks!
Re: [GENERAL] Simple Atomic Relationship Insert
On Tue, Jan 13, 2015 at 10:42 AM, Robert DiFalco robert.difa...@gmail.com wrote:

Let's say I have two tables like this (I'm leaving stuff out for simplicity):

CREATE SEQUENCE HOMETOWN_SEQ_GEN START 1 INCREMENT 1;
CREATE TABLE hometowns (
    id   INTEGER DEFAULT nextval('HOMETOWN_SEQ_GEN'),
    name VARCHAR,
    PRIMARY KEY (id),
    UNIQUE (name)
);

CREATE SEQUENCE USER_SEQ_GEN START 1 INCREMENT 1;
CREATE TABLE users (
    id          BIGINT DEFAULT nextval('USER_SEQ_GEN'),
    hometown_id INTEGER,
    name        VARCHAR NOT NULL,
    PRIMARY KEY (id),
    FOREIGN KEY (hometown_id) REFERENCES hometowns(id)
);

The hometowns table is populated as users are created. For example, a client may submit {name: "Robert", hometown: "Portland"}. The hometowns table will never be updated, only queried or inserted into. So given this, I need to INSERT a row into users and either SELECT the hometowns.id that matches Portland or, if it doesn't exist, INSERT it, returning the hometowns.id. Normally I would do this by first doing a SELECT on hometowns. If I don't get anything, I do an INSERT into hometowns RETURNING the id. If THAT throws an error, I do the SELECT again. Now I'm finally ready to INSERT into users using the hometowns.id from the above steps. But wow, that seems like a lot of code for a simple "add if it doesn't exist" foreign key relationship -- but this is how I've always done it. So my question: is there a simpler, more performant, or thread-safe way to do this? Thanks!

What occurs to me is to simply do an INSERT into the hometowns table and just ignore the "already exists" return indication. Then do a SELECT to get the hometowns id, which now exists, then INSERT the users row. But I could easily be overlooking some reason why this wouldn't work properly.

-- John McKown
Re: [GENERAL] Check that streaming replica received all data after master shutdown
Vladimir Borodin wrote:

I'm not sure that these 104 bytes will always be 104 bytes, so as to have a strict equality while checking. Could it change in the future?

There is no promise that the WAL record format stays unchanged; sometimes we change a WAL record in a minor release.

Or is there a better way to understand that a streaming replica received all data after master shutdown? The check that pg_xlog_location_diff returns 104 bytes seems a bit strange.

I guess you could pg_xlogdump the difference and verify that it is a shutdown checkpoint record. As far as I remember, there should always be one at the end of recovery.

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
Re: [GENERAL] Simple Atomic Relationship Insert
Brian Dunavant wrote on 13.01.2015 22:33:

    What issue are you having? I'd imagine you have a race condition on the insert into hometowns, but you'd have that same race condition in your app code using a more traditional 3-query version as well. I often use CTEs like this to make things atomic. It allows me to move transactional code out of the app and also to increase performance by reducing the back-and-forth to the db. http://omniti.com/seeds/writable-ctes-improve-performance

Craig Ringer explained some of the pitfalls of this approach here: http://dba.stackexchange.com/questions/78510/why-is-cte-open-to-lost-updates which is a follow-up question based on this: http://stackoverflow.com/a/8702291/330315

Thomas
Re: [GENERAL] Simple Atomic Relationship Insert
This CTE approach doesn't appear to play well with multiple concurrent transactions/connections.

On Tue, Jan 13, 2015 at 10:05 AM, John McKown john.archie.mck...@gmail.com wrote:

On Tue, Jan 13, 2015 at 11:45 AM, Robert DiFalco robert.difa...@gmail.com wrote:

    Thanks John. I've been seeing a lot of examples like this lately. Does the following approach have any advantages over traditional approaches?

        WITH sel AS (
            SELECT id FROM hometowns WHERE name = 'Portland'
        ), ins AS (
            INSERT INTO hometowns(name)
            SELECT 'Portland'
            WHERE NOT EXISTS (SELECT 1 FROM sel)
            RETURNING id
        )
        INSERT INTO users(name, hometown_id)
        VALUES ('Robert', (SELECT id FROM ins
                           UNION ALL
                           SELECT id FROM sel));

Oh, that is very clever. I've not seen such a thing before. Thanks. I've added it to my stable of tricks. Which aren't really tricks, just really nice new methods to do something. The main advantage that I can see is that it is a single SQL statement to send to the server. That makes it self-contained, so it would be more difficult for someone to accidentally mess it up. On the other hand, CTEs are still a bit new (at least to me) and so why it works might not be very obvious to other programmers who might need to maintain the application. To many, this lack of obviousness is a detriment. To me, it means: update your knowledge. But then, I am sometimes an arrogant BOFH. Add that to my being a surly old curmudgeon, and you can end up with some bad advice in a corporate environment. The minus, at present, is that it is clever and so may violate corporate coding standards due to complexity. Or maybe I just work for a staid company.
Re: [GENERAL] Simple Atomic Relationship Insert
What issue are you having? I'd imagine you have a race condition on the insert into hometowns, but you'd have that same race condition in your app code using a more traditional 3-query version as well. I often use CTEs like this to make things atomic. It allows me to move transactional code out of the app and also to increase performance by reducing the back-and-forth to the db. http://omniti.com/seeds/writable-ctes-improve-performance

On Tue, Jan 13, 2015 at 4:21 PM, Robert DiFalco robert.difa...@gmail.com wrote:

    This CTE approach doesn't appear to play well with multiple concurrent transactions/connections. [...]
Re: [GENERAL] Simple Atomic Relationship Insert
Well, traditionally I would create a LOOP where I tried the SELECT; if there was nothing I did the INSERT; if that raised an exception I would repeat the LOOP.

What's the best way to do it with the CTE? Currently I have the following, which gives me duplicate-key exceptions when two sessions try to insert the same record at the same time:

    CREATE OR REPLACE FUNCTION select_hometown_id(hometown_name VARCHAR)
    RETURNS INTEGER AS $$
    DECLARE
        hometown_id INTEGER;
    BEGIN
        WITH sel AS (
            SELECT id FROM hometowns WHERE name = hometown_name
        ), ins AS (
            INSERT INTO hometowns (name)
            SELECT hometown_name
            WHERE NOT EXISTS (SELECT 1 FROM sel)
            RETURNING id
        )
        SELECT id INTO hometown_id FROM ins
        UNION ALL
        SELECT id FROM sel;
        RETURN hometown_id;
    END;
    $$ LANGUAGE plpgsql;

And that is no bueno. Should I just put the whole thing in a LOOP?
Re: [GENERAL] Simple Atomic Relationship Insert
With the single CTE I don't believe you can do a full upsert loop. If you're doing this inside of a Postgres function, your changes are already atomic, so I don't believe you are buying yourself much (if anything) by using a CTE query instead of something more traditional here. The advantage of switching to a CTE would be if this code was all being done inside of the app code with multiple queries.

On Tue, Jan 13, 2015 at 4:39 PM, Robert DiFalco robert.difa...@gmail.com wrote:

    Well, traditionally I would create a LOOP where I tried the SELECT; if there was nothing I did the INSERT; if that raised an exception I would repeat the LOOP. [...]
Re: [GENERAL] Simple Atomic Relationship Insert
This seems to get rid of the INSERT race condition:

    CREATE OR REPLACE FUNCTION select_hometown_id(hometown_name VARCHAR)
    RETURNS INTEGER AS $$
    DECLARE
        hometown_id INTEGER;
    BEGIN
        LOOP
            BEGIN
                WITH sel AS (
                    SELECT id FROM hometowns WHERE name = hometown_name
                ), ins AS (
                    INSERT INTO hometowns (name)
                    SELECT hometown_name
                    WHERE NOT EXISTS (SELECT 1 FROM sel)
                    RETURNING id
                )
                SELECT id INTO hometown_id FROM ins
                UNION ALL
                SELECT id FROM sel;
                RETURN hometown_id;
            EXCEPTION WHEN unique_violation THEN
                NULL;  -- lost the race; fall through and loop again
            END;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;

On Tue, Jan 13, 2015 at 1:53 PM, Brian Dunavant br...@omniti.com wrote:

    With the single CTE I don't believe you can do a full upsert loop. If you're doing this inside of a postgres function, your changes are already atomic [...]
Re: [GENERAL] Simple Atomic Relationship Insert
The loop to run it twice handles that, yes. I don't think that buys you anything over a more traditional non-CTE method, though. I'd run them a few thousand times to see if there's any difference in runtimes, but my guess is the CTE version would be slightly slower here.

    v_id integer;
    BEGIN;
        select id into v_id from hometowns where name = hometown_name;
        BEGIN
            insert into hometowns (name)
            select hometown_name where v_id is null
            returning id into v_id;
        EXCEPTION WHEN unique_violation THEN
            select id into v_id from hometowns where name = hometown_name;
        END;
        insert into users (name, hometown_id) values ('Robert', v_id);
    END;

On Tue, Jan 13, 2015 at 5:06 PM, Robert DiFalco robert.difa...@gmail.com wrote:

    This seems to get rid of the INSERT race condition. [...]
Re: [GENERAL] Simple Atomic Relationship Insert
A very good point, but it does not apply here (and in my article), as we are not using updates, only inserts and selects.

On Tue, Jan 13, 2015 at 6:03 PM, Thomas Kellerer spam_ea...@gmx.net wrote:

    Craig Ringer explained some of the pitfalls of this approach here: http://dba.stackexchange.com/questions/78510/why-is-cte-open-to-lost-updates which is a follow-up question based on this: http://stackoverflow.com/a/8702291/330315 [...]
Re: [GENERAL] Simple Atomic Relationship Insert
Good points. I guess my feeling is that if there can be a race condition on INSERT, then the CTE version is not truly atomic, hence the LOOP.

On Tue, Jan 13, 2015 at 3:11 PM, Brian Dunavant br...@omniti.com wrote:

    A very good point, but it does not apply here (and in my article), as we are not using updates, only inserts and selects. [...]
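[Editor's note] On PostgreSQL 9.5 and later (well after this 9.3-era thread), the whole retry LOOP collapses into one statement with INSERT ... ON CONFLICT; a sketch against the thread's schema:

```sql
-- Requires PostgreSQL 9.5+; not available when this thread was written.
WITH ins AS (
    INSERT INTO hometowns (name)
    VALUES ('Portland')
    ON CONFLICT (name) DO NOTHING     -- a duplicate no longer raises an error
    RETURNING id
)
INSERT INTO users (name, hometown_id)
VALUES ('Robert',
        (SELECT id FROM ins                                -- freshly inserted row...
         UNION ALL
         SELECT id FROM hometowns WHERE name = 'Portland'  -- ...or the existing one
         LIMIT 1));
```

One remaining subtlety: with DO NOTHING, a conflicting row inserted by a not-yet-committed concurrent transaction is invisible to the fallback SELECT, so in rare interleavings the subquery can come up empty. `ON CONFLICT (name) DO UPDATE SET name = EXCLUDED.name RETURNING id` always returns the id, at the cost of a dummy update.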