Re: [GENERAL] Regarding timezone
On Wed, Sep 10, 2014 at 8:43 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> You'd want to get a new version of the IANA timezone database files for
> that. Depending on what packaging you're using, this might be an
> operating-system update, not a Postgres update. If you are relying on the
> Postgres copies, you'd have to update src/timezone/data/ and then do a
> "make install" in src/timezone.
>
> regards, tom lane

Thanks Tom! Actually I am using the postgres binaries and not building from source. Is there any solution for updating the timezone data when using the binaries?

Regards...
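[For reference, a sketch of what Tom describes for the packaged-binaries case, assuming a Debian-style system where Postgres was built with --with-system-tzdata (the zone name is just an example):]

```shell
# Refresh the OS timezone database; a --with-system-tzdata build of
# Postgres picks up the new rules without any Postgres rebuild.
sudo apt-get update && sudo apt-get install --only-upgrade tzdata

# Verify from psql which rules the server now sees for a given zone
psql -c "SELECT name, abbrev, utc_offset, is_dst
         FROM pg_timezone_names WHERE name = 'Asia/Kolkata';"
```

If the binaries bundle their own tzdata instead, the vendor's package update is the only clean route.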
Re: [GENERAL] CREATE SYNONYM in PostgreSQL
Thank you for the reply.

> You will not be able to do it without modifying the grammar. SYNONYM
> isn't even a keyword in stock PG.

If I understood correctly, since SYNONYM is not part of the grammar, the parser throws a syntax error before ever reaching the hook.

-
Regards,
Vinayak,

--
View this message in context: http://postgresql.1045698.n5.nabble.com/CREATE-SYNONYM-in-PostgreSQL-tp5818446p5818779.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
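[Editor's note: the usual stock-PG substitutes for synonyms are views and search_path. A sketch, with made-up schema and object names:]

```sql
-- Roughly what "CREATE SYNONYM emp FOR hr.employees" would give you:
CREATE VIEW public.emp AS SELECT * FROM hr.employees;

-- Or make objects in another schema resolvable without qualification:
SET search_path = public, hr;
```

Neither hooks into the parser, which is why unknown top-level syntax like CREATE SYNONYM fails before any extension code can see it.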
Re: [GENERAL] permission denied for schema topology
Thanks Tom and everyone that replied. Since my last email my service provider managed to solve the problem on my main database. Before it was fixed, phpPgAdmin listed two main schemas on this database, public and topology, both owned by postgres. Now when I look, topology is no longer there. The problem was fixed on that database, however, not on others, and when I create a new database via cpanel (the only way the system allows) the problem reoccurs and I can't do a dump of the new db.

Thanks again for your input - I think it will help me communicate with them to now resolve the problem fully.

Cheers, Iain

On Thu, 2014-09-11 at 10:05 -0400, Tom Lane wrote:
> Iain Mott <m...@reverberant.com> writes:
>> Here's what happens (the important error messages are in English):
>>
>> [~]# pg_dump mydatabase > dump.sql
>> pg_dump: comando SQL falhou [SQL command failed]
>> pg_dump: Mensagem de erro do servidor [server error message]:
>>   ERROR: permission denied for schema topology
>> pg_dump: O comando foi [the command was]:
>>   LOCK TABLE topology.topology IN ACCESS SHARE MODE
>>
>> I am able to perform dumps of the databases via phpPgAdmin in the
>> cpanel of the server, but this is very inconvenient - hoping to use
>> pg_dump
>
> If it works through phpPgAdmin, then phpPgAdmin must be using some other
> (more privileged) user ID than what pg_dump is using by default. A
> simple workaround therefore ought to be to use pg_dump's -U switch to
> use that other user ID.
>
> From the rest of the thread I gather that you ought to complain to your
> service provider that they installed postgis when you didn't ask for it.
> But selecting the right user ID to dump as is important anyway; it
> generally doesn't work to use an underprivileged ID for pg_dump.
>
> regards, tom lane
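[For reference, Tom's suggested workaround looks like this; the role name is a placeholder for whatever privileged role phpPgAdmin connects as:]

```shell
# Dump as a more privileged role; -U selects the connection user.
pg_dump -U some_admin_role mydatabase > dump.sql

# If only the postgis topology schema is the obstacle, it can also be
# excluded from the dump entirely with -N / --exclude-schema:
pg_dump -N topology mydatabase > dump.sql
```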
Re: [GENERAL] PostgreSQL Portable
Hi Craig,

On Fri, 12 Sep 2014 11:33:55 +0800, Craig Ringer <cr...@2ndquadrant.com> wrote:
> On 09/11/2014 03:16 PM, George Neuner wrote:
>> If the driver permits it and you [or your users] can be trusted to
>> perform a safe unmount via the OS *before* disconnecting the device,
>> then you can enable write caching for the device using the device
>> manager. [Note that the device must be connected for it to be visible
>> in the device manager.]
>
> It shouldn't be living dangerously, actually. While I haven't tested it
> myself, writeback caching on the external drive should be safe so long
> as it continues to honour explicit disk flush requests. That's why we
> have the WAL and do periodic checkpoints. If you yank the drive
> mid-write you'll lose uncommitted transactions and might have slower
> startup next time around, but it should otherwise not be overly
> problematic.

For the most part you're correct, but recall that WAL writing itself can be made asynchronous [see the fsync and synchronous_commit settings], and the periodic OS sync may also be disabled - which doesn't affect WAL handling but may(?) affect the background writer. Even with synchronous WAL, the most recent transactions can be lost if the log device fails *during* a write.

That's why, if we use external devices at all, we tend to use closely coupled devices - disk array, wired SAN, etc. - that aren't very likely to be physically disconnected. And uninterruptible power all around 8-)

A portable device can be reasonably safe if treated properly, but it never will be quite as safe as an internal device.

George
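[The durability settings George refers to live in postgresql.conf; shown here with their default, safest values - relaxing either one trades crash safety for commit speed:]

```
# postgresql.conf - durability settings discussed above
fsync = on                 # force WAL writes out to stable storage
synchronous_commit = on    # wait for WAL flush before reporting commit
```

With both at their defaults, yanking the drive loses only transactions that had not yet been reported committed; with either off, recently *committed* work can be lost too.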
Re: [GENERAL] PostgreSQL Portable
Hey,

I have had many external hard drive crashes (sudden unplugs, power-offs, forced PC restarts). The server on the virtual machine was never hurt, nor was the data.

Cheers,
Rémi-C

2014-09-12 15:34 GMT+02:00 George Neuner <gneun...@comcast.net>:
> [full quote of George's message above trimmed]
Re: [GENERAL] Regarding timezone
On 09/11/2014 10:59 PM, Dev Kumkar wrote:
> On Wed, Sep 10, 2014 at 8:43 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> You'd want to get a new version of the IANA timezone database files
>> for that. Depending on what packaging you're using, this might be an
>> operating-system update, not a Postgres update. If you are relying on
>> the Postgres copies, you'd have to update src/timezone/data/ and then
>> do a "make install" in src/timezone.
>
> Thanks Tom! Actually I am using the postgres binaries and not building
> from source. Is there any solution for updating the timezone data when
> using the binaries?

What OS and what packaging?

--
Adrian Klaver
adrian.kla...@aklaver.com
Re: [GENERAL] permission denied for schema topology
On 09/12/2014 02:27 AM, Iain Mott wrote:
> Thanks Tom and everyone that replied. Since my last email my service
> provider managed to solve the problem on my main database. Before it
> was fixed, phpPgAdmin listed two main schemas on this database, public
> and topology, both owned by postgres. Now when I look, topology is no
> longer there. The problem was fixed on that database, however, not on
> others, and when I create a new database via cpanel (the only way the
> system allows) the problem reoccurs and I can't do a dump of the new db.

Two things:

1) It looks like the template database they are using with CREATE DATABASE already has the topology schema in it, which is why it keeps showing up.

2) What user are you running pg_dump as?

> Thanks again for your input - I think it will help me communicate with
> them to resolve the problem fully.
>
> Cheers, Iain

--
Adrian Klaver
adrian.kla...@aklaver.com
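[Adrian's point 1) can be checked and worked around from SQL - a sketch, assuming the provider clones template1 and that your role has CREATEDB privilege:]

```sql
-- While connected to template1: does the template carry the schema?
SELECT nspname FROM pg_namespace WHERE nspname = 'topology';

-- Create new databases from the pristine template instead,
-- so the topology schema is not copied in:
CREATE DATABASE newdb TEMPLATE template0;
```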
[GENERAL] 2 left joins causes seqscan
Hi,

Today I ran into a situation where a second left join on an indexed field would prevent the index from being used, even though the index is clearly more efficient. Removing either of the 2 joins would cause the planner to use the index again. I tested this on postgres 9.1 and 9.3 on my ubuntu (amd64) laptop.

--Here's the test data:
create table a (id serial primary key, field1 text);
create table b (id integer, title text, lang integer);
create index b_title_lowerto on b using btree (lower(title) text_pattern_ops);
vacuum analyze;

with x as (
  insert into a select generate_series(1,4) as id returning id
)
insert into b
select id, translate((random()*100*id)::text, '1234567890.', 'abcdefghij'), 1
from x;

update a set field1 = translate(id::text, '1234567890.', 'abcdefghij');

insert into b
select b2.id, translate((random()*100*b2.id)::text, '1234567890.', 'abcdefghij'), 2
from b b2;

--Here's the query that doesn't use the index on b:
select a.field1, b1.title, b2.title
from a
left join b b1 on b1.id = a.id and b1.lang = 1
left join b b2 on b2.id = a.id and b2.lang = 2
where (lower(b1.title) like 'abcd%' or lower(b2.title) like 'abcd%');

--plan:
Hash Right Join  (cost=4298.60..7214.76 rows=8 width=35)
  Hash Cond: (b1.id = a.id)
  Filter: ((lower(b1.title) ~~ 'abcd%'::text) OR (lower(b2.title) ~~ 'abcd%'::text))
  ->  Seq Scan on b b1  (cost=0.00..1510.00 rows=40176 width=19)
        Filter: (lang = 1)
  ->  Hash  (cost=3798.60..3798.60 rows=4 width=24)
        ->  Hash Right Join  (cost=1293.00..3798.60 rows=4 width=24)
              Hash Cond: (b2.id = a.id)
              ->  Seq Scan on b b2  (cost=0.00..1510.00 rows=39824 width=19)
                    Filter: (lang = 2)
              ->  Hash  (cost=793.00..793.00 rows=4 width=9)
                    ->  Seq Scan on a  (cost=0.00..793.00 rows=4 width=9)

--Here's the query that does use the index on b:
select a.field1, b1.title
from a
left join b b1 on b1.id = a.id and b1.lang = 1
where lower(b1.title) like 'abcd%'
union
select a.field1, b2.title
from a
left join b b2 on b2.id = a.id and b2.lang = 2
where lower(b2.title) like 'abcd%';

--plan:
HashAggregate  (cost=98.31..98.39 rows=8 width=20)
  ->  Append  (cost=4.74..98.27 rows=8 width=20)
        ->  Nested Loop  (cost=4.74..49.10 rows=4 width=20)
              ->  Bitmap Heap Scan on b b1  (cost=4.45..15.82 rows=4 width=19)
                    Filter: ((lang = 1) AND (lower(title) ~~ 'abcd%'::text))
                    ->  Bitmap Index Scan on b_title_lowerto  (cost=0.00..4.45 rows=3 width=0)
                          Index Cond: ((lower(title) ~>=~ 'abcd'::text) AND (lower(title) ~<~ 'abce'::text))
              ->  Index Scan using a_pkey on a  (cost=0.29..8.31 rows=1 width=9)
                    Index Cond: (id = b1.id)
        ->  Nested Loop  (cost=4.74..49.10 rows=4 width=20)
              ->  Bitmap Heap Scan on b b2  (cost=4.45..15.82 rows=4 width=19)
                    Filter: ((lang = 2) AND (lower(title) ~~ 'abcd%'::text))
                    ->  Bitmap Index Scan on b_title_lowerto  (cost=0.00..4.45 rows=3 width=0)
                          Index Cond: ((lower(title) ~>=~ 'abcd'::text) AND (lower(title) ~<~ 'abce'::text))
              ->  Index Scan using a_pkey on a a_1  (cost=0.29..8.31 rows=1 width=9)
                    Index Cond: (id = b2.id)

As you can see, the second query is far more efficient, even though it scans both tables twice to combine the results. Is this some glitch in the query planner?

Cheers,
--
Willy-Bas Loos
Re: [GENERAL] 2 left joins causes seqscan
Willy-Bas Loos <willy...@gmail.com> wrote:

> As you can see, the second query is far more efficient, even though it
> scans both tables twice to combine the results.

But the two queries don't return the same results. Of course the second one will be faster. The simple equivalent of your second query is:

explain analyze
select a.field1, b.title
from a
join b on b.id = a.id
where lower(b.title) like 'abcd%'
  and lang in (1, 2);

The equivalent of your first query is to take the result sets from these two queries:

select a1.field1, b1.title, b2.title
from a a1
join b b1 on b1.id = a1.id and b1.lang = 1
left join b b2 on (b2.id = a1.id and b2.lang = 2)
where lower(b1.title) like 'abcd%'
union
select a2.field1, b4.title, b3.title
from a a2
join b b3 on b3.id = a2.id and b3.lang = 2
left join b b4 on (b4.id = a2.id and b4.lang = 1)
where lower(b3.title) like 'abcd%';

The above form does optimize better than the original, but it's not too surprising that the planner can't come up with the optimal plan; you've posed quite a challenge for it.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company