Re: [GENERAL] bdr manual cleanup required
I really couldn't say with the available information. Can you provide a step-by-step description of how you set up these nodes?
Re: [GENERAL] Function in PostgreSQL
Sachin Srivastava wrote:
> Do you know if it is possible to create a function in postgres that has
> errors in the syntax of the creation sql? It would help our conversion
> process if we could get all of our various functions present in postgres
> even if they aren't in 100% working shape. Is there any way to ignore
> errors on object creation?

I am not 100% sure what you mean, but does

    SET check_function_bodies = off;

fix your problem?

Yours,
Laurenz Albe

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
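[Editorial note] A minimal sketch of the suggested setting; the table and function names below are hypothetical, and note that the body string must still be lexically valid (quoting and dollar-quoting must balance) even with validation off:

```sql
SET check_function_bodies = off;

-- Succeeds even though missing_table does not exist yet;
-- the body is only checked when the function is first executed.
CREATE FUNCTION import_demo() RETURNS integer AS $$
  SELECT count(*)::integer FROM missing_table;
$$ LANGUAGE sql;
```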
[GENERAL] Function in PostgreSQL
Hi,

Do you know if it is possible to create a function in postgres that has errors in the syntax of the creation sql? It would help our conversion process if we could get all of our various functions present in postgres even if they aren't in 100% working shape. Is there any way to ignore errors on object creation? Please suggest.

Regards,
SS
Re: [GENERAL] bdr manual cleanup required
On 09/12/2015 05:18, Craig Ringer wrote:
> Are you adding more than one node at once? BDR isn't currently smart
> enough to handle that. Make sure to wait until one node is fully synced
> up before adding another.

In other words, one should not attempt to add a new node unless the other nodes are in the 'r'eady state, when there are more than two nodes? But what about getting this 'i' state with only two nodes? In my case, with two nodes only, on one side both nodes had the state 'r', while the states were 'r' and 'i' on the other side.

Thank-you,
Sylvain
Re: [GENERAL] Postgresql INET select and default route ?
Tim Smith wrote:
> create table test(a inet);
> insert into test values ('0.0.0.0/0');
> insert into test values ('10.1.2.3');
>
> => select * from test;
>      a
> -----------
>  0.0.0.0/0
>  10.1.2.3
> (2 rows)
>
> This works as expected:
> => select * from test where a <<= '10.1.2.3';
>     a
> ----------
>  10.1.2.3
> (1 row)
>
> This does not work as expected:
> => select * from test where a <<= '11.1.2.3';
>  a
> ---
> (0 rows)
>
> Surely the second one should return the default route entry (i.e. 0.0.0.0/0)?

I am surprised that you don't expect "0.0.0.0/0" to be returned by the first query if you expect it to be returned by the second. Is that an oversight?

I guess your problem is a misunderstanding of what the operator means: "<<=" is called "is contained by or equals". The definition (from the PostgreSQL source) is that x <<= y iff:
- x has at least as many relevant bits as y, and
- all relevant bits of y are the same in x.

So the following would produce what you expect:

test=> select * from test where '11.1.2.3' <<= a;
     a
-----------
 0.0.0.0/0
(1 row)

Yours,
Laurenz Albe
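[Editorial note] The same "is contained by or equals" rule can be illustrated outside PostgreSQL with Python's ipaddress module; this is a sketch of the semantics, not the server's implementation:

```python
# Illustrates the containment rule behind inet <<= using Python's
# ipaddress module (subnet_of requires Python 3.7+).
import ipaddress

default_route = ipaddress.ip_network('0.0.0.0/0')
host = ipaddress.ip_network('11.1.2.3/32')

# '11.1.2.3' <<= '0.0.0.0/0' holds: the host has at least as many
# relevant bits, and all 0 relevant bits of the default route match it.
print(host.subnet_of(default_route))   # True

# '0.0.0.0/0' <<= '11.1.2.3' fails: the default route has fewer
# relevant bits than the host, so it cannot be contained by it.
print(default_route.subnet_of(host))   # False
```

This mirrors why `a <<= '11.1.2.3'` excludes the default route while `'11.1.2.3' <<= a` includes it.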
[GENERAL] Postgresql INET select and default route ?
Hi,

create table test(a inet);
insert into test values ('0.0.0.0/0');
insert into test values ('10.1.2.3');

=> select * from test;
     a
-----------
 0.0.0.0/0
 10.1.2.3
(2 rows)

This works as expected.

=> select * from test where a <<= '10.1.2.3';
    a
----------
 10.1.2.3
(1 row)

This does not work as expected:

=> select * from test where a <<= '11.1.2.3';
 a
---
(0 rows)

Surely the second one should return the default route entry (i.e. 0.0.0.0/0)?
[GENERAL] HELP!!! The WAL Archive is taking up all space
Hi all,

Please help... I have 1 master PostgreSQL and 1 standby PostgreSQL. Both servers have the same OS (Linux Debian Wheezy) and the same hardware:

CPU: 24 cores
RAM: 128GB
Disk-1: 800GB SAS (for OS, logs, WAL archive directory)
Disk-2: 330GB SSD (for PostgreSQL data directory, except WAL archive and except pg_log)

The relevant part of the configuration is as below:

checkpoint_segments = 64
checkpoint_completion_target = 0.9
default_statistics_target = 10
maintenance_work_mem = 1GB
effective_cache_size = 64GB
shared_buffers = 24GB
work_mem = 5MB
wal_buffers = 8MB
wal_keep_segments = 4096
wal_level = hot_standby
max_wal_senders = 10
archive_mode = on
archive_command = 'cp -i %p /home/postgres/archive/master/%f'

The WAL archive folder is at /home/postgres/archive/master/, right? This directory consumes around 750GB of Disk-1. Each segment in /home/postgres/archive/master/ is 16MB, and there are currently 47443 files in this folder.

If I want to limit the total size used by the WAL archive to around 200-400 GB, what value should I set for wal_keep_segments and checkpoint_segments?

Regards,
Fattah
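[Editorial note] A quick back-of-the-envelope check of the numbers quoted above (16 MB per segment, 47443 archived files). As the replies point out, wal_keep_segments does not govern the archive directory, so the segment counts below are a target for an external cleanup job, not a server setting:

```python
# Sanity-check the reported archive size and translate the 200-400 GB
# budget into a segment count (16 MB per WAL segment, as stated above).
SEGMENT_MB = 16

total_gb = 47443 * SEGMENT_MB / 1024
print(round(total_gb))        # about 741 GB, matching the ~750 GB observed

budget_segments = [gb * 1024 // SEGMENT_MB for gb in (200, 400)]
print(budget_segments)        # [12800, 25600] segments for 200-400 GB
```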
Re: [GENERAL] Deletion Challenge
Steve Crawford wrote:
> If I understand correctly, the value of "click" always advances, and
> within a "click" the "cash_journal_id" always advances - not necessarily
> by single steps - so within a fairian_id, ordering by "click" plus
> "cash_journal_id" would return the records in order, from which you want
> the most recent 5 for each fairian_id.
>
> Typing without testing and ignoring performance optimizations, something
> along the lines of the following should work and covers the "last 5"
> issue as well:
>
> with stuff_to_delete as (
>     select fairian_id, click, cash_journal_id,
>            rank() over (partition by fairian_id
>                         order by (click, cash_journal_id) desc) as howold
>     from cash_journal)
> delete from cash_journal
> using stuff_to_delete
> where cash_journal.fairian_id = stuff_to_delete.fairian_id
>   and cash_journal.click = stuff_to_delete.click
>   and cash_journal.cash_journal_id = stuff_to_delete.cash_journal_id
>   and stuff_to_delete.howold > 5;

Assessing without testing, I like that. Thanks!

Although the above is not exactly the form I was using, an earlier iteration of a related problem employed window functions. But as the data set grew, performance suffered; if deletes were not done on a regular, continuing basis to keep the historical data set approximately "small", the process execution time using a windowing scheme eventually exceeded the extent of my patience.

That "non-scalable" situation is actually what motivated the deliberate de-normalization (retaining the "running balance" in a separate column) and the desire to delete old data. The original implementation calculated the running balance on-the-fly, employing windowing per fairian_id, and those tallies of the net balance entailed increasingly lengthy execution times as the number of rows increased. Hence I was motivated to retain only a relatively constant-sized per-fairian history, and I dismissed the use of windowing for the delete problem since it was so problematic for the running-balance-without-delete problem.

Thanks for knocking some sense into me!
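[Editorial note] The rank-and-delete idea above can be sketched in miniature outside the database; this is a hypothetical in-memory model with invented data, not the production query:

```python
# In-memory sketch of "delete all but the most recent N rows per
# fairian_id", ordering by (click, cash_journal_id) descending, as the
# window-function query does.
from collections import defaultdict

def keep_last_n(rows, n=5):
    """rows: (fairian_id, click, cash_journal_id) tuples."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[0]].append(row)
    kept = []
    for group in groups.values():
        # order by (click, cash_journal_id) descending within each id
        group.sort(key=lambda r: (r[1], r[2]), reverse=True)
        kept.extend(group[:n])          # rank <= n survives; rank > n goes
    return sorted(kept)

rows = [(7, click, 1) for click in range(1, 8)]   # 7 transactions, one id
print(keep_last_n(rows))   # only the 5 highest clicks (3..7) remain
```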
Re: [GENERAL] Deletion Challenge
Adrian Klaver wrote:
> On 12/05/2015 08:08 AM, Berend Tober wrote:
>> /*
>> Deletion Challenge
>>
>> I want to delete all but the most recent transaction, per person, from a
>> table that records a transaction history, because at some point the
>> transaction history grows large enough to adversely affect performance,
>> and also becomes less relevant for retention.
>> ...
>
> test=> delete from cash_journal where ARRAY[click, cash_journal_id]
> NOT in (select max(ARRAY[click,cash_journal_id]) from cash_journal
> group by fairian_id);
> DELETE 7
>
> test=> SELECT * FROM cash_journal order by fairian_id, click, cash_journal_id;
>  click | cash_journal_id | fairian_id | debit | credit | balance |           description
> -------+-----------------+------------+-------+--------+---------+----------------------------------
>    412 |               1 |          7 |     5 |        |      14 | Sold food quantity 7 units.
>     37 |               7 |          8 |     8 |        |       8 | Ratified contract f1abd670358e03
>     37 |               9 |          9 |     7 |        |       7 | Ratified contract 1574bddb75c78a
>     36 |              14 |         18 |     0 |      0 |       0 | initial cash balance
>    413 |               1 |         25 |       |    995 |       0 | Redeemed bond 7719a1c782a1ba
> (5 rows)

Nice.

The idea of a NOT IN query had occurred to me briefly, but I failed to pursue it because at some point in the distant past I had gained the impression that NOT IN queries were not computationally efficient. During one round of testing I had like a million rows. I'll have to run some EXPLAIN query testing with a larger data sample for comparison. Thanks!
[GENERAL] Feature Request: Faceting for full text search
Hello,

Is it OK if I add the feature request "faceting for FTS" to the roadmap?

Regards,
Mirek Svoboda | +420 608 224 486 | Skype: xsvobo10
Re: [GENERAL] HELP!!! The WAL Archive is taking up all space
On Wednesday, December 09, 2015 07:55:09 AM FattahRozzaq wrote:
> archive_mode = on
> archive_command = 'cp -i %p /home/postgres/archive/master/%f'
>
> The WAL archive folder is at /home/postgres/archive/master/, right?
> This directory consumes around 750GB of Disk-1.
> Each segment in /home/postgres/archive/master/ is 16MB.
> There are currently 47443 files in this folder.
>
> If I want to limit the total size used by the WAL archive to around
> 200-400 GB, what value should I set for wal_keep_segments and
> checkpoint_segments?

PostgreSQL doesn't clean up files copied by your archive_command. You need to have a separate task clean those out. PostgreSQL's active wal_keep_segments etc. are in the data/pg_xlog directory.
Re: [GENERAL] HELP!!! The WAL Archive is taking up all space
On 12/09/2015 11:15 AM, Alan Hodgson wrote:
> On Wednesday, December 09, 2015 07:55:09 AM FattahRozzaq wrote:
>> archive_mode = on
>> archive_command = 'cp -i %p /home/postgres/archive/master/%f'
>>
>> The WAL archive folder is at /home/postgres/archive/master/, right?
>> This directory consumes around 750GB of Disk-1.
>> Each segment in /home/postgres/archive/master/ is 16MB.
>> There are currently 47443 files in this folder.
>>
>> If I want to limit the total size used by the WAL archive to around
>> 200-400 GB, what value should I set for wal_keep_segments and
>> checkpoint_segments?
>
> PostgreSQL doesn't clean up files copied by your archive_command. You
> need to have a separate task clean those out. PostgreSQL's active
> wal_keep_segments etc. are in the data/pg_xlog directory.

The OP might want to take a look at:

http://www.postgresql.org/docs/9.4/interactive/pgarchivecleanup.html

To be safe I would use:

-n
    Print the names of the files that would have been removed on stdout (performs a dry run).

at first.

--
Adrian Klaver
adrian.kla...@aklaver.com
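[Editorial note] pg_archivecleanup, as recommended above, is the proper tool for pruning a WAL archive. Purely to illustrate the dry-run idea, here is a generic age-based sketch against a scratch directory with made-up segment names (assumes GNU `touch -d`; do not point anything like this at a real archive without a dry run first):

```shell
# Age-based dry run against a scratch directory with made-up WAL
# segment names; pg_archivecleanup is the proper tool for real archives.
dir=$(mktemp -d)
touch -d '10 days ago' "$dir/000000010000000000000001"   # old segment
touch "$dir/000000010000000000000002"                    # fresh segment

# Dry run: print segments older than 7 days without deleting anything.
find "$dir" -name '0*' -mtime +7 -print
# To actually delete, replace -print with -delete.
```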
Re: [GENERAL] Deletion Challenge
On Wed, Dec 9, 2015 at 1:31 PM, Adrian Klaver wrote:
> On 12/09/2015 12:24 AM, Berend Tober wrote:
>> Adrian Klaver wrote:
>>> On 12/05/2015 08:08 AM, Berend Tober wrote:
>>>> /*
>>>> Deletion Challenge
>>>>
>>>> I want to delete all but the most recent transaction, per person,
>>>> from a table that records a transaction history because at some point
>>>> the transaction history grows large enough to adversely affect
>>>> performance, and also becomes less relevant for retention.
>>>> ...
>>>
>>> test=> delete from cash_journal where ARRAY[click, cash_journal_id]
>>> NOT in (select max(ARRAY[click,cash_journal_id]) from cash_journal
>>> group by fairian_id);
>>> DELETE 7
>>>
>>> test=> SELECT * FROM cash_journal order by fairian_id, click, cash_journal_id;
>>>  click | cash_journal_id | fairian_id | debit | credit | balance |           description
>>> -------+-----------------+------------+-------+--------+---------+----------------------------------
>>>    412 |               1 |          7 |     5 |        |      14 | Sold food quantity 7 units.
>>>     37 |               7 |          8 |     8 |        |       8 | Ratified contract f1abd670358e03
>>>     37 |               9 |          9 |     7 |        |       7 | Ratified contract 1574bddb75c78a
>>>     36 |              14 |         18 |     0 |      0 |       0 | initial cash balance
>>>    413 |               1 |         25 |       |    995 |       0 | Redeemed bond 7719a1c782a1ba
>>> (5 rows)
>>
>> Nice.
>>
>> The idea of a NOT IN query had occurred to me briefly, but I failed to
>> pursue it because at some point in the distant past I had gained the
>> impression that NOT IN queries were not computationally efficient.
>> During one round of testing I had like a million rows. I'll have to run
>> some EXPLAIN query testing with a larger data sample for comparison.
>> Thanks!
>
> Plan B:
>
> WITH d AS
>     (SELECT * FROM
>          cash_journal
>      LEFT JOIN
>          (SELECT
>               MAX(ARRAY[click,cash_journal_id]) AS mx
>           FROM
>               cash_journal
>           GROUP BY
>               fairian_id)
>      AS
>          mxa
>      ON
>          mxa.mx=ARRAY[click, cash_journal_id]
>      WHERE
>          mx IS NULL)
> DELETE FROM
>     cash_journal
> USING
>     d
> WHERE
>     d.click = cash_journal.click
> AND
>     d.cash_journal_id = cash_journal.cash_journal_id;

Couldn't the LEFT JOIN relation in the CTE be better written using "SELECT DISTINCT ON (click, cash_journal_id) click, cash_journal_id [...] ORDER BY click DESC, cash_journal_id" or something similar? It doesn't seem like you should need to introduce an array and an aggregate here.

It does have the negative property of only providing a single row, which excludes using it for the "last 5" part, but I suspect it will be considerably faster for the single version.

David J.
Re: [GENERAL] Deletion Challenge
The two general solutions are the "keep the last one" proposed by Adrian and the "keep the last N" that I sent.

But it might be worth stepping back a bit. You said you are having performance problems that you feel would be improved by removing only a million rows, which doesn't sound like that much to me. It's less than half of what I *add* to just one of my tables every week, and my database is dwarfed by those of many of the participants on this list. This suggests that there may be other issues such as tuning, indexing or query optimization at play.

Depending on your requirements, partitioning might be useful. It wouldn't be "last N" but could easily be done to partition by date-ranges, which makes archiving and purging a low-cost operation.

You might want to expand a bit on the core issue you are trying to solve.

Cheers,
Steve

On Wed, Dec 9, 2015 at 12:43 PM, David G. Johnston <david.g.johns...@gmail.com> wrote:
> On Wed, Dec 9, 2015 at 1:31 PM, Adrian Klaver wrote:
>> On 12/09/2015 12:24 AM, Berend Tober wrote:
>>> Adrian Klaver wrote:
>>>> On 12/05/2015 08:08 AM, Berend Tober wrote:
>>>>> /*
>>>>> Deletion Challenge
>>>>>
>>>>> I want to delete all but the most recent transaction, per person,
>>>>> from a table that records a transaction history because at some
>>>>> point the transaction history grows large enough to adversely
>>>>> affect performance, and also becomes less relevant for retention.
>>>>> ...
>>>>
>>>> test=> delete from cash_journal where ARRAY[click, cash_journal_id]
>>>> NOT in (select max(ARRAY[click,cash_journal_id]) from cash_journal
>>>> group by fairian_id);
>>>> DELETE 7
>>>>
>>>> test=> SELECT * FROM cash_journal order by fairian_id, click, cash_journal_id;
>>>>  click | cash_journal_id | fairian_id | debit | credit | balance |           description
>>>> -------+-----------------+------------+-------+--------+---------+----------------------------------
>>>>    412 |               1 |          7 |     5 |        |      14 | Sold food quantity 7 units.
>>>>     37 |               7 |          8 |     8 |        |       8 | Ratified contract f1abd670358e03
>>>>     37 |               9 |          9 |     7 |        |       7 | Ratified contract 1574bddb75c78a
>>>>     36 |              14 |         18 |     0 |      0 |       0 | initial cash balance
>>>>    413 |               1 |         25 |       |    995 |       0 | Redeemed bond 7719a1c782a1ba
>>>> (5 rows)
>>>
>>> Nice.
>>>
>>> The idea of a NOT IN query had occurred to me briefly, but I failed to
>>> pursue it because at some point in the distant past I had gained the
>>> impression that NOT IN queries were not computationally efficient.
>>> During one round of testing I had like a million rows. I'll have to
>>> run some EXPLAIN query testing with a larger data sample for
>>> comparison. Thanks!
>>
>> Plan B:
>>
>> WITH d AS
>>     (SELECT * FROM
>>          cash_journal
>>      LEFT JOIN
>>          (SELECT
>>               MAX(ARRAY[click,cash_journal_id]) AS mx
>>           FROM
>>               cash_journal
>>           GROUP BY
>>               fairian_id)
>>      AS
>>          mxa
>>      ON
>>          mxa.mx=ARRAY[click, cash_journal_id]
>>      WHERE
>>          mx IS NULL)
>> DELETE FROM
>>     cash_journal
>> USING
>>     d
>> WHERE
>>     d.click = cash_journal.click
>> AND
>>     d.cash_journal_id = cash_journal.cash_journal_id;
>
> Couldn't the LEFT JOIN relation in the CTE be better written using
> "SELECT DISTINCT ON (click, cash_journal_id) click, cash_journal_id [...]
> ORDER BY click DESC, cash_journal_id" or something similar? It doesn't
> seem like you should need to introduce an array and an aggregate here.
>
> It does have the negative property of only providing a single row, which
> excludes using it for the "last 5" part, but I suspect it will be
> considerably faster for the single version.
>
> David J.
Re: [GENERAL] Regexp_replace question / help needed
Hi,

I guess capture will help you; look at
http://www.postgresql.org/docs/9.0/static/functions-matching.html

SELECT regexp_replace('http://test.com/test/testfile.php', '^(.*)/(.*\.php)$', E'\\1=\\2', 'g')

2015-12-09 22:58 GMT+01:00 Christopher Molnar:
> Hello,
>
> I am running into a problem and need some pointers on regexp_replace - I
> can't seem to find an answer in any of the online resources.
>
> I have a string (like 40,000 of them with different length and number of
> components) in a field named "externalurl". I need to replace the final
> "/" of the string with "=" while preserving the filename and extension
> following the "/".
>
> The closest I can get is:
>
> regexp_replace('http://test.com/test/testfile.php','/[^/]*$','=')
>
> however this loses the file name and returns:
>
> http://test.com/test=
>
> What I am looking for is:
>
> http://test.com/test=testfile.php
>
> as a result.
>
> Would anyone here point me in the right direction?
>
> Thanks!
> -Chris
Re: [GENERAL] Regexp_replace question / help needed
Christopher Molnar writes:
> I have a string (like 40,000 of them with different length and number of
> components) in a field named "externalurl". I need to replace the final
> "/" of the string with "=" while preserving the filename and extension
> following the "/".
>
> The closest I can get is:
>
> regexp_replace('http://test.com/test/testfile.php','/[^/]*$','=')

There's more than one way to do it. You could use capturing parens:

regexp_replace('http://test.com/test/testfile.php','/([^/]*)$','=\1')

or you could use a lookahead constraint:

regexp_replace('http://test.com/test/testfile.php','/(?=[^/]*$)','=')

regards, tom lane
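[Editorial note] The two alternatives behave identically on this input; as a quick check, the same patterns work in Python's re module, which shares the capture and lookahead syntax used here:

```python
# Both approaches rewrite only the last '/' while keeping the filename.
import re

url = 'http://test.com/test/testfile.php'

# 1. Capturing parens: consume the filename, then restore it via \1.
print(re.sub(r'/([^/]*)$', r'=\1', url))   # http://test.com/test=testfile.php

# 2. Lookahead: match only the '/' that has no further '/' after it,
#    without consuming the filename at all.
print(re.sub(r'/(?=[^/]*$)', '=', url))    # http://test.com/test=testfile.php
```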
Re: [GENERAL] Regexp_replace question / help needed
Christopher Molnar writes:
> Hello,
>
> I am running into a problem and need some pointers on regexp_replace - I
> can't seem to find an answer in any of the online resources.
>
> I have a string (like 40,000 of them with different length and number of
> components) in a field named "externalurl". I need to replace the final
> "/" of the string with "=" while preserving the filename and extension
> following the "/".
>
> The closest I can get is:
>
> regexp_replace('http://test.com/test/testfile.php','/[^/]*$','=')
>
> however this loses the file name and returns:
>
> http://test.com/test=
>
> What I am looking for is:
>
> http://test.com/test=testfile.php
>
> as a result.
>
> Would anyone here point me in the right direction?

select regexp_replace('http://foo/wow/blah/zzz.php', '/([^/]*)$', '=\1');
       regexp_replace
-----------------------------
 http://foo/wow/blah=zzz.php
(1 row)

> Thanks!
> -Chris

--
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 312.241.7800
[GENERAL] Regexp_replace question / help needed
Hello,

I am running into a problem and need some pointers on regexp_replace - I can't seem to find an answer in any of the online resources.

I have a string (like 40,000 of them with different length and number of components) in a field named "externalurl". I need to replace the final "/" of the string with "=" while preserving the filename and extension following the "/".

The closest I can get is:

regexp_replace('http://test.com/test/testfile.php','/[^/]*$','=')

however this loses the file name and returns:

http://test.com/test=

What I am looking for is:

http://test.com/test=testfile.php

as a result.

Would anyone here point me in the right direction?

Thanks!
-Chris
Re: [GENERAL] Deletion Challenge
On 12/09/2015 12:24 AM, Berend Tober wrote:
> Adrian Klaver wrote:
>> On 12/05/2015 08:08 AM, Berend Tober wrote:
>>> /*
>>> Deletion Challenge
>>>
>>> I want to delete all but the most recent transaction, per person, from
>>> a table that records a transaction history because at some point the
>>> transaction history grows large enough to adversely affect performance,
>>> and also becomes less relevant for retention.
>>> ...
>>
>> test=> delete from cash_journal where ARRAY[click, cash_journal_id]
>> NOT in (select max(ARRAY[click,cash_journal_id]) from cash_journal
>> group by fairian_id);
>> DELETE 7
>>
>> test=> SELECT * FROM cash_journal order by fairian_id, click, cash_journal_id;
>>  click | cash_journal_id | fairian_id | debit | credit | balance |           description
>> -------+-----------------+------------+-------+--------+---------+----------------------------------
>>    412 |               1 |          7 |     5 |        |      14 | Sold food quantity 7 units.
>>     37 |               7 |          8 |     8 |        |       8 | Ratified contract f1abd670358e03
>>     37 |               9 |          9 |     7 |        |       7 | Ratified contract 1574bddb75c78a
>>     36 |              14 |         18 |     0 |      0 |       0 | initial cash balance
>>    413 |               1 |         25 |       |    995 |       0 | Redeemed bond 7719a1c782a1ba
>> (5 rows)
>
> Nice.
>
> The idea of a NOT IN query had occurred to me briefly, but I failed to
> pursue it because at some point in the distant past I had gained the
> impression that NOT IN queries were not computationally efficient.
> During one round of testing I had like a million rows. I'll have to run
> some EXPLAIN query testing with a larger data sample for comparison.
> Thanks!

Plan B:

WITH d AS
    (SELECT * FROM
         cash_journal
     LEFT JOIN
         (SELECT
              MAX(ARRAY[click,cash_journal_id]) AS mx
          FROM
              cash_journal
          GROUP BY
              fairian_id)
     AS
         mxa
     ON
         mxa.mx=ARRAY[click, cash_journal_id]
     WHERE
         mx IS NULL)
DELETE FROM
    cash_journal
USING
    d
WHERE
    d.click = cash_journal.click
AND
    d.cash_journal_id = cash_journal.cash_journal_id;

--
Adrian Klaver
adrian.kla...@aklaver.com
Re: [GENERAL] HELP!!! The WAL Archive is taking up all space
On 12/09/2015 04:38 PM, FattahRozzaq wrote:
> Quick information,
>
> After I realized the line "archive_command=/bin/true" was a bad decision,
> I reverted it.
> Now I'm really confused and panicking. I don't know what to do, and I
> don't really understand postgresql.conf. I'm a network engineer; I should
> handle the network and also the postgresql database.
> Oh man, the office is so good but this part sucks :((

If the pg_xlog directory is growing it is likely that either:

* wal_keep_segments is set high and your slave is not correctly receiving updates.
* You are using a replication slot and the slave is not correctly receiving updates.

If your archive_command does not return a success, your pg_xlog will also grow, but you don't need the archive_command *IF* your streaming replication is working *UNLESS* you are also doing archiving or PITR.

Sincerely,

JD

--
Command Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't control your own emotions, so everyone else should do it for you.
[GENERAL] Overhead changing varchar(2000) to text
Hi!

I have several tables that use varchar(2000) to store remarks. Lately, one customer needs to store more than 2000 characters, and I'm considering changing from varchar(2000) to text.

What is the overhead? Is there any place where I can learn about the storage impact of each data type?

Thanks,

--
Atenciosamente,
Edson Carlos Ericksson Richter
Re: [GENERAL] Overhead changing varchar(2000) to text
On Wed, Dec 9, 2015 at 5:17 PM, Kevin Grittner wrote:
> On Wed, Dec 9, 2015 at 5:13 PM, Edson Richter wrote:
>
>> I have several tables that use varchar(2000) to store remarks. Lately,
>> one customer needs to store more than 2000 characters, and I'm
>> considering changing from varchar(2000) to text.
>>
>> What is the overhead?
>
> None -- they are stored in exactly the same format; the only
> difference is whether the length is limited.

I probably should have mentioned that an ALTER TABLE to change the column type from varchar(2000) to text does not rewrite the data (since it is in the same format) -- it just changes the catalogs to reflect the lack of a limit on length. Changing the other way would require a pass to check that all existing data passes the length check.

>> Is there any place where I can learn about the storage impact of each
>> data type?
>
> http://www.postgresql.org/docs/current/interactive/datatype-character.html

While it's fairly technical, you might also be interested in this:

http://www.postgresql.org/docs/current/interactive/storage-toast.html

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
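[Editorial note] A sketch of the two directions described above; the table and column names are hypothetical:

```sql
-- Widening: catalog-only change, no table rewrite and no data scan.
ALTER TABLE notes ALTER COLUMN remarks TYPE text;

-- Narrowing back: requires a scan to verify every value fits the limit.
ALTER TABLE notes ALTER COLUMN remarks TYPE varchar(2000);
```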
Re: [GENERAL] HELP!!! The WAL Archive is taking up all space
Hi John,

I really don't know why I should keep the WAL archives. I implemented streaming replication to 1 server (a standby server). I'm really a newbie to PostgreSQL, but the boss pushed me to handle it and implement it in production
Re: [GENERAL] HELP!!! The WAL Archive is taking up all space
On 12/09/2015 04:27 PM, FattahRozzaq wrote:
> Hi John,
>
> I really don't know why I should keep the wal archives.

So who set up the archiving and why? Is archive recovery set up on the standby?:

http://www.postgresql.org/docs/9.4/interactive/archive-recovery-settings.html

> I implement streaming replication into 1 server (standby server).

Is that the only standby or is there another set up previously?

Per another recent thread, having a WAL archive to fall back on is handy if the streaming replication falls behind and wal_keep_segments is not high enough:

http://www.postgresql.org/docs/9.4/interactive/warm-standby.html#STREAMING-REPLICATION

"If you use streaming replication without file-based continuous archiving, the server might recycle old WAL segments before the standby has received them. If this occurs, the standby will need to be reinitialized from a new base backup. You can avoid this by setting wal_keep_segments to a value large enough to ensure that WAL segments are not recycled too early, or by configuring a replication slot for the standby. If you set up a WAL archive that's accessible from the standby, these solutions are not required, since the standby can always use the archive to catch up provided it retains enough segments."

> I'm really newbie to PostgreSQL but the boss pushed me to handle it
> and implement it in production f*&%*$%%$#%$#
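[Editorial note] The two safety nets the quoted documentation mentions look roughly like this in postgresql.conf; the values and the archive path are illustrative, not recommendations:

```
# Option 1: retain enough segments in pg_xlog for a lagging standby.
wal_keep_segments = 4096

# Option 2: file-based archiving the standby can fall back on;
# the archive directory must be pruned by an external job.
archive_mode = on
archive_command = 'cp %p /path/to/archive/%f'
```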
Re: [GENERAL] HELP!!! The WAL Archive is taking up all space
FattahRozzaq wrote on 10 December 2015 at 01:27:
> Hi John,
>
> I really don't know why I should keep the wal archives.

That's the problem! But that's your part, not ours. If you need a backup with PITR capability, you have to create a so-called base backup plus continuous WALs. If you later create a new base backup, say the next day, and your backup policy is to hold one backup, then you can delete all WALs up to the new base backup, along with the old backup.

If I were you, I would use something like barman (see: http://www.pgbarman.org/ ) for that. And yes: you should use a separate backup server. If you have both (database and backup) on the same machine and the machine burns, you will lose both data and backup.

Questions?

> I implement streaming replication into 1 server (standby server).

Streaming replication can't replace a backup!

> I'm really newbie to PostgreSQL but the boss pushed me to handle it
> and implement it in production
Re: [GENERAL] Regexp_replace question / help needed
Thank you both. Problem solved - worked perfectly.

On Wed, Dec 9, 2015 at 5:41 PM, Jerry Sievers wrote:
> Christopher Molnar writes:
>> Hello,
>>
>> I am running into a problem and need some pointers on regexp_replace -
>> I can't seem to find an answer in any of the online resources.
>>
>> I have a string (like 40,000 of them with different length and number
>> of components) in a field named "externalurl". I need to replace the
>> final "/" of the string with "=" while preserving the filename and
>> extension following the "/".
>>
>> The closest I can get is:
>>
>> regexp_replace('http://test.com/test/testfile.php','/[^/]*$','=')
>>
>> however this loses the file name and returns:
>>
>> http://test.com/test=
>>
>> What I am looking for is:
>>
>> http://test.com/test=testfile.php
>>
>> as a result.
>>
>> Would anyone here point me in the right direction?
>
> select regexp_replace('http://foo/wow/blah/zzz.php', '/([^/]*)$', '=\1');
>        regexp_replace
> -----------------------------
>  http://foo/wow/blah=zzz.php
> (1 row)
>
>> Thanks!
>> -Chris
>
> --
> Jerry Sievers
> Postgres DBA/Development Consulting
> e: postgres.consult...@comcast.net
> p: 312.241.7800
Re: [GENERAL] HELP!!! The WAL Archive is taking up all space
On 12/9/2015 4:27 PM, FattahRozzaq wrote:
> I really don't know why I should keep the wal archives.
> I implement streaming replication into 1 server (standby server).
> I'm really newbie to PostgreSQL but the boss pushed me to handle it
> and implement it in production f*&%*$%%$#%$#
Re: [GENERAL] HELP!!! The WAL Archive is taking up all space
Hi John,

Really, thank you for spending time typing and responding to my email.

I think the archive_command returns success; I can see the archive directory piling up by 16MB every 2 minutes.

Maybe pg_archivecleanup is the solution to clean up the contents of the archive folder? How do I properly do it? What is a pg_archivecleanup example that I can use for this case? How do I run a dry run for pg_archivecleanup?

Best Regards,
FR

On 10/12/2015, Joshua D. Drake wrote:
> On 12/09/2015 04:38 PM, FattahRozzaq wrote:
>> Quick information,
>>
>> After I realized the line "archive_command=/bin/true" was a bad
>> decision, I reverted it.
>> Now I'm really confused and panicking. I don't know what to do, and I
>> don't really understand postgresql.conf.
>
> If the pg_xlog directory is growing it is likely that either:
>
> * wal_keep_segments is set high and your slave is not correctly
> receiving updates.
>
> * You are using a replication slot and the slave is not correctly
> receiving updates.
>
> If your archive_command does not return a success, your pg_xlog will
> also grow but you don't need the archive_command *IF* your streaming
> replication is working *UNLESS* you are also doing archiving or PITR.
>
> Sincerely,
>
> JD
Re: [GENERAL] Overhead changing varchar(2000) to text
On Wed, Dec 9, 2015 at 5:13 PM, Edson Richter wrote:
> I have several tables that use varchar(2000) to store remarks. Lately,
> one customer needs to store more than 2000 characters, and I'm
> considering changing from varchar(2000) to text.
>
> What is the overhead?

None -- they are stored in exactly the same format; the only difference is whether the length is limited.

> Is there any place where I can learn about the storage impact of each
> data type?

http://www.postgresql.org/docs/current/interactive/datatype-character.html

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [GENERAL] Overhead changing varchar(2000) to text
Thanks. After reading, I've run some tests and found no storage changes in tables moving from varchar(2000) to text.

Actually, the biggest change is that I don't have to keep another constraint between app and database - if I want to increase the user-perceived space, now I just have to change the application (of course, under the limits).

Atenciosamente,
Edson Carlos Ericksson Richter

On 09/12/2015 21:17, Kevin Grittner wrote:
> On Wed, Dec 9, 2015 at 5:13 PM, Edson Richter wrote:
>> I have several tables that use varchar(2000) to store remarks. Lately,
>> one customer needs to store more than 2000 characters, and I'm
>> considering changing from varchar(2000) to text.
>>
>> What is the overhead?
>
> None -- they are stored in exactly the same format; the only
> difference is whether the length is limited.
>
>> Is there any place where I can learn about the storage impact of each
>> data type?
>
> http://www.postgresql.org/docs/current/interactive/datatype-character.html
Re: [GENERAL] HELP!!! The WAL Archive is taking up all space
Quick information,

After I realized the line "archive_command=/bin/true" was a bad decision, I reverted it.

Now I'm really confused and panicking. I don't know what to do, and I don't really understand postgresql.conf. I'm a network engineer; I should handle the network and also the postgresql database. Oh man, the office is so good but this part sucks :((

On 10/12/2015, FattahRozzaq wrote:
> Hi John,
>
> I really don't know why I should keep the wal archives.
> I implement streaming replication into 1 server (standby server).
> I'm really newbie to PostgreSQL but the boss pushed me to handle it
> and implement it in production
Re: [GENERAL] HELP!!! The WAL Archive is taking up all space
On 12/8/2015 4:55 PM, FattahRozzaq wrote:
> ...I want to limit the total size use by WAL archive to around 200-400 GB...?

For what purpose are you keeping a WAL archive?

If it's for PITR (point-in-time recovery), you need ALL WAL records since the start of a base backup up to the point in time at which you wish to recover.

--
john r pierce, recycling bits in santa cruz