[GENERAL] profiling stored procedures
What tool is suitable for profiling stored procedures in PostgreSQL? SQL Server has SQL Profiler; what about PostgreSQL?
Re: [GENERAL] [SOLVED] Very high memory usage on restoring dump (with plain psql) on pg 9.1.2
On 22.03.2012 18:21, Heiko Wundram wrote: On 22.03.2012 15:48, Tom Lane wrote: There was a memory leak in the last-but-one releases for index operations on inet and cidr datatypes, so I'm wondering if that explains your problem ... I'll be updating pgsql now and then recheck the import. Yes, that worked - imported the old pmacct data flawlessly after upgrading to 9.1.3. Thank you again for the hint about the inet problems; searching through Google initially didn't turn up any relevant hits, but after I rechecked with the specific reference to inet, I did find the corresponding bug reports against PostgreSQL 9.1.2. -- --- Heiko. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] profiling stored procedures
On Fri, Mar 23, 2012 at 11:16 AM, Geek Matter geekmat...@yahoo.com wrote: What tool is suitable for profiling stored procedures in PostgreSQL? SQL Server has SQL Profiler; what about PostgreSQL? You can use the pg_stat_statements extension (http://www.postgresql.org/docs/9.1/interactive/pgstatstatements.html) to do profiling. Turn pg_stat_statements.track_utility on to collect DDL, and set pg_stat_statements.track to all to collect queries inside functions. -- Sergey Konoplev Blog: http://gray-hemp.blogspot.com LinkedIn: http://ru.linkedin.com/in/grayhemp JID/GTalk: gray...@gmail.com Skype: gray-hemp
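For reference, a minimal setup along the lines Sergey describes might look like this; the configuration values go in postgresql.conf (a restart is needed for shared_preload_libraries), and the query at the end is only a sketch of how one could rank statements:

```sql
-- postgresql.conf (restart required for shared_preload_libraries):
--   shared_preload_libraries = 'pg_stat_statements'
--   pg_stat_statements.track = all        -- also track statements inside functions
--   pg_stat_statements.track_utility = on -- also track DDL / utility commands

-- then, in the database (9.1+):
CREATE EXTENSION pg_stat_statements;

-- the most expensive statements, including those executed inside functions:
SELECT query, calls, total_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
```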
Re: [GENERAL] profiling stored procedures
On Fri, 2012-03-23 at 00:16 -0700, Geek Matter wrote: What tool is suitable for profiling stored procedures in PostgreSQL? SQL Server has SQL Profiler; what about PostgreSQL? The EDB PL/pgSQL profiler is probably the easiest tool to work with if you want to profile PL/pgSQL functions. It's also the only automatic tool I know of. You can also do it manually, but that isn't easy. See this great post from Hubert Lubaczewski (aka depesz): http://www.depesz.com/2010/03/18/profiling-stored-proceduresfunctions/ -- Guillaume http://blog.guillaume.lelarge.info http://www.dalibo.com
Re: [GENERAL] PostgreSQL 64 Bit XIDs - Transaction IDs
On 23.03.2012 06:45, Gerhard Wiesinger wrote: With a database admin of a commercial database system I've discussed that they have to provide - and they also achieve - 2^31 transactions per SECOND! Just curious: what is causing this many transactions? Cheers, Frank
Re: [GENERAL] PostgreSQL 64 Bit XIDs - Transaction IDs
Hi, On 23 March 2012 19:14, Frank Lanitz fr...@frank.uvena.de wrote: On 23.03.2012 06:45, Gerhard Wiesinger wrote: With a database admin of a commercial database system I've discussed that they have to provide - and they also achieve - 2^31 transactions per SECOND! Just curious: what is causing this many transactions? I would be interested to know the hardware configuration and the name of that mysterious commercial database system! -- Ondrej Ivanic (ondrej.iva...@gmail.com)
Re: [GENERAL] PostgreSQL 64 Bit XIDs - Transaction IDs
On 23.03.2012 06:45, Gerhard Wiesinger wrote: With a database admin of a commercial database system I've discussed that they have to provide - and they also achieve - 2^31 transactions per SECOND! As PostgreSQL uses transaction IDs (XIDs) in the range of 2^31 they would turn around in about one second. Wow. What application issues that many transactions? And what database system can handle that? I can't think of a single machine capable of this - and I hardly believe PostgreSQL can come close. 2^31 transactions mean that a single one lasts 0.5ns. Even the fastest DDR3-2133 has cycle times of 4ns. I have seen a database monster in action - 43 trillion (academic) transactions per day, but that's only 5*10^8 transactions per second, under a quarter of 2^31 per second. So, I can't answer your question - but you triggered my curiosity :-) Cheers, Jan
[GENERAL] postgresql_fdw_handler
Hi everybody, is there any fdw_handler for PostgreSQL available (pg to pg)? I saw the thread http://archives.postgresql.org/pgsql-general/2011-09/msg00525.php and would like to know whether anything has changed. Thanks in advance. Regards, Bartek
Re: [GENERAL] postgresql_fdw_handler
On Fri, 2012-03-23 at 12:26 +0100, Bartosz Dmytrak wrote: Hi everybody, is there any fdw_handler for postgresql available (pg to pg)? I saw thread http://archives.postgresql.org/pgsql-general/2011-09/msg00525.php, need to know if something has been changed Not yet. There is one in the latest commitfest, still in Needs review status (https://commitfest.postgresql.org/action/patch_view?id=667). -- Guillaume http://blog.guillaume.lelarge.info http://www.dalibo.com
Re: [GENERAL] PostgreSQL 64 Bit XIDs - Transaction IDs
On 03/22/2012 10:45 PM, Gerhard Wiesinger wrote: Hello, With a database admin of a commercial database system I've discussed that they have to provide and they also achieve 2^31 transactions per SECOND! As PostgreSQL uses transaction IDs (XIDs) in the range of 2^31 they would turn around in about one second. I would say either they got the numbers wrong or someone is pulling your leg. That rate is not going to happen. Thnx. Ciao, Gerhard -- Adrian Klaver adrian.kla...@gmail.com
Re: [GENERAL] postgresql_fdw_handler
On 03/23/2012 04:26 AM, Bartosz Dmytrak wrote: Hi everybody, is there any fdw_handler for postgresql available (pg to pg)? I saw thread http://archives.postgresql.org/pgsql-general/2011-09/msg00525.php, need to know if something has been changed You might be able to make it work using Multicorn and the SQLAlchemy wrapper: http://multicorn.org/foreign-data-wrappers/#sqlalchemy-foreign-data-wrapper -- Adrian Klaver adrian.kla...@gmail.com
Re: [GENERAL] postgresql_fdw_handler
Thanks, I'll wait. Regards, Bartek 2012/3/23 Guillaume Lelarge guilla...@lelarge.info: Not yet. There is one in the latest commitfest, still in Needs review status (https://commitfest.postgresql.org/action/patch_view?id=667).
[GENERAL] Streaming replication and empty databases
Hi We have two servers, one of which is production, the other used for backup and staging. Each has pg-9.1 installed and the relevant filesystem parts are regularly rsynced. The last missing piece is streaming replication from the production to the staging server. On production, pg holds two databases named app_production and app_staging. The latter is just a bare, empty database with no tables etc. defined. With streaming replication in place, the pg on the staging server will get all the information to sync with the pg on the production server. However, can I use the app_staging database on the staging server independently? It seems possible, since no XLOG messages concerning the app_staging database will be sent. Plan B is a second pg instance on the staging server, but I'd like to know if the above setup might work as well. Thanks for your hints!
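For context, the streaming setup being described boils down, in 9.1, to a recovery.conf on the standby roughly like the following (hostname and connection values are placeholders), plus wal_level = hot_standby and a nonzero max_wal_senders on the primary:

```
# recovery.conf on the staging (standby) server -- 9.1 syntax; values are placeholders
standby_mode = 'on'
primary_conninfo = 'host=production.example.com port=5432 user=replication'
```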
Re: [GENERAL] PostgreSQL 64 Bit XIDs - Transaction IDs
On 23.03.2012 14:23, Adrian Klaver wrote: I would say either they got the numbers wrong or someone is pulling your leg. That rate is not going to happen. Maybe Twitter or Facebook combined... Cheers, Frank
Re: [GENERAL] Parsing PG log usings CSV format
Arvind Singh wrote: I have sorted out the problem - the columns that are not quoted are guaranteed not to contain a comma. Good. But I have another query. The structure of the PG CSV log as mentioned in the manual (http://www.postgresql.org/docs/9.1/static/runtime-config-logging.html) has 24 columns:

log_time timestamp, user_name text, database_name text, process_id integer, connection_from text, session_id text, session_line_num bigint, command_tag text, session_start_time timestamp, virtual_transaction_id text, transaction_id bigint, error_severity text, sql_state_code text, message text, detail text, hint text, internal_query text, internal_query_pos integer, context text, query text, query_pos integer, location text, application_name text, PRIMARY KEY (session_id, session_line_num)

Actually, that's only 23 columns. The last line is a constraint. However, every log line added contains only 22 columns; a sample log line is given below:

2012-03-22 15:29:53.546 IST,postgres,stock_apals,2396,localhost:2766,4f6af819.95c,9,SELECT,2012-03-22 15:29:53 IST,3/0,0,LOG,0,QUERY STATISTICS,! system usage stats: ! 0.015000 elapsed 0.00 user 0.015625 system sec ! [0.078125 user 0.062500 sys total],SELECT SUBSTR(itemname, 1, 15) as Product, avg(rate) as Avgrate FROM issue_stock WHERE extract(year from issue_stock.edate) = '2011' GROUP BY itemname order by itemname,,ShowUsage, .\src\backend\tcop\postgres.c:4305,

So: isn't the last column of the log the primary key? The last column is as yet unknown, because at every occurrence it is an empty quoted column. The column numbers don't match the generated log. Is this a different log format? Can you guide us to a page where the column specifications can be matched? You must be using PostgreSQL 8.4 or earlier. The last column, application_name, was added in 9.0. Look up the documentation for your version and you'll find it.
Yours, Laurenz Albe
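For comparison, the 8.4-era CSV log has 22 columns: the documentation's suggested import table minus application_name. A sketch adapted from the docs (table name and file path are illustrative):

```sql
-- CSV log table for PostgreSQL 8.4 (22 columns; application_name arrives in 9.0)
CREATE TABLE postgres_log
(
  log_time timestamp(3) with time zone,
  user_name text,
  database_name text,
  process_id integer,
  connection_from text,
  session_id text,
  session_line_num bigint,
  command_tag text,
  session_start_time timestamp with time zone,
  virtual_transaction_id text,
  transaction_id bigint,
  error_severity text,
  sql_state_code text,
  message text,
  detail text,
  hint text,
  internal_query text,
  internal_query_pos integer,
  context text,
  query text,
  query_pos integer,
  location text,
  PRIMARY KEY (session_id, session_line_num)
);

-- load a log file for analysis:
COPY postgres_log FROM '/path/to/postgresql.csv' WITH csv;
```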
[GENERAL] PITR backup - estimating size
I'd like to switch to PITR backups, but have limited disk space. Is there a way to get a ballpark estimate by monitoring a running system, without actually creating the WAL files and risking filling a filesystem? Mike
Re: [GENERAL] PITR backup - estimating size
2012/3/23 Mike Blackwell mike.blackw...@rrd.com: I'd like to switch to PITR backups, but have limited disk space. Is there a way to get a ballpark estimate by monitoring a running system, without actually creating the WAL files and risking filling a filesystem? Each WAL segment is normally 16MB (16777216 bytes), so you can record this information from archive_command. E.g.: archive_command = 'du -b %p >> /tmp/xlogsize_history.txt' Regards, -- Fabrízio de Royes Mello Consultoria/Coaching PostgreSQL Blog sobre TI: http://fabriziomello.blogspot.com Perfil Linkedin: http://br.linkedin.com/in/fabriziomello Twitter: http://twitter.com/fabriziomello
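Another way to estimate without archiving anything is to sample pg_current_xlog_insert_location() at intervals and difference the positions. On 9.1 there is no pg_xlog_location_diff() yet (it arrives in 9.2), but the location's hex parts can be converted by hand; a sketch with made-up sample values:

```sql
-- an xlog location is 'hi/lo' in hex; with the same 'hi' part, the WAL written
-- between two samples is just the difference of the 'lo' parts.
SELECT pg_current_xlog_insert_location();  -- e.g. 2/1A000000 ... one hour later: 2/1B800000

-- hex 'lo' parts converted and differenced (sample values from above):
SELECT ('x' || '1B800000')::bit(32)::int
     - ('x' || '1A000000')::bit(32)::int AS wal_bytes_per_hour;  -- 25165824, i.e. 24 MB
```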
Re: [GENERAL] PostgreSQL 64 Bit XIDs - Transaction IDs
On 03/22/12 10:45 PM, Gerhard Wiesinger wrote: With a database admin of a commercial database system I've discussed that they have to provide and they also achieve 2^31 transactions per SECOND! bullpucky. that's 2 transactions per NANOSECOND. light can travel about 6 inches in that time. half a bit at gigE. -- john r pierce N 37, W 122 santa cruz ca mid-left coast
Re: [GENERAL] Server choice for small workload : raptors or SSD?
On Wed, Mar 21, 2012 at 6:47 PM, Rory Campbell-Lange r...@campbell-lange.net wrote: I'd be grateful for comments on whether to go with a server with the A or B spec. Both servers have the following in common: E5620 Quad-Core / 4x 2.40GHz; LSI MegaRAID SAS 9260-4i with BBU (6Gbps); 48 GB PC3-10600 DDR3 / 1333MHz / registered ECC RAM. Server A: 4 * 300GB 10K WD Raptors in a RAID10 configuration. Server B: 2 * 500GB 7.2K SATA disks in RAID 1, plus 2 * 100GB Intel 710 Solid State SATA (270MB/s read, 170MB/s write) in RAID 1. I just purchased a big DB server... I went with the Intel 320 SSDs because they were cheaper, and I needed 8 of them. I also splurged for the 9265 LSI card, and added the FastPath option for speeding up the SSDs. This will probably add a bit over $150 to your SSD server. I have some servers now using 320s as boot drives, and they are *wicked* fast. All that aside, if your DB is 40GB now, it doesn't matter so much what your disks are, since you're going to end up running primarily out of server disk cache. Is your workload heavily read or heavily write? If you are not writing so much, then your RAM is going to make the difference between these drives pretty insignificant. And personally, I avoid WD drives like the plague. Nothing but bad experience in my large disk arrays. I prefer Hitachi or Seagate.
Re: [GENERAL] Large PostgreSQL servers
On Wed, Mar 21, 2012 at 3:31 PM, Kjetil Nygård polpo...@gmail.com wrote: We wonder if someone could give some hardware / configuration specs for large PostgreSQL installations. We're interested in: - Number of CPUs - Memory on the server - shared_buffers - Size of the database on disk

Just yesterday I purchased this system. Details:

CPU: 2 x Opteron 6212 (2.6GHz, 8-Core, G34, 16MB L3 Cache) 115W TDP, 32nm
RAM: 32GB (8 x 4GB) Operating at 1333MHz Max (DDR3-1333 ECC Registered DIMMs)
NIC: Intel 82576 Dual-Port Gigabit Ethernet Controller - Integrated
Management: Integrated IPMI 2.0 KVM with Dedicated LAN
PCIe 2.0 x8 - 1: LSI 9265-8i 6Gb/s SAS/SATA RAID (8-Port Int) with 1GB DDR3 Cache (BBU Consumes 2nd PCI Slot)
PCIe 2.0 x8 - 2: No Item Selected NOTE: SAS Drives or More Than 6 SATA Drives Require Controller (See PCIe 2.0 Slot)
Hot-Swap Drive - 1: 40GB Intel 320 Series MLC (3Gb/s) 2.5" SATA SSD
Hot-Swap Drive - 2: 40GB Intel 320 Series MLC (3Gb/s) 2.5" SATA SSD
Hot-Swap Drive - 3: 160GB Intel 320 Series MLC (3Gb/s) 2.5" SATA SSD
Hot-Swap Drive - 4: 160GB Intel 320 Series MLC (3Gb/s) 2.5" SATA SSD
Hot-Swap Drive - 5: 160GB Intel 320 Series MLC (3Gb/s) 2.5" SATA SSD
Hot-Swap Drive - 6: 160GB Intel 320 Series MLC (3Gb/s) 2.5" SATA SSD
Hot-Swap Drive - 7: 160GB Intel 320 Series MLC (3Gb/s) 2.5" SATA SSD
Hot-Swap Drive - 8: 160GB Intel 320 Series MLC (3Gb/s) 2.5" SATA SSD
Optical Drive: Low-Profile DVD-ROM Drive
Power Supply: Redundant 700W Power Supply with PMBus - 80 PLUS Gold Certified
Additional Components - Fastpath: LSI FastPath Software License (Requires 926x/928x controller)

drives 1+2: RAID 1 for boot + OS
drives 3+4: RAID 1 for db logs
drives 5-8: RAID 10 for data

The expected maximum size of the stored data is going to be about 100GB, so the goal was to have the data area about 2x that limit, since they are SSDs and work optimally with lots of room to spare. My currently largest server has 22GB RAM and I specify 5GB as shared buffers.
Most important in tuning is to get your random_page_cost right, and have enough checkpoint segments for your write load.
Re: [GENERAL] Large PostgreSQL servers
On Fri, Mar 23, 2012 at 10:01 AM, Vick Khera vi...@khera.org wrote: [full hardware spec snipped]
would love to see some performance #s from your server when it's all set up... merlin
Re: [GENERAL] Large PostgreSQL servers
On Fri, Mar 23, 2012 at 11:28 AM, Merlin Moncure mmonc...@gmail.com wrote: would love to see some performance #s from your server when it's all set up... I plan on doing some. This will also be my very first non-FreeBSD server (it is also running a custom app we just bought, and that requires CentOS), so this is very, very new to me. Any recommended tests to run? I was going to try to get Greg Smith some time on this box before it goes production to see how his tests come out, but I haven't heard from him in a while.
Re: [GENERAL] PostgreSQL 64 Bit XIDs - Transaction IDs
On 23.03.2012 11:16, Jan Kesten wrote: On 23.03.2012 06:45, Gerhard Wiesinger wrote: With a database admin of a commercial database system I've discussed that they have to provide and they also achieve 2^31 transactions per SECOND! As PostgreSQL uses transaction IDs (XIDs) in the range of 2^31 they would turn around in about one second. Wow. What application issues that many transactions? And what database system can handle that? I can't think of a single machine capable of this - and I hardly believe PostgreSQL can come close. 2^31 transactions mean that a single one lasts 0.5ns. Even the fastest DDR3-2133 has cycle times of 4ns. I have seen a database monster in action - 43 trillion (academic) transactions per day, but that's only 5*10^8 transactions per second, under a quarter of 2^31 per second. So, I can't answer your question - but you triggered my curiosity :-) I'm just answering in one of the posts ... OK, I talked to the admin again, and he was off by three zeros - and it's per MINUTE :-) So the corrected figure is 1 million transactions per minute: 1E6/60 s is about 16,667 transactions/s. At that rate, 2^32*60/1E6 = 257698 s until XID overflow, which is about 2.98 days (~1.5 days if 2^31, the signed-int border, is the limit). So autovacuum has to run within that time. Nevertheless, we are getting into an area where XID overflow will be an issue in the near future. In your example with 5E8 transactions per second, overflow would come in 4 s (2^31) or 8 s (2^32) ... So I think XID overflow should be addressed in one of the next PostgreSQL releases. Ciao, Gerhard
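As a quick sanity check on the arithmetic above (plain SQL, figures taken from the post):

```sql
-- writing transactions per second at 1 million per minute
SELECT 1000000.0 / 60 AS tx_per_sec;                          -- ~16667

-- time to consume a 2^32 XID space at that rate
SELECT 4294967296 * 60.0 / 1000000         AS seconds,        -- ~257698
       4294967296 * 60.0 / 1000000 / 86400 AS days;           -- ~2.98
```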
Re: [GENERAL] PostgreSQL 64 Bit XIDs - Transaction IDs
On Fri, Mar 23, 2012 at 11:35 AM, Gerhard Wiesinger li...@wiesinger.com wrote: [...] So I think XID overflow should be planned for one of the next PostgreSQL releases. two mitigating factors: 1. read only transactions do not increment xid counter 2. xid wraparound counter is per table. merlin
[GENERAL] plpgsql function to insert or update problem
Hi all, I am inserting apache logs into a database. I seem to have a little problem with this function:

create or replace function insert_webstat( ivhost text, iip inet, isessid text, ihittime timestamp, iurl text, istatus integer, isize integer, icompress integer, ielapsed integer, iuseragent text) returns void as $$
declare
  x text;
  vid integer;
  aid integer;
  r integer;
begin
  if iurl = '*' then return; end if;
  ... skip hopefully uninteresting stuff ...
  ... and then ...
  insert into webstats(vhost, ip, sessid, hittime, url, status, size, compress, elapsed, useragent)
  values (vid, iip, isessid, ihittime, iurl, istatus, isize, icompress, ielapsed, aid);
  if iurl like '%.php' and current_timestamp - ihittime < '24 hours'::interval then
    update by_ip set hits = hits + 1
    where vhost = vid and time = date_trunc('hour', ihittime) and ip = iip and sessid = isessid;
    if not found then
      insert into by_ip(vhost, time, ip, sessid, hits)
      values (vid, date_trunc('hour', ihittime), iip, isessid, 1);
    end if;
  end if;
end
$$ language 'plpgsql';

I have two or three web boxes that collect stats and forward them (via perl and ZeroMQ) to the stat server, which is also running a perl/ZeroMQ program. The server perl script starts a transaction, collects as many ZeroMQ messages as it can without blocking, inserts them all into the db, then commits. So, only one connection to the database. In slightly copy/paste/summarized perl:

$q = $db->prepare('select insert_webstat($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)');
# this blocks and waits for a message
my $msg = $sock->recv();
$db->begin_work;
while (1) {
    my $data = $msg->data();
    eval { $q->execute(@list) };
    if ($@) {
        my $x = $db->errstr;
        $log->print("DB Error: $x\n[", join(',', @list), "]\n");
        $q = undef;
        $db = undef;
        opendb();
        last;
    }
    $msg = $sock->recv(ZMQ_NOBLOCK);
    if (! $msg) {
        $db->commit;
        last;
    }
}

This is not the exact code; the above is just an example to describe it without too much detail. Anyway, the problem: I get a lot of DB Error messages:

DB Error: ERROR: duplicate key value violates unique constraint "by_ip_pk" DETAIL: Key (ip, sessid, time)=(97.64.237.59, 2qggi9gcdkcaoecqg3arvo1gu7, 2012-03-23 13:00:00) already exists. CONTEXT: SQL statement "insert into by_ip(vhost, time, ip, sessid, hits) values (vid, date_trunc('hour', ihittime), iip, isessid, 1)" PL/pgSQL function "insert_webstat" line 38 at SQL statement [jasper.iowaassessors.com,97.64.237.59,2qggi9gcdkcaoecqg3arvo1gu7,23/Mar/2012:13:19:48 -0500,/getSketch.php,200,1837,,6387,Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.83 Safari/535.11]

Let's see. Server is PG 9.0.7, on Slackware 64 13.1.

stats=# \d by_ip
        Table "public.by_ip"
 Column |            Type             | Modifiers
--------+-----------------------------+-----------
 vhost  | integer                     | not null
 time   | timestamp without time zone | not null
 ip     | inet                        | not null
 sessid | text                        |
 hits   | integer                     |
Indexes:
    "by_ip_pk" UNIQUE, btree (ip, sessid, time)

vhost is an integer in this table... the function turns jasper.iowaassessors.com into an integer based on a lookup table. Why is it hitting the insert statement? Why doesn't the "if not found" seem to work? Any hints would be appreciated. -Andy
Re: [GENERAL] PostgreSQL 64 Bit XIDs - Transaction IDs
Merlin Moncure mmonc...@gmail.com writes: On Fri, Mar 23, 2012 at 11:35 AM, Gerhard Wiesinger li...@wiesinger.com wrote: So I think XID overflow should be planned for one of the next PostgreSQL releases. two mitigating factors: 1. read only transactions do not increment xid counter Yes. Ask your admin what his throughput of *data modifying* transactions is. I'll bet that number is a few orders of magnitude smaller again. 2. xid wraparound counter is per table. That unfortunately isn't so, the XID counter is global to an installation. regards, tom lane
Re: [GENERAL] PostgreSQL 64 Bit XIDs - Transaction IDs
On 03/23/12 9:17 AM, Gerhard Wiesinger wrote: So in that time autovacuum is triggered. autovacuum runs pretty much continuously in the background; it's not an on/off thing. -- john r pierce N 37, W 122 santa cruz ca mid-left coast
Re: [GENERAL] PostgreSQL 64 Bit XIDs - Transaction IDs
On Fri, Mar 23, 2012 at 1:57 PM, Tom Lane t...@sss.pgh.pa.us wrote: 2. xid wraparound counter is per table. That unfortunately isn't so, the XID counter is global to an installation. yup -- thinko: I was thinking about the oid counter, not xid. thanks merlin
Re: [GENERAL] PostgreSQL 64 Bit XIDs - Transaction IDs
On Fri, 23 Mar 2012, John R Pierce wrote: On 03/23/12 9:17 AM, Gerhard Wiesinger wrote: So in that time autovacuum is triggered. autovacuum runs pretty much continuously in the background; it's not an on/off thing. Yes, I know. I meant that it runs at least once in 1.5 days. Ciao, Gerhard -- http://www.wiesinger.com/
Re: [GENERAL] PostgreSQL 64 Bit XIDs - Transaction IDs
On Fri, 23 Mar 2012, Tom Lane wrote: Merlin Moncure mmonc...@gmail.com writes: On Fri, Mar 23, 2012 at 11:35 AM, Gerhard Wiesinger li...@wiesinger.com wrote: So I think XID overflow should be planned for one of the next PostgreSQL releases. two mitigating factors: 1. read only transactions do not increment xid counter Yes. Ask your admin what his throughput of *data modifying* transactions is. I'll bet that number is a few orders of magnitude smaller again. Those were all small writing transactions benchmarking the database (with the corrected values discussed). Ciao, Gerhard -- http://www.wiesinger.com/
[GENERAL] Change the default [tgenabled] for new internal triggers ?
Today I learned that . . . the firing of even internally generated constraint triggers is affected by the value of [session_replication_role]. Sadly, I had previously assumed that such internal triggers did not care about [s_r_r]. Also learned that . . . when a FK constraint gets made, its pg_trigger.tgenabled = 'O'. QUESTION: Is there a way to configure Postgres such that tgenabled = 'A' automatically when the FK constraint gets made? Or must I !un-automatedly! do a . . . alter table MYTABLE ENABLE ALWAYS TRIGGER "RI_ConstraintTrigger_18656" whenever somebody makes a FK ?!? Thanks, -dvs-
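For reference, the catalog state in question can be inspected, and the trigger flipped per table, along these lines (the table name is a placeholder; on 9.0+ tgisinternal marks the generated RI triggers, and the mixed-case generated name must be double-quoted):

```sql
-- list internally generated triggers on a table and their enabled state
SELECT tgname, tgenabled
FROM pg_trigger
WHERE tgrelid = 'mytable'::regclass
  AND tgisinternal;

-- flip one of them to fire always, regardless of session_replication_role
ALTER TABLE mytable ENABLE ALWAYS TRIGGER "RI_ConstraintTrigger_18656";
```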
Re: [GENERAL] PostgreSQL 64 Bit XIDs - Transaction IDs
On 03/23/2012 12:12 PM, Gerhard Wiesinger wrote: On Fri, 23 Mar 2012, John R Pierce wrote: On 03/23/12 9:17 AM, Gerhard Wiesinger wrote: So in that time autovacuum is triggered. autovacuum runs pretty much continuously in the background; it's not an on/off thing. Yes, I know. I meant that it runs at least once in 1.5 days. http://www.postgresql.org/docs/9.1/interactive/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND -- Adrian Klaver adrian.kla...@gmail.com
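To see how close each database actually is to forcing a wraparound vacuum, you can compare the age of its oldest unfrozen XID against autovacuum_freeze_max_age. A sketch:

```sql
-- Age of the oldest unfrozen XID per database; autovacuum forces a
-- wraparound (anti-wraparound) vacuum once this approaches
-- autovacuum_freeze_max_age.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;

SHOW autovacuum_freeze_max_age;
```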
Re: [GENERAL] Change the default [tgenabled] for new internal triggers ?
david.sahag...@emc.com writes: Is there a way to configure Postgres such that tgenabled = 'A' automatically when the FK constraint gets made? No. Why do you think that would be a good idea? ISTM it'd lead to the action being taken twice on the slave. regards, tom lane
[GENERAL] broken xlog - recovery plan check
Hi, I seem to have an 8.3.9 database with a broken xlog: PANIC: heap_insert_redo: invalid max offset number. My plan is to run pg_resetxlog and hope it then starts up; test recent data as thoroughly as possible (script some SELECT *'s?); if OK, curse ops and their RAID caches; if not, curse ops and tell them to recover from backup (very large and therefore a very slow process). Can anyone give me feedback on this plan? Cheers, Colin.
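The "script some SELECT *'s" sanity check mentioned above might look like the sketch below (table and column names are placeholders; pg_resetxlog can silently lose the most recent transactions, so reading every page and eyeballing the newest rows is the point):

```sql
-- Force a sequential read of every page in each important table;
-- page-level corruption will surface as errors here.
SELECT count(*) FROM my_big_table;

-- Spot-check that the most recent rows look sane and consistent.
SELECT *
FROM my_big_table
ORDER BY created_at DESC
LIMIT 100;
```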
[GENERAL] about numerical accuracy of statistical functions
Hi, I want to use statistical functions, such as variance(), stddev() and covar_samp(), to compute some basic statistics of my data. I have strict requirements for the numerical accuracy of the results, so I want to know whether the implementations of these functions are numerically stable. For example, variance can be computed as sum(x^2)/n - sum(x)^2/n^2, but this computation can lead to serious accuracy problems with double precision. Can anybody shed light on what algorithms are used to implement these functions and whether they are numerically stable? Cheers, Yuanyuan
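The catastrophic cancellation the poster describes is easy to reproduce: with data whose mean is large relative to its spread, the naive sum-of-squares formula in float8 loses essentially all significant digits, while aggregating in numeric avoids the rounding. A sketch (the exact population variance of the four values below is 22.5, since the deviations from the mean are ±3 and ±6):

```sql
-- Values near 1e9 with a tiny spread: sum(x^2) is ~4e18, where a float8
-- ulp is in the hundreds, so the naive subtraction cancels catastrophically.
WITH t(x) AS (
    VALUES (1000000000.0 + 4), (1000000000.0 + 7),
           (1000000000.0 + 13), (1000000000.0 + 16)
)
SELECT sum((x::float8)^2)/count(*) - (sum(x::float8)/count(*))^2 AS naive_var,
       var_pop(x::numeric) AS exact_var   -- should be 22.5
FROM t;
```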
[GENERAL] pgcon 2012
Dear sir, we are recent users of Postgres and wish to be more aware of, and soak up, the activities around the DB. We used to develop more on a MySQL/SQL Server combo; now we are using PG 9.0.5 with NPGSQL + Visual C# and Java. Is http://www.pgcon.org/2012/?2nd a valid and certified Postgres event, or is it just a marketing strategy by a private event-management firm? We are especially interested in: the latest PG developments; learning about PG stats, logs, and the pg_settings table; and hiring Postgres-related manpower. Because we are based in Delhi, it will take effort to send candidates to the event. Thanking you, yours truly, Arvind
Re: [GENERAL] PostgreSQL 64 Bit XIDs - Transaction IDs
VoltDB, maybe. - Original Message - On 06:31 AM 03/23/2012, Frank Lanitz wrote: On 23.03.2012 14:23, Adrian Klaver wrote: I would say either they got the numbers wrong or someone is pulling your leg. That rate is not going to happen. Maybe Twitter or Facebook, all in all... Cheers, Frank