Re: [ADMIN] PostgreSQL 9.2 - pg_dump out of memory when backuping a database with 300000000 large objects
On Tue, Oct 1, 2013 at 4:01 AM, Giuseppe Broccolo giuseppe.brocc...@2ndquadrant.it wrote: Maybe you can improve your database by changing some parameters properly: max_connections = 500 # (change requires restart) Set it to 100, the highest value supported by PostgreSQL Surely you mean that max_connections = 100 is the *default*?
Re: [ADMIN] Dumping a database that is not accepting commands?
On Tue, Sep 17, 2013 at 9:48 AM, Natalie Wenz nataliew...@ebureau.com wrote: maintenance_work_mem | 10GB shared_buffers | 128MB maintenance_work_mem seems pretty high, and shared_buffers seems really low. Out of curiosity, were those set as a product of internal testing which determined those were effective settings?
Re: [ADMIN] wrong database name in error message?
On Sat, Sep 14, 2013 at 6:05 PM, Rural Hunter ruralhun...@gmail.com wrote: On 2013/9/15 1:06, Kevin Grittner wrote: Rural Hunter ruralhun...@gmail.com wrote: Why in the world would you want to use VACUUM FULL in this circumstance? the db name in the error message wrong? I just googled around and found the solution. What's the other option? A plain VACUUM will suffice -- there is no need to also use the FULL keyword in this case (wraparound).
Re: [ADMIN] question on the information_schema
On Fri, Aug 23, 2013 at 8:02 AM, Salvatore Barone salvator.bar...@gmail.com wrote: On 23/08/2013 16:57, bricklen wrote: select * from information_schema.columns where table_schema not in ('information_schema','pg_catalog') Both queries, whether sent as user ssaa (i.e. the user that I use daily), return an empty table. If sent as user postgres instead, they return the correct list of columns. See David Johnston's response, I think that might be the reason why you are not seeing anything (no tables to show columns of).
Re: [ADMIN] question on the information_schema
On Fri, Aug 23, 2013 at 8:12 AM, Salvatore Barone salvator.bar...@gmail.com wrote: I'm not stupid, No one said you were. These lists see people of a wide range of skillsets and expertise. In this case it might not be the answer for your issue, but David's suggestion might help someone else having a similar problem in the future. I added some columns before sending the query. I'm trying to write a query that returns the name and other information about the columns of a database. The query that is sent is the following: select * from information_schema.columns col where col.table_schema != 'pg_catalog' and col.table_schema != 'information_schema' If I send the query as postgres, the result is a table that contains the right column list. If I send the query as ssaa, the result is an empty table. I must be able to retrieve the list of tables as user ssaa. Logged in as ssaa, can you show the output of the following (in psql): \dt+ and show search_path;
Re: [ADMIN] question on the information_schema
2013/8/23 Salvatore Barone salvator.bar...@gmail.com wrote: This is the output of \d from psql. I logged in as ssaa:

                       Lista delle relazioni
 Schema |            Nome             |  Tipo   | Proprietario | Dimensione | Descrizione
--------+-----------------------------+---------+--------------+------------+-------------
 public | acquisti_persona_fisica     | tabella | platinet     | 8192 bytes |
 public | acquisti_persona_giuridica  | tabella | platinet     | 0 bytes    |
 public | articolo                    | tabella | platinet     | 8192 bytes |
 public | fornitore                   | tabella | platinet     | 0 bytes    |
 public | giacenza                    | tabella | platinet     | 0 bytes    |
 public | persona_fisica              | tabella | platinet     | 8192 bytes |
 public | persona_giuridica           | tabella | platinet     | 8192 bytes |
(7 righe)

And this is the output of show search_path;

 search_path
--------------
 $user,public
(1 riga)

I'm stumped. I am unable to reproduce the lack of output from your query when logged in as a non-superuser, and I can't think of a reason why you are seeing no results. Hopefully someone else can chime in with more suggestions or an answer.
Re: [ADMIN] Only WAL archive left. Need to restore data.
On Fri, Aug 9, 2013 at 4:06 AM, Sergey Arlashin sergeyarl.maill...@gmail.com wrote: Hi, I have a 2GB backup of the WAL archive folder but the backup of the postgres database cluster directory is lost. However I need to restore any data possible from these WAL logs. Is that possible? What do I do to get the data? I haven't used it personally, but xlogdump might work somewhat: https://github.com/snaga/xlogdump
Re: [ADMIN] unexpected EOF on client connection during pg_dumpall
On Fri, Aug 2, 2013 at 11:33 AM, Arnold, Sandra L. arnol...@ornl.gov wrote: 2013-08-02 14:04:40.981 EDT,backup,postgres,21309hostname:40353,51fbf3a1.533d,48,idle,2013-08-02 14:00:01 EDT,,0,LOG,0,disconnection: session time: 0:04:39.039 user=backup database=postgres host=hostname port=40353,pg_dumpall It looks like it completed the COPY statement for the table that it was backing up right before the error message occurred. I also do not see anything in the backup file that indicates what is happening. Is the entire process occurring on the same host, or is it across a network? If the latter, could it be due to timeouts from ssh, or tunnels, or internal load balancers/connection poolers?
Re: [ADMIN] Disk latency goes up during certaing pediods
On Wed, Jul 31, 2013 at 9:25 AM, German Becker german.bec...@gmail.com wrote: To all who might be interested, I have an update on this. I ran some tests on the old production DB which was Postgres 8.3 (and only one disk for everything), using pgreplay, running the same queries as the 9.1 server. Here is the output of iostat for the 8.3 server: ... Here is the output for 9.1: ... What kernel are you running? Could it be related to recent discussions about the 3.2 kernel? http://markmail.org/message/qosngswoy5lqmxlr
Re: [ADMIN] Disk latency goes up during certaing pediods
On Tue, Jul 30, 2013 at 8:35 AM, German Becker german.bec...@gmail.com wrote: 256 was set some time ago when we were testing a different issue. I read that the only drawback is the amount of time required for recovery, which was tested and it was like 10 seconds for the 256 segments, and higher values mean less disk usage. Anyway, all these parameters should affect the throughput to the data disks, not the WAL, am I right? checkpoint_completion_target is to help with checkpoint smoothing, to reduce the spike in disk I/O when shared_buffers are written out. Depesz has a good article about that: http://www.depesz.com/2010/11/03/checkpoint_completion_target/ Do your graphs show any correlation between the number of WAL segments getting recycled and disk I/O spikes? Are you logging checkpoints? If so, you could use the checkpoint times to compare against your I/O graphs. I am by no means an expert here, I'm just throwing out ideas (which might already have been suggested).
Re: [ADMIN] Disk latency goes up during certaing pediods
On Mon, Jul 29, 2013 at 12:41 PM, German Becker german.bec...@gmail.com wrote: Luis, The disk only has the WAL (pg_xlog) directory. Brett, Here are the mount options: /dev/sdb1 on /storage/sdb1 type ext3 (rw,noatime,data=writeback,errors=remount-ro) BTW the original fs was ext4, now I am trying with ext3, with the exact same results. No noticeable changes using different journal modes. I also tried disabling the journal altogether, which dramatically reduced the disk usage, but nevertheless there were these latency spikes. I haven't been following this thread and might have missed it, but did you show your checkpoint_completion_target?
Re: [ADMIN] Disk latency goes up during certaing pediods
On Mon, Jul 29, 2013 at 1:28 PM, German Becker german.bec...@gmail.com wrote: checkpoint_segments = 256 # in logfile segments, min 1, 16MB each I'm curious about checkpoint_segments. 256 seems pretty high -- did testing show that it helps? checkpoint_completion_target = 0.7 # checkpoint target duration, 0.0 - 1.0 0.7 could be bumped up to 0.9, but I doubt that will make a very noticeable difference for this particular issue.
[ADMIN] PG wiki updated with query to show unindexed foreign keys
https://wiki.postgresql.org/wiki/Unindexed_foreign_keys Suggestions and feedback welcome on the query referenced in the link above.
Re: [ADMIN] creating a circular replication in postgres
On Thu, Jul 4, 2013 at 11:59 PM, Jayesh Nambiar jayesh1...@gmail.com wrote: I am trying to achieve circular replication with postgres; meaning two postgres servers are slaves of each other. Are you looking to do Multi-Master replication? I went through Bucardo which uses perl scripts to achieve this, but I am looking at something more native if available. PostgreSQL does not support multimaster replication in the core product, but there are 3rd party tools that support it. Some other options are listed at: https://wiki.postgresql.org/wiki/Replication%2C_Clustering%2C_and_Connection_Pooling
Re: [ADMIN] excessive WAL activity
On Wed, Jun 19, 2013 at 4:22 PM, Sean Dillon s...@dillonsoftware.com wrote: Just turned on WAL archiving to an S3 bucket for a small database - total size of perhaps 2-4G. After turning on archiving, we're seeing WAL logs written to S3 at the rate of about 1G every 3 minutes. That seems completely unreasonable given usage of the db. I can even see that nearly nothing is happening with this: select datname, usename, procpid, client_addr, waiting, query_start, current_query from pg_stat_activity; Nearly every time I run that, all 20 connections have current_query = 'IDLE'. Does current_query include inserts, updates, and deletes or just select statements? Any ideas what to look for or how to solve this? Can you show the results from: SELECT name, current_setting(name), source FROM pg_settings WHERE source NOT IN ('default', 'override') UNION ALL SELECT 'version' as name, version(), null; Perhaps there is a setting there (like archive_timeout) which could lead to an answer. In your $PGDATA/pg_xlog directory, look at the timestamps of the WAL segments; are there lots of files generated per minute?
Re: [ADMIN] Copy one database to another server
On Thu, May 2, 2013 at 3:01 PM, Huan Ruan huan.ruan...@gmail.com wrote: create a new db using the current one as a template. Easy to do and speed is acceptable. A bit of downtime is ok, but it's only on the current server. Is there a way to move it to another server? FWIW, I recently set up a 160GB clone by simply piping from pg_dump over the network into psql, alleviating the need to dump to disk. In my case, the server did not have enough space to hold the pg_dump output, so I had to stream it. pg_dump -Fp your_db | psql -h new_server -U postgres your_new_db You could also search the archives for caveats and other (and safer) ways to do this.
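A variant worth noting (illustrative only, not from the thread; the host and database names are hypothetical): if network bandwidth is the bottleneck, the same pipe can be compressed in flight over ssh.

```shell
# Compress the dump stream on the source, decompress on the target
pg_dump -Fp your_db | gzip | ssh new_server "gunzip | psql -U postgres your_new_db"
```

The same caveats apply as for the uncompressed pipe: there is no on-disk copy to fall back on if the transfer fails partway through.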
Re: [ADMIN] High growth company seeks postgres DBA
You might have better luck in the postgresql list specifically for job postings. http://www.postgresql.org/list/pgsql-jobs/ On Mon, Apr 8, 2013 at 3:26 PM, Shiffrin, Sharda sharda.shiff...@asurion.com wrote: We’re looking to hire a DBA with postgres experience. If you or anyone you know might be interested in a full time permanent position (not consulting), please contact me as soon as possible for more information. This is an urgent need. Best, Sharda Shiffrin sharda.shiff...@asurion.com voice (650) 645-5408 mobile (650) 730-3789 Asurion | 1400 Fashion Island Blvd. Suite 450, San Mateo, CA 94404 | www.asurion.com | Twitter https://twitter.com/Asurion | Facebook https://www.facebook.com/Asurion
Re: [ADMIN] diagnosing a db crash - server exit code 2
On Wed, Sep 28, 2011 at 12:54 PM, Robert Burgholzer rburg...@vt.edu wrote: If anyone has any suggestions as to how to run the trace via a nohup command or something, that would be cool, since then I could let it run in the background. If you have screen installed, maybe try it in a screen session. -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] Prepared statments: partial indexes are avoided!
On Mon, Jun 20, 2011 at 7:06 AM, Simon Riggs si...@2ndquadrant.com wrote: Unfortunately, prepared statements do act in the way you have seen. I have a patch into 9.2 under discussion to improve upon this situation, but don't hold your breath for that. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services I haven't used Oracle in years, but I seem to recall that they had bind variable peeking to work around this type of problem. That would be interesting to have in PG.
Re: [ADMIN] Oracle Label Security/ Row Level Security on Postgresql
On Mon, Mar 7, 2011 at 7:00 AM, H S aras_h1...@yahoo.com wrote: Dear Sirs, We would like to implement Oracle Label Security or Row Level Security or associated concepts/mechanisms on PostgreSQL. What projects have been done on this by now? Please refer me to them. I would like to hear anything from you. What should I do, and ... Regards,... Sara VEIL might be of use to you: http://pgfoundry.org/projects/veil I've never used it though.
Re: [HACKERS] Re: [ADMIN] PD_ALL_VISIBLE flag was incorrectly set happend during repeatable vacuum
On Wed, Mar 2, 2011 at 3:53 PM, daveg da...@sonic.net wrote: Postgresql version is 8.4.4. I don't see how this could be related, but since you're running on NFS, maybe it is, somehow: http://archives.postgresql.org/message-id/4d40ddb7.1010...@credativ.com (for example what if the visibility map fork's last page is overwritten?) Running on iSCSI, not NFS. But it is still a Netapp, so who knows. I'll look. Also, we are not seeing any of the unexpected data beyond EOF errors, just thousands per day of the PD_ALL_VISIBLE error. -dg FWIW, we had a couple occurrences of that message about a month ago on 9.0.2 http://archives.postgresql.org/pgsql-general/2011-01/msg00887.php Haven't seen it since we ran a cluster-wide vacuum.
Re: [ADMIN] unknown log messages
On Wed, Jan 5, 2011 at 7:16 AM, Little, Douglas douglas.lit...@orbitz.com wrote: Env: Greenplum 3.3.6 (based on PG 8.2.13), Solaris 10u5, Sun X4540 (x86) We are probing the machine with sitescope for cpu/mem/disk stats, but I don’t believe it’s connecting to the db. 2011-01-01 13:01:56.848368 CST,root,root,p427,th1,[local],,2011-01-01 13:01:56 CST,0,,,seg-1,FATAL,28000,no pg_hba.conf entry for host [local], user root, database root, SSL off,,,0,,auth.c,369,Traceback 0: a12ca6: /usr/local/greenplum-db-3.3.6.6/bin/postgres errstart+0x3e6 Do you actually have a database named root and a db user named root? If not, I'd check your client connection details to ensure you pass in the database name and user. Also, if you have a support contract, you could contact Greenplum -- I've generally found them to be pretty responsive. Barring that, you could also try posting to http://community.greenplum.com
Re: [ADMIN] unknown log messages
On Wed, Jan 5, 2011 at 9:15 AM, Scott Ribe scott_r...@elevated-dev.com wrote: On Jan 5, 2011, at 9:34 AM, Little, Douglas wrote: No dbuser named Root. That's my point, you don't have a db user named root, but you do (maybe) have a process running as root trying to connect to the db. So it sounds like the log messages are standard for a login attempt with unknown user. No, they are standard for root trying to connect to the database when you do not have a db user named root. As Scott says, check your root crontab for any scripts that might connect to the db, and check the connection string. The [local] part means that the call is being initiated locally on that box, else it would show the IP.
Re: [ADMIN] Greenplum: Functions that execute SQL statements from the segDBs are not yet supported.
On Sun, Jan 2, 2011 at 10:55 PM, kirancnair kirn...@gmail.com wrote: Hi, I get this error when I try to call a function from an Insert Trigger set on one of my tables in the SNE GP Edition: ... What can be the cause of this? Triggers + functions are working perfectly fine with another table in the same DB. Thanks in advance! Rgds, Kiran If you don't have a support contract with GP, then you can try getting help from here: http://community.greenplum.com/index.php
Re: [ADMIN] Is any 'upgrade' required to move a 9.0.1 cluster to 9.0.2?
On Mon, Dec 20, 2010 at 8:06 AM, Lou Picciano loupicci...@comcast.net wrote: Gang, Is any 'upgrade' required to move a 9.0.1 cluster to 9.0.2? TIA, Lou No, just upgrade the binaries and bounce your cluster.
Re: [ADMIN] PGPOOL-II
On Fri, Oct 1, 2010 at 1:48 PM, Willy-Bas Loos willy...@gmail.com wrote: I've installed postgresql 8.4.4. Hi, Usually you need a ...-dev package if you want to compile anything against it. But i can't find a postgresql-8.4-dev in my ubuntu lucid (10.04). That might mean that you have to compile postgres also, or find a repository for the -dev package. hth WBL postgresql-server-dev-8.4?
Re: [ADMIN] select count is too slow
Did you vacuum your table after the delete? On Thu, Aug 20, 2009 at 2:36 AM, Kumar Anand kumar.an...@merceworld.com wrote: Dear all, I have faced this problem for the last few days. Here is the test2 table, with only one column, id:

erp_test=# \d test2
     Table "public.test2"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer |

I insert 10,000,000 entries in this table:

erp_test=# INSERT INTO test2 VALUES (generate_series(1,10000000));
INSERT 0 10000000

then I delete all the entries:

erp_test=# delete from test2;
DELETE 10000000

and I insert only 10 entries:

erp_test=# INSERT INTO test2 VALUES (generate_series(1,10));
INSERT 0 10

Now I try to count the rows in this table, which takes a long time for only these 10 rows (about 2-3 seconds):

erp_test=# SELECT count(*) from test2;
 count
-------
    10
(1 row)

This is the output of EXPLAIN ANALYZE for the same query:

erp_test=# EXPLAIN ANALYZE SELECT count(*) from test2;
                                                    QUERY PLAN
------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=198652.13..198652.14 rows=1 width=0) (actual time=2123.471..2123.472 rows=1 loops=1)
   ->  Seq Scan on test2  (cost=0.00..169732.70 rows=11567770 width=0) (actual time=2123.454..2123.454 rows=10 loops=1)
 Total runtime: 2123.609 ms
(3 rows)

Can anyone help me speed up my select count query? -- Thanks & Regards, Kumar Anand
Re: [ADMIN] select count is too slow
VACUUM ANALYZE, rather. On Thu, Aug 20, 2009 at 2:21 PM, bricklen brick...@gmail.com wrote: Did you vacuum your table after the delete?
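To make the suggestion concrete, a minimal sketch (mine, not from the thread) of the vacuum step against the poster's test2 table:

```sql
-- A mass DELETE leaves dead tuples that sequential scans must still read.
-- VACUUM makes that space reusable (and can truncate trailing empty pages);
-- ANALYZE refreshes the planner statistics at the same time.
VACUUM ANALYZE test2;
SELECT count(*) FROM test2;  -- should now be fast again
```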
Re: [ADMIN] CSV Utility
Take a look at the CSV option of COPY. That should do what you're after. http://www.postgresql.org/docs/8.4/interactive/sql-copy.html On Thu, Aug 6, 2009 at 2:27 PM, Mike angeloangelo...@yahoo.com wrote: Is there a Postgres utility that allows data contained in a CSV file to be loaded into the database?
Re: [ADMIN] Who's attached to the database?
Carol Walter wrote: Thanks, guys, This told me that the user that has it open is the interface user called db_user. Is there a command to disconnect this user? Carol select pg_cancel_backend(procpid of connection);
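A hedged sketch of the full sequence (the pid shown is hypothetical); note that pg_cancel_backend() only cancels the backend's current query -- on 8.4 and later, pg_terminate_backend() will disconnect the session entirely:

```sql
-- Find the backend pid for the connection (the column is "procpid" before 9.2)
SELECT procpid, usename, current_query
FROM pg_stat_activity
WHERE usename = 'db_user';

-- Cancel that backend's current query, using the pid found above
SELECT pg_cancel_backend(12345);
```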
Re: [ADMIN] sequences
Alexander B. wrote: Hi, I tried to find, but I didn't, I would like to know what's the view to list all sequences! Thank you \ds or select * from pg_class where relkind = 'S'; ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [ADMIN] pg_dump inquiry
Karthikeyan Sundaram wrote: Hi, I have to dump only 10 tables out of 100 tables. In the pg_dump utility given by postgres there is an option called -t followed by table name. In that option, if I give more than 1 table, it's not accepting. How can I get the dump in one stroke for all the 10 tables? Please advise. Regards skarthi If you are using postgresql 8.2, you can specify multiple tables http://www.postgresql.org/docs/8.2/interactive/app-pgdump.html Don't think that that was possible before 8.2, though.
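An illustrative 8.2-style invocation (the database and table names are made up, not from the thread) -- the -t switch is simply repeated once per table:

```shell
# Dump ten specific tables by repeating -t (supported from 8.2 on)
pg_dump -t tab1 -t tab2 -t tab3 -t tab4 -t tab5 \
        -t tab6 -t tab7 -t tab8 -t tab9 -t tab10 \
        -f ten_tables.sql mydb
```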
Re: [ADMIN] Performances
Cedric BUSCHINI wrote: Hello everyone, A quick question: - how long does it take you to perform that query : SELECT COUNT(*) FROM A_TABLE; Not long :) dev5=# SELECT COUNT(*) FROM A_TABLE; ERROR: relation "a_table" does not exist
Re: [ADMIN] Dump and Query
Andy Shellam (Mailing Lists) wrote: Hi Enrico, The following command will get you a text file of your result-set: # echo "SELECT customer_id, first_name, sur_name FROM users;" | /usr/local/pgsql/bin/psql -U [username] -d [database] > myfile.txt # cat myfile.txt Alternative version: psql -d yourdbname -c "SELECT customer_id, first_name, sur_name FROM users" -o myfile.txt
Re: [ADMIN] Simple Unload
Naomi Walker wrote: From time to time, I need to unload rows to a delimited file, specifically with a where clause. I've cobbled a script together to do this, but it seems like a reasonable utility to support. Sort of a pg_dump on steroids.. Have I missed the simple way to do this? Would someone consider adding such a utility or adding this to pg_dump? Naomi 8.2 COPY appears to be able to output in csv format, using a WHERE clause, though I've not tried it. http://www.postgresql.org/docs/8.2/static/sql-copy.html
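A minimal sketch of what that might look like (table, column, and output path are hypothetical): since 8.2 COPY can take an arbitrary SELECT, so the WHERE clause goes inside it.

```sql
-- Unload a filtered subset of rows to a CSV file (server-side path)
COPY (SELECT * FROM mytable WHERE created < now() - interval '3 months')
TO '/tmp/unload.csv' WITH CSV HEADER;
```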
Re: [ADMIN] Missing Earth Distance Functions Under Debian
Duncan McDonald wrote: Hi Tom, Thanks for the reply. No I didn't run the earthdistance.sql script on the backup database, is this included with the standard PostgreSQL package? If not, would you mind letting me know how/where to obtain it? I'm relatively new to PostgreSQL administration so I apologise in advance if this is an obvious question. Regards, -Duncan I think that it is in the postgresql-contrib-8.1 deb package available via aptitude (or whichever package manager you are using).
Re: [ADMIN] Copying data from table to table (cloned tables)
Fourat Zouari wrote: Hello all, Could anyone suggest the best way to copy data from table to table in the same db? The reason I'm seeking this is that the first table is becoming very big, and old data has no reason to stay there, so I created a cloned table but without indexes and constraints (lighter) and now I would like to copy 'some' data from the first to the second table (data older than 3 months; I have a timestamp column). In other words, I have a table called 'hotqueues' where I store fresh queued messages; once messages are treated, they stay in 'hotqueues' but with a flag indicating that they aren't queued for treatment. So data will rest there forever, slowing down any searches in that table. The solution was to copy old messages to another table called 'coldqueues' that has the same structure as 'hotqueues' but lighter (without constraints and indexes). How can I copy these data with 100% safety against data loss? Thanks for any help you can provide. If you just want to copy the data across to the other table: begin; insert into table2 select * from table1 where some criteria; commit; If you also want to remove that same data from table1: begin; insert into table2 select * from table1 where some criteria; delete from table1 where same criteria as above; commit;
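Applied to the poster's own tables, a minimal sketch, assuming the timestamp column is named "created" (the thread does not give its real name):

```sql
BEGIN;
-- Copy messages older than 3 months into the cold table
INSERT INTO coldqueues
SELECT * FROM hotqueues WHERE created < now() - interval '3 months';
-- Remove exactly the same rows from the hot table
DELETE FROM hotqueues WHERE created < now() - interval '3 months';
COMMIT;
```

Because both statements run in one transaction, the rows either move completely or not at all.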
Re: [ADMIN] IsDate function in plpgsql
Sistemas C.M.P. wrote: A few days ago, someone on this list gave me a function (plpgsql) that evaluates whether a string is a valid date or not. It returns 1 or 0 (valid/not valid). The problem is that if the argument is a NULL field the function returns 1. I don't have experience with the plpgsql language. This is the function: begin perform $1::date; return 1; exception when others then return 0; end

create or replace function datetest (text) returns integer as $$
begin
    if ($1 is null) then
        return 0;
    end if;
    perform $1::date;
    return 1;
exception when others then
    return 0;
end;
$$ language plpgsql;
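A quick illustration of calling it (the example inputs are mine, not from the thread):

```sql
SELECT datetest('2007-01-15');  -- 1: the cast to date succeeds
SELECT datetest('not a date');  -- 0: the cast fails and the exception block catches it
SELECT datetest(NULL);          -- 0: caught by the explicit null check
```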
Re: [ADMIN] Backing up views, functions
Benjamin Arai wrote: I have a database of several million records and we are currently developing pl functions and views. How do you dump only the code for views and functions? Benjamin This has come up before, if you search the archives, you'll find various ways of accomplishing this. Here's a start (one way of getting function source) http://archives.postgresql.org/pgsql-general/2005-10/msg01633.php
Re: [ADMIN] Backing up views, functions
Benjamin Arai wrote: I have a database of several million records and we are currently developing pl functions and views. How do you dump only the code for views and functions? Benjamin Also, one way of recreating views: select 'drop view '||viewname||'; CREATE OR REPLACE view '||viewname||' as '||definition||'\n' from pg_views where schemaname='public';
Re: [ADMIN] the time for [EMAIL PROTECTED] has come?
Guido Barosio wrote: All, (boring mail ahead) Writing here to get in contact with major people. Recently, with the new Windows port, and a great massive reception of the new release, I've noticed that too many threads are being sent to pgsql-admin regarding: How do I install How do I backup What is initdb etc. The point is that I am wondering about the need of a new mailing list, to isolate these cases, and give them better support, before upgrading into a list like the admin one. A big distance between some threads and the ones that I mention already exists, and I wonder about a few orphan threads, without any kind of reply, because they are silly or the help was not properly requested. pgsql-starting seems to be a good way to isolate these threads and get a better idea of the common problems in getting started with postgresql. Recording / analyzing these threads could lead to a better experience in the future, or the chance to understand who these starters are (profiles, background, etc). And I would kindly put some time there. My 5 cents (more than 2 ;) ) Best regards, g.- -- "Adopting the position that you are smarter than an automatic optimization algorithm is generally a good way to achieve less performance, not more" - Tom Lane. Might be more useful to sift the various questions and answers and add them to the FAQ. More obvious pointers of where to go to find the FAQ might also help reduce the volume of those questions.
Re: [ADMIN] upgrade database to 8.1 - 2GB file limit (pg_dump)
Tomeh, Husam wrote: I'm upgrading from 8.0 to 8.1 on RedHat 2.6 kernel. I'm hitting the file max limit of 2 GB file when I pg_dump my database (even if I compress the dump file as it gets generated using a pipe). pg_dump is the recommendation stated in the INSTALL doc to upgrade; what would be the next recommendation to upgrade the database? Can I use the -Fc option in pg_dump instead of the plain-text (default) ? Thanks in advance. --- Husam Tomeh How about using split to break up your file sizes? http://www.postgresql.org/docs/8.1/interactive/backup.html
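An illustrative pipeline along the lines of the linked backup documentation (the file names are hypothetical): split breaks the compressed dump stream into chunks under the 2GB limit, and cat recombines them at restore time.

```shell
# Dump, compress, and split into 1GB pieces as the stream is generated
pg_dump mydb | gzip | split -b 1000m - dump.sql.gz.
# Restore by concatenating the pieces back into one stream
cat dump.sql.gz.* | gunzip | psql mydb
```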
Re: [ADMIN] Moving pg_xlog problem
Joost Kraaijeveld wrote: lrwxrwxrwx 1 root root pg_xlog - /opt/pg_xlog Maybe this one here? Try chown'ing it to postgres:postgres and see what happens -- ___ This e-mail may be privileged and/or confidential, and the sender does not waive any related rights and obligations. Any distribution, use or copying of this e-mail or the information it contains by other than an intended recipient is unauthorized. If you received this e-mail in error, please advise me (by return e-mail or otherwise) immediately. ___
Re: [ADMIN] sleep?
Don Drake wrote: Is there a sleep function in plpgsql? I need to wait a period of time (60 seconds) in a while loop. I didn't see anything in the docs. Thanks. -Don

create or replace function sleep(integer) returns void as $$
    return sleep(shift);
$$ language plperlu immutable strict;

select sleep(5); -- (sleeps for 5 seconds)
Re: [ADMIN] select * and save into a text file failed
Lee Wu wrote: How can I save PG data into text file without using pg_dump? Did you try, in psql: \o /var/tmp/textfile.txt followed by select * from table; (\o redirects subsequent query output to the file)
Re: [ADMIN] catastrophic error
Joel Fradkin wrote: Hi, I have been live for 4 days (vacuums run each night and backups done each night). Today around 2:30 PM EST my web app returned a catastrophic error. Both web servers appeared to have the issue. I could go on them and get data via pgadmin. I could log on the server (IIS servers are win2k and pg is Redhat AS4 running 8.0.2) and see it was not using much memory or cpu. Neither the web or database servers seemed stressed? Any ideas what I should look at? I re-booted the IIS servers and it did not fix the issue. I rebooted the database server and the web servers are back to connecting. Being new to postgres I am not sure what to look at for the cause and hopefully permanent fix to this. Thanks in advance to any ideas (I did search the archive, but only saw a mention of pre 8 versions and oid numbering wraparound). Joel Fradkin Any messages in syslog on db server? Any web error logs that you can check?
Re: [ADMIN] Statistics Buffer is full
Pallav Kalva wrote: Hi Everybody, I got this logged in my log file last night on my production database while doing vacuumdb with analyze option , is this normal ? does it affect the system anyway ? is there anyway to get rid of these messages ? 2005-05-06 00:31:03 EDT%%% LOG: statistics buffer is full 2005-05-06 00:31:15 EDT%%% LOG: statistics buffer is full 2005-05-06 00:31:15 EDT%%% LOG: statistics buffer is full 2005-05-06 00:31:15 EDT%%% LOG: statistics buffer is full 2005-05-06 00:31:15 EDT%%% LOG: statistics buffer is full Thanks! Pallav I frequently see the same message in my logs when doing a bulk load (6-8 million rows). I'm interested in knowing what the message is as well.
Re: [ADMIN] Do Petabyte storage solutions exist?
Bradley Kieser wrote: No, it isn't. Oracle is expensive but it is also the Rolls Royce, it seems. I am a strictly OpenSource man so I don't really get into the pricing thing, but I do know that it is also deal-by-deal and depending on who and what you are, the prices can vary. E.g. Educational facilities have massive discounts. Military has massive prices, etc. snip You're correct about it being 'deal-by-deal' pricing. You can negotiate the salesmen down quite a bit, depending on who your company is, the field you're in, the time of year (e.g. end of quarter or year nets bigger reductions), and especially if you use a bit of cleverness by getting in-house demos by the big competitors (e.g. MSSQL and DB2). Standard Edition One is listed at around $6500 Canadian per processor, or $195 per named user. This is all totally negotiable, though. Apparently MSSQL is priced similarly, though I can't verify that. Doing price comparisons isn't very helpful; what you really need to do is analyze your requirements and see what features you actually need, or will need in the future. I have no affiliation with any of these companies, so I'm not going to start a marketing war about who's better etc. Anyway, as they say, you get what you pay for.