Re: [GENERAL] pg_last_xact_replay_timestamp() sometimes reports unlikely, very large delays
> On Mar 22, 2017, at 8:06 PM, Toby Corkindale <toby.corkind...@strategicdata.com.au> wrote:
>
> My best guess for what is going on is:
> - There has been no activity for hours or days, and so the oldest replayed transaction on the slave is genuinely quite old.
> - Something has happened on the master that causes its pg_current_xlog_location() to be updated, but not in a way that is sent to the slave until the end of a long-running transaction.
>
> Could anyone suggest how to do this in a manner that avoids the problem?

Are you using streaming replication or only WAL archiving? If you are not streaming, the archive command does not send a WAL file until it is full (16MB by default, if I recall correctly). To address this, you can change the archive_timeout setting to ensure the WAL file is sent at some interval even if it is not full. I use 'archive_timeout = 300' to send it every 5 minutes. If the lag is greater than 15 minutes, the alarm bells start going off.

John DeSoi, Ph.D.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
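As a postgresql.conf fragment on the primary, the setting described above would look like this (the value is the one from this reply):

```
# postgresql.conf (primary) -- switch and archive the current WAL segment
# at least every 5 minutes, even if it is not full
archive_timeout = 300
```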
Re: [GENERAL] Logging broken messages
> On Feb 6, 2017, at 11:21 AM, Rui Pacheco <rui.pach...@gmail.com> wrote:
>
> I’m trying to implement a version of the wire protocol but I’ve hit a problem: whenever I send a Close Statement message to the remote, it just hangs indefinitely. I suspect the problem could be on my side but I can’t find anything in my code that doesn’t match the manual.

I have written something like this recently without any problems. Maybe post the code if you think more eyes would be helpful.

John DeSoi, Ph.D.
Re: [GENERAL] CRM where pg is a first class citizen?
> On Dec 13, 2016, at 3:25 PM, Ivan Sergio Borgonovo <m...@webthatworks.it> wrote:
>
> I don't develop on Drupal anymore, but up to at least D7, PostgreSQL was still not a first class citizen.
>
> I've heard the DB abstraction layer in D8 is much better, but I don't have anything critical on Drupal anymore, and life is too short to fight to see your patches refused from upstream because "supporting postgres is holding us back".
> Considering that most PHP web applications are not optimized for any DB, I wonder what features a developer could even exploit to optimize for MySQL; that's really a shame.
>
> I don't want to repeat the experience, especially on software I'm just going to use and not develop on.
>
> Forgive me for I have sinned: the last Drupal I installed was 7, and I picked MySQL, and I still feel awkward when I have to deal with it.

I have been using Drupal with Postgres since 2005. Yes, there are sometimes issues, but it is rarely a problem unless you expect every third-party module to support Postgres. All of the core modules work well with Postgres, and the database abstraction layer works for the most part.

The main benefit of Drupal is to get a base website up and going quickly. You can then write your own custom (Postgres-only) module to implement the non-core features you need.

John DeSoi, Ph.D.
Re: [GENERAL] PDF files: to store in database or not
> On Dec 8, 2016, at 9:25 AM, Chris Travers <chris.trav...@gmail.com> wrote:
>
> Assuming relatively small files, bytea makes much more sense than a large object. However, note that encoding and decoding can be relatively memory intensive depending on your environment. This is not a problem with small files, and I would typically start to worry when you get into the hundreds of MB in size. At least in Perl, I expect decoding to take about 8x the size of the final file in RAM.
>
> LOBs work best when you need a streaming interface (seek and friends), while byteas are otherwise much more pleasant to work with.

Not much I can do on the Postgres side, but you can manage the amount of RAM needed on the client side by returning the bytea in chunks using a set returning function. In my case, this returns chunks to PHP that are immediately written to the download stream, so there is no need to have the entire document in RAM on the application side. I have included the function I use below.

John DeSoi, Ph.D.

create or replace function blob_content_chunked(p_dbid integer)
returns setof bytea as $$
declare
    v_chunk integer = 1048576;  -- 1MB chunks
    v_data bytea;
    v_size integer;
begin
    select into v_data content from blob where dbid = p_dbid;
    if found and v_data is not null then
        v_size = octet_length(v_data);
        if v_size <= v_chunk then
            return next v_data;
        else
            for i in 1..v_size by v_chunk loop
                return next substring(v_data from i for v_chunk);
            end loop;
        end if;
    end if;
end;
$$ language plpgsql stable;
Re: [GENERAL] PDF files: to store in database or not
> On Dec 6, 2016, at 1:09 PM, Eric Schwarzenbach <subscri...@blackbrook.org> wrote:
>
> I've often wondered if we'd have been better off storing the files in the database. This design decision was made some years ago, and our concerns around this had to do with performance, but I don't know that we had any real data that this should have been a concern, and I suspect you could ameliorate if not eliminate this as an issue by careful design. I'd love to hear this idea confirmed or debunked by someone who has more expertise (and ideally, has done actual testing).

I have been storing PDFs in Postgres for several years without any problems. Documents range in size from a few pages to 100+ pages. I'm using a bytea column, not large objects. I store the documents in a separate database from the rest of the application data in order to make it easy to exclude them from database dumps or back them up in some other way. I'm currently managing about 600,000 documents.

I created some functions that enable a subset of the document database to be synchronized elsewhere. For example, we need to keep only the last 3 years of documents on a website for user access. Using Postgres has made this easy to manage and verify. And with replication we automatically have the document database available on the backup web server without additional effort.

John DeSoi, Ph.D.
Re: [GENERAL] Syncing Data of data type BLOB into Postgres- Bytea
> On Nov 10, 2016, at 12:55 AM, Cynthia Hombakazi Ngejane <hombakazi.ngej...@gmail.com> wrote:
>
> I have two databases, SQLite and Postgres. SQLite is my local database; in it I am saving fingerprint templates that get captured on site (offline), and my column is of type BLOB. Now I want to sync these templates into Postgres (to the server), so I created a column of type bytea, but Postgres is refusing to take BLOB; it says there was a syntax error. I am using a Qt C++ 5.7 application to do the syncing.

It is going to be hard to help without seeing the exact code you are using to insert and the exact error message you are seeing.

John DeSoi, Ph.D.
Re: [GENERAL] PHP-Shop with PostgreSQL
> On Nov 8, 2016, at 8:46 AM, Michelle Konzack <linux4miche...@gmail.com> wrote:
>
> Can someone recommend me a shop system which met my requirements?

Drupal with Ubercart? Probably does not qualify as "simple" but should be able to do everything you listed.

https://www.drupal.org/project/ubercart

John DeSoi, Ph.D.
Re: [GENERAL] Way to quickly detect if database tables/columns/etc. were modified?
> On Oct 31, 2016, at 8:14 AM, Melvin Davidson <melvin6...@gmail.com> wrote:
>
> That would certainly work, but the problem is, that trigger would have to be created for every table in the database.
> When you have more than a couple dozen tables, as in hundreds, it becomes a huge undertaking.

Unless I'm misunderstanding the documentation, you create the trigger on the "DDL event", not on a table. The events are ddl_command_start, ddl_command_end, table_rewrite, and sql_drop. I have not used this feature, but it seems like you would just need one function.

https://www.postgresql.org/docs/current/static/event-trigger-definition.html

John DeSoi, Ph.D.
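A minimal sketch of the one-function idea described above (untested, as noted; the table and function names are made up for illustration):

```sql
-- Log every DDL command tag into a single table
create table ddl_log (occurred timestamptz not null default now(), tag text);

create or replace function log_ddl() returns event_trigger as $$
begin
    insert into ddl_log (tag) values (tg_tag);
end;
$$ language plpgsql;

-- One event trigger covers DDL for every table; no per-table triggers needed
create event trigger log_ddl_end on ddl_command_end
    execute procedure log_ddl();
```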
Re: [GENERAL] Way to quickly detect if database tables/columns/etc. were modified?
> On Oct 30, 2016, at 4:45 AM, Evan Martin <postgres...@realityexists.net> wrote:
>
> If I have a query that reads from system tables like pg_class, pg_namespace, pg_attribute, pg_type, etc. and I'd like to cache the results in my application, is there any fast way to detect when any changes have been made to these system catalogs? I don't need to know exactly what has changed. Some kind of a global "database version" would do, just so I know that I need to invalidate my cache (the database definition is rarely modified in practice).

Maybe create an event trigger that updates a simple table with the last modification time or sends a notification?

https://www.postgresql.org/docs/current/static/sql-createeventtrigger.html

John DeSoi, Ph.D.
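Both variants suggested above, sketched together (untested; the table and channel names are invented for the example):

```sql
-- A one-row "database version" the application can poll cheaply
create table schema_version (modified timestamptz not null default now());
insert into schema_version default values;

create or replace function bump_schema_version() returns event_trigger as $$
begin
    update schema_version set modified = now();
    notify schema_changed;   -- or push instead of poll, via LISTEN/NOTIFY
end;
$$ language plpgsql;

create event trigger schema_version_bump on ddl_command_end
    execute procedure bump_schema_version();
```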
Re: [GENERAL] yum repo, pgloader
> On Jul 23, 2016, at 9:29 AM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:
>
> If it is any consolation I get the same error here compiling on openSUSE 13.2 with sbcl 1.2.3. Looking in the source (ixf-schema.lisp, ixf.lisp) I see the lower case version of *IXF-STREAM*. Not sure what is going on, will do some more digging later.

See https://github.com/dimitri/pgloader/issues/296

John DeSoi, Ph.D.
Re: [GENERAL] MediaWiki + PostgreSQL is not ready for production?
> On Jul 18, 2016, at 11:47 PM, John R Pierce <pie...@hogranch.com> wrote:
>
> Drupal even tried to offer a database API so plugin developers wouldn't touch SQL directly, but too many ignored it.

I have been using Drupal with PostgreSQL for more than 10 years without too many problems. Since version 7, all of Drupal core works with PostgreSQL, and I have encountered very few non-core modules that are MySQL specific. SQLite is also a core-supported database for Drupal.

John DeSoi, Ph.D.
[GENERAL] .bash_profile replaced on software updates
I'm using CentOS and updating Postgres with yum. Whenever Postgres is updated (even minor updates), the .bash_profile is replaced with the lines below. I was happy to see the idea of the .pgsql_profile added fairly recently, but I don't understand why the last line is commented out. I still have to remember to uncomment the last line in this file every time I update Postgres, which seems to defeat the purpose.

The reason I need .pgsql_profile is that lots of useful Postgres executables (e.g. pg_archivecleanup) are not in the postgres path. Does everyone just use the full path name and change this in the configuration file for major updates?

[ -f /etc/profile ] && source /etc/profile

PGDATA=/var/lib/pgsql/9.5/data
export PGDATA

# If you want to customize your settings,
# Use the file below. This is not overridden
# by the RPMS.
#[ -f /var/lib/pgsql/.pgsql_profile ] && source /var/lib/pgsql/.pgsql_profile

John DeSoi, Ph.D.
Re: [GENERAL] encoding confusion with \copy command
On Sep 17, 2014, at 11:52 AM, Martin Waite <waite@gmail.com> wrote:

> You are right that I need an intermediate step. I will probably use a CSV parser that is liberal in what it accepts, but writes out strict CSV data suitable for postgres.

If you find such a utility, please share. My clients love Excel, but it takes perfectly valid CSV files and makes them unreadable by Postgres. In particular, Excel saves rows with fewer columns than the header row if the trailing cells are empty. It also mangles valid UTF-8. I often take Excel CSV files and re-save them from Open Office to fix them for Postgres import.

John DeSoi, Ph.D.
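For the row-padding half of the problem, a liberal-in/strict-out pass is easy to sketch with Python's csv module (my own illustration, not an existing utility):

```python
import csv
import io

def normalize_csv(text):
    """Pad ragged rows (like Excel's truncated empty trailing cells)
    out to the header row's column count and re-emit strict CSV."""
    rows = list(csv.reader(io.StringIO(text)))
    width = len(rows[0])          # the header row defines the column count
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")
    for row in rows:
        # pad short rows with empty strings so every row has `width` cells
        writer.writerow(row + [""] * (width - len(row)))
    return out.getvalue()
```

The result loads cleanly with COPY ... (FORMAT csv) because every row now carries the full column count. (The UTF-8 mangling is a separate problem this does not address.)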
Re: [GENERAL] Async IO HTTP server frontend for PostgreSQL
On Sep 9, 2014, at 7:31 PM, David Boreham <david_l...@boreham.org> wrote:

> Hi Dmitriy, are you able to say a little about what's driving your quest for async http-to-pg? I'm curious as to the motivations, and whether they match up with some of my own reasons for wanting to use low-thread-count solutions.

For some discussion and preliminary design, see also https://wiki.postgresql.org/wiki/HTTP_API

John DeSoi, Ph.D.
Re: [GENERAL] Async IO HTTP server frontend for PostgreSQL
On Sep 10, 2014, at 12:02 PM, Dmitriy Igrishin <dmit...@gmail.com> wrote:

> While this is not related to the %subj%, I've glanced at the link above. And I found this complex. (As many many many nice featured things on the market today.) Could you tell me please, for example, why the URLs like

I did not write the spec, so you'll have to post to the wiki to ask questions. Just thought it might be relevant for anyone interested in this topic.

Best,

John DeSoi, Ph.D.
Re: [GENERAL] libpq - lack of support to set the fetch size
On Mar 12, 2014, at 5:57 AM, matshyeq <matsh...@gmail.com> wrote:

> I don't see why? I can't think of any single SQL tool I've been working with that didn't have this functionality, really. The principle I find very simple and useful. There is a defined fetch row size parameter (each tool gives it its own name); after submitting ANY query, the client fetches result set rows, but no more than that. Some programs even automatically define this value based on the result grid size displayed on the screen. The user then usually has two buttons: fetch another batch/screen, or fetch all. He decides. If he decides way too late (break for coffee), then he simply resubmits the query (and potentially changes the parameter first)...
> I don't find value in auto-fetching millions of rows to present to the user on the screen. Also, I don't think it's particularly useful when you need to know and apply database-specific SQL syntax to limit the rows. If you join multiple tables, that may be even more tricky (which table to apply the limit to? or use subquerying instead?).

Using the extended query protocol, Postgres has a built-in way to limit the number of rows returned from any select without any textual manipulation of the query. I'm not sure if libpq exposes this capability in the API, but it should not be too difficult to implement. See:

http://www.postgresql.org/docs/current/static/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY

"Once a portal exists, it can be executed using an Execute message. The Execute message specifies the portal name (empty string denotes the unnamed portal) and a maximum result-row count (zero meaning fetch all rows). The result-row count is only meaningful for portals containing commands that return row sets; in other cases the command is always executed to completion, and the row count is ignored.

The possible responses to Execute are the same as those described above for queries issued via simple query protocol, except that Execute doesn't cause ReadyForQuery or RowDescription to be issued."

John DeSoi, Ph.D.
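For completeness, similar batching is also available at the plain SQL level with a cursor (a sketch only; big_table is a made-up name), although unlike the protocol-level row count this does change the query text:

```sql
begin;
declare c cursor for select * from big_table;
fetch 100 from c;   -- first batch
fetch 100 from c;   -- next batch, on demand
close c;
commit;
```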
Re: [GENERAL] streaming replication not working
On Sep 25, 2013, at 8:36 AM, Guillaume Lelarge <guilla...@lelarge.info> wrote:

> Your config file and your debug logs don't match. Your config file says that the restore command is rsync, but your logs say it's pg_standby. Check if you have a pg_standby process on the slave. That would explain why the slave never tries to establish a replication connection to the master.

rsync is only used in the primary configuration to push the WAL files to the standby. But pg_standby is indeed the problem. I thought pg_standby was a more feature-rich option than using cp for the restore command. I see now the documentation says it supports creation of a warm standby; it did not occur to me this meant the standby could not connect to the primary for streaming replication. Even when using pg_standby, the server was really a hot standby because I was able to connect to it and make read-only queries. I think it would be helpful for pg_standby to emit a warning if primary_conninfo is set in the recovery.conf.

I changed the restore command to use cp and now everything appears to be working as expected. Thanks very much for your help and to everyone who offered suggestions.

John DeSoi, Ph.D.
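The fixed standby configuration, as a recovery.conf sketch (host, port, and paths are the ones given earlier in this thread):

```
# recovery.conf -- plain cp instead of pg_standby, so streaming
# replication can connect to the primary once the archive is exhausted
standby_mode = 'on'
primary_conninfo = 'host=localhost port=21333 user=postgres'
restore_command = 'cp /shared/pgbackup/%f %p'
archive_cleanup_command = 'pg_archivecleanup /shared/pgbackup %r'
trigger_file = '/pgsql/9.2/data/failover.trigger'
```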
Re: [GENERAL] streaming replication not working
On Sep 24, 2013, at 5:48 AM, Albe Laurenz <laurenz.a...@wien.gv.at> wrote:

>> Here is what I have on the standby:
>>
>> postgresql.conf
>> hot_standby = on
>> max_wal_senders = 2
>> wal_level = hot_standby
>
> You should set the same parameters on the primary, else it won't work.

On the primary I have:

wal_level = hot_standby
wal_keep_segments = 48
max_wal_senders = 2
archive_mode = on
archive_command = 'rsync --whole-file --ignore-existing --delete-after --timeout=30 -a %p backup:/path/backup/%f'
archive_timeout = 300

I don't have hot_standby = on. I thought that only applied to the standby, to allow queries in recovery mode?

>> Am I missing something obvious? Do I have to back up the primary again to make this change?
>
> No, that shouldn't be necessary. What are the messages in the secondary's log after you start it?

Below is what I have for a restart with the log level at debug2. In looking at the source, I should get either an error or a log message "streaming replication successfully connected to primary". I never get either one.

LOG: restored log file 0001010F0001 from archive
LOG: restored log file 0001010F0002 from archive
LOG: received fast shutdown request
LOG: aborting any active transactions
LOG: shutting down
LOG: database system is shut down
LOG: database system was shut down in recovery at 2013-09-24 08:23:53 CDT
DEBUG: standby_mode = 'on'
DEBUG: trigger_file = '/pgsql/9.2/data/failover.trigger'
DEBUG: primary_conninfo = 'host=localhost port=21333 user=postgres'
DEBUG: restore_command = 'pg_standby -t /pgsql/9.2/data/failover.trigger /shared/pgbackup %f %p %r'
DEBUG: archive_cleanup_command = 'pg_archivecleanup /shared/pgbackup %r'
LOG: entering standby mode
LOG: restored log file 0001010F0002 from archive
DEBUG: got WAL segment from archive
DEBUG: checkpoint record is at 10F/204D9B8
DEBUG: redo record is at 10F/2035608; shutdown FALSE
DEBUG: next transaction ID: 0/3837659; next OID: 13124290
DEBUG: next MultiXactId: 3; next MultiXactOffset: 5
DEBUG: oldest unfrozen transaction ID: 1798, in database 1
DEBUG: transaction ID wrap limit is 2147485445, limited by database with OID 1
DEBUG: resetting unlogged relations: cleanup 1 init 0
DEBUG: initializing for hot standby
LOG: redo starts at 10F/2035608
DEBUG: recovery snapshots are now enabled
CONTEXT: xlog redo running xacts: nextXid 3837668 latestCompletedXid 3837667 oldestRunningXid 3837668
DEBUG: checkpointer updated shared memory configuration values
LOG: consistent recovery state reached at 10F/300
LOG: database system is ready to accept read only connections
LOG: restored log file 0001010F0003 from archive
DEBUG: got WAL segment from archive

Thanks,

John DeSoi, Ph.D.
[GENERAL] streaming replication not working
I have a 9.2.4 (CentOS 6.5) standby that has been working with no problems using log shipping. I wanted to add streaming replication, which I thought would be as simple as adding primary_conninfo to recovery.conf and restarting the standby. But on restart there is no message or error about connecting to the primary for replication. pg_stat_replication is empty on the primary, and I don't see any errors on the primary either.

Here is what I have on the standby:

postgresql.conf

hot_standby = on
max_wal_senders = 2
wal_level = hot_standby

recovery.conf

standby_mode = 'on'
trigger_file = '/pgsql/9.2/data/failover.trigger'
primary_conninfo = 'host=localhost port=21333 user=postgres'
restore_command = 'pg_standby -t /pgsql/9.2/data/failover.trigger /shared/pgbackup %f %p %r'
archive_cleanup_command = 'pg_archivecleanup /shared/pgbackup %r'

I have an ssh tunnel set up on localhost and have verified the replication user can connect to the primary. Am I missing something obvious? Do I have to back up the primary again to make this change?

Thanks,

John DeSoi, Ph.D.
Re: [GENERAL] streaming replication not working
On Sep 23, 2013, at 1:00 PM, Ray Stell <ste...@vt.edu> wrote:

>> Am I missing something obvious? Do I have to back up the primary again to make this change?
>
> you didn't mention a pg_hba.conf rule. did you add one for the replication user?

You mean on the primary, right? Yes, I have one there. But even if I did not, I would expect to see a connection error in the log on the standby. There is no error or any indication the streaming replication process is running on the standby.

John DeSoi, Ph.D.
Re: [GENERAL] Streaming Replication Randomly Locking Up
On Aug 15, 2013, at 1:07 PM, Andrew Berman <rexx...@gmail.com> wrote:

> I'm having an issue where streaming replication just randomly stops working. I haven't been able to find anything in the logs which points to an issue, but the Postgres process shows a waiting status on the slave:
>
> postgres 5639 0.1 24.3 3428264 2970236 ? Ss Aug14 1:54 postgres: startup process recovering 0001053D003F waiting
> postgres 5642 0.0 21.4 3428356 2613252 ? Ss Aug14 0:30 postgres: writer process
> postgres 5659 0.0 0.0 177524 788 ? Ss Aug14 0:03 postgres: stats collector process
> postgres 7159 1.2 0.1 3451360 18352 ? Ss Aug14 17:31 postgres: wal receiver process streaming 549/216B3730
>
> The replication works great for days, but randomly seems to lock up and replication halts. I verified that the two databases were out of sync with a query on both of them. Has anyone experienced this issue before? Here are some relevant config settings:
>
> Master:
> wal_level = hot_standby
> checkpoint_segments = 32
> checkpoint_completion_target = 0.9
> archive_mode = on
> archive_command = 'rsync -a %p foo@foo:/var/lib/pgsql/9.1/wals/%f /dev/null'
> max_wal_senders = 2
> wal_keep_segments = 32

I recently posted about the same thing: replication just stops after working OK for days or weeks, with no errors in the logs on master or slave. It appears I solved it by adding --timeout=30 to my rsync command. My guess was some kind of network hang, after which rsync would just wait forever and never return.

John DeSoi, Ph.D.
[GENERAL] 5 is not a smallint but '5' is
I was surprised to discover this today. I can work around it, but it seems counterintuitive. If 5 can't be parsed as a smallint, I would feel better if '5' was not one either.

John DeSoi, Ph.D.

psql (9.2.4)
Type "help" for help.

temp=# create or replace function itest (param smallint) returns integer as $$ select $1 + 5; $$ language sql;
CREATE FUNCTION
temp=# select itest(5);
ERROR:  function itest(integer) does not exist
LINE 1: select itest(5);
               ^
HINT:  No function matches the given name and argument types. You might need to add explicit type casts.
temp=# select itest('5');
 itest
-------
    10
(1 row)
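As the HINT suggests, an explicit cast is the workaround (sketch of the same session):

```
temp=# select itest(5::smallint);
 itest
-------
    10
(1 row)
```

The difference is that '5' is an untyped literal the parser is free to resolve as smallint during function lookup, while a bare 5 is already typed integer, and integer is not implicitly narrowed to smallint when matching function arguments.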
Re: [GENERAL] replication stops working
On Jul 8, 2013, at 5:41 PM, Daniel Serodio (lists) <daniel.li...@mandic.com.br> wrote:

> If there are no errors in the log, how did you conclude that replication has stopped working? Since you're using a hot standby, you've also set up streaming replication in addition to the WAL archiving, correct?

I have an external process that calls pg_last_xact_replay_timestamp() and sends an alert if the standby is more than 20 minutes out of sync. I'm not using streaming replication, just WAL archiving at 5 minute intervals.

I just tried to restart the primary to fix it and it would not shut down. There should not have been any active connections. I finally had to power off the VM. I think what might be happening is that rsync is hanging when trying to send a WAL file. That might explain no error in the log and the difficulty stopping the server. I added a timeout to the archive command; hopefully this will fix it.

John DeSoi, Ph.D.

2013-07-08 21:06:02 EDT [27170]: [1-1] user=main,db=main8,remote=127.0.0.1(62194) FATAL: the database system is shutting down
2013-07-08 21:07:29 EDT [27189]: [1-1] user=postgres,db=postgres,remote=127.0.0.1(62195) FATAL: the database system is shutting down
2013-07-08 21:07:51 EDT [27190]: [1-1] user=postgres,db=postgres,remote=127.0.0.1(62196) FATAL: the database system is shutting down
2013-07-08 21:09:42 EDT [27275]: [1-1] user=postgres,db=postgres,remote=[local] FATAL: the database system is shutting down
2013-07-08 21:11:03 EDT [27363]: [1-1] user=[unknown],db=[unknown],remote=127.0.0.1(62199) LOG: incomplete startup packet
2013-07-08 21:11:03 EDT [27364]: [1-1] user=main,db=main8,remote=127.0.0.1(62200) FATAL: the database system is shutting down
Killed by signal 15.
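For reference, the adjusted archive_command would look something like this (the host and path are the ones from my original post):

```
# postgresql.conf (primary) -- --timeout=30 makes rsync fail after 30
# seconds of I/O inactivity instead of hanging forever
archive_command = 'rsync --whole-file --ignore-existing --delete-after --timeout=30 -a %p bak-postgres:/pgbackup/%f'
```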
[GENERAL] replication stops working
I have a 9.2 hot standby setup with replication via rsync. For the second time, it has stopped working with no apparent error on the primary or standby. Last time this happened I fixed it by restarting the primary.

Yesterday I started a new base backup around noon and it replicated without any problems for about 12 hours. Then it just stopped, and I don't see any errors in the Postgres log (primary or standby). I looked at other system logs and still don't see any problems. I'm running Postgres 9.2.4 on CentOS 6.4. Thanks for any ideas or debug suggestions.

John DeSoi, Ph.D.

wal_level = hot_standby
wal_keep_segments = 48
max_wal_senders = 2
archive_mode = on
archive_command = 'rsync --whole-file --ignore-existing --delete-after -a %p bak-postgres:/pgbackup/%f'
archive_timeout = 300
[GENERAL] replication breaks with CentOS 6.4 upgrade
Postgres 9.2 was happily replicating until I upgraded the server from CentOS 6.3 to 6.4. The log error shows:

2013-05-06 23:51:35 EDT [19421]: [206-1] user=,db=,remote= LOG: archive command failed with exit code 14
2013-05-06 23:51:35 EDT [19421]: [207-1] user=,db=,remote= DETAIL: The failed archive command was: rsync --whole-file --ignore-existing --delete-after -a pg_xlog/0001006B0016 backup:/archive/0001006B0016
rsync: Failed to exec ssh: Permission denied (13)
rsync error: error in IPC code (code 14) at pipe.c(84) [sender=3.0.6]
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in IPC code (code 14) at io.c(600) [sender=3.0.6]

Foiled again by SELinux permissions:

type=AVC msg=audit(1367932037.676:10325): avc: denied { search } for pid=2567 comm=rsync name=pgsql dev=dm-0 ino=664822 scontext=unconfined_u:system_r:rsync_t:s0 tcontext=system_u:object_r:postgresql_db_t:s0 tclass=dir
type=SYSCALL msg=audit(1367932037.676:10325): arch=c03e syscall=2 success=no exit=-13 a0=1ebd330 a1=0 a2=e a3=4 items=0 ppid=2433 pid=2567 auid=0 uid=26 gid=26 euid=26 suid=26 fsuid=26 egid=26 sgid=26 fsgid=26 tty=(none) ses=57 comm=rsync exe=/usr/bin/rsync subj=unconfined_u:system_r:rsync_t:s0 key=(null)
type=AVC msg=audit(1367932037.677:10326): avc: denied { execute } for pid=2568 comm=rsync name=ssh dev=dm-0 ino=266187 scontext=unconfined_u:system_r:rsync_t:s0 tcontext=system_u:object_r:ssh_exec_t:s0 tclass=file
type=SYSCALL msg=audit(1367932037.677:10326): arch=c03e syscall=59 success=no exit=-13 a0=7fff1686fa27 a1=7fff1686fb60 a2=7fff16872d38 a3=7fff1686f860 items=0 ppid=2567 pid=2568 auid=0 uid=26 gid=26 euid=26 suid=26 fsuid=26 egid=26 sgid=26 fsgid=26 tty=(none) ses=57 comm=rsync exe=/usr/bin/rsync subj=unconfined_u:system_r:rsync_t:s0 key=(null)

I found there is a boolean for postgres and rsync and tried

setsebool -P postgresql_can_rsync 1

but replication still failed to work. There must be more required related to ssh and/or rsync.
Anyone solved this (without just disabling SELinux)?

Thanks,

John DeSoi, Ph.D.
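One standard way to turn those AVC denials into a local policy module is audit2allow (a sketch; I have not verified that this resolves the ssh_exec_t denial on this particular system, and the module name is made up):

```
# Generate and load a local SELinux module from the logged rsync denials
grep rsync /var/log/audit/audit.log | audit2allow -M postgres_rsync
semodule -i postgres_rsync.pp
```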
Re: [GENERAL] Analyzing the types of prepared statements
On Mar 16, 2013, at 8:30 AM, Jason Dusek <jason.du...@gmail.com> wrote:

> However, it is not clear to me at this juncture how to get the return type for a statement, given its text. Preparing it and looking it up in pg_prepared_statements will retrieve the argument types but not the return type. Wrapping the query in a SQL stored procedure which returns record doesn't cause the server to infer and store a more specific type.

You might want to take a look at the extended query protocol here:

http://www.postgresql.org/docs/current/interactive/protocol-flow.html

If you send a parse (P) message followed by describe (D), you can get a row description for the return type without executing anything.

John DeSoi, Ph.D.
Re: [GENERAL] PHP SQL Color Syntax that is Postgresql GPL3 Compatible?
On Apr 26, 2012, at 11:23 PM, Ken Tanzer wrote:

> I took a look. The syntax highlighting in phpPgAdmin is nicely self-contained in one file, and BSD-licensed. Unfortunately the actual highlighting is less advanced than some others. (I've gotten spoiled by vim, but suspect it's not written in PHP!) Now I'm looking at a couple of javascript options with good highlighting instead.

I'm not sure what your ultimate goal is, but here are a couple of open source projects where I have implemented PostgreSQL syntax coloring.

PHP/TextMate: https://github.com/desoi/pgedit-textmate
Javascript/Ace: https://github.com/desoi/ace

John DeSoi, Ph.D.
Re: [GENERAL] PHP SQL Color Syntax that is Postgresql GPL3 Compatible?
On Apr 25, 2012, at 6:57 AM, Ken Tanzer wrote:

> Hi. I'm looking for Open Source PHP code that will take plain text SQL and turn it into colorful HTML. If it could take messy code and clean up indents and such (a la SQLinForm), that would be a nice bonus. Ideally it would understand many flavors of SQL, but handling Postgresql syntax is most important. I want to include this in my own project, so it needs to be redistributable and specifically GPL3 compatible. Again ideally, it would be a standalone and fairly lightweight piece of code, rather than having to embed some monster package. Does anyone know of such a beast? TIA for any help provided.

Not standalone, but the phpPgAdmin project colors SQL/PostgreSQL code and I believe the license is GPL.

http://phppgadmin.sourceforge.net/doku.php?id=start

John DeSoi, Ph.D.
Re: [GENERAL] PostgreSQL 9.0 and asynchronous replication through VPN
On Nov 30, 2011, at 5:02 AM, Edson Richter wrote:

> I assume that the OpenVPN got disconnected for a few seconds, and came back again. My question is: assuming I have enough wal segments on the Master side, does the Slave get synchronized automatically after the connection is reestablished, or will I need to restart the Slave PostgreSQL to put it in sync again? If I restart the Slave PostgreSQL, I get:

Yes, it automatically catches up when the connection is working again. You should not have to restart the slave.

John DeSoi, Ph.D.
Re: [GENERAL] PostgreSQL consulting companies in the Bay Area
On Oct 6, 2011, at 7:48 PM, Richard Price wrote:

> Does anyone know any companies/individuals in the Bay Area who offer PostgreSQL consulting services? Any tips or suggestions would be greatly appreciated!

Looks like you have a few options here:

http://www.postgresql.org/support/professional_support_northamerica

Best,

John DeSoi, Ph.D.
Re: [GENERAL] Protocol question - fastpath parameter status 'S'
On Sep 5, 2011, at 7:05 AM, Radosław Smogura wrote:

> Hello,
> During testing of a (forked) driver we have seen the following strange behaviour. The JDBC driver mainly invokes Fastpath to obtain LOBs. Because of insufficient privileges I get:
>
> 1. Some bytes
> 2. 'E' (error about privileges)
> 3. (sic!) 'S' application_name (driver throws exception)
>
> Now I analyse the buffer byte after byte:
>
> 4. 'Z', 00 00 00 05 69 108 (last number may be trash)
>
> It looks like without 3 everything should be OK, so I have a question: is this intended and undocumented behaviour, or did some async trash come in? The docs say nothing about 'S'. I found this on only one app server, but I don't think it makes background async calls.

Coming from the backend, 'S' is the ParameterStatus message, an asynchronous report that a run-time parameter (here application_name) has changed. (The frontend Sync message happens to share the same 'S' byte.)

http://www.postgresql.org/docs/current/static/protocol-message-formats.html

See this section to understand when these messages can arrive during the extended query flow:

http://www.postgresql.org/docs/current/static/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY

John DeSoi, Ph.D.
Re: [GENERAL] [ADMIN] Using Postgresql as application server
On Aug 13, 2011, at 2:44 PM, s...@bestmx.ru wrote:

>> c k wrote:
>> Yes, I know that I can not create a simple web application using only postgresql because we need a web server to serve the html content.
>
> u r wrong. u CAN! there is nginx_http_postgresql_module exactly to connect the webserver directly to postgresql and to OUTPUT query results to a browser.

You could also use something like node.js, which allows you to connect Javascript directly to Postgres using the front end/back end protocol. With something like this, you could talk directly to Postgres from the browser:

http://ajaxian.com/archives/tcpsocket-sockets-in-the-browser
http://www.postgresql.org/docs/9.0/interactive/protocol.html

I'm not saying this is the right approach, but if that is what you are really after...

John DeSoi, Ph.D.
[GENERAL] suggestions for archive_command to a remote standby
rsync seems to be suggested in a number of references for the archive_command when copying WAL files to another server. But the documentation states in bold letters that the command should refuse to overwrite existing files, *and that it returns nonzero status in this case*. You can keep rsync from overwriting files, but I don't see any option for generating an error if the file exists.

Anyone care to share a method or script for handling this correctly with rsync or some other common utility?

Thanks!

John DeSoi, Ph.D.
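One common pattern is to make the existence check explicit, so the command itself exits nonzero when the file is already on the far side. An untested sketch (the backup host name and /archive path are illustrative):

```shell
# postgresql.conf sketch -- 'test ! -f' exits nonzero when %f already
# exists on the standby, which makes the whole && chain fail with
# nonzero status, exactly what the documentation requires.
archive_command = 'ssh backup test ! -f /archive/%f && rsync -a %p backup:/archive/%f'
```

The same idea works with cp for a local or NFS-mounted archive directory.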
Re: [GENERAL] OLD. || myColumnNameVar (How to generically access columns in a trigger's OLD or NEW records)
On Apr 28, 2011, at 3:46 PM, Basil Bourque wrote:

> It seems that I cannot get PL/pgSQL to interpret the text of 'NEW.' + column name as a field reference. My goal is to loop over each field in a trigger, comparing the OLD./NEW. values of each field. If different, I want to log both values in a history/audit-trail table. Is there some way to loop over the fields of a trigger's Record? I've read other people's frustration at not being able to get an array of fields from the Record.

This might help you:

http://wiki.postgresql.org/wiki/PL/pgSQL_Dynamic_Triggers

John DeSoi, Ph.D.
Re: [GENERAL] disable triggers using psql
On Feb 17, 2011, at 6:59 AM, Geoffrey Myers wrote:

>> Unless something very big changed when I wasn't looking, the constraints are actually implemented as triggers under the hood. But you're right that it'd be cleaner to drop the constraints and re-add them than to fool with system triggers.
>
> We were trying to accomplish this without having to hack the dump too much. We attempted adding:
>
> set local session_replication_role = replica;
>
> But that does not seem to provide the expected relief.

If your triggers have some simple way of identifying them in a query on pg_trigger, the function below can be altered to easily enable or disable them.

John DeSoi, Ph.D.

=====

create or replace function enable_link_clean_triggers(p_enable boolean)
returns void as $$
declare
    v_action text;
    v_sql text;
    v_tg record;
begin
    if p_enable then
        v_action = ' ENABLE TRIGGER ';
    else
        v_action = ' DISABLE TRIGGER ';
    end if;

    for v_tg in select tgrelid, tgname from pg_trigger
                where tgname ~ '^tg_link_clean_.+' loop
        v_sql = 'ALTER TABLE ' || v_tg.tgrelid::regclass::text
                || v_action || v_tg.tgname || ';';
        execute v_sql;
    end loop;

    return;
end;
$$ language plpgsql;
Re: [GENERAL] iPad and Pg revisited...
On Jan 24, 2011, at 3:25 PM, Jerry LeVan wrote:

> I assume that if I were to jump to Pg 9.x.x then phpPgAdmin would die, yes?

I have not tried it, but my guess is it will work. I don't recall seeing that there were any major protocol changes for version 9, so I suspect whatever libpq version is linked to PHP should work just fine with Postgres 9.

John DeSoi, Ph.D.
Re: [GENERAL] iPad and Postgresql...
On Jan 16, 2011, at 7:40 PM, Andrej wrote:

>> Is there an idiot installable package for MacOsX Snow Leopard that will provide a 'better' user experience for accessing Pg via the web, especially when viewed on the iPad?
>
> It's not perl, but have you considered http://phppgadmin.sourceforge.net/

I second this suggestion, but it is not as easy to install on the Mac as it should be. Snow Leopard has no PostgreSQL support in the standard PHP install. You can set it up yourself with the help of this article:

http://www.gnegg.ch/2009/08/snow-leopard-and-php/

There also seem to be some nice looking native iPad/iPhone apps with the ability to perform ad-hoc queries using PostgreSQL.

John DeSoi, Ph.D.
Re: [GENERAL] iPad and Postgresql...
On Jan 17, 2011, at 11:19 AM, Jerry LeVan wrote:

>> There also seem to be some nice looking native iPad/iPhone apps with the ability to perform ad-hoc queries using PostgreSQL.
>
> Pointers to the apps if you please :)

I did not give links because I have not used any of them. But if you google "ipad postgresql" you'll see DataGlass on the first page of the results. I'm sure you'll also find some with a search in the app store.

John DeSoi, Ph.D.
Re: [GENERAL] iPad and Postgresql...
On Jan 17, 2011, at 11:30 AM, Gary Chambers wrote:

>>> http://www.gnegg.ch/2009/08/snow-leopard-and-php/
>>
>> It sorta looks like the above might blow away the existing php stuff...
>
> Have you considered using the MacPorts version? That's what I use for development on my Macs and I've never had a problem replicating my production environments whether in Solaris or Linux. It does not blow anything away as far as I can tell :).

Yes, I tried MacPorts. I seemed to get caught in some kind of dependency hell and could not get things to update correctly.

John DeSoi, Ph.D.
Re: [GENERAL] Need magic for identifieing double adresses
On Sep 15, 2010, at 10:40 PM, Andreas wrote:

> I need to clean up a lot of contact data because of a merge of customer lists that used to be kept separate. I already know that there are double entries within the lists and they overlap, too. Relevant fields could be name, street, zip, city, phone. Is there a way to do something like this with postgresql? I fear this will still need a lot of manual sorting and searching even when potential peers get automatically identified.

I recently started working with the pg_trgm contrib module for matching songs based on titles and writers. This is especially difficult because the writer credits end up in one big field with every possible variation on order and naming conventions. So far I have been pleased with the results. For example, the algorithm correctly matched these two song titles:

FONTAINE DI ROMA AKA FOUNTAINS OF ROME
FOUNTAINS OF ROME A/K/A FONTANE DI ROMA

Trigrams can be indexed, so it is relatively fast to find an initial set of candidates. There is a nice introductory article here:

http://www.postgresonline.com/journal/categories/59-pgtrgm

John DeSoi, Ph.D.
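For contact data, a minimal pg_trgm sketch might look like this (the contacts table and its columns are hypothetical, not from the original post):

```sql
-- On 8.4, run contrib/pg_trgm.sql; on 9.1+ use: CREATE EXTENSION pg_trgm;

-- Similarity score in [0,1] between two spellings of the same title
SELECT similarity('FONTAINE DI ROMA AKA FOUNTAINS OF ROME',
                  'FOUNTAINS OF ROME A/K/A FONTANE DI ROMA');

-- Index the trigrams, then find candidate duplicate pairs with the
-- % operator (the cutoff is adjustable via set_limit())
CREATE INDEX contacts_name_trgm ON contacts USING gin (name gin_trgm_ops);

SELECT a.id, b.id, similarity(a.name, b.name) AS score
FROM contacts a
JOIN contacts b ON a.id < b.id AND a.name % b.name
ORDER BY score DESC;
```

Concatenating name, street, zip, and city into one comparison field before hashing the trigrams is one way to catch entries that differ in only one component.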
Re: [GENERAL] No PL/PHP ? Any reason?
On Jun 22, 2010, at 1:08 AM, Scott Marlowe wrote:

> I recall talking to the guys at Command Prompt and apparently something in the php runtime makes it unsuitable for pl deployment.

Any chance that the Parrot runtime could be used for PHP and other languages? I read that some folks are working on PL/Parrot. I'd really like to have PHP and Lisp as PL languages :).

John DeSoi, Ph.D.
Re: [GENERAL] Hosting Account with PostgreSQL and PHP?
On Feb 15, 2010, at 11:25 AM, Andre Lopes wrote:

> A2hosting.com supports Triggers, but in the WebHosting Plan and in the Reseller Plan the Trigger creation is not done by the user; we must send the Trigger to the support center and then they compile the Trigger...

Where did you hear this? I have a reseller account and I have created many triggers using pl/pgsql. Are you wanting to create triggers with C or some other language that requires superuser access?

John DeSoi, Ph.D.
Re: [GENERAL] Hosting Account with PostgreSQL and PHP?
On Feb 15, 2010, at 1:23 PM, Andre Lopes wrote:

> I have contacted the support center on a2hosting.com again and the answer was that there is no manual creation of triggers on PostgreSQL, but the guy sent me a link with MySQL information about the subject:
> https://support.a2hosting.com/index.php?_m=knowledgebase&_a=viewarticle&kbarticleid=500
>
> Are there more a2hosting customers here that can create Triggers?

My guess is that the support person does not know what they are talking about. Unless something is different for new accounts, you have psql access to your databases. You also have access via phpPgAdmin, which can create functions and triggers. Your account user owns the databases you create, so I don't see how they can prevent you from creating triggers. They have a 30 day money back guarantee, so it might be worth trying out.

John DeSoi, Ph.D.
Re: [GENERAL] Hosting Account with PostgreSQL and PHP?
On Feb 13, 2010, at 4:09 PM, Andre Lopes wrote:

> I need a hosting account with PostgreSQL and PHP. I have signed up for an account with HostNine.com, but the PostgreSQL is 8.1 and doesn't allow creating Languages and Triggers, so I can't get the website working... Please tell me if you know a good host with PostgreSQL that allows Triggers and all functionalities of PostgreSQL.

You can find a list of hosting providers here:

http://www.postgresql.org/support/professional_hosting

If you are looking for shared hosting, I doubt you'll find any that will let you create languages. But if you need to create triggers with SQL or pl/pgsql, that should not be a problem. Of course, many offer VPS hosting where you could set up Postgres yourself and configure it any way you like.

John DeSoi, Ph.D.
Re: [GENERAL] Hosting Account with PostgreSQL and PHP?
From the list I already sent, I'm certain this one allows triggers:

http://www.a2hosting.com/database/postgresql-hosting

I can also set up such an account for you, but I can't beat their price of $5.72 a month :)

John DeSoi, Ph.D.

On Feb 14, 2010, at 10:44 AM, Andre Lopes wrote:

> Hi,
>
> I have signed up for an account with HostNine.com, but the PostgreSQL version is 8.1 and doesn't allow triggers... I must change hosts; the web app is using triggers... Anyone know a host that has triggers available?
>
> Best Regards,
>
> On Sun, Feb 14, 2010 at 3:34 PM, John DeSoi de...@pgedit.com wrote:
>> On Feb 13, 2010, at 4:09 PM, Andre Lopes wrote:
>>> I need a hosting account with PostgreSQL and PHP. I have signed up for an account with HostNine.com, but the PostgreSQL is 8.1 and doesn't allow creating Languages and Triggers, so I can't get the website working... Please tell me if you know a good host with PostgreSQL that allows Triggers and all functionalities of PostgreSQL.
>>
>> You can find a list of hosting providers here:
>>
>> http://www.postgresql.org/support/professional_hosting
>>
>> If you are looking for shared hosting, I doubt you'll find any that will let you create languages. But if you need to create triggers with SQL or pl/pgsql, that should not be a problem. Of course, many offer VPS hosting where you could set up Postgres yourself and configure it any way you like.
>>
>> John DeSoi, Ph.D.
Re: [GENERAL] how do I disable automatic start on mac os x?
On Jan 4, 2010, at 9:32 AM, Chris Withers wrote:

> I feel slightly stupid asking this, but this is a Mac; surely there's some pretty shiny GUI for editing things like this? (Even Windows has the Services stuff in Control Panel.) What am I missing?

I've used this in the past, but it looks like support was dropped a few months ago. Hopefully someone else will pick it up.

http://sourceforge.net/projects/lingon/

John DeSoi, Ph.D.
Re: [GENERAL] Extended Query, flush or sync ?
On Dec 19, 2009, at 2:40 AM, Raimon Fernandez wrote:

>> I send:
>>
>> parse
>> bind
>> describe
>> execute
>> sync
>>
>> and then loop on the connection stream to receive the responses.
>
> And do you get the parseComplete after sending the parse or after sending the sync?

I don't really know or care. I send the entire sequence above and then read the results, handling each possible case. In other words, I don't read anything after each message; I only read after sending the sync.

> And also from the docs:
>
> "If Execute terminates before completing the execution of a portal (due to reaching a nonzero result-row count), it will send a PortalSuspended message; the appearance of this message tells the frontend that another Execute should be issued against the same portal to complete the operation."
>
> If I execute with a row limit of 1000, and I know there are more than 1000 rows, I get the portalSuspended as described. But if I issue a new Execute, postgresql says that myPortal doesn't exist anymore. How can I get those 1000 rows?

Are you using a named portal? Are you reading all responses until you receive a ready for query response? There are a lot of details - it really helped me to look at the psql source.

John DeSoi, Ph.D.
Re: [GENERAL] Extended Query, flush or sync ?
On Dec 18, 2009, at 4:44 PM, Raimon Fernandez wrote:

> It's not clear to me if I have to issue a flush or sync after each step of an extended query. It's almost working for me only when I send a sync, but not when I send a flush. With the flush, the connection seems frozen, or at least I don't get any data from postgre.
>
> - Send the Parse command
> - sync
> - Receive the ParseComplete
> - sync
> - Send the Bind
> - sync
> - Receive the BindComplete
> - Send the Execute
> - Receive an error = portal xxx does not exist

I send:

parse
bind
describe
execute
sync

and then loop on the connection stream to receive the responses.

John DeSoi, Ph.D.
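For what it's worth, the whole sequence can go out in a single network write. Here is a minimal byte-level sketch in Python; the helper names are mine, not from any driver, but the message layouts follow the v3 protocol documentation:

```python
import struct

def _msg(tag: bytes, body: bytes) -> bytes:
    # Every frontend message: 1-byte tag, int32 length (counts itself
    # but not the tag), then the body.
    return tag + struct.pack("!I", 4 + len(body)) + body

def parse(stmt: str, sql: str) -> bytes:
    # Parse: statement name, query text, parameter-type count (0 = infer all).
    return _msg(b"P", stmt.encode() + b"\x00" + sql.encode() + b"\x00"
                + struct.pack("!H", 0))

def bind(portal: str, stmt: str) -> bytes:
    # Bind with no parameters: portal, statement, 0 parameter format
    # codes, 0 parameter values, 0 result format codes (text for all).
    return _msg(b"B", portal.encode() + b"\x00" + stmt.encode() + b"\x00"
                + struct.pack("!HHH", 0, 0, 0))

def describe_portal(portal: str) -> bytes:
    # Describe ('D') with a 'P' byte asks for the portal's row description.
    return _msg(b"D", b"P" + portal.encode() + b"\x00")

def execute(portal: str, max_rows: int = 0) -> bytes:
    # Execute: portal name, int32 maximum row count (0 = no limit).
    return _msg(b"E", portal.encode() + b"\x00" + struct.pack("!I", max_rows))

def sync() -> bytes:
    return _msg(b"S", b"")

# One write for the whole sequence; then read until ReadyForQuery ('Z').
payload = parse("", "SELECT 1") + bind("", "") + describe_portal("") \
          + execute("") + sync()
```

After sending `payload` you would loop on the socket handling ParseComplete, BindComplete, RowDescription, DataRow, CommandComplete, and finally ReadyForQuery.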
Re: [GENERAL] Extended Query, flush or sync ?
Hi Raimon,

On Dec 20, 2009, at 2:11 PM, Raimon Fernandez wrote:

> I'm not seeing my e-mails on the PostgreSQL General List ... ??

Yes, my last message did not make it to the list yesterday (you obviously received it). I double checked and it was cc'd to the list.

> I can pack all of them and send them at the same time, except the Parse, which will go at the connection beginning in my case.

I have two routines, prepare and exec_prepare. To prepare a named statement for multiple uses, I use prepare (parse, describe, sync). exec_prepare can take a statement from prepare OR you can pass it the unparsed SQL instead (along with the parameters). In the second case it performs the parse first with the unnamed prepared statement (empty string) and then executes it. This is nice because if you don't need multiple executions, you can build and execute with a single network write and read. You get the safety of parameterized execution and you don't have a prepared statement to dispose of in another operation.

John DeSoi, Ph.D.
Re: [GENERAL] Extended Query using the Frontend/Backend Protocol 3.0
On Dec 17, 2009, at 11:13 AM, Raimon Fernandez wrote:

> I'm trying to integrate the extended query protocol with my libraries. I'm sending a simple SELECT to validate the method, but I'm getting an Invalid Message Format.

I did not add up your byte count, but maybe this will help:

(write-byte p stream)
(write-int32 (+ int32-length
                (length name) 1
                (length sql-string) 1
                int16-length
                (* int32-length param-count))
             stream)
(write-cstring name stream)
(write-cstring sql-string stream)
(write-int16 param-count stream)

John DeSoi, Ph.D.
Re: [GENERAL] Need full search text on a shared hosting web site using 8.1.x
On Nov 16, 2009, at 2:12 PM, Raymond Rodgers wrote:

> Aside from this issue, they're the best I've found for the least amount of money in terms of bandwidth, disk space, and PostgreSQL support. I'd happily continue to use my current, more expensive, hosting provider if they weren't phasing out support for compiling your own applications on their system, which means that I'll be losing PostgreSQL support because they only support MySQL officially. I went through the list of hosting providers on the PostgreSQL site, and while the old adage "you get what you pay for" might be applicable in this instance, they were the only one that I could really afford at this point in time.

a2hosting.com shows unlimited plans which include Postgres 8.4 (tsearch already included) for about $5/month. If you really need an older version, they might be willing to add your account to an older shared server. I know they used to have contrib tsearch installed with older versions.

http://www.a2hosting.com/services/web-hosting/

John DeSoi, Ph.D.
Re: [GENERAL] postgresql installation problems
On Nov 13, 2009, at 8:05 PM, Reno Bladergroen wrote:

> Probably the following has been asked before, but I've been searching the web for this problem for 2 days already: I need to install an end-user application which demands 2 postgresql users, each owning its own database. One user+database is used for a data management system (I call this one GUS), the other for the front-end application (a data-analysis program), which I call QPCR. So I installed postgreSQL according to the documentation. A server is running on port 5432 now.
>
> Now the question: when I try to install the application, the installation demands to set the port for each user separately. When I try to use 5432 for both databases, an error pops up with the notification that that port is already used by QPCR and therefore can't be used by GUS. How do I set a port for each user separately? Do I need to install two clusters (and how do I do that)?

The requirement that each database use a different port is rather strange. A single cluster (which uses one port) can have a virtually unlimited number of users and databases. So yes, you will have to install two clusters, or (depending on your platform) there is probably some easy port forwarding trick where you can have one cluster but two possible ports. Installing two clusters means running initdb for two different directories and configuring them for different ports.

John DeSoi, Ph.D.
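The two-cluster route is just two data directories and two ports. A rough sketch (paths are illustrative; exact commands vary by platform and packaging):

```shell
# As the postgres user: create two independent clusters
initdb -D /usr/local/pgsql/gus
initdb -D /usr/local/pgsql/qpcr

# Leave GUS on the default 5432; move QPCR to 5433
echo "port = 5433" >> /usr/local/pgsql/qpcr/postgresql.conf

pg_ctl -D /usr/local/pgsql/gus start
pg_ctl -D /usr/local/pgsql/qpcr start
```

Each cluster then has its own postgresql.conf, pg_hba.conf, users, and databases.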
Re: [GENERAL] MD5 Authentication
On Nov 5, 2009, at 12:35 PM, Raimon Fernandez wrote:

> At least my first md5 (psw+user) is the same as the pg_shadow value (without the 'md5') ... should I md5 the first md5 as I get it as a string (like username), or byte by byte?

As far as I know, a string. But it is unclear to me what happens when the user or database name has non-ascii characters. The client encoding is not established until after authentication. I asked about that a while ago but did not get any responses.

http://archives.postgresql.org/pgsql-general/2008-12/msg00808.php

John DeSoi, Ph.D.
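A sketch of the double hash in Python (assuming ASCII user and password; the helper name is mine, not from libpq):

```python
import hashlib

def md5_auth_response(user: str, password: str, salt: bytes) -> str:
    # Stage 1: md5(password + user) as a lowercase hex *string* --
    # this matches what pg_shadow stores after its "md5" prefix.
    stage1 = hashlib.md5((password + user).encode()).hexdigest()
    # Stage 2: hash the hex string (not the raw digest bytes) together
    # with the 4-byte salt from AuthenticationMD5Password, prefix "md5".
    return "md5" + hashlib.md5(stage1.encode() + salt).hexdigest()

resp = md5_auth_response("alice", "secret", b"\x01\x02\x03\x04")
```

The key point matching the answer above: stage 2 consumes the 32-character hex text of stage 1, not its 16 raw digest bytes.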
Re: [GENERAL] Cancelling Requests Frontend/Backend Protocol TCP/IP
On Nov 2, 2009, at 4:15 AM, Raimon Fernandez wrote:

> If for example I send a SELECT * from myTable, it has 2 rows, and postgre starts sending the rows, how can I cancel this operation? I thought Cancelling Requests would be perfect for this ... the workaround is closing and opening the connection again, but I don't like this approach ...

A cleaner solution is to use the extended query protocol to limit the total number of rows returned. For example, in my application I limit the result set to 1000 rows. If I have not received all of the results, I switch to a server side cursor but still have the first 1000 rows for immediate display to the end user.

John DeSoi, Ph.D.
Re: [GENERAL] Cancelling Requests Frontend/Backend Protocol TCP/IP
On Nov 2, 2009, at 12:17 PM, Raimon Fernandez wrote:

> When has postgres finished processing the select: just before sending the first row (1), in the middle (2), or at the end (3), when the last row has been sent? If I send the CancelRequest when postgres is in point 3, I'm too late, but if postgres is in 1 or 2, the CancelRequest will have some effect. Am I still wrong here? Thanks for clarifying the concept!

Yes, it will have some effect in cases 1 and 2. You will know it worked because you'll get error 57014 - canceling statement due to user request. An easy way to test this out is to call pg_sleep with a big number and then cancel the query on another connection. You won't have to worry about the timing of receiving all rows or not.

John DeSoi, Ph.D.
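The pg_sleep experiment looks roughly like this (column names per recent servers; older releases use procpid/current_query in pg_stat_activity):

```sql
-- Session 1: a query that is easy to catch mid-flight
SELECT pg_sleep(600);

-- Session 2: find and cancel it from another connection
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE query LIKE '%pg_sleep%'
  AND pid <> pg_backend_pid();
```

Session 1 then fails with SQLSTATE 57014, "canceling statement due to user request", the same error a CancelRequest produces.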
Re: [GENERAL] Implementing Frontend/Backend Protocol TCP/IP
On Oct 27, 2009, at 4:55 AM, Albe Laurenz wrote:

> That's probably not the problem in the original message, but there are things you can do with the frontend/backend protocol that libpq does not expose: for example, with the extended query protocol you can send a Bind call that requests that some of the results should be returned in text format, others in binary.

Another protocol feature that I don't think is exposed in libpq is the ability to limit the maximum number of rows returned by a query. So if you are executing end user queries, you don't have to worry about processing a massive result set or somehow parsing the query to add a limit clause.

John DeSoi, Ph.D.
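The row limit lives in the Execute message itself. A small sketch in Python of just that message (the builder name is mine):

```python
import struct

def execute_message(portal: str, max_rows: int) -> bytes:
    """Frontend Execute ('E'): portal name as a C string, then an int32
    maximum row count (0 means no limit). The int32 length field counts
    itself but not the leading tag byte."""
    body = portal.encode() + b"\x00" + struct.pack("!I", max_rows)
    return b"E" + struct.pack("!I", 4 + len(body)) + body

msg = execute_message("", 1000)  # unnamed portal, stop after 1000 rows
```

When the limit is hit the backend answers with PortalSuspended instead of CommandComplete, and a further Execute on the same (named) portal continues the result set.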
Re: [GENERAL] Implementing Frontend/Backend Protocol TCP/IP
On Oct 26, 2009, at 7:17 PM, John R Pierce wrote:

> yah, seriously. the binary protocol is not considered stable, it can change in subtle ways in each version. libpq handles the current version and all previous versions, and exposes all methods.

I don't think the frontend/backend protocol has changed since version 7.4. All data can be in text format; you don't have to deal with binary. I have implemented an interface in Lisp. I found it much easier and more flexible than the foreign function interface with C and libpq.

John DeSoi, Ph.D.
Re: [GENERAL] edit function
On Sep 7, 2009, at 2:26 AM, Rakotomandimby Mihamina wrote:

> I have a function like this:
>
> CREATE FUNCTION f_active_client(character varying) RETURNS character varying AS $_$
> DECLARE
>     v_modem ALIAS FOR $1;
>     v_firstuse BOOLEAN;
>     v_admactive BOOLEAN;
>     v_codeclt varchar;
> BEGIN
>     SELECT INTO (...)
>     IF FOUND THEN
>         (...)
>     ELSE
>         RETURN 'ERROR';
>     END IF;
> END;
> $_$
>
> I have to insert several IF statements in the IF FOUND one. I won't do it all at once; I will insert them one by one. I cannot stop the PG server for too long (an /etc/init.d restart time is OK, more is not). Dumping and then restoring is not possible because of too long a downtime. DELETing and CREATing the function is not possible because of dependencies. When looking at the help, '\h' has no REPLACE FUNCTION, only ALTER FUNCTION. How do I manage it?

Write it exactly as you have above, but replace the first line with:

CREATE OR REPLACE FUNCTION f_active_client(character varying)

John DeSoi, Ph.D.
Re: [GENERAL] Problem starting pgsql server on Mac OS X. Pg_hba.conf reading permission.
On Aug 31, 2009, at 5:24 AM, NNL wrote:

> BigMac:~ postgres$ FATAL: data directory /usr/local/pgsql/data has group or world access
> DETAIL: Permissions should be u=rwx (0700).
>
> chmod 700 /usr/local/pgsql/data/
>
> BigMac:~ postgres$ /usr/local/pgsql/bin/pg_ctl start -D /usr/local/pgsql/data
> server starting
> BigMac:~ postgres$ LOG: could not open configuration file /usr/local/pgsql/data/pg_hba.conf: Permission denied
> FATAL: could not load pg_hba.conf

Most likely because the user you are using (which appears to be postgres, from the prompt) is not the owner of the file (and database). This would be the case if initdb was run as a different user than postgres. Run ls -al to check. So your options are to change the ownership of the files, or run as the user which already owns them. Something like:

sudo -u username /usr/local/pgsql/bin/pg_ctl start -D /usr/local/pgsql/data

John DeSoi, Ph.D.
Re: [GENERAL] join from array or cursor
On Aug 21, 2009, at 9:22 AM, Greg Stark wrote:

> Of course immediately upon hitting send I did think of a way:
>
> SELECT (r).*
>   FROM (SELECT (SELECT x FROM x WHERE a=id) AS r
>           FROM unnest(array[1,2]) AS arr(id)
>        ) AS subq;

Thanks to all for the interesting insights and discussion. Where in the docs can I learn about writing queries like that :). While it avoids the sort of my method, it appears to be almost 5 times slower (about 4000 keys in the cursor, Postgres 8.4.0):

EXPLAIN ANALYZE SELECT (r).*
FROM (SELECT (SELECT work FROM work WHERE dbid=id) AS r
        FROM cursor_pk('c1') AS arr(id)
     ) AS subq;

Function Scan on cursor_pk arr  (cost=0.00..116011.72 rows=1000 width=4) (actual time=13.561..249.916 rows=4308 loops=1)
  SubPlan 1
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.003..0.003 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 2
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 3
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 4
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 5
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 6
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 7
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 8
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 9
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 10
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 11
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 12
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 13
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 14
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
Total runtime: 250.739 ms

EXPLAIN ANALYZE SELECT * FROM cursor_pk('c1') c LEFT JOIN work ON (c.pk = work.dbid) ORDER BY c.idx;

Sort  (cost=771.23..773.73 rows=1000 width=375) (actual time=36.058..38.392 rows=4308 loops=1)
  Sort Key: c.idx
  Sort Method: external merge  Disk: 1656kB
  ->  Merge Right Join  (cost=309.83..721.40 rows=1000 width=375) (actual time=15.447..22.293 rows=4308 loops=1)
        Merge Cond: (work.dbid = c.pk)
        ->  Index Scan using work_pkey on work  (cost=0.00..385.80 rows=4308 width=367) (actual time=0.020..2.078 rows=4308 loops=1)
        ->  Sort  (cost=309.83..312.33 rows=1000 width=8) (actual time=15.420..15.946 rows=4308 loops=1)
              Sort Key: c.pk
              Sort Method: quicksort  Memory: 297kB
              ->  Function Scan on cursor_pk_order c  (cost=0.00..260.00 rows=1000 width=8) (actual time=12.672..13.073 rows=4308 loops=1)
Total runtime: 51.886 ms

Thanks for any further suggestions.

John DeSoi, Ph.D.
Re: [GENERAL] join from array or cursor
On Aug 21, 2009, at 7:26 AM, Sam Mason wrote:

> It may help to wrap the generate_series call into a function so you don't have to refer to myPkArray so many times.

Yes, this is the best I have come up with so far. I have a set returning function which returns the key and the index number. The implementation with a cursor looks like this:

SELECT * FROM cursor_pk('c1') c LEFT JOIN foo ON (c.pk = foo.pk) ORDER BY c.idx;

An array function could return the same values. I need to do some testing to see what the performance looks like for each method. Thanks for the suggestion.

John DeSoi, Ph.D.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] join from array or cursor
Suppose I have an integer array (or cursor with one integer column) which represents primary keys of some table. Is there a simple and efficient way to return the rows of the table corresponding to the primary key values and keep them in the same order as the array (or cursor)? Seems like it should be easy, but I'm not seeing it. Thanks, John DeSoi, Ph.D. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
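For the archives: newer PostgreSQL releases make this straightforward. A sketch assuming PostgreSQL 9.4 or later and a hypothetical table `foo` with primary key `pk`:

```sql
-- unnest ... WITH ORDINALITY (PostgreSQL 9.4+) numbers the array elements,
-- so the joined rows can be put back into the array's order.
SELECT f.*
FROM unnest(ARRAY[42, 7, 19]) WITH ORDINALITY AS k(pk, ord)
JOIN foo AS f ON f.pk = k.pk
ORDER BY k.ord;
```

This replaces the hand-rolled set returning function discussed in the thread: the ordinality column plays the role of `c.idx`.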
Re: [GENERAL] Information about columns
On Jun 22, 2009, at 11:43 AM, Dario Teixeira wrote:

> > How are you talking to the database? ODBC? JDBC? LibPQ? Something else? Or do you want this from within PL/PgSQL?
>
> I'm hacking on a client-side library which talks directly to the Postmaster using the wire protocol [1]. I need this information to improve some of the nullability-detection heuristics used by the library.

The information you want is always returned from the query as a row description message. This includes the type oid of real and computed columns. See the RowDescription message on this page for details:

http://www.postgresql.org/docs/8.3/static/protocol-message-formats.html

John DeSoi, Ph.D.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Information about columns
On Jun 22, 2009, at 4:57 PM, Dario Teixeira wrote:

> *However*, if I create a new type (which has an associated pg_class entry), and define a function which returns a SETOF that type, RowDescription will not tell me its OID. For example:
>
> CREATE TYPE foobar_t AS (quant int);
> CREATE FUNCTION foobar2 () RETURNS SETOF foobar_t AS 'SELECT * FROM foobar' LANGUAGE sql STABLE;
>
> Is this a bug or a conscious decision? And in the latter case, how can I retrieve the pg_class OID of foobar_t?

I don't think it is a bug because the documentation clearly states "if the field can be identified as a column of a specific table, the object ID of the table; otherwise zero." A type is not the same as a table. It is not as elegant as you would like, but maybe one idea is to create your own alias of the built-in type so you can determine the answer just by looking at the column type. For example, instead of using int in CREATE TYPE above, create your own type equivalent to an integer.

John DeSoi, Ph.D.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] looping over a small record set over and over in a function
On Jun 20, 2009, at 8:35 AM, Ivan Sergio Borgonovo wrote:

> > And lastly, if your current approach really is the only way to compute what you're after, then maybe PL/pgSQL isn't the right match for the problem; it looks like you'd be better served by a language that can work with arrays of typed structures. As I'm not familiar with the other PL languages I can't tell whether they would be suitable in that respect, but I suspect Python or Java would be able to handle this better.
>
> Yeah. I took a look at Python but I don't want to add one more language to the mix. I enjoy the strict type checking of plpgsql, even if some syntactic sugar would help make it more pleasing, and I think it is the most lightweight on offer. Still, I don't know how easy it is with e.g. Python to load an array with a result set, change it, and place it back into the table it came from.
>
> Your suggestion about cursors could be the way... but I don't know enough about cursor internals to understand if updating a field of a cursor will cause disk writes.

I have not tried this, but the documentation says arrays can be created for any built-in or user-defined base type, enum type, or composite type. So maybe you could define a composite type and stuff those into a single array?

John DeSoi, Ph.D.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
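A minimal sketch of the composite-type-array idea (the type, values, and names are hypothetical; `DO` blocks and `FOREACH` require PostgreSQL 9.0/9.1+, so on 8.x you would index the array with `items[i]` in a numeric loop instead):

```sql
CREATE TYPE pair AS (id integer, qty integer);

-- A result set can be collapsed into one array value with something like:
--   SELECT array_agg(ROW(id, qty)::pair) FROM some_table;

DO $$
DECLARE
    items pair[] := ARRAY[(1, 10)::pair, (2, 20)::pair];
    it pair;
BEGIN
    -- Walk the array element by element; each element is a typed record.
    FOREACH it IN ARRAY items LOOP
        RAISE NOTICE 'id=% qty=%', it.id, it.qty;
    END LOOP;
END;
$$;
```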
Re: [GENERAL] connecting to a remote pq always require a password
On Jun 18, 2009, at 8:11 AM, David Shen wrote:

> In the pg_hba.conf file, I even change the host access control to this:
>
> host    all    all    127.0.0.1/32    trust
>
> but it still does not work. What have I missed?

Did you reload the configuration (or restart the server) after making this change?

pg_ctl reload -D /path/to/your/data

John DeSoi, Ph.D.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Best way to monitor, control, or rewrite data definition commands?
On May 12, 2009, at 7:59 PM, Turner, Ian wrote:

> When someone does
>
> CREATE TABLE foo (fooid integer);
>
> then I would also like to do
>
> CREATE TABLE foo_audit (fooid integer, other columns);
>
> along with the creation of some other triggers, rules, etc. Is there any way to be notified when a user executes data definition commands such as CREATE TABLE? It doesn't appear possible to apply triggers or rules to the system tables, and the query rewrite engine only seems to apply to SELECT, INSERT, and UPDATE. Thoughts?

Correct, there are no triggers on the system tables. Maybe some type of cron process that ensures there is a foo_audit table for each table foo?

John DeSoi, Ph.D.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] converting from bytea to integers
I'd like to convert some bytea data to an array of four byte integers (and vice versa). I'm probably missing something obvious, but I don't see an efficient way to generate a 4 byte integer from a bytea string (could be big endian or little endian). Converting back to bytea seems easy enough using to_hex. Thanks for any suggestions, John DeSoi, Ph.D. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] converting from bytea to integers
On Apr 20, 2009, at 5:23 PM, Daniel Verite wrote:

> get_byte()?
>
> mailtest=> \set e '\'\12\15\107\20\'::bytea'
> mailtest=> select get_byte(:e,0), get_byte(:e,1), get_byte(:e,2), get_byte(:e,3);
>
>  get_byte | get_byte | get_byte | get_byte
> ----------+----------+----------+----------
>        10 |       13 |       71 |       16

That's what I ended up with. My first attempts at it were unsuccessful because I did not notice that get_byte uses zero indexing. Earlier in the routine I extracted bytes using substring and just assumed they used the same indexing. They don't. It might be worthy of a documentation note -- it seems easy to miss if you have not used the binary functions before.

I generated the integer from the bytes using something like this:

b1 = get_byte(p_array, i+3);
b2 = get_byte(p_array, i+2);
b3 = get_byte(p_array, i+1);
b4 = get_byte(p_array, i);
val = (b1 << 24) + (b2 << 16) + (b3 << 8) + b4;

Thanks,

John DeSoi, Ph.D.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
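The snippet above can be wrapped into a reusable function. A sketch (the function name and the little-endian byte order are my choices) that reads a 4-byte integer starting at offset `i`:

```sql
-- Assemble a 4-byte integer from a bytea, least significant byte first.
-- Note that get_byte() is zero-indexed, unlike substring().
CREATE OR REPLACE FUNCTION bytea_to_int4_le(p bytea, i integer)
RETURNS integer AS $$
BEGIN
    RETURN (get_byte(p, i + 3) << 24)
         + (get_byte(p, i + 2) << 16)
         + (get_byte(p, i + 1) << 8)
         +  get_byte(p, i);
END;
$$ LANGUAGE plpgsql IMMUTABLE;
```

For big-endian data, reverse the offsets so the byte at `i` takes the `<< 24` position.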
Re: [GENERAL] Full text search strategy for names
On Apr 17, 2009, at 7:02 AM, Rick Schumeyer wrote:

> You can get extra (undesirable) results, depending on the name. For example, if you are searching for the last name of Ricks, you will also find all authors whose first name is Rick

If you can process the names separately from the rest of the text, try

select to_tsvector('simple', 'claude Jones');

      to_tsvector
-----------------------
 'jones':2 'claude':1

John DeSoi, Ph.D.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Question Concerning PostgreSQL license.
On Mar 17, 2009, at 8:41 PM, Zachary Mitchell, BCIS wrote: If one wishes to use one's own database server of PostGreSQL for commercial purposes, does one need to purchase a commercial license from the PostgreSQL Global Development Group? or may it be used for commercial purpose, legally, at no cost? You can use PostgreSQL legally at no cost. There is no requirement to purchase any kind of license for commercial (or non-commercial) purposes. John DeSoi, Ph.D. - Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] grant everything on everything and then revoke
On Mar 3, 2009, at 4:35 AM, Ivan Sergio Borgonovo wrote:

> But I read: http://www.postgresql.org/docs/8.3/static/sql-grant.html
>
> "The SQL standard does not support setting the privileges on more than one object per command."
>
> This is going to make maintenance and development a PITA every time I add a new table, sequence, schema...

There is some pl/pgsql code here to grant on more than one object at a time:

http://pgedit.com/tip/postgresql/access_control_functions

John DeSoi, Ph.D.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
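In the same spirit, a minimal sketch of such a helper (the function name is my own): it grants SELECT on every table in a schema by building GRANT statements from pg_tables.

```sql
CREATE OR REPLACE FUNCTION grant_select_on_schema(p_schema text, p_role text)
RETURNS void AS $$
DECLARE
    r record;
BEGIN
    -- One GRANT per table; quote_ident guards against odd identifiers.
    FOR r IN SELECT tablename FROM pg_tables WHERE schemaname = p_schema LOOP
        EXECUTE 'GRANT SELECT ON ' || quote_ident(p_schema) || '.'
             || quote_ident(r.tablename) || ' TO ' || quote_ident(p_role);
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

Later releases (PostgreSQL 9.0+) added GRANT ... ON ALL TABLES IN SCHEMA, which covers this common case without a helper function.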
Re: [GENERAL] Function parameter
On Feb 24, 2009, at 5:10 PM, Nico Callewaert wrote:

> I'm trying to modify an input parameter of a function, but I receive the following error:
>
> ERROR: $17 is declared CONSTANT
> CONTEXT: compile of PL/pgSQL function update_jobreg near line 26
>
> Is there a way to modify an input parameter, or do I have to declare a local variable and assign that input parameter to it?

Declaring a local variable is the best way to do it. You can modify a parameter if you declare it as INOUT, but you generally only want to do that if you want to return something from the function. Note that you can declare and assign the value in a single line in the DECLARE section of the function, e.g.

text_var text := text_param;

John DeSoi, Ph.D.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
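A short sketch of the local-variable approach (the function and names are hypothetical; in the PostgreSQL versions current at the time, plain input parameters were treated as CONSTANT inside the body):

```sql
CREATE OR REPLACE FUNCTION normalize_name(p_name text)
RETURNS text AS $$
DECLARE
    v_name text := p_name;   -- declare and assign in one line
BEGIN
    v_name := trim(v_name);  -- assigning to p_name directly would raise the CONSTANT error
    RETURN lower(v_name);
END;
$$ LANGUAGE plpgsql;
```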
Re: [GENERAL] Pet Peeves?
On Feb 3, 2009, at 3:41 PM, Peter Geoghegan wrote: What about postgreSQL's inability to re-order columns? Please don't point out that I shouldn't rely on things being in a certain order when I SELECT * FROM table. I'm well aware of that, I just generally have an aesthetic preference for a table's columns being in a certain order. Somewhat related, it would be nice if columns had a unique identifier in the catalog rather than just a sequence number for the table. This would make it possible to distinguish between altering a column versus dropping/adding when comparing schemas or detecting DDL changes. John DeSoi, Ph.D. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] encoding of PostgreSQL messages
On Dec 23, 2008, at 4:24 PM, Karsten Hilbert wrote:

> How can I programmatically detect which encoding a PostgreSQL server I am trying to connect to sends back messages -- before I connect (so client_encoding and the pg_settings table are flat out).

I don't think there is a way, because you can't get any information without authorizing first. It is also unclear to me how things work in the other direction. When authenticating, what if the user or database names have non-ASCII characters? Are they interpreted in the encoding of the server, since the client has not established an encoding?

John DeSoi, Ph.D.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] tsearch2 problem
On Oct 31, 2008, at 6:30 AM, Jodok Batlogg wrote:

> nevertheless i still have the problem that words with '/' are being interpreted as file paths instead of words. any idea how i could tweak this?

The easiest solution I found was to replace '/' with a space before parsing the text.

John DeSoi, Ph.D.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] [Help] Config Failure on Mac OSX: psqlodbc-08.03.0300
On Oct 28, 2008, at 8:22 AM, Brent Austin wrote: configure: error: pg_config not found (set PG_CONFIG environment variable) How did you do your PostgreSQL install? In the normal install from source, pg_config is in the bin folder with the rest of the usual PostgreSQL executables. John DeSoi, Ph.D. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Drupal and PostgreSQL - performance issues?
On Oct 13, 2008, at 5:08 AM, admin wrote: However, PostgreSQL support in the PHP CMS world seems lacking. Joomla is basically a MySQL-only shop. Drupal is *maybe* suitable, but who really knows where it will end up? My hope is that Drupal is moving in the right direction. With version 6 they completely abstracted the schema building API. Previously, MySQL and PostgreSQL had to be specified separately which is the main reason a lot of modules did not work with PostgreSQL. Things should improve as modules are upgraded to Drupal 6. John DeSoi, Ph.D. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Drupal and PostgreSQL - performance issues?
On Oct 12, 2008, at 11:57 PM, Mikkel Høgh wrote: In any case, if anyone has any tips, input, etc. on how best to configure PostgreSQL for Drupal, or can find a way to poke holes in my analysis, I would love to hear your insights :) I just came across this article about moving Drupal from MySQL to PostgreSQL because of MyISAM data corruption and InnoDB was too slow. http://groups.drupal.org/node/15793 John DeSoi, Ph.D. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] psql scripting tutorials
On Sep 10, 2008, at 2:46 AM, Artacus wrote:

> Who else is doing something like this? Can psql access environmental variables or command line params? Or do I have to have my bash script write a psql script every time?

The psql \! command can execute shell commands. You can also use backticks, e.g.

=> \echo `date`
Wed Sep 10 08:07:19 EDT 2008

John DeSoi, Ph.D.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
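Both environment variables and command line parameters can feed psql variables directly; a sketch (the file, table, and variable names are made up):

```sql
-- report.sql -- run as:  psql -v min_id=100 -f report.sql
-- Backticks pull values from the environment into psql variables:
\set who `echo $USER`
\echo running as :who

-- Values passed with -v arrive the same way and interpolate with a colon:
SELECT * FROM orders WHERE id > :min_id;
```

So the bash script does not need to generate a fresh psql script each time; it can pass everything through `-v` switches or the environment.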
Re: [GENERAL] looking for psql without server-installation
On Aug 6, 2008, at 11:57 AM, Raymond O'Donnell wrote:

> On 06/08/2008 16:38, [EMAIL PROTECTED] wrote:
> > hi all, maybe a simple question, but i found no answer. what is the best way to get a psql-client at a linux system without the need of a server. is [snip] p.s. besides this i am also interested, if there is a solution of this problem for windows
>
> Actually, it would be *awfully* handy to be able to download pre-built binaries of psql and pg_dump for Windows. Maybe there already is a simple way to do this that I'm missing?

Windows is actually pretty easy compared to the Mac, where you have full library paths hard coded in the compiled binaries. On Windows all you need to do is put psql.exe and pg_dump.exe in a folder with all the dll library files (libpq.dll, clibiconv2.dll, etc). pgEdit includes psql like this and I think pgAdmin does the same with pg_dump.

John DeSoi, Ph.D.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Fresh install on Mac OS 10.5.4
On Aug 2, 2008, at 7:13 PM, Manoj Patwardhan wrote:

> bash-3.2$ initdb -D /usr/local/pgsql/data
> dyld: Library not loaded: /usr/local/pgsql/lib/libpq.4.dylib
>   Referenced from: /usr/local/bin/initdb
>   Reason: image not found
> Trace/BPT trap
>
> Any ideas? This is on Mac OS 10.5.4. What I see in /usr/local/pgsql/lib is libpq.5.dylib and not libpq.4.dylib.

Could be a path problem. Try

which initdb
initdb --version

to make sure you are running the version of initdb you think you are.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] A couple of newbie questions ...
On Jul 23, 2008, at 12:00 PM, Shane Ambler wrote: To be honest I hadn't seen the use of INSERT INTO table (fld_x, fld_y,fld_z) VALUES (DEFAULT, 'y','z') before, I have always gone with INSERT INTO table (fld_x, fld_y,fld_z) VALUES (NULL, 'y','z') is DEFAULT a better option than using NULL? or is it just a preference to spell out the implied default entry? I've only used DEFAULT in CREATE TABLE(...) The semantics of using DEFAULT or NULL is completely different. If the column has a default value (for example, 0), then including DEFAULT in your insert list will give you the default value of zero. But if you include NULL in your insert list -- you'll get NULL, not zero. If nothing is included for the column in your insert list, you'll get the column default if it has one, otherwise NULL. John DeSoi, Ph.D. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
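The difference is easy to demonstrate with a hypothetical table:

```sql
CREATE TABLE demo (x integer DEFAULT 0, y text);

INSERT INTO demo (x, y) VALUES (DEFAULT, 'a'); -- x gets the column default, 0
INSERT INTO demo (x, y) VALUES (NULL, 'b');    -- x is NULL; the default is ignored
INSERT INTO demo (y) VALUES ('c');             -- x omitted entirely: default applies, 0
```

If the column is declared NOT NULL, the second form fails outright while the first and third still succeed, which is another reason to prefer DEFAULT over an explicit NULL.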
Re: [GENERAL] mac install question
On Jul 24, 2008, at 1:51 AM, Tom Lane wrote: Relative paths sound like the best solution to me, assuming they work. Relative paths work fine. I use this all the time for distributing psql. It looks something like this: install_name_tool -change /path/to/postgresql/lib/libpq.dylib @executable_path/../lib/libpq.dylib psql John DeSoi, Ph.D. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] changing text search treatment of punctuation
Text with the '/' character gets treated as a file path, e.g. select * from to_tsvector('english', 'home/work'); gives only the single token: 'home/work':1 Changing '/' to '-' gives 'home':2 'work':3 'home-work':1 which is much more desirable for this application. Is there an easy way to change '/' to be treated like '-' ? I've looked over the documentation several times and could not find anything. Even just a way to get the two tokens 'home' and 'work' without the joined form would be helpful. Thanks, John DeSoi, Ph.D. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
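The workaround that came out of this thread was to substitute the slash before parsing:

```sql
-- Replacing '/' with a space yields separate tokens instead of a file path.
SELECT to_tsvector('english', replace('home/work', '/', ' '));
-- result: 'home':1 'work':2
```

The same replace() call has to be applied to the query text as well, so both sides tokenize consistently.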
Re: [GENERAL] bytea case sensitivity
On May 21, 2008, at 8:23 AM, Sim Zacks wrote: We are testing dbmail on postgresql, which uses a bytea column to store the messages. Because of this searching through messages is case sensitive. Does anyone know of a way to make it not case-sensitive, either by changing the data type (obviously without breaking the current functionality) or a Select Rule or any other method? My suggestion would be to write a trigger and use tsearch to setup a full text index. Even if you manage a case-insensitive search method, I doubt it will be useful unless you always plan to have a small number of messages to search. John DeSoi, Ph.D. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] problem with importing dbf file into postgresql using navicat on mac OS X
On May 12, 2008, at 1:28 PM, Mathias Ghys wrote:

> Error Message:
> ERROR: invalid byte sequence for encoding UTF8: 0xe96f70
> HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by client_encoding.
>
> My postgresql database is UTF-8 encoded. With importing I get the following data: http://server58.dedicatedusa.com/~mathias/mathias/datatypes.tiff
>
> do you guys have any idea what i've been doing wrong and how I could get the *.dbf files into my postgresql database?

I think the problem is that Navicat sets your client connection to UTF-8 but the data you are importing from the file is not UTF-8. Navicat should have a way for you to change your client encoding to match what is in the file. The main problem is you need to know exactly what encoding is being used in the file. If the dbf file is really broken (invalid sequences no matter how it is encoded) then you'll need to fix that first.

John DeSoi, Ph.D.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Schema migration tools?
On Apr 21, 2008, at 10:44 PM, Christophe wrote: I'd like a tool that would automatically create these scripts, and I wondered if anything like this existed. The theory would be that it would consider two databases a and b, and produce the appropriate script to change b's schema to match a. Does anything like this exist? If not, I might have a new project... However it happens, you need to *capture* the schema changes that need to be applied to each database. I don't think you can just compare schemas and guarantee that database A will be transformed in exactly the same way as database B. For example, suppose the last text column name in database B was different from A. How can you determine if the column was renamed or if the column was dropped and a new column was added? The semantics of that difference could be very important. It would be nice if PostgreSQL had some kind of unique reference for the column, but I think columns are just numbered sequentially as they are added. It would also be neat to have a built-in way to log the schema changes. John DeSoi, Ph.D. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Schema migration tools?
On Apr 22, 2008, at 4:53 PM, Erik Jones wrote: It would be nice if PostgreSQL had some kind of unique reference for the column, but I think columns are just numbered sequentially as they are added. It would also be neat to have a built-in way to log the schema changes. It does: log_statement set to either 'all' or 'ddl' will do the trick. If I do this, is there a way to get a transaction consistent log of just the necessary commands to transform another copy of the database? In other words, I assume this approach will log each DDL command even if the transaction is rolled back. Correct? Thanks, John DeSoi, Ph.D. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] pgplsql, how to save row variable to a table row
On Mar 26, 2008, at 8:59 AM, josep porres wrote: INSERT INTO demo_tab SELECT demo_tab_row.*; I tried this before with no success. But I realized that it was because a problem with a constraint, no warnings about this at all. With the detailed insert i've got the constraint error. I was using the execute because I had a variable called the same as a field, not good, I changed it. So now it's working perfectly!!! You don't need SELECT at all: INSERT INTO demo_tab VALUES (demo_tab_row.*); John DeSoi, Ph.D. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] psql /usr/lib/libgcc dependency in pg 8.3
I notice that when I build 8.3.1 psql on OS X (10.5.2) there is now a library dependency on /usr/lib/libgcc_s.1.dylib even though I specified --without-readline in configure. I don't see this dependency with psql 8.2. Is there a way to remove this library requirement in the build? I want to build a binary that will run on 10.3 and I don't think it has /usr/lib/libgcc. Thanks, John DeSoi, Ph.D. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] psql /usr/lib/libgcc dependency in pg 8.3
On Mar 26, 2008, at 8:17 PM, Tom Lane wrote:

> Whatever would make you think that readline has anything to do with that?

Just an uneducated guess, because long ago not adding that flag required additional library dependencies. I forgot to include that when I compiled the first time and there was no error. I used to have to tell it explicitly where to find the readline library.

> > I want to build a binary that will run on 10.3 and I don't think it has /usr/lib/libgcc.
>
> I seem to recall there's some sort of compatibility stuff for building back-compatible binaries on the Xcode disk, but it's not installed by default.

OK, so it is really an Xcode issue and nothing that changed with 8.3.

Thanks,

John DeSoi, Ph.D.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Storing images as BYTEA or large objects
On Feb 13, 2008, at 2:53 PM, Koen Vermeer wrote:

> I'll check to see what the options are for reading in the data in PHP. Thanks for the help!

If you use prepared statements, you don't need to do anything special at all for bytea with PHP. No worries about escaping and all that. Using the schema below and a simple prepared statement API (http://pgedit.com/resource/php/pgfuncall), I can insert/load documents with a single line like:

$db->blob_insert($content);
$content = $db->blob_content($this->object_ref);

John DeSoi, Ph.D.

-- blobs

create table blob (
    dbid serial primary key,
    content bytea
);

create or replace function blob_insert(p_content bytea) returns integer as $$
declare
    new_dbid integer = nextval(pg_get_serial_sequence('blob', 'dbid'));
begin
    insert into blob (dbid, content) values (new_dbid, p_content);
    return new_dbid;
end;
$$ language plpgsql;

create or replace function blob_update(p_dbid integer, p_content bytea) returns integer as $$
begin
    update blob set content = p_content where dbid = p_dbid;
    if found then
        return 1;
    else
        return 0;
    end if;
end;
$$ language plpgsql;

create or replace function blob_content(p_dbid integer) returns bytea as $$
declare
    v_content bytea;
begin
    select into v_content content from blob where dbid = p_dbid;
    return v_content;
end;
$$ language plpgsql;

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [GENERAL] PostgreSQL/PHP Application Server
On Feb 2, 2008, at 9:40 PM, Brian A. Seklecki (Mobile) wrote:

> On Thu, 2008-01-24 at 13:10 -0500, John DeSoi wrote:
> > The user/login system is extensible, so you could write your own.
>
> I'm not sure if I follow: Are you suggesting that the CMS system, Drupal, is an example of an application server model because of its framework extension?

No, I was suggesting an approach that might address your requirement for integration with other authentication systems.

John DeSoi, Ph.D.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] PostgreSQL/PHP Application Server
On Jan 24, 2008, at 12:15 PM, Brian A. Seklecki wrote:

> Are there any frameworks / toolkits available, possibly as established F/OSS projects, for web applications using PHP+PostgreSQL? sf.net/google comes up short -- a few XML services and Perl+PgSQL hits.
>
> By 'application server', as a marketing wank-word gleaned from Oracle/IBM/BEA, essentially constituted by:
>
> *) Templates for common data structures
> *) Centralized deployment of applications as 'modules' using a shared foundation
> *) A row-level / user-level ('view based') object security model
> *) Corporate Directory / Backend Integration (LDAP, Kerberos, PAM, NSS)
>
> If such a project doesn't exist, I propose instantiating one. #1 and #2 are easy. #3 and #4 are showstoppers.

Drupal - http://drupal.org

I don't know if there is a module for #4; check http://drupal.org/project/Modules. The user/login system is extensible, so you could write your own.

John DeSoi, Ph.D.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Postgres from PHP in Leopard
On Dec 19, 2007, at 6:32 AM, Gordon wrote: However, the problem is that I can't get PHP to talk to Postgres. Running phpinfo () on the built in PHP shows that database support extends as MySql, Mysqli and SQLite extensions, and SQLite drivers for PDO. What I really need is a Postgres extension for maintaining older code and a Postgres PDO driver for new code in development. I think Entropy's package is the easiest way to get this going. It will get you PHP 5 install that works well with Apache and PostgreSQL. I have not tried it in Leopard, but hopefully that will work without problems. http://www.entropy.ch/software/macosx/php/ John DeSoi, Ph.D. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Verison 8.3 PL/pgSQL debugger Question
Tony, On Nov 10, 2007, at 4:34 PM, Tony Caduto wrote: Is there any documentation for developers on how to use the new debugger in 8.3? Specifically on how it works and general guidelines on integration into 3rd party GUI applications. The API documentation is in the source file pldbgapi.c. Korry posted a more detailed example here (which was to be added to the readme file): http://archives.postgresql.org/pgsql-hackers/2007-09/msg00241.php John DeSoi, Ph.D. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] reporting tools
On Aug 22, 2007, at 7:21 PM, Geoffrey wrote: We are looking for an open source reporting tool that will enable users to generate their own reports. Something like Crystal Reports. ;) I was looking at a couple the other day: iReport (part of Jasper), OpenRPT, and DataVision (http://datavision.sourceforge.net/). The DataVision page has some links to other report writers. Hopefully you'll do better than I did -- I also wanted something that works on OS X. All of the above meet that criteria by using Java or GTK, but the user interfaces are hard to take if you want a typical Mac application. John DeSoi, Ph.D. http://pgedit.com/ Power Tools for PostgreSQL ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] Non-superuser creating a flat file
On Aug 13, 2007, at 10:07 AM, Terri Reid wrote: I have data that is being updated in a table that I need to export to a flat file via a database trigger on insert or update. The user performing the update will not be a superuser. I’ve tried to use COPY TO, but that doesn’t work for non-superusers. Is there some other functionality that can write out flat files? I’m looking for something similar to the Oracle UTL file functionality, as ideally I’d want to append the data to an existing file and be able to manipulate the data before I wrote it out. Maybe it could work if you create your function using the SECURITY DEFINER option? http://www.postgresql.org/docs/8.2/interactive/sql-createfunction.html John DeSoi, Ph.D. http://pgedit.com/ Power Tools for PostgreSQL ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
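A minimal sketch of the SECURITY DEFINER idea (table, function, and role names are placeholders): the function runs with its owner's privileges, so a superuser-owned definition lets an ordinary role drive COPY.

```sql
-- Create as a superuser. SECURITY DEFINER makes the COPY run with
-- the owner's rights, so the calling role need not be a superuser.
CREATE OR REPLACE FUNCTION export_flat_file(p_path text)
RETURNS void AS $$
BEGIN
    EXECUTE 'COPY audit_data TO ' || quote_literal(p_path);
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

GRANT EXECUTE ON FUNCTION export_flat_file(text) TO app_user;
```

Note COPY TO overwrites the target file each time; appending and per-row manipulation, as Oracle's UTL_FILE allows, would need a different mechanism such as an untrusted procedural language.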