Re: [GENERAL] pg_basebackup --xlog-method=stream

2017-04-21 Thread Ian Harding


> On Apr 21, 2017, at 3:29 PM, Michael Paquier <michael.paqu...@gmail.com> 
> wrote:
> 
>> On Sat, Apr 22, 2017 at 3:02 AM, Ian Harding <harding@gmail.com> wrote:
>> Am I misunderstanding how this works?  I have WAL archiving set up, so the
>> files are available, but I wanted them included in the backup.
> 
> Please note that if you have a WAL archive available, you may not even
> need to have --xlog-method=stream, which is useful to have fully
> self-contained backups. For the error regarding the missing WAL
> segment, it could happen if Postgres completes two checkpoints when
> taking the backup. One thing that you could use is a replication slot
> that gives the guarantee that segments are retained. Those have been
> integrated in pg_basebackup with 9.6.

Yeah. I really want to use replication slots, but I put the server into production 
without setting the appropriate config settings. I need to schedule some 
downtime for a restart. 
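
For reference, a minimal sketch of the slot-based approach described above,
assuming a 9.6 master with max_replication_slots > 0; the slot name, host, and
paths here are made up:

-- on the master, once
SELECT pg_create_physical_replication_slot('backup_slot');

# then take the backup; --slot keeps the needed WAL segments from being recycled
pg_basebackup -h master.example.com -U replication -D /backups/base \
    --xlog-method=stream --slot backup_slot -P -v

The slot retains WAL until the backup has streamed it, so the "requested WAL
segment ... has already been removed" failure can't occur; the trade-off is
that an abandoned slot will hold WAL indefinitely.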

> -- 
> Michael




[GENERAL] pg_basebackup --xlog-method=stream

2017-04-21 Thread Ian Harding
I used this command to set up a streaming replica and it worked perfectly.

I tried to run the same command to create an online backup of the master on
that replica, for backup purposes, and this time it did not work.

I thought that streaming the WAL would eliminate the need to keep lots of WAL
around on the master: since the segments are streamed concurrently with the
data files, they should already be in place in the backup.  However, I got

pg_basebackup: could not get transaction log end position from server:
ERROR:  requested WAL segment 0001496F00BF has already been
removed

The ...backup file in the archive logs folder of the master looks like this:

# cat 0001496F00BF.00578DF8.backup
START WAL LOCATION: 496F/BF578DF8 (file 0001496F00BF)
STOP WAL LOCATION: 4971/11FC6528 (file 000149710011)
CHECKPOINT LOCATION: 496F/C1ACE0D8
BACKUP METHOD: streamed
BACKUP FROM: master
START TIME: 2017-04-21 01:08:38 PDT
LABEL: full-20170421
STOP TIME: 2017-04-21 06:31:28 PDT

so the "...already been removed" message is referring to the very first
file.

The files were definitely being streamed.  After I restarted, on the master I
see this:

[root@db20 ~]# ps aux | grep stream
postgres 113855  0.4  0.0 274955352 2932 ?  Ss   10:53   0:00 postgres:
wal sender process postgres 192.168.4.21(51292) streaming 4972/7FBC22B8
root 115958  0.0  0.0 112652   956 pts/2S+   10:56   0:00 grep
--color=auto stream
postgres 142072  0.3  0.0 274955348 3160 ?  Ss   Apr20   3:47 postgres:
wal sender process postgres 192.168.4.21(51240) streaming 4972/7FBC22B8


Although the --verbose output from pg_basebackup mentions starting the
streaming, it doesn't mention the WAL file names the way it does for the
data directory files.

Am I misunderstanding how this works?  I have WAL archiving set up, so the
files are available, but I wanted them included in the backup.

Thanks!

- Ian


[GENERAL] Array Comparison

2014-12-05 Thread Ian Harding
I have a function that returns bigint[] and would like to be able to
compare a bigint to the result.

freeload= select fn_descendents('trip'::varchar,61::bigint);
  fn_descendents
---
 {935,815,689,569,446,325,205,191}
(1 row)
freeload= select 935::bigint in (select
fn_descendents('trip'::varchar,61::bigint));
ERROR:  operator does not exist: bigint = bigint[]
LINE 1: select 935::bigint in (select fn_descendents('trip'::varchar...
   ^
HINT:  No operator matches the given name and argument type(s). You might
need to add explicit type casts.
This is probably super obvious but I'm not getting it right now.

Thank you all in advance!

- Ian


Re: [GENERAL] Array Comparison

2014-12-05 Thread Ian Harding
On Fri, Dec 5, 2014 at 4:55 PM, Ian Harding harding@gmail.com wrote:

 I have a function that returns bigint[] and would like to be able to
 compare a bigint to the result.

 freeload= select fn_descendents('trip'::varchar,61::bigint);
   fn_descendents
 ---
  {935,815,689,569,446,325,205,191}
 (1 row)
 freeload= select 935::bigint in (select
 fn_descendents('trip'::varchar,61::bigint));
 ERROR:  operator does not exist: bigint = bigint[]
 LINE 1: select 935::bigint in (select fn_descendents('trip'::varchar...
^
 HINT:  No operator matches the given name and argument type(s). You might
 need to add explicit type casts.
 This is probably super obvious but I'm not getting it right now.

 Hmmm.. This works...

select array[935::bigint] <@ (select
fn_descendents('trip'::varchar,61::bigint));

Still, why?
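
For reference, two equivalent spellings that avoid the bigint = bigint[]
mismatch; a minimal sketch, assuming fn_descendents returns bigint[]:

-- ANY compares the scalar against each element of the array value
select 935::bigint = ANY (fn_descendents('trip'::varchar, 61::bigint));

-- containment: is the one-element array contained in the function's result?
select array[935::bigint] <@ fn_descendents('trip'::varchar, 61::bigint);

The IN (select ...) form fails because the subquery yields a single row whose
value is the whole array, so the comparison becomes bigint = bigint[].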


Fwd: [GENERAL] upgrading from V8.3.4 to V9.2.4

2013-04-19 Thread Ian Harding
Forgot to include the list!  Sorry!
If you have a non-standard socket file location pg_upgrade will not work
for this upgrade.


On Fri, Apr 5, 2013 at 9:37 AM, Leonardo Carneiro chesterma...@gmail.com wrote:

 Beside all notes, i recommend you to use pg_upgrade, to avoid a complete
 backup/restore in your transition.

 http://www.postgresql.org/docs/9.2/static/pgupgrade.html


 On Fri, Apr 5, 2013 at 1:30 PM, Kevin Grittner kgri...@ymail.com wrote:

 Robert Treat r...@xzilla.net wrote:

  Yeah, there was also some subtle breakage around keywords used
  as variable names when plpgsql was ported to use the core lexer.

 One more: from a Java client access to bytea columns will break if
 you don't also use the latest JDBC driver jar.

 --
 Kevin Grittner
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company







Fwd: [GENERAL] pg_upgrade

2013-03-03 Thread Ian Harding
Ack!  Sorry.  Bad list etiquette in so many ways...

-- Forwarded message --
From: Ian Harding harding@gmail.com
Date: Sun, Mar 3, 2013 at 8:26 PM
Subject: Re: [GENERAL] pg_upgrade
To: Bruce Momjian br...@momjian.us


It doesn't seem to though.  Here's what I get when I leave it alone (no
changes to postgresql.conf, no environment variables set)

postgres@server:~$ echo $PGPORT

postgres@server:~$ echo $PGOPTIONS

postgres@server:~$ /opt/postgres/9.2/bin/pg_upgrade -d
/var/lib/postgresql/8.4/main/ -D /srv/data/ -b /usr/lib/postgresql/8.4/bin/
-B /opt/postgres/9.2/bin/
Performing Consistency Checks
-
Checking current, bin, and data directories ok
Checking cluster versions   ok

connection to database failed: could not connect to server: No such file or
directory
Is the server running locally and accepting
connections on Unix domain socket /tmp/.s.PGSQL.50432?


could not connect to old postmaster started with the command:

/usr/lib/postgresql/8.4/bin/pg_ctl -w -l pg_upgrade_server.log -D
/var/lib/postgresql/8.4/main/ -o -p 50432 -c autovacuum=off -c
autovacuum_freeze_max_age=20 -c listen_addresses='' -c
unix_socket_permissions=0700 start
Failure, exiting

When I try to start postgresql with those options it doesn't appear to
start because -w makes it wait, where wait means try psql -l which
doesn't work because of the changed port I assume.

postgres@server:~$ psql -l
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket
/var/run/postgresql/.s.PGSQL.5432?

postgres@server:~$ psql -p 50432 -l
   List of databases
Name|  Owner   | Encoding  | Collation | Ctype |   Access
privileges
+--+---+---+---+---
 fordparts  | iharding | SQL_ASCII | C | C |
 gmparts| iharding | SQL_ASCII | C | C |
 leaguemgmt | iharding | SQL_ASCII | C | C |
 postgres   | postgres | SQL_ASCII | C | C |
 template0  | postgres | SQL_ASCII | C | C | =c/postgres
   :
postgres=CTc/postgres
 template1  | postgres | SQL_ASCII | C | C | =c/postgres
   :
postgres=CTc/postgres
(6 rows)

Although it actually did start, as in the case above.  However, the
pg_upgrade message is looking for /tmp/.s.PGSQL.50432 which isn't going to
work.

I'm puzzled.  I've tried PGPORT and different combinations of PGOPTIONS to
no beneficial effect.
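
For reference, a sketch of the usual way to line the sockets up, using the
-o/-O pass-through flags mentioned below; the /tmp value is an assumption and
the paths are the ones from the command above:

/opt/postgres/9.2/bin/pg_upgrade \
    -d /var/lib/postgresql/8.4/main/ -D /srv/data/ \
    -b /usr/lib/postgresql/8.4/bin/ -B /opt/postgres/9.2/bin/ \
    -o "-c unix_socket_directory=/tmp" \
    -O "-c unix_socket_directory=/tmp"

That starts both the old and the new postmaster with their socket in /tmp,
which is where pg_upgrade's libpq is looking (/tmp/.s.PGSQL.50432).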


On Sun, Mar 3, 2013 at 8:16 PM, Bruce Momjian br...@momjian.us wrote:

 On Sun, Mar  3, 2013 at 07:58:03PM -0800, Ian Harding wrote:
  Hi!  Sorry for the delay.  Just getting back to it.  Looks like postgres
 was
  compiled with a funky socket_directory so even if I change it in the
  postgresql.conf things break. I've tried tricking it with PGOPTIONS=
  socket_directory = '/var/run/postgresql' but that didn't help.

 You need the -o and -O options which were added to pg_upgrade in 9.2.
 Actually, I think we handle that in 9.2 automatically by checking the
 server settings, but I am not sure.

 Actually, if the client and server libraries were compiled with the same
 default, and you are using the default, it should work fine.

 ---


 
 
 
  On Fri, Feb 15, 2013 at 5:54 PM, Bruce Momjian br...@momjian.us wrote:
 
  On Fri, Feb 15, 2013 at 10:36:25AM -0800, Ian Harding wrote:
   Maybe this is it.  8.4 pg_ctl docs say it uses psql -l to see if
 it's
   finished when you use -w.  It also says
  
   PGPORT
  
   Default port for psql (used by the -w option).
  
   And since pg_upgrade uses a funky port, it might miss unless
 the
  PGPORT
   environment variable is set to match.
  
   I'll try that tonight.
 
  Yes, you are getting close to the answer.  ;-)  The problem is that
  Postgres isn't checking the right port number or socket location
  or something else.  This was all improved in Postgres 9.1:
 
  The wait mode is now significantly more robust.  It will not
 get
  confused by non-default postmaster port numbers, non-default
  Unix-domain socket locations, permission problems, or stale
  postmaster lock files.
 
  I am guessing there is something non-standard about your old cluster,
  and 8.4's pg_ctl -w can't handle it.  Tell me what is non-standard
 and I
  can help further.  Another idea is to make the old cluster use
 defaults
  for everything and do the upgrade.
 
  --
    Bruce Momjian  br...@momjian.us  http://momjian.us
EnterpriseDB http

Re: [GENERAL] pg_upgrade

2013-02-15 Thread Ian Harding
I don't think it would get any further...  It fails, and --retain says
"Retain sql and log files after success".  I can look at that log file and
all it indicates is failure to start the server.

Maybe I should rephrase the question:  pg_ctl returns failure, even though
it starts the server, when run with the -w flag.


On Thu, Feb 14, 2013 at 10:41 PM, Ian Lawrence Barwick barw...@gmail.com wrote:

 2013/2/15 Ian Harding harding@gmail.com
 
 
  On Feb 14, 2013, at 9:50 PM, Ian Lawrence Barwick barw...@gmail.com
 wrote:
 
  2013/2/15 Ian Harding harding@gmail.com
 
  When I run pg_upgrade, it tries to start the old cluster with the -w
 flag, which waits a while and declares failure, even though it starts the
 server.  If I start/stop without -w everything is great.
 
  Can I tell pg_upgrade not to use that flag, or is there a reason it is
 not working that I should look into?
 
  version
 
 --
   PostgreSQL 8.4.8 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real
 (Debian 4.4.5-8) 4.4.5, 64-bit
 
 
  Which PostgreSQL version is the old cluster, and which version is the
 new cluster? What options are you supplying to pg_upgrade, and what output
 are you getting?
 

  Old is 8.4, new is 9.2.  I am not supplying anything but the minimum options
 and --check succeeds.  My
  pg_ctl fails when run by hand with -w (although the database does start)
 so I know that's the issue.

 Maybe try running pg_upgrade with the  --retain option and check
 pg_upgrade_server.log for clues?


 Ian Barwick



Re: [GENERAL] pg_upgrade

2013-02-15 Thread Ian Harding
-
  pg_upgrade run on Fri Feb 15 05:09:34 2013
-

command: /usr/lib/postgresql/8.4/bin/pg_ctl -w -l pg_upgrade_server.log
-D /var/lib/postgresql/8.4/main/ -o -p 50432 -c autovacuum=off -c
autovacuum_freeze_max_age=20 -c listen_addresses='' -c
unix_socket_permissions=0700 start >> pg_upgrade_server.log 2>&1
waiting for server to start...could not start server


The funny thing is, the server starts instantly.


On Fri, Feb 15, 2013 at 9:19 AM, Jeff Janes jeff.ja...@gmail.com wrote:

 On Fri, Feb 15, 2013 at 4:54 AM, Ian Harding harding@gmail.com
 wrote:
  I don't think it would get any further...  It fails and --retain says
  Retain sql and log files after success  I can look at that log file and
  all it indicates is failure to start the server.
 
  Maybe I should rephrase the question:  pg_ctl returns failure, even
 though
  it starts the server, when run with the -w flag.

 How long does the server take to start?

 If it takes the server more than 1 minute to start, then what you
 describe is expected.

 Cheers,

 Jeff



Re: [GENERAL] pg_upgrade

2013-02-15 Thread Ian Harding
This is interesting, although I'm not sure it's relevant.  From pg_ctl
source.


write_stderr(_("\n%s: -w option is not supported when starting a pre-9.1 server\n"),


On Fri, Feb 15, 2013 at 9:45 AM, Ian Harding harding@gmail.com wrote:


 -
   pg_upgrade run on Fri Feb 15 05:09:34 2013
 -

 command: /usr/lib/postgresql/8.4/bin/pg_ctl -w -l
 pg_upgrade_server.log -D /var/lib/postgresql/8.4/main/ -o -p 50432 -c
 autovacuum=off -c autovacuum_freeze_max_age=20 -c
 listen_addresses='' -c unix_socket_permissions=0700 start >> pg_upgrade_server.log 2>&1
 waiting for server to start...could not start server


 The funny thing is, the server starts instantly.


 On Fri, Feb 15, 2013 at 9:19 AM, Jeff Janes jeff.ja...@gmail.com wrote:

 On Fri, Feb 15, 2013 at 4:54 AM, Ian Harding harding@gmail.com
 wrote:
  I don't think it would get any further...  It fails and --retain says
  Retain sql and log files after success  I can look at that log file
 and
  all it indicates is failure to start the server.
 
  Maybe I should rephrase the question:  pg_ctl returns failure, even
 though
  it starts the server, when run with the -w flag.

 How long does the server take to start?

 If it takes the server more than 1 minute to start, then what you
 describe is expected.

 Cheers,

 Jeff





Re: [GENERAL] pg_upgrade

2013-02-15 Thread Ian Harding
Maybe this is it.  8.4 pg_ctl docs say it uses psql -l to see if it's
finished when you use -w.  It also says

PGPORT

Default port for psql (used by the -w option).
http://www.postgresql.org/docs/8.4/static/app-psql.html
And since pg_upgrade uses a funky port, it might miss unless the PGPORT
environment variable is set to match.

I'll try that tonight.
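
Concretely, the thing being tried is just this (a sketch; the new-cluster
paths are placeholders):

export PGPORT=50432    # match the port pg_upgrade starts the old server on
pg_upgrade -b /usr/lib/postgresql/8.4/bin -B <new bindir> \
           -d /var/lib/postgresql/8.4/main -D <new datadir>

so that the psql -l probe run by 8.4's pg_ctl -w connects on the same port
the old postmaster was started with.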


On Fri, Feb 15, 2013 at 10:09 AM, Ian Harding harding@gmail.com wrote:

 This is interesting, although I'm not sure it's relevant.  From pg_ctl
 source.


 write_stderr(_("\n%s: -w option is not supported when starting a pre-9.1 server\n"),


  On Fri, Feb 15, 2013 at 9:45 AM, Ian Harding harding@gmail.com wrote:


 -
   pg_upgrade run on Fri Feb 15 05:09:34 2013
 -

 command: /usr/lib/postgresql/8.4/bin/pg_ctl -w -l
 pg_upgrade_server.log -D /var/lib/postgresql/8.4/main/ -o -p 50432 -c
 autovacuum=off -c autovacuum_freeze_max_age=20 -c
  listen_addresses='' -c unix_socket_permissions=0700 start >> pg_upgrade_server.log 2>&1
  waiting for server to start...could not start server


 The funny thing is, the server starts instantly.


 On Fri, Feb 15, 2013 at 9:19 AM, Jeff Janes jeff.ja...@gmail.com wrote:

 On Fri, Feb 15, 2013 at 4:54 AM, Ian Harding harding@gmail.com
 wrote:
  I don't think it would get any further...  It fails and --retain says
  Retain sql and log files after success  I can look at that log file
 and
  all it indicates is failure to start the server.
 
  Maybe I should rephrase the question:  pg_ctl returns failure, even
 though
  it starts the server, when run with the -w flag.

 How long does the server take to start?

 If it takes the server more than 1 minute to start, then what you
 describe is expected.

 Cheers,

 Jeff






[GENERAL] pg_upgrade

2013-02-14 Thread Ian Harding
When I run pg_upgrade, it tries to start the old cluster with the -w flag,
which waits a while and declares failure, even though it starts the
server.  If I start/stop without -w everything is great.

Can I tell pg_upgrade not to use that flag, or is there a reason it is not
working that I should look into?


version
--
 PostgreSQL 8.4.8 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real
(Debian 4.4.5-8) 4.4.5, 64-bit


Re: [GENERAL] pg_upgrade

2013-02-14 Thread Ian Harding
Old is 8.4, new is 9.2.  I am not supplying anything but the minimum options and 
--check succeeds.  My pg_ctl fails when run by hand with -w (although the 
database does start) so I know that's the issue.

On Feb 14, 2013, at 9:50 PM, Ian Lawrence Barwick barw...@gmail.com wrote:

 2013/2/15 Ian Harding harding@gmail.com
 When I run pg_upgrade, it tries to start the old cluster with the -w flag, 
 which waits a while and declares failure, even though it starts the server.  
 If I start/stop without -w everything is great.  
 
 Can I tell pg_upgrade not to use that flag, or is there a reason it is not 
 working that I should look into?
 
 version  
   
 --
  PostgreSQL 8.4.8 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real 
 (Debian 4.4.5-8) 4.4.5, 64-bit
 
 
 
 Which PostgreSQL version is the old cluster, and which version is the new 
 cluster? What options are you supplying to pg_upgrade, and what output are 
 you getting? 
 
 Ian Barwick


Re: [GENERAL] Hot Standby Not So Hot Anymore

2012-11-06 Thread Ian Harding
On Tue, Nov 6, 2012 at 5:45 AM, Kevin Grittner kgri...@mail.com wrote:

 Darren Duncan wrote:
  Ian Harding wrote:

  It says everything is happy as normal...
 
  2012-11-05 16:22:41.200 PST - :LOG: invalid record length at BA6/6DCBA48
 
  What does this log line mean? Is that happy as normal?

 Note that the message level is LOG, not WARNING or ERROR or anything
 more severe. It found the end of the valid WAL stream in WAL files it
 was provided, and it's telling you how it decided it was at the end.

 One thing I don't think you've really described is how you took your
 base backup. You did follow all the instructions, like using
 pg_start_backup() and pg_stop_backup(), excluding the pg_xlog
 directory contents, and excluding the postmaster.pid file, right? If
 you missed some details there you might see odd behavior.

 -Kevin

 Yeah, I forgot the pg_stop_backup.  I knew it was my mistake, I just
 didn't know what... Now I know!  That's what I get for doing it by hand.
 My script never forgets...
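
For the archives, the shape of the procedure Kevin is referring to; a rough
sketch only, with made-up paths and host name:

psql -c "SELECT pg_start_backup('base', true);"
rsync -a --exclude 'pg_xlog/*' --exclude 'postmaster.pid' \
    /var/lib/pgsql/data/ standby:/var/lib/pgsql/data/
psql -c "SELECT pg_stop_backup();"

Without the pg_stop_backup() the standby never reaches a consistent recovery
point, so it keeps refusing connections with "the database system is
starting up".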

 Thanks!

 Ian



[GENERAL] Hot Standby Not So Hot Anymore

2012-11-05 Thread Ian Harding
I had a 9.0.8 hot standby setup, one master, two slaves, working great.
Then, I tried to re-initialize by making a base backup, the way I've done
it many times before, but for some reason I can't get the standby to accept
connections.  I copied the postgresql.conf and recovery.conf out of the
way, cleaned the data directory and extracted the backup files, then
replaced the conf files.  Everything works perfectly, but I keep getting
:FATAL:  the database system is starting up

I know I'm an idiot and that I did something wrong but I can't find it.

[root@db03 data]# grep standby postgresql.conf
wal_level = hot_standby# minimal, archive, or hot_standby
hot_standby = on# on allows queries during recovery
max_standby_archive_delay = -1# max delay before canceling queries
max_standby_streaming_delay = -1# max delay before canceling queries

[root@db03 data]# grep standby recovery.conf
# Specifies whether to start the server as a standby. In streaming
replication,
standby_mode  = 'on'
# Specifies a connection string which is used for the standby server to
connect

[root@db03 data]# ps aux | grep stream
postgres 62127  0.1  0.0 34554356 3724 ?   Ss   16:22   0:16 postgres:
wal receiver process   streaming BA7/12B87818


[root@db03 data]# ps aux | grep startup
postgres 62122  0.5  0.9 34545900 1223708 ?Ss   16:22   1:03 postgres:
startup process   recovering 00010BA70012

Any suggestions appreciated!!!

- Ian


Re: [GENERAL] Hot Standby Not So Hot Anymore

2012-11-05 Thread Ian Harding
On Mon, Nov 5, 2012 at 7:46 PM, Lonni J Friedman netll...@gmail.com wrote:

 On Mon, Nov 5, 2012 at 7:40 PM, Ian Harding harding@gmail.com wrote:
  I had a 9.0.8 hot standby setup, one master, two slaves, working great.
  Then, I tried to re-initialize by making a base backup, the way I've
 done it
  many times before, but for some reason I can't get the standby to accept
  connections.  I copied the postgresql.conf and recorvery.conf out of the
  way, cleaned the data directory and extracted the backup files, then
  replaced the conf files.  Everything works perfectly, but I keep getting
  :FATAL:  the database system is starting up
 
  I know I'm an idiot and that I did something wrong but I can't find it.
 
  [root@db03 data]# grep standby postgresql.conf
  wal_level = hot_standby# minimal, archive, or hot_standby
  hot_standby = on# on allows queries during recovery
  max_standby_archive_delay = -1# max delay before canceling queries
  max_standby_streaming_delay = -1# max delay before canceling queries
 
  [root@db03 data]# grep standby recovery.conf
  # Specifies whether to start the server as a standby. In streaming
  replication,
  standby_mode  = 'on'
  # Specifies a connection string which is used for the standby server to
  connect
 
  [root@db03 data]# ps aux | grep stream
  postgres 62127  0.1  0.0 34554356 3724 ?   Ss   16:22   0:16
 postgres:
  wal receiver process   streaming BA7/12B87818
 
 
  [root@db03 data]# ps aux | grep startup
  postgres 62122  0.5  0.9 34545900 1223708 ?Ss   16:22   1:03
 postgres:
  startup process   recovering 00010BA70012
 
  Any suggestions appreciated!!!

 What's your log have to say?


It says everything is happy as normal...

2012-11-05 16:22:38.744 PST -  :LOG:  database system was shut down in
recovery at 2012-11-05 16:22:33 PST
2012-11-05 16:22:38.745 PST -  :LOG:  entering standby mode
2012-11-05 16:22:38.746 PST -  :LOG:  redo starts at BA5/F96F86A8
2012-11-05 16:22:38.762 PST - postgres :FATAL:  the database system is
starting up
2012-11-05 16:22:39.764 PST - postgres :FATAL:  the database system is
starting up
2012-11-05 16:22:40.766 PST - postgres :FATAL:  the database system is
starting up
2012-11-05 16:22:41.200 PST -  :LOG:  invalid record length at BA6/6DCBA48
2012-11-05 16:22:41.206 PST -  :LOG:  streaming replication successfully
connected to primary


This is after I stopped and restarted... after that just more failed
connection attempts.


Re: [GENERAL] Hot Standby Not So Hot Anymore

2012-11-05 Thread Ian Harding
On Mon, Nov 5, 2012 at 7:57 PM, Lonni J Friedman netll...@gmail.com wrote:

 On Mon, Nov 5, 2012 at 7:49 PM, Ian Harding harding@gmail.com wrote:
 
 
  On Mon, Nov 5, 2012 at 7:46 PM, Lonni J Friedman netll...@gmail.com
 wrote:
 
  On Mon, Nov 5, 2012 at 7:40 PM, Ian Harding harding@gmail.com
 wrote:
   I had a 9.0.8 hot standby setup, one master, two slaves, working
 great.
   Then, I tried to re-initialize by making a base backup, the way I've
   done it
   many times before, but for some reason I can't get the standby to
 accept
   connections.  I copied the postgresql.conf and recorvery.conf out of
 the
   way, cleaned the data directory and extracted the backup files, then
   replaced the conf files.  Everything works perfectly, but I keep
 getting
   :FATAL:  the database system is starting up
  
   I know I'm an idiot and that I did something wrong but I can't find
 it.
  
   [root@db03 data]# grep standby postgresql.conf
   wal_level = hot_standby# minimal, archive, or hot_standby
   hot_standby = on# on allows queries during recovery
   max_standby_archive_delay = -1# max delay before canceling queries
   max_standby_streaming_delay = -1# max delay before canceling
 queries
  
   [root@db03 data]# grep standby recovery.conf
   # Specifies whether to start the server as a standby. In streaming
   replication,
   standby_mode  = 'on'
   # Specifies a connection string which is used for the standby server
 to
   connect
  
   [root@db03 data]# ps aux | grep stream
   postgres 62127  0.1  0.0 34554356 3724 ?   Ss   16:22   0:16
   postgres:
   wal receiver process   streaming BA7/12B87818
  
  
   [root@db03 data]# ps aux | grep startup
   postgres 62122  0.5  0.9 34545900 1223708 ?Ss   16:22   1:03
   postgres:
   startup process   recovering 00010BA70012
  
   Any suggestions appreciated!!!
 
  What's your log have to say?
 
 
  It says everything is happy as normal...
 
  2012-11-05 16:22:38.744 PST -  :LOG:  database system was shut down in
  recovery at 2012-11-05 16:22:33 PST
  2012-11-05 16:22:38.745 PST -  :LOG:  entering standby mode
  2012-11-05 16:22:38.746 PST -  :LOG:  redo starts at BA5/F96F86A8
  2012-11-05 16:22:38.762 PST - postgres :FATAL:  the database system is
  starting up
  2012-11-05 16:22:39.764 PST - postgres :FATAL:  the database system is
  starting up
  2012-11-05 16:22:40.766 PST - postgres :FATAL:  the database system is
  starting up
  2012-11-05 16:22:41.200 PST -  :LOG:  invalid record length at
 BA6/6DCBA48
  2012-11-05 16:22:41.206 PST -  :LOG:  streaming replication successfully
  connected to primary
 
 
  This is after I stopped and restarted... after that just more failed
  connection attempts.

 Are you sure that its not still recovering?  In other words, when you
 look at the 'ps' output do the 'wal receiver process' and 'recovering'
 entries change?


Yeah, it's still recovering.  In normal operation it keeps right on
recovering, it's just that it used to accept read-only connections...


Re: [GENERAL] Hot Standby Not So Hot Anymore

2012-11-05 Thread Ian Harding
On Mon, Nov 5, 2012 at 8:15 PM, Lonni J Friedman netll...@gmail.com wrote:

 On Mon, Nov 5, 2012 at 8:13 PM, Ian Harding harding@gmail.com wrote:
 
 
 
  On Mon, Nov 5, 2012 at 7:57 PM, Lonni J Friedman netll...@gmail.com
 wrote:
 
  On Mon, Nov 5, 2012 at 7:49 PM, Ian Harding harding@gmail.com
 wrote:
  
  
   On Mon, Nov 5, 2012 at 7:46 PM, Lonni J Friedman netll...@gmail.com
   wrote:
  
   On Mon, Nov 5, 2012 at 7:40 PM, Ian Harding harding@gmail.com
   wrote:
I had a 9.0.8 hot standby setup, one master, two slaves, working
great.
Then, I tried to re-initialize by making a base backup, the way
 I've
done it
many times before, but for some reason I can't get the standby to
accept
connections.  I copied the postgresql.conf and recorvery.conf out
 of
the
way, cleaned the data directory and extracted the backup files,
 then
replaced the conf files.  Everything works perfectly, but I keep
getting
:FATAL:  the database system is starting up
   
I know I'm an idiot and that I did something wrong but I can't find
it.
   
[root@db03 data]# grep standby postgresql.conf
wal_level = hot_standby# minimal, archive, or
 hot_standby
hot_standby = on# on allows queries during recovery
max_standby_archive_delay = -1# max delay before canceling
queries
max_standby_streaming_delay = -1# max delay before canceling
queries
   
[root@db03 data]# grep standby recovery.conf
# Specifies whether to start the server as a standby. In streaming
replication,
standby_mode  = 'on'
# Specifies a connection string which is used for the standby
 server
to
connect
   
[root@db03 data]# ps aux | grep stream
postgres 62127  0.1  0.0 34554356 3724 ?   Ss   16:22   0:16
postgres:
wal receiver process   streaming BA7/12B87818
   
   
[root@db03 data]# ps aux | grep startup
postgres 62122  0.5  0.9 34545900 1223708 ?Ss   16:22   1:03
postgres:
startup process   recovering 00010BA70012
   
Any suggestions appreciated!!!
  
   What's your log have to say?
  
  
   It says everything is happy as normal...
  
   2012-11-05 16:22:38.744 PST -  :LOG:  database system was shut down in
   recovery at 2012-11-05 16:22:33 PST
   2012-11-05 16:22:38.745 PST -  :LOG:  entering standby mode
   2012-11-05 16:22:38.746 PST -  :LOG:  redo starts at BA5/F96F86A8
   2012-11-05 16:22:38.762 PST - postgres :FATAL:  the database system is
   starting up
   2012-11-05 16:22:39.764 PST - postgres :FATAL:  the database system is
   starting up
   2012-11-05 16:22:40.766 PST - postgres :FATAL:  the database system is
   starting up
   2012-11-05 16:22:41.200 PST -  :LOG:  invalid record length at
   BA6/6DCBA48
   2012-11-05 16:22:41.206 PST -  :LOG:  streaming replication
 successfully
   connected to primary
  
  
   This is after I stopped and restarted... after that just more failed
   connection attempts.
 
  Are you sure that its not still recovering?  In other words, when you
  look at the 'ps' output do the 'wal receiver process' and 'recovering'
  entries change?
 
 
  Yeah, it's still recovering.  In normal operation it keeps right on
  recovering, it's just that it used to accept read-only connections...

 Is the 2nd slave also experiencing this problem?  If not, is it
 further ahead than the bad one?


Everything looks fine (forgot to cc the list...)

[root@db0 ~]# ps aux | grep stream
postgres 45267  0.0  0.0 34546456 3036 ?   Ss   16:22   0:14 postgres:
wal sender process postgres 192.168.4.4(55925) streaming BA7/5FC1BFD8
postgres 54247  0.0  0.0 34546456 3036 ?   Ss   18:15   0:07 postgres:
wal sender process postgres 192.168.4.3(57482) streaming BA7/5FC1BFD8


[GENERAL] OpenSCG 9.0.10 package

2012-10-11 Thread Ian Harding
I know this is the wrong place, but I tried creating an account on
their site to contact them and it does not work.

The 9.0.10 package throws an error when I try to install it: it
has an unsatisfied dependency on libpython.  Since it brings its own
libraries, I'm not sure why that would be, but there it is.  Hoping
someone from openscg sees this...

rpm -ivh postgres-9.0.10-1.x86_64.openscg.rpm
warning: postgres-9.0.10-1.x86_64.openscg.rpm: Header V3 DSA/SHA1
Signature, key ID 38379aec: NOKEY
error: Failed dependencies:
  libpython2.4.so.1.0()(64bit) is needed by postgres90-9.0.10-1.x86_64


Also...

ls /usr/lib64/libpython2.6.so.1.0
/usr/lib64/libpython2.6.so.1.0


although I'm pretty sure that doesn't matter.

- Ian




Re: [GENERAL] OpenSCG 9.0.10 package

2012-10-11 Thread Ian Harding
On Thu, Oct 11, 2012 at 8:56 AM, Scott Mead sco...@openscg.com wrote:
 Hey Ian,


 On Thu, Oct 11, 2012 at 10:52 AM, Ian Harding harding@gmail.com wrote:

 I know this is the wrong place, but I tried creating an account on
 their site to contact them and it does not work.


 I'll take a look at that right away, sorry it happened!


 The 9.0.10 package throws an error when I try to install it that it
 has an unsatisfied dependency on libpython.  Since it brings its own
 libraries, I'm not sure why that would be, but there it is.  Hoping
 someone from openscg sees this...

 rpm -ivh postgres-9.0.10-1.x86_64.openscg.rpm
 warning: postgres-9.0.10-1.x86_64.openscg.rpm: Header V3 DSA/SHA1
 Signature, key ID 38379aec: NOKEY
 error: Failed dependencies:
   libpython2.4.so.1.0()(64bit) is needed by postgres90-9.0.10-1.x86_64


 Which linux distribution / version are you installing on ?  If you have a
 package manager that can help with dependency resolution (like yum), can you
 use that to either install the postgres RPM or just the lib python package?


The machine is fenced off from the internet and I'm pretty ignorant
about package managers, but I have been installing the other OpenSCG
packages with plain old rpm and it's been working.  I thought they
didn't have any external dependencies so I could get away with that.
Also, I do have the libpython package installed, it's just a bit newer
than what the package wants.

The Linux distro is called Scientific Linux and uname -a looks like this:

Linux db02 2.6.32-131.6.1.el6.x86_64 #1 SMP Tue Jul 12 17:14:50 CDT
2011 x86_64 x86_64 x86_64 GNU/Linux

Thanks!




[GENERAL] Locking or Something Else?

2012-05-20 Thread Ian Harding
I have a situation where an increase in volume of inserts into the
main transaction table causes a huge slowdown.  The table has lots of
indexes and foreign keys and a trigger.

Clearly, something is causing a resource contention issue, but here's
my main question:

I have log_lock_waits = on  and deadlock_timeout = 250ms.  Is there
any way I could have delays of tens of seconds caused by a write
conflict in a table accessed by a trigger that would not result in the
wait being logged?

Thanks!

Ian
 PostgreSQL 9.0.6 on x86_64-unknown-linux-gnu, compiled by GCC gcc
(GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit



Re: [GENERAL] Locking or Something Else?

2012-05-20 Thread Ian Harding
On Sunday, May 20, 2012, Martijn van Oosterhout wrote:

 On Sun, May 20, 2012 at 12:26:26AM -0700, Ian Harding wrote:
  I have a situation where an increase in volume of inserts into the
  main transaction table causes a huge slowdown.  The table has lots of
  indexes and foreign keys and a trigger.
 
  Clearly, something is causing a resource contention issue, but here's
  my main question:
 
  I have log_lock_waits = on  and deadlock_timeout = 250ms.  Is there
  any way I could have delays of tens of seconds caused by a write
  conflict in a table accessed by a trigger that would not result in the
  wait being logged?

 The most common cause for slowdowns during inserts is if you're not
 wrapping them into large transactions.  The deadlock timeout only
 tracks deadlocks, it won't trigger on normal lock waits.  There can be
 issues with respect to foreign keys, but I didn't think they are
 triggered on inserts.


I must have misunderstood log_lock_waits.  I thought it logged all lock
waits longer than deadlock_timeout.



 If you are wrapping into transactions, then it may be that your disk
 subsystem has slow fsyncs.

 Hope this helps,
 --
  Martijn van Oosterhout   klep...@svana.org
 http://svana.org/kleptog/
  He who writes carelessly confesses thereby at the very outset that he
 does
  not attach much importance to his own thoughts.
   -- Arthur Schopenhauer



Re: [GENERAL] Recovery from Archive files

2011-10-24 Thread Ian Harding
On Thu, Oct 20, 2011 at 11:42 AM, Raghavendra
raghavendra@enterprisedb.com wrote:
 On Thu, Oct 20, 2011 at 11:58 PM, John R Pierce pie...@hogranch.com wrote:

 On 10/20/11 2:33 AM, Raghavendra wrote:

 Am assuming you are having $PGDATA (data directory) and their WAL
 Archives.

 he said he does not have the base backup ($PGDATA), so this is pointless.

 Ahh.. Agreed.
 I mis-read, I though he is trying to recovery from current DATA directory
 and not with base backup which he doesn't have it.
 ---
 Regards,
 Raghavendra
 EnterpriseDB Corporation
 Blog: http://raghavt.blogspot.com/


I thought if you had every log file ever generated from initdb, you
could initdb, and start replaying logs?  It's an odd scenario, but I
thought I heard that could work... Come to think of it, I think that
was an Oracle thing I heard... but still.. if you had every log
file

- Ian



[GENERAL] Extraneous Files

2011-10-20 Thread Ian Harding
If someone happened to accidentally end up with a lot of files that
were NOT part of their database in the data/base/X directory, how
could they go about getting a reliable list of files they could safely
delete?  The files were there before the current incarnation of the
database, so have ctime before the good ones, but may be valid because
they were updated by rsync, but I don't know that mtime is reliable
for every file either, since some may have been unchanged from the
first time this someone ran rsync.

Not a super big deal, just wasting a lot of space...

Thanks!

Ian

postgresql 9.0.4 on linux



Re: [GENERAL] Extraneous Files

2011-10-20 Thread Ian Harding
Well, they are actually streaming replication slaves, and I boogered
up the rsync command, so there they are.  I diffed the directories
from the master to the slave, and think I will go ahead and delete all
the files that don't appear in both places and see what happens.
Worst case, I have to set them back up again.  No big deal...

Thanks!

- Ian

On Thu, Oct 20, 2011 at 12:24 PM, John R Pierce pie...@hogranch.com wrote:
 On 10/20/11 12:14 PM, Karsten Hilbert wrote:

 Well, Ian isn't talking about removing data. What he was
 asking (I believe) is how to remove from the data directory
 files which got nothing to do with the database in question
 (but probably look like database files because, say, someone
 copied*another*  database into the directory ;-)

 pg_dumpall
 stop postgres
 save the .conf files
 wipe $PGDATA
 initdb
 restore the .conf files
 restart postgres
 restore the dumpall
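
 A rough shell rendering of those steps, with hypothetical paths (a sketch,
 not a drop-in script):

 pg_dumpall > /backups/all.sql
 pg_ctl -D "$PGDATA" stop
 cp "$PGDATA"/*.conf /backups/conf/
 rm -rf "$PGDATA"/*
 initdb -D "$PGDATA"
 cp /backups/conf/*.conf "$PGDATA"/
 pg_ctl -D "$PGDATA" start
 psql -f /backups/all.sql postgres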

 --
 john r pierce                            N 37, W 122
 santa cruz ca                         mid-left coast






Re: [GENERAL] Streaming Replication and Firewall

2011-10-04 Thread Ian Harding
On Thu, Sep 29, 2011 at 5:32 PM, Fujii Masao masao.fu...@gmail.com wrote:
 On Fri, Sep 30, 2011 at 1:35 AM, Ian Harding harding@gmail.com wrote:
 I updated the firewall rules on a streaming replication standby server
 and thought nothing of it.  I later happened to notice on the primary
 that ps aux | grep stream didn't show streaming to that server
 anymore.  On the standby that command still showed the wal receiver
 patiently waiting for new data.

 I know I broke it, but would anything have eventually happened, or
 would the wal receiver keep patiently waiting as the world passed it
 by?

 I guess that walreceiver has been stuck because it could not detect
 the termination of connection for a while. By default, keepalive is
 enabled on the replication connection from the standby to the master,
 but how long it takes for walreceiver to notice the termination of
 connection depends on the kernel parameters related to keepalive.
 By default settings, it takes about two hours.

 Setting keepalives parameters in primary_conninfo would help to fix
 such a problem.
 http://developer.postgresql.org/pgdocs/postgres/libpq-connect.html

 Regards,

 --
 Fujii Masao
 NIPPON TELEGRAPH AND TELEPHONE CORPORATION
 NTT Open Source Software Center


Thank you!  That was exactly what I needed.
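
For reference, a sketch of what that looks like in recovery.conf on the
standby; the host, user, and timeout values are made up:

standby_mode = 'on'
primary_conninfo = 'host=192.168.4.1 port=5432 user=replication keepalives=1 keepalives_idle=60 keepalives_interval=10 keepalives_count=3'

With those libpq parameters the walreceiver notices a dead connection after
roughly keepalives_idle + keepalives_interval * keepalives_count seconds
instead of waiting on the kernel defaults.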



[GENERAL] Streaming Replication and Firewall

2011-09-29 Thread Ian Harding
I updated the firewall rules on a streaming replication standby server
and thought nothing of it.  I later happened to notice on the primary
that ps aux | grep stream didn't show streaming to that server
anymore.  On the standby that command still showed the wal receiver
patiently waiting for new data.

I know I broke it, but would anything have eventually happened, or
would the wal receiver keep patiently waiting as the world passed it
by?

I simply restarted the standby cluster and all is well.

Thanks!

- Ian



[GENERAL] Log Apply Delay

2011-09-16 Thread Ian Harding
Oracle has a configuration option for its version of hot standby
(DataGuard) that lets you specify a time based delay in applying logs.
 They get transferred right away, but changes in them are only applied
as they reach a certain age.  The idea is that if something horrible
happens on the master, you can keep it from propagating to one or more
of your standby databases (or keep from having to reinstate one in the
case of a failover)

Anyway, Is there any plan to add a knob like that to the streaming
replication in Postgres?

Hypothetically, if I had a standby database with max_standby_*_delay
set to -1, and there had been a long running query so log apply was an
hour behind, could I use that database for point in time recovery if
something went wrong on the primary?  Say something bad happened on
primary, and I rushed over to the standby (in this delayed situation)
and shut it down.  Could I then alter the recovery.conf and have it
come up read/write at a point in time?  Seems like I could

- Ian



Re: [GENERAL] Log Apply Delay

2011-09-16 Thread Ian Harding
On Fri, Sep 16, 2011 at 8:35 AM, hubert depesz lubaczewski
dep...@depesz.com wrote:
 On Fri, Sep 16, 2011 at 08:02:31AM -0700, Ian Harding wrote:
 Oracle has a configuration option for its version of hot standby
 (DataGuard) that lets you specify a time based delay in applying logs.
  They get transferred right away, but changes in them are only applied
 as they reach a certain age.  The idea is that if something horrible
 happens on the master, you can keep it from propagating to one or more
 of your standby databases (or keep from having to reinstate one in the
 case of a failover)

 Anyway, Is there any plan to add a knob like that to the streaming
 replication in Postgres?

 In streaming - no. But if you want delay, perhaps normal WAL-files based
 approach would be good enough? OmniPITR, for one, has a option to delay
 applying wal segments.


The file based approach is pretty close, unless the Bad Thing happens
right before a file gets transferred.  This is not a super important
feature to me but It's a nice security blanket and almost takes the
place of a PITR plan including big file transfers of the data
directory at regular intervals.

- Ian
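
(Note for anyone finding this later: such a knob did eventually land in core
as recovery_min_apply_delay in 9.4; a one-line sketch of its use:)

# recovery.conf on the standby, 9.4 and later
recovery_min_apply_delay = '1h'

WAL still streams over immediately; only the replay is held back.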



Re: [GENERAL] Steps to use pl/pgtcl

2011-09-16 Thread Ian Harding
If you install using a package manager, you might only have to install
the postgresql-pltcl (or similarly named) package, then do

createlang pltcl mydatabase

from the command line and you are ready to go.  If you build from
source, you have to worry about prerequisites yourself.
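
A minimal PL/Tcl function, just to show the shape of the language once
createlang has run; this is essentially the stock example from the docs:

CREATE FUNCTION tcl_max(integer, integer) RETURNS integer AS $$
    if {$1 > $2} {return $1}
    return $2
$$ LANGUAGE pltcl;

SELECT tcl_max(3, 7);   -- returns 7

Note that PL/Tcl is for writing server-side functions in Tcl; procedures
coming from another database still have to be rewritten by hand.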

On Thu, Sep 15, 2011 at 10:23 PM,  mamatha_kagathi_c...@dell.com wrote:
 Hi,



 If I am planning to use the procedures/packages present in an sql server in
 postgresql, can I do it by using pl/pgtcl feature of postgresql? If yes,
 What shall be my initial and major steps. I am new to postgresql , kindly
 help me.



 Thanks,

 Mamatha



[GENERAL] Index Usage in View with Aggregates

2009-09-17 Thread Ian Harding
I have never had this particular problem in PostgreSQL, it seems to
just know when queries can be flattened and indexes used.  I know
that takes tons of work.  Thank you for that.

Here's the Oracle question.

http://stackoverflow.com/questions/1439500/oracle-index-usage-in-view-with-aggregates

I suspect I have made some subtle mistake in changing that SQL into a
view that is breaking Oracle's ability to know that an index can be
used.  Any and all advice is appreciated.

- Ian



Re: [GENERAL] hash partitioning

2008-09-03 Thread Ian Harding
On Wed, Sep 3, 2008 at 10:36 AM, William Garrison [EMAIL PROTECTED] wrote:
 When I attended the PostgreSQL East conference, someone presented a way of
 doing this that they used for http://www.mailermailer.com/ and they did
 this:

 SET constraint_exclusion = on;
 EXPLAIN
 SELECT
 *
 FROM
 test
 WHERE
 id = 7
 AND id % 4 = 3

 Their business layer then generated the AND id % 4 = 3 part of the SQL.
 :(

 Does anyone know if Oracle or any other database can handle this?


Oracle has support for hash partitioning like so:

CREATE TABLE sales_hash
(salesman_id  NUMBER(5),
salesman_name VARCHAR2(30),
sales_amount  NUMBER(10),
week_no   NUMBER(2))
PARTITION BY HASH(salesman_id)
PARTITIONS 4
STORE IN (data1, data2, data3, data4);

There is no need to specify which partition to search or reference any
hash function in queries, it's all magic.
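
For comparison, a sketch of the inheritance-based layout the business-layer
trick above relies on (made-up table names).  Constraint exclusion can only
use the CHECK constraints when the id % 4 = ... predicate appears literally
in the query, which is why the presenter had the application generate it:

CREATE TABLE test   (id bigint, payload text);
CREATE TABLE test_3 (CHECK (id % 4 = 3)) INHERITS (test);
-- ... and test_0, test_1, test_2 likewise

SET constraint_exclusion = on;
SELECT * FROM test WHERE id = 7 AND id % 4 = 3;  -- should scan only test_3 (plus the empty parent)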


 David West wrote:

 Hi folks,



 I'm wondering why the postgres planner is not capable of determining the
 correct partition for a simple select for the following partitioning scheme,
 in which I'd like to automatically divide rows into four sub-tables, ie, a
 simple form of hash partitioning.



 Any ideas why this doesn't work, or a work around to make it work?  I would
 have expected the query plan below to only query the test_1 table.






Re: [GENERAL] How to tell if a trigger is disabled

2008-08-16 Thread Ian Harding
On Fri, Aug 15, 2008 at 5:59 PM, Tom Lane [EMAIL PROTECTED] wrote:
 Ian Harding [EMAIL PROTECTED] writes:
 Is there any way to tell if a trigger or triggers are disabled on a
 table?  I was updating some data a week or two ago and must have
 forgotten to re-enable the triggers.  Took me a little while to figure
 out.  \d tablename didn't tell me, nor did \d+ tablename.

 This is on 8.2.3.

 FWIW, 8.3's psql knows about showing this in \dt.

Yet another reason to upgrade.  Thanks!
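
For the record, on 8.2 the information is still in the catalog even though
psql doesn't display it; a sketch, with a made-up table name:

SELECT tgname, tgenabled
FROM pg_trigger
WHERE tgrelid = 'mytable'::regclass
  AND NOT tgisconstraint;   -- skip the triggers that implement FK constraints

tgenabled is false for triggers switched off with ALTER TABLE ... DISABLE TRIGGER.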

- Ian



[GENERAL] How to tell if a trigger is disabled

2008-08-15 Thread Ian Harding
Is there any way to tell if a trigger or triggers are disabled on a
table?  I was updating some data a week or two ago and must have
forgotten to re-enable the triggers.  Took me a little while to figure
out.  \d tablename didn't tell me, nor did \d+ tablename.

This is on 8.2.3.

Thanks,

- Ian



[GENERAL] Why does this work?

2007-06-15 Thread Ian Harding

I accidentally formatted a string for tsearch before trying to cast it
to a date, and it worked!

select 'June152007'::date
   date

2007-06-15
(1 row)

Is this a happy accident, or is it OK to count on it continuing to
work this way?

Thanks,

Ian



Re: [GENERAL] $libdir

2007-06-05 Thread Ian Harding

On 6/5/07, Martijn van Oosterhout [EMAIL PROTECTED] wrote:

On Mon, Jun 04, 2007 at 08:25:22PM -0700, Ian Harding wrote:
 I know this is a question that gets asked a zillion times and is
 almost always pilot error.

I don't know much about this but the complaint is this:

 The usual error about file does not exist relative to
 $libdir/tsearch2 gets generated.

And you have:

 In that directory are the libtsearch2.so... files along with lots of
 others, with 755 permissions, owned by root.

Either the library should be tsearch.so not libtsearch.so, or the
request should be for $libdir/libtsearch or something (perhaps a
directory is missing or something and it should be
tsearch/libtsearch.so).



I saw that discrepancy, but assumed there was a prepending of lib
somewhere in the search.  Turns out that is exactly the problem, and
changing the tsearch2.sql file to reflect

$libdir/libtsearch2

works.  I will try to figure out how this happened and let the NetBSD
package maintainer know.

Thanks!

- Ian



Re: NULLS and User Input WAS Re: [GENERAL] multimaster

2007-06-04 Thread Ian Harding

On 6/3/07, PFC [EMAIL PROTECTED] wrote:


 Yeah, it is awful ;^)  However the existing system is equally awful
 because there is no way to enter NULL!

Consider this form :

First name :Edgar
Middle name :   J.
Last name : Hoover

Now, if someone has no middle name, like John Smith, should we use NULL
or "" for the middle name?


"NMN" for No Middle Name.

http://www.google.com/search?hl=en&q=data+standards+no+middle+name+NMN&btnG=Search

The hazard with doing stuff like that is some joker could name their
kid Billy "NMN" Simpson.  Or this:

http://www.snopes.com/autos/law/noplate.asp

If the "None" identifier can't be guaranteed not to conflict with
data, the best thing is a boolean for None.


NULL usually means unknown or not applicable, so I believe we have to
use the empty string here. It makes sense to be able to concatenate the
three parts of the name, without having to put COALESCE() everywhere.



Null always means unknown.  N/A usually means Not Applicable.  I use
COALESCE once in a view and never again.


Now consider this form :

City:
State   :
Country :

If the user doesn't live in the US, State makes no sense, so it should
be NULL, not the empty string. There is no unnamed state. Also, if the
user does not enter his city name, this does not mean he lives in a city
whose name is "". So NULL should be used, too.



There are states in other countries, but I get your meaning.  But if
someone doesn't enter their middle name, that doesn't mean their
parents named them Billy "" Simpson either, right?

I think there is an argument for filling fields with empty strings
where they are _known_ not to exist but they are _applicable_  but I
don't do it.  I prefer the consistency of NULL for absent data versus
WHERE (mname = '' OR mname IS NULL).  Again, the user failing to enter
it when presented an opportunity does not meet the known not to
exist test for me.


It is very context-dependent.



Yeah, unless you are a stubborn old null zealot like me!

- Ian



[GENERAL] $libdir

2007-06-04 Thread Ian Harding

I know this is a question that gets asked a zillion times and is
almost always pilot error.

I installed PostgreSQL 8.2.x and the Tsearch2 package on NetBSD which
went fine, but I can't get the tsearch2.sql file to run.

The usual error about file does not exist relative to
$libdir/tsearch2 gets generated.

The docs say $libdir gets expanded to the result of

pg_config --pkglibdir

which in my case returns /usr/pkg/lib/postgresql.

In that directory are the libtsearch2.so... files along with lots of
others, with 755 permissions, owned by root.

Those permissions and ownership look OK to me, read and execute by the
whole world.

What else could be wrong?

- Ian



Re: [GENERAL] multimaster

2007-06-03 Thread Ian Harding

On 6/2/07, Alexander Staubo [EMAIL PROTECTED] wrote:

On 6/2/07, Martijn van Oosterhout [EMAIL PROTECTED] wrote:
 I don't know if it's a general problem, but I've been involved in a
 using rails and it appears to have it's own way of declaring the
 database. It presumes to handle referential integrity and uniqueness in
 the application code (!).

I think you've been misled. True, Rails/ActiveRecord does bear the
signs of having been designed for MySQL/MyISAM, which has neither
transactions nor referential integrity, but this does not mean that
Rails does not support these constructs, or that Rails users don't use
them. I value my data integrity, so all my relations have RI, unique
constraints, null constraints, etc. as in any well-designed schema.



Rails propaganda explicitly proposes not repeating yourself and
since the RI constraints are defined in the rails models, and they are
enforced in the framework with a graceful feedback mechanism for
users, they implicitly denigrate defining constraints in the DB as
Repeating yourself.

The creator of RoR explicitly discourages use of RI, rules, triggers,
etc in the database as unnecessary.  His disciples take this as
gospel.

I have used rails for a couple projects and I repeated myself
religiously in database constraints.  This revealed a couple
disturbing properties of rails that made me not want to use it,
notably, assuming an empty input box equates to an empty string vice
null for text data types, even if the table is defined as accepting
nulls.

- Ian



NULLS and User Input WAS Re: [GENERAL] multimaster

2007-06-03 Thread Ian Harding


An empty string is not null! Null means the value is missing, which is
clearly not the case here. I would say Rails is exactly in the right
here. When an HTML form is posted, empty input boxes are declared as
empty strings, which what the user entered. The problem is not with
Rails/ActiveRecord but with your form handling. If you want empty
boxes to become null, add some client-side JavaScript logic that sets
the disabled attribute on empty input elements before form is
submitted; this will prevent the client from sending the value.


The user was presented an _opportunity_ to enter data and did not.
The data is unknown.  I don't know how you can say ...The user
entered an empty string.  There is no empty string key on the
keyboard.

I have no idea why I got such hard pushback on this.  This is the
EXACT same behaviour other types use.  If a number field is presented
to the user and submitted with no value, NULL Is inserted.  Not zero,
which is the numeric equivalent of the empty string, but NULL.   Same
with date types.  Why not say they entered '1/1/1970' by default if
they entered nothing?

http://dev.rubyonrails.org/ticket/3301

- Ian



Alexander.





Re: NULLS and User Input WAS Re: [GENERAL] multimaster

2007-06-03 Thread Ian Harding

On 6/3/07, Alexander Staubo [EMAIL PROTECTED] wrote:

On 6/3/07, Ian Harding [EMAIL PROTECTED] wrote:
  An empty string is not null! Null means the value is missing, which is
  clearly not the case here. I would say Rails is exactly in the right
  here. When an HTML form is posted, empty input boxes are declared as
  empty strings, which what the user entered. The problem is not with
  Rails/ActiveRecord but with your form handling. If you want empty
  boxes to become null, add some client-side JavaScript logic that sets
  the disabled attribute on empty input elements before form is
  submitted; this will prevent the client from sending the value.

 The user was presented an _opportunity_ to enter data and did not.
 The data is unknown.  I don't know how you can say ...The user
 entered an empty string.  There is no empty string key on the
 keyboard.

Not at all. If the input box already contained a string, and the user
erased the contents of the input box, then the user has, in effect,
entered an empty string. Not a null. This is a UI layer issue, not a
database issue.

 I have no idea why I got such hard pushback on this.  This is the
 EXACT same behaviour other types use.  If a number field is presented
 to the user and submitted with no value, NULL Is inserted.  Not zero,
 which is the numeric equivalent of the empty string, but NULL.   Same
 with date types.  Why not say they entered '1/1/1970' by default if
 they entered nothing?

Ah, no. An empty string is not a valid number -- in fact, it is the
absence of a number; the same goes for dates. An empty string,
however, is a valid string, since a string is (in this context)
defined as a sequence of 0 to n characters.

Your patch is awful because it would mean there was no way to enter an
empty string in the database. A one-character string containing a
single space is not an empty string.


Yeah, it is awful ;^)  However the existing system is equally awful
because there is no way to enter NULL!

Properly implemented, the rails model would allow you to indicate
nullability and use null if no data is provided.

- Ian



Alexander.





Re: [GENERAL] collision in serial numbers after INSERT?

2007-06-01 Thread Ian Harding

On 31 May 07 09:46:47 -0700, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

Hello all -

I'm working on a site with PHP and Postgres, coming from a MySQL
background.

I was looking for an equivalent to the mysql_insert_id() function, and
a site recommended this:


Another option is INSERT...RETURNING if you can alter your code.  In
my environment it means lying to the system and telling it you are
doing a select when the SQL is actually an insert, but it all works.
If the insert fails, an error is returned; if it succeeds, the
value(s) you asked to have returned are in the result set.
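
For instance, a minimal sketch (the table and column names here are made
up just for illustration):

   -- hypothetical table, purely for illustration
   CREATE TABLE widget (widgetid serial PRIMARY KEY, name text);

   -- run this through the driver's "select" path and read the returned row
   INSERT INTO widget (name) VALUES ('sprocket') RETURNING widgetid;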

- Ian



Re: [GENERAL] Design Table Search Question

2007-05-31 Thread Ian Harding

tsearch indexes have to reside in the table where the data is, for the
automagical functions that come with it to work.  You can define a
view that joins the tables, then search each of the index columns for
the values you are looking for.

In my experience, the LIKE searches are fast for relatively small
datasets, but they are often implemented funny, where a search for
'ING' (the company) would return zillions of records with verbs (and
gerunds) in them.  tsearch is smarter than me.  It knows where to stem
words and rarely gives unexpected results.  Automotive nomenclature
will seldom be stemmed so you will get whole word searches which is
usually fine.

Good luck!

- Ian

On 30 May 2007 11:59:04 -0700, Gabriel Laet [EMAIL PROTECTED] wrote:

Hi,

I'm developing an application where basically I need to store cars.
Every car has a Make and Model association. Right now, I have three
tables: MAKE, MODEL (make_id) and CAR (model_id).

1) I'm not sure if I need or not to include make_id to the CAR
table. To me, it's clear to associate just the Model. Am I right?

2) I'm thinking in the best way to search content. I'll need to search
data across multiple-tables, and I'm not sure about the best way to do
that. Should I use TSearch2 or just a bunch of LIKEs and JOINs
statements?

I appreciate any help :)
Thanks!




Re: [GENERAL] Design Table Search Question

2007-05-31 Thread Ian Harding

On 5/31/07, Joshua D. Drake [EMAIL PROTECTED] wrote:

Ian Harding wrote:
 tsearch indexes have to reside in the table where the data is, for the
 automagical functions that come with it to work.  You can define a
 view that joins the tables, then search each of the index columns for
 the values you are looking for.

No they don't.



Hm?  Allow me to clarify.

To the best of my knowledge, and in my limited experience, the
tsearch2() trigger function provided with tsearch will only index
fields present in the table upon which it is created, in an index
column present in the table upon which it is created.  If I am
incorrect in this assessment I would be glad to be corrected.
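
As an illustration, the stock trigger is declared something like this
(table and column names are just a sketch borrowed for the example, not
anyone's real schema):

   -- idxFTI is a tsvector column in tblMessages itself; the stock
   -- tsearch2() trigger can only pull from columns of that same table
   CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE
      ON tblMessages FOR EACH ROW
      EXECUTE PROCEDURE tsearch2(idxFTI, title, body);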

Of course, custom triggers can be written to do anything from soup to nuts.

- Ian A. Harding


Joshua D. Drake






Re: [GENERAL] short ciruit logic of plpgsql in 8.2

2007-05-22 Thread Ian Harding

This keeps biting me.  In a trigger function for INSERT OR UPDATE if you try

IF TG_OP = 'UPDATE' AND OLD.foo = 'bar' THEN
...

it will blow up on inserts because there is no OLD.  I always expect
this to short circuit and am always disappointed.  Easy fix, of
course...

IF TG_OP = 'UPDATE' THEN
   IF OLD.foo = 'bar' THEN
...

In case this was what the OP was asking about in particular, it still
happens to me in 8.2

- Ian


On 5/17/07, Tom Lane [EMAIL PROTECTED] wrote:

Richard Broersma Jr [EMAIL PROTECTED] writes:
 Does anyone know if logic operations in plpgsql are short circuited in 8.2?

They are often optimized, but if you expect short circuiting following
exactly the rules of, say, C, you are likely to be disappointed.  See
the manual:

http://www.postgresql.org/docs/8.2/static/sql-expressions.html#SYNTAX-EXPRESS-EVAL

regards, tom lane



Re: [GENERAL] Postgres Printed Manuals

2007-05-17 Thread Ian Harding

Anyone can set up anything that is either not copyright protected,
freely distributable, or that they created themselves.  I can (and will)
modify these existing tomes to your revised format or any better ones that come
along!  I am pleased to see that a couple have sold already.

I also think it would be great to generate these records in lulu's
database (that's all they are until someone orders) with each version.
It could be automated since they use plain old http and allow ftp
uploads.  Someone would just have to do what you did, which is to
decide which pages go in which volumes, and run the script.

If PGDG could make a dollar or two that would be great.

On 5/17/07, Robert James [EMAIL PROTECTED] wrote:

Excellent!

Two questions:
1. Might it be better to include the index and TOC (in both volumes), and do
something like this:
http://www.postgresql.org/files/documentation/pdf/8.2/postgresql-8.2.1-US.pdf
Vol I: 1-621, 1674-end
Vol II: 1-38, 622-1395, 1672-end
(Dropped some other things, I think...)

2. I'm not familiar with Lulu - can anyone set it up? If
I would like to do one run of the manuals
that way, can I?  If not, do you care to
offer that break up as well?

+1 100% for having Postgres set this up and take a cut.


On 5/16/07, Ian Harding [EMAIL PROTECTED]  wrote:
 Well, I didn't do that, but as an exercise I split the manual in 740
 page chunks (maximum size at lulu), which misses the last couple
 hundred pages (old release notes and index, mostly) and put them on
 lulu.  It comes to $19.33 each volume for 2 volumes.

 I agree that it would be great to have them split up a bit so they
 could fit in normal sized books, but there it is, if you want a dead
 tree version.  pdftk is an open source pdf breaker-upper I used for
 this.

 http://www.lulu.com/content/863723
 http://www.lulu.com/content/864445

 - Ian







Re: [GENERAL] Postgres Printed Manuals

2007-05-16 Thread Ian Harding

On 5/14/07, Alvaro Herrera [EMAIL PROTECTED] wrote:

Rich Shepard wrote:
 On Mon, 14 May 2007, Bruce Momjian wrote:

 How much would it be to email the PDF manual to someone like Kinkos and
  get it printed?  Effectively, that might be the cheapest solution because
 it is print-on-demand.

 What I would suggest is, rather than hoping somebody has the bankroll for
 one LARGE document, somebody might take a close look at a PostgreSQL
 Documentation Project to break down those 1600 pages into three or four
 manuals that would be more reasonable to print one at a time.



Well, I didn't do that, but as an exercise I split the manual in 740
page chunks (maximum size at lulu), which misses the last couple
hundred pages (old release notes and index, mostly) and put them on
lulu.  It comes to $19.33 each volume for 2 volumes.

I agree that it would be great to have them split up a bit so they
could fit in normal sized books, but there it is, if you want a dead
tree version.  pdftk is an open source pdf breaker-upper I used for
this.

http://www.lulu.com/content/863723
http://www.lulu.com/content/864445

- Ian



Re: [GENERAL] PITR - Rewind to snapshot scheme

2007-04-22 Thread Ian Harding

Or use a SAVEPOINT.  I don't know about the impact on resources if you
leave it hanging around for a long time, but I use these for exactly
the scenario you are talking about.
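
Something along these lines:

   BEGIN;
   -- ... do the setup work ...
   SAVEPOINT before_experiment;
   -- ... try out the risky changes ...
   -- change your mind?  rewind to the savepoint instead of losing everything:
   ROLLBACK TO SAVEPOINT before_experiment;
   COMMIT;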

- Ian

On 4/16/07, Martin Langhoff [EMAIL PROTECTED] wrote:

On 4/17/07, Tom Lane [EMAIL PROTECTED] wrote:
 Seems overly complicated --- why don't you just shut down the postmaster
 and take a tarball archive of the PGDATA tree?  Then to revert, stop
 postmaster and untar.

Thanks for the tip!

cheers


martin



Re: Wikipedia on Postgres (was Re: [GENERAL] postgresql vs mysql)

2007-02-23 Thread Ian Harding

On 2/22/07, Alvaro Herrera [EMAIL PROTECTED] wrote:

Joshua D. Drake escribió:
 Andrej Ricnik-Bay wrote:
  On 2/23/07, Jim Nasby [EMAIL PROTECTED] wrote:
  That depends greatly on what you're doing with it. Generally, as soon
  as you start throwing a multi-user workload at it, MySQL stops
  scaling. http://tweakers.net recently did a study on that.
  I think I recall that wikipedia uses MySQL ... they get quite a few
  hits, too, I believe.

 And outages if you watch :)

Does this mean that we believe the Wikipedia would not suffer any
outages if it ran on Postgres?

How is the Postgres port of the Wikipedia doing this days anyway?  Is it
in a shape where one would consider it competitive?



I use mediawiki with postgres and it works fine, except for a bug
regarding timestamps.  That bug is due to mysqlism of the code.  Once
that's fixed, it will be ready as far as I'm concerned.

<editorial>There have been some tragic and embarrassing data losses by
some big sites that should know better because they used mysql without
the heroic measures that are needed to make it safe.  I don't care
that much that big sites use it, big sites start small and don't
always start with the best tools.  Once started, it's hard to switch
over to better tools.  If you used enough volkswagen beetles you could
move the same number of passengers on the same routes as Greyhound
does with buses, but that doesn't mean they are the right
tool.</editorial>


- Ian



Re: [GENERAL] Recursive Left Joins Causing Trouble in 8.2.3 RESOLVED (kind of)

2007-02-22 Thread Ian Harding

On 2/21/07, Tom Lane [EMAIL PROTECTED] wrote:

Ian Harding [EMAIL PROTECTED] writes:
 I had views that used syntax like
  WHERE datecol < current_date and (otherdatecol is null or otherdatecol
  > current_date)
 Suddenly, this is ungodly inefficient in 8.2.3.  It worked just fine in 8.1.3.

This complaint is pretty much content-free (especially with the oblique
implication that it's got something to do with left joins).  Please
provide a self-contained test case.

regards, tom lane



True.  Yesterday was my last day at that employer, today is my first
at my new one.  I needed to get things going and was just happy to be
able to turn enable_nestloop back on and have things work.

I will see if I can get the problem recreated without too much
extraneous junk, it was buried pretty deep in views referencing views
referencing views which took what may be a minor change in query
timing and planning complexity and blowing it out.  The odd query
structure (I thought) was a series of self left-joins but that may
have been smoke.

I was just surprised that basically my entire system came to a halt
with a minor version upgrade, and that nobody else had seen anything
similar.  I know I write careless SQL from time to time, but again, I
didn't think I was THAT unique!

- Ian



Re: [GENERAL] Recursive Left Joins Causing Trouble in 8.2.3 RESOLVED (kind of)

2007-02-21 Thread Ian Harding

OK, here's the deal

I had views that used syntax like

WHERE datecol < current_date and (otherdatecol is null or otherdatecol
> current_date)


Kind of a hillbilly version of BETWEEN but it assumes null is INFINITY
(except I use date, not timestamp)

Suddenly, this is ungodly inefficient in 8.2.3.  It worked just fine in 8.1.3.

What I had to do to fix it was to make a function that did the
comparison, lied and marked it immutable, and created functional
indexes using that function.  All is well now.  I would love to hear
of a more elegant solution.
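
Roughly what I mean, as a sketch (table and column names are invented,
and note the function only pretends to be immutable, which is the ugly part):

   -- lies about immutability so it can be used in an expression index
   CREATE OR REPLACE FUNCTION is_current(date, date) RETURNS boolean AS $$
       SELECT $1 < current_date AND ($2 IS NULL OR $2 > current_date);
   $$ LANGUAGE sql IMMUTABLE;

   CREATE INDEX sometable_current_idx
       ON sometable (is_current(datecol, otherdatecol));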

Just a heads-up in case you use that type of SQL and are upgrading to
8.2.X, it might be a problem.

- Ian

On 2/21/07, Ian Harding [EMAIL PROTECTED] wrote:

This whole thing strikes me funny since my application has run fine
for 6 years and now I have queries that simply take forever, and even
had one that threw an error (Tom fixed the bug very quickly)  Now I
have to run with enable_nestloop off globally which seems to negate
the use of indexes by and large, and makes things slower than they
were under 8.1.3, but at least it runs.





Re: [GENERAL] indexes across multiple tables

2007-02-20 Thread Ian Harding

On 2/18/07, Chris [EMAIL PROTECTED] wrote:

Toby Tremayne wrote:
 Hi all,

 I'm just experimenting with tsearch2 - I have it all working fine but I
 was wondering if there's a way to create indexes containing vector
 columns from multiple tables?  Or if not, how do people usually manage
 this kind of issue?

Postgres doesn't support multi-table indexes so there's no way tsearch2
would be able to.

What exactly are you trying to achieve?

--


Probably best to join the tables, then simply do the search in the
WHERE clause.  Something like

select p.partname, s.subassemblyname
from part p join subassembly s
on p.partid = s.partid
where p.partidx @@ to_tsquery('Some happy string')
 or s.subidx @@ to_tsquery('Some happy string')

That's how I do it, anyway...

- Ian



Re: [GENERAL] How would you handle updating an item and related stuff all at once?

2007-02-20 Thread Ian Harding

On 2/17/07, Rick Schumeyer [EMAIL PROTECTED] wrote:

This may be bad design on my part, but...



Not at all.  Very common scenario


I have three tables of interest...Account, Employee, and
AccountEmployeeRelation.  There is a many-to-many relationship between
accounts and employees.  The join table also contains a column
indicating what role the employee plays on this account.

My interface is a web app (I'm trying out Ruby on Rails).  On the edit
account screen I want to edit account attributes AND be able to
add/delete employees in one form.  The gui part seems to work.

BUT, when I update I'm not sure how to handle updating the
AccountEmployeeRelation table.  During the update, relations may have
been added or deleted, and existing relations may have been changed.  It
seems to me the easiest thing to do is delete all the relations for the
account and create all new ones with the data submitted from the form.
This seems wasteful, but the alternative would be a pain.  Or is this
really the best way?



I tried a bunch of cleverness where I checked for existence and
updated if required, etc but came back to just zapping them all and
inserting.  As long as it's done in a transaction and there are not
too many, it's fine.  It doesn't eat any more space and eats less
cycles than doing it the hard way.
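
In SQL terms it boils down to something like this (table and column names
are invented for the example):

   BEGIN;
   -- throw away the old relation rows for the account being edited...
   DELETE FROM account_employee_relation WHERE account_id = 42;
   -- ...and recreate them from whatever the form submitted
   INSERT INTO account_employee_relation (account_id, employee_id, role)
       VALUES (42, 7, 'manager');
   INSERT INTO account_employee_relation (account_id, employee_id, role)
       VALUES (42, 9, 'billing');
   COMMIT;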


Thanks for any advice.



You're welcome!


Completely off topic, (but not worth a separate post) I have been forced
to use a little bit of mysql lately...did you know that if you use
transaction and foreign key syntax with myisam tables, it does not
complain...it just silently ignores your requests for transactions and
foreign key checks.  Yikes!  I had incorrectly assumed I would get an
error message indicating that transactions are not supported.  Oh well.



Sorry about that.  Nuff said 8^/

- Ian



Re: [GENERAL] problems: slow queries with tsearch2

2007-02-16 Thread Ian Harding

On 2/16/07, Rafa Comino [EMAIL PROTECTED] wrote:



Hi from Spain,

I have a problem with TSearch2,

I have a table with more than a million registers (a table of books, for
example),

I made a tsearch2 index for one of my fields (the title of the books, for
example),

I make queries from that table, over the tsearch2 index. Then some of my
queries get too much slowly, and I don't know how to resolve it. For
example:
Every thing is ok, but when the query get a lot of results (about 1000
register) and I use ORDER BY over other field in the query (even if I use
LIMIT 10) the query gets slowly, 7 or 8 seconds. The thing is, that query
gets all the results and later order those results, and that makes my query
so slow. How can I resolve this.
When my queries have two conditions joined with AND, and each condition is
made over a different tsearch2 index, the execution get first x rows from
the first condition, and y rows from the second condition, later get the
rows from both result set. This makes my queries slow too. How could I
resolve this?.



(Sorry for my English, and thanks for reading me)


I use tsearch and have never found it to be slow, so I wonder if you
have indexes on the tsvector columns in your tables and if you have
vacuumed recently?  Something like this..

   CREATE INDEX idxFTI_idx ON tblMessages USING gist(idxFTI);
   VACUUM FULL ANALYZE;

Also, if you have other conditions in your query they will want to
have valid indexes to use as well.

If you post an EXPLAIN and/or EXPLAIN ANALYZE output more people will
probably chime in.

- Ian



Re: [GENERAL] requests / suggestions to help with backups

2007-02-16 Thread Ian Harding

On 2/16/07, Lou Duchez [EMAIL PROTECTED] wrote:

Like everyone else, I use pg_dump for backup purposes; I have a cron job
that runs a pg_dump whose output is then FTP'd elsewhere. Two things
that would make my life easier:

1) grant select on database ... or, hypothetically, grant select on
cluster. The goal would be to create a read-only PostgreSQL user, one
who can read the contents of an entire database (or even the entire
cluster) but make no changes.  Currently, to do my cron job, I have to
specify a trusted user, otherwise PostgreSQL will ask for a password;


A .pgpass file can fix this... I don't know if that gets you any
closer to your objective.
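
The format is host:port:database:username:password, one line per
connection, and ~/.pgpass should be chmod 600 or it may be ignored.
A made-up example line:

   dbhost:5432:mydb:backupuser:s3cret

With that in place the cron'd pg_dump can connect without prompting.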

- Ian



Re: [GENERAL] Adjacency List or Nested Sets to model file system hierarchy?

2007-02-12 Thread Ian Harding

On 2/12/07, Bill Moseley [EMAIL PROTECTED] wrote:

I'm looking for a little guidance in representing a file system --
well just the file and directory structure of a file system.

Often articles on representing a hierarchy discuss the advantages of
using Nested Sets (or nested intervals) it seems.  I'm not clear how
well they apply to a file system-like hierarchy, though.

The examples (and my limited understanding) of Nested Sets have the
leaf nodes at the end of the branches, where in a file system a node
can have both leaf nodes (files) and branches (directories).

Also, the Nested Sets seem to solve problems I don't have -- such as
finding all descendants of a given node.

My simple requirements are:

-- Quickly be able to lookup content by a full path name

-- Provide directory views that shows parent, list of contents
   including any sub-directories.

-- To be able to easily move branches.

It will not be a large collection of files in the tree, so that's
not an issue.



You don't mention the ltree contrib module, have you looked at it?  It
can easily meet your requirements without having to reinvent anything.
It may be what you're referring to as Nested Sets, I don't know.  I
use it and like it a lot.
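
A rough sketch of what it looks like (ltree ships in contrib; the names
and labels below are invented, and ltree labels are dot-separated words
rather than slash-separated paths):

   CREATE TABLE node (
       nodeid  serial PRIMARY KEY,
       name    text NOT NULL,
       path    ltree NOT NULL    -- e.g. 'home.bill.docs'
   );
   CREATE INDEX node_path_gist ON node USING gist (path);

   -- everything at or below home.bill.docs
   SELECT * FROM node WHERE path <@ 'home.bill.docs'::ltree;

   -- move the whole bill branch under an archive node
   UPDATE node
      SET path = 'home.archive'::ltree || subpath(path, 1)
    WHERE path <@ 'home.bill'::ltree;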

-Ian



Re: [GENERAL] Adjacency List or Nested Sets to model file system hierarchy?

2007-02-12 Thread Ian Harding

On 2/12/07, Bill Moseley [EMAIL PROTECTED] wrote:

On Mon, Feb 12, 2007 at 10:53:53AM -0500, Merlin Moncure wrote:
 On 2/12/07, Richard Broersma Jr [EMAIL PROTECTED] wrote:
  Can you describe in a little more detail what you mean by
  'Adjacency List'?
 
 Adjacency list is the term used in the Celko book to refer to a table that
 is recursively related to itself.
 
 create table foo (
   id        integer  primary key,
   parentid  integer  references foo (id),
   name      varchar  not null
 );

 Above approach is ok but I can think of at least two other methods
 that are probably better.  First approach is to just store the whole
 path in every record for each file.  Yes, this is a pain for updates
 but searching and children discovery is simple.  in that case I would
 define pkey as (path, file).

Yes, that's what I meant by using a de-normalized table -- including
the full path in the row.  That would provide fast access to each row
via a path name.  And the parent id makes it easy to find all children
of a given node and, well, the parent too.

Separating the path and file as you suggest would make finding all
files at a given directory level simple, too.

But, I'm not thrilled about the possibility of the hard-coded path not
matching the path up the tree to the root node, though.  Which, of
course, is why I posted.  But, I'll give it a test.


The way I do it is to update the path to the parent's path, plus my id
on insert or update with a before trigger.  I have an after trigger
that simply updates any child record's parent_id, which forces an
update of the path, which forces update of their children, and so on.

You can, of course, cause a recursion problem if you're not careful...
Best to have a check for that too.
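
In outline, the before trigger looks something like this (a plpgsql sketch
with invented names; the companion after trigger just touches the children
so they run back through the same logic):

   CREATE OR REPLACE FUNCTION set_node_path() RETURNS trigger AS $$
   BEGIN
       IF NEW.parent_id IS NULL THEN
           NEW.path := NEW.id::text;
       ELSE
           SELECT path || '/' || NEW.id::text INTO NEW.path
             FROM node WHERE id = NEW.parent_id;
       END IF;
       RETURN NEW;
   END;
   $$ LANGUAGE plpgsql;

   CREATE TRIGGER node_path_biu BEFORE INSERT OR UPDATE ON node
       FOR EACH ROW EXECUTE PROCEDURE set_node_path();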

- Ian


Thanks,




--
Bill Moseley
[EMAIL PROTECTED]




[GENERAL] A Picture is Worth

2007-01-19 Thread Ian Harding

A thousand words.  I like the brevity of this post:

http://blog.page2rss.com/2007/01/postgresql-vs-mysql-performance.html

Can't really argue with it.

- Ian



Re: [GENERAL] [1/2 OFF] Varlena.com inaccessible from .br (Blocked?)

2007-01-17 Thread Ian Harding

Brazil has been trying to get its ISPs to block access to a certain
video.  I wonder if too wide a net was cast in that effort.

http://www.slate.com/id/2157399/?nav=navoa

On 1/17/07, Jorge Godoy [EMAIL PROTECTED] wrote:

Bruno Wolff III [EMAIL PROTECTED] writes:

 Be sure you aren't blocking the return traffic. I did that once using a /8

As I said, just if I did that on several Brazilian networks.  There's no
blocking on any of these networks.

 You might also check to see if the ip address on your end was recently
 allocated. They might block unallocated IP space (which is not something I
 recommend) and be behind on updating the list of unallocated IP addresses.

None of these networks are recent.

 wolff.to is on Speakeasy's network and you can try pinging it to see if it

That website opens fine.

 is a general block. If you use traceroute, tell it to use icmp, not udp as
 wolff.to will drop udp packets that aren't destined for a public service
 or part of an existing conversation.

So, I believe that there might be something on Varlena's specific
configuration.


--
Jorge Godoy  [EMAIL PROTECTED]



Re: [GENERAL] is there a tracking trace tool like the SQL Analizer in MS sqlserver.?

2007-01-08 Thread Ian Harding

There is no GUI tool that I know of, but there is EXPLAIN which gives
the same information.

- Ian

On 1/8/07, guillermo arias [EMAIL PROTECTED] wrote:

is there a tracking trace tool in postgre? like the SQL Analizer in MS
sqlserver.

I have downloaded the PGAdmin III and i have not found any tool like this.

Thanks



Get your FREE, LinuxWaves.com Email Now! -- http://www.LinuxWaves.com
Join Linux Discussions! -- http://Community.LinuxWaves.com




Re: [GENERAL] is there a tracking trace tool like the SQL Analizer in MS sqlserver.?

2007-01-08 Thread Ian Harding

I thought that was called SQL Profiler.

http://msdn2.microsoft.com/en-us/library/ms181091.aspx

Query Analyzer is EXPLAIN with a GUI.

http://msdn2.microsoft.com/en-gb/library/aa178423(SQL.80).aspx

Anyway, I have not heard of such a thing for PostgreSQL, although I am
sure the basic information you want could be obtained from logging
queries and timing.

- Ian

On 1/8/07, Jeffrey Melloy [EMAIL PROTECTED] wrote:

Not exactly.  SQL Analyzer also includes live monitoring of whatever queries
are coming into the database.  You can achieve something similar by enabling
query logging in the settings.


 On 1/8/07, Ian Harding [EMAIL PROTECTED] wrote:

 There is no GUI tool that I know of, but there is EXPLAIN which gives
 the same information.

 - Ian

 On 1/8/07, guillermo arias [EMAIL PROTECTED]  wrote:
  is there a tracking trace tool in postgre? like the SQL Analizer in MS
  sqlserver.
 
  I have downloaded the PGAdmin III and i have not found any tool like
this.
 
  Thanks
 
 
  
  Get your FREE, LinuxWaves.com Email Now! -- http://www.LinuxWaves.com
  Join Linux Discussions! -- http://Community.LinuxWaves.com



Re: [GENERAL] is there a tracking trace tool like the SQL Analizer in MS sqlserver.?

2007-01-08 Thread Ian Harding

On 1/8/07, Joshua D. Drake [EMAIL PROTECTED] wrote:

On Mon, 2007-01-08 at 12:56 -0600, Jeffrey Melloy wrote:


 On 1/8/07, Jeffrey Melloy [EMAIL PROTECTED] wrote:
 Not exactly.  SQL Analyzer also includes live monitoring of
 whatever queries are coming into the database.  You can
 achieve something similar by enabling query logging in the
 settings.

pg_stat_activity?



The SQL Profiler tool also keeps data that meet your criteria for
later analysis as well.  It is very good.

- Ian



Re: [GENERAL] Subcribing to this list, what's the secret?

2006-12-16 Thread Ian Harding

If you have a dot in your gmail username, take it out.  Gmail ignores
it and validation scripts often puke on it.

Then use that email as your reply to, not some nonexistent carp.

- Ian

On 12/13/06, wheel [EMAIL PROTECTED] wrote:

I seem to have a natural knack for hitting the ruts around here, which
is exciting. I've tried to subscribe to the list using both yahoo and
gmail email accounts and the webform always reports The email address
you entered does not appear to be valid. I would like to use either
yahoo or gmail because I am very interested in keeping spam down on my
main email accounts (see Jan 7 2006 thread 'E-mail harvesting on PG
lists?'). I've been posting via a newsreader but it seems that there is
some strange pattern with messages not making it on the the list using
that pipe...all of my posts show on the google archive of this list but
a good number (25%?) seem to be missing from the mailing list archives
that are found on the postgres site. Example, it seems that both of my
responses to Scott Marlowe never made it to the list, but are somehow
listed in google, and on the newsgroup. Scott was PO'd that I didn't
respond to his first post, that got me to check into why he didn't.

I posted about this issue, the failure of the website subscription form
to accept (at least) an @yahoo.com email acct, and no one replied
(though that was one of the posts that didn't seem to make it onto the
mailing list, but is on google & newsgroup). Can someone tell me what
the secret is, or check the webform at postgresql.org and make sure it's
in order?



Re: [GENERAL] Speed of postgres compared to ms sql, is this

2006-12-04 Thread Ian Harding

On 11/13/06, Scott Marlowe [EMAIL PROTECTED] wrote:

On Mon, 2006-11-13 at 15:36, novnov wrote:
 OK, thanks everyone, I gather from the responses that postgres performance
 won't be an issue for me then. If MS SQL Server and Postgres are in the same
 ballpark performance-wise, which seems to be the upshot of your comments, no
 problem. I'd only have worried if there was something like the major
 difference between the two with more complicated queries. I am puzzled by
 the commentor's post to the article, it could be FUD of course but didn't
 particularly sound like the commentor was anti pgsql.

I will say this. Most other databases are more forgiving of bad
queries.  Make a bad query and postgresql is more likely to punish you
for it.


Amen.  When I migrated from MSSQL to PostgreSQL (4 years ago), I found
out exactly how seriously MS SQL coddles you when it comes to its Oh,
I know what you really meant query planning.  I committed some sins
MS SQL covered up nicely and PostgreSQL flat out crawled when
presented to it.

However, I suspect that if I tried those bad queries with a current
version of PostgreSQL they would run much better, given all the work
that has been put  in over the last few years.

- Ian



Re: [GENERAL] Why overlaps is not working

2006-11-13 Thread Ian Harding


If the first period's end date and the second period's start date are the
same, I need the expression to return true in that case.
Is it possible to implement this using the OVERLAPS operator?



I think the best workaround is a function of some kind in whichever
language you choose.  I think you could actually clobber overlaps()
but I chose to give mine a different name.  In my world, all date
ranges have a start, but can have an indefinite end (null).

CREATE OR REPLACE FUNCTION isoverlap (date,date,date,date) RETURNS
boolean LANGUAGE pltcl  AS '

set d1 [clock scan $1]
set d3 [clock scan $3]

if {[string length $2] == 0} {
   set d2 0
} else {
   set d2 [clock scan $2]
}
if {[string length $4] == 0} {
   set d4 0
} else {
   set d4 [clock scan $4]
}

if {($d2 >= $d3 && ($d1 <= $d4 || !$d4)) ||
   ($d1 <= $d4 && ($d2 >= $d3 || !$d2)) ||
   (!$d2 && !$d4)} {

   return true
} else {
   return false
}

' ;



Re: [GENERAL] Is there anyway to...

2006-11-03 Thread Ian Harding

On 11/2/06, louis gonzales [EMAIL PROTECTED] wrote:

Hey Brian,
Yeah I had considered this, using cron, I just feel like that is too dirty.

Actually I didn't see Andreas' post, can someone forward that?

I'm running this application on Solaris 9.  Ultimately what I want to
know is, is there something that is internal to postgresql that can be
used that doesn't need external action, to make it do some task?



Your original scenario had an external action, instructor login.   The
daily increment is the days since account creation.

Since we don't have triggers based on login, you maybe could tickle a
function from your front-end that would simply look for students with
account age > 30 and not paid whenever the instructor logs in.
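
That query could be as simple as something like this (table and column
names are guesses about your schema):

   -- run when the instructor logs in (or from cron/at)
   SELECT studentid
     FROM student_account
    WHERE NOT paid
      AND created < current_date - 30;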

Short of that, an OS based scheduler (AT, cron, etc.)  is your only
choice I can think of.



Re: [GENERAL] why not kill -9 postmaster

2006-10-20 Thread Ian Harding

On 10/20/06, Tom Lane [EMAIL PROTECTED] wrote:

Alvaro Herrera [EMAIL PROTECTED] writes:
 Shane Ambler wrote:
 The one thing worse than kill -9 the postmaster is pulling the power
 cord out of the server. Which is what makes UPS's so good.

 If your server is changing the data file on disk and you pull the power
 cord, what chance do you expect of reading that data file again?

 1.  That's what we have WAL for.  The only thing that can really kill
 you is the use of non-battery-backed write cache.

The important distinction here is will you lose data vs can you start
a new server without tedious manual intervention (ipcrm etc).  kill -9
won't lose data, but you may have to clean up after it.  And, as Andreas
already noted, some people have been seen to mess up the manual
intervention part badly enough to cause data loss by themselves.
Personally I think the TIP that's really needed is never remove
postmaster.pid by hand.



When the machine crashes, don't you have to remove the pid file by
hand to get Postgres to start?  I seem to remember having to do that.

- Ian Never-Say-Never Harding



Re: [GENERAL] LISTEN considered dangerous

2006-08-03 Thread Ian Harding

On 8/2/06, Flemming Frandsen [EMAIL PROTECTED] wrote:

Ian Harding wrote:
 NOTIFY interacts with SQL transactions in some important ways.
 Firstly, if a NOTIFY is executed inside a transaction, the notify
 events are not delivered until and unless the transaction is
 committed. This is appropriate, since if the transaction is aborted,
 all the commands within it have had no effect, including NOTIFY. But
 it can be disconcerting if one is expecting the notification events to
 be delivered immediately.

Yes, that's very nice, but it doesn't have *anything* to do with what I
posted about.



Quite true, but it does indicate, to me at least, the fact that this
is a SQL command and doesn't take effect until committed.


From what I read in the docs, I would expect the NOTIFY signals to be
like phone calls: if your phone's not plugged in (LISTEN not
committed) you miss the call.  That's the way it works apparently.


I'm bothered by listen listening from the end of the transaction in
stead of the start of the transaction.



What seems to be needed is an answering service that will record your
NOTIFY events, in case you decide to plug in the phone and retrieve
them.

- Ian



Re: [GENERAL] Where do Tcl questions go?

2006-08-02 Thread Ian Harding

On 8/1/06, Carlo Stonebanks [EMAIL PROTECTED] wrote:





I have been trying to figure out where to put my plTcl questions, and where
the people most knowledgeable about that topic may be – either on these mailing
lists or elsewhere.



TCL is dead.  Long live TCL.

PLTCL was taken out of the core distribution, and is not in contrib.
You have to download it from here

http://gborg.postgresql.org/project/pgtclng/download/download.php

There is documentation available as well as Win32 binaries.

This list is as good a place as any to ask!

- Ian



Re: [GENERAL] LISTEN considered dangerous

2006-08-01 Thread Ian Harding

On 8/1/06, Flemming Frandsen [EMAIL PROTECTED] wrote:

I have an application that does aggresive caching of data pulled from
the database, it even keeps the objects cached between transactions.

Normally this works very well and when the cache is warmed up about 90%
of the database time is saved.

However that leaves the problem of how to notice that my cached objects
have turned stale, luckily pg has the listen/notify feature so I have
triggers on all tables that do a notify, as you do.

However that just doesn't work, because listen is broken, allow me to
illustrate, here A and B are two clients:

A: BEGIN
A: SELECT * FROM foo and cache the result.
A: LISTEN foochange
B: BEGIN
B: update foo
B: NOTIFY foochange
B: COMMIT
A: COMMIT

From the docs:


NOTIFY interacts with SQL transactions in some important ways.
Firstly, if a NOTIFY is executed inside a transaction, the notify
events are not delivered until and unless the transaction is
committed. This is appropriate, since if the transaction is aborted,
all the commands within it have had no effect, including NOTIFY. But
it can be disconcerting if one is expecting the notification events to
be delivered immediately.



Re: [GENERAL] Best Procedural Language?

2006-08-01 Thread Ian Harding

On 8/1/06, Christopher Browne [EMAIL PROTECTED] wrote:

Martha Stewart called it a Good Thing when Carlo Stonebanks [EMAIL PROTECTED] wrote:
 I am interested in finding out a non-religious answer to which
 procedural language has the richest and most robust implementation
 for Postgres. C is at the bottom of my list because of how much
 damage runaway code can cause. I also would like a solution which is
 platorm-independent; we develop on Windows but may deploy on Linux.






 - Doing funky string munging using the SQL functions available in
   pl/pgsql is likely to be painful;

 - Doing a lot of DB manipulation in pl/Perl or pl/Tcl or such
   requires having an extra level of function manipulations that
   won't be as natural as straight pl/pgsql.


Another important distinguishing characteristic is whether it supports
set returning functions.  I think only plpgsql does right now.



Re: [GENERAL] What about pgtclsh

2006-07-25 Thread Ian Harding

It's here now.  I think it used to be in the main distro, but has been
moved out recently.

http://pgfoundry.org/projects/pgtcl/

On 24 Jul 2006 03:07:59 -0700, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:

Hello everybody,

I must replace an old server with a new one, and I decided to upgrade
postgresql 7.1.3 to 8.1.4.
But a lot of web scripts use a program named pgtclsh. I tried to find some
information about it on the web, but all the information I found
was for versions older than 8.

Does someone have an idea how to install pgtclsh for PostgreSQL 8.1.4?

Thx a lot.

Maxime




Re: [GENERAL] pgsql user change to postgres

2006-07-04 Thread Ian Harding

I wasn't trying to fight it.  It's just that the port disagrees with the
PG documentation and apparently most other ports.  The maintainer said
it was for backward compatibility but it's apparently only a FreeBSD
phenomenon :-)


It may be a *BSD phenomenon, since I know it applies in NetBSD as
well.  I think they all borrow from each other's ports collections.



Re: [GENERAL] Best open source tool for database design / ERDs?

2006-05-30 Thread Ian Harding

postgresql_autodoc and dia.

On 28 May 2006 05:19:04 -0700, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

What open source tool do people here like for creating ER diagrams?




Re: [GENERAL] how can i view deleted records?

2006-05-02 Thread Ian Harding

There used to be a knob that would allow you to temporarily see
deleted tuples.  Don't know if it's still there.  Sounded kinda
dangerous.

http://archives.postgresql.org/pgsql-patches/2005-02/msg00126.php

Also, you could start (now) using PITR so you could simply restore to
the moment before the records you are interested in were deleted.

Good luck,

- Ian

On 4/29/06, Dan Black [EMAIL PROTECTED] wrote:

Hello,  everybody!
How  can  I  view  deleted  records  in  table?

--
Verba volent, scripta manent
Dan Black



Re: [GENERAL] Categories and Sub Categories (Nested)

2006-04-19 Thread Ian Harding
You should look up the contrib module ltree which is made for this
sort of thing.  If you reinvent the wheel like this, you will be stuck
with 2 levels.  With ltree you can have as many as you need and add
more at any time.  It lets you query for ancestors and descendants of
any item at any level.

I highly recommend it.

On 4/19/06, Martin Kuria [EMAIL PROTECTED] wrote:
 Hi,

 I have a postgresql database Table Categories which has the structure like
 this

 Cat_ID | Parent_ID | Name
 
 1 | 0 | Automobiles
 2 | 0 | Beauty & Health
 3 | 1 | Bikes
 4 | 1 | Cars
 5 | 3 | Suzuki
 6 | 3 | Yamaha
 7 | 0 | Clothes

 According to the above Database data, Suzuki and Yamaha are the
 Subcategories of Category Bikes and Bikes in turn is the Subcategory of
 Automobiles.

 My question is: what is the advantage of creating a NESTED table over having a
 table structure like the one below, which achieves the same goal:

 Category Table

 Cat_ID | Cat_Name
 
 1 | Automobiles
 2 | Beauty  Health
 3 | Bikes
 4 | Cars
 7 | Clothes

 Subcategory Table

 Subcat_ID | Sub_Cat_ID | Sub_Name
 --
 10 | 3 | Suzuki
 11 | 3 | Yamaha
 12 | 1 | Bikes
 13 | 1 | Cars

 Since all display the data in them in a tree structure like below

 Automobiles
 --- Bikes
 -- Suzuki
 -- Yamaha
 --- Cars
 Beauty & Health
 Clothes

 Please advice should I Create a Nested Table Categories or
 Should I create TWO Tables, Category Table and Subcategory Table?

 Thanks once again.
 +-+
 | Martin W. Kuria (Mr.) [EMAIL PROTECTED]
 ++

 _
 Express yourself instantly with MSN Messenger! Download today it's FREE!
 http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/




Re: [GENERAL] how to prevent generating same clipids

2006-04-13 Thread Ian Harding
This problem has been solved by the use of sequences.  If you can't
use them as a default, you can use them instead of

MAX(clipid)

You would use

NEXTVAL('clipid_seq')

assuming you had first done

CREATE SEQUENCE clipid_seq;
SELECT SETVAL('clipid_seq', (select MAX(clipid) from whatevertable));

This will guarantee no duplicates.  It will not guarantee no missing values.

On 10 Apr 2006 22:43:16 -0700, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
 Hi
   Now we cannot change the field type, because our application is already
 running and thousands of records have already been entered.  We are getting
 the same number for clipid when 2 users are entering data at a time, so how
 do we solve the problem?




[GENERAL] Leverage your PostgreSQL V8.1 skills to learn DB2

2006-04-12 Thread Ian Harding
This is interesting.

http://www-128.ibm.com/developerworks/db2/library/techarticle/dm-0603wasserman2/

There are a few bugs

1.  In the graphic overview PostgreSQL == Progres
2.  In description of PostgreSQL database cluster, After
initialization, a database cluster contains a database called
postgres, which is a default database used by utilities, users and
third party applications.   That doesn't seem to be necessarily true.
 I don't have that database.
3.  He misses the fact that some configuration options can be set
interactively via SET
4.  The VACUUM command must be run on a regular basis to recover disk
space occupied by updated or deleted rows and to update data
statistics used by the PostgreSQL query planner.  with no mention of
autovacuum.  He is talking about 8.1.

Every time I learn about a big commercial database I am amazed at how
much 'bigger' and more complicated than PostgreSQL they are - I guess
they have to be for their intended use - and how similar to PostgreSQL
they are in actual function.



[GENERAL] postmaster.pid

2006-04-12 Thread Ian Harding
The docs state that postmaster.pid is "A lock file recording the
current postmaster PID and shared memory segment ID (not present after
postmaster shutdown)"

I never looked until now, but I see the number 5432001 where the pid
should be, and the real pid is in /tmp/.s.PGSQL.5432.lock, along with
the path to data.

Is the documentation incomplete/misleading or am I doing something odd?

- Ian



Re: [GENERAL] PostgreSQL x Sybase

2006-03-31 Thread Ian Harding
 A commonly overlooked comparison for always on systems is to compare
 what sorts of operations you can do to databases without needing to
 restart the server or drop tables, lock out users etc.


We use Sybase Adaptive Server Anywhere 8 here and the thing that
annoys me about it is exactly this.  You can't modify any database
objects (AFAICT) while there are other connections.  I have to script
things and schedule them to run in the middle of the night.

Their Enterprise product is apparently a completely different
animal, so it is probably better.

- Ian



Re: [GENERAL] postgresql.conf listen_addresses causing connection problems

2006-03-29 Thread Ian Harding
On 3/29/06, David Bernal [EMAIL PROTECTED] wrote:
 I recently have been attempting to get my install of postgresql 8.1
 (running Win XP as OS) to listen on both 127.0.0.1 and my IP address,
 192.168.0.100 (inside my network, obviously.) As such, I tried first
 setting listen_addresses = '192.168.0.100, localhost'

Try * (wildcard) and see what happens.  It should either work or not
work, not work 'sometimes' so I think there must be something else
involved.  If pg_hba.conf is set up right, and listen address is *
then you have the network to look at.
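
i.e. something like this in the two config files (adjust the subnet and
auth method to suit):

   # postgresql.conf
   listen_addresses = '*'

   # pg_hba.conf -- let the local subnet in with password (md5) auth
   host    all    all    192.168.0.0/24    md5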



Re: [GENERAL] Implementation Suggestions

2006-03-29 Thread Ian Harding
  I'm wondering if I could get some suggestions as to how implement
  this quickly and simply? I was thinking a web interface using PHP
  would be the fastest way of going about it.
 

If you used Ruby on Rails, you'd be finished by now.  It slices, it
dices, it makes julienne fries.

Seriously, it's not too bad if you don't mind its plentiful
shortcomings.  I was getting carpal tunnel syndrome from typing
scripting language pages so I switched to RoR for a hobby app.  It
works fine, but you have to do it The Rails Way and expect no help
from the Community because they are a fanboi cheerleader squad, not
interested in silly stuff like referential integrity, functions,
triggers, etc.  All that nonsense belongs in the application!

Check this out, there is no stale connection detection or handling in
rails.  I'm not kidding.  If your connection drops out, restart your
web server.  Sorry.  Blah.

Anyway, besides its warts, it is dead easy to use, and does make
putting together web applications in a green field scenario quite
painless.  Just don't try to do anything outside the box like trying
to access an existing database that uses RDBMS features heavily and
uses normal object naming.

- Ian



Re: [GENERAL] basic stored proc/transaction question

2006-03-24 Thread Ian Harding
On 3/24/06, Ben [EMAIL PROTECTED] wrote:
 My understanding is that a stored procedure does an implicit begin/commit when
 it executes. Maybe my brain isn't working so well this morning, because I can't
 figure out how I would do:

 begin;
 call stored proc;
 call another stored proc;
 commit;

 It seems that the transaction would be committed after the first call.

Nope.  Unless you use the new SAVEPOINT stuff, the explicit
transaction is the transaction.  Any error in any function will
rollback the whole thing.  The commit happens at the explicit commit.

Every SQL statement (such as calling a function) runs in an implicit
transaction.  Explicit transactions effectively group these implicit
transactions such that any one failure causes them all to fail.

- Ian



Re: [GENERAL] Disability the trigger

2006-03-21 Thread Ian Harding
On 3/16/06, Tomi NA [EMAIL PROTECTED] wrote:


 On 3/13/06, Claudio Tognolo [EMAIL PROTECTED] wrote:
 
  I can disable the Trigger?
 

 I'd like to know how this could be done, as well. What I really need is a
  hold-off-all-triggers-until-I-tell-you-to command, but
 hey, making a trigger just not fire and vice versa would also be nice. :)

 Tomislav

You can mark triggers as DEFERRABLE and optionally INITIALLY DEFERRED
which will make them hold off on firing until the end of the
transaction, or you can actually turn off triggers with something like
this.

UPDATE pg_class SET reltriggers = 0 WHERE relname = 'tablename';

and turn them back on like so

UPDATE pg_class SET reltriggers = (SELECT count(*) FROM pg_trigger where
pg_class.oid = tgrelid) WHERE relname = 'tablename';

Of course, all that is best done inside an explicit transaction.



Re: [GENERAL] Wal -long transaction

2006-03-20 Thread Ian Harding

 And it's been a while; but I thought transactions like that could
 overflow rollback segments in that other database.


ORA-01555:  snapshot too old: rollback segment number string with name
string too small
Cause:  Rollback records needed by a reader for consistent read are
overwritten by other writers.
Action: If in Automatic Undo Management mode, increase the setting of
UNDO_RETENTION. Otherwise, use larger rollback segments.

In 10g you can do

ALTER TABLESPACE UNDO_TS2 RETENTION GUARANTEE;

which will automgically grow the undo tablespace until you run out of
disk space or the transaction ends.



Re: [GENERAL] full text indexing

2006-03-15 Thread Ian Harding
On 3/15/06, chris smith [EMAIL PROTECTED] wrote:
 Hi all,

 Just wondering which full text module is better & what the differences
 are between tsearch and fti?

Having only used tsearch/tsearch2 all I can say that it works as
advertised and I am extremely happy with it.

- Ian



Re: [GENERAL] tsearch2 and how to use

2006-01-19 Thread Ian Harding
The big job is populating the index columns.  I think you can only put
the full text index column in the same table as the referenced
columns.  In other words, you will end up with 3 tables, each with a
ftidx column.  I hope your docs show how to create and populate the
indexes and to create triggers to keep them up to date.

To search across 3 tables I would do something like

select ...
from t1 join t2 on ... join t3 on...
where t1.idxfti @@ 'test'::tsquery
  or t2.idxfti @@ 'test'::tsquery
  or t3.idxfti @@ 'test'::tsquery

If there is an easier better way, I don't know it!

I can email you the very good intro and readme files directly if you need them.

On 1/19/06, Michelle Konzack [EMAIL PROTECTED] wrote:
 Hello,

 because I am not in Strasbourg and have NO WEB Access I have
 following question:
 How to use tsearch2?

 I have loaded the tsearch2.sql into my Database but I do not know
 how to use it, because the Documentation under Debian is not very
 useful.

 Note:   I am sending messages via a GSM gateway which block any
 kind of SSL, FTP and Web access.

 Does anyone have some useful examples of how to search in
 three tables (timeline, dossiers and peoples), three
 columns each?

 Where the biggest column is in timeline and hold around
 130 GByte of text/plain data.

 Greetings
 Michelle Konzack
 Systemadministrator
 Tamay Dogan Network
 Debian GNU/Linux Consultant


 --
 Linux-User #280138 with the Linux Counter, http://counter.li.org/
 # Debian GNU/Linux Consultant #
 Michelle Konzack   Apt. 917  ICQ #328449886
50, rue de Soultz MSM LinuxMichi
 0033/3/8845235667100 Strasbourg/France   IRC #Debian (irc.icq.com)




--
Her faults were those of her race and sex; her virtues were her own.
Farewell, and if for ever - 

-- Travels with a Donkey in the Cevennes by Robert Louis Stevenson



Re: [GENERAL] E-mail harvesting on PG lists?

2006-01-09 Thread Ian Harding
On 1/8/06, Guy Rouillier [EMAIL PROTECTED] wrote:
 Carlos Moreno wrote:

  Any comments?   If it is the first option above, then it feels like
  by definition there is absolutely nothing that can be done, now or
  ever  :-(

 I got an IMAP account with BurntMail.com.  I belong to a dozen mailing
 lists, and haven't received any spam since getting the email account.
 They obviously do aggressive spam filtering, but as far as I know I'm
 getting all the email I should.  An option to consider...

I use Gmail which promises I will never run out of room, can tag
messages and bypass the inbox, and has a very good spam filter. 
Searching is obviously pretty good too.

I noticed I got about 5 - 10 spam emails within MINUTES of each post
before I switched to Gmail.  I'm sure I still do, I just don't see
them.



Re: [GENERAL] Oracle DB Worm Code Published

2006-01-08 Thread Ian Harding
On 1/7/06, Magnus Hagander [EMAIL PROTECTED] wrote:
  A recent article about an Oracle worm:
  http://www.eweek.com/article2/0,1895,1880648,00.asp
  got me wondering.
  Could a worm like this infect a PostgreSQL installation?
  It seems to depend on default usernames and passwords - and
  lazy DBAs, IMO.
  Isn't it true that PostgreSQL doesn't have any default user/password?

 That's true. however, PostgreSQL ships by default with access mode set
 to trust, which means you don't *need* a password. And I bet you'll
 find the user being either postgres or pgsql in 99+% of all
 installations.

 We do, however, ship with network access disabled by default. Which
 means a worm can't get to it, until you enable that. But if you enable
 network access, and don't change it from trust to something else (such
 as md5), then you're wide open to this kind of entry.


I don't think it's quite that easy.  The default installs from SUSE
and other RPMs I have done are set to ident sameuser for local
connections.  Even if you turn on the -i flag, you can't get in
remotely since there is no pg_hba.conf record for the rest of the
world by default.  You would have to add a record to pg_hba.conf.

PostgreSQL is remarkably secure out of the box compared to Brand X.
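For reference, the stock pg_hba.conf from those packages looks more or
less like this (illustrative only -- the exact entries vary by distro
and version):

# TYPE  DATABASE  USER  CIDR-ADDRESS   METHOD
local   all       all                  ident sameuser
host    all       all   127.0.0.1/32   ident sameuser
# no entry for any other address, so remote connections are refused
# even after you start the postmaster with -i

You have to add (or loosen) a host line yourself before anything from
the outside can even attempt to authenticate.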



Re: [GENERAL] Reordering columns in a table

2006-01-06 Thread Ian Harding
As I recall, the MS SQL Server draggy-droppy diagrammer tool that made
it seem trivial to rearrange columns did the same thing.  It just
generated SQL statements to:

Begin transaction
select data in new order into a new table
drop dependent objects
drop old table
rename new table
re-create dependent objects
end transaction

It seemed kinda squirrelly to me, but it worked most of the time since
MSSQL Server had a good dependency tracking thingie.  However, I would
not really call it a feature of the DBMS.  I would call it a bolted on
utility.
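In PostgreSQL terms the by-hand equivalent is roughly this (a sketch
with made-up names; the dependent views, constraints and triggers are
still your problem):

BEGIN;
CREATE TABLE mytable_new AS
    SELECT col_b, col_a, col_c FROM mytable;  -- columns in the new order
-- drop views, foreign keys and triggers that reference mytable here
DROP TABLE mytable;
ALTER TABLE mytable_new RENAME TO mytable;
-- re-create the indexes, constraints, triggers and views here
COMMIT;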

On 1/6/06, Scott Ribe [EMAIL PROTECTED] wrote:
  I would assume
  that all dependent database objects are also dropped when you drop the
  table, so you'd have to recreate all of your foreign keys (both
  directions) and triggers etc.?

 Basically. I try to keep my DDL scripts organized in a way that makes this
 easy. Of course an automated tool could do this as well. For instance I used
 to use the products from Embarcadero to maintain Sybase databases, and their
 design tool would create all the DDL needed to update a live database to
 match the current design. Of course, one experience with a bug and I learned
 to have it show me the script and read it carefully before proceeding ;-)
 (Hey, I'm not stupid, it was just a development db that I hosed!)


 --
 Scott Ribe
 [EMAIL PROTECTED]
 http://www.killerbytes.com/
 (303) 665-7007 voice







Re: [GENERAL] WAL logs multiplexing?

2005-12-28 Thread Ian Harding
On 12/28/05, Dmitry Panov [EMAIL PROTECTED] wrote:
 On Wed, 2005-12-28 at 13:38 +0100, Martijn van Oosterhout wrote:
  On Wed, Dec 28, 2005 at 03:17:40PM +0300, Dmitry Panov wrote:
   I'm currently considering setting up online backup procedure and I
   thought maybe it would be a useful feature if the online logs could be
   written into more than one place (something like oracle redo logs
   multiplexing).
  
   If I got it right if the server's filesystem crashes completely then the
   changes that haven't gone into an archived log will be lost. If the logs
   are written into more than one place the loss could be minimal.

When I set up PITR I felt like something was missing.  You have to
wait for the current log file to be closed before it gets copied off
somewhere safe.  I think this is something that should be seriously
considered if it's not too hard.



Re: [GENERAL] WAL logs multiplexing?

2005-12-28 Thread Ian Harding
On 12/28/05, Dmitry Panov [EMAIL PROTECTED] wrote:
 On Wed, 2005-12-28 at 11:05 -0500, Tom Lane wrote:
  Dmitry Panov [EMAIL PROTECTED] writes:
   Yes, but if the server has crashed earlier the script won't be called
   and if the filesystem can't be recovered the changes will be lost. My
   point is the server should write into both (or more) files at the same
   time.
 
  As for that, I agree with the other person: a RAID array does that just
  fine, and with much higher performance than we could muster.
 

 Please see my reply to the other person. The other place can be on an
 NFS mounted directory. This is what the Oracle guys do and they know
 what they are doing (despite the latest release is total crap).

RAID is great for a single box, but this option lets you have
up-to-the-second PITR capability on a different box, perhaps at
another site.  My boss just asked me to set something like this up and
the only way to do it at the moment is a replication setup, which seems
overkill for an offline backup.

If this functionality existed, could it obviate the requirement for an
archive_command in the simple cases where you just wanted the logs
moved someplace safe (i.e. no intermediate compression or whatever)?
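For context, the hook I mean is the archive_command setting, which in
the simple case is just something like this (path is made up):

archive_command = 'cp %p /mnt/backup_host/wal_archive/%f'

The catch is that the command only fires once a 16 MB segment is
complete, which is exactly the window I'd like to close.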



Re: [GENERAL] Queries never returning...

2005-12-28 Thread Ian Harding
On 12/28/05, John McCawley [EMAIL PROTECTED] wrote:
 I am currently having a problem with a query never finishing (or at
 least not in a reasonable amount of time.)  I have had similar problems
 to this in Postgres over the past several years, and I have always found
 workarounds.  This time I'd like to actually understand what is happening.


I bet it would go faster if you dropped the RI constraints and any
other triggers first.
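Something along these lines (hypothetical names), wrapped in a
transaction so you don't end up half-done:

BEGIN;
ALTER TABLE detail DROP CONSTRAINT detail_master_fk;
-- run the big INSERT/UPDATE/DELETE here
ALTER TABLE detail ADD CONSTRAINT detail_master_fk
    FOREIGN KEY (master_id) REFERENCES master (id);
COMMIT;

Note that the ADD CONSTRAINT re-checks every row at the end, so this
only wins if the per-row trigger overhead was the real problem.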



[GENERAL] psql scripts

2005-11-09 Thread Ian Harding
I have a file which is a long series of SQL commands.  Creating
tables, copying in data, indexing, munging the data, intermittently
vacuuming to keep things moving.
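Roughly, the file is just a long series of statements like this (table
and file names are placeholders):

CREATE TABLE load_parcels (parcel_id text, owner text, val numeric);
\copy load_parcels from 'parcels.dat'
CREATE INDEX load_parcels_idx ON load_parcels (parcel_id);
UPDATE load_parcels SET owner = upper(owner);
VACUUM ANALYZE load_parcels;

...and so on for a few dozen tables.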

I have usually run this big script like this:

nohup psql dbname < script.sql &

After upgrading to 8.0, the script slowed to a crawl after a while.
However, it works fine when run from inside a psql session like this

\i script.sql

All I can think of is that the entire script is running in one big
transaction, although there are no explicit BEGIN statements in it.

The man page for psql says, regarding the -c option

If the command string contains multiple SQL commands, they are
processed in a single transaction, unless there are explicit
BEGIN/COMMIT commands included in the string to divide it into
multiple transactions. This is different from the behavior when the
same string is fed to psql's standard input.

implying that stdin is run exactly the same as \i.

That being the case, what could be causing my script to slow to a
crawl when redirected to psql, but run fine from inside psql with \i?

- Ian



Re: [GENERAL] Oracle 10g Express - any danger for Postgres?

2005-11-02 Thread Ian Harding
On 11/1/05, Andrew Rawnsley [EMAIL PROTECTED] wrote:

 They actually did make _some_ strides. The installer actually works
 consistently (knock on veneer-covered-pressboard), which is something I
 haven't seen since the pre-8i text-mode installs...

 Doesn't quite compare to the 5 minute untar/config/build/install/create
 database cycle we're used to with PG however.

Or the ten second rpm -i...

I've only ever installed Oracle once, but I was offended by the
REQUIREMENT of a GUI to install the thing.  I managed to just install
the X client and connect from another machine, but still, that seems
whack to me.  Plus I had to run a command line installer for the
particular Linux flavor that prepared the system for me to install
Oracle.  Yikes.  I don't see a single MySQL fanboi going through that
if their biggest beef about PostgreSQL (besides how PAINFULLY slow it
is ;^) is how horribly difficult it is to install and configure.

Having said all that, I will probably dabble in it since it fattens up
the resume...



Re: [GENERAL] PostgreSQL Gotchas

2005-10-08 Thread Ian Harding
On 10/8/05, Nikolay Samokhvalov [EMAIL PROTECTED] wrote:
 I use PostgeSQL less than year. Before I worked with MS SQL Server
 2000, MySQL 34, Oracle8i and Interbase. Also, I studied standards
 SQL:1999 and SQL:2003. So, after switching to PostgreSQL I've
 encountered with several things that seem strange to me. Of course,
 several of them are simply not implemented yet and are in the list of
 unsopported features:
 http://www.postgresql.org/docs/8.0/interactive/unsupported-features-sql-standard.html.
 But some seem to be 'old diseases'.

 Here is my list of the issues:
 http://chernowiki.ru/Dev/PostgreSQLComparedWithSQL2003AndOracleSQLServerDB2Etc

 Perhaps I'm wrong with some issues - any comments are welcome.

From the referenced page...

We can insert into a column of type TIME a value such as '12:15' and
then get back '12:15:00' when selecting, but we couldn't do the same
with dates.

Huh?  Minutes and seconds have a valid value of zero.  Months and days
don't.  Date types need to contain a valid date, and time types need
to contain a valid time.  Anything else seems like, well, MySQL.
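A quick illustration (the last one fails, which is the point):

SELECT '12:15'::time;                     -- 12:15:00, seconds default to zero
SELECT '12:15'::time = '12:15:00'::time;  -- true
SELECT '2005-10'::date;                   -- error: no sane default for the missing day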



Re: [GENERAL] Indexen on 8.0.3

2005-10-07 Thread Ian Harding
On 10/6/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 On Thursday 06 October 2005 18:20, Tom Lane wrote:
 
  No, there's no reason for 8.0 to be slower at this than 7.4, if all else
  is equal.  I'm betting that all else is not equal.  Maybe you are using
  a different encoding or locale in the new installation than the old?
 
 Well, I suspect that something is not equal as well. The trouble is I can't
 seem to find it.
 We're going to replay what happened on a different machine, and hopefully will
 find something.


When I went from 7.4 to 8.0 I had queries that were significantly
slower.  I had to tweak work_mem to get them to run at reasonable
speed, and then they were faster than 7.4 with default sort_mem. 
Can't remember the details of the queries right now.
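The tweak itself is a one-liner once you find it -- something like
this, per session (the number is just an example; on 8.0 it's in kB):

SHOW work_mem;          -- 1024 by default, i.e. 1 MB per sort/hash
SET work_mem = 20480;   -- give this session ~20 MB before spilling to disk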

 Thanks,

 Han Holl





Re: [GENERAL] SLOOOOOOOW

2005-09-06 Thread Ian Harding
On 9/6/05, Jürgen Rose [EMAIL PROTECTED] wrote:

 I did some serious stuff with SQL Server and Interbase, and I
 **never** had those performance problems.
 

On a laptop?  Under VMWare?  

I have used MS SQL Server too, and find PostgreSQL compares favorably
in most cases.  You may have found a case where it does not.  I have
generally found PostgreSQL to work fine out of the box, with the
main 'obscure knob' that needs to be twiddled being sort_mem (or
work_mem for 8.0 and up).

If you give specifics of your database structure and migration
functions I am sure someone on the list will be able to help you get
reasonable performance.

 enough of ranting, but I'm totally frustrated

It will work out.

- Ian

 with best regards
 Jürgen
 




Re: [GENERAL] optimum settings for dedicated box

2005-08-30 Thread Ian Harding
Mine is similar, and the only thing I have changed from defaults is
work_mem.  It made certain complex queries go from taking forever to
taking seconds.  I have a database connection pool limited to 10
connections, so I set it to 10MB.  That means (to me, anyway) that
work_mem will never gobble more than 100MB.  Seems OK since I have
1GB.

Free space map should probably be tweaked too, if you have lots of
updates or deletes.  I think.
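On a box like that I'd start from something like this in
postgresql.conf (numbers are examples, not gospel):

work_mem = 10240                # kB; ~10 MB x 10 pooled connections
maintenance_work_mem = 65536    # kB; helps VACUUM and CREATE INDEX
shared_buffers = 10000          # 8 kB pages, so roughly 80 MB
max_fsm_pages = 200000          # raise if VACUUM VERBOSE complains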

- Ian

On 8/30/05, Matt A. [EMAIL PROTECTED] wrote:
 Wondering what the optimum settings are for an
 dedicated postgresql database box? The box is an
 2.8ghz processor, 1gig ram (soon will be 4) and raid 1
 (mirroring) across two 10k rpm SCSI disks. I only have
 a single database on it running linux of course. Thanks.
 
 __
 Do You Yahoo!?
 Tired of spam?  Yahoo! Mail has the best spam protection around
 http://mail.yahoo.com
 




Re: [GENERAL] POSS. FEATURE REQ: Dynamic Views

2005-08-27 Thread Ian Harding
My first idea when this was mentioned was more like 

ALTER TABLE ... CASCADE

where CASCADE meant recompile all the views that depend on that table.  

Not that I think any of this is a good idea, but if it was going to be
done, that's what would make the most sense to me.


- Ian



Re: [GENERAL] POSS. FEATURE REQ: Dynamic Views

2005-08-25 Thread Ian Harding
Brand X simulates this in their GUI diagrammer by tracking
dependencies and dropping and re-creating dependent views on schema
changes.  This might be a better job for one of the GUI tools for us
too, rather than trying to put it in the back end.  Brand X doesn't do
it in their backend either.
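As the quoted note below points out, the * is expanded when the view
is created, which is easy to see:

CREATE TABLE t (a int);
CREATE VIEW v AS SELECT * FROM t;
ALTER TABLE t ADD COLUMN b int;
SELECT * FROM v;   -- still only column a; the list was frozen at CREATE VIEW time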

On 8/25/05, Jim C. Nasby [EMAIL PROTECTED] wrote:
 On Thu, Aug 25, 2005 at 03:03:30PM -0700, CSN wrote:
  For lack of a better term, but I was curious if there
  is/was any thought about making PG's views
  automatically see changes in underlying tables, as
  opposed to currently having to drop/create all
  corresponding views if a table's structure (add/delete
  fields, etc.) is changed.
 
 There's not currently a TODO for this, no.
 http://www.postgresql.org/docs/faqs.TODO.html I'm not sure how hard it
 would be to do, since currently CREATE VIEW v AS SELECT * FROM t
 actually expands the * out at creation time.
 --
 Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
 Pervasive Softwarehttp://pervasive.com512-569-9461
 



