Re: [GENERAL] Linux v.s. Mac OS-X Performance

2007-11-27 Thread Doug McNaught


On Nov 27, 2007, at 8:36 PM, Gregory Stark wrote:


I think (but I'm not sure) that the kernel in OSX comes from BSD.


Kind of.  Mach is still running underneath (and a lot of the app APIs
use it directly), but there is a BSD 'personality' on top of it which
(AIUI) consists of large parts of FreeBSD ported to run on Mach.  So when
you use the Unix APIs you're going through that layer.


-Doug



Re: [GENERAL] Last modification time

2006-02-12 Thread Doug McNaught
Johan Vromans [EMAIL PROTECTED] writes:

 Doug McNaught [EMAIL PROTECTED] writes:
 Yeah, LISTEN/NOTIFY is definitely a lighter-weight solution--I didn't
 think of that before.

 LISTEN/NOTIFY looks like a synchronisation mechanism. You can notify a
 subscriber that something happened. But in my case, the report
 generating program runs only occasionally and will not be permanently
 running subscribed. I'm not sure how to use LISTEN/NOTIFY for that.

Well, you need something trigger-based, because I don't think it would
be useful to look at modification dates on the database files or
anything like that (checkpoints, vacuums, rolled-back transactions, etc.
would change those even when the user-visible data didn't change).
You could have the trigger update a single-row table and have the
report generator (or a wrapper) poll for it, or you could have it send
a NOTIFY to a little program sitting on a connection, which would then
kick off the report generator.
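If you go the single-row-table route, the sketch below is roughly what
I have in mind (all names made up, untested):

  -- one-row flag table
  CREATE TABLE report_dirty (needs_report boolean NOT NULL);
  INSERT INTO report_dirty VALUES (false);

  CREATE FUNCTION mark_dirty() RETURNS trigger AS '
  BEGIN
      UPDATE report_dirty SET needs_report = true;
      RETURN NULL;
  END;
  ' LANGUAGE plpgsql;

  -- one of these per table you care about
  CREATE TRIGGER mark_dirty AFTER INSERT OR UPDATE OR DELETE ON mytable
      FOR EACH STATEMENT EXECUTE PROCEDURE mark_dirty();

The report wrapper then just checks (and resets) needs_report before
deciding whether to regenerate anything.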

-Doug



Re: [GENERAL] Last modification time

2006-02-11 Thread Doug McNaught
Johan Vromans [EMAIL PROTECTED] writes:

 Greetings,

 For a big application, I want to generate reports from the database and
 keep these on-line as long as they reflect the actual contents of the
 database. I only want to regenerate the reports when needed, i.e.,
 when the database contents have changed.

 I'm sure PostgreSQL can tell me when the last update has been
 COMMITted but until now I haven't been able to find out how. I must
 have used the wrong search terms...

 Can anyone tell me how to find the last update time of a database?

There isn't any out-of-the-box way that I know of.

I would put an AFTER trigger on all the tables concerned that inserts
a row into an audit table.  Your report generator can then run
periodically, see if there are any new audit entries, generate
reports, and clean out the audit table (if desired).

Note that the audit table may grow very quickly and need frequent
vacuuming if you clean it out regularly.

Audit tables are useful for other things too, if you can afford them.
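A bare-bones sketch of what I mean (names invented, untested):

  CREATE TABLE audit_log (
      table_name  text,
      changed_at  timestamptz DEFAULT now()
  );

  CREATE FUNCTION log_change() RETURNS trigger AS '
  BEGIN
      INSERT INTO audit_log (table_name) VALUES (TG_RELNAME);
      RETURN NULL;
  END;
  ' LANGUAGE plpgsql;

  -- repeat for each table of interest
  CREATE TRIGGER log_change AFTER INSERT OR UPDATE OR DELETE ON mytable
      FOR EACH ROW EXECUTE PROCEDURE log_change();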

-Doug



Re: [GENERAL] Last modification time

2006-02-11 Thread Doug McNaught
Johan Vromans [EMAIL PROTECTED] writes:

 Doug McNaught [EMAIL PROTECTED] writes:

 I would put an AFTER trigger on all the tables concerned that
 inserts a row into an audit table. [...] Audit tables are useful for
 other things too, if you can afford them.

 I think auditing is much too heavy for something as simple as finding the
 last modification timestamp. Yes, it may be the only alternative, but
 a heavy one...

Yeah, LISTEN/NOTIFY is definitely a lighter-weight solution--I didn't
think of that before.

-Doug



Re: [GENERAL] Last modification time

2006-02-11 Thread Doug McNaught
Philippe Ferreira [EMAIL PROTECTED] writes:

Yeah, LISTEN/NOTIFY is definitely a lighter-weight solution--I didn't
think of that before.


 Hi,

 I'm interested too in using this method !

 Can anyone give a simple example of how to use/implement it ?
 Or good links to clear/quick documentation about these functions ?

The online PostgreSQL docs at postgresql.org are quite good.
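For what it's worth, the basic shape is small enough to show here (the
channel name is made up):

  -- in the session that wants to hear about changes:
  LISTEN report_refresh;

  -- in a trigger or in the application, after changing data:
  NOTIFY report_refresh;

The listening session gets an asynchronous notification the next time
it talks to the server (psql prints it; a libpq client picks it up with
PQnotifies()).  The docs on LISTEN and NOTIFY cover the details.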

-Doug



Re: [GENERAL] Sequences/defaults and pg_dump

2006-02-10 Thread Doug McNaught
Bruno Wolff III [EMAIL PROTECTED] writes:

 On Tue, Feb 07, 2006 at 15:28:31 +0300,
   Nikolay Samokhvalov [EMAIL PROTECTED] wrote:
 The real situation would be as the following.
 I want to use some algorithm to hide real number of registered users
 in my table user. So, I don't want to use simple sequence, when every
 new registered user in my system can guess what is the number of
 registered users simply observing his ID. So, I use following

 (N and M are said to be relatively prime.)

 The above method isn't very secure. You might be better off using a block
 cipher in counter mode, depending on how badly you want to keep the number
 of users secret. Even that won't be foolproof as the users might cooperate
 with each other to estimate how many of them there are.

Or, just start your sequence counting at 100.  Or use bigint and
start it at a billion.
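For example (sequence name and values here are only illustrative):

  CREATE SEQUENCE users_id_seq START 1000000000;
  -- or, for an existing sequence:
  SELECT setval('users_id_seq', 1000000000);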

-Doug



Re: [GENERAL] plpgsql direct from C?

2006-02-04 Thread Doug McNaught
James Harper [EMAIL PROTECTED] writes:

 Is there an API method to be able to execute pl/pgsql (or any other
 language) code directly from C?

 Eg (please excuse the line wrapping, and the fact that this doesn't
 represent a case where this would actually be useful!)

 result = PQexecPL("plpgsql", "IF $1 = 'xyzzy' THEN SELECT * FROM fnord
 END IF;", "xyzzy");

Not currently.  Any server-side code has to be in a function, so such
an API would have to create a temporary function (which you might or
might not have privileges to do), call it, and then drop it.  It's
theoretically possible I guess, but would be ugly and slow.
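To give an idea, such a wrapper would have to do something morally
equivalent to this behind the scenes (function name invented):

  CREATE FUNCTION temp_exec(text) RETURNS void AS '
  BEGIN
      IF $1 = ''xyzzy'' THEN
          PERFORM * FROM fnord;
      END IF;
      RETURN;
  END;
  ' LANGUAGE plpgsql;

  SELECT temp_exec('xyzzy');
  DROP FUNCTION temp_exec(text);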

-Doug



Re: [GENERAL] Automating backup

2006-02-03 Thread Doug McNaught
Richard Sydney-Smith [EMAIL PROTECTED] writes:

   pch := pchar('pg_dump -C -h '+host+' -U '+usr+' -p '+pswd+ ' -f
 '+bckup_path+' '+dbase);

 to postgres.

 as the operator is obviously logged in how do I
 (1) trap their user id
 (2) Send the call to pg_dump without knowing their password?

 I expect this is a well worn route and am hoping not to have to
 reinvent a wheel.

I don't think it's well-worn at all--everyone I've ever heard of
runs pg_dump from a cron script.

Why not have a shell script run by the operator that runs pg_dump and
then calls psql to insert the log record (assuming the dump succeeds)?
Putting the logic inside of the database doesn't seem to buy you
anything AFAICS.
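Something like this, roughly (paths and the log table are made up):

  #!/bin/sh
  # only record the backup if pg_dump actually succeeded
  if pg_dump -C -f /backups/mydb.dump mydb; then
      psql -c "INSERT INTO backup_log (who, done_at) VALUES (current_user, now())" mydb
  fi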

-Doug



Re: [GENERAL] PostgreSQL + hibernate

2006-01-31 Thread Doug McNaught
Myatluk Andrey [EMAIL PROTECTED] writes:

 Hi, Chris!

 I've set the sequence owner to my database user. It hasn't changed
 anything.

 I guess I have to connect through psql as user and then see what
 happens.
 If it fails, what could be my next step?

That would depend on the error message you get.

-Doug



Re: [GENERAL] How to change the default database for a user

2006-01-30 Thread Doug McNaught
Alexander Farber [EMAIL PROTECTED] writes:

 Hello,

 I've created a user and a database both wrongly named phpbb.
 After that I have renamed both to punbb using ALTER DATABASE
 and ALTER USER. Now everything works fine, except I always
 have to specify the database when connecting (both using psql or
 the PQconnectdb() from my C-program):

 h754814:afarber {103} psql -U punbb
 psql: FATAL:  database "phpbb" does not exist

psql uses your Unix username as the default database if you don't
specify one. 

-Doug



Re: [GENERAL] How to change the default database for a user

2006-01-30 Thread Doug McNaught
Alexander Farber [EMAIL PROTECTED] writes:

 but I do specify the database username:

 h754814:afarber {109} psql --help | grep -w user
   -U NAME database user name (default: phpbb)

 h754814:afarber {110} psql -U punbb
 psql: FATAL:  database "phpbb" does not exist

 And it wants to connect to a wrong database
 (phpbb instead of punbb)

I was talking about the name of the database, not the user name in the
database.  The former defaults to the same as your Unix username
(e.g. if I log in as 'doug' psql will try to connect me to a database
called 'doug' if I don't supply a database name).  The PGDATABASE
environment variable overrides this.  Are you sure it's not set?

Anyway, you should always supply the database name explicitly when you
connect--it's a lot safer.
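Concretely (using the names from your mail), either of these takes the
guesswork out of it:

  $ psql -U punbb punbb        # database name given explicitly
  $ echo $PGDATABASE           # if this prints phpbb, that's your culprit

and in C, spell it out in the conninfo string, e.g.
PQconnectdb("dbname=punbb user=punbb ...").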

-Doug



Re: [GENERAL] New project launched : PostgreSQL GUI Installer for

2006-01-30 Thread Doug McNaught
Devrim GUNDUZ [EMAIL PROTECTED] writes:

 http://pgfoundry.org/projects/pgnixinstaller/

 We are actively looking for developers for the project. Please drop me
 an e-mail if you want to join this project. We will use Python, so you
 need to be a Python guy to join the project. We are in planning phase,
 if you join us earlier, we will be able to share more ideas. 

What value does this bring to systems that have a good package system
and up-to-date repositories?  I can install Postgres today on Ubuntu
using a GUI tool, and install another GUI tool to configure and
administer it.

For systems like Solaris I can see it maybe being a win.

Are you going to work with the underlying system's package manager, or
put everything in /usr/local?

-Doug



Re: [HACKERS] [GENERAL] New project launched : PostgreSQL GUI

2006-01-30 Thread Doug McNaught
Devrim GUNDUZ [EMAIL PROTECTED] writes:

 On Mon, 2006-01-30 at 20:03 -0500, Doug McNaught wrote:

 What value does this bring to systems that have a good package system
 and up-to-date repositories?  I can install Postgres today on Ubuntu
 using a GUI tool, and install another GUI tool to configure and
 adminsiter it.

 You can install, but what if you need different configure options than
 the package provides? This means a rebuild of the package. Instead, we
 will build and install that package via the installer.

That's actually a pretty cool idea--compile and generate debs/rpms
that reflect the user's choices, then install them.  But the
dependency on a compiler adds a twist of complexity--"sorry, you need
to install the following system packages (gcc, etc.) before you can
install Postgres as you've configured it."  Not horrible, but perhaps
intimidating for the GUI crowd?  :)  Is gcc in the bog-standard
default install on FC these days?

Certainly you can install pre-built binaries without a compiler, and
let the user choose database location, autovacuum settings and stuff
like that.

Good luck!

-Doug



Re: [HACKERS] [GENERAL] New project launched : PostgreSQL GUI

2006-01-30 Thread Doug McNaught
Marc G. Fournier [EMAIL PROTECTED] writes:

 On Tue, 31 Jan 2006, Devrim GUNDUZ wrote:

 On my RHEL boxes, I do never ever recompile the kernel since Red Hat
 does not provide support if I do so :)

 Is everything 'loadable modules' then?  I can't imagine you have some
 mammoth kernel running on your system, do you?  with every conceivable
 piece of hardware configured in?

Yes, vendor kernels are very modular--most drivers, packet filtering,
SCSI, etc. are all loadable modules.  You can of course build your own
kernel with only the drivers you need built-in, but it usually doesn't
make very much difference.  The module system works, in general,
extremely well.

-Doug



Re: [GENERAL] Viewing Database Scheme

2006-01-28 Thread Doug McNaught
Rich Shepard [EMAIL PROTECTED] writes:

When installing the application I specified the database name as 'contacts'
 (not very innovative or clever, but descriptive). When I open the database
 with 'psql contacts' and ask to have the tables dumped (with \d), they go
 streaming by on the display. Of course, the bash 'tee' or 'less' commands
 don't work to allow me to capture the stream to a file or page through the
 output.

You can either use 'pg_dump --schema-only' as another poster
suggested, or use the '\o' command in psql.

-Doug



Re: [GENERAL] Viewing Database Scheme

2006-01-28 Thread Doug McNaught
Rich Shepard [EMAIL PROTECTED] writes:

 On Sat, 28 Jan 2006, Eric B. Ridge wrote:

 Again, you can't use redirection via the psql prompt.  But you can do it
 via your shell command line:
 $ psql -c "\dt" > xrms.tables

Well, that doesn't seem to be working here, either:

 [EMAIL PROTECTED] ~]$ psql -c contacts \dt > xrms.tables
 psql: FATAL:  database "\dt" does not exist

 [EMAIL PROTECTED] ~]$ psql -c contacts \dt > xrms.tables
 psql: FATAL:  database "rshepard" does not exist

 [EMAIL PROTECTED] ~]$ psql -c contacts
 psql: FATAL:  database "rshepard" does not exist

Eric left off the database argument (which defaults to your user
name), which was a little misleading, but his syntax does work:

[EMAIL PROTECTED]:~$ psql -c '\dt' gateway
List of relations
 Schema | Name | Type  |  Owner   
--------+------+-------+----------
...

Redirecting to a file is left as an exercise to the reader.

 Alternatively, you can use psql's \o [FILE] command to redirect
 query results to a file:
 contacts=# \o /tmp/xrms.tables
 contacts=# \dt
 contacts=#
 That'll send all output to /tmp/xrms.tables.

 This creates the file, but it's empty.

When I do this, after exiting 'psql' the file is populated.  It may be
a buffering issue, as another poster has said.  What you can also do
is close the file by setting output back to the terminal:

gateway=# \o /tmp/foo
gateway=# \dt
gateway=# \!cat /tmp/foo   --- empty at this point
gateway=# \o   --- switch output to the terminal
gateway=# \!cat /tmp/foo   --- now it's there
List of relations
 Schema | Name | Type  |  Owner   
--------+------+-------+----------


I'm really surprised that you managed to think 'pg_dump --schema-only'
is an SQL command.  It's listed nowhere in the SQL syntax reference,
and is listed in the manual as one of the utility commands that are
run from the shell.

Your flailing about, randomly trying different argument combinations,
suggests that you aren't understanding what you read, and aren't
bothering to try to understand the error messages you get.  You could
have figured out the first issue, certainly, by reading manpages and
error messages.  The second one is a bit tricky unless you understand
Unix stdio buffering, which I wouldn't necessarily expect.  So I'll
give you that one.  :)

-Doug



Re: [GENERAL] Accessing an old database from a new OS installation.

2006-01-27 Thread Doug McNaught
A. Kretschmer [EMAIL PROTECTED] writes:

 am  27.01.2006, um 14:21:31 + mailte Matthew Henderson folgendes:
 Okay, so if I have 7.4 installed and I have the old
 harddisk mount under /mnt/hda can I do something
 like 
 
   pg_dump /mnt/hda/path_to_old_database > dump.txt

 No, this is impossible IMHO. You need a PG server of that version
 (7.4), to read the data. pg_dump is only a client for the DB, it can't
 read the files.

In addition, it's considered best practice to run (in this case) the
7.4 server against the old database, and use the 8.X pg_dump to dump
it out for loading into the 8.X server.

-Doug



Re: [GENERAL] Are rules transaction safe?

2006-01-27 Thread Doug McNaught
Oliver Fürst [EMAIL PROTECTED] writes:

 Hi all,

 I have a question regarding rules on views. Are the commands inside a
 ON INSERT (or UPDATE) DO INSTEAD (implicit) transactions for postgres?
 I tried to put BEGIN; and COMMIT; around the commands in a ON ... DO
 INSTEAD ( ) block, but keep getting a syntax error.

Everything that happens in Postgres is inside either an implicit or
explicit transaction, so you can't do BEGIN/COMMIT inside rules or
functions. 

You might be able to use savepoints, depending on what you're actually
trying to do.

-Doug



Re: [GENERAL] Are rules transaction safe?

2006-01-27 Thread Doug McNaught
Oliver Fürst [EMAIL PROTECTED] writes:

 Basically I'm worried that the whole relying on the last value of a
 sequence isn't such a great idea.

'currval()' is specifically written to Do The Right Thing.  See the
docs.

-Doug



Re: [GENERAL] Are rules transaction safe?

2006-01-27 Thread Doug McNaught
Oliver Fürst [EMAIL PROTECTED] writes:

 But is nowhere stated if (multiple) commands inside a rule are treated
 as an implicit transaction as a whole.

If you don't specifically open a transaction at the top level (i.e. in
'psql' or SQL from your application's code), PG will encapsulate every
query you execute within its own transaction.  The statements in any
rules called will execute in the context of that transaction, along
with any other operations such as trigger calls, table updates,
whatever.  If something in that implicit transaction fails, it will be
rolled back; otherwise, it will be committed once the statement is
finished. 

Also, you need to think of rules as 'query rewrites' rather than
'code that executes', because that's what they are.
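For instance, a rule like this one (view and table names invented) is
nothing more than a rewrite; both statements it expands into simply run
inside whatever transaction the original INSERT belongs to:

  CREATE RULE my_view_insert AS ON INSERT TO my_view
  DO INSTEAD (
      INSERT INTO base_table (id, val) VALUES (NEW.id, NEW.val);
      INSERT INTO change_log (id) VALUES (NEW.id);
  );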

I hope that helps.

-Doug



Re: [GENERAL] Access Problem After Version Upgrade -- Update

2006-01-26 Thread Doug McNaught
Rich Shepard [EMAIL PROTECTED] writes:

So, now I can see the tables in the various databases, but SQL-Ledger still
 cannot:

 Internal Server Error
 The server encountered an internal error or misconfiguration and was unable
 to complete your request.

How do I get this fixed, please?

That's a webserver error usually meaning a CGI script crashed or gave
bad output.  Look in the webserver error log file to see what
happened.

-Doug



Re: [GENERAL] Access Problem After Version Upgrade -- FIXED

2006-01-26 Thread Doug McNaught
Rich Shepard [EMAIL PROTECTED] writes:

 On Thu, 26 Jan 2006, Joshua D. Drake wrote:

 Although I am glad you were able to get up and running, typically you don't
 want to move libs like that. Instead update your /etc/ld.so.conf and run
 ldconfig.

True, Josh. What I'd prefer to do is remove /usr/local/pgsql/ once I know
 that nothing there is being used any more with the 8.x versions of postgres.
 I thought of making softlinks, but those would fail as soon as the directory
 tree was removed.

FWIW, my practice when compiling PG from source (which I usually do)
is as follows:

* Configure each release with '--prefix=/usr/local/pgsql-8.1' (or
  whatever)

* Create (as far as disk space permits) independent data directories
  for each version--e.g. '/var/lib/pgsql-8.1/data'.

* Symlink whatever version I'm running to '/usr/local/pgsql'.  Have
  the standard PATH contain '/usr/local/pgsql/bin'.

This way, when I do an upgrade, I can test beforehand by setting PATH
and LD_LIBRARY_PATH (if applicable) appropriately, migrating my data,
running whatever programs I want to test, then switching the symlink.
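In shell terms that's roughly (version number just an example):

  $ ./configure --prefix=/usr/local/pgsql-8.1 && make && make install
  $ ln -sfn /usr/local/pgsql-8.1 /usr/local/pgsql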

It's worked fairly well, and it's nice to have the old binaries and
data directory sitting there to switch back to if something breaks
horribly.  The problem I've seen with RPM upgrades is that if
something breaks in the data upgrade process, the old binaries are
gone and it's a pain to get back to where you were.

Putting locally-compiled software in /usr/bin or /usr/lib is a bad
idea, generally, as you may confuse your package manager.

-Doug



Re: [GENERAL] Trigger question: ROW or STATEMENT?

2006-01-25 Thread Doug McNaught
Patrick Hatcher [EMAIL PROTECTED] writes:

 Attempting to do my first trigger and I'm confused about which FOR EACH I
 should use: ROW or STATEMENT.  I import about 80K rows into an existing
 table each day.  If I do a STATEMENT, will the changes only happen on the
 new 80K rows I inserted or will it be for all rows in the table - currently
 about 12M.

If you told us what you want the trigger to do it would probably be
helpful.

-Doug



Re: [GENERAL] Trigger question: ROW or STATEMENT?

2006-01-25 Thread Doug McNaught
Patrick Hatcher [EMAIL PROTECTED] writes:

 Here is the trigger the way it is currently written.  I add some additional
 information from another table:

If you're modifying each row before it goes in, it should definitely
be a FOR EACH ROW trigger.

-Doug



Re: [GENERAL] Postgresql Segfault in 8.1

2006-01-24 Thread Doug McNaught
Benjamin Smith [EMAIL PROTECTED] writes:

 in /var/log/messages, I see 

 Jan 24 17:00:04 kepler kernel: postmaster[26185]: segfault at 
 2516d728 
 rip 0043c82c rsp 007fbfffddd0 error 4

 The insert statement is long, but doesn't seem to violate anything strange - 
 no weird characters, and all the fields have been properly escaped with 
 pg_escape(). 

 What information do you need to help figure this out? 

Reproduce it with gdb attached to the backend process and post the
backtrace...  You may need to recompile PG with debugging symbols to
get the most info.
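Roughly, the procedure looks like this (the binary path and PID below
are placeholders--attach to the backend your client is connected to,
which 'ps' or pg_backend_pid() will tell you):

  $ gdb /usr/local/pgsql/bin/postgres 12345
  (gdb) continue
  ... re-run the INSERT from the client and wait for the crash ...
  (gdb) bt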

-Doug



Re: [GENERAL] numeric data type?

2006-01-22 Thread Doug McNaught
Zlatko Matić [EMAIL PROTECTED] writes:

 So, it seems that numeric without parameters (precision, scale) behaves
 similarly to float, but is exact. Am I right or did I misunderstand?

Right.  It's also considerably slower, since floating point
calculations can use the hardware.  Unless you're doing a huge number
of computations this may not be an issue.

-Doug



Re: [GENERAL] sequences not restoring properly

2006-01-20 Thread Doug McNaught
Brian Dimeler [EMAIL PROTECTED] writes:

 I'm trying to transition a database from one server to another, the
 old one running Postgres 7.4.1 and the new, 8.1.1. When I try to
 restore using a pg_dump plaintext sql file from a nightly backup via
 the usual

 psql thedb < backup.sql

The best way to do this is to run the 8.1.1 pg_dump against the 7.4.1
server.  The new pg_dump will know better how to create a backup that
8.1.1 will like.

-Doug



Re: [GENERAL] Page-Level Encryption

2006-01-20 Thread Doug McNaught
David Blewett [EMAIL PROTECTED] writes:

 In reading the documentation of Peter Gutmann's Cryptlib, I came
 across this section:
 The use of crypto devices can also complicate key management, since
 keys generated or loaded into the device usually can't be extracted
 again afterwards. This is a security feature that makes external
 access to the key impossible, and works in the same way as cryptlib's
 own storing of keys inside it's security perimeter. This means that if
 you have a crypto device that supports (say) DES and RSA encryption,
 then to export an encrypted DES key from a context stored in the
 device, you need to use an RSA context also stored inside the device,
 since a context located outside the device won't have access to the
 DES context's key.

 I'm not familiar with how his library protects keys, but this suggests
 that it would be possible to use it as a basis for transparent
 encryption.

He's talking about hardware crypto devices, which most systems don't
have (though they're certainly available).  If you don't have one of
those, then the key has to be stored in system memory.

-Doug



Re: [GENERAL] mount -o async - is it safe?

2006-01-19 Thread Doug McNaught
Martijn van Oosterhout kleptog@svana.org writes:

 That depends. As long as the data is appropriately sync()ed when
 PostgreSQL asks, it should be fine. However, from reading the manpage
 it's not clear if fsync() still works when mounted -o async. 

 If -o async means all I/O is asyncronous except stuff explicitly
 fsync()ed you're fine. Otherwise...

That's the way it works.  Async is the default setting for most
filesystems, but fsync() is always honored, at least as far as
non-lying hardware will allow.  :)

 The usual advice is to stick the WAL on a properly synced partition and
 stick the rest somewhere else. Note, I have no experience with this,
 it's just what I've heard.

This might not be optimal, as having every write synchronous actually
results in more synced writes than are strictly necessary.

-Doug



Re: [GENERAL] Insert a default timestamp when nothing given

2006-01-19 Thread Doug McNaught
Martijn van Oosterhout kleptog@svana.org writes:

 Not directly. I suppose you could create a view that converted the
 value to the right date on insert.

I think a trigger might make more sense.
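Something along these lines (table and column names invented; adjust
the test to match whatever "nothing given" looks like coming from your
client):

  CREATE FUNCTION default_ts() RETURNS trigger AS '
  BEGIN
      IF NEW.created_at IS NULL THEN
          NEW.created_at := now();
      END IF;
      RETURN NEW;
  END;
  ' LANGUAGE plpgsql;

  CREATE TRIGGER default_ts BEFORE INSERT ON mytable
      FOR EACH ROW EXECUTE PROCEDURE default_ts();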

-Doug



Re: [GENERAL] mount -o async - is it safe?

2006-01-19 Thread Doug McNaught
Shane Wright [EMAIL PROTECTED] writes:

 Actually I thought that *all* the database had to have fsync() work 
 correctly; 
 not for integrity on failed transactions, but to maintain integrity during 
 checkpointing as well.  But I could well be wrong!

I think you're right, but what I was thinking of is the scenario where
WAL writes are done in small increments, then committed with fsync()
once a full page has been written.  With a sync mount this would
result in the equivalent of fsync() for every small write, which would
hurt a lot.

I dimly recall this sort of thing being discussed in the past, but I
don't know offhand whether PG does its WAL writes in small chunks or
page-at-a-time.

-Doug



Re: [GENERAL] Upgrade Problem: 7.4.3 - 8.1.2

2006-01-19 Thread Doug McNaught
Rich Shepard [EMAIL PROTECTED] writes:

 On Thu, 19 Jan 2006, Doug McNaught wrote:

 Did you tell pg_restore to read from a file?  Otherwise it will try to read
 from your terminal, which probably isn't what you want.

 Doug,

No, I didn't. I've no idea where the file was dumped, so I've no idea of
 the name or location. Reading in Douglas*2 PostgreSQL book (pages 872-873),
 they write nothing about giving pg_dumpall a file name/location nor that
 pg_restore needs that information. They do write that pg_restore wants a dump
 produced with -format=c or -format=t, without further explanation.

pg_dumpall writes to standard output, so you should have seen an
enormous spew of data at your terminal.  I'm surprised you didn't.
The manpages in the official Postgres documentation are quite clear
about the behavior of these utilities; your book doesn't seem to be.

The recommended way to do an upgrade (if you compile from source) is:

1) Compile the new version of PG and install it to a different place.
2) Use the new pg_dump to connect to the old server and dump all the
   databases.
3) Make sure your PATH points to the new binaries (or run them by
   hand).
4) Run initdb to create a new data directory in a different place from
   the old one.
5) Stop the old server and start the new one.
6) If you've used pg_dumpall, your backup will be in SQL text format,
   so you feed it to 'psql'.
7) If you've done pg_dump --format=c for each database, then you need
   to use pg_restore.  pg_restore is only for the binary format
   dumps.

This preserves the old data and binaries so you can back out if you
need to.  Naturally you need enough disk space for three copies of the
data.
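Concretely, the middle steps look something like this (paths and port
are made up):

  $ /usr/local/pgsql-8.1/bin/pg_dumpall -p 5432 > /var/tmp/all.sql
  $ /usr/local/pgsql-8.1/bin/initdb -D /var/lib/pgsql-8.1/data
  (stop the old postmaster, start the new one)
  $ /usr/local/pgsql-8.1/bin/psql -f /var/tmp/all.sql template1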

Now, since you're using a distribution's packages and upgrade
procedure, it's not clear what happened to your data.  You might want
to look at the server logfiles (if any) and ask on the Slackware
mailing lists to see if anyone else has had this problem.

-Doug



Re: [GENERAL] Upgrade Problem: 7.4.3 - 8.1.2

2006-01-19 Thread Doug McNaught
Rich Shepard [EMAIL PROTECTED] writes:

Well, so much for the book. It did seem to be rather sparse on the upgrade.
 Hmmm-m-m. Wonder what is the most efficient way to get going again. Think
 I'll try the 'pg_dumpall -format=c' from the old directory and see if there's
 a new file there.

I don't think pg_dumpall supports the 'c' format--it only does SQL
(which should work fine for you unless you have large objects, which I
doubt SQL-Ledger uses).  Furthermore, whenever you use pg_dump or
pg_dumpall, you need to redirect it to a file:

$ pg_dumpall > /var/tmp/backup.sql

Otherwise the backup will go to your terminal and nowhere else--not
very useful.  :)

I highly suggest you read:

http://www.postgresql.org/docs/8.1/static/backup.html

Since it looks like you have both old and new data directories, you
might be able to get the old server running again, dump out the data
and load it into the new server.  Depends on whether your old binaries
got blown away by the upgrade.

PG is a bit tricky to upgrade and I haven't yet seen a distro upgrade
script that works really well.

-Doug



Re: [GENERAL] Upgrade Problem: 7.4.3 - 8.1.2

2006-01-19 Thread Doug McNaught
Rich Shepard [EMAIL PROTECTED] writes:

Using the 8.1.2 pg_dumpall from the 2004 data directory produces a 819K
 file. But, when I then cd to the new data directory and run:

 [EMAIL PROTECTED]:/var/lib/pgsql/data$ pg_restore /var/tmp/backup.sql
 pg_restore: [archiver] input file does not appear to be a valid archive

 However, the file is readable and looks to have all the databases and tables
 in it. Perhaps I'm closer now?

Yes, as I said before, 'pg_dumpall' only produces SQL format dumps,
which you restore using 'psql'.  'pg_restore' is only for binary
dumps.

-Doug



Re: [GENERAL] bigger blob rows?

2006-01-18 Thread Doug McNaught
Eric Davies [EMAIL PROTECTED] writes:

Back in the days of 7.4.2, we tried storing large blobs  (1GB+) in
postgres but found them too slow because the blob was being chopped
into 2K rows stored in some other table.
However, it has occurred to us that if it was possible to configure
the server to split blobs into bigger pieces, say 32K,  our speed
problems might diminish correspondingly.
Is there a compile time constant or a run time configuration entry
that accomplish this?

I *think* the limit would be 8k (the size of a PG page) even if you
could change it.  Upping that would require recompiling with BLCKSZ
set larger, which would have a lot of other consequences.

-Doug



Re: [GENERAL] Moving PostgreSQL data directory on Windows

2006-01-13 Thread Doug McNaught
Ottó Havasvölgyi [EMAIL PROTECTED] writes:

 Hello,

 I would like to move the data directory to another location. I have done this:
 1. Stop PostgreSQL
 2. Move data directory
 3. Create a PGDATA env. variable to the new location
 4. Start PostgreSQL

 And it cannot start, because it cannot find postgresql.conf. (in Event log)
 What should I do now?


Dumb question, but are you sure you exported the PGDATA variable?  Is
it pointing to the directory that actually contains postgresql.conf?

If you still have problems, post the exact error message that appears
in the logs.

-Doug



Re: [GENERAL] Moving PostgreSQL data directory on Windows

2006-01-13 Thread Doug McNaught
Doug McNaught [EMAIL PROTECTED] writes:

 Dumb question, but are you sure you exported the PGDATA variable?  Is
 it pointing to the directory that actually contains postgresql.conf?

 If you still have problems, post the exact error message that appears
 in the logs.

Duh, I didn't read the subject line and assumed it was on Unix.  :)

-Doug



Re: [GENERAL] Plans for 8.2?

2006-01-12 Thread Doug McNaught
Jeff Trout [EMAIL PROTECTED] writes:

 Isn't the [expensive db name here]'s replication/failover just an
 expensive addon?
 As in if you don't pay for it you don't get it.

 So we're basically in the same boat as them.. just an add on. we just
 offer more variety.

Well, [cheap and crappy open-source db name here]'s replication is
built in, but we already know we don't want to take them as an
example. :)

-Doug



Re: [GENERAL] (Select *) vs. (Select id) from table.

2006-01-08 Thread Doug McNaught
Scott Ribe [EMAIL PROTECTED] writes:

 The time the DB needs to find the record
 should be the same since the record has to be found before the resultset is
 assembled.

 What if the query can be satisfied from an index? I don't know if PostgreSQL
 has this kind of optimization or not. 

Nope.  Tuple visibility isn't stored in indexes, so it still has to
visit the heap to see if a row is visible to your transaction.

-Doug



Re: [GENERAL] PGError: server closed the connection unexpectedly

2006-01-07 Thread Doug McNaught
Dave Steinberg [EMAIL PROTECTED] writes:

 My biggest problem is the lack of any real error message on the server.
 I don't see anything wrong in the system logs, and there's no core
 file in the /var/postgresql directory. 

Are you sure core files are enabled; i.e. the server is running with
'ulimit -c unlimited' ?

 I did a 'vacuumdb -afz' just as
 a shot in the dark, without affect.  Pretty much all I see in the logs
 is this:

 USER%DB x.y.z.a(51478) 487LOG:  unexpected EOF on client connection

This means a client is dying or closing its connection prematurely,
and would seem to be a different problem.  It shouldn't ever cause the
server to crash.

 Googling turned up a few reports suggesting bad hardware, or corrupted
 indexes, but I don't think that's the case here.

 Any starting points or ideas would be greatly appreciated.

Make sure the server is able to dump a core file, and perhaps crank up
the logging level.

-Doug



Re: [GENERAL] PGError: server closed the connection unexpectedly

2006-01-07 Thread Doug McNaught
Dave Steinberg [EMAIL PROTECTED] writes:

My biggest problem is the lack of any real error message on the server.
I don't see anything wrong in the system logs, and there's no core
 file in the /var/postgresql directory.
 Are you sure core files are enabled; i.e. the server is running with
 'ulimit -c unlimited' ?

 Yes:

 $ whoami
 _postgresql
 $ ulimit -c
 unlimited

But does the startup script for PG set the limits as well?  It's quite
possible that the PG daemon startup sequence and logging in as the PG user go
through different scripts.

Also, make sure that you're looking in the right place for core
dumps--OpenBSD may put them somewhere weird by default.

[...]

 That looks to me like a clean and normal exit.  This is pointing more
 and more towards the client in the ruby case, isn't it?

Yeah, if the server were crashing its exit code would be greater than
127.

Also, usually when a backend crashes, the postmaster takes the whole
server down on the assumption that shared memory may have been
corrupted.  It doesn't sound like this is happening to you, which
again points to a client problem.

You're not using the same PG connection from two different threads, or
fork()ing and trying to use the same connection in the parent and the
child, or anything like that?

-Doug



Re: [GENERAL] PGError: server closed the connection unexpectedly

2006-01-07 Thread Doug McNaught
Dave Steinberg [EMAIL PROTECTED] writes:

 You're not using the same PG connection from two different threads, or
 fork()ing and trying to use the same connection in the parent and the
 child, or anything like that?

 Aha!  In the ruby code, I am forking!  I'll make the child reconnect
 and see if that helps (I am almost sure this will fix it).  This one I
 should have guessed - I dealt with similar stuff in perl somewhat
 recently.

Yay!  I thought it might be something like that.

 The pg_dumpall problem...  I'll crank the debug level and see if
 there's anything interesting there, and if so I'll post about it again.

You could also, if you have to, run the BSD equivalent of 'strace'
(ktrace?) against the backend that pg_dump is connected to, and see
what might be going on.  'strace' is the gun I pull out when logfiles
aren't working for me.  :)

-Doug



Re: [GENERAL] Moving Tablespaces

2006-01-03 Thread Doug McNaught
Allen Fair [EMAIL PROTECTED] writes:

 Our databases can get large and we want to find the best way to plan for
 when a database outgrows its current server.

 How about a replication scheme and cutover? If so, is there a preferred
 replication package to support this?

Slony-I was designed for this scenario (among others) and is actively
developed.  There are also other replication solutions, some of them
proprietary.

-Doug



Re: [GENERAL] storing PDFs

2006-01-01 Thread Doug McNaught
Leonel Nunez [EMAIL PROTECTED] writes:

 You can use ByteA but the size will be about 4 times bigger

Are you sure?  The intermediate form for bytea (escaped single-quoted
strings) is pretty inefficient, but once in the database the data is
stored as binary and can even be compressed.

Plus, I'm pretty sure the protocol supports transmitting bytea fields
as binary over the wire if your client software can handle it.

-Doug



Re: [GENERAL] restarting after power outage

2005-04-27 Thread Doug McNaught
Tom Lane [EMAIL PROTECTED] writes:

 Jon Lapham [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 It is.  We have been fooling with the postmaster startup logic to try to
 eliminate this gotcha, but it's only very recently (8.0.2) that I think
 we got it right.

 So, then it would be correct to change my init scripts to do the 
 following:  (if so, this patch can be applied to the 7.4 branch)

 I would recommend strongly AGAINST that, because what you just did was
 remove the defense against starting two postmasters concurrently in the
 same data directory (which would be a disaster of the first magnitude).
 This is not a problem for bootup of course, but if you ever use this
 script to start the postmaster by hand, then you are playing with fire.

What I have done is to create a separate init.d script that removes
the PID file, and arrange for it to run before the PG startup script.
That way you can use the regular script to stop and start without
danger, but on a bootup after an unclean shutdown the PID file will
get removed before PG gets started.  If you're dumb enough to run the
removal script by hand while PG is running, you deserve what you get.  :)

-Doug



Re: [GENERAL] restarting after power outage

2005-04-27 Thread Doug McNaught
Uwe C. Schroeder [EMAIL PROTECTED] writes:

 Is this just me or did anyone actually think about adding a UPS to
 the machine and monitor it with NUT ?  That way the machine would
 shut down properly, making the whole stale pid-file issue
 irrelevant.

UPSs fail.  People kick out power cords.  It's good to be able to deal
with it.

-Doug



Re: [GENERAL] [HACKERS] plPHP in core?

2005-04-04 Thread Doug McNaught
Robert Treat [EMAIL PROTECTED] writes:

 If by stripped down you mean without postgresql database support then
 I'll grant you that, but it is no different than other any other pl
 whose parent language requires postgresql to be installed.  If packagers
 are able to handle those languages than why can't they do the same with
 PHP ?

Other languages don't require PG to be installed in order to compile
them.  For example, you can build Perl (with no Postgres on the
system), build Postgres and then build DBD::Pg as a completely
separate step.

-Doug



Re: [GENERAL] libpq usage in Samba

2005-03-30 Thread Doug McNaught
Fernando Schapachnik [EMAIL PROTECTED] writes:

 I'm trying to figure out why Samba is failing at high loads if using
 Postgres as a backend (7.3.9 in my setup). On startup it makes one
 connection only against the database which is shared among all the
 Samba processes. In turn, each issue a PQexec() over the same
 connection (a SELECT, actually). Is that a safe use? I would think
 it is not, but not really sure. I mean, does libpq multiplex queries
 and handle concurrency correctly in a such a scenario?

No, it doesn't.  You need to figure out a way to have each Samba
process open its own connection.

-Doug



Re: [GENERAL] Upgrade data

2005-03-30 Thread Doug McNaught
josue [EMAIL PROTECTED] writes:

 I did a pg_dumpall and this sentence to restore it back

 ./psql template1 -U postgres -p 9981 < /home2/tmp/dbtest.tar

You need to use 'pg_restore' for tar format dumps.

-Doug



Re: [GENERAL] Problem connecting to postmaster

2005-03-21 Thread Doug McNaught
Glenn Sullivan [EMAIL PROTECTED] writes:

 Hi,

 I have installed native postgresql8.0.1 on a PC running Windows XP
 professional.
 I am having trouble connecting to it from my application.  The application
 is a combination of java and C++ which was developed on Sun Solaris (Unix).
 Thus on the PC, I have installed Microsoft Interix (Service For Unix) on
 the PC to run the application.  I don't know if this is the problem or not,
 but I am thus trying to connect to the DB running native from an application
 running from Interix.  Does anyone know if this is inherently a problem?

Have you tried writing a small standalone Java application to test
connecting to the database?  That would take Interix out of the
equation.

 My java code connects to the DB using
  DriverManager.getConnection("jdbc:postgresql://gandalf:5432/mydb",
  user, passwd);

 I get a PSQLException with a null message when this is executed.

 I start up the postmaster with -i to allow TCP/IP connections.
 I set Windows Firewall to off

Does netstat show the Postgres server listening on 5432?  Have you
tried using localhost instead of gandalf in the JDBC URL?

-Doug



Re: [GENERAL] prelimiary performance comparison pgsql vs mysql

2005-03-16 Thread Doug McNaught
Joshua D. Drake [EMAIL PROTECTED] writes:

 Also, a 32-bit machine can only hold so much RAM. If I'm correct, there
 are ways to address more memory than that on a 32 bit machine, but I
 wonder at what cost? In other words, is it a good idea to address more
 than 4GB on a 32 bit machine? If not, is it a reasonable choice to
 invest in 64 bit if you want 4GB of RAM? Or are you better off just
 spending the money on RAID and staying at 4GB?

 It entirely depends on the database, but note that the 32-bit limit of 4GB
 is per CPU. So if you have 4 CPUs you can have 16GB of ram.

It's actually per-process, not per-CPU.  The x86 ISA only has 32-bit
address registers, so a process can only see 4GB max.  The PAE
extensions that came in with the PPro allow for more address bits in
the page tables, so each process sees a different subset of a larger
pool of physical RAM.

The implication of this for PostgreSQL on x86 is that each backend has
a maximum of 4GB (actually, usually more like 3 to allow for kernel
address space) that must include shared buffers, server code and data,
and memory used for sorting etc.

On 64-bit platforms, the 4GB address space limitation disappears, and
a single backend could use 20GB for a sort, if the memory was
available and the administrator allowed it.

 However, you should be running Opterons anyway.

Yup.  :)

-Doug



Re: [GENERAL] Referencing created tables fails with message that

2005-02-27 Thread Doug McNaught
Tommy Svensson [EMAIL PROTECTED] writes:

 I have just installed Postgresql and tried it for the first time.

 One very serious problem I ran into was when actually trying to use
 created tables.
 Creating a simple table without any foreign keys works OK, but after
 creating the
 table it is not possible to do a select on it! I tried the following
 variants:

 SELECT * FROM table;
 SELECT * FROM public.table;
 SELECT * FROM schema.public.table;

 All result in the message "The relation table does not exist!" or
 "The relation public.table does not exist!".

I bet it's a case problem.  Please give the actual table name and the
exact SQL you are using to create and access it.
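Just to illustrate the usual trap (table name invented):

  CREATE TABLE "Person" (id integer);

  SELECT * FROM Person;      -- fails: unquoted, this looks for person
  SELECT * FROM "Person";    -- works: quoting preserves the case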

-Doug



Re: [GENERAL] Update command too slow

2005-02-08 Thread Doug McNaught
Venkatesh Babu [EMAIL PROTECTED] writes:

 Hello,

 Thanks for providing info... I tried disabling
 autocommit, as suggested by Mr. Greg Stark, I tried
 issuing the command set autocommit to off, but got
 the following error message:

 ERROR:  SET AUTOCOMMIT TO OFF is no longer supported

Autocommit is handled by the drivers now.

 Also, I can't implement the suggestions of Mr.
 Christopher Browne, because I'm not working with
 database directly. There is an abstract layer built
 over the database. This abstract layer provides an
 interface between application objects and data tables
 corresponding to those objects. Our application is
 developed over this abstract layer. Infact, we are
 using Collection datatype provided by this layer.
 Collection is similar to java vectors in that it can
 store any kind of persistable objects, also it
 implements the save method (which updates the tables
 corresponding to each object present in the
 collection), hence one update statement generated per
 object present in the collection.

Sounds like Hibernate--is that what you're using?  Make sure you use
your mapping library's transaction mechanism to execute the save()
inside a transaction and you may get some speedup.

-Doug



Re: [GENERAL] Introducing the future Debian

2005-02-04 Thread Doug McNaught
Martin,

This looks really good.  I wish it were going into Sarge, though of
course the timing isn't right for that.  :)

A couple things I noticed about the automated upgrade procedure
(pg_version_upgrade):

1) Since it uses pg_dumpall, it doesn't seem to be capable of handling
   databases with large objects (these have to be dumped individually
   using one of the non-text dump formats and the -b option).
   Furthermore, I believe it will fail silently, as pg_dumpall will
   simply leave out the pg_largeobject table and return success for
   such databases, which will be left with dangling LO references.

2) It is fairly commonly recommended, when doing a dump/restore
   upgrade, to use the new version's pg_dump to dump out the old
   database, as that generally makes it easier to load into the new
   version without manual surgery on the dump file.  The sentence "All
   operations are done with the software version appropriate to the
   cluster version." in the document seems to preclude this option.

Thanks for putting so much thought into this!

-Doug



Re: [GENERAL] postmaster listening on specified addresses

2005-01-14 Thread Doug McNaught
Administrator [EMAIL PROTECTED] writes:

 Hello,

 I was wondering if there's a correct method for running postmaster
 with the option of listening on a select group of addresses.  Does
 postmaster accept multiple -h hostname options on the command-line,
 or alternatively a comma-separated list of hostnames or addresses?
 What if you have a server with multiple network interfaces and
 addresses assigned to each, and you only want postmaster to listen on
 a specific subset of those addresses in addition to localhost?  Does
 anyone know if there a supported method for doing this?

It would require changes to the existing code (see below).

 The documentation for the -h option only states: Specifies the
 TCP/IP host name or address on which the postmaster is to listen for
 connections from client applications. Defaults to listening on all
 configured addresses (including localhost).  Clearly the server is
 capable of listening on mutliple addresses since the default is all of
 them, but the -h option is described only for use with a single
 address.

Actually, in the sockets API, to listen on all configured addresses
you specify the wildcard address (0.0.0.0).  There is no call to
listen on an arbitrary list of addresses.  What you are looking for could be
done, but it would require multiple listening sockets, one for each
address, which (as far as I know) the code doesn't currently do.

-Doug



Re: [GENERAL] Adding UNIQUE constraint on NULL column

2005-01-13 Thread Doug McNaught
Dave Smith [EMAIL PROTECTED] writes:

 I am trying to add a unique constraint on a column that can be null. The
 documentation states that null is treated as non equal values but I want
 them to be equal. Is there another way of doing this other than writing
 a before insert trigger?

UNIQUE constraints on NULLable columns work fine.  It's not clear from
the above what you're looking for.

Are you really saying that you want 'NULL = NULL' to return 't'?
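For what it's worth, here's how a UNIQUE column treats NULLs (throwaway
table):

  CREATE TABLE t (x integer UNIQUE);
  INSERT INTO t VALUES (1);
  INSERT INTO t VALUES (1);      -- rejected: duplicate key
  INSERT INTO t VALUES (NULL);
  INSERT INTO t VALUES (NULL);   -- allowed: NULLs don't count as equal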

-Doug



Re: [GENERAL] Adding UNIQUE constraint on NULL column

2005-01-13 Thread Doug McNaught
Dave Smith [EMAIL PROTECTED] writes:

 On Thu, 2005-01-13 at 10:12, Doug McNaught wrote:

 Are you really saying that you want 'NULL = NULL' to return 't'?

 Yes

Well, that's not how NULL works.

-Doug



Re: [GENERAL] Shared disk

2004-12-11 Thread Doug McNaught
Nageshwar Rao [EMAIL PROTECTED] writes:

Hi,

I would like to use shared disk for two Postgresql database. I mean
that two Postgresql Database point to same $PGDATA directory. Is this
 possible in Postgresql? Clarification is appreciated.

No, this is not possible.

You may want to look at replication solutions such as Slony-I.

-Doug



Re: [GENERAL] Postgres not using shared memory

2004-12-10 Thread Doug McNaught
Karl O. Pinc [EMAIL PROTECTED] writes:

 Hi,

 I can't seem to get postgresql to use shared memory and performance is
 terrrible.

1) Linux doesn't track shared pages (which is not the same as shared
   memory) anymore--the field in the 'free' output is just there to
   avoid breaking software.  Use the 'ipcs' command to list shared
   memory segments--you'll find Postgres is using what you told it to
   (otherwise it wouldn't start up at all).
2) The -performance list is a good place to find out why queries are
   running slowly.

-Doug



Re: [GENERAL] Natural ordering in postgresql? Does it exist?

2004-12-10 Thread Doug McNaught
Clark Endrizzi [EMAIL PROTECTED] writes:

 Hi all,
 I have a field that I'll be ordering and  I noticed that ordering is
 done logically and would confuse my users here (1,12,16,4,8, etc).

Sounds like you're storing a number in a text field.  Numeric fields
sort in numerical order.
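If the column has to stay text for some reason, you can cast just for
the sort (column name made up):

  SELECT * FROM items ORDER BY item_no::integer;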

-Doug



Re: [GENERAL] Network authentication

2004-12-06 Thread Doug McNaught
Bob Parnes [EMAIL PROTECTED] writes:

 I am having trouble connecting to a database on a debian server from a 
 client system, also debian. Acccording to the documentation, this is
 possible without a password and offers the following example,

 hosttemplate1   all 192.168.93.0  255.255.255.0 \
   ident sameuser

You need to be running an ident daemon on the client machine, and also
to be aware of the security issues involved with ident.

 In addition I noticed that if I have a second line in the pg_hba.conf file,

 hostall all 192.168.1.0   255.255.255.0 md5

 coming before the other line, I can connect to the server database using
 a password. However, if it follows the line, I cannot. Am I doing 
 something wrong here also?

Only the first matching line in pg_hba.conf is used.

-Doug



Re: [GENERAL] SSL confirmation

2004-12-05 Thread Doug McNaught
Michael Fuhr [EMAIL PROTECTED] writes:

 On Sun, Dec 05, 2004 at 09:10:42PM +, Andrew M wrote:

 The map i make reference to is a Jboss map used to make a jndi 
 connection to postgreSQL. What document do I need to access to get hold 
 of the environment variables, namely PGSSLMODE? 

 I don't know if the J-stuff wraps libpq or if it implements the
 communications protocol on its own. 

The latter.  AFAIK it doesn't use environment variables.  See the JDBC
driver docs for how to set options when connecting.

-Doug

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] general questions on Postgresql and deployment on

2004-12-03 Thread Doug McNaught
Calvin Wood [EMAIL PROTECTED] writes:

 symbolic link. But on win32, there is no equivalent. However, even under 
 *nix system, I believe symbolic link can only be created for directories on 
 the same hard drive. This seems less than optimal. Typically, one would 
 place database files on RAID 5 drives (to maximize random access speed) and 
 log files on mirrored drives (to maximize sequential access speed).

On Unix, a symbolic link can point anywhere.  It's hard links that are
limited to the same filesystem (not necessarily a disk drive in the
hardware sense).

-Doug

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Dont let those int8's drive you mad!!

2004-12-03 Thread Doug McNaught
Jeff Amiel [EMAIL PROTECTED] writes:

 If postgres knows the field is an int8, why do I have to cast it in my query?

This is fixed in 8.0.

-Doug

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] relation does not exist error

2004-12-02 Thread Doug McNaught
Simon Wittber [EMAIL PROTECTED] writes:

 WMSDV=# \dt
  List of relations
  Schema |   Name| Type  | Owner
 +---+---+---
  public | Customers | table | simon
  public | Persons   | table | simon
 (2 rows)

 WMSDV=# select * from Customers;
 ERROR:  relation "customers" does not exist

Identifiers get mashed to lower case unless you quote them:

SELECT * FROM "Customers";
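
If you'd rather not quote the name in every query, create the tables
with unquoted (lower-case) names in the first place -- a made-up
sketch:

    CREATE TABLE customers (id integer, name text);
    SELECT * FROM customers;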

-Doug

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] [HACKERS] Adding Reply-To: listname to Lists

2004-11-28 Thread Doug McNaught
Marc G. Fournier [EMAIL PROTECTED] writes:

 No, the poster will still be included as part of the headers ... what
 happens, at least under Pine, is that I am prompted whether I want to
 honor the reply-to, if I hit 'y', then the other headers *are* strip'd
 and the mail is sent right back to the list ...

I'm in the "Reply-To considered harmful" camp.  I also don't see any
real evidence that the current setup is causing problems. 

-Doug

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Moving/Using Postgres Binaries on multiple machines

2004-11-24 Thread Doug McNaught
Francis Reed [EMAIL PROTECTED] writes:

 If I want to create a postgres database on multiple machines, is the
 practice of tarring or zipping up binaries compiled on one machine and
 untarring them on another, and using the binaries (initdb etc) acceptable?.
 This removes the need for having a compiler and environment on the target
 machine, or is it necessary always to have such an environment on any
 machine you intend to use postgres on? Postgres seems to have enough
 environment options to allow this to work, overriding the original library
 locations and paths etc from the original machine on which postgres was
 compiled.

 Does anyone see a problem with this approach?

I've had no problems doing this, though I generally standardize the
install location across multiple machines.

-Doug

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Any equivalent of MSSQL Detach?

2004-11-21 Thread Doug McNaught
Kenneth Downs [EMAIL PROTECTED] writes:

 I am wondering if it is possible to pick up a database from one computer and
 move it over to another and then just plug it back in.

No, unless you do the whole database cluster, and the two machines are
the same architecture and have the same version of PG, compiled the
same way.  Basically there are a lot of interdependencies between
different parts of the database cluster, so you can't just grab a
single database and copy it over.

pg_dump is the way to go...

P.S. I'm replying to the list only because I can't be bothered to
demangle your address.  If you don't get the reply, tough toenails.

-Doug

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] How to clear linux file cache?

2004-11-16 Thread Doug McNaught
Janning Vygen [EMAIL PROTECTED] writes:

 So how do i easily empty all page/file caches on linux (2.4.24)?

Probably the closest you can easily get is to put the Postgres data
files on their own partition, and unmount/remount that partition
before running yuour tests.  Unmounting will sync and throw away all
cached file data for that partition.

-Doug

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] DROP DATABASE, but still there

2004-11-11 Thread Doug McNaught
Robert Fitzpatrick [EMAIL PROTECTED] writes:

 What does it mean when you drop a database and then recreate with the
 same name and all the objects are still there. I want to wipe out the db
 and put back from pg_restore. After I re-create the db, all the old
 tables are back before I run pg_restore.

Check the 'template1' database to see if the tables got put in there
by mistake at some point.
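
For example, in psql:

    \c template1
    \dt

If your tables show up there, that's the problem: CREATE DATABASE
copies template1 by default, so every new database inherits them.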

-Doug

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Transaction rollback - newbie

2004-11-09 Thread Doug McNaught
A. Mous [EMAIL PROTECTED] writes:

 Hi,

 I've got clients connected to pgsql via ODBC.  If they lose their connection
 abruptly, all un-committed transactions are automatically rolled-back (I'm
 assuming) but is there anything left behind that needs to be cleaned up on
 the server side with regards to the uncommitted transaction(s)?

No.  When the connection goes away, the backend will log an error,
roll back the transaction and exit cleanly.

-Doug

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Reasoning behind process instead of thread based

2004-10-27 Thread Doug McNaught
[EMAIL PROTECTED] writes:

 The developers agree that multiple processes provide
 more benefits (mostly in stability and robustness) than costs (more
 connection startup costs). The startup costs are easily overcome by
 using connection pooling.
 

 Please explain why it is more stable and robust? 

Because threads share the same memory space, a runaway thread can
corrupt the entire system by writing to the wrong part of memory.
With separate processes, the only data that is shared is that which is
meant to be shared, which reduces the potential for such damage. 

 Also, each query can only use one processor; a single query can't be
 executed in parallel across many CPUs. However, several queries running
 concurrently will be spread across the available CPUs.

 And it is because of the PostgreSQL process architecture that a query
 can't be executed by many CPU:s right?

There's no theoretical reason that a query couldn't be split across
multiple helper processes, but no one's implemented that feature--it
would be a pretty major job.

 Also, MySQL has a library for embedded aplications, the say:

 We also provide MySQL Server as an embedded multi-threaded library that
 you can link into your application to get a smaller, faster,
 easier-to-manage product.

 Do PostgreSQL offer anything similar?

No.  See the archives for extensive discussion of why PG doesn't do
this.

-Doug

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Importing a tab delimited text file - How?

2004-10-24 Thread Doug McNaught
Ken Tozier [EMAIL PROTECTED] writes:

 I've been looking through the PostgreSQL documentation but can't seem
 to find a command for importing files. I read the documentation
 related to large objects but this isn't what I'm looking for as I
 don't want to import the entire file into a single field, I want to
 import it as a table. I'm sure there's a way to do it but I can't seem
 to find the magic command.

 Could someone point me to (or provide) an example?

You want the SQL COPY statement, or the \copy command in 'psql'.
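
A rough sketch (the table and file names are invented; COPY's text
format already uses tab as the default delimiter):

    CREATE TABLE people (name text, age integer);
    \copy people FROM '/tmp/people.txt'

Use \copy when the file sits on the client machine; plain COPY reads
the file on the server and needs superuser privileges.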

-Doug

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] OID's

2004-10-23 Thread Doug McNaught
Eddy Macnaghten [EMAIL PROTECTED] writes:

 The other thing to be aware of is if a large number of people are
 writing to the database concurrently it can go wrong (any method).  That
 is if you insert a record (using nextval for the sequence), then someone
 else quickly inserts a row too before you have a chance to get the
 sequence number at the next statement then the sequence number you get
 will be wrong (it would be of the new one, not yours).  This would be
 the case regardless of how the records are committed.

Not the case.  If you use currval(), it will always be the last value
the sequence took *in your session*, so it's immune to other sessions
inserting at the same time.  See the docs.
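
For instance (table and column names invented):

    CREATE TABLE orders (id serial PRIMARY KEY, customer text);
    INSERT INTO orders (customer) VALUES ('acme');
    SELECT currval('orders_id_seq');

That currval() returns the id just assigned by *your* INSERT, no
matter what other sessions are doing at the same time.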

-Doug

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] '1 year' = '360 days' ????

2004-10-23 Thread Doug McNaught
Tom Lane [EMAIL PROTECTED] writes:

 Ricardo Perez Lopez [EMAIL PROTECTED] writes:
 I have observed that, for PostgreSQL, one year is actually 360 days:

 SELECT '1 year'::timestamp = '360 days'::timestamp;

 ?column?
 -
 t

 Nonsense.

 regression=# SELECT '1 year'::timestamp = '360 days'::timestamp;
 ERROR:  invalid input syntax for type timestamp: 1 year

 How about telling us what you *really* did, instead of posting faked
 examples?

FWIW:

template1=# select '1 year'::interval = '360 days'::interval;
 ?column? 
--
 t
(1 row)

template1=# select '1 year'::interval = '365 days'::interval;
 ?column? 
--
 f
(1 row)

template1=# select version();
   version   
-
 PostgreSQL 7.4.5 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66
(1 row)

-Doug

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] How to increase number of connections to 7.2.1

2004-10-17 Thread Doug McNaught
Edwin New [EMAIL PROTECTED] writes:

I need to increase the number of connections to PostgreSQL 7.2.1.


I have tried changing the value of max_connections in
Postgresql.conf.  It was commented out (as are all other entries
except tcpip_socket = true).  Setting it to any other value than the
default 32 resulted in the server failing to restart.

1) Make sure your shared_buffers setting is big enough to handle your
   desired connection limit.

2) Make sure your kernel's shared memory limits allow for your
   shared_buffers setting.

If you can't get it to start, please post the server log entries from
the failed attempt--otherwise we have nothing to go on.
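
As a very rough illustration only (the numbers are invented -- tune
them for your machine), in postgresql.conf:

    max_connections = 128
    shared_buffers = 512

and raise the kernel's SHMMAX if the postmaster then complains about
shared memory at startup.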

-Doug
-- 
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] dealing with invalid date

2004-10-16 Thread Doug McNaught
Mage [EMAIL PROTECTED] writes:

   Hi,

 can pgsql accept invalid date values? Sometimes it would be nice to
 convert 2003-02-29 to 2003-03-01 or to 2003-02-28 automatically
 instead of throwing back an error message.

If you want MySQL, you know where to find it.

-Doug
-- 
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Message-ID as unique key?

2004-10-12 Thread Doug McNaught
Jerry LeVan [EMAIL PROTECTED] writes:

 Hi,
 I am futzing around with Andrew Stuarts Catchmail program
 that stores emails into a postgresql database.

 I want to avoid inserting the same email more than once...
 (pieces of the email actually get emplaced into several
   tables).

 Is the Message-ID  header field a globally unique identifer?

It is intended to be, but since it is generated by either the MUA or
the first MTA that sees the message, there is a lot of scope for
broken software to screw things up.  
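
If you do decide to key on it anyway, a UNIQUE constraint at least
keeps the same message from being stored twice (a sketch; the table
and columns are invented):

    CREATE TABLE messages (
        id         serial PRIMARY KEY,
        message_id text UNIQUE,
        subject    text,
        body       text
    );

A second INSERT with the same message_id then fails with a unique
violation, which the loading code can catch and skip.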

-Doug
-- 
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Help on copy function

2004-10-06 Thread Doug McNaught
Tom Lane [EMAIL PROTECTED] writes:

 [EMAIL PROTECTED] writes:
 And an error occurs :
 psql:lineitem.loader.psql:1: ERROR:  could not extend relation
 24342131/24342133
 /24342324: There is not enough space in the file system.
 HINT:  Check free disk space.

 but my fileSystem has something like 2Go free when copy fails!

 Maybe you are running the postmaster under a disk-space-usage limit?
 I'm not sure that there's a separate errno for "you can't have any
 more space" as opposed to "there isn't any more space".

It's also possible that PG is trying to create a new table file and
he's out of inodes...

-Doug
-- 
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Help on copy function

2004-10-06 Thread Doug McNaught
Tom Lane [EMAIL PROTECTED] writes:

 Doug McNaught [EMAIL PROTECTED] writes:

 It's also possible that PG is trying to create a new table file and
 he's out of inodes...

 Good thought, although I think that this particular error message would
 only come out from a seek/write failure and not from an open failure.
 In any case it's some sort of externally imposed resource limit ...

Yeah.  My first reaction to inexplicable ENOSPC is always "are you
out of inodes?" just because I've been bitten by it several times and
felt like an idiot afterwards.  ;)

-Doug
-- 
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] default select ordering

2004-10-02 Thread Doug McNaught
Matt Roberts [EMAIL PROTECTED] writes:

 Please accept my apologies if this is answered elsewhere in the archives
 or docs but I have searched without luck.

 I've always assumed that default ordering of selects are based on a first
 in first out principle and that this remains true at the row level despite
 edits to columns.

There is no guaranteed ordering of rows without an ORDER BY clause.
VACUUM will change the row ordering as it moves tuples around to free
up space.  If you want a guaranteed order, use a key field and ORDER BY.
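
If insertion order is what you're after, record it explicitly and sort
on that (names invented):

    CREATE TABLE log (id serial PRIMARY KEY, entry text);
    SELECT * FROM log ORDER BY id;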

-Doug
-- 
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] newby question

2004-10-01 Thread Doug McNaught
Scott Frankel [EMAIL PROTECTED] writes:

 Additional note:  configure warned of an old version of bison when I
 attempted an install of postgresql 7.4.5 the other day.  Seems the
 version that comes with OSX 10.3.x is too old for postgres.

You only actually need Bison if you are building from CVS--the release
tarballs are pre-Bisonated, so you can ignore that warning.  :)

-Doug
-- 
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] To OID or not to OID

2004-09-28 Thread Doug McNaught
David Parker [EMAIL PROTECTED] writes:

 Is there any concrete reason NOT to create tables WITHOUT OIDS? We are
 going to have some very large tables in our app, so saving the space and
 not worrying about int-wrap seems like a good idea, but I'm worried that
 I'm missing something.

Nothing in PG depends on user tables having an OID column.  They used
to be a way to get a primary key before SERIAL came along, but they
are now deprecated for user tables.  WITHOUT OIDS will be the default
in a future release.
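
For example (a made-up table):

    CREATE TABLE big_table (
        id      serial PRIMARY KEY,
        payload text
    ) WITHOUT OIDS;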

-Doug
-- 
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Custom Functions

2004-09-28 Thread Doug McNaught
Matthew Metnetsky [EMAIL PROTECTED] writes:

 On Tue, 2004-09-28 at 10:55, Katsaros Kwn/nos wrote:
  So, does anyone know of (or have) good examples of queries within
  functions (and returning sets of data)?
 
 I'm not very experienced in PostgreSQL but maybe SPI functions is what you
 need.

 Yeah I took a look at them, but I'm not sold because when I look at
 Postgres' built-in functions I never see it.  It's as if SPI_* is slower
 or something.

SPI is an internal API for server-side functions to use.  Client apps
will never use it.  

I think Katsaros might have meant set-returning functions (SRF)
which are documented in the manual and available to client apps.

-Doug
-- 
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] serial data type

2004-09-25 Thread Doug McNaught
Tom Allison [EMAIL PROTECTED] writes:

 Can I use the serial data type in lieu of an 'auto_number' field?

What are the exact semantics of an auto_number field?

 I asked something like this some months ago and it seems that
 auto_number fields were addressed through a combination of triggers
 and procedures to ensure that there were no duplicate KEYS generated.

 Is it realistic to use the serial data type as a KEY?

Lots and lots of people do.  If you're just looking for a unique key
column for a single table, it works fine.

-Doug
-- 
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] serial data type

2004-09-25 Thread Doug McNaught
Joseph Healy [EMAIL PROTECTED] writes:

 On Sat, Sep 25, 2004 at 07:20:23PM -0400, Tom Allison wrote:

 Any idea how to set up a timestamp=now on every insert/update ?

 when you create your table, use:

 create table mytable (
 id serial primary key,
   updated timestamp default(now()),
   mydata int4
 );

That won't change the timestamp on UPDATE queries; you need a trigger
for that.

-Doug
-- 
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] abnormal data grow

2004-09-21 Thread Doug McNaught
Reynard Hilman [EMAIL PROTECTED] writes:

 Hi,

 I have been having this problem where the database size suddenly grows
 from the normal size of about 300Mb to 12Gb in one night.
 When I look up the table size, the biggest one is only 41Mb and the
 total of all table size is only 223Mb.
 But in the filesystem data directory the total size is 12Gb. I noticed
 there are 10 files with 1Gb size each:
 1.1G    25677563
 1.1G    25677563.1
 1.1G    25677563.2
 1.1G    25677563.3
 1.1G    25677563.4
 1.1G    25677563.5
 1.1G    25677563.6
 1.1G    25677563.7
 1.1G    25677563.8
 1.1G    25677563.9

25677563 is the OID of this object.  The different files are segments
of the relation, not duplicates (PG restricts individual data file
size to 1GB).  So look for which relation has that OID--it's quite
possibly an index that is bloating up.
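
To see what it is, run something like this in the affected database
(on 7.1 and later the file is named after pg_class.relfilenode, which
usually matches the OID):

    SELECT relname, relkind, relpages
    FROM pg_class
    WHERE relfilenode = 25677563 OR oid = 25677563;

If it turns out to be an index, a REINDEX will usually shrink it; if
it's a table, look at your VACUUM schedule.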

-Doug
-- 
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Oracle / Postgres Interface

2004-09-21 Thread Doug McNaught
Shah, Sameer [EMAIL PROTECTED] writes:

 1. I have DataBase-A which is Oracle on Server A with FunctionA (PLSQL)
 2. I have DataBase-B which is Postgres on Server B with FunctionB (PgSQL)
 3. I need to call FunctionA from FunctionB

As far as I know, the only way to do this is to write a Postgres
function (function B) in C that links with the Oracle client libraries,
connects to Oracle, and calls your function A.

-Doug
-- 
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Can't connect to Windows port + other

2004-09-01 Thread Doug McNaught
Anony Mous [EMAIL PROTECTED] writes:

 A few days ago there was a fellow that had trouble connecting remotely to
 the 8.0 beta win port.  I had the same problem, but have since found the
 solution.  In postgresql.conf file, ensure the line listen_addresses is
 set to '*', ie,

 listen_addresses = '*'

 This should do it.  Can someone explain to my why this line is here?

TCP/IP has always been disabled by default, leaving just Unix sockets
for connections.  This is probably not a very useful default
configuration on Windows.  :)

-Doug
-- 
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Change config of running PGSQL

2004-08-31 Thread Doug McNaught
McDougall, Marshall (FSH) [EMAIL PROTECTED] writes:

 I have an existing installation of 7.3.4 that appears to have been made with
 --without-readline and --without-zlib parameters used at build time.  How
 do I change those  parms without reinstalling the whole thing.  I would
 normally go through the archives, but they appear to be unavailable.  All
 replies appreciated.

The minimum you can do is recompile with the configure options you
want, move the affected new binaries into place, and restart.  There's
really no reason not to do a full make install though--you won't have
to dump and reload your data, and it's easier than figuring out by
hand what you need to upgrade...

-Doug
-- 
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Change config of running PGSQL

2004-08-31 Thread Doug McNaught
McDougall, Marshall (FSH) [EMAIL PROTECTED] writes:

 I made sure that readline was indeed installed to no avail.  The configure
 still does not see the libs.  Any ideas?

You don't say what system you're on, but most Linux distributions
split library packages like readline into 'runtime' and 'devel'
packages.  You need the latter in order to compile software that uses
the library.  On Red Hat/Fedora, it would be called 'readline-devel' or
some such. 

-Doug
-- 
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] upgrading minor versions

2004-08-27 Thread Doug McNaught
Jeff Amiel [EMAIL PROTECTED] writes:

 So I downloaded postgresql-7.4.5.tar.gz
 ftp://ftp21.us.postgresql.org/pub/postgresql/v7.4.5/postgresql-7.4.5.tar.gz
 uncompressed...
 configure...
 make
 make install

 I run psql and it gives me the Welcome to psql 7.4.5, the PostgreSQL
 interactive terminal.
 So far so good...it says 7.4.5.

 but when I select version() from the database that I connected to with
 psql, I still receive:
 PostgreSQL 7.4.2 on i386-portbld-freebsd5.2.1, compiled by GCC cc
 (GCC) 3.3.3 [FreeBSD] 20031106

Did you shut down and restart the server after the upgrade?  Are you
sure the new binaries went where you think they did?

-Doug
-- 
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Problem to connect to the Windows Port

2004-08-24 Thread Doug McNaught
Tom Lane [EMAIL PROTECTED] writes:

 Cornelia Boenigk [EMAIL PROTECTED] writes:
 C:\psql -h 192.168.1.8 -U postgres -d minitest
 psql: could not connect to server: Connection refused
 Is the server running on host 192.168.1.8 and accepting
 TCP/IP connections on port 5432?

 Connection refused suggests that you've got a firewall-type problem,
 ie the operating system is rejecting the connection rather than letting
 the postmaster receive it.  Check packet filtering rules...

It's also possible that the postmaster is listening on a port other
than 5432...

-Doug
-- 
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] COPY not handling BLOBs

2004-08-08 Thread Doug McNaught
Jan Wieck [EMAIL PROTECTED] writes:

 On 8/4/2004 1:58 PM, David Rysdam wrote:

 bytea will only go up to several thousand bytes according to the
 docs.  I assume it's not very precise because the maximum is 8196 -
 $other_fields_in_row.  My binary data could be a couple times that
 or even much bigger for other apps.

 I suggest upgrading to PostgreSQL 7.1 or newer.

Or reading the right set of docs.  :)

-Doug
-- 
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Question about linux filesystem and psql

2004-07-29 Thread Doug McNaught
Marcus Wegner [EMAIL PROTECTED] writes:

 Is it possible to keep data safe with linux and psql after crash?

 The short description of the scenario is:
 - writing data with psql (using transactions), store process completed
 - user hits the reset button or kernel crashes (whatever left the filesystem 
 unsynced)
 - filesystem is xfs or reiserfs
 - reboot causes some blocks filled with zero (journalling feature discussed 
 on lkml) of last accessed files from some applications

The question here is: do these filesystems lie about fsync()?  Or do
they just corrupt files that were written but not synced before the
crash?

I think it's the latter--lying about fsync() is a pretty major bug,
and I don't recall that being claimed on LKML.

Given this, PG should be in good shape--it fsyncs() the WAL file
before reporting transaction success, so it should be able to recover
committed transactions.

 My questions are:
 Is there any solution that psql keeps the database intact (already written 
 data)?
 Is there an option for psql or filesystem (like reiserfs data=ordered) which 
 should be used  (maybe backup database)

You should always back up your data.  :)

You might also consider ext3 with data=writeback.  As long as the WAL
files are in sync and the filesystem metadata is journaled, PG can
recover from crashes just fine (barring actual hardware failure).

-Doug
-- 
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] altering a table to set serial function

2004-07-28 Thread Doug McNaught
Prabu Subroto [EMAIL PROTECTED] writes:

 If I read your suggestion, that means...I have to drop
 the column salesid and re-create the column
 salesid. and it means, I will lose the data in the
 current salesid column.

 Do you have further suggestion?

You can do it by hand without dropping the column:

CREATE SEQUENCE salesid_seq;
SELECT setval('salesid_seq', (SELECT max(salesid) FROM sales) + 1);
ALTER TABLE sales ALTER COLUMN salesid SET DEFAULT nextval('salesid_seq');

This is the same thing that the SERIAL datatype does behind the
scenes.

I can't vouch for the exact syntax of the above but that should get
you started.

-Doug
-- 
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Do we need more emphasis on backup?

2004-07-11 Thread Doug McNaught
[EMAIL PROTECTED] (Jim Seymour) writes:

 Oh, if you're accepting punctuation nits ;), in most cases, the comma
 should come after "but", not before it.  So your sentence should read
 "PostgreSQL will run on almost any hardware but, if you are..."

Wrong.  :)

You are sentenced to go read Strunk and White again.

-Doug

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Row-level security--is it possible?

2004-07-02 Thread Doug McNaught
Michal Taborsky [EMAIL PROTECTED] writes:

 Doug McNaught wrote:
 But why not create a products_restricted view that uses the
 CURRENT_USER function to see who's running it?
 CREATE VIEW products_restricted AS
 SELECT * FROM products WHERE Producer_ID = get_producer_id(CURRENT_USER);
 [CURRENT_USER returns a string, so you would need to map it to your
 producer_id somehow.]

 This would work only for this case (limiting single producer to one
 user). But we want to have a bit more flexible system, so we'd be able
 define the restrictions freely (like only producers 1 and 5 and price
 less than 100). I'm sorry I did not mention this.

Have you looked into set-returning functions for this?  That would let
you basically put whatever logic you need into the function.
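
A rough sketch on 7.3 or later (the filter is just a placeholder
matching your example; revoking direct access to products and marking
the function SECURITY DEFINER would make the restriction enforceable
rather than advisory):

    CREATE FUNCTION visible_products() RETURNS SETOF products AS '
        SELECT * FROM products
        WHERE producer_id IN (1, 5) AND price < 100;
    ' LANGUAGE sql;

    SELECT * FROM visible_products();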

-Doug

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Error loading trigger in C

2004-07-01 Thread Doug McNaught
Juan Jose Costello Levien [EMAIL PROTECTED] writes:

 Hello,
 I am trying to use a trigger function I wrote in C. Basically what I
 want to do is to audit a table when a row is inserted into another
 table by copying the row to the new table. It compiles Ok and I
 created a shared library trigger.so. But when I load it into pgAdmin
 it tells me 'An error had occured'.

Does it work if you take pgAdmin out of the loop?  E.g. in psql:

CREATE FUNCTION trigf ... 
CREATE TRIGGER ...
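
Spelled out a bit more (the path, function name, and table here are
guesses -- substitute your own):

    CREATE FUNCTION trigf() RETURNS trigger
        AS '/usr/local/pgsql/lib/trigger.so', 'trigf'
        LANGUAGE C;

    CREATE TRIGGER audit_insert
        AFTER INSERT ON watched_table
        FOR EACH ROW EXECUTE PROCEDURE trigf();

Whatever error psql reports here should be a lot more specific than
pgAdmin's generic one.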

-Doug

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] coalesce and nvl question

2004-06-23 Thread Doug McNaught
Simon Windsor [EMAIL PROTECTED] writes:

 Hi

 I understand that null and '' are different, and MySQL and Oracle
 functions are confusing, but my question was not about replacing NULL
 but replacing Empty strings. These are handled in MySQL/Oracle by the
 same functions that do NULL checks.

 Is there a standard function in Postgres that replaces Empty strings, as
 against a NULL value.

I don't think so, but you could use a CASE clause for this.
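
Something along these lines (column, table, and fallback value are
invented):

    SELECT CASE WHEN mycol IS NULL OR mycol = '' THEN 'n/a'
                ELSE mycol
           END AS mycol
    FROM mytable;

If you need it in lots of places, wrapping that CASE in a small SQL
function gives you an nvl-style call you can reuse.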

-Doug

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])

