Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread Ron Johnson


What about the pgpass file?

https://www.postgresql.org/docs/9.2/static/libpq-pgpass.html
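For reference, a minimal sketch of a ~/.pgpass entry (host, database, user,
and password here are placeholders; libpq ignores the file unless it is
chmod 600):

    # hostname:port:database:username:password  ('*' matches anything)
    db.example.com:5432:mydb:backup_user:s3cr3t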

On 11/17/2017 03:06 PM, marcelo wrote:
I need to "emulate" the pg_dump code because the password prompt. Years 
ago I write a program (for the QnX environment) that catched some prompt 
and emulates the standard input. I don't like to do that again.


On 17/11/17 17:23, John R Pierce wrote:

On 11/17/2017 12:19 PM, marcelo wrote:

Sorry, I was not exact.
I don't need nor want to change pg_dump. Rather, based on the pg_dump code,
I need to develop a daemon which can receive a TCP message (from a
privileged app) containing some elements: the database to dump, the user
under which to do it, and that user's password. (My apps are using that same
data, of course, encrypted so that common users can't read it.)



I would just fork pg_dump to do the actual dump rather than try and 
incorporate its source code into your app.
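A minimal sketch of that approach, assuming the daemon has already parsed
the database, user, and password out of the TCP message (script name and
backup path are hypothetical; libpq honors PGPASSWORD, so no prompt appears):

    #!/bin/sh
    # dump_one.sh <database> <user> <password> -- invoked by the daemon
    DB="$1"; DBUSER="$2"; DBPASS="$3"
    # PGPASSWORD suppresses the interactive password prompt entirely
    PGPASSWORD="$DBPASS" pg_dump -U "$DBUSER" -Fc -f "/backups/${DB}.dump" "$DB"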







--
World Peace Through Nuclear Pacification





Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread Ron Johnson

On 11/17/2017 02:23 PM, John R Pierce wrote:

On 11/17/2017 12:19 PM, marcelo wrote:

Sorry, I was not exact.
I don't need nor want to change pg_dump. Rather, based on the pg_dump code, I
need to develop a daemon which can receive a TCP message (from a
privileged app) containing some elements: the database to dump, the user
under which to do it, and that user's password. (My apps are using that same
data, of course, encrypted so that common users can't read it.)



I would just fork pg_dump to do the actual dump rather than try and 
incorporate its source code into your app.




Specifically, do you mean to write a simple daemon which forks pg_dump at 
the appropriate time?



--
World Peace Through Nuclear Pacification





Re: [GENERAL] pg_restore load data

2017-11-16 Thread Ron Johnson

On 11/16/2017 03:13 PM, bricklen wrote:


On Thu, Nov 16, 2017 at 1:07 PM, Ron Johnson <ron.l.john...@cox.net> wrote:


v9.2.7  (Yes, I know, it's old.  Nothing I can do about it.)

During a "whole database" restore using pg_restore of a custom dump,
when is the data actually loaded?  I've looked in the list output and
don't see any "load" statements.


Look for COPY lines; that's how the data is restored.


$ pg_restore -l CDSHA01.dump > CDSHA01.txt
$ grep --color -i copy CDSHA01.txt
$ echo $?
1

There are lots of "restoring data" entries, though.  I should have thought to
grep for that.
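For what it's worth, in the -l table of contents the data items are labelled
"TABLE DATA" rather than COPY, so a grep along these lines should match:

    $ pg_restore -l CDSHA01.dump | grep -c 'TABLE DATA'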


One thing that puzzles me is how fast the tables (even large ones) loaded
compared to how slow the pg_dump -Fc was.  Granted, I'm running -j4, but
still, these were some really large, poorly compressible tables (the dump
file was about as big as du -mc data/base).


--
World Peace Through Nuclear Pacification



[GENERAL] pg_restore load data

2017-11-16 Thread Ron Johnson

Hi,

v9.2.7  (Yes, I know, it's old.  Nothing I can do about it.)

During a "whole database" restore using pg_restore of a custom dump, when is 
the data actually loaded?  I've looked in the list output and don't see any 
"load" statements.


Thanks

--
World Peace Through Nuclear Pacification





[GENERAL] Converting AGE() to something human readable

2017-11-06 Thread Ron Johnson

Hi,

How is this done in v8.4?

postgres=# SELECT datname, datfrozenxid, age(datfrozenxid)
postgres-# FROM pg_database;
  datname  | datfrozenxid |    age
-----------+--------------+-----------
template1 |   3603334165 |  25735089
template0 |   3603470462 |  25598792
postgres  |   3576970250 |  52099004
TAPd  |   3489165829 | 139903425
(4 rows)
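Since age(datfrozenxid) is a plain transaction count rather than an interval,
there is no timestamp to format; one sketch that makes it more readable is to
express it as a fraction of the ~2^31 wraparound horizon (plain SQL, works on
8.4):

    SELECT datname,
           age(datfrozenxid) AS xid_age,
           round(100.0 * age(datfrozenxid) / 2147483648, 1) AS pct_of_wraparound
    FROM pg_database;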


--
World Peace Through Nuclear Pacification





Re: [GENERAL] Old pg_clog files

2017-10-29 Thread Ron Johnson

On 10/29/2017 03:37 PM, David G. Johnston wrote:
On Sunday, October 29, 2017, Ron Johnson <ron.l.john...@cox.net> wrote:


Hi,

v8.4.17

http://www.postgresql-archive.org/pg-clog-questions-td2080911.html

According to this old thread,  doing a VACUUM on every table in the
postgres, template1 and TAPd databases should remove old pg_clog files.


However, while about 40 of them have been deleted, 183 still exist. 
What did I do wrong?


Reading the old thread it sounds like it might require multiple vacuums to
effect complete removal.


I vacuumed them twice.

Does it matter whether I ran a VACUUM ANALYZE on every table instead of 
"vacuumdb --all"?


--
World Peace Through Nuclear Pacification



[GENERAL] Old pg_clog files

2017-10-29 Thread Ron Johnson

Hi,

v8.4.17

http://www.postgresql-archive.org/pg-clog-questions-td2080911.html

According to this old thread,  doing a VACUUM on every table in the 
postgres, template1 and TAPd databases should remove old pg_clog files.


postgres=# SELECT datname, age(datfrozenxid) FROM pg_database;
  datname  |    age
-----------+-----------
template1 |   9951058
template0 |   9814761
postgres  | 163406258
TAPd  | 179543997
(4 rows)


However, while about 40 of them have been deleted, 183 still exist. What did 
I do wrong?


Thanks

--
World Peace Through Nuclear Pacification



Re: [GENERAL] Finally upgrading to 9.6!

2017-10-18 Thread Ron Johnson

On 10/18/2017 10:16 AM, Igal @ Lucee.org wrote:

On 10/18/2017 7:45 AM, Ron Johnson wrote:

On 10/18/2017 09:34 AM, Igal @ Lucee.org wrote:

A bit off-topic here, but why upgrade to 9.6 when you can upgrade to 10.0?


There's no way we're going to put an x.0.0 version into production.


Then think of it as 9.7.0 but with an easier name to pronounce ;)


No .0 is going into production...

--
World Peace Through Nuclear Pacification





Re: [GENERAL] Finally upgrading to 9.6!

2017-10-18 Thread Ron Johnson

On 10/18/2017 09:34 AM, Igal @ Lucee.org wrote:

On 10/18/2017 6:24 AM, Ron Johnson wrote:

On 10/17/2017 11:17 AM, Tom Lane wrote:

Ron Johnson <ron.l.john...@cox.net> writes:

Where can I look to see (roughly) how much more RAM/CPU/disk is needed when
moving from 8.4 and 9.2 to 9.6?

It's entirely possible you'll need *less*, as you'll be absorbing the
benefit of several years' worth of performance improvements. But this
is such a workload-dependent thing that there's no general answer.


XML stored in blobs (not sure whether text or bytea) and b-tree indexes.



A bit off-topic here, but why upgrade to 9.6 when you can upgrade to 10.0?

Obviously you're not one to upgrade often so shouldn't you take advantage 
of all of the new features and improvements when "finally" (to use your 
own word) upgrading?




There's no way we're going to put an x.0.0 version into production.

--
World Peace Through Nuclear Pacification



Re: [GENERAL] Finally upgrading to 9.6!

2017-10-18 Thread Ron Johnson

On 10/17/2017 11:17 AM, Tom Lane wrote:

Ron Johnson <ron.l.john...@cox.net> writes:

Where can I look to see (roughly) how much more RAM/CPU/disk is needed when
moving from 8.4 and 9.2 to 9.6?

It's entirely possible you'll need *less*, as you'll be absorbing the
benefit of several years' worth of performance improvements.  But this
is such a workload-dependent thing that there's no general answer.


XML stored in blobs (not sure whether text or bytea) and b-tree indexes.

--
World Peace Through Nuclear Pacification





[GENERAL] Finally upgrading to 9.6!

2017-10-17 Thread Ron Johnson
Where can I look to see (roughly) how much more RAM/CPU/disk is needed when
moving from 8.4 and 9.2 to 9.6?


Thanks

--
World Peace Through Nuclear Pacification





Re: [GENERAL] Using cp to back up a database?

2017-10-09 Thread Ron Johnson

On 10/09/2017 01:02 PM, Scott Mead wrote:



On Mon, Oct 9, 2017 at 1:19 PM, Ron Johnson <ron.l.john...@cox.net> wrote:


Maybe my original question wasn't clear, so I'll try again: is it safe
to do a physical backup using cp (as opposed to rsync)?


Yes -- however, *you must configure WAL archiving* first.  If not, no backup
tool -- cp, rsync, etc. -- will provide a good backup.


Oh, and BTW -- The obligatory: You are on an ancient, EOL version of PG.  
Upgrade.


Make sure that these are set:
- archive_mode
- archive_command


Then, when you restore the backup, you need to create a recovery.conf
and configure

- restore_command

https://www.postgresql.org/docs/8.4/static/continuous-archiving.html
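A minimal sketch of both halves (archive directory hypothetical; %p and %f
are expanded by the server):

    # postgresql.conf on the primary
    archive_mode = on
    archive_command = 'test ! -f /archive/%f && cp %p /archive/%f'

    # recovery.conf in the restored data directory
    restore_command = 'cp /archive/%f "%p"'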


This is good to know.  Thanks.

--
World Peace Through Nuclear Pacification



Re: [GENERAL] Using cp to back up a database?

2017-10-09 Thread Ron Johnson
Maybe my original question wasn't clear, so I'll try again: is it safe to do
a physical backup using cp (as opposed to rsync)?



On 10/09/2017 11:49 AM, Darren Douglas wrote:

Ron:

Here is an explanation that may help a bit.

Your script is executing a PHYSICAL backup. A physical backup is simply a 
full copy of the cluster (instance) data directory ($PGDATA). A physical 
backup is your best option when you need to backup the cluster data as 
well as all configuration for the cluster. Essentially, if you had to 
rebuild the entire computer hosting the cluster, you could just reinstall 
the same version of postgres, copy in the backup data directory, and the 
cluster would run exactly as it did before with the same data. A physical 
backup is also necessary when the databases get very large.


In the backup script you posted, the 'pg_start_backup' and 
'pg_stop_backup' commands fulfill two purposes. The first is to create a 
label for the point in time the backup was started - this is done by 
pg_start_backup. The second is to ensure that all WAL segments that have 
been written since the backup began have been safely archived. That is 
done by pg_stop_backup. This approach is necessary to accomplish an online 
physical backup.


As others have mentioned pg_dump is a LOGICAL backup tool similar to any 
SQL dump you've done with another DBMS. The pg_dump command will do a SQL 
dump to recreate everything within a single database. So, if you have 
multiple databases in your cluster, it's not the best option. pg_dumpall is
the logical backup tool that will do a logical dump of all globals (schema 
+ roles) along with all databases in the cluster. Because the 
pg_dump/pg_dumpall commands are not executing a physical backup, the 
pg_start_backup and pg_stop_backup commands do not apply.


As for times when you would elect to do a logical backup, as others have 
mentioned, this is the only valid option when you are restoring to a 
different version of Postgres. It is also a good option to do a backup of 
a single small database or several small databases. And, if for any reason 
the backup needs to be human-readable, this is the approach of choice as well.


Darren




On Mon, Oct 9, 2017 at 6:40 AM, Ron Johnson <ron.l.john...@cox.net> wrote:


Hi,

v8.4.20

This is what the current backup script uses:

/usr/bin/psql -U postgres -c "SELECT
pg_start_backup('Incrementalbackup',true);"
cp -r /var/lib/pgsql/data/* $dumpdir/data/
/usr/bin/psql -U postgres template1 -c "SELECT pg_stop_backup();"


Should it use rsync or pg_dump instead?

Thanks

-- 
World Peace Through Nuclear Pacification





--
Darren Douglas
Synse Solutions
dar...@synsesolutions.com
520-661-5885



--
World Peace Through Nuclear Pacification



Re: [GENERAL] Using cp to back up a database?

2017-10-09 Thread Ron Johnson

On 10/09/2017 11:33 AM, Jeff Janes wrote:
On Mon, Oct 9, 2017 at 6:40 AM, Ron Johnson <ron.l.john...@cox.net> wrote:


Hi,

v8.4.20

This is what the current backup script uses:

/usr/bin/psql -U postgres -c "SELECT
pg_start_backup('Incrementalbackup',true);"
cp -r /var/lib/pgsql/data/* $dumpdir/data/
/usr/bin/psql -U postgres template1 -c "SELECT pg_stop_backup();"


That's fine, as long as you have a WAL archive, although I don't know what
is "Incremental" about it.


From reading the docs, that field is just a label with no intrinsic meaning.


--
World Peace Through Nuclear Pacification



Re: [GENERAL] Using cp to back up a database?

2017-10-09 Thread Ron Johnson


Sure I want a consistent database.  Who doesn't?

But log shipping requires you to rsync /var/lib/pgsql/data to the remote
server, and that's consistent, so why wouldn't rsync to a local directory
also be consistent?


On 10/09/2017 08:51 AM, Larry Rosenman wrote:


If you want a consistent database (you **REALLY** do), pg_dump is the 
correct tool.


--

Larry Rosenman                 http://www.lerctr.org/~ler

Phone: +1 214-642-9640         E-Mail: l...@lerctr.org


US Mail: 5708 Sabbia Drive, Round Rock, TX 78665-2106

From: <pgsql-general-ow...@postgresql.org> on behalf of Ron Johnson <ron.l.john...@cox.net>
Date: Monday, October 9, 2017 at 8:41 AM
To: "pgsql-general@postgresql.org" <pgsql-general@postgresql.org>
Subject: [GENERAL] Using cp to back up a database?

Hi,

v8.4.20

This is what the current backup script uses:

/usr/bin/psql -U postgres -c "SELECT 
pg_start_backup('Incrementalbackup',true);"


cp -r /var/lib/pgsql/data/* $dumpdir/data/

/usr/bin/psql -U postgres template1 -c "SELECT pg_stop_backup();"



Should it use rsync or pg_dump instead?

Thanks




--
World Peace Through Nuclear Pacification



[GENERAL] Using cp to back up a database?

2017-10-09 Thread Ron Johnson

Hi,

v8.4.20

This is what the current backup script uses:

/usr/bin/psql -U postgres -c "SELECT pg_start_backup('Incrementalbackup',true);"
cp -r /var/lib/pgsql/data/* $dumpdir/data/
/usr/bin/psql -U postgres template1 -c "SELECT pg_stop_backup();"


Should it use rsync or pg_dump instead?

Thanks

--
World Peace Through Nuclear Pacification



Re: [GENERAL] Up to date conventional wisdom re max shared_buffer size?

2017-09-20 Thread Ron Johnson

On 09/20/2017 01:05 PM, Jerry Sievers wrote:

Ron Johnson <ron.l.john...@cox.net> writes:


On 09/19/2017 05:00 PM, Jerry Sievers wrote:
[snip]


The DB is 10TB total size with OLTP plus some occasional heavy batching
which frequently correlates with degradation that requires intervention.

Unrelated server problem forced us to relocate from a Debian/Wheezy 3.x
kernel 1T 144 CPU to the even bigger box mentioned earlier.  And we went
up a major kernel version also in the process.

How did you backup/restore a 10TB db?

We just relocated the SAN volume.  Takes about 1 minute :-)


Ah, yes.  Major *kernel* version.  Never mind...  :)

--
World Peace Through Nuclear Pacification





Re: [GENERAL] Up to date conventional wisdom re max shared_buffer size?

2017-09-19 Thread Ron Johnson

On 09/19/2017 05:00 PM, Jerry Sievers wrote:
[snip]

The DB is 10TB total size with OLTP plus some occasional heavy batching
which frequently correlates with degradation that requires intervention.

Unrelated server problem forced us to relocate from a Debian/Wheezy 3.x
kernel 1T 144 CPU to the even bigger box mentioned earlier.  And we went
up a major kernel version also in the process.


How did you backup/restore a 10TB db?

--
World Peace Through Nuclear Pacification





Re: [GENERAL] Selecting a daily puzzle record - which type of column to add?

2017-09-18 Thread Ron Johnson

On 09/18/2017 08:17 AM, Melvin Davidson wrote:
[snip]
I don't have any specific suggestion for an additional column, other than 
Berend's idea. However, I strongly advise against the use
of ENUM's. They can create a major problem in the event one needs to be 
removed.


Because it will internally renumber them?

--
World Peace Through Nuclear Pacification



Re: [GENERAL] "Canceling authentication due to timeout" with idle transaction and reindex

2017-09-15 Thread Ron Johnson

On 09/15/2017 06:34 AM, Justin Pryzby wrote:
[snip]

But you might consider: 1) looping around tables/indices rather than "REINDEX
DATABASE", and then setting a statement_timeout=9s for each REINDEX statement;


Is there a way to do that within psql?  (Doing it from bash is trivial, but 
I'd rather do it from SQL.)
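One sketch: statement_timeout applies to each statement in a session, so set
it once and generate the REINDEXes.  The \gexec step needs psql 9.6 or later;
on an older psql, \o the generated statements to a file and \i it back in:

    SET statement_timeout = '9s';
    SELECT format('REINDEX TABLE %I.%I', schemaname, tablename)
      FROM pg_tables
     WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
    \gexec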


--
World Peace Through Nuclear Pacification





Re: [GENERAL] WAL & ready files retained after turning off log shipping

2017-09-12 Thread Ron Johnson

On 09/07/2017 09:32 AM, Tom Lane wrote:

Ron Johnson <ron.l.john...@cox.net> writes:

On 09/07/2017 09:08 AM, Tom Lane wrote:

Manual cleanup shouldn't be very hard, fortunately.  Run pg_controldata
to see where the last checkpoint is, and delete WAL files whose names
indicate they are before that (but not the one including the checkpoint!).

All WAL files after log shipping was stopped will keep accumulating "forever"?

Hmm ... on second thought, I think if you just remove the .ready/.done
files, the next checkpoint should clean up the old WAL files.  That'd
certainly be safer than doing it manually.


This weekend (early Sunday morning), WAL files on the master started
accumulating again.  Now, .ready files are regenerated every time I delete
them, even though according to pg_controldata the last checkpoint was 28
minutes ago.


--
World Peace Through Nuclear Pacification





Re: [GENERAL] Joining 16 tables seems slow

2017-09-12 Thread Ron Johnson

On 09/12/2017 01:45 AM, Frank Millman wrote:

Hi all
I am using 9.4.4 on Fedora 22.
I am experimenting with optimising a SQL statement. One version uses 4 
LEFT JOINs and a 5-way CASE statement in the body. The second moves the
filtering into the JOIN section, and I end up with 16 LEFT JOINs and no
CASE statements.
My test involves selecting a single row. Both versions work. The first 
version takes 0.06 seconds. The second takes 0.23 seconds. On further 
experimentation, the time for the second one seems to be taken in setting up
the joins, because if I omit selecting anything from the joined tables, it 
still takes 0.23 seconds.
Exactly the same exercise on Sql Server results in 0.06 seconds for both 
versions.
I realise that, if I was selecting a large number of rows, 0.23 seconds is 
trivial and the overall result could be different. But still, it seems odd.


Just out of curiosity, what if you PREPARE the statement, and take multiple 
timings?


--
World Peace Through Nuclear Pacification



[GENERAL] B-tree index on a VARCHAR(4000) column

2017-09-07 Thread Ron Johnson

Hi,

v 9.2.7

Based on LENGTH(offending_column), none of the values are more than 144 
bytes in this 44.2M row table.  Even though VARCHAR is, by definition, 
variable length, are there any internal design issues which would make 
things more efficient if it were dropped to, for example, VARCHAR(256)?


(I don't have access to the source code or to development boxes, so can't 
just test this on my own.)
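One small aside: length() counts characters while the index stores bytes, so
a check along these lines is a bit more direct (column name from the post,
table name hypothetical):

    SELECT max(length(offending_column))       AS max_chars,
           max(octet_length(offending_column)) AS max_bytes
    FROM the_44M_row_table;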


Thanks,

--
World Peace Through Nuclear Pacification





Re: [GENERAL] WAL & ready files retained after turning off log shipping

2017-09-07 Thread Ron Johnson

On 09/07/2017 05:07 PM, Michael Paquier wrote:

On Thu, Sep 7, 2017 at 11:08 PM, Tom Lane  wrote:

Manual cleanup shouldn't be very hard, fortunately.  Run pg_controldata
to see where the last checkpoint is, and delete WAL files whose names
indicate they are before that (but not the one including the checkpoint!).
If you don't intend to do archiving any more, you can just flush all the
.ready files (and .done if any) without much thought.

It would be less risky to do that as a two-step move:
- First change archive_command to /sbin/true and let all archives be
switched to .done.
- And then disable archive_mode.
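In postgresql.conf terms, a sketch of those two steps (archive_command takes
effect on reload, while changing archive_mode requires a restart on these
releases):

    # step 1: make archiving a no-op, then: pg_ctl reload
    archive_command = '/bin/true'

    # step 2: once everything is marked .done, then: pg_ctl restart
    archive_mode = off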


Interesting.  Thanks.

--
World Peace Through Nuclear Pacification





Re: [GENERAL] WAL & ready files retained after turning off log shipping

2017-09-07 Thread Ron Johnson

On 09/07/2017 09:08 AM, Tom Lane wrote:

Ron Johnson <ron.l.john...@cox.net> writes:

After disabling log shipping via setting "archive_mode = off", and then
running "pg_ctl reload", old WAL files and their associated .ready files
aren't being deleted.

Hmm.  I might be misremembering, but I think that it's the archiver
process that is in charge of deleting those files, so that this behavior
doesn't seem surprising.

I don't think anybody's thought very hard about how to clean up if
archiving had been running and then you turn it off with not everything
having been archived.


Presumably, restarting postgres will fix that?


Manual cleanup shouldn't be very hard, fortunately.  Run pg_controldata
to see where the last checkpoint is, and delete WAL files whose names
indicate they are before that (but not the one including the checkpoint!).
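A hedged sketch of that lookup on 8.4 (data directory path and the location
value are hypothetical; pg_xlogfile_name() maps the REDO location to a WAL
file name, and files sorting before that name are removal candidates):

    $ pg_controldata /var/lib/pgsql/data | grep 'REDO location'
    # paste the reported location into pg_xlogfile_name():
    $ psql -Atc "SELECT pg_xlogfile_name('0/9A000020')"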


All WAL files after log shipping was stopped will keep accumulating "forever"?


If you don't intend to do archiving any more, you can just flush all the
.ready files (and .done if any) without much thought.


Many thanks.

--
World Peace Through Nuclear Pacification





[GENERAL] WAL & ready files retained after turning off log shipping

2017-09-07 Thread Ron Johnson

Hi,

v8.4 (and there's nothing I can do about it).

After disabling log shipping via setting "archive_mode = off", and then
running "pg_ctl reload", old WAL files and their associated .ready files
aren't being deleted.


Is there any document you can point me to as to why this is happening, and 
what I can do to stop it?


Thanks

--
World Peace Through Nuclear Pacification





Re: [GENERAL] Veritas cluster management

2017-08-30 Thread Ron Johnson

On 08/30/2017 08:48 AM, Scott Mead wrote:



On Wed, Aug 30, 2017 at 9:43 AM, Ron Johnson <ron.l.john...@cox.net> wrote:


Hi,

For any of you with those failover clusters, do you know if "pg_ctl
reload" works (for compatible config file changes), or must we bounce
the database using "hares -offline" then "hares -online"?


pg_ctl reload does work in this case. HOWEVER, if you do something that
could cause trouble to the cluster (e.g. a pg_hba.conf change that breaks
connectivity), this could cause Veritas to try to fail over.  It's
recommended that you test your changes to avoid these scenarios.


  Technically, however, pg_ctl reload works just fine; just don't break
anything :)


That's great news.  Many thanks.

--
World Peace Through Nuclear Pacification



[GENERAL] Veritas cluster management

2017-08-30 Thread Ron Johnson

Hi,

For any of you with those failover clusters, do you know if "pg_ctl reload" 
works (for compatible config file changes), or must we bounce the database 
using "hares -offline" then "hares -online"?


Thanks

--
World Peace Through Nuclear Pacification





Re: [GENERAL] Logging the fact that a log was shipped

2017-08-28 Thread Ron Johnson

On 08/28/2017 08:22 AM, Stephen Frost wrote:

* Christoph Moench-Tegeder (c...@burggraben.net) wrote:

## Ron Johnson (ron.l.john...@cox.net):


How is this done in v8.4?  (I tried adding "date; rsync ..." but pg
didn't like that *at all*.)

There's a DEBUG1-level log message on successful archive_command
completion - that would give you a lot of other low-prio log
messages which you probably don't care about.
I'd put a wrapper around your rsync (a short shell script
would be sufficient) along the lines of rsync ... && logger "done",
that way you'd get the information via syslog.
On the other hand, do you really need the details about each WAL
segment? Since 9.4 there's pg_stat_wal_archiver... You're really
making your job harder than it needs to be with that ancient
PostgreSQL...

Worse, such scripts run the serious risk of losing WAL if a crash
happens because nothing is ensuring that the WAL has been sync'd to disk
before returning from the archive_command.

Most of the existing tools for dealing with WAL archival (pgbackrest,
barman and WAL-E, at least) already log successful and unsuccessful
archive command runs.  I'm pretty sure barman supports back to 8.4 and I
know pgbackrest does.


Thanks for the info on pgbackrest.

--
World Peace Through Nuclear Pacification





Re: [GENERAL] Logging the fact that a log was shipped

2017-08-28 Thread Ron Johnson

On 08/28/2017 06:06 AM, Christoph Moench-Tegeder wrote:

## Ron Johnson (ron.l.john...@cox.net):


How is this done in v8.4?  (I tried adding "date; rsync ..." but pg
didn't like that *at all*.)

There's a DEBUG1-level log message on successful archive_command
completion - that would give you a lot of other low-prio log
messages which you probably don't care about.
I'd put a wrapper around your rsync (a short shell script
would be sufficient) along the lines of rsync ... && logger "done",
that way you'd get the information via syslog.
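A sketch of such a wrapper (destination host and paths hypothetical; set
archive_command = '/usr/local/bin/ship_wal.sh %p %f'):

    #!/bin/sh
    # ship_wal.sh: archive one WAL segment and log the outcome to syslog
    p="$1"; f="$2"
    if rsync -a "$p" standby:/wal_archive/"$f"; then
        logger -t wal-archive "shipped $f"
    else
        logger -t wal-archive "FAILED to ship $f"
        exit 1   # non-zero tells postgres the segment was not archived
    fi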


And if logging to stderr?


On the other hand, do you really need the details about each WAL
segment? Since 9.4 there's pg_stat_wal_archiver... You're really
making your job harder than it needs to be with that ancient
PostgreSQL...


That's far beyond my control.

--
World Peace Through Nuclear Pacification





[GENERAL] Logging the fact that a log was shipped

2017-08-27 Thread Ron Johnson

Hi,

How is this done in v8.4?  (I tried adding "date; rsync ..." but pg didn't 
like that *at all*.)


Thanks




Re: [GENERAL] Log shipping in v8.4.7

2017-08-27 Thread Ron Johnson

On 08/27/2017 02:23 PM, Christoph Moench-Tegeder wrote:

## Ron Johnson (ron.l.john...@cox.net):


Everything I've read says that you should use "rsync -a".  Is there
any reason why we can't/shouldn't use "rsync -az" so as to reduce
transfer time?

On today's LANs, total archiving time is dominated by connection
startup time (how long does it take to transfer 16MB on a 10GbE link?
See...).


And if we've only got a WAN link from one DC to another 360 miles away?


That's even worse when using rsync via ssh transport without
ssh's connection multiplexing - key exchange and authentication
can easily take longer than the data transfer. Compression won't
save you much time, but sure won't break anything either (but
it will take some amount of CPU time).
On really slow links, your mileage may vary.
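For the connection multiplexing mentioned above, a sketch of the ssh client
config on the primary (host alias hypothetical; ControlPersist needs a
reasonably modern OpenSSH):

    # ~/.ssh/config for the postgres user
    Host standby
        ControlMaster auto
        ControlPath ~/.ssh/cm-%r@%h:%p
        ControlPersist 10m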


Also, does that change require a full restart (difficult with
production systems)?

Even in 8.4 archive_command is marked PGC_SIGHUP, so a reload
will be sufficient. The sample configuration and perhaps pg_settings
(can't remember how informative that was back then) should
tell you the same.


Thanks





[GENERAL] Log shipping in v8.4.7

2017-08-27 Thread Ron Johnson

Hi,

(Yes, it's old.  Nothing I can do about that.)

Everything I've read says that you should use "rsync -a".  Is there any 
reason why we can't/shouldn't use "rsync -az" so as to reduce transfer time?


Also, does that change require a full restart (difficult with production 
systems)?


Thanks





Re: [GENERAL] PG and database encryption

2017-08-22 Thread Ron Johnson

On 08/22/2017 02:55 PM, Joshua D. Drake wrote:

On 08/22/2017 12:48 PM, rakeshkumar464 wrote:

We have a requirement to encrypt the entire database.  What is the best tool
to accomplish this? Our primary goal is that it should be transparent to the
application, with no change in the application, as compared to an unencrypted
database. Reading about the pgcrypto module, it seems it is good for a few
columns only, and using it to encrypt the entire database is not a good use-case.

Is this best done by file-level encryption?  What are the good
tools on Linux (RHEL), preferably open-source?


If you are encrypting the entire database, use the filesystem.

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Security_Guide/chap-Security_Guide-Encryption.html 



But that's protection against stolen laptops.

--
World Peace Through Nuclear Pacification





Re: [GENERAL] Fwd: 2 process postgres -D for one instance

2017-08-11 Thread Ron Johnson

On 08/11/2017 02:35 PM, Peter Eisentraut wrote:

On 8/11/17 10:15, Murtuza Zabuawala wrote:

Sometimes we have 2 postgres processes for 1 instance, like this:


exppgs*17769*  1  0 01:06 ?00:01:04
/usr/pgsql-9.3/bin/postgres -D /bases/postgresql/scl/data -i -p 5450 -h
bd-sillage.info.
exppgs   39922*17769*   0 15:39 ?00:00:00
/usr/pgsql-9.3/bin/postgres -D /bases/postgresql/scl/data -i -p 5450 -h
bd-sillage.info.

It appears that the second one is a process forked off from the first
one.  That looks normal to me.


Because of the reference to 17769 in the second entry?



Re: [GENERAL] CPU

2007-12-03 Thread Ron Johnson

On 12/03/07 21:27, Joshua D. Drake wrote:
 Uwe C. Schroeder wrote:
 On Monday 03 December 2007, Tom Allison wrote:
 is there much of a difference in performance between a XEON, dual
 core from intel and a dual core AMD 64 CPU?
 
 Well honestly, with how cheap you can get a quad core from Intel... I
 say do that :). The general difference between a dual core opteron and a
 dual core xeon will likely not be noticeable to a PostgreSQL
 installation (generally speaking).
 
 However, the two extra cores (even if slower), will greatly help if you
 have any kind of concurrency.

Are there any heat/power considerations?  An Opteron will most
likely draw less power, generate less heat, be easier to cool and
thus generate less noise.

Of course, the heat and whine from those 10K and 15K SCSI drives
will override any possible Opteron CPU fan quietness.

--
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels



Re: [GENERAL] postgresql in ramdisk

2007-12-02 Thread Ron Johnson

On 12/02/07 04:43, oruc çimen wrote:
 hi;
 i have tested postgresql in memory  but in ramdisk is not faster than
 hardisk.
 why??? if there are some option for postgresql in ramdisk, pls help me
 i need too much fast db if you know another way for fast db pls send a
 mail to me
 thank you

If you explain what kind of task needs such high speeds (lots of
reads, lots of simple writes, busy web server, etc), and what kind
of hardware you have, then we can begin to ask you more detailed
questions.

 (sorry for my bad english:(   )

Bed English?  You should see my Russian!!!

--
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels



Re: [GENERAL] One or more tables?

2007-12-02 Thread Ron Johnson

On 12/02/07 07:35, rokj wrote:
 Hi.
 
 For example, let me say that I have a big (over 1 million) user
 base. Then every user does a lot of inserting/updating of data.
 Would it be better to create different tables for inserting/updating for
 every user, or would it be better just to have one big table with all
 data (the tables would of course have the same columns, ...)? How do you
 cope with this kind of thing?
 
 1.example (1 enormous table)
 tablename (id, user_id, datetime, some_data)
 
 2. example (a big number of tables)
 tablename_user_id( id, datetime, some_data)

This should help you to decide how to design your tables.  3NF is as
far as you really need to go.

http://en.wikipedia.org/wiki/Data_normalization
http://pubs.logicalexpressions.com/Pub0009/LPMArticle.asp?ID=88

--
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels



Re: [GENERAL] One or more tables?

2007-12-02 Thread Ron Johnson

On 12/02/07 14:58, Usama Dar wrote:
 On Dec 2, 2007 6:35 PM, rokj [EMAIL PROTECTED] wrote:
 
 Hi.

  For example, let me say that I have a big (over 1 million) user
  base. Then every user does a lot of inserting/updating of data.
  Would it be better to create different tables for inserting/updating for
  every user, or would it be better just to have one big table with all
  data (the tables would of course have the same columns, ...)? How do you
  cope with this kind of thing?

 1.example (1 enormous table)
 tablename (id, user_id, datetime, some_data)

 2. example (a big number of tables)
 tablename_user_id( id, datetime, some_data)
 
 
 Although there isn't enough information in the email, instead of
 creating a separate table for every user you could use one table,
 partitioned on userid; that would, however, add a maintenance overhead
 whenever you add a new user.

Cluster by *range* of user ids, and preallocate some number of
tablespaces.
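A sketch of that range idea using the era's inheritance-style partitioning
(names and boundaries hypothetical; each child can live in its own
tablespace):

    CREATE TABLE user_data (
        id        bigserial,
        user_id   integer     NOT NULL,
        datetime  timestamptz NOT NULL DEFAULT now(),
        some_data text
    );
    -- one child table per range of user ids
    CREATE TABLE user_data_r0 (CHECK (user_id >= 0      AND user_id < 100000))
        INHERITS (user_data) TABLESPACE ts0;
    CREATE TABLE user_data_r1 (CHECK (user_id >= 100000 AND user_id < 200000))
        INHERITS (user_data) TABLESPACE ts1;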

--
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels



Re: [GENERAL] Stored procedure issue

2007-12-01 Thread Ron Johnson

On 12/01/07 20:40, Dragan Zubac wrote:
 Hello
 
 I have a stored procedure which does the billing stuff
 in our system. It works OK, but if I put it in
 production, where there are some 5-10 billing events per
 second, the whole database slows down. It won't even
 drop some test table, reindex, or vacuum -- things which were
 done before in the blink of an eye. If I stop the
 application which calls the procedure, all is back to
 normal.
 
 We didn't implement any special locking mechanism in
 the procedure; all is default. The procedure is
 updating the user's balance in table 'users'. On the other
 hand, a couple of 'heavy load' tables have foreign keys
 pointing to table 'users'.
 
 Is it a matter of concurrency and some locking issue,
 or maybe the existence of all those foreign keys
 pointing to table 'users', or maybe something else
 which we're not aware of at the moment?

Are you using transactions?

Are the tables properly indexed?

Are the queries in the SP using the indexes properly?

Did you do all the testing on a tiny database?

Is the SP written as efficiently as possible?  (Think of ways to refactor it in
order to get the same results with less effort.)

--
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels



Re: [GENERAL] Linux v.s. Mac OS-X Performance

2007-11-28 Thread Ron Johnson

On 11/28/07 11:13, Magnus Hagander wrote:
 On Wed, 2007-11-28 at 07:29 -0700, Scott Ribe wrote:
 Yes, very much so. Windows lacks the fork() concept, which is what makes
 PostgreSQL much slower there.
 So grossly slower process creation would kill postgres connection times. But
 what about the cases where persistent connections are used? Is it the case
 also that Windows has a performance bottleneck for interprocess
 communication?
 
 There is at least one other bottleneck, probably more than one. Context
 switching between processes is a lot more expensive than on Unix (given
 that win32 is optimized towards context switching between threads). NTFS

Isn't that why Apache2 has separate thread mode and 1.x-style
pre-forked mode?

 isn't optimized for having 100+ processes reading and writing to the
 same file. Probably others..

--
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels



Re: [GENERAL] Select all fields except one

2007-11-28 Thread Ron Johnson

On 11/28/07 11:18, Vivek Khera wrote:
 
 On Nov 28, 2007, at 11:06 AM, Matt Doughty wrote:
 
 Is there a way of selecting all fields except for one in particular?
 I'd like to create a query that says something like:

 select * except fieldx


 
 For best practices, you should never use select * in your queries.  You
 will inevitably end up with code that cannot deal with a schema change,
 and for any live system, you will have a schema change at some point...

Remember back in the late-80s when Data Dictionaries were the rage?
 (Some legacy systems still have these.)  3GL structs/copybooks can be
auto-generated from the CDD, so any added columns are auto-added to
your record structure.

Of course, you still have to rebuild your apps.

 It is best to explicitly list the field names your code is expecting. 
 Besides, I don't think you can do what you want to do with just SQL.

--
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels



Re: [GENERAL] PostgresSQL vs. Informix

2007-11-28 Thread Ron Johnson

On 11/28/07 17:11, Jeff Larsen wrote:
[snip]
 
 Lastly on the Informix side, they have more advanced online
 backup/restore tools. It's similar to PG PITR backup but does not
 depend on file-system level backup tools. The option I use (called
 ontape) stores data in a proprietary format to disk or tape. It also
 has an incremental backup option so you don't have to do a full dump
 every time. There is a more advanced Informix backup tool called
 onbar, but I haven't worked with it enough to comment on it.
 
 What does PG have going for it? Price, obviously. I'd love to have
 that $100k that I just spent back. PG has better conformance to SQL
 language standards, so portability of code would be easier. PG has
 some better built in functions and indexing features. I prefer the
 PLPGSQL language for stored procedures to Informix.  PG has more options
 for stored procedure languages (python, perl). PG has table
 inheritance, Informix does not.

That's similar to how I'd compare PG to the niche legacy database
(Rdb/VMS) that we use.

 One of the most impressive things about PG has been these mailing
 lists. Informix support is OK, but the front-line support drones just
 don't have the same access to developers who really know what's going
 on that you can get directly on this list. Heck, PG developers have
 answered my questions here on the weekend! I don't know if you can
 even put a price on such direct access to high-level gurus.

Rdb has Informix beat there.  Greybeard engineers are always on the
Rdb mailing list, and the support staff are long-timers who have
access to a continuously updated 20+year VAX NOTES database that
came along when Oracle purchased Rdb from DEC.

 I wish I had a better feature comparison list for you. I'm sure I've
 failed to mention a lot of great things about PG here, but like I
 said, my evaluation has been pretty informal. However at this time, I
 have concluded that we could move our company from Informix to PG
 without having to give up too much, other than the big licensing fees.
 We use a lot of open source software at our company and I would love
 to add PostgreSQL to the list.

We couldn't do it because our databases are too big for single-
threaded backups.  The proprietary backup scheme is transactionally
consistent, and its format lets us easily restore to systems with wildly
different disk layouts than the original database.

--
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels



Re: [GENERAL] Linux v.s. Mac OS-X Performance

2007-11-27 Thread Ron Johnson

On 11/27/07 18:01, Scott Ribe wrote:
 In general, you can expect any Unix based OS, which includes MacOS X, to
 perform noticeably better than Windows for PostgreSQL.
 
 Is that really true of BSD UNIXen??? I've certainly heard it's true of
 Linux. But with BSD you have the kernel funnel which can severely limit
 multitasking, regardless of whether threads or processes were used. Apple
 has been working toward finer-grained locking precisely because that was a
 serious bottleneck which limited OS X server performance.
 
 Or have I misunderstood and this was only the design of one particular
 flavor of BSD, not BSDen in general?

IIRC, FreeBSD got rid of the Giant Lock back in v5.x.

There was a benchmark in Feb 2007 which demonstrated that FBSD 7.0
scaled *better* than Linux 2.6 after 4 CPUs.
http://jeffr-tech.livejournal.com/5705.html

Turns out that there was/is a bug in glibc's malloc().  Don't know
if it's been fixed yet.

--
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels



Re: [GENERAL] Linux v.s. Mac OS-X Performance

2007-11-27 Thread Ron Johnson

On 11/27/07 19:36, Gregory Stark wrote:
 Joshua D. Drake [EMAIL PROTECTED] writes:
[snip]
 
 That was true of the traditional BSD 4.3 and 4.4 design. However when people
 refer to BSD these days they're referring to one of the major derivatives
 which have all undergone extensive further development. FreeBSD has crowed a
 lot about their finer-grained kernel locks too for example. Other variants of
 BSD tend to focus on other areas (like portability for example) so they may
 not be as far ahead but they've still undoubtedly made significant progress
 compared to 1993.

NetBSD and OpenBSD are still pretty not-good at scaling up.

But they're darned good at running on 68K Macs (NBSD) and
semi-embedded stuff like low-end firewalling routers (OBSD).

 Not much of a kernel guy here but my understanding is that MacOSX is
 basically NeXT version 10, which means... Mach... which is entirely
 different than say FreeBSD at the kernel level.
 
 I think (but I'm not sure) that the kernel in OSX comes from BSD. What they
 took from NeXT was the GUI design and object oriented application framework
 stuff. Basically all the stuff that Unix programmers still haven't quite
 figured out what it's good for.

Even AfterStep is written in plain C...

--
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels



Re: [GENERAL] Linux v.s. Mac OS-X Performance

2007-11-27 Thread Ron Johnson

On 11/27/07 19:35, Greg Smith wrote:
[snip]
 to you.  The minute performance becomes a serious concern, you'd be much
 better off with Linux, one of the BSDs that's not hobbled by using the
 Mach kernel, or one of the more serious UNIXes like Solaris.

Wasn't there a time (2 years ago?) when PG ran pretty dog-like on SPARC?

--
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels



Re: [GENERAL] Primary Key

2007-11-26 Thread Ron Johnson

On 11/26/07 12:11, Steve Crawford wrote:
[snip]
 
 If we presume that the plate is a key to a vehicle, then we immediately
 run into problems as a vehicle can, over time, have several plates
 (lost, stolen, changed to vanity...) and a plate can belong,
 sequentially, to several vehicles (especially when vanity plates are
 transferred to new cars).
 
 And when you have your char(6) plate-number column, they run out of
 numbers and switch to 7-characters requiring changes to all tables that
 used the plate as a key. Or you realize that ABC123 could be
 ABC123-California, ABC123-Nevada or ABC123-New York (I'm assuming that
 AAA999 is a valid format in those states).

We use this as a *non*-unique index:
PLATE_NUMBERCHAR(10)
PLATE_STATE CHAR(2)
PLATE_COUNTRY   CHAR(4)

The country field could be dropped off and Canada/USA differentiated
by the state/province code, but with NAFTA it's possible that
Mexican plates will turn up soon, and there's always the off
chance that a European car will show up.

(We used to have PLATE_STATE first, but then discovered how many
northeasterners don't know what state their vehicle is registered in.)

--
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels



Re: [GENERAL] Disk arrangement in a cheap server

2007-11-25 Thread Ron Johnson



On 11/25/07 01:28, Alex Turner wrote:
 Why the hell would you buy a 1U chassis in the first place when
 perfectly good, cheap 4U chassis exist that will take 8 or more drives?
 
 1U motherboards are a pain, 1U power supplies are a pain and 1U space
 for drives sucks.
 
 Most tests I've seen these days show that there is very little actual
 benefit from separating pg_xlog and tablespace if you have a half decent
 controller card.  In fact you are better off putting it all on one nice
 RAID 10 to get the good read performance that splitting it up will lose.
 
 If you don't have a decent controller card, RAID 0 will suck too.
 Notably, onboard SATA RAID often sucks.

pg_xlog and tablespaces should be on as much different hardware as
possible, to reduce the likelihood that a single part failure will
knock out both directory structures.

 Alex
 
 On Nov 24, 2007 12:06 PM, Steve Atkins [EMAIL PROTECTED] wrote:
 
 
 On Nov 24, 2007, at 8:17 AM, Ron Johnson wrote:
 
 
  On 11/24/07 09:12, Scott Marlowe wrote:
  On Nov 24, 2007 5:09 AM, Clodoaldo [EMAIL PROTECTED] wrote:
  I will build a cheap server and I'm in doubt about what would be the
  best for performance:
 
  1 - everything in one lonely fast 10,000 rpm Raptor HD;
 
  2 - two cheap 7,200 rpm 16MB cache HDs like this:
 
  disk 1 - system and pg_xlog
 
 This doesn't really buy you much. The supposed advantage of having
 pg_xlog on its own drive is so that the head doesn't need to seek. If
 it's on the system drive it'll be competing with, at least, syslog.
 
  disk 2 - pg_data without pg_xlog
  or a better arrange suggested by you;
 
  3 - The two cheap HDs above in Raid 0.
 
  From a DBA perspective, none of those seem like a good choice, as
  there's no redundancy.
 
  I'd make the two 7200 RPM drives a RAID-1 and have some redundancy so
  a single disk failure wouldn't lose all my data.  then I'd start
  buying more drives and a good RAID controller if I needed more
  performance.
 
 It depends on what the box is used for, but for most cases where the
 data
 is valuable, that sounds like a much better idea.
 
 For batch data crunching, where the data is loaded from elsewhere then
 processed and reported on, the cost of losing the data is very low, and
 the value of the machine is increased by RAID0-ing the drives to make
 the crunching faster... RAID0 could be good. That's probably not the
 case
 here.
 
 
  Remember: disks are *cheap*.  Spend an extra US$250 and buy a couple
  of 500GB drives for RAID 1.  You don't mention what OS you'll use,
  but if you really need cheap then XP and Linux do sw RAID, and FreeBSD
  probably does too.
 
 
 Disks aren't necessarily cheap. Disks are fairly expensive, especially
 when you need more spindles than will fit into the servers chassis
 and you
 need to move to external storage. Disk n+1 is very expensive, likely
 more expensive than the cheap 1U server you started with.
 
 Two, though, does seem to be false economy for a server that'll be
 running a database, when you can get a 1U chassis that'll take 4 drives
 pretty cheaply.

--
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels



Re: [GENERAL] Disk arrangement in a cheap server

2007-11-24 Thread Ron Johnson

On 11/24/07 09:12, Scott Marlowe wrote:
 On Nov 24, 2007 5:09 AM, Clodoaldo [EMAIL PROTECTED] wrote:
 I will build a cheap server and I'm in doubt about what would be the
 best for performance:

 1 - everything in one lonely fast 10,000 rpm Raptor HD;

 2 - two cheap 7,200 rpm 16MB cache HDs like this:

 disk 1 - system and pg_xlog
 disk 2 - pg_data without pg_xlog
 or a better arrange suggested by you;

 3 - The two cheap HDs above in Raid 0.
 
 From a DBA perspective, none of those seem like a good choice, as
 there's no redundancy.
 
 I'd make the two 7200 RPM drives a RAID-1 and have some redundancy so
 a single disk failure wouldn't lose all my data.  then I'd start
 buying more drives and a good RAID controller if I needed more
 performance.

Remember: disks are *cheap*.  Spend an extra US$250 and buy a couple
of 500GB drives for RAID 1.  You don't mention what OS you'll use,
but if you really need cheap then XP and Linux do sw RAID, and FreeBSD
probably does too.

--
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels



Re: [GENERAL] Primary Key

2007-11-23 Thread Ron Johnson

On 11/23/07 03:33, Peter Childs wrote:
 
 
  On 21/11/2007, *Sascha Bohnenkamp* [EMAIL PROTECTED] wrote:
 
  Fie on you evil synthetic key lovers.  Long live the Natural Key!
 
 think of a foreign key referencing a primary key with 6 columns *urgs*
 never!
 
 
 
  The worst thing I meet is people who think primary keys need to be
  single-field unique integer serial fields.
  
  I tend to agree that primary keys should be single fields if they need
  to be referenced, but should also be natural if at all possible, i.e. use
  car number plates rather than some serial int.

I wouldn't trust plate number to be unique over time, since the
format ABC 123 only has a capacity of 17,576,000 vehicles.

--
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels



Re: [GENERAL] Primary Key

2007-11-17 Thread Ron Johnson

On 11/17/07 01:21, Gregory Stark wrote:
 Ron Johnson [EMAIL PROTECTED] writes:
 
 On 11/16/07 12:50, João Paulo Zavanela wrote:
 Hello,

 How many fields are recommended for a primary key?
 I'm thinking of creating one with 6 fields; is that much?
 The number of recommended fields is the *minimum* number required
 for uniqueness.  1 or 6 or 24.  Doesn't matter.
 
 Unless of course you care about every other table being 24x larger and slower
 due to having all these copies of the 24 fields. And of course unless you care
 about being able to handle the inevitable day when it turns out the 24 fields
 aren't unique and you need to consider adding a 25th column to the table *and
 every table referencing it* as well as changing every line of application code
 to use the new column.

What's got to be done has got to be done.

On one of our systems, the natural PK of an electronic road toll is:
ETC_ACCOUNT_ID  INTEGER
FISCAL_PERIOD   INTEGER
LANE_TX_ID  BIGINT
TX_TYPE_ID  CHAR(1)
TX_SUBTYPE_IND  CHAR(1)

On another, it's:
ETC_ACCOUNT_ID  INTEGER
FISCAL_PERIOD   INTEGER
LANE_TX_ID  BIGINT
DEVICE_NO   CHAR(12)  added column
TX_TYPE_ID  CHAR(1)
TX_SUBTYPE_IND  CHAR(1)


If the PK was synthetic and generated by the engine, then a (buggy)
app could insert duplicate tolls and the system wouldn't utter a
peep.  But the customer sure would when he saw the duplicate entries.

Note the seemingly *synthetic* field LANE_TX_ID.

Records coming in from the lane are inserted into the T_LANE_TX
table which has the PK of LANE_TX_ID.  However, that table also has
a natural unique index of LANE_ID, TX_DATE, TX_TIME, TX_SEQ_NUMBER

Likewise, T_LANE has the synthetic PK of LANE_ID, but it
back-stopped by a natural unique index PLAZA_ID, EXTERN_LANE_ID.

And... T_PLAZA has the PK of PLAZA_ID and is back-stopped by the
natural unique index AGENCY_ID, EXTERN_PLAZA_ID.

Breaking the chain, T_AGENCY only has the synthetic key AGENCY_ID.

But it only has 27 rows.
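
In DDL terms, the back-stop pattern above looks roughly like this (a
sketch only; the column types are my guesses and the names abridged):

CREATE TABLE T_LANE_TX (
    LANE_TX_ID    BIGINT PRIMARY KEY,   -- synthetic
    LANE_ID       INTEGER NOT NULL REFERENCES T_LANE (LANE_ID),
    TX_DATE       DATE NOT NULL,
    TX_TIME       TIME NOT NULL,
    TX_SEQ_NUMBER INTEGER NOT NULL,
    -- the natural back-stop:
    UNIQUE (LANE_ID, TX_DATE, TX_TIME, TX_SEQ_NUMBER)
);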

--
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels


Re: [GENERAL] Primary Key

2007-11-16 Thread Ron Johnson

On 11/16/07 12:54, Sam Mason wrote:
 On Fri, Nov 16, 2007 at 03:50:20PM -0300, João Paulo Zavanela wrote:
 How many fields is recommended to create a primary key?
 I'm thinking to create one with 6 fields; is that much?
 
 Normally a primary key would just be a single column.  When you start
 going to that many I'd probably have a serial column as the primary key,
 and a UNIQUE index on those six fields.  Depends on what you're doing,
 though unless you've got a few years experience I'd be tempted to stay
 away from primary keys of more than a single column.

Fie on you evil synthetic key lovers.  Long live the Natural Key!

--
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels


Re: [GENERAL] Primary Key

2007-11-16 Thread Ron Johnson

On 11/16/07 12:50, João Paulo Zavanela wrote:
 Hello,
 
 How many fields is recommended to create a primary key?
 I'm thinking to create one with 6 fields; is that much?

The number of recommended fields is the *minimum* number required
for uniqueness.  1 or 6 or 24.  Doesn't matter.

--
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels


Re: [GENERAL] Primary Key

2007-11-16 Thread Ron Johnson

On 11/16/07 20:50, Greg Smith wrote:
[snip]
 
 He doesn't use that example anymore but still misses the point I tried
 to make.  The ability of the world to invalidate the assumptions that go
 into natural key assignment are really impressive.  I particularly enjoy
 that so many systems are built presuming that the Social Security number
 for a person is involatile that this topic comes up in their FAQ about
 identity theft:  http://www.socialsecurity.gov/pubs/10064.html

Natural PKs are *not* set in stone, and only stubborn fools won't
admit that they can't divine all situations.  So, you add a new
column to the PK and keep on going.

But still, there *are* some circumstances where natural PKs just
don't work.  After all, SSNs and credit card numbers are synthetic
(just not generated sequential by the RDBMS).

--
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels


Re: [GENERAL] Syntax error in a large COPY

2007-11-07 Thread Ron Johnson

On 11/07/07 11:35, Andrej Ricnik-Bay wrote:
 On 11/8/07, Collin Kidder [EMAIL PROTECTED] wrote:
[snip]
 
 
 No, just not everyone agrees with your viewpoint on this topic. Top
 posting has its place and some of us prefer it.
 But they could just adhere to the law of the land, or when in
 Rome, ...  practice instead of kicking off fuss.  And with my
 mail client top-posting has no place.  Let's just stick to good
 old standards.

<SARCASM>
What ever happened to "I gotta do what's right for me!" and "I'm OK,
you're OK"?
</SARCASM>

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: DB on a ramdisk (was Re: [GENERAL] Temporary, In-memory Postgres DB?)

2007-11-07 Thread Ron Johnson


Undo an initdb?  Probably the same way you undo unlinking an SQLite
database.

Maybe being wrapped in my own little niche I just don't know enough
about the wide world of hyperfeaturitis, but making temporary DB
as a feature seems a little vague.

It doesn't really take that long to create a new database
(especially if it's scripted!), and it's even faster if you make the
temporary DB a schema off a public database.

On 11/07/07 11:27, Gauthier, Dave wrote:
 I understand caching. 
 
 Here's the reason I'm inquiring into this line of thought...
 
 I already have a big on-disk DB with lots and lots of data, and lots
 of stored functions and a data model that works with DB loaders and
 other code that queries out data.  I also have users that want all of
 that, except for the particular data content.  They basically want to
 load a DB with data that's in their scratch area without polluting
 what's in the main DB. The cardinality of this personal, scratch
 data will be orders of magnitude smaller than what's in the main (could
 all fit in memory).  And once loaded,  they would be able to run all the
 same DB load and query tools that work on the main DB, just redirect to
 the small, personal DB.  
 
 This would be a good app for SQLite, but SQLite can't do a lot of the
 things that are running in the main DB (like stored procedures). 
 
 It's become clear that PG cannot do a pure in-memory DB like SQLite.
 It's why I initially called this a longshot and the answer to my
 question is probably no.  But enabling something like a pure in-memory
 (and temporary) DB for small apps that can reap all the wonderful
 features of PG would make it very attractive for some users.  Just
 something to think about for future development.
 
 One question I had earlier that I don't think got answered was how to
 undo an initdb.  dropdb drops a DB, but how do I undo an initdb?
 
 
 -dave
 
 
 
 
 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
 Sent: Wednesday, November 07, 2007 12:05 PM
 To: Ron Johnson
 Cc: pgsql-general@postgresql.org
 Subject: Re: DB on a ramdisk (was Re: [GENERAL] Temporary, In-memory
 Postgres DB?) 
 
 Ron Johnson [EMAIL PROTECTED] writes:
 On 11/07/07 09:58, Tom Lane wrote:
 Or put it on a ramdisk filesystem.
 
 But doesn't that just add more overhead and reduce the amount of
 memory that the OS can cache things in?
 
 It's very possibly not a win, but the kinds of people who ask this
 question at all do not understand the concept of caching, so I'm
 sure they'll be happier with a solution where the data demonstrably
 never hits disk ;-)
 
 A case where it could be a win is where you are thrashing the DB with
 heavy update load.  Even if everything is cached there will be a pretty
 serious amount of disk write traffic, which'd possibly interfere with
 other system activity.
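
If someone really wants to try it anyway, the ramdisk route is at least
cheap to test: just point a tablespace at an already-mounted tmpfs.  A
minimal sketch (the mount point /mnt/pgram is hypothetical, and
everything in it vanishes at reboot):

-- assumes the OS admin has done something like:
--   mount -t tmpfs -o size=2g tmpfs /mnt/pgram
CREATE TABLESPACE ram_ts LOCATION '/mnt/pgram';
CREATE DATABASE scratch_db TABLESPACE = ram_ts;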

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


DB on a ramdisk (was Re: [GENERAL] Temporary, In-memory Postgres DB?)

2007-11-07 Thread Ron Johnson

On 11/07/07 09:58, Tom Lane wrote:
 Ron Johnson [EMAIL PROTECTED] writes:
 On 11/07/07 09:03, Gauthier, Dave wrote:
 Is there such a thing as a temporary, probably in-memory, version of a
 Postgres DB?
 
 If you have enough RAM, and your database is small enough, the OS
 will eventually cache the whole thing.
 
 Or put it on a ramdisk filesystem.

But doesn't that just add more overhead and reduce the amount of
memory that the OS can cache things in?


--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] Temporary, In-memory Postgres DB?

2007-11-07 Thread Ron Johnson

On 11/07/07 09:03, Gauthier, Dave wrote:
 This is a real longshot, but here goes...
 
  
 
 Is there such a thing as a temporary, probably in-memory, version of a
 Postgres DB?  Sort of like SQLite, only with the features/function of
 PG?  A DB like this would exist inside of, and for the duration of, a
 script/program that created it, then vanish when the script/program
 ends.
 
  
 
 Probably not, but if not, then this would be (IMO) a great addition to
 have, something that'd really make it distinct from MySQL .  I'd use
 SQLite, but I want to have stored functions and other real database
 features that it just doesn't have.

If you have enough RAM, and your database is small enough, the OS
will eventually cache the whole thing.

I know that's not exactly what you're talking about, but I think
it's as close as you'll get.

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] How to create primary key

2007-11-06 Thread Ron Johnson

On 11/06/07 03:15, Richard Huxton wrote:
 Reg Me Please wrote:
 Hi all.

 What'd be the syntax to create a primary key on an already build table?
 
 Hmm - so you want to sort of alter a table and add a primary key
 constraint to an existing column?

While I understand your feelings, I must disagree.

That's because adding a secondary index does not have the same sort
of syntax:
ALTER TABLE foo ADD INDEX (some, existing, columns);

 ALTER TABLE my_table ADD PRIMARY KEY (existing_column);
 
 All in the manuals:
 http://www.postgresql.org/docs/8.2/static/sql-altertable.html
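
To spell the two cases out against a scratch table (all names made up):

CREATE TABLE my_table (existing_column integer NOT NULL);

-- primary keys go through ALTER TABLE:
ALTER TABLE my_table
    ADD CONSTRAINT my_table_pkey PRIMARY KEY (existing_column);

-- secondary indexes use a different statement entirely:
CREATE INDEX my_table_some_idx ON my_table (existing_column);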
 


--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] How to temporarily disable a table's FK constraints?

2007-11-05 Thread Ron Johnson

On 11/05/07 10:50, Kynn Jones wrote:
 Hi, everyone.
 
 Is there a standard way to disable a table foreign-key constraint temporarily?
 
 I thought that this would be a fairly common thing to want to do, but
 I only found this snippet online:
 
 -- to disable
 UPDATE pg_class SET reltriggers=0 WHERE relname = 'your_table';
 
 -- to re-enable
 UPDATE pg_class SET reltriggers = count( * )
   FROM pg_trigger WHERE pg_class.oid=tgrelid AND relname = 'your_table';
 
 and it appears that one needs to be root to execute these statements.
 
 Is there any other way for non-root users?

The whole idea of enforcing Relational Integrity in the database
engine is to *not* allow regular users to bypass data integrity
checks.
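
(If you are a superuser and determined to do it anyway, 8.1 and later
at least have an official spelling for it; a sketch, using the table
name from the snippet above:

ALTER TABLE your_table DISABLE TRIGGER ALL;  -- includes the RI triggers
-- ... bulk load, etc. ...
ALTER TABLE your_table ENABLE TRIGGER ALL;

Note that nothing ever re-checks whatever rows went in while the
triggers were off.)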

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] How to ALTER a TABLE to change the primary key?

2007-10-26 Thread Ron Johnson

On 10/26/07 04:39, Reg Me Please wrote:
 Hi all.
 I'd need to modify the primary key definition in an already populated table.
 How can I do it?

Have you tried dropping the constraint, and creating a new one?

http://www.postgresql.org/docs/8.2/interactive/sql-altertable.html
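
A sketch of the usual sequence (the constraint name is whatever \d
shows for your table; column names invented):

BEGIN;
ALTER TABLE my_table DROP CONSTRAINT my_table_pkey;
ALTER TABLE my_table ADD PRIMARY KEY (col_a, col_b);
COMMIT;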

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] Using Postgres as a embedded / SQL Lite database on Windows

2007-10-24 Thread Ron Johnson

On 10/24/07 15:04, Craig Hawkes wrote:
 Hi
 
 OK, Sorry but I did search, and did not find anything useful.
 
 Maybe putting forward a embedded solution as part of the question was the
 wrong option.
 
 If I could reword:
 
 Given that we have a large established client base running a Delphi/Paradox
 solution, and that we would like to replace Paradox with a much better SQL
 engine, I was looking for comments as to how Postgres may be suitable.
 
 We will be implementing more multi-user features - it really is only single
 user at this stage - plus we would like to be able to use better SQL,
 especially for reporting, where currently we have hundreds of lines of
 Delphi code which could be replaced with one or two reasonable SQL
 queries.

Hmmm.  Is Paradox that bad?  Or are you pinning too many hopes on SQL?

 There will really be two targets: replacing the existing single-user
 solutions, and providing a larger solution with multi-user support etc. There
 is, however, a large base of code shared between these solutions, and it
 would be preferable if they could use the same back-end engine or something
 very similar. I know some people will argue that you should use the right
 tool for the job, and I guess I am trying to see if there is a right tool
 which will work for both of these.
 
 My main/only real hesitation with Postgres is around supporting non-IT
 users, mainly around installation. I am keen to hear people's comments on
 this, and what steps could be taken to mitigate these concerns.

Install PostgreSQL on your box, run the app and see how much
maintenance it needs, and how much can be scripted.

If PG is installed as a service (which it should be, no?), then
certainly there is an API call to start the postgresql service at
the beginning of your app and another to stop the service during app
termination.

 Currently the main alternate which is being considered is SQL Server, using
 SQL Express, SQL Mobile/Compact. But I see this as limiting use to only
 features supported by the Compact edition, or having to install SQL Express
 - which I can see as being worse than supporting Postgres installs.

I think that you're just going to have to create a pilot project to
see how it fits your individual needs.

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] SQL spec/implementation question: UPDATE

2007-10-21 Thread Ron Johnson

On 10/21/07 17:52, Tom Lane wrote:
 andy [EMAIL PROTECTED] writes:
 I think you're comparing apples and oranges.  I'll bet that mysql is 
 taking a shortcut and testing the value before updating it.
 
 The update is probably more close to:
 update test set name = 'kevin' where passion = 'soccer' and name <> 'kevin';
 
 Yeah, that seems to be what they're doing.  PG does not bother to make
 such a test, on the grounds that it would waste more net cycles than it
 would save.  Most people are not in the habit of issuing lots of no-op
 updates.
 
 Also, if you have a case where you think that is what will happen, you
 can add the WHERE-condition for yourself; whereas there is no way in
 mysql to get rid of the useless test even if you know it's useless.

Not to bash MySQL (much...) but ISTM that this is another example of
MySQL playing fast and loose with SQL.

IOW, the RDBMS shouldn't try to out-think me even if I seem to
be doing something odd.

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] Abbreviation list

2007-10-19 Thread Ron Johnson

On 10/19/07 14:36, Alvaro Herrera wrote:
 Steve Crawford wrote:
 Bruce Momjian wrote:
 Tom Lane wrote:
 Steve Crawford [EMAIL PROTECTED] writes:
 My vote is to add Appendix I. Abbreviations.
 It seems more like FAQ material than something for the manual.
 I prefer the manual.  I would think the list would be pretty long and
 deal with lots of internals terms.
 My vote too. Just noticed I missed (probably one of many):
 BLOB - Binary Large Object
 
 Do we use that term?  Normally for us it's large objects.

Good news everyone!  Large objects is generic enough that if PG is
ever ported to the IBM 1400, you won't have to come up with a
new acronym: DLOB (Decimal Large OBject).

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] Temp Table

2007-10-19 Thread Ron Johnson

On 10/19/07 11:39, Bob Pawley wrote:
 When I fetch a row, as in the following, how do I copy the row's data into a 
 temporary table so that I can process it??
 
 begin work; 
   Declare loop_set  Cursor 
for Select  one, two, three, four, five, six, seven from loop_import 
order by loop_id ;
  fetch next From loop_set; 

Wouldn't it be simpler to do:
INSERT INTO some_temp SELECT field list FROM some_table;
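
Or, as a sketch that creates and fills the scratch table in one step
(loop_work is a made-up name; the columns are from your cursor above):

CREATE TEMP TABLE loop_work AS
    SELECT one, two, three, four, five, six, seven
    FROM loop_import
    ORDER BY loop_id;

and then work on loop_work set-wise instead of one FETCH at a time.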

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] Inconsistence in transaction isolation docs

2007-10-16 Thread Ron Johnson

On 10/16/07 07:08, Trevor Talbot wrote:
 On 10/16/07, Nico Sabbi [EMAIL PROTECTED] wrote:
 From:
 http://www.postgresql.org/docs/8.2/interactive/transaction-iso.html

 
 Read Committed is the default isolation level in PostgreSQL. When a
 transaction runs on this isolation level, a SELECT query sees only data
 committed before the query began; it never sees either uncommitted data
 or changes committed during query execution by concurrent transactions.
 (However, the SELECT does see the effects of previous updates executed
 within its own transaction, even though they are not yet committed.) In
 effect, a SELECT query sees a snapshot of the database as of the instant
 that that query begins to run. Notice that two successive SELECT
 commands can see different data, even though they are within a single
 transaction, if other transactions commit changes during execution of
 the first SELECT.
 

 to me the above sentence sounds inconsistent: it's asserting that both
 1) and 2) apply:

 1) it never sees ... changes committed during query execution by
 concurrent transactions
 
 During *query* execution.  If you start a SELECT that runs through a
 table from beginning to end, and while it is running some other
 transaction quickly commits a row to the end, this SELECT will not see
 it when it gets there.
 
 2) Notice that two successive SELECT commands can see different data,
 even though they
 are within a single transaction, if other transactions commit changes
 during execution
 of the first SELECT
 
 Within a single *transaction*.  If you run the above SELECT again, it
 will see the newly added row.

And this is the big difference between READ COMMITTED and
SERIALIZABLE.  With the latter, inside a single transaction the same
query will return the same result set over and over again regardless
of the updates to the base tables.

And is why READ COMMITTED makes your RDBMS fail part 3 (Isolation)
of the ACID test.
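
A quick way to see this from two psql sessions (some_table being any
scratch table):

-- session A
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT count(*) FROM some_table;   -- say it returns 10
-- session B meanwhile does: INSERT INTO some_table ...; COMMIT;
SELECT count(*) FROM some_table;   -- still 10 in this transaction;
                                   -- under READ COMMITTED it would be 11
COMMIT;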

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] improvement proposition

2007-10-16 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 10/16/07 08:21, hubert depesz lubaczewski wrote:
 hi,
 would it be possible for someone to add last query for
 pg_stat_activity view?
 
 there are a lot of cases where we have idle in transaction sitting for a
 long time, and since we don't log all queries it is next to impossible to
 tell where in the app code the problem lies.
 it would be very useful to get something like previous query.

Transaction analysis is the way to go here.  It requires a serious
code review, though.

 is adding something like this possible?

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] good sql tutorial

2007-10-04 Thread Ron Johnson

On 10/04/07 10:28, Geoffrey wrote:
 Anyone have a recommendation for a good sql tutorial?  Looking for a
 book, but online would be useful as well.
 
 This is for a financial user who will need to have an understanding of
 sql in order to generate reports with a report writer like openrpt.
 
 Thanks for any suggestions.

My suggestion: tell him that the SQL interface is broken.

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] good sql tutorial

2007-10-04 Thread Ron Johnson

On 10/04/07 11:06, Geoffrey wrote:
 Ron Johnson wrote:

 On 10/04/07 10:28, Geoffrey wrote:
 Anyone have a recommendation for a good sql tutorial?  Looking for a
 book, but online would be useful as well.

 This is for a financial user who will need to have an understanding of
 sql in order to generate reports with a report writer like openrpt.

 Thanks for any suggestions.

 My suggestion: tell him that the SQL interface is broken.
 
 Then we have to write the reports for him, which is what we are trying
 to get away from.

You'll see his SQL, squeal like a girl and then forbid him from ever
touching the database again.  I'm just saving you the intermediary
steps.

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] Feature Request - Defining default table space for Indexes in Conf file

2007-10-02 Thread Ron Johnson

On 10/01/07 13:22, S Sharma wrote:
 Hi All,
 
 The default table space defined in db conf file is used for all database
 tables as well as indexes. So putting the indexes on another table space
 requires manually dropping and re-creating indexes.
 It would be nice to have a feature to define a default table space for
 indexes in db conf file and all indexed are created in that table space.

ALTER INDEX foo SET TABLESPACE bar;

 This would allow creating a good database architecture to avoid disc
 contention easily.

How difficult is it to specify tablespace when creating an index?
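
Both are one-liners; a sketch with an invented tablespace name:

CREATE INDEX foo_col_idx ON foo (col) TABLESPACE fast_disks;
-- or move an existing index after the fact:
ALTER INDEX bar_col_idx SET TABLESPACE fast_disks;

And since 8.0, a load script can issue SET default_tablespace =
'fast_disks'; before building its indexes, which covers most of the
requested feature.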

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] pg_dump

2007-10-02 Thread Ron Johnson


One table or many?  Transactionally consistent?  Live connection or
intermittent?

On 10/02/07 17:37, Bob Pawley wrote:
 Is there a better method of transfering the database and data to between
 DBs?
 
 Bob
 
 
 - Original Message - From: Joshua D. Drake [EMAIL PROTECTED]
 To: Bob Pawley [EMAIL PROTECTED]
 Cc: pgsql-general@postgresql.org
 Sent: Tuesday, October 02, 2007 3:26 PM
 Subject: Re: [GENERAL] pg_dump
 
 
 Bob Pawley wrote:
 I want to be able to convert a PostgreSQL database to other formats
 such as Oracle, Access etc. - with, as well as without, the data.

 Can this task be accomplished by employing pg_dump in  SQL?
 
 If you dump with inserts, data only, then yes but it will be slow as
 snot to import.

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] Please change default characterset for database cluster

2007-09-30 Thread Ron Johnson
On 09/30/07 10:31, brian wrote:
[snip]
 
 The default for MySQL is latin1 with swedish sorting.

Yorn desh born, der ritt de gitt der gue
Orn desh, dee born desh, de umn børk! børk! børk!

-- 
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!




Re: [GENERAL] Please change default characterset for database cluster

2007-09-28 Thread Ron Johnson

On 09/28/07 21:12, Tom Lane wrote:
 Michael Fuhr [EMAIL PROTECTED] writes:
 No, it isn't.  If you get UTF8 (formerly UNICODE) as a default then
 it's because initdb is picking it up from your environment.
 
 Which initdb has done since 8.0.  If the OP is such a rabid UTF8 fan,
 one wonders why his default locale setting isn't using UTF8 ...

He uses Windows?

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] Is this good spec for a PostgreSQL server?

2007-09-23 Thread Ron Johnson

On 09/23/07 22:40, Ow Mun Heng wrote:
 On Thu, 2007-09-20 at 07:55 -0500, Ron Johnson wrote:
 On 09/20/07 05:43, Ow Mun Heng wrote:
 
 hehe.. I'll end up running it on a low-end desktop w/ 1GB ram and a
 celeron 2G processor w/ ~30GB data/month.
 I probably would too, if I wasn't half-way across the country from
 the DC.
 
 Just curious: what does being half-way across the country have to do with
 the server specs?  Better specs - fewer issues? :-)

If I was plugged into the company's LAN, I also could use my low-end
desktop as a database server...

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] Is this good spec for a PostgreSQL server?

2007-09-20 Thread Ron Johnson

On 09/20/07 05:43, Ow Mun Heng wrote:
 On Wed, 2007-09-19 at 08:40 -0500, Ron Johnson wrote:
 Yes, I think that it's a bit excessive but the company can afford it so why 
 not... :)
 Lucky SOB.

 I can't get my company to spring for a dual-core 2GB system with
 SATA drives.

 
 hehe.. I'll end up running it on a low-end desktop w/ 1GB ram and a
 celeron 2G processor w/ ~30GB data/month.

I probably would too, if I wasn't half-way across the country from
the DC.

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] Calculate Weighted Aggregation between Two Tables

2007-09-19 Thread Ron Johnson

On 09/19/07 04:13, Stefan Schwarzer wrote:
 Hi,
 
 I have global national statistics (GDP, Fish Catch etc.), which I would
 like to aggregate to and calculate per Capita data for given regions
 (Africa, Europe...) and subregions (Western Africa, ) on-the-fly.
 
 From a statistical point of view it seems best to use something like this:
 
 given that the variable is v and the Total Population table is tp:
 
    (country_1_of_region_1_for_v / country_1_of_region_1_for_tp)
  + (country_2_of_region_1_for_v / country_2_of_region_1_for_tp)
  + ...
    (sum_countries_all_of_region_1_for_v /
     sum_countries_all_of_region_1_for_tp)
 
 
 and then same thing for the other regions (in my case a total of 6
 regions).
 
 
 I have a table of countries with a field country_id and region_id (which
 specifies to which region this country belongs).
 I have the table for the variable with a field country_id, which has
 kind of a Excel design, i.e. columns for each year.

Well that's a Very Bad Thing.  (Wasn't there a thread about this
last week?)

Since the years are columns, are the variables in rows?

 And I have a table of regions with a field region_id.
 
 It seems like a rather complicated thing to do, as one needs first to
 get all the countries of a specific region for both tables (variable 
 pop_total), have the SUM(of all countries) available and run the
 mathematical procedure... 
 
 Could someone give me a hint how this could be achieved?

This would be bog-simple if your tables were structured like this:

CREATE TABLE T_REGION (
    REGION_ID CHAR(8) PRIMARY KEY,
    DESCRIP   TEXT);

CREATE TABLE T_COUNTRY (
    COUNTRY_ID CHAR(8) PRIMARY KEY,
    REGION_ID  CHAR(8) NOT NULL REFERENCES T_REGION (REGION_ID),
    DESCRIP    TEXT);

CREATE TABLE T_NATIONAL_STATISTICS (
    COUNTRY_ID CHAR(8) REFERENCES T_COUNTRY (COUNTRY_ID),
    STAT_YEAR  SMALLINT CHECK (STAT_YEAR BETWEEN 1900 AND 2020),
    FORECAST   BOOL,
    GOVT_TYPE  CHAR(4) REFERENCES T_GOVERNMENT (GOVT_TYPE),
    POPULATION BIGINT,
    GDP        BIGINT,
    FISH_CATCH BIGINT,
    FISH_CONSUMED  BIGINT,
    WHEAT_HARVEST  BIGINT,
    WHEAT_CONSUMED BIGINT,
    CRUDE_PROD_BBL   BIGINT, -- crude oil production
    CRUDE_CONSUM_BBL BIGINT, -- crude oil consumption
    -- etc., one column per statistic
    PRIMARY KEY (COUNTRY_ID, STAT_YEAR));

(I like human-readable keys.)

Whenever you find a new statistic you want to track, you add a new
column to T_NATIONAL_STATISTICS.

So, to (I think) get info on per-capita fish usage by year, by
country, for West Africa, you do:

SELECT NS.STAT_YEAR,
       NS.COUNTRY_ID,
       CAST(CAST(NS.FISH_CATCH AS DOUBLE PRECISION)/NS.POPULATION
            AS NUMERIC(8,2)) AS CATCH_PERCAP,
       CAST(CAST(NS.FISH_CONSUMED AS DOUBLE PRECISION)/NS.POPULATION
            AS NUMERIC(8,2)) AS EATEN_PERCAP,
       CAST((CAST(NS.FISH_CATCH AS DOUBLE PRECISION)
             - CAST(NS.FISH_CONSUMED AS DOUBLE PRECISION))/NS.POPULATION
            AS NUMERIC(8,2)) AS FISH_SELF_SUFFICIENCY
FROM T_REGION R,
     T_COUNTRY C,
     T_NATIONAL_STATISTICS NS
WHERE R.REGION_ID = 'WESTAFRI'
  AND C.REGION_ID = R.REGION_ID
  AND NS.COUNTRY_ID = C.COUNTRY_ID;

You might want to left outer-join NS to C to indicate any west
African countries with missing statistics.

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] Is this good spec for a PostgreSQL server?

2007-09-19 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 09/19/07 07:33, Bjørn T Johansen wrote:
 Well, it isn't really the largest database or the database that
 need the most performance... At the moment, the database isn't
 larger than 15MB and is growing slowly... It is a webapp that is

That'll fit in shared memory.  Very fast.

Where will it be in a year?

 using the database and at the most (at the moment) there is about
 12-14 concurrent users and not much data volume...

How many users in a year?

 We are thinking about this spec. because the web app is a java
 app, and we need need something that can run java fast as well as
 postgresql...

12-14 users on a Quad-core system with 4GB RAM?

Am I so old that (even accepting Tomcat and Java) this seems
excessive?

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] Is this good spec for a PostgreSQL server?

2007-09-19 Thread Ron Johnson

On 09/19/07 06:30, Bjørn T Johansen wrote:
 It's a Dell server with the following spec:
 
 PE2950 Quad-Core Xeon E5335 2.0GHz, dual 
 4GB 667MHz memory
 3 x 73GB SAS 15000 rpm disk
 PERC 5/I Integrated controller card (8 ports, 256MB cache, battery 
 backup) x 6 backplane

You *know* we're going to say something obvious like it depends on
the size of the database and the workload.

 Is this ok to run PostgreSQL 8.2.x and Tomcat on? And does anyone
 know if this PERC controller is supported under Linux (not heard
 of it before...)

Google says yes.

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] Is this good spec for a PostgreSQL server?

2007-09-19 Thread Ron Johnson

On 09/19/07 08:32, Bjørn T Johansen wrote:
 On Wed, 19 Sep 2007 07:59:36 -0500
 Ron Johnson [EMAIL PROTECTED] wrote:
 
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 09/19/07 07:33, Bjørn T Johansen wrote:
 Well, it isn't really the largest database or the database that
 need the most performance... At the moment, the database isn't
 larger than 15MB and is growing slowly... It is a webapp that is
 That'll fit in shared memory.  Very fast.

 Where will it be in a year?
 
 Well, twice as much I guess...
 
 using the database and at the most (at the moment) there is about
 12-14 concurrent users and not much data volume...
 How many users in a year?
 
 It's an internal webapp for a company, so I guess not that much more...
 
 We are thinking about this spec. because the web app is a java
 app, and we need need something that can run java fast as well as
 postgresql...
 12-14 users on a Quad-core system with 4GB RAM?

 Am I so old that (even accepting Tomcat and Java) that seems
 excessive?
 
 Yes, I think that it's a bit excessive but the company can afford it so why 
 not... :)

Lucky SOB.

I can't get my company to spring for a dual-core 2GB system with
SATA drives.

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] Recall: August Monthly techdata split fi

2007-09-18 Thread Ron Johnson

On 09/18/07 05:54, Ow Mun Heng wrote:
 On Tue, 2007-09-18 at 06:52 -0400, Geoffrey wrote:
 [EMAIL PROTECTED] wrote:
 Robert Wickert would like to recall the message, August Monthly
 techdata split file printers for France and Denmark .
 In my experience, attempting to 'recall' an email message is a fruitless 
 endeavor.  Seems to me that this is a 'Microsoft' creation.  I really 
 don't understand the purpose, because by the time you consider 
 'recalling' the email message, it's already sitting in 1000s of inboxes...
 
 It works for people that still have the message in their Exchange
 boxes. For those that don't, then... all is lost, unfortunately.

Actually, it even works if you fetch mails from the Exchange server
to your local box.  But, as another poster mentioned, you still have
to be attached to an Exchange domain for it to work.

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] Q:Aggregrating Weekly Production Data. How do you do it?

2007-09-18 Thread Ron Johnson

On 09/17/07 23:34, Ow Mun Heng wrote:
 Just wondering how everyone is doing aggregration of production data.
 
 Data which runs in the vicinity of a few million a week.
 
 What are the methods which will effectively provide the
 min/max/average/count/stdev of the weekly sample size based on different
 products/software mix etc.
 
 and still be able to answer correctly: what's the average of data_1 over
 the past 2 months? 

That's the purpose of data warehouses and ETL, isn't it?
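
The usual first step is a summary table refreshed by the load job; a
sketch with invented table and column names:

INSERT INTO weekly_summary
SELECT date_trunc('week', sample_ts) AS wk,
       product, software_rev,
       min(data_1), max(data_1), avg(data_1),
       stddev(data_1), count(*) AS n
FROM raw_samples
WHERE sample_ts >= date_trunc('week', now()) - interval '7 days'
  AND sample_ts <  date_trunc('week', now())
GROUP BY 1, 2, 3;

Keeping count(*) in the summary is what lets you answer the two-month
question correctly later: the average over eight weeks is
sum(avg_i * n_i) / sum(n_i), not avg(avg_i).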

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] Locking entire database

2007-09-15 Thread Ron Johnson

On 09/15/07 03:28, Panagiwths Pediadiths wrote:
 Thats the fun part, I actually need to allow duplicates in specific cases
 but not in this one :)

Same table?

 Shouldn't the serializable level prevent these duplicates? As I understand
 it serializable
 should give the same result as if the transactions were performed the one
 after the other.

(Please don't top-post.)

Seems to me that you are confused as to the essence of relational
databases.  In other words, the best (heck, even the acceptable) way
to design schemas, and how to control the flow of data in order to
achieve your ultimate data goal.

 
 On Fri, 14 Sep 2007, Scott Marlowe wrote:
 
 On 9/14/07, Panagiotis Pediaditis [EMAIL PROTECTED] wrote:
 A simpler example,
 In the context of one transaction i do many queries of the form
   INSERT INTO table value WHERE value NOT IN TABLE;

 If i have 2 processes running the same 100s of these at the same time i
 end up with duplicates.
 Even with isolation set to serializable
 any ideas?
 Unique index?


--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] strange TIME behaviour

2007-09-15 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 09/15/07 06:45, rihad wrote:
 Can someone please explain to me why these two give different results?
 The idea is to get the number of seconds past 00:00:00, so the second
 one is obviously correct.

How about:

select extract(hour from current_time)*3600
 + extract(minute from current_time)*60
 + extract(second from current_time);

 foo= select extract(epoch from current_time);
   date_part
 --
  42023.026348
 (1 row)
 
 foo= select extract(epoch from cast(current_time as time));
   date_part
 --
  60030.824587
 (1 row)
 
 
 Isn't current_time already a time? Why is the cast necessary?



--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] Locking entire database

2007-09-15 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 09/15/07 19:59, Panagiwths Pediadiths wrote:
 
 On Sat, 15 Sep 2007, Ron Johnson wrote:
 
 On 09/15/07 03:28, Panagiwths Pediadiths wrote:
 Thats the fun part, I actually need to allow duplicates in specific cases
 but not in this one :)
 Same table?
 Yup
 
 Shouldn't the serializable level prevent these duplicates? As I understand
 it serializable
 should give the same result as if the transactions were performed the one
 after the other.
 (Please don't top-post.)
 
 Seems to me that you are confused as to the essence of relational
 databases.  In other words, the best (heck, even the acceptable) way
 to design schemas, and how to control the flow of data in order to
 achieve your ultimate data goal.
 
 
 I don't see why the case I suggest is so obscene.

Then you have not explained it to us clearly.

(English language difficulties *are* an acceptable excuse...)

 More specifically, consider a table with two columns where the unique index
 is the two columns together.
 
 However at some stage of the application I want to insert into the
 database only if there is no element
 with a value at column 1 equal to that that i intend to insert.
 
 Oddly, in serializable isolation mode, two transactions performing such an
 insertion in parallel one of the
 two transaction hits the phantom read case, whereas it should be protected
 by the isolation level.

It should, *if* you do it properly.

IOW, is your program structured like:
BEGIN
SELECT COUNT(*) INTO :cnt
FROM rdf WHERE segment_1 = :some_val;
IF :cnt == 1 THEN
   do one thing
ELSE
   do another
END IF;
COMMIT;

or is it structured:
BEGIN
SELECT COUNT(*) INTO :cnt
FROM rdf WHERE segment_1 = :some_val;
COMMIT;
BEGIN
IF :cnt == 1 THEN
   do one thing
ELSE
   do another
END IF;
COMMIT;

 On Fri, 14 Sep 2007, Scott Marlowe wrote:

 On 9/14/07, Panagiotis Pediaditis [EMAIL PROTECTED] wrote:
 A simpler example,
 In the context of one transaction i do many queries of the form
   INSERT INTO table value WHERE value NOT IN TABLE;

 If i have 2 processes running the same 100s of these at the same time i
 end up with duplicates.
 Even with isolation set to serializable
 any ideas?
 Unique index?
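
To make Scott's suggestion concrete for the duplicates-allowed-elsewhere
requirement: a *partial* unique index scopes the constraint to only the
rows that must stay unique.  A sketch, where the qualifying condition
(kind = 'special') is invented for illustration:

CREATE UNIQUE INDEX rdf_segment1_uniq
    ON rdf (segment_1)
    WHERE kind = 'special';

Under concurrency the second inserter then gets a unique-violation
error instead of a silent phantom, which the application can trap and
retry.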

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] Tablespaces on tertiary media

2007-09-14 Thread Ron Johnson

On 09/14/07 04:06, Mark Morgan Lloyd wrote:
 Where does PostgreSQL stand with storing /really/ large amounts of data
 offline? Specifically, if a FUSE is used to move a tablespace to
 something like a tape archiver can the planner be warned that access
 might take an extended period?
 
 I know that at one point (v6?) there were hooks in the code for
 experimental Berkeley code to do this sort of thing but as far as I know
 there has never been anything publicly available.

While tertiary media certainly was relevant 10 years ago, is it
really necessary in 2007? A couple of MSA-1000s stuffed with 1TB
disks would hold an l-o-t *lot* of historical data.

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] Database/Table Design for Global Country Statistics

2007-09-12 Thread Ron Johnson

On 09/12/07 03:28, Stefan Schwarzer wrote:
[snip]
 
 The new design would be like this:
 
 id   |year|value
 ---
  1   |   1970| NULL
  1   |   1971|  36
  1   
  1   |   2005|  45
  2   |   1970|  
  2   .
 
 
 Would that be considered as good table design then?

What Richard says, plus:

1. The PK of New_Design should be country_id/year.

2. You also should not store the records where value is NULL.  A
left outer join would handle that.
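
A sketch of point 2, with hypothetical names (generate_series() supplies
the missing years):

SELECT y.yr, d.value
FROM generate_series(1970, 2005) AS y(yr)
LEFT OUTER JOIN new_design d
       ON d.id = 1 AND d.year = y.yr
ORDER BY y.yr;

Years that have no stored row come back with a NULL value anyway, so
storing the NULLs buys you nothing.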

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] Hardware recommendation: which is best

2007-09-11 Thread Ron Johnson

On 09/11/07 07:55, Phoenix Kiula wrote:
 On 11/09/2007, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 It depends what you want to do with your database.

 Do you have many reads (select) or a lot of writes (update,insert) ?
 
 
 This one will be a hugely INSERT thing, very low on UPDATEs. The
 INSERTS will have many TEXT fields as they are free form data. So the
 database will grow very fast. Size will grow pretty fast too.

15000 rows/day times 365 days = 5475000 rows.

How big are these rows?  *That* is the crucial question.

 You should use a hardware raid controller with battery backup write cache
 (write cache should be greater than 256 MB).
 
 
 I'll have a raid controller in both scenarios, but which RAID should
 be better: RAID1 or RAID10?

The striping aspect of RAID10 makes sequential reads and large
writes much faster.

The more spindles you have, the faster it is.

If you are *really* concerned about speed: 4 x 147GB 10K SCSI disks in RAID10.

 How much memory do you have ?
 
 
 4GB to begin with..
 
 
 How big is your database, tables ... ?
 
 
 Huge, as the two main tables will each have about ten TEXT columns
 each. They will have about 15000 new entries every day, which is quite
 a load, so I believe we will have to partition it at least by month
 but even so it will grow at a huge pace.

15000 in an 8 hour window is 31.25 inserts/minute or ~2 seconds/insert.

If the records are 30MB each, then that could cause some stress on
the system in that 8 hour window.

If they are 3MB each, not a chance.

 While we are at it, would postgres be any different in performance
 across a single-CPU Quad Core Xeon with a dual CPU dual-core AMD
 Opteron? Or should the hard disk and RAM be the major considerations
 as usually proposed?

Opteron is the standard answer.

What is your backup/recovery strategy?

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] Hardware recommendation: which is best

2007-09-11 Thread Ron Johnson

On 09/11/07 11:26, Phoenix Kiula wrote:
 Thanks Greg.
 
 
 You're not going to get a particularly useful answer here without giving
 some specifics about the two disk controllers you're comparing, how much
 cache they have, and whether they include a battery backup.

[snip]
 
 
 Would appreciate any tips. From these two, Scenario 1 looks marginally
 better to me. I am requesting further information about cache and
 battery powered backup, but would appreciate first-off thoughts based
 on above info.

How (on average) large are the records you need to insert, and how
evenly spread across the 24 hour day do the inserts occur?

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] Hardware recommendation: which is best

2007-09-11 Thread Ron Johnson

On 09/11/07 12:02, Phoenix Kiula wrote:
 On 12/09/2007, Ron Johnson [EMAIL PROTECTED] wrote:
 How (on average) large are the records you need to insert, and how
 evenly spread across the 24 hour day do the inserts occur?
 
 
 There will be around 15,000 inserts in a day. Each insert will have
 several TEXT columns, so it is difficult to predict, but about 30,000
 to 100,000 characters in each row. And yes, the inserts will be very
 consistently timed every day.

15000 rows/day * ~100KB/row = ~1.5GB/day.

1.5GB / (24*60 minutes) = ~1 binary MB per minute.

Any computer that can't write 1 megabyte per minute should have been
retired in 1970.

So.

Unless there's something that you aren't telling us, this should be
handleable by a Wal-Mart Special with an extra-large disk in it.

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] arrays of foreign keys

2007-09-10 Thread Ron Johnson

On 09/07/07 18:47, Max wrote:
 Hello,
 
 And pardon me if I posted this question to the wrong list, it seems
 this list is the most appropriate.
 
 I am trying to create a table with an array containing foreign keys.
 I've searched through the documentation and couldn't find a way to do
 so.
 
 Is this something that one can do?
 
 Basically, I have two tables:
 
 create table "user" (    -- "user" is a reserved word, so it needs quoting
   user_id serial unique, -- unique, so it can be the target of a foreign key
   login varchar(50) primary key,
   tags integer[]-- this is where the problem lies
 );
 
 create table tag (
   tag_id serial unique,  -- likewise
   name varchar(50) primary key
 );
 
 I would like the user.tags field to be a list of foreign keys (tag_ids
 specifically).
 
 I know I can solve my problem with the following table:
 
 create table user_tag (
   user_id integer,
   tag_id integer,
   foreign key (user_id) references "user" (user_id) on delete cascade,
   foreign key (tag_id) references tag (tag_id) on delete cascade,
   primary key (user_id, tag_id)
 );
 
 But I would really like to avoid doing that.

Why?

The (literally) cardinal rule of database normalization is
eliminate repeating values.

 Is there a solution to this problem with arrays of foreign keys, and
 if so, how does one do that?
 
 Thanks for any help.

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


Re: [GENERAL] Time Zone design issues

2007-09-10 Thread Ron Johnson

On 09/10/07 10:55, Scott Marlowe wrote:
 On 9/10/07, novnov [EMAIL PROTECTED] wrote:
 Your explanation of now() and localtime() is good... but with postgres
 acting as a web app db, wouldn't localtime() just return the local time
 as far as the server is concerned?
 
 No, it would return it as the local time of the POSTGRESQL client.
 
 For instance, suppose that someone in NY, connects to a web app, which
 asks him what TZ he's in and he picks EST5EDT.  The web app sets
 timezone='EST5EDT' and inserts a time of '2007-07-11 12:30:00'.
 
 The database now stores that as '2007-07-11 16:30:00+00'
 
 No matter WHERE the postgresql server is, that's what it has in it.
 
 Now, I connect from Chicago, and your web app ascertains my timezone
 as CST6CDT.  When I ask for the same timestamp, I get '2007-07-11
 11:30:00-05'.
 
 Now, if you don't set a timezone, then the database will use its own.
  If your PostgreSQL server is in California, then it might have a
 timezone of PST8PDT set.  If your web app didn't tell it otherwise,
 then the time would be '2007-07-11 09:30:00-07'.
 
 Note that there are IP lookup tools you can use to determine, or at
 least make a wild guess at, someone's timezone.  But you still need to
 let them pick one if you get it wrong.  Then you can store that
 timezone in a user profile and set it every time the user uses your web
 app.

In both Windows *and* Unix you set your TZ when you install the
system.  There are system functions to inquire how you've set it.

Browsers already report back a mountain of client data to the web
server.  I'd be stunned if FF, IE, Opera, Konq, etc. don't already
expose TZ, too.
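
Scott's example is easy to reproduce from psql.  A minimal sketch, with
a hypothetical table:

  create table events (happened timestamptz);

  set timezone = 'EST5EDT';
  insert into events values ('2007-07-11 12:30:00');
  -- stored internally as 2007-07-11 16:30:00+00

  set timezone = 'CST6CDT';
  select happened from events;
  -- 2007-07-11 11:30:00-05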



Re: [GENERAL] Time Zone design issues

2007-09-10 Thread Ron Johnson

On 09/10/07 15:21, Alvaro Herrera wrote:
[snip]
 
 I wouldn't trust the browser's TZ, and you would need a way to
 override it.

Why?



Re: [GENERAL] Time Zone design issues

2007-09-10 Thread Ron Johnson

On 09/10/07 19:50, Tom Lane wrote:
 Ron Johnson [EMAIL PROTECTED] writes:
 On 09/10/07 15:21, Alvaro Herrera wrote:
 I wouldn't trust the browser's TZ, and you would need a way to
 override it.
 
 Why?
 
 The browser may not know the setting, or may not tell it to you,
 or you might not be able to make any sense of what it says
 (timezone names are hardly standardized).

Well that's true.  Except for numeric offsets.

 This whole sub-thread actually is predicated on an assumption not
 in evidence, which is that there is any browser anywhere that will
 tell the HTTP server timezone information.  I'm quite sure no such
 thing is required by the HTTP standard.

I'm really surprised.



Re: [GENERAL] Time Zone design issues

2007-09-09 Thread Ron Johnson

On 09/09/07 22:29, novnov wrote:
[snip]
 
 But it doesn't particularly make sense to set the server's clock to UTC.
 It'd seem to skew a lot of server functionality which I'd think should
 normally be geared around local time. So I'd guess that the route to take is
 to keep the server pegged to local time, and use a function that derives
 UTC from now() and tags the 'last modified' fields with that value.

Your single-user Windows mindset is shining brightly.

Unix servers have had their internal clocks set to UTC for a decade
or more, and there have been no noticeable ill effects, since apps
all know to adjust for TZ.
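
The same separation is easy to see from a PostgreSQL session.  A minimal
sketch:

  show timezone;                    -- the server/session default
  set timezone = 'America/Chicago'; -- per-session override
  select now();                     -- rendered in the session's zone
  select now() at time zone 'UTC';  -- the same instant as UTC wall time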



Re: [GENERAL] Scalability Design Questions

2007-09-08 Thread Ron Johnson

On 09/08/07 19:58, novnov wrote:
 You're right, that's not how I understood your partitioning question. I'd not
 be eager to implement that kind of split in the db, though I can see that
 it's an example of how to spread the load around.
 
 But basically, it seems that the answer to one of my questions is that there
 is currently no way with postgres to spread a single database over multiple
 servers, a la a load-balanced Apache cluster, where requests are forwarded to
 different boxes.
 
 It's not going to be an issue for me soon, maybe ever, I just wanted to see
 what the score is.

Scott is correct on both counts: (a) the money otherwise spent on RAC
can buy a big heap of extra oomph, and (b) the middleware partitioning
method of scaling systems works very well.

Any way you go, though, requires proper design and coding from the
very beginning.

And it never hurts to think outside the box: sometimes you can get
the results you want by doing something that seems crazy.  For
example, we had a couple of tables that were specifically designed
for OLTP.  A specific weekly report that joined the 2 tables just
would *never* finish.  Instead of fighting to create some
hairy-looking SQL-statement-from-Hell, I created WHERE-less views on
the two tables with only the columns that were needed.  Now we have
a job that unloads the views, truncates and then loads them into a
*separate* database (so that backups & CREATE INDEX don't clash),
indexes them perfectly for this query, and then runs the query
against this separate database.
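
A hedged sketch of that unload/reload job (all names invented; COPY
from a query needs 8.2 or later):

  -- In the OLTP database: WHERE-less views with only the needed columns.
  create view orders_rpt as
    select order_id, cust_id, placed_on from orders;

  -- Nightly: unload the view...
  \copy (select * from orders_rpt) to 'orders_rpt.csv' csv

  -- ...then, connected to the separate reporting database:
  truncate orders_rpt_copy;
  \copy orders_rpt_copy from 'orders_rpt.csv' csv
  create index orders_rpt_cust_idx on orders_rpt_copy (cust_id, placed_on);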



Re: [GENERAL] Column as arrays.. more efficient than columns?

2007-09-07 Thread Ron Johnson

On 09/07/07 02:49, Ow Mun Heng wrote:
 On Fri, 2007-09-07 at 00:18 -0500, Ron Johnson wrote: 
 On 09/06/07 21:26, Ow Mun Heng wrote:
 I've not arrived at any conclusion but merely
 exploring my options on which way would be the best to tread. I'm
 asking the list because I'm new in PG and, after reading all those
 articles on highscalability etc., the majority of them are using some
 kind of denormalised tables.
 Correlation != causation.

 There *might* be a causal relationship between high scalability and
 table denormalization, but I seriously doubt it.
 
 I can't refute you on this since I have no experience in this arena,
 only what I read on highscalability.com (IIRC).
 
 Right now, there's 8 million rows of data in this one table, and growing
 at a rapid rate of ~2 million/week. I can significantly reduce this
 number down to 200K (I think, by denormalising it) and shrink the table
 size.
 Even presuming you only insert data SIX hours per day, that's only
 13.3 inserts per second.  Not very impressive.
 
 Data is inserted 24 hours a day, but not at the same rate each
 sec/minute. The problem isn't really the data-insertion, it's already
 inserted in a normalised manner. It's the selection of data (OLTP
 datahouse) which takes a longer time and which is the area of worry.

Datahouse or data warehouse?



Re: [GENERAL] Column as arrays.. more efficient than columns?

2007-09-07 Thread Ron Johnson

On 09/07/07 07:49, Merlin Moncure wrote:
 On 9/7/07, Ron Johnson [EMAIL PROTECTED] wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 09/06/07 20:53, Merlin Moncure wrote:
 [snip]
 arrays are interesting and have some useful properties.  however, we
 must first discuss the problems... first and foremost, if you need to
 read any particular item off the array you must read the entire array
 from disk, and you must write all items back to disk for writes.
 Reads and writes are done at the page level, so I'm not sure this is
 valid.
 
 sure it is... since the denormalized record is much larger (especially
 in array scenarios), the tuple is much larger, meaning the page will
 fill up much more quickly, meaning more dead pages, more vacuuming,
 etc.  Besides that, the server has to do some work presenting the
 array as part of the read, which is overhead.  I didn't go into a lot
 of detail, but the reasoning is sound.  Here is a quick example showing
 the problem.

We agree.

What I meant was that reads and writes are done at the page level no
matter whether the table is normalized or not.  Thus, to say "if you
need to read any particular item off the array you must read the
entire array from disk and you must write all items back to disk for
writes" is... irrelevant.

That's probably an imprecise word, but it's all I can think of at
the moment.

We also agree regarding big records filling pages faster.
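
A small illustration of the point, with a hypothetical table
(pg_column_size() reports the stored size of a single datum):

  create table t_arr (id int, vals int[]);
  insert into t_arr
    values (1, array(select g from generate_series(1, 10000) g));

  select pg_column_size(vals) from t_arr;  -- the whole array is one datum
  select vals[5000] from t_arr;            -- fetching one element still
                                           -- reads (and detoasts) all of it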





Re: [GENERAL] an other provokative question??

2007-09-07 Thread Ron Johnson

On 09/07/07 01:37, Greg Smith wrote:
 On Fri, 7 Sep 2007, Ron Johnson wrote:
 
 Definitely a niche product.
 
 Stonebraker's commentary was unfortunately spun by the ComputerWorld
 columnist.

Tech journalist morphing reality to make a provocative story?

Say it ain't so!

I hope people followed the link to his actual blog entry at
 http://www.databasecolumn.com/2007/09/one-size-fits-all.html where his
 argument is that the idea of one database approach always being right
 just isn't true anyway.  With that mindset, every technology is a niche
 product of sorts; just the size of the niche varies.

I read something similar by him a few months ago.  Very interesting.

 Given past history of this project and its relation to Stonebraker, I
 was tempted earlier today to suggest that the Postgres vs. PostgreSQL
 renaming argument be dropped in favor of renaming the database
 Horizontica.





Re: [GENERAL] Column as arrays.. more efficient than columns?

2007-09-07 Thread Ron Johnson

On 09/07/07 09:00, Ow Mun Heng wrote:
 
Datahouse or data warehouse?
 
 OLTP data warehouse.

But OLTP & DW are diametrically opposed in how you design,
structure, load, and use them.



Re: [GENERAL] Need suggestion on how best to update 3 million rows

2007-09-06 Thread Ron Johnson

On 09/06/07 01:13, Ow Mun Heng wrote:
 I have a table in PG, Pulled from SQL Server using Perl DBI (w/o using
 chopblanks) and have ended up with a column where the space is being
 interpreted as a value.
 
 e.g.:
 
 'ABC ' (with a trailing space) when it should be 'ABC'
 
 this is being defined as varchar(4)
 
 I've already pull the relevent columns with 
 
 create table foo as select unique_id, rtrim(number) from org_column
 
 I've tried to do the update using
 
 update org_column set number = foo.number where foo.unique_id =
 org_column.unique_id;

Number?  Where does number come from?  Unless you've got weird
field names, that doesn't sound like a very good name for a
VARCHAR(4) column.

 The update is taking a few hours and still hasn't ended.
 
 I've killed it already and rolled back the changes.
 
 what's the easiest way to update these fields?

Is it only *some* tuples that have the extra space problem?
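
If the answer is yes, a targeted UPDATE avoids rewriting all 3 million
rows.  A minimal sketch against the table named above:

  update org_column
  set number = rtrim(number)
  where number like '% ';  -- touch only rows that actually end in a space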


