[ADMIN] Cascade on update statement, not on table definition

2003-02-05 Thread webmaster
Hi,

I saw I can define a table that specifies an ON UPDATE CASCADE clause for a foreign key.

I have a lot of tables with FKs, and a lot of others with FKs referencing the first
ones, and so on, and I haven't defined the ON CASCADE clause on the FK tables.
Now I need to update a primary key on the "mother" table.

Is there a way to execute a statement like 'UPDATE tbl SET id_field = NEW_ID WHERE
id_field = OLD_ID ON CASCADE', or something like that? Or do I need to redefine my
tables, or the constraints?
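For what it's worth, one route is redefining each constraint with the cascade option rather than redefining the tables. A hedged sketch, assuming a release with ALTER TABLE ... ADD/DROP CONSTRAINT support; all table and constraint names here are made up:

```sql
BEGIN;
-- Drop the existing FK and recreate it with ON UPDATE CASCADE.
ALTER TABLE child_tbl DROP CONSTRAINT child_tbl_id_field_fkey;
ALTER TABLE child_tbl
  ADD CONSTRAINT child_tbl_id_field_fkey
  FOREIGN KEY (id_field) REFERENCES mother_tbl (id_field)
  ON UPDATE CASCADE;
COMMIT;
-- After that, an UPDATE of mother_tbl's key cascades to child_tbl automatically.
```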

Tks
Felipe Nascimento

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [ADMIN] Re: pgadmin

2001-02-08 Thread DHSC Webmaster

I have successfully set up the driver on a win98 machine, but when
performing the same process on a win2000 machine, I get the following
error:
ODBC--call failed, Password authentication failed for user
'ADMINISTRATION' (#210)[Microsoft][ODBC Driver Manager] Connection not
open (#0)

I have tried repeatedly, re-entering the user and password in case of
typo error, but have had no success. I've even tried different settings
in 'options'.
Any suggestions?
Thanks, Bill MacArthur

Alfonso Peniche wrote:
> 
> What you have to do is:
> 
> For every database you have in PostgreSQL you have to:
> 
> 1.- Select ODBC Data Sources in the Control Panel
> 2.- Select the User DSN tab
> 3.- Click on 'Add...'
> 4.- Choose the ODBC Driver corresponding to PostgreSQL and click on
> 'Finish'.
> 5.- A Setup window will appear with the following:
> Data Source: Type the name you will identify your driver
> with (could be the name of your database)
> Description: (Optional) Anything that will make you remember
> what the database or connection is
> Database: Type your database's name (where you are going to
> connect).
> Server: Type your server name (where Postgres is installed)
> Port: If you use default values, leave 5432 as is,
> otherwise type the Postgres port you are using.
> Username: Type the name you will use to make the connection
> Password: Type the user's password
> NOTE 1: Remember, in order for this to work you have to have
> a Postgres username and password (since you need a user for unix, I
> would suggest, for security reasons, that you take advantage of this
> feature).
> NOTE 2: You have to modify your pg_hba.conf file so the
> connection privilege is 'password' (no quotes).
> 
> I haven't been able to make it work with 'crypt'. I don't
> know if it's a bug or if it's meant to be that way or if it's just me
> (maybe someone from the ODBC area could help us out here)
> 
> 6.- In the Options (Advanced) section (still in the Setup window) click
> on 'DataSources'
> 6.1.- A new window will appear. Make sure that the only boxes
> checked are Show Columns and Fake Index in the OID Options Section.
> 
> If I didn't forget anything, this should work.
> 
> Hope this helps. (If you have any question don't hesitate to ask).
> 
> "Raghav Infotech Ltd [Chennai]" wrote:
> 
> > We want to install the client at a Windows machine.  We had downloaded
> > pgadmin and installed it on a windows2000 machine through the windows
> > installer.  But we could not configure the DSN and get going.  Kindly
> > help by pointing to further documentation. Regards, T.G.Sekhar, Raghav
> > Infotech Limited, Chennai, India

-- 
Bill MacArthur
Webmaster
DHS Club



[ADMIN] PG 7.0.3 & RH 7 IPC problems?

2001-03-29 Thread DHSC Webmaster

I'm running 7.0.3 on a double pentium 733 RH 7 box with 512M ram.

We quickly ran up against the default 32 max. connection limit since it
is a backend DB for our web server. When trying to increase the buffers
and connections as advised in the docs, I got the error:

[postgres@atl01371 data]$ nohup postmaster -D /usr/local/pgsql/data -i
-B 128 -N 64 > /usr/local/pgsql/data/logs/log.txt
IpcMemoryCreate: shmget failed (Invalid argument) key=5432001,
size=2048000, permission=600
This type of error is usually caused by an improper
shared memory or System V IPC semaphore configuration.
For more information, see the FAQ and platform-specific
FAQ's in the source directory pgsql/doc or on our
web site at http://www.postgresql.org.
FATAL 1:  ShmemCreate: cannot create region
[postgres@atl01371 data]$

Since Redhat says that version 7 supports IPC and we have increased
shared memory to 128 meg without solving the problem, does anyone have a
positive experience with this scenario to share?

Here is a partial list from sysctl -a
vm.kswapd = 512 32  32
vm.pagecache = 215  75
vm.buffermem = 210  60
vm.overcommit_memory = 0
vm.freepages = 256  512 768
kernel.sysrq = 0
kernel.shmall = 33554432
kernel.shmmax = 134217728
kernel.rtsig-max = 1024
kernel.rtsig-nr = 0
kernel.acct = 4 2   30
kernel.modprobe = /sbin/modprobe
kernel.printk = 6   4   1   7
kernel.ctrl-alt-del = 0
kernel.real-root-dev = 2054
kernel.panic = 0
kernel.version = #1 SMP Mon Feb 5 14:36:22 EST 2001
kernel.osrelease = 2.2.17-14smp

-- 
Bill MacArthur
Webmaster
DHS Club




Re: [ADMIN] PG 7.0.3 & RH 7 IPC problems?

2001-03-29 Thread DHSC Webmaster

It would appear that either my ipcs is not working or that everything is
alright.
(don't know enough about what to expect to know!)
bash-2.04$ ipcs

-- Shared Memory Segments 
key   shmid owner perms bytes nattch status

-- Semaphore Arrays 
key   semid owner perms nsems status

-- Message Queues 
key   msqid owner perms used-bytes  messages

bash-2.04$

Tom Lane wrote:
> 
> DHSC Webmaster <[EMAIL PROTECTED]> writes:
> > [postgres@atl01371 data]$ nohup postmaster -D /usr/local/pgsql/data -i
> > -B 128 -N 64 > /usr/local/pgsql/data/logs/log.txt
> > IpcMemoryCreate: shmget failed (Invalid argument) key=5432001,
> > size=2048000, permission=600
> 
> Not sure about Linux, but on HPUX there are two possible reasons for
> shmget to return EINVAL:
> 
>   [EINVAL]   size is less than the system-imposed minimum or
>  greater than the system-imposed maximum.
> 
>   [EINVAL]   A shared memory identifier exists for key but the
>  size of the segment associated with it is less
>  than size and size is not equal to zero.
> 
> In other words, you could get this if there is an old segment of smaller
> size and the same key still laying about, which there might well be if
> you were foolish enough to kill -9 the old postmaster.  Use ipcs and
> ipcrm to check and clean up.
> 
> 7.1 uses a slightly different approach that hopefully avoids this
> particular failure scenario...
> 
> regards, tom lane
> 

-- 
Bill MacArthur
Webmaster
DHS Club




Re: [ADMIN] PG 7.0.3 & RH 7 IPC problems?

2001-03-29 Thread DHSC Webmaster

Thank you Tom. Here are the results.
[root@atl01371 linux]# ipcs

-- Shared Memory Segments 
key   shmid owner perms bytes nattch status
0x0052e2ca 36864 postgres  700   144   5
0x0052e2c1 26625 postgres  600   1104896   5
0x0052e2c7 37890 postgres  600   66060 5

-- Semaphore Arrays 
key   semid owner perms nsems status
0x0052e2ce 6144  postgres  600   16
0x0052e2cf 6145  postgres  600   16

-- Message Queues 
key   msqid owner perms used-bytes  messages

These values don't seem to change from one moment to the next although
there are of course varying numbers of backends running at any given
moment.
Since I am unfamiliar with this output, I hesitate to use ipcrm to remove
something which ought not to be removed.
I have not used kill -9 on anything. :)

Tom Lane wrote:
> 
> DHSC Webmaster <[EMAIL PROTECTED]> writes:
> > It would appear that either my ipcs is not working or that everything is
> > alright.
> 
> Try it as root.  I think on Linux, ipcs won't show shm segments that you
> do not have read access to (a mighty braindead way to do it but...)
> 
>     regards, tom lane

-- 
Bill MacArthur
Webmaster
DHS Club




Re: [ADMIN] PG 7.0.3 & RH 7 IPC problems?

2001-03-29 Thread DHSC Webmaster

Gracias, merci' & Thank You Tom!
After shutting down the entries were still there.
I rebooted and restarted and voila!, we're in business.

Tom Lane wrote:
> 
> DHSC Webmaster <[EMAIL PROTECTED]> writes:
> > [root@atl01371 linux]# ipcs
> 
> > -- Shared Memory Segments 
> > key   shmid owner perms bytes nattch status
> > 0x0052e2ca 36864 postgres  700   144   5
> > 0x0052e2c1 26625 postgres  600   1104896   5
> > 0x0052e2c7 37890 postgres  600   66060 5
> 
> > -- Semaphore Arrays 
> > key   semid owner perms nsems status
> > 0x0052e2ce 6144  postgres  600   16
> > 0x0052e2cf 6145  postgres  600   16
> 
> > -- Message Queues 
> > key   msqid owner perms used-bytes  messages
> 
> > These values don't seem to change from one moment to the next although
> > there are of course varying numbers of backends running at any given
> > moment.
> 
> These objects are created by the postmaster and persist as long as the
> postmaster does.  They *should* go away when you shut down the
> postmaster, but perhaps they didn't for some reason.
> 
> Anyway, I suggest (a) shut down postmaster, (b) use ipcs to make sure
> there are no postgres shmem or sema objects (delete 'em with ipcrm if
> necessary), and (c) try to start postmaster with new -B/-N values.
> 
> If step (c) fails when there are no pre-existing objects, then it must
> be that you have not managed to increase the kernel's limits on shared
> memory.  The exact incantations needed to accomplish that trick vary
> across systems, so I can't offer a lot of help there.
> 
> regards, tom lane
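Step (b) of the cleanup above can be scripted. A sketch, where the sample text stands in for real `ipcs -m` output and the actual removal command is left as a comment because it needs a live system:

```shell
# Pull the shmid column for postgres-owned segments; these are the ids
# that ipcrm would remove once the postmaster is stopped.
sample='0x0052e2ca 36864 postgres 700 144 5
0x0052e2c1 26625 postgres 600 1104896 5'
ids=$(echo "$sample" | awk '$3 == "postgres" {print $2}')
echo "$ids"
# On a real system: ipcs -m | awk '$3 == "postgres" {print $2}' | xargs -n1 ipcrm shm
# (newer util-linux spells the removal:  ipcrm -m <shmid>)
```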

-- 
Bill MacArthur
Webmaster
DHS Club




Re: [ADMIN] pg_clog & vacuum oddness

2003-10-29 Thread DHS Webmaster
This thread caught my eye and I decided to look at our pg_clog
directory. Sure enough, we have every clog file since we upgraded
back in April, up through 02F8.
We vacuum our working database nightly. Although this is not a 'full',
we don't exclude any tables. We don't do anything with template1
(knowingly), so we do not perform any maintenance on it either.
Questions:
1. Should we be doing a periodic vacuum on template1?
2. Is what I am seeing possibly indicative of something else besides
template1 that would show up in the postgres log?
3. Is it safe to delete all the clog files prior to the last restart of
postgres?
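For the unqualified-VACUUM-everywhere approach, a crontab entry for the postgres user might look like this (the time and paths are illustrative; `vacuumdb --all` vacuums every database, template1 included):

```
# Nightly unqualified VACUUM of all databases, so pg_clog can be truncated.
30 3 * * *  /usr/local/pgsql/bin/vacuumdb --all >> /var/log/pg_vacuum.log 2>&1
```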

Tom Lane wrote:
> 
> Jeff <[EMAIL PROTECTED]> writes:
> > [ pg_clog not getting truncated ]
> 
> pg_clog is truncated on the basis of the oldest completely vacuumed
> database in your installation.  Most likely your maintenance script
> is failing to vacuum some database(s) (template1, perhaps?) and/or
> is doing table-by-table vacuums rather than an unqualified VACUUM.
> 
> I doubt this explains any performance problems though.  Old pg_clog
> segments don't do anything except sit there.
> 
> regards, tom lane
> 

-- 
Bill MacArthur
Webmaster
DHS Club



Re: [ADMIN] pg_clog & vacuum oddness

2003-10-29 Thread DHS Webmaster
Well, you know, I looked at the list and saw only template0, template1 &
our working DB. So just for kicks I vacuumed template1. That cleared the
directory. To my knowledge we did not modify template1 when we set up
the database initially, so perhaps it just needed a vacuum to set the
stage correctly. Either that or during our data import, there was
something in the import file that referenced template1 which created a
need for vacuuming.
Whatever the case, this was an interesting exercise which I'm sure will
prove useful in the future.
Thank you.

-- 
Bill MacArthur
Webmaster
DHS Club

Jeff wrote:
> 
> On Wed, 29 Oct 2003 11:53:38 -0500
> DHS Webmaster <[EMAIL PROTECTED]> wrote:
> 
> > We vacuum our working database nightly. Although this is not a 'full',
> > we don't exclude any tables. We don't do anything with template1
> > (knowingly), so we do not perform any maintenance on it either.
> 
> Why not go through the list in pg_database to make sure you didn't
> forget about any (like I did).
> 
> given that template0 and 1 rarely change.. I don't see why we'd need to
> vacuum them
> 
> --
> Jeff Trout <[EMAIL PROTECTED]>
> http://www.jefftrout.com/
> http://www.stuarthamm.net/



[ADMIN] dropped users appear as numbers in ACL

2003-11-13 Thread DHS Webmaster
I have experienced this in the past with members of groups, but now with
relation ACLs. It goes like this:
Certain users are given specific permissions on different tables and
such.
The users are later dropped but reference to them remains in the ACL as
their former ID number.
When trying to revoke permissions on the number, it fails.
I noticed this with groups in the past and have become accustomed to
deleting these users from the groups before dropping them. Otherwise I
had to create dummy users with those IDs, go about cleaning things out
and then drop the dummy user.
Will I have to perform this same procedure for ACLs?
Are there any plans to handle these issues transparently when a user is
dropped?
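The dummy-user workaround described above, as a hedged SQL sketch (the sysid 42 and the names are invented; the sysid must match the orphaned number shown in the ACL):

```sql
-- Recreate a placeholder user under the orphaned numeric id, revoke, drop.
CREATE USER dummy_user WITH SYSID 42;
REVOKE ALL ON some_table FROM dummy_user;
DROP USER dummy_user;
```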
-- 
Bill MacArthur
Webmaster
DHS Club



[ADMIN] bizarre AGE behaviour

2004-03-03 Thread DHS Webmaster
Hello all,
Many thanks for this fine product.
We began encountering some unexpected date related errors this week and
after further investigation found the following. We use the postgres AGE
function in a custom function. The AGE function has begun to throw some
unanticipated results back. This has not happened before and my first
guess is that it may be leap year related. Here are some examples.

This is good...
network=# select age('04-01-04','03-01-04');
  age
---
 1 mon
(1 row)

This isn't...
network=# select age('05-01-04','03-01-04');
 age
-
 1 mon 30 days 23:00
(1 row)

Now it gets really strange..
network=# select age('06-01-04','04-01-04');
 age
-
 1 mon 29 days 23:00
(1 row)

This may have been addressed in the past (I subscribe to this list) and
I just missed it, but is there a simple fix for this problem?
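For comparison, a hedged sketch: if the extra hour comes from timezone-aware arithmetic (the examples above straddle the US DST change), forcing plain timestamps gives the clean answer:

```sql
-- With timestamp (no time zone) inputs there is no DST offset in the result.
SELECT age(timestamp '2004-05-01', timestamp '2004-03-01');  -- 2 mons
```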
Thanks.
-- 
Bill MacArthur
Webmaster
The DHS Club, Inc.
The Best Is Yet To Come!



Re: [ADMIN] bizarre AGE behaviour

2004-03-04 Thread DHS Webmaster
You are both right.
Your explanation, Steve, was the light that got me going on a simpler
solution along the lines of what Tom suggested. I didn't really need
AGE, and upon digging in, couldn't even remember why I had chosen that
in the first place.
Postgres is the bomb!
Thanks guys.

Steve Crawford wrote:
> 
> On Wednesday 03 March 2004 3:19 pm, Tom Lane wrote:
> > Steve Crawford <[EMAIL PROTECTED]> writes:
> > > US Daylight Saving Time starts this year on April 4 when 0200
> > > jumps to 0300. The answers PostgreSQL gave are correct.
> >
> > I suspect what the OP wants is non-timezone-aware behavior...
> 
> You are probably right. In his original post he posited that the
> problem was due to some sort of leap-year bug in PostgreSQL. I was
> just pointing him to the real cause of his observations so he could
> tackle whatever problem he was having secure in the knowledge that PG
> was working correctly.
> 
> Cheers,
> Steve
> 

-- 
Bill MacArthur
Webmaster
The DHS Club, Inc.
The Best Is Yet To Come!



[ADMIN] performance issues

2001-06-13 Thread DHSC Webmaster

Hello all,
We are running 7.0.3 on a RH 7.0 box w/twin 733 pentium CPUs, 512 meg.
ram and a 3 drive raid 5 scsi array. The machine is dedicated. Our main
table is about 700,000 rows with an associated table of about 15,000
rows. 
-N 128 -B 256
logging to a file
Without spelling out our complete schema I'll give some basic info
main table has 37 columns with 3 indexes, several foreign key
constraints and one rule
the associated table has about 10 columns, all of which are int4 with
the exception of one text
Most of our queries are simple select queries, half of which would use
the indexes.
Now, does this scenario sound about right or do I have a lot of work to
do?
At any given moment, with 12 - 16 queries running (say 1 update & one
insert, the rest selects), I'm dragging badly. Vacuum -z was just run. top
gives me numbers around 20. Even with just 2 or 3 queries running top
gives me 2 - 3. Does this sound typical?
If my information is insufficient to give a ballpark guess, perhaps you
could give me some typical performance figures.

Thank you for your feedback.
-- 
Bill MacArthur
Webmaster
DHS Club




Re: [ADMIN] performance issues

2001-06-14 Thread DHSC Webmaster

Thank you Tom. That seems to have helped substantially.
I received another suggestion to increase -B 3 which would give 240
meg. to postgres. I plan to give that a try. I've already reset the
SHMMAX value in sysctl to accommodate that size. Do you see any reason not
to try this? My thought is if some is good, more must be better. :)
Also I noticed in this article:
http://www.ca.postgresql.org/docs/faq-english.html#3.7
mention of another memory parameter (-S); however, the elephant book
refers to -S as silent mode.
Could you clarify this?
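For a sanity check on SHMMAX, the buffer arithmetic is simple: each shared buffer is one 8 kB page in this era of Postgres. The buffer count of 30000 below is an assumed figure chosen to illustrate the roughly 240 meg mentioned above.

```shell
# Shared-buffer memory = buffer count * 8192-byte page size (7.0-era default).
BUFFERS=30000
MEM=$(( BUFFERS * 8192 ))
echo "$MEM bytes"   # SHMMAX must be at least this, plus some per-backend overhead
```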

Tom Lane wrote:
> 
> DHSC Webmaster <[EMAIL PROTECTED]> writes:
> > We are running 7.0.3 on a RH 7.0 box w/twin 733 pentium CPUs, 512 meg.
> > ram and a 3 drive raid 5 scsi array. The machine is dedicated. Our main
> > table is about 700,000 rows with an associated table of about 15,000
> > rows.
> > -N 128 -B 256
> 
> That's not nearly enough buffers for an installation that size.  Try -B
> of a few thousand.
> 
> regards, tom lane

-- 
Bill MacArthur
Webmaster
DHS Club




[ADMIN] 7.1.3 & ODBC

2001-08-28 Thread DHSC Webmaster

We just moved our database to a different machine and upgraded from
7.0.3 to 7.1.3 in the process. We have been using the windows driver v.
6.50., Access 2000 and a combination of win flavors with good
success on Postgres 7.0. However, now that we have upgraded, we can
connect to the database alright, but no tables show up in the applet for
linking to tables. We just get an empty window. We can connect to the
tables that are there because we have changed the IP address in our ODBC
drivers, but building or modifying windows apps or even setting up a new
machine is impossible because the tables won't show. Any suggestions?
All of our database and table permissions are identical to the previous
machine setup.
-- 
Bill MacArthur
Webmaster
DHS Club




Re: [ADMIN] Bad Date field

2001-11-29 Thread DHSC Webmaster

You could use a perl script to read your data and do a pattern match for
consistency with your date format. Or you could import the data into a
temp table which has a text field instead of date. You could query
against your temp table to find offending data. Then use SQL to insert
the data into your real table.
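The pattern-match idea can be sketched without perl, too; here awk flags any row whose date column fails a simple shape check. The column position and sample rows are invented for illustration:

```shell
# Print the line number and content of any row whose 4th '|'-separated
# field is not shaped like YYYY-MM-DD.
bad=$(printf '1|a|x|2001-11-29\n2|b|y|9\n' |
  awk -F'|' '$4 !~ /^[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]$/ {print NR ": " $0}')
echo "$bad"
```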

Peralta Miguel-MPERALT1 wrote:
> 
> Hello everyone:
> 
> Would anyone know how to find a bad date field in an external file?  I have
> an external file that I am using to populate a table using the following
> command:
> COPY release_data FROM '/home/mperalta/release_data' USING DELIMITERS '|'
> 
> When I issue this command, the response after a while is the following:
> 
> ERROR:  Bad date external representation '9'
> 
> The external file is rather large and I've tried finding this bad date
> manually with no success.  The table I'm copying into has a datestamp of
> type DATE.
> 
> Any suggestions would be greatly appreciated.
> 

-- 
Bill MacArthur
Webmaster
DHS Club




Re: [ADMIN] copy command

2002-01-03 Thread DHSC Webmaster



Isabelle Brette wrote:
> 
> Hello,
> 
> On Thu, Jan 03, 2002 at 12:10:01PM -0500, Zhang, Anna wrote:
> > I have a problem loading data into a postgres database using the copy command. The
> > problem is that we have one column called address which is multi-line text;
> > the text file looks like this:
> >
> > aab770|awkc.com administration|sultan 23
> > Bogota, na0|CO
> >
> > the above shows one record with '|' as the delimiter. Column 3 has two lines, which
> > causes the record to be split into two lines. If I use the copy command to load
> > this record, it is always treated as two records. Is there any way to tell
> > postgres to read until it reaches a specified char, not stopping at '\n'?
> > Or maybe edit the text file to make it one line, but how?  We have millions
> > of such records in the text file.
> 
> You can use some powerful editor, such as VI, to recognize any line that
> does not begin with a proper pattern, and join it with the one before.
> Not to lose the line feed you'll probably have to add something to
> replace it (such as, let's say, a tab) and convert it back once you've
> imported (postgres has some nice pattern-matching features).
> 
> I've done this before to prepare an import (through a perl script,
> because I needed the information previously in one table to be split
> across several ones, but this does not make a major difference, because
> a perl script only reads one line at a time); the condition is that the
> beginning of each line is easily recognizable. If your first column is
> _not_ easily recognizable, maybe you can find a pattern showing a line
> is incomplete and join it with the next line (a wrong number of |, for
> example).
> 
> Hope I've been clear enough (I'm not that good at explaining things in
> English) and that it helps.
> 
> --
> Isabelle Brette - [EMAIL PROTECTED]
>

If you use a perl script to 'count' your columns, you can escape the
newlines within your addresses with a \ and postgres will transparently
convert them back to newlines in the data. By doing this, COPY will treat
more than one physical line in your text file as a single row.

network=# create table tmp2 (id int, name text, address text);
CREATE
network=# insert into tmp2 (id,name,address) values(100,'bill','456
harmony ln.
network'# hickup, NH');
INSERT 86195110 1
network=# select * from tmp2;
 id  | name |  address
-+--+
 100 | bill | 456 harmony ln.
hickup, NH
(1 row)

Insert another row.

network=# insert into tmp2 (id,name,address)values(200,'Sam','Is
linebreak here?\newline');
INSERT 86195203 1
network=# select * from tmp2;
 id  | name |   address
-+--+-
 100 | bill | 456 harmony ln.
hickup, NH
 200 | Sam  | Is linebreak here?
ewline
(2 rows)

Here is the export file I copied out from tmp2
100|bill|456 harmony ln.\
hickup, NH
200|Sam|Is linebreak here?\
ewline

This file imported correctly to another table with the newlines embedded
in the data.
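The 'counting' idea can be sketched with awk as well (the sample input mirrors the record quoted above, and a full record is assumed to have four '|'-separated columns, i.e. three delimiters):

```shell
# Join physical lines until a record has 3 delimiters (4 columns), escaping
# each embedded newline with a backslash so COPY keeps it inside column 3.
joined=$(printf 'aab770|awkc.com administration|sultan 23\nBogota, na0|CO\n' |
  awk -F'|' '{
    buf = (buf == "") ? $0 : buf "\\\n" $0   # escape the embedded newline
    n += NF - 1                              # delimiters seen so far
    if (n >= 3) { print buf; buf = ""; n = 0 }
  }')
echo "$joined"
```

The output matches the backslash-continuation format of the export file shown above.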


-- 
Bill MacArthur
Webmaster
DHS Club




Re: [ADMIN] Maximum Performance

2002-01-24 Thread DHSC Webmaster

Jean,
We are using a quad pentium xeon machine with 6 GB ram RH 7.1 enterprise
kernel. Postgres uses everything that is available. Actually, it's only
idling with the hardware we have and we have tables with over 2M rows.
Your disk subsystem is very important as well as it can easily become a
bottleneck that can cripple an otherwise capable system. We have placed
our largest and most active tables as well as indices on separate
physical drives for best performance.

Jean Huveneers wrote:
> 
> Hi,
> 
> Within a couple of weeks we will start using PostgreSQL on Mandrake 8.1
> in real business (we have been testing for over half a year).
> 
> In the future we will have some tables with 100.000+ records and the system
> has to work very fast.
> 
> I know that the speed of queries depends much on the amount of RAM available
> to PostgreSQL; the server will only run the databases. What amount of
> RAM is useful (I mean, does Postgres use the RAM, if available, up to 2
> GB)?
> 
> I can choose a Dual Athlon MP on a Tyan Thunder K7 motherboard, but
> does PostgreSQL use the 2 processors?
> 
> Regards,
> 
> Jean Huveneers
> 

-- 
Bill MacArthur
Webmaster
DHS Club




Re: [ADMIN] Indexes on separate disk ?

2002-06-14 Thread DHSC Webmaster

You sure can, Charlie.
We have our indexes, lightly used/smaller tables and heavily used/larger
tables split across several disks.
In a nutshell,
1. You just have to create the objects.
2. Then identify the objects using oid2name.
3. Shut down your database and move the objects.
4. Then put symlinks in their place pointing to their new destination.
5. Fire it back up.

(creating a reliable backup before this procedure is highly recommended)
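Steps 3 and 4 can be rehearsed on throwaway paths first. Everything below (the directory layout and the OID-style filename 16384) is invented for illustration, not taken from a real cluster:

```shell
demo=$(mktemp -d)                      # stand-in for the real data directory
mkdir -p "$demo/base/1" "$demo/disk2"  # fake database dir and second disk
echo 'index data' > "$demo/base/1/16384"
mv "$demo/base/1/16384" "$demo/disk2/16384"        # step 3: move the object
ln -s "$demo/disk2/16384" "$demo/base/1/16384"     # step 4: symlink in place
cat "$demo/base/1/16384"               # reads through the symlink
```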

Charlie Toohey wrote:
> 
> I've looked around a lot and don't think this is possible with Postgres, but
> figured I would ask in case I missed something. Is it possible to configure
> things so that an index resides on a separate disk ? It doesn't look like it,
> since they both have to reside in the same database, and the entire contents
> of the database would reside underneath a single directory, and therefore on
> a single disk.
> 
> Thanks
> Charlie
> 

-- 
Bill MacArthur
Webmaster
DHS Club






[ADMIN] best OS suggestions / ease my doubts

2003-03-31 Thread DHS Webmaster
Hello,
Just a quickie. We are planning an upgrade from 7.1.3 to 7.3. Since we
have to bring down our production server to do it we are planning on
killing all the birds with one shot. We are running RH 7.1 and were
planning on using RH 8. After reading a little bit recently here and
there and installing RH 8 on a simple mail server, I'm getting a little
nervous.
We have a database dedicated Dell 6450 quad Xeon machine with 6gig ram.
We have had frequent bounce buffer errors and have to run the enterprise
kernel rather than the smp kernel to recognize the full memory with RH
7.1. I realize this is more of an OS question, but I want to provide the
optimal OS environment for our Postgres installation and you guys are the
most attuned to those issues.
Anyone got any real case based suggestions using similar hardware
config, Postgres 7.3 and a more recent linux?
Many thanks!
-- 
Bill MacArthur
Webmaster
DHS Club





Re: [ADMIN] How do I select the last Id in a column???

2003-06-26 Thread DHS Webmaster
What we do, since 2 queries are necessary anyway, is select the
NEXTVAL, which gives us our 'record id' up front. Then when we do our
insert, we include that field with the value we just selected from our
sequence and we're all set. Of course if the insert fails for some
reason, then you will have a gap in the IDs, but that is not a problem
to us and the query to perform the NEXTVAL is surely less expensive than
a query on the table after the insert to get the ID.
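In SQL, the sequence-first pattern looks like this (the sequence and table names are invented for illustration):

```sql
SELECT nextval('orders_id_seq');  -- suppose this returns 1001
-- The application now inserts with the id it already holds:
INSERT INTO orders (id, locationid, shippingid, statusid, date)
VALUES (1001, 1, 2, 3, current_date);
-- No follow-up query is needed to learn the new row's id.
```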

Josh Goldberg wrote:
> 
> to get the last record inserted just
> select * from row order by autoincId desc limit 1;
> 
> if you have several clients doing this you might want to put your insert and
> the select inside a transaction.
> 
> - Original Message -
> From: "Michael Kovalcik" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Monday, June 23, 2003 11:47 AM
> Subject: [ADMIN] How do I select the last Id in a column???
> 
> > Hello,
> >
> > I'm trying to find out how I select the last id in a
> > column if I don't know anything else about the row.
> > What i am doing is inserting information into my
> > Orders table and the id is auto-incrementing.  The
> > next thing i want to be able to do is grab the the id
> > of the order i just inserted.  How do i do this?
> >
> > Here is the code i have so far:
> >
> > my $sql = "INSERT INTO ${Orders}
> > (locationid,shippingid,statusid,date) VALUES (?,?,?,?)";
> >
> > my $sth = $dbh->prepare($sql);
> >
> > $sth->execute(${LocationID},${ShippingID},${StatusID},${date});
> > #--*
> > my $sth = $dbh->prepare("SELECT id FROM ${Orders}");
> >
> > $sth->execute();
> > $OrderID = $sth->fetchrow_arrayref;
> >
> > I know the select statement that i have will not work,
> > because it will bring me every id number in the table.
> >  I just want the latest one.
> >
> > Please HELP
> >
> > Thank you so much,
> > Mike
> >
> > __
> > Do you Yahoo!?
> > SBC Yahoo! DSL - Now only $29.95 per month!
> > http://sbc.yahoo.com
> >
> >
> 

-- 
Bill MacArthur
Webmaster
DHS Club



Re: [ADMIN] Many idle processes?

2005-05-17 Thread DHS Club Webmaster
Joel Fradkin wrote:
I just went live yesterday and I can see many idle processes on my 
redhat AS4 running postgres 8.0.2

Is this something I should be worried about?
Being a newb to Linux and somewhat to postgres I need to add the vacuum 
and backup scenario.

Josh from Commandpromt gave me some syntax, so I am wondering about how 
often I should vacuum ?

An example of cron setting would be perfect.
I may run the backup from a different server.

Also I am experiencing an issue with French text not appearing correctly 
on the web site.

We did not have this issue in testing and even now when we hook up to a 
test server locally (running windows) the French text appears ok in a 
test of our app running on a desk top.


In production I have windows 2000 for my IIS server and the Linux box 
for data. Any ideas how to fix up my French? The database itself is 
SQL_ASCII.


Joel Fradkin

Wazagua, Inc.
2520 Trailmate Dr
Sarasota, Florida 34243
Tel.  941-753-7111 ext 305

[EMAIL PROTECTED]
www.wazagua.com

Joel, the 'idle' processes are probably a result of your web server 'caching' 
them for reuse. This should not present a problem. Although we use linux 
apache for our web server, we have it configured to 'cache' connections for 
quick reuse. As long as your max connections setting in postgres is high 
enough, you won't experience any trouble.
As far as your french, I'm guessing that either the data has been stored in the 
DB as unicode and is being rendered as iso-8859 or vice-versa. The test server 
is probably sending a different http header than the live server which would 
account for the difference in what the browser is representing. If you are 
seeing odd character doubling where there should be an accented character, then 
the data is stored as utf-8, but the http header says it is something other 
than utf-8. If you are seeing a '?' where there should be an accented 
character, then it is probably stored as plain ascii but the http header says 
it is supposed to be utf-8.
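The 'character doubling' case can be reproduced outside a browser (a sketch; assumes the `iconv` utility is available). UTF-8 stores 'é' as the two bytes 0xC3 0xA9; reading those bytes as Latin-1 and re-encoding turns each byte into its own character:

```shell
# 0xC3 0xA9 read as Latin-1 are 'Ã' and '©'; re-encoded to UTF-8 they
# become four bytes, c3 83 c2 a9 -- the doubling symptom.
doubled=$(printf '\303\251' | iconv -f latin1 -t utf8 | od -An -tx1 | tr -d ' \n')
echo "$doubled"
```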
--
Bill MacArthur
Webmaster
The DHS Club, Inc.
The Best Is Yet To Come!