Re: [GENERAL] Referential integrity question

2000-11-05 Thread William H. Geiger III

In <[EMAIL PROTECTED]>, on 11/05/00 
   at 10:12 AM, "Raymond O'Donnell" <[EMAIL PROTECTED]> said:

>Hi all,

>Is this normal? - I have discovered the hard way that permissions
>GRANTed to a user on one table influence that user's ability to update
>another table in a foreign key relationship with it. To be specific,
>here are the relevant bits of the two tables:

>create table usertypes (
>   typecode  varchar(16) primary key,
>   description   varchar(64)
>);

>grant select on usertypes to webanon;

>create table users (
>   uid   varchar(16)  primary key,
>   pwd   varchar(16),
>   typecode   varchar(16) references usertypes
>  on update cascade
>);

>grant select,insert,update,delete on users to webanon;

>Now, when user webanon tries to update a field in table "users", the
>update doesn't happen. However, it DOES happen if I grant update
>permissions to webanon on table "usertypes".

>Any comments welcome! - Many thanks.

If I am reading this correctly, you wouldn't want webanon to be able to
update table "usertypes". Table "usertypes" is a lookup table and as such
should not be modified by changes made through table "users".

How do you have the relationship between the two tables set up?
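
That said, a guess at what's going on (an assumption on my part, based
on how the RI triggers appear to work in 7.0): the foreign key check
locks the referenced row with SELECT ... FOR UPDATE, and FOR UPDATE
requires UPDATE privilege on the referenced table. Roughly, something
like this runs with webanon's permissions whenever a row in "users" is
inserted or updated ('member' stands in for the actual typecode value):

    SELECT 1 FROM usertypes
        WHERE typecode = 'member' FOR UPDATE;

    -- fails unless:  GRANT UPDATE ON usertypes TO webanon;

That would explain why granting UPDATE on "usertypes" makes the update
on "users" go through.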

-- 
---
William H. Geiger III  http://www.openpgp.net  
Geiger Consulting

Data Security & Cryptology Consulting
Programming, Networking, Analysis
 
PGP for OS/2:   http://www.openpgp.net/pgp.html
E-Secure:   http://www.openpgp.net/esecure.html
---




Re: [GENERAL] where to find postgresql jobs in the Washington, DC area??

2000-11-05 Thread John McKown

I totally agree that putting job postings here would be undesirable -
well, at least for me; I don't want a job, I've got one. However, the
addition of a job mailing list by itself would be OK by me, though I
wonder how effective it would be. If the list is archived, that would be
a bit better. Still, I think a separate page or set of pages on a Web
site somewhere would be better. Of course, such a web page would use
PostgreSQL to store the information (job wanted / job opening). I've
used PostgreSQL and PHP4 together effectively to make dynamic pages. The
problem with this would be administrative: adding and removing openings,
adding and removing "position desired" entries - unless it were to be
totally user-driven. I guess each entry would need a validated email
address (like the mailing list) and a password, so that it would be more
difficult for one person to alter something put in by another person.
But there are many such Web sites already (I think http://www.monster.com
is one).
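
Just to illustrate, a rough sketch of the sort of table I have in mind
(all names made up):

    create table job_posts (
        id      serial primary key,
        kind    varchar(10) not null,   -- 'wanted' or 'opening'
        email   varchar(80) not null,   -- validated, like the list
        pwd     varchar(16) not null,   -- so a poster can edit only
                                        -- his own entry
        descr   text,
        posted  timestamp default now()
    );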

Well, just my thoughts. Probably not worth the electrons it takes to
display them,

John McKown

On Sat, 4 Nov 2000, Tom Lane wrote:

> KuroiNeko <[EMAIL PROTECTED]> writes:
> >  Maybe this post by Julian will lead to `legalization' of job-related
> > postings on -general?
> 
> Personally, I'd rather it didn't.
> 
> I have no objection to creation of a pgsql-jobs mail list, if there's
> interest --- but let's not clutter the general list with such traffic.
> 
>   regards, tom lane
> 





[GENERAL] PG and login

2000-11-05 Thread Jarmo Paavilainen

Hi,

Where does the backend do its user validation?

If I login with "psql -u" how does the backend take care of the username and
password?

I've tried to find where it happens, without success (using a lot of
elog(DEBUG,...) calls). (We are talking "password" identification, not
"ident", Kerberos, or "crypt".)

I've located where the pg_shadow compare happens, but if I give a wrong
username it will not even get that far. "pg_pwd" is never created, so I
assume it's not used. So how in h*** does pg do it? And where? (Which
function? Which file? What mechanism?)

I'm trying to make usernames caseless (yes, I enforce that people use
ASCII-only (32-127) in usernames, and yes, this is not SQL92).
[EMAIL PROTECTED]

// Jarmo




[GENERAL] Expectations of MEM requirements for a DB with large tables.

2000-11-05 Thread Michael Miyabara-McCaskey

Hello to all,

I am new to the PostgreSQL world, as well as the Linux world... kind of a
double whammy...

Anyway, I crashed my system the other day when I did a "select *" from one
of my large tables (about 5.5GB in size). Now this is not something that
will normally happen, as I would normally have some criteria to reduce the
output size, but it got me thinking...

Does anyone know what the ratio of data output size (say from a select) to
the amount of RAM used is?

Hopefully the answer is not 1 to 1... as I would need to buy a LOT of RAM...

Other relevant info: Red Hat Linux with a 2.2.17 kernel, PostgreSQL 7.0.2.

Thank you in advance for any and all help.

Note: I posted this to "Admin" also, but I'm not quite sure if it's really
an "installation" question, so I'm posting it here as well.

-Michael Miyabara-McCaskey




Re: [GENERAL] Expectations of MEM requirements for a DB with large tables.

2000-11-05 Thread Bruce Guenter

On Sun, Nov 05, 2000 at 09:17:52PM -0800, Michael Miyabara-McCaskey wrote:
> Anyway, I crashed my system the other day when I did a "select *" from one
> of my large tables (about 5.5gb in size). Now this is not something that
> will normally happen, as I would normally have some criteria to reduce the
> output size, but it got me thinking...
> 
> Does anyone know what the ratio of data output size (say from a select) to
> the amount of RAM used is?

You are really asking two questions:  how much memory does the back end
take to execute that query, and how much memory does the front end
(psql, I assume) take to receive the response.

To answer the first, the back ends allocate a fixed pool of buffers when
they start up, and never use more RAM than is in that pool.  If they
need more temporary space (e.g., for sorting), they will create temporary
files as necessary.

To answer the second, if you do a plain "SELECT *", it will buffer the
entire response set into RAM before printing anything out.  If you have
more than a trivial number of records to fetch from the database (and
5.5GB is certainly more than trivial), use a cursor and only fetch a few
hundred at a time.
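
For example, something along these lines (a sketch only; "bigtable" and
the fetch size are made up):

    BEGIN;
    DECLARE bigcur CURSOR FOR SELECT * FROM bigtable;
    FETCH 500 FROM bigcur;   -- repeat until it returns no rows
    CLOSE bigcur;
    COMMIT;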
-- 
Bruce Guenter <[EMAIL PROTECTED]>   http://em.ca/~bruceg/



Re: [GENERAL] Expectations of MEM requirements for a DB with large tables.

2000-11-05 Thread KuroiNeko


 Mike,

> Anyway, I crashed my system the other day when I did a "select *" from
> one of my large tables (about 5.5gb in size).

 Well, it takes a bit more than that to actually crash the system. Can
you give more details? What _exactly_ happened? Did it hang? Kernel
panic? Something else?

> Does anyone know what the ratio of data output size (say from a select)
> to the amount of RAM used is?

 It simply can't be other than 1 : 1. Well, in a sense. Did you use psql?
Another app? If the latter, what interface: API, DBI, other? Was it a local
or remote request?

> Other relevant info, RedHat Linux 2.2.17 kernel, PostgreSQL 7.0.2.

 Is this a genuine kernel, or the one shipped by RH? Did you build PGSQL
from sources, or was it a pre-built binary? How much RAM do you have, and
how fast and big is your swap? Swap settings? Other processes running?
 As you see, there are more questions than answers at this stage. Tell us
more. Tuning the system has never been easy.




RE: [GENERAL] Expectations of MEM requirements for a DB with large tables.

2000-11-05 Thread Michael Miyabara-McCaskey

Bruce,

Your assumptions were absolutely on target.

I appreciate the fact that you de-coupled my question as well.

As I was in fact using "psql", this certainly explains my system dropping to
its knees...

Out of curiosity, if I were using something else besides "psql" would this
have still been a problem?  Or is psql uncommon in this respect?

-Michael


> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Bruce Guenter
> Sent: Sunday, November 05, 2000 9:34 PM
> To: [EMAIL PROTECTED]
> Subject: Re: [GENERAL] Expectations of MEM requirements for a DB with
> large tables.
>
>
> On Sun, Nov 05, 2000 at 09:17:52PM -0800, Michael Miyabara-McCaskey wrote:
> > Anyway, I crashed my system the other day when I did a "select *" from one
> > of my large tables (about 5.5gb in size). Now this is not something that
> > will normally happen, as I would normally have some criteria to reduce the
> > output size, but it got me thinking...
> >
> > Does anyone know what the ratio of data output size (say from a select) to
> > the amount of RAM used is?
>
> You are really asking two questions:  how much memory does the back end
> take to execute that query, and how much memory does the front end
> (psql, I assume) take to receive the response.
>
> To answer the first, the back ends allocate a fixed pool of buffers when
> they start up, and never use more RAM than is in that pool.  If they
> need more temporary space (e.g., for sorting), they will create temporary
> files as necessary.
>
> To answer the second, if you do a plain "SELECT *", it will buffer the
> entire response set into RAM before printing anything out.  If you have
> more than a trivial number of records to fetch from the database (and
> 5.5GB is certainly more than trivial), use a cursor and only fetch a few
> hundred at a time.
> --
> Bruce Guenter <[EMAIL PROTECTED]>   http://em.ca/~bruceg/




RE: [GENERAL] Expectations of MEM requirements for a DB with large tables.

2000-11-05 Thread Michael Miyabara-McCaskey

Kuroi,

Thank you for the response.

I was not specifically thinking of diagnosing my crash, but if you are
curious...

 - about 1,000 error messages saying that kswapd could not do something
related to allocating memory.

 - after stopping the "psql" job, it still took about 20 minutes before my
Linux OS recovered enough to reboot.

 - yep, used "psql"; plan on using Perl-DBI soon... I guess from your
response I'd better make sure that whatever I use never does a full-blown
'select *' on that table.

 - 2.2.17 from kernel.org, w/RAID patch.

 - PostgreSQL 7.0.2 was installed via the RPM from RedHat.

 - 98MB of RAM (this is a simple test system only)

 - 256MB of SWAP on a Linux SWAP Partition.

 - SWAP settings? Hmm... sorry, this is where my novice level shows its
colors - I didn't know I could modify them.

 - Other processes, sure tons of them...

Anyway, based upon your response, I understand that the 1 to 1 ratio is what
crashed my system.

Hmm, now you have me wondering though... I recall reading something that
mentioned that data storage size in PostgreSQL is about 6 times the size of
the raw text. So if my table is about 5.5GB on disk, does this mean I might
really only need about 5.5GB / 6, or roughly 940MB, of RAM?

Any idea?

-Michael Miyabara-McCaskey


> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of KuroiNeko
> Sent: Sunday, November 05, 2000 9:54 PM
> To: [EMAIL PROTECTED]
> Subject: Re: [GENERAL] Expectations of MEM requirements for a DB with
> large tables.
>
>
>
>  Mike,
>
> > Anyway, I crashed my system the other day when I did a "select *" from
> > one of my large tables (about 5.5gb in size).
>
>  Well, it takes a bit more than that to actually crash the system. Can
> you give more details? What _exactly_ happened? Did it hang? Kernel
> panic? Something else?
>
> > Does anyone know what the ratio of data output size (say from a select)
> > to the amount of RAM used is?
>
>  It simply can't be other than 1 : 1. Well, in a sense. Did you use psql?
> Another app? If the latter, what interface: API, DBI, other? Was it a local
> or remote request?
>
> > Other relevant info, RedHat Linux 2.2.17 kernel, PostgreSQL 7.0.2.
>
>  Is this a genuine kernel, or the one shipped by RH? Did you build PGSQL
> from sources, or was it a pre-built binary? How much RAM do you have, and
> how fast and big is your swap? Swap settings? Other processes running?
>  As you see, there are more questions than answers at this stage. Tell us
> more. Tuning the system has never been easy.
>