[ADMIN] Database Access Limitation (can't use IDENT).
Hello all, and sorry about my English. I am running PostgreSQL 7.2.1 on Linux and I have a problem limiting database access. I want user X to be able to use ONLY database Y, user A to use ONLY database B, and so on.

Is there another way to do this besides pg_ident.conf? I want something stricter than pg_ident.conf offers: every user should only be able to use their own database. Also, on my PostgreSQL installation, when I use pg_ident.conf to map users, why is there no password check for the users in the pg_ident.conf map?

Thanks for your answers.

./oon
Oon Arfiandwi M.
-If I Want to Create DataBase by My Self, where is the URL ?-
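This is not the pg_ident.conf route the poster is asking about, but a sketch of how pg_hba.conf is commonly used for this kind of restriction, assuming each user's database carries the same name as the user; the network address and netmask below are placeholders, so check the pg_hba.conf section of the 7.2 documentation for the exact fields your release expects:

    # pg_hba.conf sketch: "sameuser" limits each user to the database of the
    # same name, and md5 requires a password for the connection
    local   sameuser                                  md5
    host    sameuser   192.168.0.0   255.255.255.0    md5

As for the missing password check: pg_ident.conf maps only apply to ident authentication, which by design trusts the identity reported by the client's operating system and never asks for a password. A password is only checked when the authentication method column is a password-based one such as password, crypt, or md5.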
[ADMIN] ecpg problem: pre-processor translated hex constant to char
Dear all,

A simple test program:

* * * * * * * * * * * * * * begin * * * * * * * * * * * * * * * *
#include
#include

int main (void)
{
    unsigned int v;
    v = 0x87654321L;
    return (0);
}
* * * * * * * * * * * * * * end * * * * * * * * * * * * * * * * *

compiled with ecpg using:

    ecpg -o mytest.c -I/usr/include/pgsql mytest.pgc

produces the following output C program file:

* * * * * * * * * * * * * * begin * * * * * * * * * * * * * * * *
/* Processed by ecpg (2.8.0) */
/* These three include files are added by the preprocessor */
#include
#include
#include
#line 1 "test.pgc"
#include
#include

int main (void)
{
    unsigned int v;
    v = '0x87654321'L;
    return (0);
}
* * * * * * * * * * * * * * * end * * * * * * * * * * * * * * *

The preprocessor has translated the 4-byte constant (0x87654321) into a one-byte character constant (inside the single quotes). This seems to happen only when the high bit of the constant is set; it does not add the quotes if the constant is, say, 0x12345678.

I also noticed that the line number reported in the preprocessor's error output is incorrect: it is 1 less than the actual line number in the source file.

As shown, I am using version 2.8.0 of ecpg. Is my version simply too old, or is this a bug? Any suggestions for working around the translation problem?

Thanks,
Raymond Fung.
[ADMIN] starting the database server
Hi there,

I have a really silly question: I installed PostgreSQL (v7.2) from SuSE Linux 8.0 and then wanted to try the first steps with it. As I understand it, I should log in as the user postgres and then start the server with some commands that are described in the documentation.

It seems the user postgres (group daemons) was added by the installation procedure, at least not by me. Nevertheless, when I try to log in as this user, it fails because I do not have a password! So I cannot take the very first step.

So my question simply is: how do I log in as the user postgres?

Please excuse my bad English; it is not my native language.

Thank you for any information
Wolfgang
Re: [ADMIN] starting the database server
This is, to my mind, more of a Linux question. But all the same:

    su -            # su to root
    su - postgres   # once root, you can su to postgres without a password prompt

and hence (hopefully) avoid your problem. Then you can run the program, hopefully.

Gareth

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Wolfgang Meiners
Sent: 03 July 2002 22:16
To: [EMAIL PROTECTED]
Subject: [ADMIN] starting the database server

> So my question simply is: how do I log in as the user postgres?
Re: [ADMIN] starting the database server
Hi Wolfgang,

> It seems the user postgres (group daemons) was added by the
> installation procedure, at least not by me. Nevertheless, when I try
> to log in as this user, it fails because I do not have a password!
> So I cannot take the very first step.
>
> So my question simply is: how do I log in as the user postgres?

Switch user to root (su - root) and then either change the password or switch to user postgres (su - postgres); because you do this as root, you don't have to log in.

Florian
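Once you can reach the postgres account, a rough sketch of the usual sequence for starting the server by hand looks like the following; the data directory path is an assumption (the SuSE packages often use /var/lib/pgsql/data) and the log file name is made up, so adjust both to your installation:

    su -                                   # become root
    su - postgres                          # switch to the postgres account (no password needed from root)
    pg_ctl start -D /var/lib/pgsql/data -l /tmp/postgres.log   # start the postmaster, logging to a file

The SuSE packages may also ship an init script that does the same thing, which is worth checking before starting the server by hand.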
[ADMIN] Which DB is using space.
Hi all,

Below is the output from 'du -kx / | sort -rn | head'. It shows that /var/lib/pgsql/data/base/46169 is using a big chunk of my disk.

How can I tell which DB this is? What suggestions can people make to reduce the amount of disk space used by this DB?

__BEGIN__
361365  /
282030  /var
232427  /var/lib
186879  /var/lib/pgsql
186874  /var/lib/pgsql/data
153866  /var/lib/pgsql/data/base
122399  /var/lib/pgsql/data/base/46169
 40675  /lib
__END__

Ta
--
Gary Stainburn

This email does not contain private or confidential material as it may be snooped on by interested government parties for unknown and undisclosed purposes - Regulation of Investigatory Powers Act, 2000
[ADMIN] Union strange explain
Hi all,

I have a view of this form:

CREATE VIEW my_view AS
    <QUERY-A>
    UNION ALL
    <QUERY-B>

Now if I do:

# explain <QUERY-A> WHERE login = 'asdadad';
NOTICE:  QUERY PLAN:

Nested Loop  (cost=0.00..7.08 rows=1 width=88)
  ->  Nested Loop  (cost=0.00..5.05 rows=1 width=52)
        ->  Index Scan using user_login_login_key on user_login  (cost=0.00..2.02 rows=1 width=16)
        ->  Index Scan using idx_user_user_traffic on user_traffic ut  (cost=0.00..3.02 rows=1 width=36)
  ->  Index Scan using contracts_pkey on contracts c  (cost=0.00..2.01 rows=1 width=36)

# explain <QUERY-B> WHERE login = 'asdadad';
NOTICE:  QUERY PLAN:

Nested Loop  (cost=0.00..4.22 rows=1 width=68)
  ->  Nested Loop  (cost=0.00..3.20 rows=1 width=40)
        ->  Index Scan using user_login_login_key on user_login  (cost=0.00..2.02 rows=1 width=16)
        ->  Seq Scan on cas_subscription csub  (cost=0.00..1.08 rows=8 width=24)
  ->  Seq Scan on cas_service cser  (cost=0.00..1.01 rows=1 width=28)

If instead I do:

# explain select * from my_view where login = 'asdadad';
NOTICE:  QUERY PLAN:

Subquery Scan foo  (cost=367.73..393.27 rows=93 width=88)
  ->  Unique  (cost=367.73..393.27 rows=93 width=88)
        ->  Sort  (cost=367.73..367.73 rows=929 width=88)
              ->  Append  (cost=84.59..321.95 rows=929 width=88)
                    ->  Subquery Scan *SELECT* 1  (cost=84.59..303.59 rows=926 width=88)
                          ->  Hash Join  (cost=84.59..303.59 rows=926 width=88)
                                ->  Hash Join  (cost=81.57..262.53 rows=926 width=52)
                                      ->  Seq Scan on user_traffic ut  (cost=0.00..52.78 rows=2778 width=36)
                                      ->  Hash  (cost=78.16..78.16 rows=1364 width=16)
                                            ->  Seq Scan on user_login  (cost=0.00..78.16 rows=1364 width=16)
                                ->  Hash  (cost=2.81..2.81 rows=81 width=36)
                                      ->  Seq Scan on contracts c  (cost=0.00..2.81 rows=81 width=36)
                    ->  Subquery Scan *SELECT* 2  (cost=0.00..18.36 rows=3 width=68)
                          ->  Nested Loop  (cost=0.00..18.36 rows=3 width=68)
                                ->  Seq Scan on cas_service cser  (cost=0.00..1.01 rows=1 width=28)
                                ->  Materialize  (cost=17.31..17.31 rows=3 width=40)
                                      ->  Nested Loop  (cost=0.00..17.31 rows=3 width=40)
                                            ->  Seq Scan on cas_subscription csub  (cost=0.00..1.08 rows=8 width=24)
                                            ->  Index Scan using user_login_pkey on user_login  (cost=0.00..2.02 rows=1 width=16)

How is it possible that the two Subquery Scans have two completely different plans? How can I get the same plans for the two subselects as when each one is run as a single query?

Ciao
Gaetano
Re: [ADMIN] Which DB is using space.
Gary Stainburn wrote:
> Hi all,
>
> Below is the output from 'du -kx / | sort -rn | head'. It shows that
> /var/lib/pgsql/data/base/46169 is using a big chunk of my disk.
>
> How can I tell which DB this is?
> What suggestions can people make to reduce the amount of disk space used by
> this DB?
>
> __BEGIN__
> 361365  /
> 282030  /var
> 232427  /var/lib
> 186879  /var/lib/pgsql
> 186874  /var/lib/pgsql/data
> 153866  /var/lib/pgsql/data/base
> 122399  /var/lib/pgsql/data/base/46169
>  40675  /lib
> __END__

Sure, see contrib/oid2name. It generates a list of oid/dbname pairs. I also recommend getting oid2name from the snapshot/CVS and using that; it has the newer README and some changes to make the examples display properly.

Also see the new doc section for 7.3 on disk usage:

    http://developer.postgresql.org/docs/postgres/diskusage.html

--
  Bruce Momjian                     |  http://candle.pha.pa.us
  [EMAIL PROTECTED]                 |  (610) 853-3000
  +  If your life is a hard drive,  |  830 Blythe Avenue
  +  Christ can be your backup.     |  Drexel Hill, Pennsylvania 19026
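If building contrib/oid2name is not convenient, a quick sketch of the same lookup in plain SQL works too: the directory name under data/base is the database's OID, so the 46169 here comes straight from the du output above.

    SELECT datname FROM pg_database WHERE oid = 46169;

Once the database is identified, connecting to it and running VACUUM (or VACUUM FULL, to return space to the operating system) is usually the first thing to try for shrinking it.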
Re: [ADMIN] Authentication in batch processing
Alfred Anzlovar wrote:
> Maybe this could be done differently, but as it was, it was so easy
> and nice. And if there are more of us poor souls with comparable
> problems, maybe an option called '--pwd-from-stdin-only' would be a
> solution. Another option (thank God for Open Source projects) is adding
> an option for prompting to psql myself. But I see this is a rather
> drastic measure to get what I need/want.

How about PGUSER/PGPASSWORD? That will work, assuming you don't have an OS (BSD?) that displays environment variables for a process.

--
  Bruce Momjian                     |  http://candle.pha.pa.us
  [EMAIL PROTECTED]                 |  (610) 853-3000
  +  If your life is a hard drive,  |  830 Blythe Avenue
  +  Christ can be your backup.     |  Drexel Hill, Pennsylvania 19026
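A minimal sketch of that suggestion in a batch job; the user, password, host, database, and script names below are all made up:

    PGUSER=batchuser PGPASSWORD=secret psql -h dbhost -f nightly_report.sql batchdb

Keep in mind that the password then lives in the script or in the job's environment, which is exactly the exposure mentioned above on systems that show other users' environment variables.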
Re: [ADMIN] Authentication in batch processing
Alfred Anzlovar wrote:
> On Thu, 4 Jul 2002, Bruce Momjian wrote:
>
> > How about PGUSER/PGPASSWORD? That will work. This is assuming you
> > don't have an OS (BSD?) that displays environment variables for a
> > process.
>
> No BSD, Linux. And it works. Thanks.
>
> Well, I must admit we had some rain today, but after your answer the sun
> came out from behind the clouds :)
>
> More thanks and greetings from the sunny side of the Alps.
> Alfred Anzlovar

7.3 may remove PGPASSWORD, I think, and instead allow you to specify a file that contains the password.

> PS
> Is it OK to send e-mail to [EMAIL PROTECTED] AND to you?

Yes, of course.

--
  Bruce Momjian                     |  http://candle.pha.pa.us
  [EMAIL PROTECTED]                 |  (610) 853-3000
  +  If your life is a hard drive,  |  830 Blythe Avenue
  +  Christ can be your backup.     |  Drexel Hill, Pennsylvania 19026
Re: [ADMIN] Union strange explain
On Thu, 4 Jul 2002, Gaetano Mendola wrote:

> I have a view of this form:
>
> CREATE VIEW my_view AS
>     <QUERY-A>
>     UNION ALL
>     <QUERY-B>
>
> How is it possible that the two Subquery Scans have two completely
> different plans? How can I get the same plans for the two subselects
> as when each one is run as a single query?

It's probably not pushing the login = 'asdadad' condition down into the queries in the view, so it is possibly doing the full UNION ALL followed by the condition (given that it's estimating a larger number of rows returned).

I think there was some question about whether it was safe to do that optimization, i.e., is

    select * from (a union [all] b) where condition

always the same as

    select * from a where condition
    union [all]
    select * from b where condition

This was discussed recently, but I forget what the final determination was.
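A self-contained sketch of the equivalence being discussed, using made-up tables rather than the original view (whose branch queries were not posted):

    -- Hypothetical tables standing in for the two branches of the view
    CREATE TABLE branch_a (login text, val integer);
    CREATE TABLE branch_b (login text, val integer);

    CREATE VIEW ab AS
        SELECT login, val FROM branch_a
        UNION ALL
        SELECT login, val FROM branch_b;

    -- Filtering the whole view:
    SELECT * FROM ab WHERE login = 'asdadad';

    -- versus pushing the filter into each arm by hand, which is the shape of
    -- plan the poster was hoping the optimizer would produce on its own:
    SELECT login, val FROM branch_a WHERE login = 'asdadad'
    UNION ALL
    SELECT login, val FROM branch_b WHERE login = 'asdadad';

If the planner will not produce the second form by itself, writing the condition into each branch by hand, as in the last statement, is one workaround.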
[ADMIN] Performance impact of record sizes
We have a need to store text data which is typically just a hundred or so bytes, but in some cases may extend to a few thousand. Our current field is a varchar of 1024, which is not large enough. The key data in the same record is fixed-size and much smaller.

Our application is primarily transaction oriented, which means that records will normally be fetched via random access, not sequential scans.

The question is: what size thresholds exist? I assume that there is a "page" size over which the record will be split into more than one. What is that size, and does the spill cost any more or less than if I had split the record into two or more individual records to hold the same data?

Obviously, the easiest thing for me to do is just set the varchar to something big (say, 10K), but I don't want to do this without understanding the OLTP performance impact.

Thanks in advance

John Moore
http://www.tinyvital.com/personal.html
UNITED WE STAND
[ADMIN] views: performance implication
We hope to use views as a way to give customers ODBC-based ad hoc query access to our database while enforcing security. The reason is that we do not want to put data into separate tables per customer, but rather use a customer ID as part of the query criteria on every table.

So the question is: are there any negative performance implications of doing so (other than the obvious one of having more data in a table than is of interest to the querying customer)? Back in the old days, views were a performance no-no in Informix, so I want to be sure we aren't setting a big trap for ourselves.

Thanks in advance.
Re: [ADMIN] Performance impact of record sizes
John Moore wrote:
> We have a need to store text data which is typically just a hundred or so
> bytes, but in some cases may extend to a few thousand. Our current field
> is a varchar of 1024, which is not large enough. The key data in the same
> record is fixed-size and much smaller.
>
> Our application is primarily transaction oriented, which means that records
> will normally be fetched via random access, not sequential scans.
>
> The question is: what size thresholds exist? I assume that there is a
> "page" size over which the record will be split into more than one. What is
> that size, and does the spill cost any more or less than if I had split the
> record into two or more individual records to hold the same data?
>
> Obviously, the easiest thing for me to do is just set the varchar to
> something big (say, 10K), but I don't want to do this without understanding
> the OLTP performance impact.

If you don't want a limit, use TEXT. Long values are automatically stored in TOAST tables to avoid performance problems with sequential scans over long row values.

--
  Bruce Momjian                     |  http://candle.pha.pa.us
  [EMAIL PROTECTED]                 |  (610) 853-3000
  +  If your life is a hard drive,  |  830 Blythe Avenue
  +  Christ can be your backup.     |  Drexel Hill, Pennsylvania 19026
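A minimal sketch of that suggestion; the table and column names here are made up:

    -- TEXT places no declared limit on length. Values too large for a normal
    -- heap page are compressed and/or moved out to the table's TOAST relation
    -- automatically, so the common short values stay inline in the main row.
    CREATE TABLE message (
        id      integer PRIMARY KEY,
        subject varchar(200),
        body    text
    );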
Re: [ADMIN] views: performance implication
John Moore wrote:
> We hope to use views as a way to give customers ODBC-based ad hoc query
> access to our database while enforcing security. The reason is that we do
> not want to put data into separate tables per customer, but rather use a
> customer ID as part of the query criteria on every table.
>
> So the question is: are there any negative performance implications of
> doing so (other than the obvious one of having more data in a table than is
> of interest to the querying customer)? Back in the old days, views were a
> performance no-no in Informix, so I want to be sure we aren't setting a big
> trap for ourselves.

Views are rewritten by the rule system into the underlying query for every query issued, so there is little performance impact except for the complexity of your view (GROUP BY, aggregates, etc.).

--
  Bruce Momjian                     |  http://candle.pha.pa.us
  [EMAIL PROTECTED]                 |  (610) 853-3000
  +  If your life is a hard drive,  |  830 Blythe Avenue
  +  Christ can be your backup.     |  Drexel Hill, Pennsylvania 19026
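A rough sketch of the pattern being described; the table, view, and user names are invented, and the fixed customer ID stands in for however each customer is identified:

    -- Keep all customers in one table, expose each customer only a view that
    -- filters on their customer ID, and lock them out of the base table.
    CREATE TABLE orders (
        order_id    integer PRIMARY KEY,
        customer_id integer NOT NULL,
        total       numeric(10,2)
    );

    CREATE VIEW cust42_orders AS
        SELECT order_id, total FROM orders WHERE customer_id = 42;

    REVOKE ALL ON orders FROM PUBLIC;
    GRANT SELECT ON cust42_orders TO cust42;

Because the view is just a stored rewrite rule, a query against cust42_orders should plan much like the underlying SELECT with the customer_id condition attached.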
Re: [ADMIN] Performance impact of record sizes
>If you don't want a limit, use TEXT. Long values are automatically
>stored in TOAST tables to avoid performance problems with sequential
>scans over long row values.

Thanks... I wasn't quite clear enough in my question.

I am focused on OLTP performance, and in my case the vast majority of the rows will have only a few hundred bytes in that column, while a few (1%?) may be larger. I assume that internally there is a fixed page size (by which I mean cache buffer size or disk read size) for normal records. In my case, 99% of the physical rows should be short and would fit easily in whatever that size is. So I *suspect* I want to keep the data in the physical row, rather than using TEXT and having it stored separately from the record.

The question is: are there any unexpected consequences? For example, if I have a whole bunch of rows with, say, a 10K varchar field which is only populated with a hundred or two bytes each, will it perform just as well as if that field were defined as a 200-byte field?

A related question: is it more expensive to use varchar than fixed char fields? I assume there is some additional work in physically unpacking the record.

My past experience is with Informix, and a lot of it with very old versions where views were high cost, and so were varchars. Likewise, you didn't want your physical row to exceed the size of a physical page if you could avoid it.

John