Re: [GENERAL] Postgresql and SSL
Jorge Godoy wrote: > I'll have to dig if the libraries I use support that. It > would be much more interesting changing certificates once > a year than hardcoding passwords on code... But remember that you will still be asked for a password unless you use trust authentication. Yours, Laurenz Albe ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
[GENERAL] question about locking
I'm writing a python script to update some tables in a db. My problem is that I need to lock a couple of tables, perform several operations and read the corresponding output. I was thinking about lock in exclusive mode, but in the documentation I found that it is valid only in a transaction. But since I need to execute a command and read the output and so forth, I think I cannot use a transaction. What would you use to lock the table? signature.asc Description: OpenPGP digital signature
Re: [GENERAL] question about locking
Ottavio Campana wrote: > I'm writing a python script to update some tables in a db. My > problem is > that I need to lock a couple of tables, perform several operations and > read the corresponding output. > > I was thinking about lock in exclusive mode, but in the > documentation I > found that it is valid only in a transaction. But since I need to > execute a command and read the output and so forth, I think I > cannot use > a transaction. > > What would you use to lock the table? What makes you think that you "need to lock a couple of tables"? Maybe there is a different, less invasive way to solve your problem. What do you want to achieve? Yours, Laurenz Albe ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] autovacuum
On Thu, Sep 20, 2007 at 04:33:25PM -0500, Scott Marlowe wrote: > On 9/20/07, Robert Fitzpatrick <[EMAIL PROTECTED]> wrote: > > On Thu, 2007-09-20 at 16:38 -0400, Bill Moran wrote: > > > In response to Robert Fitzpatrick <[EMAIL PROTECTED]>: > > > Why does everyone leave of the IO subsystem? It's almost as if many > > > people don't realize that disks exist ... > > > > > > With 2G of RAM, and a DB that's about 3G, then there's at least a G of > > > database data _not_ in memory at any time. As a result, disk speed is > > > important, and _could_ be part of your problem. You're not using RAID > > > 5 are you? > > > > Yes, using RAID 5, not good? RAID 5 with hot fix total of 4 drives. All > > SATA 80GB drives giving me little under 300GB to work with. > > RAID5 optimizes for space, not performance or reliability. It gets > faster but less reliable as it gets bigger. If you can afford the > space RAID-10 is generally preferred. > > Note however that it is far more important for most general purpose > servers to have a RAID controller that is both fast by nature (i.e. > not $50.00) and has battery backed cache with write thru turned on. Surely you mean with write thru turned *off*... Or write-back turned on. But write thru turned on will make your battery unnecessary... //Magnus ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] question about locking
Albe Laurenz wrote: > Ottavio Campana wrote: >> I'm writing a python script to update some tables in a db. My >> problem is >> that I need to lock a couple of tables, perform several operations and >> read the corresponding output. >> >> I was thinking about lock in exclusive mode, but in the >> documentation I >> found that it is valid only in a transaction. But since I need to >> execute a command and read the output and so forth, I think I >> cannot use >> a transaction. >> >> What would you use to lock the table? > > What makes you think that you "need to lock a couple of tables"? The point is that for each table I have a copy I previously made and I want to create an incremental backup. My problem is that I don't want the original table to change, so I lock it. I admit that exclusive lock is probably too much. Does share mode block insert/update/delete but allow reading? Thanks. PS: By the way, I just discovered that with python psycopg2 the cursor is wrapped in a transaction, so locking works. So the problem is only the correct lock level. signature.asc Description: OpenPGP digital signature
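For reference, SHARE mode does block INSERT/UPDATE/DELETE while still allowing plain SELECT, and the lock can only be taken inside a transaction. A minimal sketch (the table name is just a placeholder, not from the thread):

BEGIN;
LOCK TABLE some_table IN SHARE MODE;
-- concurrent writers now block; plain SELECTs are unaffected
SELECT * FROM some_table;
-- ... compare against the previously made copy here ...
COMMIT;  -- the lock is released at COMMIT or ROLLBACK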
[GENERAL] "not in" clause too slow?
mytable has 1857 rows, copy_mytable is a copy of mytable and I want to know which new rows have been entered. I used the where id not in, and the query works. My problem is that if I run the same command on another table with 378415 rows, it is terribly slow. I ran explain analyze on the first table, just to see how it works, and I have two questions: 1) from the explain analyze output, does the planner scan mytable and for each element run a seq scan on copy_mytable? 2) how can I speed it up? by using indexes? or by changing the query?

db=# EXPLAIN ANALYZE select * from mytable where id not in (select id from copy_mytable);
                                                      QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Seq Scan on mytable  (cost=53.21..148.34 rows=925 width=96) (actual time=9.813..9.813 rows=0 loops=1)
   Filter: (NOT (hashed subplan))
   SubPlan
     ->  Seq Scan on copy_mytable  (cost=0.00..48.57 rows=1857 width=4) (actual time=0.031..3.132 rows=1857 loops=1)
 Total runtime: 10.291 ms

signature.asc Description: OpenPGP digital signature
Re: [GENERAL] question about locking
On Fri, Sep 21, 2007 at 11:53:54AM +0200, Ottavio Campana wrote: > the point is that for each table I have a copy I previously made and I > want to create an incremental backup. My problem is that I don't want > the original table to change, so I lock it. > > I admit that exclusive lock is probably too much. Why not just use SERIALIZABLE transaction mode? Then your program won't see any changes, while other programs can still use the table normally. That's how pg_dump generates consistent backups. Have a nice day, -- Martijn van Oosterhout <[EMAIL PROTECTED]> http://svana.org/kleptog/ > From each according to his ability. To each according to his ability to > litigate. signature.asc Description: Digital signature
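A rough sketch of that serializable approach (in these releases SERIALIZABLE means snapshot isolation: every statement in the transaction sees the snapshot taken at the first query; the table name is again a placeholder):

BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- all statements below see one consistent snapshot of the data
SELECT * FROM some_table;
-- ... read whatever else the incremental backup needs ...
COMMIT;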
Re: [GENERAL] "not in" clause too slow?
Alban Hertroys wrote: > Ottavio Campana wrote: >> 2) how can I speed it up? by using indexes? or by changing the query? > > Do you have indices on mytable.id and copy_mytable.id? > Does using NOT EXISTS get you any better results? mytable.id is the primary key. I create copy_mytable with create table copy_mytable as (select * from mytable); The planner behavior does not change no matter whether I create an index on copy_mytable.id or not.

>> db=# EXPLAIN ANALYZE select * from mytable where id not in (select id
>> from copy_mytable);
>>                                                       QUERY PLAN
>> ----------------------------------------------------------------------------------------------------------------------
>>  Seq Scan on mytable  (cost=53.21..148.34 rows=925 width=96) (actual time=9.813..9.813 rows=0 loops=1)
>>    Filter: (NOT (hashed subplan))
>>    SubPlan
>>      ->  Seq Scan on copy_mytable  (cost=0.00..48.57 rows=1857 width=4) (actual time=0.031..3.132 rows=1857 loops=1)
>>  Total runtime: 10.291 ms
>>

signature.asc Description: OpenPGP digital signature
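One thing worth checking here (an assumption on my part, not something established in the thread): a table built with CREATE TABLE ... AS has no planner statistics until it is analyzed, so a fresh index on it is easy for the planner to ignore. Something along these lines, with a made-up index name:

CREATE INDEX copy_mytable_id_idx ON copy_mytable (id);
ANALYZE copy_mytable;
EXPLAIN ANALYZE
SELECT * FROM mytable
 WHERE id NOT IN (SELECT id FROM copy_mytable);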
[GENERAL] foreign key on views
Hi, Where in the source code can I see the error message that says a foreign key to a view is not allowed? The exact error message as it appears when I try a foreign key reference on a view is: referenced relation "xyz_view" is not a table I just wanted to know where this error message is thrown in the source code. Any help will be appreciated. Thanks josh
Re: [GENERAL] Windows Auto-Vacuum in 8.2.4 or 8.2.5
I thought I would give this question a second try. --- Richard Broersma Jr <[EMAIL PROTECTED]> wrote: > A while back it was pointed out that the Windows version of 8.2.3 had a > bug that prevented > auto-vacuum from working correctly. > > http://archives.postgresql.org/pgsql-general/2007-04/msg00139.php > > I wasn't able to determine from the release notes if this bug was fixed in > versions 8.2.4 or > 8.2.5. Does anyone know if it was? > > Regards, > Richard Broersma Jr. ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Windows Auto-Vacuum in 8.2.4 or 8.2.5
Richard Broersma Jr wrote: > A while back it was pointed out that the Windows version of 8.2.3 > had a bug that prevented auto-vacuum from working correctly. > > http://archives.postgresql.org/pgsql-general/2007-04/msg00139.php > > I wasn't able to determine from the release notes if this bug was > fixed in versions 8.2.4 or 8.2.5. Does anyone know if it was? Hmm, maybe it was, but then I cannot recall what bug I was referring to :-( I can't find a relevant CVS log message either. -- Alvaro Herrera http://www.amazon.com/gp/registry/DXLWNGRJD34J Licensee shall have no right to use the Licensed Software for productive or commercial use. (Licencia de StarOffice 6.0 beta) ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Using RETURNING with INTO inside pgsql
I have to agree with both of you... But unfortunately there are still some loose ends... See bug 3596... http://archives.postgresql.org/pgsql-bugs/2007-09/msg9.php But leaving bugs aside, I will have to say "Bravo!" to the development team! Ciprian Craciun. P.S.: I forgot to press reply all... On 9/21/07, Scott Marlowe <[EMAIL PROTECTED]> wrote: > On 9/21/07, D. Dante Lorenso <[EMAIL PROTECTED]> wrote: > > Not really a question here, I just wanted to share my joy with the > > group. I'm loving the new RETURNING clause in PostgreSQL. This is > > really cool stuff ... > > I love it too! I have an update for our monitoring software that does > something like: > > insert into monitoring table (a,b,c,threshold) select distinct a,b,c, > 50 from bigtable where date > now() - interval '24 hours' returning > a,b,c > > and it is so much simpler than how I had to do it before, mostly in app code. > > Every time I turn around there's something new in postgresql that > makes me love it that much more. > > I owe somebody a whole bunch of pizza... > > ---(end of broadcast)--- > TIP 2: Don't 'kill -9' the postmaster > ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
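For anyone finding this thread in the archives, a minimal sketch of the two forms being discussed, with made-up table and column names: RETURNING in plain SQL, and RETURNING ... INTO inside a PL/pgSQL function (both available as of 8.2).

-- plain SQL: get generated values straight back from the INSERT
INSERT INTO accounts (name) VALUES ('alice') RETURNING account_id;

-- PL/pgSQL: capture the generated value into a variable
CREATE OR REPLACE FUNCTION add_account(p_name text) RETURNS integer AS $$
DECLARE
    new_id integer;
BEGIN
    INSERT INTO accounts (name) VALUES (p_name)
        RETURNING account_id INTO new_id;
    RETURN new_id;
END;
$$ LANGUAGE plpgsql;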
Re: [GENERAL] Windows Auto-Vacuum in 8.2.4 or 8.2.5
--- Alvaro Herrera <[EMAIL PROTECTED]> wrote: > Hmm, maybe it was, but then I cannot recall what bug I was referring to > :-( I can't find a relevant CVS log message either. Okay. FWIW, I am still seeing the same behavior in Version 8.2.4. But I am about to upgrade to 8.2.5, and I will check to see if I notice a difference. Of course, maybe auto-vacuum does work but I have a configuration error somewhere. Thanks for the consideration! Regards, Richard Broersma Jr. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] "not in" clause too slow?
On 9/21/07, Ottavio Campana <[EMAIL PROTECTED]> wrote: > My problem is that if I run the same command on another > table with 378415 rows, it is terribly slow. How much is "terribly slow"? Did you VACUUM ANALYZE? Anyways, try this: SELECT * FROM MYTABLE T1 LEFT JOIN COPY_MYTABLE T2 ON T1.ID = T2.ID WHERE T2.ID IS NULL ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] autovacuum
On 9/21/07, Magnus Hagander <[EMAIL PROTECTED]> wrote: > On Thu, Sep 20, 2007 at 04:33:25PM -0500, Scott Marlowe wrote: > > On 9/20/07, Robert Fitzpatrick <[EMAIL PROTECTED]> wrote: > > > On Thu, 2007-09-20 at 16:38 -0400, Bill Moran wrote: > > > > In response to Robert Fitzpatrick <[EMAIL PROTECTED]>: > > > > Why does everyone leave of the IO subsystem? It's almost as if many > > > > people don't realize that disks exist ... > > > > > > > > With 2G of RAM, and a DB that's about 3G, then there's at least a G of > > > > database data _not_ in memory at any time. As a result, disk speed is > > > > important, and _could_ be part of your problem. You're not using RAID > > > > 5 are you? > > > > > > Yes, using RAID 5, not good? RAID 5 with hot fix total of 4 drives. All > > > SATA 80GB drives giving me little under 300GB to work with. > > > > RAID5 optimizes for space, not performance or reliability. It gets > > faster but less reliable as it gets bigger. If you can afford the > > space RAID-10 is generally preferred. > > > > Note however that it is far more important for most general purpose > > servers to have a RAID controller that is both fast by nature (i.e. > > not $50.00) and has battery backed cache with write thru turned on. > > Surely you mean with write thru turned *off*... Or write-back turned on. > But write thru turned on will make your battery unnecessary... Yeah, I meant write back turned on... ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] "not in" clause too slow?
Ottavio Campana <[EMAIL PROTECTED]> wrote: > mytable.id is the primary key. > > I create copy_mytable with > create table copy_mytable as (select * from mytable); > > the planner behavior does not change no matter if I create an index > on copy_mytable.id or not. Try this, with an index on copy_mytable.id: SELECT orig.* FROM mytable AS orig WHERE NOT EXISTS ( SELECT 1 FROM copy_mytable AS copy WHERE copy.id = orig.id ); Regards, Denes Daniel ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Windows Auto-Vacuum in 8.2.4 or 8.2.5
--- Alvaro Herrera <[EMAIL PROTECTED]> wrote: > Maybe you should restate your problem so we can try to discover the > cause. Okay. I will disable the hourly manual vacuum/analyze script that I implemented as a work-around to this problem to see if auto-vacuum is ever triggered. But to start off with, here are the settings that I currently have in my postgresql.conf for auto-vacuum. I will post back with any results, whether I see auto-vacuum working in 8.2.5 or not. Thanks! Regards, Richard Broersma Jr.

#---------------------------------------------------------------------------
# RUNTIME STATISTICS
#---------------------------------------------------------------------------
# - Query/Index Statistics Collector -
stats_start_collector = on               # needed for block or row stats
                                         # (change requires restart)
stats_row_level = on

#---------------------------------------------------------------------------
# AUTOVACUUM PARAMETERS
#---------------------------------------------------------------------------
autovacuum = on                          # enable autovacuum subprocess?
                                         # 'on' requires stats_start_collector
                                         # and stats_row_level to also be on
autovacuum_naptime = 1min                # time between autovacuum runs
autovacuum_vacuum_threshold = 100        # min # of tuple updates before vacuum
autovacuum_analyze_threshold = 25        # min # of tuple updates before analyze
autovacuum_vacuum_scale_factor = 0.002   # fraction of rel size before vacuum
autovacuum_analyze_scale_factor = 0.001  # fraction of rel size before analyze
autovacuum_freeze_max_age = 2            # maximum XID age before forced vacuum
                                         # (change requires restart)
autovacuum_vacuum_cost_delay = -1        # default vacuum cost delay for autovacuum,
                                         # -1 means use vacuum_cost_delay
autovacuum_vacuum_cost_limit = -1        # default vacuum cost limit for autovacuum,
                                         # -1 means use vacuum_cost_limit

---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Using RETURNING with INTO inside pgsql
Woah, when did that come around? Talk about sweet syntactic sugar On Sep 20, 2007, at 10:12 PM, D. Dante Lorenso wrote: I'm loving the new RETURNING clause in PostgreSQL. This is really cool stuff ... ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Windows Auto-Vacuum in 8.2.4 or 8.2.5
Richard Broersma Jr wrote: > --- Alvaro Herrera <[EMAIL PROTECTED]> wrote: > > > Hmm, maybe it was, but then I cannot recall what bug I was referring to > > :-( I can't find a relevant CVS log message either. > > Okay. FWIW, I am still seeing the same behavior in Version 8.2.4. But > I am about to upgrade to 8.2.5, and I will check to see if I notice a > difference. > > Ofcourse, maybe auto-vacuum does work but I have a configuration error some > where. Maybe you should restate your problem so we can try to discover the cause. -- Alvaro Herrera Developer, http://www.PostgreSQL.org/ "Digital and video cameras have this adjustment and film cameras don't for the same reason dogs and cats lick themselves: because they can." (Ken Rockwell) ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] Using RETURNING with INTO inside pgsql
You blinked again, didn't you? :) On 9/21/07, Ben <[EMAIL PROTECTED]> wrote: > Woah, when did that come around? Talk about sweet syntactic sugar > > On Sep 20, 2007, at 10:12 PM, D. Dante Lorenso wrote: > > > I'm loving the new RETURNING clause in PostgreSQL. This is really > > cool stuff ... > > > ---(end of broadcast)--- > TIP 9: In versions below 8.0, the planner will ignore your desire to >choose an index scan if your joining column's datatypes do not >match > ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Windows Auto-Vacuum in 8.2.4 or 8.2.5
--- Richard Broersma Jr <[EMAIL PROTECTED]> wrote: > I will disable the hourly manual vacuum/analyze script that I implemented as > a work-around to > this problem to see if auto-vacuum is ever triggered. It appears to be working fine in 8.2.5:

proj02u20411=> begin transaction;
BEGIN
proj02u20411=> update document set docdisc = docdisc;
UPDATE 7927
proj02u20411=> update document set docnum = docnum;
UPDATE 7927
proj02u20411=> rollback;
ROLLBACK
proj02u20411=> explain analyze select count(*) from docs.document;
                                                    QUERY PLAN
------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=641.58..641.59 rows=1 width=0) (actual time=25.340..25.341 rows=1 loops=1)
   ->  Seq Scan on document  (cost=0.00..587.26 rows=21726 width=0) (actual time=0.025..14.578 rows=7927 loops=1)
 Total runtime: 25.508 ms
(3 rows)

proj02u20411=> \x
Expanded display is on.
proj02u20411=> SELECT *
proj02u20411->   FROM Pg_stat_all_tables
proj02u20411->  WHERE schemaname = 'docs'
proj02u20411->    AND relname = 'document'
proj02u20411->  ORDER BY schemaname, relname;
-[ RECORD 1 ]----+---------------------------
relid            | 16867
schemaname       | docs
relname          | document
seq_scan         | 4769
seq_tup_read     | 34099275
idx_scan         | 452899
idx_tup_fetch    | 1679315
n_tup_ins        | 59
n_tup_upd        | 15868
n_tup_del        | 52
last_vacuum      | 2007-09-21 07:00:03.646-07
last_autovacuum  | 2007-09-21 07:57:49.496-07
last_analyze     | 2007-09-21 07:00:03.646-07
last_autoanalyze | 2007-09-21 07:57:49.496-07

proj02u20411=> \x
Expanded display is off.
proj02u20411=> explain analyze select count(*) from docs.document;
                                                    QUERY PLAN
------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=234.09..234.10 rows=1 width=0) (actual time=22.014..22.016 rows=1 loops=1)
   ->  Seq Scan on document  (cost=0.00..214.27 rows=7927 width=0) (actual time=0.025..11.490 rows=7927 loops=1)
 Total runtime: 22.140 ms
(3 rows)

proj02u20411=>

---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] 5 minutes to pg_dump nothing
Hi all, I'm seeing a problem where pg_dump takes at least 5 minutes to execute no matter what I ask it to dump -- even a non-existent or empty table. One possible red flag is that pg_type contains 56508 rows. This doesn't seem excessive to me, but perhaps it should. I've looked through the source code and a system trace and I can see where the execution time disappears, but I can't pinpoint the source of the problem. Some background: the database contains about 1200 schemas with 30+ tables and 20+ custom types in each. The whole database is only about 12G. An empty table created in another database on the same machine pg_dumps in < .1sec, as expected. PG version is 8.1.4, OS is FreeBSD. When I run pg_dump in verbose mode, it stalls after printing "reading user-defined types". In the systrace, I can see the pg_types query at the top of getTypes() in pgdump.c followed by 628 reads of 4k blocks (= 2512k) which only takes a second or so in total. I guess this is the loop through the tuples in getTypes(). There's then four calls to getDomainConstraints() followed by a 129-second delay during which nothing appears in the system trace. After that, there's 124 pairs of these: 72263 pg_dump 135.956209 CALL break(0x9b37000) 72263 pg_dump 135.956222 RET break 0 They consume another 118 seconds, and then pg_dump moves on to "reading procedural languages". I've VACUUM FULL ANALYZEd everything I can think of. I'd really appreciate any further hints or help. PS - Thanks for Postgres; it works like a dream for me except for infrequent bumps. =) -- Philip http://NikitaTheSpider.com/ Whole-site HTML validation, link checking and more ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] foreign key on views
"Josh Harrison" <[EMAIL PROTECTED]> writes: > I just wanted to know where is this error message thrown in the source > code.? "\set VERBOSITY verbose" would help... regression=# create view v as select * from int4_tbl; CREATE VIEW regression=# \set VERBOSITY verbose regression=# create table t (f1 int references v); ERROR: 42809: referenced relation "v" is not a table LOCATION: ATAddForeignKeyConstraint, tablecmds.c:3949 regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] 5 minutes to pg_dump nothing
"Nikita The Spider The Spider" <[EMAIL PROTECTED]> writes: > I'm seeing a problem where pg_dump takes at least 5 minutes to execute > no matter what I ask it to dump -- even a non-existent or empty table. > One possible red flag is that pg_type contains 56508 rows. This > doesn't seem excessive to me, but perhaps it should. That does seem like a lot --- what sort of types are they? Scalar, composite, what? It's fairly likely that no one has tried to optimize pg_dump for such a case. It'd be helpful if you could recompile pg_dump with profiling enabled (-pg compiler switch) and get a gprof profile to show where the time is going. regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] Migrating from 32 bit to 64 bit binaries
Just a quick question. Are there any issues or incompatibilities that I should be aware of if I want to build out 64 bit binaries to run on a db that's previously been run by 32 bit binaries? Erik Jones Software Developer | Emma® [EMAIL PROTECTED] 800.595.4401 or 615.292.5888 615.292.0777 (fax) Emma helps organizations everywhere communicate & market in style. Visit us online at http://www.myemma.com ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] 5 minutes to pg_dump nothing
On 9/21/07, Tom Lane <[EMAIL PROTECTED]> wrote: > "Nikita The Spider The Spider" <[EMAIL PROTECTED]> writes: > > I'm seeing a problem where pg_dump takes at least 5 minutes to execute > > no matter what I ask it to dump -- even a non-existent or empty table. > > One possible red flag is that pg_type contains 56508 rows. This > > doesn't seem excessive to me, but perhaps it should. > > That does seem like a lot --- what sort of types are they? Scalar, > composite, what? It's fairly likely that no one has tried to optimize > pg_dump for such a case. Aha, thanks. Didn't realize I was pushing the bounds of what was reasonable. Here's the type counts:

 typrelkind | the_count
------------+-----------
            |       114
 sequence   |     11496
 composite  |     12290
 ordinary   |     13844
 TOAST      |      9215
 view       |      9699
(6 rows)

> It'd be helpful if you could recompile pg_dump with profiling enabled > (-pg compiler switch) and get a gprof profile to show where the time > is going. Will do. I'm going to try to recreate the problem in my development environment where I have a bit more freedom to tinker. -- Philip http://NikitaTheSpider.com/ Whole-site HTML validation, link checking and more ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] Migrating from 32 bit to 64 bit binaries
Erik Jones wrote: > Just a quick question. Are there any issues or incompatibilities that I > should be aware of if I want to build out 64 bit binaries to run on a db > that's previously been run by 32 bit binaries? Its a dump reload. Joshua D. Drake -- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240 PostgreSQL solutions since 1997 http://www.commandprompt.com/ UNIQUE NOT NULL Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate PostgreSQL Replication: http://www.commandprompt.com/products/ ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Migrating from 32 bit to 64 bit binaries
On Sep 21, 2007, at 11:59 AM, Joshua D. Drake wrote: Erik Jones wrote: Just a quick question. Are there any issues or incompatibilities that I should be aware of if I want to build out 64 bit binaries to run on a db that's previously been run by 32 bit binaries? Its a dump reload. Ouch, that's what I thought but was hoping someone would have some magic formula and say it wasn't so. Erik Jones Software Developer | Emma® [EMAIL PROTECTED] 800.595.4401 or 615.292.5888 615.292.0777 (fax) Emma helps organizations everywhere communicate & market in style. Visit us online at http://www.myemma.com ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Migrating from 32 bit to 64 bit binaries
On Sep 21, 2007, at 12:02 PM, Alvaro Herrera wrote: Erik Jones wrote: Just a quick question. Are there any issues or incompatibilities that I should be aware of if I want to build out 64 bit binaries to run on a db that's previously been run by 32 bit binaries? Obviously you are already aware that you need to dump/reload, so I won't mention that. Which makes this reply empty. Hmmm. Yeah, I did, I was just hoping I was wrong somehow :) Erik Jones Software Developer | Emma® [EMAIL PROTECTED] 800.595.4401 or 615.292.5888 615.292.0777 (fax) Emma helps organizations everywhere communicate & market in style. Visit us online at http://www.myemma.com ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Migrating from 32 bit to 64 bit binaries
Erik Jones wrote: > Just a quick question. Are there any issues or incompatibilities that I > should be aware of if I want to build out 64 bit binaries to run on a db > that's previously been run by 32 bit binaries? Obviously you are already aware that you need to dump/reload, so I won't mention that. Which makes this reply empty. Hmmm. -- Alvaro Herrera Developer, http://www.PostgreSQL.org/ "Changing the world ... one keyboard at a time!" (www.DVzine.org) ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
[GENERAL] Return t/f on existence of a join
... Or something like that. :) Sorry for so many questions! I have another "how do I create this query?" question, if it's okay. I've got three tables; 'foo', 'bar' and 'baz'. In 'foo' I've got 'foo_id' which is a PK. I've also got a bunch of other info, but in essence this is the "parent" table that all others reference in some way. In 'bar' I've got 'bar_id' which is also a PK. I also have 'bar_foo_id' which is a FK pointing to 'foo' -> 'foo_id', to show what 'foo' row it (primarily) belongs to. Lastly, I've got a table called 'baz' which has 'baz_id'. In it, there are just two columns; - 'baz_foo_id' which is a FK pointing to 'foo' -> 'foo_id'. - 'baz_bar_id' which is a FK pointing to 'bar' -> 'bar_id'. This last table, 'baz' is used as a way for saying 'bar *also* belongs to a given 'foo' row, So now my question; I want to create a query that will allow me to say "show me all 'foo' rows and tell me if a specific 'baz_id' belongs to it". Normally, I would do this: SELECT foo_id FROM foo; (for each returned row) { # Where '$foo_id' is the current 'foo_id' and '$bar_id' is # the specific/static 'bar_id' we are checking. SELECT COUNT(*) FROM baz z, bar r WHERE baz_foo_id=$foo_id AND baz_bar_id=$bar_id; ( if count > 0 ) { # TRUE } else { # FALSE } } This is pretty inefficient, obviously. How could I create a query that returned a TRUE/FALSE column that checks if there is a 'baz' record for a specified 'bar_id' in all 'foo_id's in one query? I hope this isn't too muddy. I think part of my problem is I am having trouble even visualizing my question... Thanks as always! Madi ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
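In case it helps to see the structure concretely, here is roughly the schema described above as DDL (a sketch with guessed column types; the real tables surely have more columns):

CREATE TABLE foo (
    foo_id  serial PRIMARY KEY
    -- ... plus the other info mentioned above ...
);

CREATE TABLE bar (
    bar_id      serial PRIMARY KEY,
    bar_foo_id  integer REFERENCES foo (foo_id)
);

CREATE TABLE baz (
    baz_id      serial PRIMARY KEY,
    baz_foo_id  integer REFERENCES foo (foo_id),
    baz_bar_id  integer REFERENCES bar (bar_id)
);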
[GENERAL] SPI shared memory ?
Guys, Is there any exposed PG-specific API to utilize shared memory while working with SPI ? Thanks! Best regards, Alex Vinogradovs ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Return t/f on existence of a join
On Sep 21, 2007, at 2:17 PM, Madison Kelly wrote: ... Or something like that. :) Sorry for so many questions! I have another "how do I create this query?" question, if it's okay. I've got three tables; 'foo', 'bar' and 'baz'. In 'foo' I've got 'foo_id' which is a PK. I've also got a bunch of other info, but in essence this is the "parent" table that all others reference in some way. In 'bar' I've got 'bar_id' which is also a PK. I also have 'bar_foo_id' which is a FK pointing to 'foo' -> 'foo_id', to show what 'foo' row it (primarily) belongs to. Lastly, I've got a table called 'baz' which has 'baz_id'. In it, there are just two columns; - 'baz_foo_id' which is a FK pointing to 'foo' -> 'foo_id'. - 'baz_bar_id' which is a FK pointing to 'bar' -> 'bar_id'. This last table, 'baz' is used as a way for saying 'bar *also* belongs to a given 'foo' row, So now my question; I want to create a query that will allow me to say "show me all 'foo' rows and tell me if a specific 'baz_id' belongs to it". Normally, I would do this: SELECT foo_id FROM foo; (for each returned row) { # Where '$foo_id' is the current 'foo_id' and '$bar_id' is # the specific/static 'bar_id' we are checking. SELECT COUNT(*) FROM baz z, bar r WHERE baz_foo_id=$foo_id AND baz_bar_id=$bar_id; ( if count > 0 ) { # TRUE } else { # FALSE } } This is pretty inefficient, obviously. How could I create a query that returned a TRUE/FALSE column that checks if there is a 'baz' record for a specified 'bar_id' in all 'foo_id's in one query? I hope this isn't too muddy. I think part of my problem is I am having trouble even visualizing my question... Thanks as always! Madi *Not tested* If this isn't quite it, then it should at least get you close

SELECT foo.foo_id,
       CASE WHEN baz.baz_foo_id IS NULL THEN FALSE ELSE TRUE END AS has_baz
  FROM foo
  LEFT JOIN baz ON (foo.foo_id=baz.baz_foo_id AND baz.baz_bar_id=$bar_id)

Erik Jones Software Developer | Emma® [EMAIL PROTECTED] 800.595.4401 or 615.292.5888 615.292.0777 (fax) Emma helps organizations everywhere communicate & market in style. Visit us online at http://www.myemma.com ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
[GENERAL] Solved! Was (Return t/f on existence of a join)
Thanks to both of you, Erik and Jon! I had to tweak your two replies to get what I wanted (all 'foo' rows returned, was only getting ones with a match in 'baz'). You two sent me on the right path though and I was able to work out the rest using the PgSQL docs on 'CASE' and 'JOIN'. Here is the working query (where 'bar_id'=2):

SELECT CASE z.baz_bar_id WHEN 2 THEN TRUE ELSE FALSE END AS tf_col, f.foo_id
  FROM foo f
  LEFT OUTER JOIN baz z ON (f.foo_id=z.baz_foo_id)
  LEFT OUTER JOIN bar b ON (b.bar_id=z.baz_bar_id) AND b.bar_id=2;

Thanks kindly to both! I honestly didn't expect to work this out before the end of the day. Cool! Madi ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Return t/f on existence of a join
Madison Kelly wrote: > I want to create a query that will allow me to say "show me all 'foo' > rows and tell me if a specific 'baz_id' belongs to it". Normally, I > would do this: > > SELECT foo_id FROM foo; > (for each returned row) > { > # Where '$foo_id' is the current 'foo_id' and '$bar_id' is > # the specific/static 'bar_id' we are checking. > SELECT COUNT(*) FROM baz z, bar r WHERE baz_foo_id=$foo_id AND > baz_bar_id=$bar_id; > ( if count > 0 ) { # TRUE } else { # FALSE } > } > > This is pretty inefficient, obviously. How could I create a query that > returned a TRUE/FALSE column that checks if there is a 'baz' record for > a specified 'bar_id' in all 'foo_id's in one query? What you want is a "left outer join" to do all of this in a single query.

select f.foo_id,
       case when count(r.bar_id) > 0 then true else false end as tf_col
  from foo f
  join baz z on (z.baz_foo_id = f.foo_id)
  left join bar r on (r.bar_id = z.baz_bar_id)
 group by f.foo_id;

-Jon -- Senior Systems Developer Media Matters for America http://mediamatters.org/ ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] Wait, not solved... Was (Return t/f on existence of a join)
Madison Kelly wrote: Thanks to both of you, Erik and Jon! I had to tweak your two replies to get what I wanted (all 'foo' rows returned, was only getting ones with a match in 'baz'). You two sent me on the right path though and I was able to work out the rest using the PgSQL docs on 'CASE' and 'JOIN'. Here is the working query (where 'bar_id'=2): SELECT CASE z.baz_bar_id WHEN 2 THEN TRUE ELSE FALSE END AS tf_col, f.foo_id FROM foo f LEFT OUTER JOIN baz z ON (f.foo_id=z.baz_foo_id) LEFT OUTER JOIN bar b ON (b.bar_id=z.baz_bar_id) AND b.bar_id=2; Thanks kindly to both! I honestly didn't expect to work this out before then end of the day. Cool! Madi Doh! It's returning a row from 'foo' for every entry in baz that has an entry pointing to foo (possibly same problem with each pointer to an entry in bar, not sure yet). The 'true/false' part is working though... Back to reading. *sigh* :) Madi ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[GENERAL] Actually Solved! Was: (Return t/f on existence of a join)
Madison Kelly wrote: It's returning a row from 'foo' for every entry in baz that has an entry pointing to foo (possibly same problem with each pointer to an entry in bar, not sure yet). The 'true/false' part is working though... Back to reading. *sigh* :) Madi I'm sorry for all the line noise, but I like to post my solutions "for the record". Reading up a bit more on the JOIN types, I was finally able to get all rows in 'foo' returned just once with a t/f depending on whether a given baz_bar_id exists. The query is: (Where '2' is the 'baz_bar_id' I am checking on)

SELECT f.foo_id, f.foo_name,
       CASE z.baz_bar_id WHEN 2 THEN TRUE ELSE FALSE END
  FROM foo f
  LEFT JOIN baz z ON f.foo_id=z.baz_foo_id AND z.baz_bar_id=2;

Thanks again, both of you! Madi ---(end of broadcast)--- TIP 6: explain analyze is your friend
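Just as another option, not taken from the thread itself: the same true/false column can be produced with EXISTS, which sidesteps the outer join entirely and makes the intent explicit.

SELECT f.foo_id,
       f.foo_name,
       EXISTS (SELECT 1
                 FROM baz z
                WHERE z.baz_foo_id = f.foo_id
                  AND z.baz_bar_id = 2) AS has_bar
  FROM foo f;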
[GENERAL] Unable to start PostgreSQL Server
PostgreSQL Version: v1.8.2 System: Windows 2000 Server I am having a problem starting the PostgreSQL server. I get the following Log errors: LOG: unrecognized win32 error code: 487 FATAL: could not reattach to shared memory (key=5432001, addr=015E): Invalid argument What do I need to do to repair this? Thanks! Dan ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Unable to start PostgreSQL Server
Daniel B. Thurman wrote: > > PostgreSQL Version: v1.8.2 > System: Windows 2000 Server > > I am having a problem starting the PostgreSQL server. I > get the following Log errors: > > LOG: unrecognized win32 error code: 487 > FATAL: could not reattach to shared memory (key=5432001, addr=015E): > Invalid argument Known bug, unknown cause or solution as of yet. Please see http://search.postgresql.org/search?q=2881&m=1&l=8&d=365&s=r I think you need to restart the service. -- Alvaro Herrera Valdivia, Chile ICBM: S 39º 49' 18.1", W 73º 13' 56.4" "Cómo ponemos nuestros dedos en la arcilla del otro. Eso es la amistad; jugar al alfarero y ver qué formas se pueden sacar del otro" (C. Halloway en La Feria de las Tinieblas, R. Bradbury) ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Performance Issues
Christian Schröder wrote: > I think it is my job as db admin to make the database work the way my users > need it, and not the user's job to find a solution that fits the database's > needs ... > > Is there really nothing that I can do? You can improve the selectivity estimator function. One idea is that if you are storing something that's not really a general character string, develop a specific datatype, with a more precise selectivity estimator. If you are you up to coding in C, that is. -- Alvaro Herrerahttp://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Restore 8.1.4 backup in 8.2.5
>"Andrew J. Kopciuch" <[EMAIL PROTECTED]> wrote on 09/19/2007 08:31:08 PM: > Or just run the dump through psql. > psql -U postgres -d template1 -f /tmp/$(date +%F)owl.sql Thank you this worked. *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** Margaret Gillon, IS Dept., Chromalloy Los Angeles, ext. 297
Re: [GENERAL] Uninstall mess Postgresql 7.3.4, 8.1.4 on Redhat 9
>Andrew Sullivan <[EMAIL PROTECTED]> wrote on 09/19/2007 10:12:26 AM: > It wouldn't, but your old data still need to be dumped and restored; > and without a running 8.1, that won't help you. Unless you mean that > you'd install 8.2.x and load from a backup. > Thank you for your help. I wasn't able to repair the 8.1.4 install but I moved it out of the way before installing 8.2.5 so it is still intact. I was able to push my database into the newer version from the sql backup file. I am hoping to fix the 8.1.4 version next week. I now have breathing space since the database is back on-line in the 8.2.5 version, my clients can attach, and my backup is functional. Margaret Gillon
Re: [GENERAL] Unable to start PostgreSQL Server
>> >> PostgreSQL Version: v1.8.2 >> System: Windows 2000 Server >> >> I am having a problem starting the PostgreSQL server. I >> get the following Log errors: >> >> LOG: unrecognized win32 error code: 487 >> FATAL: could not reattach to shared memory (key=5432001, >addr=015E): Invalid argument > >Known bug, unknown cause or solution as of yet. Please see > >http://search.postgresql.org/search?q=2881&m=1&l=8&d=365&s=r > >I think you need to restart the service. > I have restarted the server as suggested and the problem went away. Guess it is intermittent... Thanks! Dan ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] SPI shared memory ?
On Fri, 2007-09-21 at 12:50 -0700, Alex Vinogradovs wrote: > Guys, > > > Is there any exposed PG-specific API to utilize shared memory while > working with SPI ? Thanks! > What are you trying to do? The available SPI functions don't include any shared memory access, do you actually want access to postgresql's shared structures, or do you just want your own area? Regards, Jeff Davis ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] SPI shared memory ?
Yes, I'd like to get some memory of my own, so that it can be shared between functions executing in different connections. Is there an existent API, or should I just straight use Unix IPC ? Thanks! Alex. On Fri, 2007-09-21 at 16:33 -0700, Jeff Davis wrote: > On Fri, 2007-09-21 at 12:50 -0700, Alex Vinogradovs wrote: > > Guys, > > > > > > Is there any exposed PG-specific API to utilize shared memory while > > working with SPI ? Thanks! > > > > What are you trying to do? > > The available SPI functions don't include any shared memory access, do > you actually want access to postgresql's shared structures, or do you > just want your own area? > > Regards, > Jeff Davis > ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Migration from PervasiveSQL
On Thursday 20 September 2007 18:38, Merlin Moncure wrote: > On 9/20/07, Collin <[EMAIL PROTECTED]> wrote: > > Well, the subject says it pretty well but to elaborate: > > > > I have a database from our ERP package that uses btrieve (PervasiveSQL) > > for it's database engine. I'd like to transition all of the data to > > PostgreSQL. I've been having trouble finding a suitable program to > > automatically get all of the data transferred over. > > > > I have the proper DDF files and an ODBC link in place to the data. > > Maestro DataDump (from the Postgresql Data Wizard program) only locks my > > machine up and it seemed to be the only think I could find that would > > take an ODBC link to the btrieve data and use it to extract the table > > defs and data to postgresql. Is there some other utility I could use or > > am I stuck writing a custom program to do it? I could maybe extract the > > btrieve data to CSV files but I don't have any easy way of doing that > > quickly for so many tables (and there are a lot!) > > dump to csv would be ideal (you could then use the built in csv import > through pg via COPY). > > barring that you need to roll some scripts or use a tool. There are > many ways to do this of varying quality, mostly pretty dubious. In > the microsoft world you have DTS which has an ok migrator. > Perhaps odbclink could be of help here? http://odbclink.projects.postgresql.org/ -- Robert Treat Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Migration from PervasiveSQL
On 9/21/07, Robert Treat <[EMAIL PROTECTED]> wrote: > On Thursday 20 September 2007 18:38, Merlin Moncure wrote: > > On 9/20/07, Collin <[EMAIL PROTECTED]> wrote: > > > Well, the subject says it pretty well but to elaborate: > > > > > > I have a database from our ERP package that uses btrieve (PervasiveSQL) > > > for it's database engine. I'd like to transition all of the data to > > > PostgreSQL. I've been having trouble finding a suitable program to > > > automatically get all of the data transferred over. > > > > > > I have the proper DDF files and an ODBC link in place to the data. > > > Maestro DataDump (from the Postgresql Data Wizard program) only locks my > > > machine up and it seemed to be the only think I could find that would > > > take an ODBC link to the btrieve data and use it to extract the table > > > defs and data to postgresql. Is there some other utility I could use or > > > am I stuck writing a custom program to do it? I could maybe extract the > > > btrieve data to CSV files but I don't have any easy way of doing that > > > quickly for so many tables (and there are a lot!) > > > > dump to csv would be ideal (you could then use the built in csv import > > through pg via COPY). > > > > barring that you need to roll some scripts or use a tool. There are > > many ways to do this of varying quality, mostly pretty dubious. In > > the microsoft world you have DTS which has an ok migrator. > > > > Perhaps odbclink could be of help here? > http://odbclink.projects.postgresql.org/ Development Status: 1 - Planning :-) merlin ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
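If the Btrieve tables can be exported to CSV as suggested above, the PostgreSQL side of the load is just COPY; a sketch with placeholder table and file names (the file has to be readable by the server process, and the column order must match the CSV):

COPY erp_customers
  FROM '/tmp/erp_customers.csv'
  WITH CSV HEADER;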
Re: [GENERAL] SPI shared memory ?
Alex Vinogradovs <[EMAIL PROTECTED]> writes: > Yes, I'd like to get some memory of my own, so that it can > be shared between functions executing in different connections. > Is there an existent API, or should I just straight use Unix IPC ? As of (I think) 8.2, there's RequestAddinShmemSpace and RequestAddinLWLocks. However, the fly in the ointment is that you can only usefully call these from a loadable module that is preloaded at postmaster start; otherwise it is too late to affect the sizing of shared memory. There's enough slop in the shared memory allocation that you can probably grab a few dozen or hundred bytes without a pre-allocation. One possibility is to do that and use it to store a reference to a separate shared memory segment. regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Performance Issues
Alvaro Herrera wrote: Christian Schröder wrote: I think it is my job as db admin to make the database work the way my users need it, and not the user's job to find a solution that fits the database's needs ... Is there really nothing that I can do? You can improve the selectivity estimator function. One idea is that if you are storing something that's not really a general character string, develop a specific datatype, with a more precise selectivity estimator. If you are you up to coding in C, that is. Hm, that sounds interesting! I will definitely give it a try. Will that also solve the problem of combining more than one of these conditions? As far as I can see, the main issue at the moment is that we often have "... where test like '11%' and test not like '113%'" in our queries. Even if the selectivity estimation of the single condition will be improved, it will still be wrong to multiply the selectivities. I think I will have a look at the src/backend/optimizer/util/plancat.c, src/backend/optimizer/path/clausesel.c and src/backend/utils/adt/selfuncs.c files after my holiday. Kind regards, Christian -- Deriva GmbH Tel.: +49 551 489500-42 Financial IT and Consulting Fax: +49 551 489500-91 Hans-Böckler-Straße 2 http://www.deriva.de D-37079 Göttingen Deriva CA Certificate: http://www.deriva.de/deriva-ca.cer ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster