[GENERAL] how can I create the DML for an existing database - within an application
I want to get the output of `pg_dump --schema-only --table=whatever` inside a program. Of course I could call pg_dump in a separate process and capture the output and all; but as pgAdmin is doing it somehow, I strongly suspect there is some kind of call that recreates the DDL for a table from the database. Please give me a hint!

--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
PostgreSQL - supported by a community that does not put you on hold
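There is no single server-side function that emits a complete CREATE TABLE statement; pgAdmin assembles the DDL itself from the system catalogs. A minimal sketch of that approach (`whatever` stands in for the real table name) starts from information_schema:

```sql
-- Reconstruct the column list for one table from the catalogs.
-- This is only a sketch: defaults, constraints, and indexes need
-- further queries (e.g. pg_get_indexdef, pg_get_constraintdef).
SELECT column_name,
       data_type,
       character_maximum_length,
       is_nullable
FROM information_schema.columns
WHERE table_name = 'whatever'
ORDER BY ordinal_position;
```

An application can walk these rows and print a CREATE TABLE statement, which is essentially what the GUI tools do.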
Re: [GENERAL] Truncate and Foreign Key Constraint question
Doh! A test schema that was a left over. Thanks for the sanity check ... as usual, pilot error! g

-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED]]
Sent: Wed 4/12/2006 10:02 PM
To: Gregory S. Williamson
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Truncate and Foreign Key Constraint question

"Gregory S. Williamson" <[EMAIL PROTECTED]> writes:
> As the sequence below shows, I dropped the FK constraint successfully, but
> when I run TRUNCATE collections_l it says:
> ERROR: cannot truncate a table referenced in a foreign key constraint
> DETAIL: Table "client_collect_rates" references "collections_l" via foreign
> key constraint "$2".

Hm, works for me. Is it possible that you've got multiple client_collect_rates tables in different schemas, and it's complaining about some other one? The error message doesn't show the schema of the table ...

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faq
Re: [GENERAL] Select first ten of each category?
Michael Glaesemann wrote:
> Without using UNION (which would require writing a select statement for each category), how would LIMIT allow him to do this for each category in a single query?

You're right, it would need a UNION, and a SELECT per category. So there'd be another SELECT to find all the categories beforehand.
Re: [GENERAL] Truncate and Foreign Key Constraint question
"Gregory S. Williamson" <[EMAIL PROTECTED]> writes:
> As the sequence below shows, I dropped the FK constraint successfully, but
> when I run TRUNCATE collections_l it says:
> ERROR: cannot truncate a table referenced in a foreign key constraint
> DETAIL: Table "client_collect_rates" references "collections_l" via foreign
> key constraint "$2".

Hm, works for me. Is it possible that you've got multiple client_collect_rates tables in different schemas, and it's complaining about some other one? The error message doesn't show the schema of the table ...

regards, tom lane
[GENERAL] Truncate and Foreign Key Constraint question
This is in postgres 8.1: PostgreSQL 8.1.1 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 3.4.2 20041017 (Red Hat 3.4.2-6.fc3)

I've got a table in one schema (work.client_collect_rates) which has an FK constraint with a table, content.collections_l (definitions shown below). There are about 500 entries currently in my collections_l table. I need to wipe out the contents of the collections_l table nightly and refresh it from a remote master source. (Don't ask ... long & sordid history)

As the sequence below shows, I dropped the FK constraint successfully, but when I run TRUNCATE collections_l it says:

ERROR: cannot truncate a table referenced in a foreign key constraint
DETAIL: Table "client_collect_rates" references "collections_l" via foreign key constraint "$2".
HINT: Truncate table "client_collect_rates" at the same time.

Truncating this second table is *not* an option, but since the constraint "$2" is clearly gone, I am wondering what in the name of sweet apples is going on? Is this a bug? Have I developed premature senility? Any clues for the clueless would be gratefully accepted!
TIA,

Greg Williamson
DBA
GlobeXplorer LLC

billing=# \d work.client_collect_rates
Table "work.client_collect_rates"
    Column     |  Type   | Modifiers
---------------+---------+-----------
 contract_id   | integer | not null
 collection_id | integer | not null
 rate          | numeric |
 break_1       | numeric |
 rate_1        | numeric |
 break_2       | numeric |
 rate_2        | numeric |
 break_3       | numeric |
 rate_3        | numeric |
 break_4       | numeric |
 rate_4        | numeric |
Indexes:
    "clnt_colrate_ndx" UNIQUE, btree (contract_id, collection_id)
Foreign-key constraints:
    "$1" FOREIGN KEY (contract_id) REFERENCES client_contracts(contract_id)
    "$2" FOREIGN KEY (collection_id) REFERENCES content.collections_l(collect_id)

billing=# \d content.collections_l
Table "content.collections_l"
        Column        |          Type          |      Modifiers
----------------------+------------------------+--------------------
 collect_id           | integer                | not null
 owner                | integer                |
 collection_name      | character(50)          |
 begin_date           | date                   |
 end_date             | date                   |
 pos_accuracy         | integer                |
 res_accuracy         | integer                |
 loc_code             | character(30)          |
 color                | integer                | default 0
 category_id          | integer                |
 is_mosaic            | integer                | not null default 0
 detail_metadata_view | character varying(255) |
 jdbc_url             | character varying(255) |
 jdbc_driver          | character varying(255) |
Indexes:
    "collections_l_pkey" PRIMARY KEY, btree (collect_id)
    "collect_own_ndx" btree ("owner", collect_id)

billing=# alter table work.client_collect_rates drop constraint "$2";
ALTER TABLE
billing=# \d work.client_collect_rates
Table "work.client_collect_rates"
    Column     |  Type   | Modifiers
---------------+---------+-----------
 contract_id   | integer | not null
 collection_id | integer | not null
 rate          | numeric |
 break_1       | numeric |
 rate_1        | numeric |
 break_2       | numeric |
 rate_2        | numeric |
 break_3       | numeric |
 rate_3        | numeric |
 break_4       | numeric |
 rate_4        | numeric |
Indexes:
    "clnt_colrate_ndx" UNIQUE, btree (contract_id, collection_id)
Foreign-key constraints:
    "$1" FOREIGN KEY (contract_id) REFERENCES client_contracts(contract_id)

(Note that the "$2" FK is gone...)
billing=# truncate content.collections_l;
ERROR: cannot truncate a table referenced in a foreign key constraint
DETAIL: Table "client_collect_rates" references "collections_l" via foreign key constraint "$2".
HINT: Truncate table "client_collect_rates" at the same time.
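In situations like this, the referencing foreign keys can be listed straight from the system catalogs, which (unlike the error message) shows the schema of each referencing table. A hedged sketch against the 8.1 catalogs:

```sql
-- List every foreign key that references content.collections_l,
-- including the schema of the referencing table.
SELECT con.conname AS constraint_name,
       nsp.nspname AS referencing_schema,
       rel.relname AS referencing_table
FROM pg_constraint con
JOIN pg_class rel ON rel.oid = con.conrelid
JOIN pg_namespace nsp ON nsp.oid = rel.relnamespace
WHERE con.contype = 'f'
  AND con.confrelid = 'content.collections_l'::regclass;
```

If a second client_collect_rates in another schema is the culprit, it would show up here with its schema name.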
Re: [GENERAL] Select first ten of each category?
On Apr 13, 2006, at 11:08, Yanni Chiu wrote:

> Benjamin Smith wrote:
>> It has a LARGE number of entries. I'd like to grab the 10 most expensive items from each category in a single query. How can this be done?
>
> Use a LIMIT on your SELECT. See:
> http://www.postgresql.org/docs/8.1/static/queries-limit.html

Without using UNION (which would require writing a select statement for each category), how would LIMIT allow him to do this for each category in a single query?

Michael Glaesemann
grzm myrealbox com
Re: [GENERAL] Select first ten of each category?
On Apr 13, 2006, at 10:16, Benjamin Smith wrote:

> I'm stumped on this one... I have a table defined thusly:
>
> create table items (
>   id serial,
>   category integer not null references category(id),
>   name varchar not null,
>   price real,
>   unique(category, name));
>
> It has a LARGE number of entries. I'd like to grab the 10 most expensive items from each category in a single query. How can this be done?

You can use a correlated subquery, something like (untested):

select category, id, name, price
     , ( select count(*)
         from items i2
         where i2.category = i.category
           and i2.price > i.price ) + 1 as rank
from items i
order by category, rank desc;

Note that this query actually counts the number of items (in the category) with prices greater than the given item's price. And to limit it just to the top 10 items:

select category, id, name, price, rank
from ( select category, id, name, price
            , ( select count(*)
                from items i2
                where i2.category = i.category
                  and i2.price > i.price ) + 1 as rank
       from items i ) as ranked
where rank <= 10
order by category, rank desc;

Note that this may return more than 10 items per category in the case of more than one item in a given category having the same price.

Hope this points you in the right direction.

Michael Glaesemann
grzm myrealbox com
Re: [GENERAL] Select first ten of each category?
Benjamin Smith wrote:
> It has a LARGE number of entries. I'd like to grab the 10 most expensive items from each category in a single query. How can this be done?

Use a LIMIT on your SELECT. See:
http://www.postgresql.org/docs/8.1/static/queries-limit.html
[GENERAL] Select first ten of each category?
I'm stumped on this one... I have a table defined thusly:

create table items (
  id serial,
  category integer not null references category(id),
  name varchar not null,
  price real,
  unique(category, name));

It has a LARGE number of entries. I'd like to grab the 10 most expensive items from each category in a single query. How can this be done? Something like

Select items.* FROM items
where id IN (
  select firstTen(id) FROM items
  group by category
  ORDER BY price DESC )
ORDER BY price desc;

But I've not found any incantation to make this idea work...

-Ben

--
"The best way to predict the future is to invent it." - XEROX PARC slogan, circa 1978
Re: [GENERAL] Leverage your PostgreSQL V8.1 skills to learn DB2
On 4/12/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> Actually, after reading that page, I don't think DB2 came off as being
> much better than PostgreSQL.
>
> The postgres database is new to 8.1, I believe. I think it was as much

I think the article should have been titled: 'why you don't really need db2 and should stick with postgresql'.

merlin
Re: [GENERAL] How to import a CSV file (originally from Excel)
On Wed, 2006-04-12 at 17:57 +0200, Magnus Hagander wrote:
>>> \copy "Flight Traffic" from yourfile.csv delimiter as ',' csv quote as '"'
>>>
>>> (might need some adaption, of course)
>>>
>>> Loading 45,000 lines is trivial for copy, it shouldn't take noticable time at all.
>>
>> along these lines - can I do something similar (CSV file) but 'update' 2 or 3 columns?
>
> I'd use COPY to a temp table, then run a normal UPDATE on that.

is that a pg 8.x thing? I'm using 7.4.x (RHEL)

th-db_development=# \copy "clients_temp" from representatives.csv delimiter as ',' csv quote as '"'
ERROR: syntax error at or near "CSV" at character 53
LINE 1: ...PY "clients_temp" FROM STDIN USING DELIMITERS ',' CSV QUOTE ...
                                                          ^
\copy: ERROR: syntax error at or near "CSV" at character 53

Craig
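The CSV keyword was added to COPY in 8.0, so the error above is expected on 7.4. If the file can be re-exported or preprocessed with a delimiter that never appears in the data, the temp-table-then-UPDATE approach works on 7.4 as well. A sketch with hypothetical table and column names:

```sql
-- Load the file into a temporary table, then update selected columns
-- of the real table from it. Assumes the file has been converted to
-- use '|' as delimiter, since 7.4's COPY has no CSV mode.
CREATE TEMP TABLE clients_temp (client_id integer, phone text, email text);
\copy clients_temp from representatives.txt delimiter as '|'

UPDATE clients
SET phone = t.phone,
    email = t.email
FROM clients_temp t
WHERE clients.client_id = t.client_id;
```

UPDATE ... FROM is available in 7.4, so only the CSV quoting itself has to be handled outside the database.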
[GENERAL] Initdb problem installing 8.1.3-1 on winXP
I have been running 8.0 on this Windows XP machine for some time now without problem. I decided I would upgrade the machine to 8.1 before starting my next project. I tried running the upgrade.bat file that comes with 8.1, however it failed to run for some reason or another. Instead I uninstalled 8.0 and tried to do a fresh install of 8.1. The 8.1 install refuses to complete, giving me "Failed to run initdb: 1!" when it gets to the point of creating the database. The initdb.log file contains the following information:

===: initdb.log
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale C.

fixing permissions on existing directory C:/apps/PostgreSQL8.1/data ... ok
creating directory C:/apps/PostgreSQL8.1/data/global ... ok
creating directory C:/apps/PostgreSQL8.1/data/pg_xlog ... ok
creating directory C:/apps/PostgreSQL8.1/data/pg_xlog/archive_status ... ok
creating directory C:/apps/PostgreSQL8.1/data/pg_clog ... ok
creating directory C:/apps/PostgreSQL8.1/data/pg_subtrans ... ok
creating directory C:/apps/PostgreSQL8.1/data/pg_twophase ... ok
creating directory C:/apps/PostgreSQL8.1/data/pg_multixact/members ... ok
creating directory C:/apps/PostgreSQL8.1/data/pg_multixact/offsets ... ok
creating directory C:/apps/PostgreSQL8.1/data/base ... ok
creating directory C:/apps/PostgreSQL8.1/data/base/1 ... ok
creating directory C:/apps/PostgreSQL8.1/data/pg_tblspc ... ok
selecting default max_connections ... 10
selecting default shared_buffers ... 50
creating configuration files ... ok
creating template1 database in C:/apps/PostgreSQL8.1/data/base/1 ... child process was terminated by signal 1
initdb: removing contents of data directory "C:/apps/PostgreSQL8.1/data"
==

I have tried removing the postgres user and letting the installer create it, installing to different directories, etc., without any luck. Any pointers would be appreciated.
Ryan
[GENERAL] IA64 RPMs for 8.1.3
Hi,

I've just built 8.1.3 RPMs for IA64. The platform is Red Hat Enterprise Linux Advanced Server 4 for Itanium. The RPMs should be on the main FTP site very soon. I'll upload 8.0.7 binaries tomorrow.

Please let me know if you have problems with these binaries.

Regards,
--
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: PL/php, plPerlNG - http://www.commandprompt.com/
Re: [GENERAL] pg_restore --schema always returns an empty dump
Nick Johnson <[EMAIL PROTECTED]> writes:
> using "pg_restore db.dump", as expected, returns the entire dump.
> However, "pg_restore --schema foo db.dump", where 'foo' is the name of a
> schema present in the database dump always returns an empty dump.
> "pg_restore --schema foo --table bar" returns the table foo.bar, though.
> Does anyone know why this is?

Probably because it's broken :-(. I've applied a patch.

regards, tom lane
Re: [GENERAL] Performance UPDATE/INSERT
Turn off your indices on the table... do the inserts... do the updates... rebuild the indices.

"MG" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
> Hello,
>
> I have about 100 000 records, which need about 30 minutes to write them with single INSERTs into PostgreSQL.
>
> If I go through these 100 000 records and make an UPDATE on each record, it takes 1 hour.
>
> Can anyone tell me about their experience of the performance of INSERT and UPDATE.
>
> Greetings
> Michaela
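That advice, sketched as SQL (the index and table names are hypothetical), with the inserts additionally wrapped in a single transaction, which usually helps the most when loading 100,000 rows one INSERT at a time:

```sql
-- Drop indexes, bulk-load inside one transaction, rebuild.
DROP INDEX items_name_idx;

BEGIN;
INSERT INTO items (name, price) VALUES ('widget', 9.95);
-- ... the remaining ~100 000 inserts ...
COMMIT;

CREATE INDEX items_name_idx ON items (name);
```

Without the explicit BEGIN/COMMIT, each INSERT is its own transaction and pays a commit cost, which is typically where the 30 minutes go.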
Re: [GENERAL] "Hidden" field for each column
You can specify a "comment" on each field.

"Don Y" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
> Hi,
>
> Is there any way that I can consistently (across all
> tables) add a parameter defining "what" each column
> "is"? (sorry, crappy grammar and ill-formed question)
>
> I want to be able to embed in the database parameters
> that tell it how to interpret each column. In other
> words, while the TYPE for two columns might be
> text/char/varchar/etc., the data that each contained
> could have vastly different interpretations.
>
> For example, it might be a surname. Or, the name of
> a city. Or, the name of a handtool. Or a street address.
>
> I would like the support routines to be able to look at
> these parameters and adjust queries and other activities
> accordingly. So, when matching a surname, it might
> apply some heuristics built on metaphone. Yet, when
> matching a book's title, it may try some simple substitutions
> and permutations (e.g., stripping words like "The" from
> the beginning of the title; or relaxing rules for
> punctuation or capitalization).
>
> Burying this information in the database itself will let
> the applications be unconcerned with adding this level
> of support at their level -- and afford all applications
> a consistent set of capabilities.
>
> On a similar, but different, note, are there any ways
> to "hide" a value behind each datum? I.e. to treat
> each entry as a struct containing the actual data type
> PLUS this "hidden" value? Without explicitly creating
> ADT's for each potential datatype?
>
> (sorry, I realize both questions are off-the-wall and
> I can undoubtedly come up with kludges to implement them
> but I was hoping for something "slicker"... :>)
>
> Thanks!
> --don
Re: [GENERAL] Evaluating client processes vs stored procedures
During your process loop -- when / where are the updates committed? All at the end? How many rows (approx) are you updating?

FWIW: I think you will probably find that it is NOT the SQL update that is your bottleneck. I am inclined to speculate that the performance issue is related to the size of the loop, or to when / where you commit your updates...

"Tim Hart" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
> List,
>
> I have a client process that is causing some performance issues with my
> app. On my current platform, the standard automated profiling tools are
> not available. Consequently, I can't easily determine where the bottleneck
> is in the process.
>
> I have an inkling that rewriting the client process as a stored procedure
> (implemented in C) will give at least a modest speed boost.
>
> What I'm trying to evaluate is the risk proposition of my effort. I really
> need a big bang for the buck, and I'm willing to risk a week of effort if
> the general consensus is that it's a good bet. I estimate a week to
> re-implement as a stored procedure, vs about 3 to manually profile what I
> need. Will I find out in 3 weeks what I already suspect? If I do profile
> it and fine-tune the slow points, would the newly architected client STILL
> run faster as a stored proc?
>
> So - any sage words of wisdom from those who've been down this path? I
> realize that w/o specifics no one can say for sure. General feelings from
> those with experience are welcome.
>
> Process specifics:
>
> Currently, the client process is run once a night. It's run on a separate
> machine from the database. Values in one table are updated by evaluating
> user defined functions against values in another table. Here's the
> pseudo-code:
>
> For each formula_row in formula_table:
>     newValue = evaluate(formula_row.body)
>     update formula_row set value = newValue where sid = formula_row.sid
>
> The evaluate function is a typical lex/yacc parser.
> The formula body may
> refer to other formulas in the table, or to constant values in another
> table.
>
> If I do implement it as a stored proc, I'd likely add triggers so that the
> formulas would be re-evaluated once values in the constants table get
> updated.
[GENERAL] pg_restore --schema always returns an empty dump
If I create a complete database dump in custom or tar format, with a command like the following:

pg_dump -Fc dbname > db.dump

using "pg_restore db.dump", as expected, returns the entire dump. However, "pg_restore --schema foo db.dump", where 'foo' is the name of a schema present in the database dump, always returns an empty dump. "pg_restore --schema foo --table bar" returns the table foo.bar, though. Does anyone know why this is?

-Nick
Re: [GENERAL] "Hidden" field for each column
Don Y <[EMAIL PROTECTED]> writes:
> I want to be able to embed in the database parameters
> that tell it how to interpret each column. In other
> words, while the TYPE for two columns might be
> text/char/varchar/etc., the data that each contained
> could have vastly different interpretations.

If I were you, I'd probably do that by creating a bunch of domains and using the domains instead of plain text/etc as the column types. See CREATE DOMAIN.

You could also commandeer the COMMENT ON COLUMN facility if you'd rather have a text string associated with each column.

regards, tom lane
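A hedged sketch of both suggestions (the domain, table, and comment strings here are made up for illustration):

```sql
-- A domain behaves like its base type but carries its own name,
-- which client code can look up in the catalogs and use to pick
-- matching heuristics.
CREATE DOMAIN surname AS text;
CREATE DOMAIN city_name AS text;

CREATE TABLE people (
    id        serial PRIMARY KEY,
    last_name surname,
    home_city city_name
);

-- Alternatively (or additionally), attach a free-form tag:
COMMENT ON COLUMN people.last_name IS 'match-heuristic: metaphone';
```

An application can then read the column's domain name from information_schema.columns (the domain_name field), or the comment via obj_description, and adjust its queries accordingly.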
Re: [GENERAL] postmaster.pid
"Ian Harding" <[EMAIL PROTECTED]> writes:
> The docs state that postmaster.pid is "A lock file recording the
> current postmaster PID and shared memory segment ID (not present after
> postmaster shutdown)"
> I never looked until now, but I see the number 5432001 where the pid
> should be, and the real pid is in /tmp/.s.PGSQL.5432.lock, along with
> the path to data.

Not sure which PG version you are looking at, but in recent releases the PID is on the first line, the second line is the $PGDATA path, and the third has a couple of values associated with shared memory.

regards, tom lane
[GENERAL] Evaluating client processes vs stored procedures
List,

I have a client process that is causing some performance issues with my app. On my current platform, the standard automated profiling tools are not available. Consequently, I can't easily determine where the bottleneck is in the process.

I have an inkling that rewriting the client process as a stored procedure (implemented in C) will give at least a modest speed boost.

What I'm trying to evaluate is the risk proposition of my effort. I really need a big bang for the buck, and I'm willing to risk a week of effort if the general consensus is that it's a good bet. I estimate a week to re-implement as a stored procedure, vs about 3 to manually profile what I need. Will I find out in 3 weeks what I already suspect? If I do profile it and fine-tune the slow points, would the newly architected client STILL run faster as a stored proc?

So - any sage words of wisdom from those who've been down this path? I realize that w/o specifics no one can say for sure. General feelings from those with experience are welcome.

Process specifics:

Currently, the client process is run once a night. It's run on a separate machine from the database. Values in one table are updated by evaluating user defined functions against values in another table. Here's the pseudo-code:

For each formula_row in formula_table:
    newValue = evaluate(formula_row.body)
    update formula_row set value = newValue where sid = formula_row.sid

The evaluate function is a typical lex/yacc parser. The formula body may refer to other formulas in the table, or to constant values in another table.

If I do implement it as a stored proc, I'd likely add triggers so that the formulas would be re-evaluated once values in the constants table get updated.
Re: [GENERAL] Hardware related question: 3ware 9500S
On Wed, 2006-04-12 at 13:53, Ted Byers wrote:
> ----- Original Message -----
> From: "Scott Marlowe" <[EMAIL PROTECTED]>
>
>> There have been NUMEROUS discussions of RAID-5 versus RAID 1+0 in the
>> perform group in the last year or two. Short version:
>
> Interesting.

[snip]

> This question of data security is becoming of increasing importance to me
> professionally since I will soon have to advise the company I'm working with
> regarding how best to secure the data managed by the applications I'm
> developing for them. I will need overall guidelines to produce a design
> that makes it virtually impossible for them to lose even one field in one
> record. The data is both sensitive and vital. Fortunately, I have a few
> months before we need to commit to anything. Also, fortunately, with one
> exception, the applications rely on a data feed that comes in once a day
> after normal working hours, so I won't have to worry about writes to the DB
> other than what my script does to load the datafeed into the DB. All other
> access is read only. This should make it easier to produce a strategy to
> protect the data from any kind of technology failure (software or hardware).
> Cost is a factor, but reliability is much much more important!

When you say reliability, I'm not sure your definition is my definition. Is it that the database MUST be up during business hours, even if the updates that happen during the day can't go through? Or even if those updates get lost and have to be re-entered, that's OK, as long as the data entered by the batch file at night is available for business processes?

There are a lot of ways to set this up, and each tool in the tool box has its advantages and disadvantages: PITR, Slony replication, pgpool, pgcluster, Mammoth Replicator, Bizgres, Bizgres MPP...

I'd guess that the data entered during the day is the most important.
If this is so, you could set up Slony replication on those tables to a backup machine, so that should the primary suffer catastrophic failure you still have the inputs.

RAID is great, but it's no replacement for replication and / or point in time recovery. More an augment.
Re: [GENERAL] Hardware related question: 3ware 9500S
On 4/12/06, Ted Byers <[EMAIL PROTECTED]> wrote:
> ----- Original Message -----
> From: "Merlin Moncure" <[EMAIL PROTECTED]>
> To: "Janning Vygen" <[EMAIL PROTECTED]>
> Sent: Wednesday, April 12, 2006 12:31 PM
> Subject: Re: [GENERAL] Hardware related question: 3ware 9500S
[snip]
>>> - I want to know if 3ware 9500 S is recommended or if its one of those
>>> controllers which sucks.
>>
>> escalade is a fairly full featured raid controller for the price.
>> consider it the ford taurus of raid controllers, it's functional and
>> practical but not sexy. Their S line is not native sata but operates
>> over a pata->sata bridge. Stay away from raid 5.
>
> Why? What's wrong with raid 5? I could well be wrong (given how little
> attention I have paid to hardware over the past few years)

There are reasons to go with raid 5 or other raids. Where I work we often do 14 drive raid 6 plus 1 hot swap on a 15 drive tray. However, for 4 drive raid, I think 0+1 is by far the best choice. For three drives, I'd suggest two drive raid 1 plus hot swap.

> Do you have a recommendation regarding whether the raid array is built into
> the server running the RDBMS (in our case PostgreSQL), or located in a
> network appliance dedicated to storing the data managed by the RDBMS? If
> you were asked to design a subnet that provides the best possible
> performance and protection of the data, but without gold-plating anything,
> what would you do? How much redundancy would you build in, and at what
> granularity?

I would stay clear of cheaper NAS solutions (AoE, iscsi) unless you really didn't care about performance. In my experience the better SANs are a good way to go if you need flexibility or easy management (especially if you need to do things besides the database) without losing performance. A good SAN makes everything easy, but boy do you pay for it. If you want the most bang for the buck, I'd suggest either attached scsi or sata (especially the latter).
With sata, 24 raptors will get you insane performance for a very reasonable price. Most of my apps are cpu bound anyways.
Re: [GENERAL] Hardware related question: 3ware 9500S
Martijn van Oosterhout wrote:
> On Wed, Apr 12, 2006 at 02:53:01PM -0400, Ted Byers wrote:
>> I take it that "RAID 1+0" refers to a combination of RAID 1 and RAID 0.
>> What about RAID 10? I am curious because RAID 10 has come out since the
>> last time I took a look at RAID technology. I am not sure what it actually
>> does differently from RAID 5.
>
> AIUI, RAID 10 = RAID 1+0. Lame, I know. Similarly, some people have
> invented RAID 50 = RAID 5+0. Not sure if that's the official definition
> though, but that's the way I've seen it used.

Useful info on RAID definitions: http://tinyurl.com/zhnmc

--
Until later, Geoffrey

Any society that would give up a little liberty to gain a little security will deserve neither and lose both. - Benjamin Franklin
Re: [GENERAL] Hardware related question: 3ware 9500S
On Wed, Apr 12, 2006 at 02:53:01PM -0400, Ted Byers wrote:
> I take it that "RAID 1+0" refers to a combination of RAID 1 and RAID 0.
> What about RAID 10? I am curious because RAID 10 has come out since the
> last time I took a look at RAID technology. I am not sure what it actually
> does differently from RAID 5.

AIUI, RAID 10 = RAID 1+0. Lame, I know. Similarly, some people have invented RAID 50 = RAID 5+0. Not sure if that's the official definition though, but that's the way I've seen it used.

--
Martijn van Oosterhout http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.
Re: [GENERAL] Hardware related question: 3ware 9500S
----- Original Message -----
From: "Scott Marlowe" <[EMAIL PROTECTED]>
To: "Ted Byers" <[EMAIL PROTECTED]>
Cc: "Merlin Moncure" <[EMAIL PROTECTED]>; "Janning Vygen" <[EMAIL PROTECTED]>; "pgsql general"
Sent: Wednesday, April 12, 2006 2:24 PM
Subject: Re: [GENERAL] Hardware related question: 3ware 9500S

> On Wed, 2006-04-12 at 13:10, Ted Byers wrote:
>> Why? What's wrong with raid 5? I could well be wrong (given how little
>> attention I have paid to hardware over the past few years because of a
>> focus on developing software), but I was under the impression that of the
>> raid options available, raid 5 with hot swappable drives provided good
>> data protection and performance at a reasonably low cost. Is the problem
>> with the concept of raid 5, or the common implementations?
>>
>> Do you have a recommendation regarding whether the raid array is built into
>> the server running the RDBMS (in our case PostgreSQL), or located in a
>> network appliance dedicated to storing the data managed by the RDBMS? If
>> you were asked to design a subnet that provides the best possible
>> performance and protection of the data, but without gold-plating anything,
>> what would you do? How much redundancy would you build in, and at what
>> granularity?
>
> There have been NUMEROUS discussions of RAID-5 versus RAID 1+0 in the
> perform group in the last year or two. Short version:

Interesting. I take it that "RAID 1+0" refers to a combination of RAID 1 and RAID 0. What about RAID 10? I am curious because RAID 10 has come out since the last time I took a look at RAID technology. I am not sure what it actually does differently from RAID 5.
This question of data security is becoming of increasing importance to me professionally, since I will soon have to advise the company I'm working with regarding how best to secure the data managed by the applications I'm developing for them. I will need overall guidelines to produce a design that makes it virtually impossible for them to lose even one field in one record. The data is both sensitive and vital.

Fortunately, I have a few months before we need to commit to anything. Also fortunately, with one exception, the applications rely on a data feed that comes in once a day after normal working hours, so I won't have to worry about writes to the DB other than what my script does to load the data feed into the DB. All other access is read-only. This should make it easier to produce a strategy to protect the data from any kind of technology failure (software or hardware). Cost is a factor, but reliability is much, much more important!

Thanks, Ted

---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] sound index
Teodor Sigaev wrote:
>> also, i'd be happy to listen opinions from people who have experience
>> of usage of such things like soundex.

I'm using metaphone() together with levenshtein() to search a place-name gazetteer database and order the results. That works reasonably well and gives interesting results ("places with similar names"). However, it does not cover "partial" matches (it just compares the whole string, and does not find multi-word names when only a single word is entered; e.g., it would not find "santa cruz" when you just enter "cruz").

Regarding db structure: I've specifically added a column which contains the metaphone string (loaded with "UPDATE places SET pname_metaphone = metaphone(pname, 11)"). This column is obviously indexed (and, with functional indexes, actually redundant ;). I'm then using "SELECT * FROM places WHERE pname_metaphone = metaphone('searchstring', 11)" to retrieve similar names. levenshtein() is used to order those rows by string distance.

Try it at http://nona.net/features/map/

I haven't attempted yet to combine tsearch2 and metaphone results - that would probably be the PerfectSolution(tm).

hope that helps
Alex
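The ordering step Alex describes (rank candidate names by edit distance) can be sketched in pure Python for anyone without contrib's levenshtein() at hand. This is a generic textbook implementation, not PostgreSQL's code, and the place names below are invented for illustration:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Order candidate place names by distance to a misspelled query,
# mirroring the "ORDER BY levenshtein(...)" step done server-side.
candidates = ["santa cruz", "santa clara", "san carlos"]
ranked = sorted(candidates, key=lambda name: levenshtein("santa crus", name))
print(ranked[0])  # -> santa cruz
```

In the database itself the equivalent is simply `ORDER BY levenshtein(pname, 'santa crus')` over the rows the metaphone index already narrowed down.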
Re: [GENERAL] Hardware related question: 3ware 9500S
On Wed, 2006-04-12 at 13:10, Ted Byers wrote:
> > ----- Original Message -----
> > From: "Merlin Moncure" <[EMAIL PROTECTED]>
> > To: "Janning Vygen" <[EMAIL PROTECTED]>
> > Sent: Wednesday, April 12, 2006 12:31 PM
> > Subject: Re: [GENERAL] Hardware related question: 3ware 9500S
> > [snip]
> >
> > > - I want to know if 3ware 9500 S is recommended or if its one of those
> > > controllers which sucks.
> >
> > escalade is a fairly full featured raid controller for the price.
> > consider it the ford taurus of raid controllers, it's functional and
> > practical but not sexy. Their S line is not native sata but operates
> > over a pata->sata bridge. Stay away from raid 5.
>
> Hi Merlin
>
> Why? What's wrong with raid 5? I could well be wrong (given how little
> attention I have paid to hardware over the past few years because of a
> focus on developing software), but I was under the impression that of the
> raid options available, raid 5 with hot swappable drives provided good data
> protection and performance at a reasonably low cost. Is the problem with
> the concept of raid 5, or the common implementations?
>
> Do you have a recommendation regarding whether the raid array is built into
> the server running the RDBMS (in our case PostgreSQL), or located in a
> network appliance dedicated to storing the data managed by the RDBMS? If
> you were asked to design a subnet that provides the best possible
> performance and protection of the data, but without gold-plating anything,
> what would you do? How much redundancy would you build in, and at what
> granularity?

There have been NUMEROUS discussions of RAID-5 versus RAID 1+0 in the perform group in the last year or two. Short version:

RAID 5 is useful, with large numbers of drives, for OLAP-type databases, where you're trying to get as much storage as possible from your drives. RAID 5 pretty much REQUIRES battery-backed cache for decent write performance, and even then will saturate faster than RAID 1+0.
RAID-5 cannot survive multiple simultaneous drive failures.

RAID 1+0 requires better-than-average controllers, since many serialize and lockstep data through the various layers of RAID on them. It provides less storage for a given number of drives. It is faster for OLTP workloads than RAID-5. RAID 1+0 can survive multiple drive failures as long as two drives in the same mirror set do not fail at once.

With an increasing number of drives, the chances of a RAID 5 array failing go up linearly, while the chances of RAID 1+0 failing due to multiple drive failure stay the same.
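Scott's failure-mode point can be made concrete with a toy probability model (my own back-of-the-envelope sketch, not from the thread): once one drive has already died, what is the chance that a second, uniformly random drive failure destroys the array?

```python
def second_failure_kills(n_drives: int, layout: str) -> float:
    """Probability that a second random drive failure destroys the array,
    given that one drive has already failed.

    Rough model, deliberately ignoring rebuild windows, hot spares, and
    controller behaviour:
    - RAID 5: any second failure loses data, so the probability is 1.
    - RAID 1+0: only the dead drive's mirror partner is fatal, so the
      probability is 1 / (n_drives - 1).
    """
    if layout == "raid5":
        return 1.0
    if layout == "raid10":
        return 1.0 / (n_drives - 1)
    raise ValueError(f"unknown layout: {layout}")

for n in (4, 8, 14):
    print(n, second_failure_kills(n, "raid5"), second_failure_kills(n, "raid10"))
```

With 14 drives, RAID 1+0 survives roughly 92% of second failures in this model, while RAID 5 survives none of them, which is the intuition behind Scott's comparison.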
Re: [GENERAL] Leverage your PostgreSQL V8.1 skills to learn DB2
On Wed, 2006-04-12 at 11:28, Ian Harding wrote:
> This is interesting.
>
> http://www-128.ibm.com/developerworks/db2/library/techarticle/dm-0603wasserman2/
>
> There are a few bugs
>
> 1. In the graphic overview PostgreSQL == Progres
> 2. In description of PostgreSQL database cluster, "After
> initialization, a database cluster contains a database called
> postgres, which is a default database used by utilities, users and
> third party applications." That doesn't seem to be necessarily true.
> I don't have that database
> 3. He misses the fact that some configuration options can be set
> interactively via SET
> 4. "The VACUUM command must be run on a regular basis to recover disk
> space occupied by updated or deleted rows and to update data
> statistics used by the PostgreSQL query planner." with no mention of
> autovacuum. He is talking about 8.1.
>
> Every time I learn about a big commercial database I am amazed at how
> much 'bigger' and more complicated than PostgreSQL they are - I guess
> they have to be for their intended use - and how similar to PostgreSQL
> they are in actual function.

Actually, after reading that page, I don't think DB2 came off as being much better than PostgreSQL.

The postgres database is new to 8.1, I believe. I think it was as much as anything to stop the bothersome "database postgres does not exist" error messages a newbie would get when starting out.

In addition to no mention of autovacuum, there's no mention of analyze. Since this is a guide for people going FROM PostgreSQL TO DB2, it's forgivable, I guess, to leave out autovacuum, but analyze is really closer to the RUNSTATS command than the vacuum command is. vacuum analyze would probably be the best match.

The one thing that bothered me is that they didn't mention the serializable transaction mode in PostgreSQL.
It's quite possible that people migrating from transaction systems utilizing serializable will run into more bumps in the road on db2 than folks migrating from a system that uses Read Committed.
Re: [GENERAL] Hardware related question: 3ware 9500S
----- Original Message -----
From: "Merlin Moncure" <[EMAIL PROTECTED]>
To: "Janning Vygen" <[EMAIL PROTECTED]>
Sent: Wednesday, April 12, 2006 12:31 PM
Subject: Re: [GENERAL] Hardware related question: 3ware 9500S

[snip]

> - I want to know if 3ware 9500 S is recommended or if its one of those
> controllers which sucks.

escalade is a fairly full featured raid controller for the price. consider it the ford taurus of raid controllers, it's functional and practical but not sexy. Their S line is not native sata but operates over a pata->sata bridge. Stay away from raid 5.

Hi Merlin

Why? What's wrong with RAID 5? I could well be wrong (given how little attention I have paid to hardware over the past few years because of a focus on developing software), but I was under the impression that of the RAID options available, RAID 5 with hot-swappable drives provided good data protection and performance at a reasonably low cost. Is the problem with the concept of RAID 5, or the common implementations?

Do you have a recommendation regarding whether the RAID array is built into the server running the RDBMS (in our case PostgreSQL), or located in a network appliance dedicated to storing the data managed by the RDBMS? If you were asked to design a subnet that provides the best possible performance and protection of the data, but without gold-plating anything, what would you do? How much redundancy would you build in, and at what granularity?

Ted
[GENERAL] "Hidden" field for each column
Hi,

Is there any way that I can consistently (across all tables) add a parameter defining "what" each column "is"? (Sorry, crappy grammar and an ill-formed question.) I want to be able to embed in the database parameters that tell it how to interpret each column. In other words, while the TYPE for two columns might be text/char/varchar/etc., the data each contained could have vastly different interpretations. For example, it might be a surname. Or the name of a city. Or the name of a hand tool. Or a street address.

I would like the support routines to be able to look at these parameters and adjust queries and other activities accordingly. So, when matching a surname, it might apply some heuristics built on metaphone. Yet, when matching a book's title, it may try some simple substitutions and permutations (e.g., stripping words like "The" from the beginning of the title, or relaxing rules for punctuation or capitalization). Burying this information in the database itself will let the applications be unconcerned with adding this level of support at their level, and afford all applications a consistent set of capabilities.

On a similar, but different, note: are there any ways to "hide" a value behind each datum? I.e., to treat each entry as a struct containing the actual data type PLUS this "hidden" value? Without explicitly creating ADTs for each potential datatype?

(Sorry, I realize both questions are off-the-wall and I can undoubtedly come up with kludges to implement them, but I was hoping for something "slicker"... :>)

Thanks!
--don
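One lightweight approximation of what Don asks for, without ADTs: record a per-column "semantic type" in a small metadata table (or in a COMMENT ON COLUMN) and have the support routines dispatch on it. A hypothetical Python sketch; the table names, semantic tags, and normalizers are all invented for illustration:

```python
# Map each (table, column) to an application-defined semantic tag.  In
# PostgreSQL this mapping could live in a small metadata table so that
# every application sees the same interpretation of each column.
COLUMN_SEMANTICS = {
    ("people", "surname"): "surname",
    ("books", "title"): "title",
}

def normalize_title(s: str) -> str:
    # Relax capitalization/punctuation and strip a leading article.
    s = s.strip().lower().rstrip(".!?")
    for article in ("the ", "a ", "an "):
        if s.startswith(article):
            s = s[len(article):]
            break
    return s

def normalize_surname(s: str) -> str:
    # Placeholder for a phonetic key such as soundex or metaphone.
    return s.strip().lower()

NORMALIZERS = {"title": normalize_title, "surname": normalize_surname}

def match(table: str, column: str, a: str, b: str) -> bool:
    """Compare two values according to the column's semantic type."""
    kind = COLUMN_SEMANTICS.get((table, column))
    norm = NORMALIZERS.get(kind, str.strip)
    return norm(a) == norm(b)

print(match("books", "title", "The Hobbit", "hobbit"))  # -> True
```

The point of the indirection is exactly what the post asks for: applications call one `match()` routine, and the per-column heuristics live in one shared place rather than in each application.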
[GENERAL] Off-Topic: DBMS Market Research
Hello All,

Can someone tell me where I can find DBMS market research? Which institutes publish reliable research - Gartner, IDC?

Thanks in advance.
Renato Cramer.
Re: [GENERAL] Hardware related question: 3ware 9500S
Thanks for your fast reply.

On Wednesday, 12 April 2006 18:31, Merlin Moncure wrote:
> On 4/12/06, Janning Vygen <[EMAIL PROTECTED]> wrote:
> > Hi,
> > disk 1: OS, tablespace
> > disk 2: indices, WAL, Logfiles
> > - Does my partitioning make sense?
>
> with raid 10 all four drives will appear as a single physical device
> shared by all. I'm personally not a big fan of logical partitioning
> of a single raid device unless you are trying to keep a physical
> volume under 1 TB for example. Each sync on the volume is guaranteed
> to sync all 4 disks regardless of how you set your partitions up.

OK, I am not a RAID expert, but in my understanding RAID 10 is faster than two RAID 1 arrays, isn't it? So, given that I can put up to 4 S-ATA disks in my server with the mentioned RAID controller: would you prefer no RAID, RAID 1, or RAID 10?

> > - I want to know if 3ware 9500 S is recommended or if its one of those
> > controllers which sucks.
>
> escalade is a fairly full featured raid controller for the price.
> consider it the ford taurus of raid controllers, it's functional and
> practical but not sexy. Their S line is not native sata but operates
> over a pata->sata bridge. Stay away from raid 5.

thanks for your recommendation. ford taurus is ok for me :-)

kind regards
janning
[GENERAL] postmaster.pid
The docs state that postmaster.pid is "A lock file recording the current postmaster PID and shared memory segment ID (not present after postmaster shutdown)". I never looked until now, but I see the number 5432001 where the PID should be, and the real PID is in /tmp/.s.PGSQL.5432.lock, along with the path to the data directory. Is the documentation incomplete/misleading, or am I doing something odd?

- Ian
Re: [GENERAL] sound index
Have a look at contrib/pg_trgm.

Nikolay Samokhvalov wrote:
> hello. does anybody know any solutions to the problem of searching
> words/phrases which are close to each other by sounding? e.g. a soundex
> index or something.
>
> The problem I have: a tag suggestion mechanism, similar to Google Suggest,
> which is intended to suggest names of people (search field "person's name"
> in a web form). It would be great if it did its work smarter than a simple
> LIKE.
>
> also, i'd be happy to listen opinions from people who have experience
> of usage of such things like soundex.
>
> -- Best regards, Nikolay

-- Teodor Sigaev   E-mail: [EMAIL PROTECTED]   WWW: http://www.sigaev.ru/
Re: [GENERAL] Hardware related question: 3ware 9500S
On 4/12/06, Janning Vygen <[EMAIL PROTECTED]> wrote:
> Hi,
> disk 1: OS, tablespace
> disk 2: indices, WAL, Logfiles
> - Does my partitioning make sense?

with raid 10 all four drives will appear as a single physical device shared by all. I'm personally not a big fan of logical partitioning of a single raid device unless you are trying to keep a physical volume under 1 TB for example. Each sync on the volume is guaranteed to sync all 4 disks regardless of how you set your partitions up.

> - I want to know if 3ware 9500 S is recommended or if its one of those
> controllers which sucks.

escalade is a fairly full featured raid controller for the price. consider it the ford taurus of raid controllers, it's functional and practical but not sexy. Their S line is not native sata but operates over a pata->sata bridge. Stay away from raid 5.

merlin
[GENERAL] Leverage your PostgreSQL V8.1 skills to learn DB2
This is interesting.

http://www-128.ibm.com/developerworks/db2/library/techarticle/dm-0603wasserman2/

There are a few bugs:

1. In the graphic overview, PostgreSQL == Progres.
2. In the description of a PostgreSQL database cluster: "After initialization, a database cluster contains a database called postgres, which is a default database used by utilities, users and third party applications." That doesn't seem to be necessarily true; I don't have that database.
3. He misses the fact that some configuration options can be set interactively via SET.
4. "The VACUUM command must be run on a regular basis to recover disk space occupied by updated or deleted rows and to update data statistics used by the PostgreSQL query planner." -- with no mention of autovacuum. He is talking about 8.1.

Every time I learn about a big commercial database I am amazed at how much 'bigger' and more complicated than PostgreSQL they are - I guess they have to be for their intended use - and how similar to PostgreSQL they are in actual function.
Re: [GENERAL] How to import a CSV file (originally from Excel)
> > \copy "Flight Traffic" from yourfile.csv delimiter as ',' csv quote as '"'
> >
> > (might need some adaptation, of course)
> >
> > Loading 45,000 lines is trivial for copy, it shouldn't take noticeable
> > time at all.
>
> along these lines - can I do something similar (CSV file) but 'update'
> 2 or 3 columns?

I'd use COPY to a temp table, then run a normal UPDATE on that.

//Magnus
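Magnus's "COPY into a temp table, then run a normal UPDATE" pattern can be sketched end-to-end with an in-memory SQLite stand-in. In real PostgreSQL the staging load would be COPY or psql's \copy rather than executemany, and the table and column names here are invented for the demo:

```python
import csv
import io
import sqlite3

# A target table with existing rows; only some columns should be updated
# from the CSV feed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flights (id INTEGER PRIMARY KEY, carrier TEXT, pax INTEGER)")
conn.executemany("INSERT INTO flights VALUES (?, ?, ?)",
                 [(1, "AA", 100), (2, "BA", 200)])

# The incoming CSV carries the key plus the 2-3 columns to refresh.
csv_data = "id,pax\n1,150\n2,250\n"
rows = list(csv.DictReader(io.StringIO(csv_data)))

# Step 1: stage the CSV rows in a temp table (COPY in PostgreSQL).
conn.execute("CREATE TEMP TABLE staging (id INTEGER, pax INTEGER)")
conn.executemany("INSERT INTO staging VALUES (:id, :pax)", rows)

# Step 2: a normal UPDATE pulls the staged values across by key.
conn.execute("""
    UPDATE flights
       SET pax = (SELECT s.pax FROM staging s WHERE s.id = flights.id)
     WHERE id IN (SELECT id FROM staging)
""")
print(conn.execute("SELECT pax FROM flights ORDER BY id").fetchall())
```

The staging table keeps the fast bulk-load path for the data while the UPDATE touches only the columns of interest.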
[GENERAL] Hardware related question: 3ware 9500S
Hi,

I don't know much about hard disks and RAID controllers, but often there is some discussion about which RAID controller rocks and which sucks. My hosting company offers me a RAID 10 with 4 serial ATA disks. They will use a "3ware 4-Port RAID Controller 9500S". More than 4 disks are not possible. Most operations, and all time-critical operations, are read-only and use a lot of indices.

My partitioning plans are like this:

disk 1: OS, tablespace
disk 2: indices, WAL, logfiles

- Does my partitioning make sense?
- I want to know if the 3ware 9500S is recommended, or if it's one of those controllers which sucks.

kind regards,
janning
[GENERAL] Could not send Query(connection dead)
My client application receives this error randomly a number of times per day when a user tries to save a document. The TCP connection to the server seems to be OK, since the INSERT INTO logfile ... command which logs this error to the database works. (I use a single TCP connection to the server in the application.) How can I fix this error?

Environment:
Postgres 8.1.3 running on Windows 2000 Server
Client application uses the Postgres ODBC driver from XP, no SSL

Andrus.
Re: [GENERAL] hard shut down of system
"surabhi.ahuja" <[EMAIL PROTECTED]> writes:
> if postmaster is running and i do,
> kill -9 -1
> i.e. i did abrupt shut down
> i am not able to start postmaster

What happens exactly when you try --- what error messages does it print? What method are you using to try to start the postmaster (pg_ctl, some init script (whose), etc)?

The reasons I know about for failure to restart in this situation include:

* you forgot to kill all the postmaster child processes too;
* you didn't remove the shared memory segment and/or semaphores, and the system's SysV limits are too small to let a new postmaster create a whole duplicate set.

None of these things have anything to do with what happens in a real system crash.

regards, tom lane
Re: [GENERAL] How to import a CSV file (originally from Excel) into a Postgresql 8.0.7 database
Hello,

Thank you for all your answers. I did indeed generate a .csv from OpenOffice (Excel could do the job as well) and I imported it into PostgreSQL using 'copy'. I had to put the .csv files into /tmp to avoid permission issues, but it went fine and fast. Importing 45,000 lines took about a second. This is impressive...

Thanks for your help.

> > For generation in Excel, you can just use File->Save, and select CSV as
> > the format. You can use either CSV or tab-delimited, really.
>
> I am not sure if it matters with PostgreSQL, but other programs require
> (MS-DOS) CSV format rather than the initial CSV choice.
>
> Regards, Richard

--
Daniel TOURDE   E-mail: [EMAIL PROTECTED]
Tel: +46 (0)8-55 50 32 12   Fax: +46 (0)8-55 50 30 68   Cellular: +46 (0)70-849 93 40
FOI, Swedish Defence Research Agency; Systemteknik
Department of Aviation Environmental Research
SE-164 90 Stockholm, Sweden
Re: [GENERAL] [HACKERS] RH9 postgresql 8.0.7 rpm
Hi,

On Wed, 2006-04-12 at 13:07 +0200, Gaetano Mendola wrote:
> I was able to create it with:
> --nodeps --define 'buildrhel3 1' --define 'build9 1'

I'll be happy if you send the RPMs directly to me, so that I can upload them. A tarball would be fine.

Regards,
--
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: PL/php, plPerlNG - http://www.commandprompt.com/
Re: [GENERAL] RH9 postgresql 8.0.7 rpm
Hi,

On Wed, 2006-04-12 at 12:24 +0200, Gaetano Mendola wrote:
> I thought given this link
> http://www.postgresql.org/ftp/binary/v8.0.7/linux/srpms/redhat/redhat-9/
> is not empty RH9 was still supported.

PostgreSQL builds on Red Hat 9, but "we" (the RPM packagers) can't provide RPMs now, because I upgraded my RH9 box. I provided the SRPMs for most platforms for the people who would like to pick them up and build RPMs on their boxes (and who want to contribute the RPMs they build ;) )

Regards,
--
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: PL/php, plPerlNG - http://www.commandprompt.com/
Re: [GENERAL] [HACKERS] RH9 postgresql 8.0.7 rpm
Devrim GUNDUZ wrote:
> Hi Gaetano,
>
> On Tue, 2006-04-11 at 18:31 +0200, Gaetano Mendola wrote:
>> I'm trying to build the rpms for RH9. I downloaded the srpm for RH9 but
>> I'm stuck on these errors:
>>
>> Attempt a:
>>
>> # rpmbuild --rebuild postgresql-8.0.7-1PGDG.src.rpm
>> Installing postgresql-8.0.7-1PGDG.src.rpm
>> error: Failed build dependencies:
>> tcl-devel is needed by postgresql-8.0.7-1PGDG
>>
>> why tcl-devel on the rh9 version? tcl-devel doesn't exist on rh9
>
> We use only one spec file for all platforms. If you are using Red Hat 9,
> you should consider using the macros that are enabled for RH9 (build89
> and/or build9 in the spec file).
>
>> Attempt b:
>> # rpmbuild --nodeps --rebuild postgresql-8.0.7-1PGDG.src.rpm
>>
>> checking krb5.h presence... no
>> checking for krb5.h... no
>> configure: error: header file is required for Kerberos 5
>> error: Bad exit status from /var/tmp/rpm-tmp.73067 (%build)
>
> I think rebuilding with --define 'buildrhel3 1' will work here.
>
>> ok no kerberos now:
>>
>> Attempt c:
>> # rpmbuild --nodeps --rebuild --define 'kerberos 0' postgresql-8.0.7-1PGDG.src.rpm
>> .
>> checking for zlib.h... yes
>> checking openssl/ssl.h usability... no
>> checking openssl/ssl.h presence... no
>> checking for openssl/ssl.h... no
>> configure: error: header file is required for OpenSSL
>> error: Bad exit status from /var/tmp/rpm-tmp.3109 (%build)
>>
>> actually I have that file:
>>
>> # locate openssl/ssl.h
>> /usr/include/openssl/ssl.h
>
> I have no idea about this and I can't remember right now how I was
> building RH9 RPMs ...
>
> If you provide me a RH9 box, I can help you.

I was able to create it with:

--nodeps --define 'buildrhel3 1' --define 'build9 1'

Thank you for the support, and I'm really sorry not to be able to write/read the postgres forums as I used to do in the past.
Regards,
Gaetano Mendola
Re: [GENERAL] RH9 postgresql 8.0.7 rpm
Joshua D. Drake wrote:
> Gaetano Mendola wrote:
>> Hi all,
>> I'm trying to build the rpms for RH9. I downloaded the srpm for RH9 but
>> I'm stuck on these errors:
>
> RH9 is not a supported platform by RedHat or PGDG.

I thought, given that this link

http://www.postgresql.org/ftp/binary/v8.0.7/linux/srpms/redhat/redhat-9/

is not empty, RH9 was still supported.

Regards,
Gaetano Mendola