Re: [GENERAL] Difference between PRIMARY KEY index and UNIQUE-NOT NULL index
Josh Tolley [EMAIL PROTECTED] writes:
> Might it just be that the original UNIQUE + NOT NULL index was bloated
> or otherwise degraded, and reindexing it would have resulted in the
> same performance gain? That's just a guess.

Yeah. There is precious little difference between UNIQUE+NOT NULL and
PRIMARY KEY --- to be exact, the latter will allow another table to
reference this one in a FOREIGN KEY clause without specifying column
names. The planner knows nothing of that little convenience.

The interesting thing about this report is that the plan changed after
creating the new index. That has to mean that some statistic visible to
the planner changed. Creating an index does update the pg_class columns
recording the table's size and number of rows, but probably those
weren't that far off to start with. My bet is that the new index is a
lot smaller than the old one because of bloat in the old index. If so,
REINDEX would have had the same result.

			regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
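Tom's bloat theory can be checked before rebuilding anything. A minimal sketch (the table and index names below are placeholders, not from the original report; pg_relation_size() is available from 8.1 on):

```sql
-- Compare the on-disk size of the suspect UNIQUE index with the fresh
-- primary key index; a badly bloated index is typically several times
-- larger than a freshly built one over the same data.
SELECT relname, pg_relation_size(oid) AS size_bytes
  FROM pg_class
 WHERE relname IN ('mytable_unique_idx', 'mytable_pkey');

-- Rebuilding the old index in place would have had the same effect
-- as creating the new one.
REINDEX INDEX mytable_unique_idx;
```

If the rebuilt index comes out much smaller and the plan changes the same way, bloat (not PRIMARY KEY vs UNIQUE) was the cause.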
[GENERAL] Char vs SmallInt
Which data type is smaller and will lead to better query performance - smallint or char?
[GENERAL] Best interval timeout and check interval for Slony
Hi all,

I want to replicate my DB (125 MB) with one master and two slaves over a
VPN with 64 Kbps of bandwidth. What are the best values for the
connection timeout and check interval? So far I have been using the
defaults. Does anyone have experience choosing a good timeout? Can I use
a trigger to enforce a connection timeout, and if so, could you give me
an example? If not, is the -t option my only choice? I have tried that,
but still could not get it working. How can I check for a connection
timeout?

Thanks,
Regards,
Angga Bayu
Re: [GENERAL] Char vs SmallInt
Pg Coder [EMAIL PROTECTED] writes:
> Which data type is smaller and will lead to better query performance -
> smallint or char?

smallint is 2 bytes, usually with 2-byte alignment.

If you declare a column as char it means CHAR(1), which in 8.2 is 5-8
bytes (5 bytes if it's an ASCII character) and has 4-byte alignment. In
8.3 it's 2-5 bytes (2 bytes if it's an ASCII character) and has 1-byte
alignment.

If you declare a column as "char" (with the quotes) then it's a 1-byte
type with 1-byte alignment. That'll be smaller than smallint.

-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
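The per-value sizes Gregory quotes can be checked on any server with pg_column_size(); the results for char(1) depend on the server version, per his 8.2 vs. 8.3 note:

```sql
-- smallint is always 2 bytes; "char" is always 1; char(1) carries a
-- varlena header whose size depends on the server version.
SELECT pg_column_size(1::smallint)  AS smallint_bytes,
       pg_column_size('A'::char)    AS char1_bytes,
       pg_column_size('A'::"char")  AS quoted_char_bytes;
```

Note that pg_column_size() reports the value's storage size, not the padding added by alignment, so the alignment differences Gregory mentions won't show up here.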
Re: [GENERAL] Foreign key constraint question
Jeff Davis skrev:
> On Fri, 2007-07-20 at 19:18 -0500, Perry Smith wrote:
>>> The relational model handles inheritance and polymorphism very well
>>> if you don't store types as values.
>>
>> What if I have just an id for an item? This will happen when another
>> table references an item. How do I know what type it is? Are you
>> suggesting I look in companies, people, etc., to find the type? It
>> would seem better to have a table that tells me the type, and then
>> retrieve the item from the specified table.
>
> Why do you need to know the type? The purpose of polymorphism is that,
> if you are looking to access a set of polygons, you don't care whether
> an individual shape is a triangle or a square; all you care about is
> that it's a polygon. If you want to access triangles specifically, you
> join polygons to triangles.

What if, for instance, I want to render a list of shapes? To render a
shape, I need to get its data; to get its data, I need to know what type
it is. ISTM that the easiest way to achieve this is storing the type
info at the top of the table hierarchy.

This then gives us two evils to choose from wrt integrity checking:

1. Add the type column to the subtype tables as well, restricting the
   possible values. This gives us integrity at the cost of verbosity and
   lack of modularity (a type needs to know its subtypes).

2. Leave integrity enforcement to the applications (and possibly write
   stored procedures for it).

Both of these are reasonable implementation choices, I would say. The OP
wants to do 1, is only envisioning one level of inheritance, and wants a
shortcut for it.

Yours,
Nis Jorgensen

PS: Hi Jeff. Small world, isn't it?
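Option 1 above can be sketched as plain DDL. The table and column names here are illustrative, not the OP's actual schema:

```sql
-- Supertype table carries the type discriminator; the UNIQUE
-- constraint gives composite foreign keys a target.
CREATE TABLE items (
    item_id   integer PRIMARY KEY,
    item_type text    NOT NULL,
    UNIQUE (item_id, item_type)
);

-- Each subtype repeats the discriminator, pinned to a single value, so
-- an item row can only be claimed by the matching subtype table.
CREATE TABLE companies (
    item_id   integer PRIMARY KEY,
    item_type text    NOT NULL DEFAULT 'Company'
              CHECK (item_type = 'Company'),
    name      text,
    FOREIGN KEY (item_id, item_type)
        REFERENCES items (item_id, item_type)
);
```

The cost Nis mentions is visible here: every subtype table must carry its own copy of item_type just to participate in the composite foreign key.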
[GENERAL] encodings
Hello.
If I have a UTF8 database, dump it and restore it as a WIN1250 database,
then dump it again and restore it as UTF8, would the structure of the
database (schema) be exactly the same as the initial database, or would
something change in the process? In other words, does the encoding
influence only the data stored in tables, or does it influence the
database structure as well?
Thanks.

Zlatko
Re: [GENERAL] Ordering by a complex field
In article [EMAIL PROTECTED], Robert James [EMAIL PROTECTED] wrote:

% I'd like to order so that records where field='2' come first, then
% '1', then '9', then anything but '0', then '0'. Is there any way to do
% this in a standard ORDER BY clause (that is, without writing a new SQL
% function)?

You can use a CASE statement in the ORDER BY clause:

  ORDER BY CASE
             WHEN field = '0' THEN 4
             WHEN field = '1' THEN 1
             WHEN field = '2' THEN 0
             WHEN field = '9' THEN 2
             ELSE 3
           END

-- 
Patrick TJ McPhee
North York  Canada
[EMAIL PROTECTED]
[GENERAL] Primary Key Performance with INTEGER vs. VARCHAR
Hi,

Designing my application, I was wondering whether having my primary keys
(to be indexed) as VARCHAR brings performance down significantly. My own
test didn't show much difference. Thinking about it though, I'd guess
integer indexing should be much quicker and more efficient.

I'd appreciate all comments in this regard,
Mike
[GENERAL] Help with date math
I need some help. I am trying to replicate a function from Sybase ASA,
and am having difficulty. I need to be able to subtract two dates (or
timestamps) and return the result expressed in days, weeks, months,
quarters, or years. How do I do this?

I believe Postgres returns the number of days when you subtract two
dates, i.e.:

  postgres=# select current_date - '2007/01/01';
   ?column?
  ----------
        200
  (1 row)

However, I cannot figure out how to get the result expressed as 7
months, or 2 quarters, x weeks, or 0 years. I have tried to use
date_part, but since the subtraction returns an integer, it is not
happy.

Any help would be greatly appreciated.

Chris

P.S. I'm trying to replicate Sybase's datediff function.
[GENERAL] postgresql compile problem
Hi,

Recently I downloaded postgresql-8.1.9.tar.gz from the official website
and installed it on my Linux system, whose gcc version is 2.96. Although
the install succeeded, the version I see when I check is 7.2.1. How can
this happen? Can you tell me the reason?

Thanks a lot!

Regards,
caijun
Re: [GENERAL] Foreign key constraint question
Perry Smith wrote:
> I want to do something like this:
>
>   ALTER TABLE companies
>     ADD CONSTRAINT fk_companies_item_id
>     FOREIGN KEY (item_id, 'Company')
>     REFERENCES item_bases(item_id, item_type)
>     INITIALLY DEFERRED
>
> I could add a column to companies that is always set to 'Company', but
> that seems like a waste. I tried the above and I got a syntax error.

Can there be more than one row in item_bases with the same item_id but
different item_types?

-- 
Lew
[GENERAL] shp2pgsql Help with syntax!
Hi! New to Postgres, I'm trying to import shapefiles into PostgreSQL 8.2
using shp2pgsql, but so far I have failed. I tried using the -p option
just to prepare/create the tables and then insert the data, but no luck.
Please help me write the correct syntax to accomplish the following:

  Shapefile named callesnac.shp
  Database postgres (postgres default installation)
  Schema sky

Need:
  - Create table callesnac
  - Add column gid integer / not null default / nextval
  - Add column clasificac_vias varchar
  - Import/insert all the data from callesnac.shp into the callesnac
    table

Thanks!
[GENERAL] PGInstaller Project
This project is of great interest to me as I am looking for an
easier-to-deploy database server and I've always loved PostgreSQL. The
PGInstaller project (http://pgfoundry.org/projects/pginstaller/) looks
like what I need, but I'm worried that it's abandoned. I see a beta
posted back in April but nothing more since then.

Is there another, more active project working on a way to install PG
without making the user go through all the configuration by hand? I want
to ship PG with desktop software, and I'd rather choose all the defaults
myself than try to talk a novice user through setting up PG.

-- 
- Mitchell Vincent
- K Software - Innovative Software Solutions
- Visit our website and check out our great software!
- http://www.ksoftware.net
Re: [GENERAL] Update of table lags execution of statement by 1 minute?
This happens in the same session. I have an application that had been
encountering this issue periodically, and I have rigged it to send me an
email whenever it happens. I'll get that email, fire up my client, and
try to update the row manually. Right after my update (autocommit is on,
but it makes no difference if I do a manual commit), I'll issue a select
statement on the same row, and it returns the value from before the
update. I'll keep watching that row, and eventually it will show the
right value in that column. Once again, looking at the query logs, there
are no conflicting updates happening.

With your corrupted index theory, would simply rebuilding the index
"fix" the issue? Is there any other way to diagnose whether this is
indeed the cause?

Thanks for your help,
Erik Peterson

On 7/17/07 10:54 AM, Tom Lane [EMAIL PROTECTED] wrote:
> Erik Peterson [EMAIL PROTECTED] writes:
>> I'm having this issue where once or twice per day (out of ~100,000
>> queries) the table doesn't reflect a committed update immediately.
>> Usually when this problem occurs the update takes 1-3 minutes to be
>> reflected in SELECT queries. Occasionally, it has taken more than 10
>> minutes. The session could go something like this:
>>
>>   UPDATE mytable SET myattribute=1 WHERE id=14;
>>   COMMIT;
>>   SELECT myattribute FROM mytable WHERE id=14;
>>     (Query returns myattribute with a value of 0)
>>   (Wait 5 minutes)
>>   SELECT myattribute FROM mytable WHERE id=14;
>>     (Query returns myattribute with a value of 1)
>
> To be blunt, I don't believe it. I can think of bugs by which a commit
> might be lost entirely, but there is no mechanism that would make it
> good five minutes later. I think you've misdiagnosed your problem
> somehow --- either you're not really committing where you think you
> are, or the observing query is using an old snapshot (maybe you are
> running it in a serializable transaction?).
>
> A somewhat more credible theory would revolve around corrupted
> indexes. If there's a corrupted index on id in the above example, a
> query might sometimes find one version of a row and sometimes find
> another; although any given search would be deterministic,
> apparently-unrelated changes in the index contents could change which
> one is found.
>
> Have you actually been able to reproduce a problem as sketched above
> in a single session --- i.e., the update and the contradictory
> observations all done by the same backend? Or is this a representation
> of things that are happening in different sessions? What else is going
> on meanwhile?
>
>			regards, tom lane
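One way to probe the corrupted-index theory from a single session is to compare an index scan against a forced sequential scan of the same row; the enable_* settings only steer the planner, so this is safe to try (the table and column names are from the thread's example):

```sql
-- Force a sequential scan: this reads the heap directly and bypasses
-- any index corruption.
SET enable_indexscan = off;
SET enable_bitmapscan = off;
SELECT myattribute FROM mytable WHERE id = 14;

-- Now let the planner use the index again and repeat the query.
RESET enable_indexscan;
RESET enable_bitmapscan;
SELECT myattribute FROM mytable WHERE id = 14;

-- If the two results disagree, the index is returning the wrong row
-- version; rebuilding it should fix the symptom.
REINDEX TABLE mytable;
```

A disagreement between the two scans is strong evidence for Tom's theory; agreement doesn't fully rule it out, but points back toward snapshot or commit-timing explanations.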
[GENERAL] Problem with large bytea fields in windows
I have been having a lot of trouble with Postgres and large bytea fields
under Windows and would appreciate any suggestions.

I have a large database (22 GB) that I have slowly grown on one machine.
It is currently running PostgreSQL 8.2.3 under Windows XP. It is a
simple DB with one table that has a serial column and a large bytea
column. It is a repository for scanned documents. A few of these
documents are over 250 pages and take up at least 100 MB per row. A
backup takes almost a whole day, and almost as long to copy onto a USB
drive. I have not had any trouble with these large inserts (using libpq
calls) or in making a backup.

Recently I started using Slony, first for a 400 MB database, and then I
attempted to use it on this large database. The slave machine is running
Windows Vista and the same version of Postgres. The copy failed at about
12 GB with a Windows error 10055, "ran out of buffer space". I have now
tried over 30 times, all without success. I tried using just two
machines sitting next to each other and an Ethernet switch. I tried
adjusting the TCP parameters in the registry following Microsoft's
advice in a few of the Knowledge Base articles. I tried running the
Slony process on a third Linux machine. Always the same problem. I ran
the process with no one logged onto these machines so no other processes
were running. I shut off all unnecessary services. Still the copies
failed after 12-15 GB, with the same error. I turned off autovacuum and
increased all of the memory settings in postgresql.conf. I added memory
to both machines so they both have 2 GB.

I posted this problem to the slony.info list. The main suggestion seems
to be that it is really a libpq error. Darcy Buskermolen suggested that
the problem was with libpq, since the initial subscribe process mimics a
pg_dump | psql. He suggested a Postgres mailing list post,
http://archives.postgresql.org/pgsql-ports/2005-11/msg0.php, which
describes a winsock problem in libpq. No resolution is given, however.

Does anyone have any suggestions? At the moment I am all out of ideas.

Thank you for any help,
Al Rosenthal
arosnethal at AtlantaHand dot com
[GENERAL] Difference between PRIMARY KEY index and UNIQUE-NOT NULL index
Hi all.
Maybe mine is a stupid question, but I'd like to know the answer if
possible. In an inner join involving a 16M+ row table and a 100+ row
table, performance improved drastically (100+ times) after replacing a
UNIQUE-NOT NULL index with a PRIMARY KEY on the very same columns in the
very same order. The query was not modified. In the older case, thanks
to the EXPLAIN command, I saw that the join was causing a sort on the
index elements, while with the primary key it was not. So there's some
difference for sure, but I'm missing it.

Any hint?

-- 
Vincenzo Romano
-= NotOrAnd.IT Information Technologies =-
tel +39 0823 454163 | cel +39 339 8083886 | fax +39 02 700506964
Smooth seas never make experienced sailormen
[GENERAL] help with tsearch2 stem compile
hi all

I'm using PostgreSQL 8.2.4 with tsearch2 installed, but I need the
Spanish language. Following
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/README.gendict
and downloading stem.c and stem.h from
http://snowball.tartarus.org/algorithms/spanish/stemmer.html:

  ./config.sh -n pt -s -p spanish_ISO_8859_1 -v -C'Snowball stemmer for spanish'
  Dictname: 'pt'
  Snowball stemmer: yes
  Has init method: yes
  Function prefix: spanish_ISO_8859_1
  Source files: stem.c
  Header files: stem.h
  Object files: stem.o dict_snowball.o
  Comment: 'Snowball stemmer for spanish'
  Directory: ../../dict_pt
  Build directory... ok
  Build Makefile... ok
  Build dict_pt.sql.in... ok
  Copy source and header files... ok
  Build sub-include header... ok
  Build Snowball stemmer... ok
  Build README.pt... ok
  All is done

and then:

  cd ../../dict_pt/
  make
  ...
  stem.c: In function 'spanish_ISO_8859_1_close_env':
  stem.c:1092: error: too many arguments to function 'SN_close_env'
  make: *** [stem.o] Error 1

Any clue?

best regards
mdc
Re: [GENERAL] Help with date math
Chris Hoover wrote:
> I need to be able to subtract two dates (or timestamps) and return the
> result expressed in days, weeks, months, quarters, or years. How do I
> do this?

  postgres=# select age(current_date,'2007/01/01');
        age
  ----------------
   6 mons 20 days
  (1 row)

http://www.postgresql.org/docs/8.2/static/functions-datetime.html
Re: [GENERAL] Help with date math
Chris Hoover wrote:
> However, I cannot figure out how to get the result expressed as 7
> months, or 2 quarters, x weeks, or 0 years.

  postgres=# select date_part('month', age(current_date, '2007/01/01'));
   date_part
  -----------
           6
  (1 row)
Re: [GENERAL] help with tsearch2 stem compile
On Tue, 17 Jul 2007, marcelo Cortez wrote:
> stem.c: In function 'spanish_ISO_8859_1_close_env':
> stem.c:1092: error: too many arguments to function 'SN_close_env'
> make: *** [stem.o] Error 1

Try the patch from
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/, which updates
the snowball API.

Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83
Re: [GENERAL] Help with date math
On Fri, 2007-07-20 at 11:08 -0400, Chris Hoover wrote:
> I need to be able to subtract two dates (or timestamps) and return the
> result expressed in days, weeks, months, quarters, or years. How do I
> do this? I believe Postgres is returning the number of days when you
> subtract two dates. However, I cannot figure out how to get the result
> expressed as 7 months,

Would not 6 months be more logical?

> or 2 quarters, x weeks, or 0 years.

Have you tried extract()?

There is an old compatibility project on pgFoundry that has a datediff
function, but I have no idea how well it copies the MSSQL behaviour:

http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/mssqlsupport/datetimefunctions.sql?rev=1.1.1.1&content-type=text/x-cvsweb-markup

It looks a bit simplistic to me, but maybe that is exactly what you
need.

gnari
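Putting age() and extract() together, every unit asked for in the thread can be derived from one interval. Quarters and weeks are not interval fields, so they are computed arithmetically here; this is a sketch of the idea, not Sybase-exact datediff semantics:

```sql
-- Weeks come from integer division of the day count; months and
-- quarters come from the year/month fields of age()'s interval.
SELECT (current_date - date '2007-01-01') / 7                     AS weeks,
       extract(year  FROM age(current_date, date '2007-01-01'))   AS years,
       extract(year  FROM age(current_date, date '2007-01-01')) * 12
     + extract(month FROM age(current_date, date '2007-01-01'))   AS months,
       floor((extract(year  FROM age(current_date, date '2007-01-01')) * 12
            + extract(month FROM age(current_date, date '2007-01-01'))) / 3)
                                                                  AS quarters;
```

Note that Sybase's datediff counts boundary crossings rather than elapsed units, so a faithful port would need extra date_trunc() logic on top of this.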
[GENERAL] CentOS 4 RPMs for 8.2.4?
I'd like to update from 8.1.9 to 8.2.4 to take advantage of the
multiple-value INSERT, but don't see at least two of the RPMs for 8.2.4
that I need.

The installed 8.1.9 RPMs (from the centosplus repository) are:

  postgresql-8.1.9-1.el4s1.1
  postgresql-contrib-8.1.9-1.el4s1.1
  postgresql-devel-8.1.9-1.el4s1.1
  postgresql-docs-8.1.9-1.el4s1.1
  postgresql-jdbc-8.1.4-1.centos.1
  postgresql-libs-8.1.9-1.el4s1.1
  postgresql-odbc-08.01.0200-2.el4s1.3
  postgresql-pl-8.1.9-1.el4s1.1
  postgresql-python-8.1.9-1.el4s1.1
  postgresql-server-8.1.9-1.el4s1.1
  postgresql-tcl-8.1.9-1.el4s1.1
  postgresql-test-8.1.9-1.el4s1.1

while the RHEL4 RPMs at postgresql.org are:

  compat-postgresql-libs-4-2PGDG.rhel4.i686.rpm
  postgresql-8.2.4-1PGDG.i686.rpm
  postgresql-contrib-8.2.4-1PGDG.i686.rpm
  postgresql-devel-8.2.4-1PGDG.i686.rpm
  postgresql-docs-8.2.4-1PGDG.i686.rpm
  postgresql-libs-8.2.4-1PGDG.i686.rpm
  postgresql-plperl-8.2.4-1PGDG.i686.rpm
  postgresql-plpython-8.2.4-1PGDG.i686.rpm
  postgresql-pltcl-8.2.4-1PGDG.i686.rpm
  postgresql-server-8.2.4-1PGDG.i686.rpm
  postgresql-test-8.2.4-1PGDG.i686.rpm

I need the Java and Python interfaces supplied with (from 8.1.9):

  postgresql-jdbc-8.1.4-1.centos.1
  postgresql-python-8.1.9-1.el4s1.1

Can I just continue to use those with the above 8.2.4 RPMs? What about
the ODBC support, which would be useful but not as critical?

While I'm at it, I also need libpqxx working for C++. The last set of
RPMs I've found are really for PG 7, but they seem to work all right
with PG 8.1.x (after some trickery to get them to install with and use
libpq.so.4 instead of libpq.so.3). Does anyone know if this version:

  libpqxx-devel-2.6.9-1.el4.rf
  libpqxx-2.6.9-1.el4.rf

will work with 8.2.4?

Thanks!

-- 
Steve Wampler -- [EMAIL PROTECTED]
The gods that smiled on your birth are now laughing out loud.
Re: [GENERAL] 8.2.4 signal 11 with large transaction
On 7/20/07, Tom Lane [EMAIL PROTECTED] wrote:
> I guess what we need to do is hack the emergency-recovery path for
> error-during-error-processing such that it will prevent trying to
> print a very long debug_query_string. Maybe we should just not try to
> print the command at all in this case, or maybe there's some
> intermediate possibility like only printing the first 1K or so.
>
> Thoughts?
>
>			regards, tom lane

I know my 2 cents are not worth that much, but as a DBA I would really
like for you to print at least some of the string causing the abend.
This would greatly assist in tracing the offending query.

Chris
[GENERAL] spool function in nbtree index
Hi,

I am trying to understand the function of the spool structure in btree
index creation. I understand that there are two of them: one for the
itups to be sorted and the other for dead tuples. Now, if I want to see
all the itups that are loaded into the main spool (not the spool for the
dead tuples), what function should I look into, and which
structure/variable holds this value?

thanks in advance,
TPC
[GENERAL] posgres tunning
I have a Drupal site working with Postgres that does not tolerate more
than 200 concurrent connections (it is not a hardware fault). Does
anybody know which Postgres parameters to adjust, and how? Or how to get
better performance in any other way?
Re: [GENERAL] Foreign key constraint question
Nis Jørgensen wrote:
> What if, for instance, I want to render a list of shapes? To render
> the shape, I need to get its data; to get its data, I need to know
> what type it is. ISTM that the easiest way to achieve this is storing
> the type info at the top of the table hierarchy.

Try adding tableoid to the list of columns retrieved. Even better, cast
that to regclass.

-- 
Alvaro Herrera                    http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
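Applied to the thread's shapes example, Alvaro's suggestion looks like this (the polygons table name is taken from the earlier discussion, assuming the subtype tables inherit from it):

```sql
-- tableoid is a hidden system column identifying the table each row
-- physically came from; casting it to regclass renders the table's
-- name instead of a bare OID.
SELECT tableoid::regclass AS shape_type, *
  FROM polygons;
```

This only works when the hierarchy uses PostgreSQL table inheritance; with the composite-foreign-key design discussed elsewhere in the thread, the discriminator column plays this role instead.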
Re: [GENERAL] encodings
On Sat, Jul 21, 2007 at 10:24:38PM +0200, Zlatko Matić wrote:
> If I have a UTF8 database, dump it and restore it as a WIN1250
> database, then dump it again and restore it as UTF8, would the
> structure of the database (schema) be exactly the same as the initial
> database, or would something change in the process? In other words,
> does the encoding influence only data stored in tables, or does it
> influence the database structure as well?

I can't think of how the encoding would influence the structure. Are
you seeing behavior that suggests otherwise?

-- 
Michael Fuhr
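The round trip can be tried directly. A minimal sketch (database names here are placeholders):

```sql
-- Create the intermediate database in the other encoding, then restore
-- the plain-text dump into it.  Schema DDL is just SQL text, so it
-- round-trips unchanged as long as every identifier and every data
-- value is representable in both encodings.
CREATE DATABASE mydb_win1250 ENCODING 'WIN1250';
-- then, from the shell:  pg_dump mydb | psql mydb_win1250
```

The practical risk is in the data, not the structure: UTF8 characters with no WIN1250 equivalent will make the first restore fail rather than silently alter anything.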
Re: [GENERAL] posgres tunning
On Sat, Jul 21, 2007 at 09:54:35PM -0500, Arnaldo Gandol wrote:
> I have a drupal site working with postgres that does not tolerate more
> than 200 concurrent connections (it is not a hardware fault).

What does "not tolerate" mean? Does the database refuse connections
beyond 200? Does it permit connections but performance suffers? Or
something else?

> Does anybody know how to adjust postgres parameters and what they are,
> or how to get better performance in any way?

We'll need to know more about the nature of the problem before we can
make recommendations. For configuration guidelines see the
performance-related documents at Power PostgreSQL:

http://www.powerpostgresql.com/Docs

-- 
Michael Fuhr
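If the server is simply refusing connections past the cap, the relevant postgresql.conf settings look roughly like this. The values below are illustrative starting points for 8.2-era hardware, not recommendations for this site:

```
# postgresql.conf (8.2 syntax; 8.2 accepts memory units like MB)
max_connections = 250          # hard cap on concurrent backends
shared_buffers = 32MB          # shared cache; raise with available RAM
work_mem = 4MB                 # per sort/hash, per backend - keep modest
                               # when max_connections is high
effective_cache_size = 512MB   # planner hint, not an allocation
```

For a web application, putting a connection pooler (e.g. pgpool) between Drupal and Postgres is often a better fix than raising max_connections, since hundreds of idle backends each carry memory overhead.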
Re: [GENERAL] 8.2.4 signal 11 with large transaction
On 7/20/07, Tom Lane [EMAIL PROTECTED] wrote:
> I guess what we need to do is hack the emergency-recovery path for
> error-during-error-processing such that it will prevent trying to
> print a very long debug_query_string. Maybe we should just not try to
> print the command at all in this case, or maybe there's some
> intermediate possibility like only printing the first 1K or so.

I think printing the first 1K would make more sense.

If I understand you correctly, the code path you are referring to is
the send_message_to_server_log() function in elog.c?

thanks,
-- 
Sibte Abbas
EnterpriseDB http://www.enterprisedb.com
[GENERAL] Debugging postgresql source on gdb
Hi,

I am trying to debug the Postgres source code using gdb. I set a
breakpoint on a function, but it never stops when it reaches that
function. Here are the steps:

1. gdb postgres
2. set args -D test (test is my db cluster)
3. b hashbuild (this is the function I want to break on)
4. run
5. start a client using ./psql test
6. issue a command to create a hash index, which calls the above
   function.

At this point I can see that the function is being executed, because I
see my printf statements, but the breakpoint never gets triggered.

Please advise.

regards,
Shreya
Re: [GENERAL] Debugging postgresql source on gdb
Shreya Bhargava [EMAIL PROTECTED] writes:
> 1. gdb postgres
> 2. set args -D test (test is my db cluster)
> 3. b hashbuild (this is the function i want to break on)
> 4. run

You've set the breakpoint in the postmaster process. It won't propagate
to child backends, at least not without special gdb pushups.

The way that I usually debug things is to start the client psql job,
then determine the PID of the backend serving it, and attach to that
process in gdb. In a development environment where you're likely to
have only one or a few backends running, this shell script might help:

  #!/bin/sh
  # tee /dev/tty is for user to see the set of procs considered
  PROCS=`ps auxww | \
    grep postgres: | \
    grep -v -e 'grep postgres:' -e 'postgres: stats' \
            -e 'postgres: writer' -e 'postgres: archiver' \
            -e 'postgres: logger' -e 'postgres: autovacuum' | \
    tee /dev/tty | \
    awk '{print $2}'`

  if [ `echo $PROCS | wc -w` -eq 1 ]
  then
    exec gdb $PGINSTROOT/bin/postgres -silent $PROCS
  else
    exec gdb $PGINSTROOT/bin/postgres -silent
  fi

This will attach directly to the target backend if there's only one;
otherwise you can examine the ps output to determine which PID to
attach to.

			regards, tom lane