Re: [HACKERS] streamlined standby procedure
On Wed, 2006-02-08 at 11:10 +0100, Csaba Nagy wrote:
> Another issue is that unless you got the archive_command right in the
> master server from the beginning, you will have to restart the server
> once you decide to build your standby... the archive_command is a
> start-up time parameter.

Much of your difficulty seems to come from your thinking that this parameter requires a restart. It doesn't - check it out.

The script need not be complex; you only need to put a wait loop in the restore script so that it waits for the next log file.

Best Regards, Simon Riggs

---(end of broadcast)---
TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] Schema search for default operator classes (was: [ADMIN] Cross schema Primary Key Indexes problem with datatype in the public schema)
On Wed, Feb 08, 2006 at 09:04:46PM -0500, Tom Lane wrote:
> http://archives.postgresql.org/pgsql-admin/2006-02/msg00084.php reports a
> problem with default btree operator classes that are not in pg_catalog:
> you can create a UNIQUE or PRIMARY KEY constraint that depends on such an
> opclass, but then when you pg_dump and try to reload, you get something
> like <snip>
>
> The only other solution I can see is to extend the ADD CONSTRAINT syntax
> to allow explicit specification of an opclass for each column. This might
> be a good thing to do in itself, but it looks like a new feature to me,
> rather than something we could reasonably apply as a bug fix. It would
> certainly be a much larger code change (affecting both pg_dump and the
> backend) than changing the behavior of GetDefaultOpClass. And it'd not
> fix the problem for existing dump files, either. So I'm leaning towards
> removing the search-path dependency of GetDefaultOpClass. Comments?

I'm for it. IMHO, if someone has access to the type, they should have access to the supporting machinery. Whoever created the type probably also created the operator class and intended it to be used. For comparison, we don't check the schema when looking up type input/output functions (well, we don't need to because we have the OID, but the idea is important).

W.r.t. the other option (per-column opclass specification): if we ever do COLLATE, users will be allowed to specify it on a per-column basis anyway. Then specifying opclasses becomes redundant. I've been seriously neglecting this patch but hope to get back to it soon...

Have a nice day,
-- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/
Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a tool for doing 5% of the work and then sitting around waiting for someone else to do the other 95% so you can sue them.
Re: [HACKERS] streamlined standby procedure
OK, this is news to me; I recall that last time I looked at the configuration docs it was start-up time, but I might be wrong.

[looking up the docs] OK, citing the 8.1 online docs:

17.5.3. Archiving

archive_command (string)

The shell command to execute to archive a completed segment of the WAL file series. If this is an empty string (the default), WAL archiving is disabled. Any %p in the string is replaced by the absolute path of the file to archive, and any %f is replaced by the file name only. Use %% to embed an actual % character in the command. For more information see Section 23.3.1. This option can only be set at server start or in the postgresql.conf file. It is important for the command to return a zero exit status if and only if it succeeds. Examples:

archive_command = 'cp %p /mnt/server/archivedir/%f'
archive_command = 'copy %p /mnt/server/archivedir/%f'  # Windows

It's at least confusing... it does say "or in the postgresql.conf file" too, but I must have overlooked that... and the "only" is really confusing there.

[looking at: http://www.powerpostgresql.com/Downloads/annotated_conf_80.html] OK, this is what confused me. The annotated conf file states it's a startup-time parameter. Well, good to know it's not...

Actually, my needs for PITR/standby building are mostly solved by now, but it's sure not an easy ride, and I really wonder if there is any readily available script bundle to do it for a Windows server... Maybe a standby-building tutorial is all that is needed...

Cheers, Csaba.

> Much of your difficulty seems to come from your thinking that this
> parameter requires a restart. It doesn't - check it out. The script need
> not be complex, you only need to put a wait loop in the restore script so
> that it waits for the next log file.
Re: [HACKERS] pg_hba.conf alternative
To give it to you straight: it's just to ease the minds of management. Someone pointed out to them how easy it really is to access the data, and this kind of started to make them feel uncomfortable. They know the admins are very computer literate and that any protection can be broken by them. But it's just like information locked inside a cabinet: it can be accessed by breaking in, right? But employees won't do it, because it's just not ethical to break into your employer's private stash. But if it was lying on a paper on a desk somewhere, even the most honest employee might peek at it for interest's sake. And this type of information can stir quite a bit, trust me.

That is all I was wondering about: whether there was a way to just lock it inside a cabinet with a tiny bit more security. After that you can always take measures to make sure they aren't installing malicious software, or taking information home. You can install software like Tripwire to make sure the binaries are kept intact, remove gcc, and so forth.

Tino Wildenhain wrote:
> Q Beukes schrieb:
>> Well, I am not looking for 100% security. I know that full access is
>> full access, and that even if you were to encrypt the system through
>> Postgres, a determined person WILL always be able to get it out if they
>> have system level access. All I wanted to do was to prevent the basic
>> SQL/Linux literate user from accessing the databases. At the moment it
>> is very easy for them to access the data. I trust that they won't go as
>> far as overwriting the system with a custom compiled version, or copying
>> the data and so forth. It's just that we would feel much better if we
>> knew the data wasn't as open as it is now; with a simple pg restart it
>> is all open. Can this only be done by maybe modifying the source to make
>> pg_hba fields statically compiled into the executable?
>
> Instead, you might want to read about SELinux. You can protect files even
> from root (unless they reboot ;) but really you should have only trusted
> people with admin accounts.
How come you have somebody untrusted as admin?

Regards, Tino
[HACKERS] User Defined Types in Java
Hi, I'd like to enable UDTs written in Java and made some initial trial and error. I don't get very far. Here's what I do: I take the 'complex' type example described in '31.11 User-Defined Types' and change it to use Java functions (see below). But I get:

ERROR: type complex does not exist

as soon as I execute the first CREATE statement. If I change the language from java to C and try again, the message is different (and more according to the docs):

NOTICE: type complex is not yet defined
DETAIL: Creating a shell type definition.

The documentation says "Creating a new base type requires implementing functions to operate on the type in a low-level language, usually C." I read that as meaning it would be possible to use other languages. Apparently java is not one of them. What can I do to change that?

Kind regards, Thomas Hallgren

CREATE FUNCTION complex_in(cstring) RETURNS complex
  AS 'org.postgresql.pljava.example.Complex.in'
  LANGUAGE java IMMUTABLE STRICT;

CREATE FUNCTION complex_out(complex) RETURNS cstring
  AS 'org.postgresql.pljava.example.Complex.out'
  LANGUAGE java IMMUTABLE STRICT;

CREATE FUNCTION complex_recv(internal) RETURNS complex
  AS 'org.postgresql.pljava.example.Complex.recv'
  LANGUAGE java IMMUTABLE STRICT;

CREATE FUNCTION complex_send(complex) RETURNS bytea
  AS 'org.postgresql.pljava.example.Complex.send'
  LANGUAGE java IMMUTABLE STRICT;

CREATE TYPE complex (
  internallength = 16,
  input = complex_in,
  output = complex_out,
  receive = complex_recv,
  send = complex_send,
  alignment = double
);
Re: [HACKERS] User Defined Types in Java
On Thu, Feb 09, 2006 at 01:08:01PM +0100, Thomas Hallgren wrote:
> Hi, I'd like to enable UDTs written in Java and made some initial trial
> and error. I don't get very far. Here's what I do: I take the 'complex'
> type example described in '31.11 User-Defined Types' and change it to use
> Java functions (see below). But I get:
> ERROR: type complex does not exist

If you look at the code it says in a comment:

/*
 * Only C-coded functions can be I/O functions. We enforce this
 * restriction here mainly to prevent littering the catalogs with
 * shell types due to simple typos in user-defined function
 * definitions.
 */

However, you could probably work around this like so:

CREATE FUNCTION dummy(cstring) RETURNS complex
  AS [random existing function] LANGUAGE INTERNAL;

This will create the shell type. You then create your other functions and finally the type, at which point you can delete the dummy function again. Roundabout, but it should work (I hope).

Note: if you have a validator on your Java code that tries to look up the return type, you might get some interesting issues.

Have a nice day,
-- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/
Re: [HACKERS] Schema search for default operator classes (was: [ADMIN] Cross schema Primary Key Indexes problem with datatype in the public schema)
Tom Lane wrote:
> Given that we only allow one default opclass for a datatype regardless of
> schema (see DefineOpClass), it's not really necessary for
> GetDefaultOpClass to restrict its search. I can think of some corner
> cases involving multiple binary-compatible-datatype matches where the
> restriction might give a unique answer when an unrestricted search would
> not, but I kinda doubt this would ever arise in practice.

How about doing the constrained search first, and reverting to the unconstrained behavior if it doesn't find the desired opclass?

-- Alvaro Herrera   http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Re: [HACKERS] User Defined Types in Java
Martijn van Oosterhout wrote:
> On Thu, Feb 09, 2006 at 01:08:01PM +0100, Thomas Hallgren wrote:
>> Hi, I'd like to enable UDTs written in Java and made some initial trial
>> and error. I don't get very far. Here's what I do: I take the 'complex'
>> type example described in '31.11 User-Defined Types' and change it to
>> use Java functions (see below). But I get:
>> ERROR: type complex does not exist
>
> If you look at the code it says in a comment:
>
> /*
>  * Only C-coded functions can be I/O functions. We enforce this
>  * restriction here mainly to prevent littering the catalogs with
>  * shell types due to simple typos in user-defined function
>  * definitions.
>  */

Ouch. Any chance of getting this changed? If we can agree on a good design I'd be happy to implement and submit it.

> However, you could probably work around this like so:
>
> CREATE FUNCTION dummy(cstring) RETURNS complex
>   AS [random existing function] LANGUAGE INTERNAL;
>
> This will create the shell type. You then create your other functions and
> finally the type, at which point you can delete the dummy function again.

Great, thanks. Then at least I can test if what I have in mind is feasible.

> Roundabout, but it should work (I hope). Note, if you have a validator on
> your java code that tries to look up the return type, you might get some
> interesting issues.

I don't yet. But I'll keep it in mind to watch out for shell types once I do.

Regards, Thomas Hallgren
Re: [HACKERS] User Defined Types in Java
On Thu, Feb 09, 2006 at 01:53:13PM +0100, Thomas Hallgren wrote:
>> If you look at the code it says in a comment:
>>
>> /*
>>  * Only C-coded functions can be I/O functions. We enforce this
>>  * restriction here mainly to prevent littering the catalogs with
>>  * shell types due to simple typos in user-defined function
>>  * definitions.
>>  */
>
> Ouch. Any chance of getting this changed? If we can agree on a good
> design I'd be happy to implement and submit it.

Actually, I think this whole automatic creation of a shell type is a bit silly anyway. Why not simply solve the problem directly, like so:

CREATE TYPE complex AS SHELL;

or

DECLARE TYPE complex;

Don't beat around the bush, say what you mean. Thoughts?

-- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/
Re: [HACKERS] User Defined Types in Java
Martijn van Oosterhout wrote:
> Actually, I think this whole automatic creation of a shell type is a bit
> silly anyway. Why not simply solve the problem directly, like so:
>
> CREATE TYPE complex AS SHELL;
>
> or
>
> DECLARE TYPE complex;
>
> Don't beat around the bush, say what you mean. Thoughts?

I'd perhaps take it even further. Why not just:

CREATE TYPE complex;

similar to a forward-declared struct in C. My favorite alternative is to do something like this:

CREATE TYPE complex (
  internallength = 16,
  input = complex_in,
  output = complex_out,
  ...
  AS 'filename' LANGUAGE C
);

A construct like that would remove a lot of clutter (and sources of error). The IMMUTABLE STRICT along with all return and parameter types are pre-defined anyway, and the likelihood of the functions living in different files (or, as in my case, different classes) or using different languages for one specific type is next to none.

Regards, Thomas Hallgren
Re: [HACKERS] pg_hba.conf alternative
If you want the data hidden from system administrators, you need to have the client encrypt it before storing it. Of course, that will have massive implications for your application.

Have you considered storing your data on an encrypted filesystem? I have no idea what kind of performance hit you would suffer, but you wouldn't have to change your application at all that way. Perhaps a private mount, so that only the postgresql process tree could see the decrypted bits?

-- Korry
Re: [HACKERS] Schema search for default operator classes (was: [ADMIN] Cross schema Primary Key Indexes problem with datatype in the public schema)
Alvaro Herrera [EMAIL PROTECTED] writes:
> Tom Lane wrote:
>> Given that we only allow one default opclass for a datatype regardless
>> of schema (see DefineOpClass), it's not really necessary for
>> GetDefaultOpClass to restrict its search.
>
> How about doing the constrained search first, and reverting to the
> unconstrained behavior if it doesn't find the desired opclass?

Seems like rather a lot of work to preserve a behavior that (AFAICS) isn't even documented anywhere.

regards, tom lane
Re: [HACKERS] User Defined Types in Java
Thomas Hallgren [EMAIL PROTECTED] writes:
> I'd like to enable UDTs written in Java

Does Java really give you enough control over the bit-level representation of an object for this goal to be considered sane? In particular, it seems unsafe to use a Java class as a PG UDT, because the method pointers wouldn't remain the same across backend runs.

regards, tom lane
Re: [HACKERS] User Defined Types in Java
Tom Lane wrote:
> Thomas Hallgren [EMAIL PROTECTED] writes:
>> I'd like to enable UDTs written in Java
>
> Does Java really give you enough control over the bit-level
> representation of an object for this goal to be considered sane?

Most definitely, yes!

> In particular, it seems unsafe to use a Java class as a PG UDT, because
> the method pointers wouldn't remain the same across backend runs.

I'm not sure I understand what you mean. I assume that all calls would come in through the java_call_handler. The java_call_handler will make sure that the correct class and method is called. How is that different from any other declared function?

Or are you thinking of the lifecycle of the binary data versus the lifecycle of the methods that manipulate it? They might be different, and that might cause problems. But that's true for a UDT defined in C as well.

Regards, Thomas Hallgren
Re: [HACKERS] User Defined Types in Java
Martijn van Oosterhout kleptog@svana.org writes:
> Actually, I think this whole automatic creation of a shell type is a bit
> silly anyway. Why not simply solve the problem directly like so:
> CREATE TYPE complex AS SHELL;

One of the unwritten consequences of the way that it works now is that only superusers can clutter the catalogs with shell types. Not sure how important that is, but I suspect that the system is not all that robust against use of shell types where a completed type is expected. You'd have to go over a lot of code with a fine-tooth comb before putting this ability into the hands of ordinary users, else you'd be creating loopholes for DOS attacks (or worse).

Having said that, I agree that this seems conceptually cleaner, though I'm not sure we could ever get rid of the old way because of backward-compatibility issues.

regards, tom lane
Re: [HACKERS] pg_hba.conf alternative
korry wrote:
> Have you considered storing your data on an encrypted filesystem? I have
> no idea what kind of performance hit you would suffer, but you wouldn't
> have to change your application at all that way. Perhaps a private mount
> so that only the postgresql process tree could see the decrypted bits?

Since what he is worried about is the ability of admins to get at the data by connecting to the postgres server (after changing pg_hba.conf), this will not make the slightest difference: the data would be decrypted before it ever got to the intruder. For encryption to be effective against some perceived threat, the data has to be encrypted before it gets anywhere the spy can see it.

There really are no magic solutions. Unfortunately, there is not a similar shortage of snake oil.

cheers, andrew
Re: [HACKERS] User Defined Types in Java
Tom Lane wrote:
> Martijn van Oosterhout kleptog@svana.org writes:
>> Actually, I think this whole automatic creation of a shell type is a bit
>> silly anyway. Why not simply solve the problem directly like so:
>> CREATE TYPE complex AS SHELL;
>
> One of the unwritten consequences of the way that it works now is that
> only superusers can clutter the catalogs with shell types.

I suppose we could restrict this variant to superusers, at least initially.

[snip]

> Having said that, I agree that this seems conceptually cleaner, though
> I'm not sure we could ever get rid of the old way because of backward
> compatibility issues.

They are not mutually exclusive, are they? I too like Martijn's suggestion.

cheers, andrew
[HACKERS] PostgreSQL 8.0.6 crash
PostgreSQL promptly uses all available memory for the query and subsequently crashes. I'm sure it can be corrected with a setting, but should it crash?

freedb=# create table ucode as select distinct ucode from cdtitles group by ucode having count(ucode) > 1 ;
server closed the connection unexpectedly
        This probably means the server terminated abnormally before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.

(Table layout)

freedb=# \d cdtitles
          Table "public.cdtitles"
  Column   |       Type        |            Modifiers
-----------+-------------------+-----------------------------------
 cdid      | integer           | default nextval('seq_cdid'::text)
 cdcode    | character varying |
 ucode     | character varying |
 title     | character varying |
 artist    | character varying |
 genre     | character varying |
 dgenre    | character varying |
 year      | integer           |
 processed | character varying |
 submit    | character varying |
 revision  | integer           |
 disclen   | integer           |
 tracks    | integer           |
 extkey    | integer           |
 offsets   | integer[]         |
Indexes:
    "cdtitles_ucode" btree (ucode) CLUSTER

(Size of table)

freedb=# select count(*) from cdtitles ;
  count
---------
 1927912
(1 row)

(Sample row)

freedb=# select * from cdtitles where cdid = 11 limit 1;
 cdid |  cdcode  |         ucode         |          title          |   artist   | genre | dgenre | year | processed | submit | revision | disclen | tracks | extkey | offsets
------+----------+-----------------------+-------------------------+------------+-------+--------+------+-----------+--------+----------+---------+--------+--------+---------
   11 | 320e9e14 | 0ea0-14-3399-0006ad6a | Var Slutar V�arna (CD1) | Streaplers | rock  |        |    0 | cddbd v1.5PL3 Copyright (c) Steve Scherf et al. | dBpowerAMP V4.0 | 0 |  | 19 |  | {21,150,13920,31180,43664,59907,73163,86629,98447,113684,130205,141682,154920,166783,179028,192689,205161,222164,234649,249692,264340,3744,0}
(1 row)
Re: [HACKERS] streamlined standby procedure
On 2/7/06, Tom Lane [EMAIL PROTECTED] wrote:
> Andrew Rawnsley [EMAIL PROTECTED] writes:
>> IMHO the #1 priority in the current PITR/WAL shipping system is to make
>> the standby able to tolerate being shut down and restarted, i.e.
>> actually having a true standby mode and not the current method of doing
>> it only on startup.
>
> How is shutting down the standby a good idea? Seems like that will block
> the master too --- or at least result in WAL log files piling up rapidly.
> If the standby goes off-line, abandoning it and starting from a fresh
> base backup when you are ready to restart it seems like the most likely
> recovery path. For sure I don't see this as the #1 priority.

For regular recovery it is indeed unnecessary. But I would also put this as #1 TODO for the long-running hot-standby case. The requirement to start all over makes the current setup rather cumbersome. And #2 would be running read-only queries while in recovery :)

-- marko
[HACKERS] Feature request - Add microsecond as a time unit for interval
The interval datatype can go to microsecond precision, though currently the smallest unit is seconds. Microseconds are represented as decimal places, e.g. 5 microseconds is 0.000005 seconds. To insert microseconds I have to use the following line:

($1 * 0.000001 || ' seconds')::interval

Being able to specify microseconds as a unit would avoid hacks like this and improve support for sub-second intervals.

David
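[Editor's note] The workaround is a plain unit conversion: scale the microsecond count down to fractional seconds and append the unit word. A minimal sketch of that arithmetic (the helper name is invented for illustration):

```python
def microseconds_to_interval_literal(us):
    """Build the '<n> seconds' string the workaround concatenates,
    converting a microsecond count to fractional seconds."""
    return "%s seconds" % (us / 1_000_000)
```

With a native microsecond unit, the literal would instead just be e.g. '5 microseconds', with no arithmetic on the client side.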
Re: [HACKERS] Upcoming re-releases
Devrim GUNDUZ [EMAIL PROTECTED] wrote:
> On Wed, 2006-02-08 at 11:28 -0500, Stephen Frost wrote:
>> I'd really like to see the multiple DB connections with different
>> Kerberos credentials go in to 8.1.3. It solved the problem we were
>> having authenticating to PostgreSQL using Kerberos from Apache. We were
>> also able to get phppgadmin to use Kerberos authentication with this
>> patch (which is very nice). That patch also went into the 8.1.2-2
>> release of the Debian packages.
>
> [OT] So Debian has a patch that is not in 8.1.2? I can't believe that
> they are doing that -- personally I'm against adding any patch into
> binaries that is not in the core. [/OT]

You haven't been looking too closely at the way some distributions are building their packages, then. At least two of the distributions I use regularly (Gentoo and Debian) have the habit of adding a load of patches during package build. And not all of those go back upstream, to put it mildly...

Regards, Alex.
-- Opportunity is missed by most people because it is dressed in overalls and looks like work. -- Thomas A. Edison
[HACKERS] {I} One Information required...
Hi all,

When I try to run initdb I get the message:

creating template1 database in c:/postgres/data/base/1 ... Execution of PostgreSQL by a user with administrative permissions is not permitted. The server must be started under an unprivileged user ID to prevent possible system security compromises. See the documentation for more information on how to properly start the server.

OS I am using: Windows 2000 Professional.

I understand this message. I created a new user "postgres" with a password different than "postgres", and also gave the postgres user permission to c:\postgres (the installation folder). When I run "runas /noprofile /user:postgres cmd" and then run initdb, I get the above error message. This is with Postgres 8.1; in the past with Postgres 7.x I did not face this problem.

One surprising observation: when I run initdb, the cmd shell shows

The files belonging to this database system will be owned by user "pk00". This user must also own the server process.

where pk00 is the account I logged in with on the machine.

Question: How to configure initdb to use "postgres" instead of the "pk00" user? Anyone kindly help in this regard. Thank You.

With warm regards,
Premnath.k.n
Siemens Communication Software, 2nd floor, Shankara Narayana Building-1, 25/1 Mahatma Gandhi Road, Bangalore - 560 001, India.
Phone: Off: 91-80-2506 2400, 2559 4067 - 73 Extn: 6369 Fax: 91-80-2506 2406 Resi: 91-80-28436106 Mobile: 91-98445 19798
Email: [EMAIL PROTECTED]
Re: [HACKERS] PostgreSQL 8.0.6 crash
More info: the machine has 512M RAM and 512M swap. work_mem is set to: work_mem = 1024

This shouldn't have crashed, should it?

> PostgreSQL promptly uses all available memory for the query and
> subsequently crashes. I'm sure it can be corrected with a setting, but
> should it crash?
>
> freedb=# create table ucode as select distinct ucode from cdtitles group
> by ucode having count(ucode) > 1 ;
> server closed the connection unexpectedly
>         This probably means the server terminated abnormally before or
> while processing the request.
> The connection to the server was lost. Attempting reset: Failed.
>
> [table layout, row count, and sample row quoted from the previous message]
Re: [HACKERS] Upcoming re-releases
Alexander Schreiber wrote:
> At least two of the distributions I use regularly (Gentoo and Debian)
> have the habit of adding a load of patches during package build. And not
> all of those go back upstream, to put it mildly...

And they are not always sensible. A while back the Gentoo packagers decided to "improve" the messages from initdb; the patch they applied was completely erroneous, and I believe they have now fixed it. I tend to be suspicious of distro-applied patches. (I do like the change to use log rotation that recent RH/Fedora packages have made.)

cheers, andrew
Re: [HACKERS] {I} One Information required...
Hey,

Simply create a new non-administrator user, say "postgres", with a password. Give this user write permissions to the empty data directory. Then log in as this user and run initdb like you did before.

You are also going to have to register the postgres service under this user. You can do this by running pg_ctl like this:

pg_ctl.exe register -N PostgreSQL -U [user] -P [user password]

After this you can run:

net start PostgreSQL

to start postgres, or you can use the Windows tool to start/stop services. If you don't want to register the service, you will have to log in as this user every time in order to start postgresql. Services are a handy Windows feature.

There is also a tool in Windows with which you can run commands under other users, similar to su in Linux, but I can't remember the name. Google for it if you wish to take this route.

Any questions, feel free to mail me ;)

regards, Quintin Beukes

Premnath, KN wrote:
> Hi all, When I try to run initdb I get the message "creating template1
> database in c:/postgres/data/base/1 ... Execution of PostgreSQL by a user
> with administrative permissions is not permitted. The server must be
> started under an unprivileged user ID to prevent possible system security
> compromises. See the documentation for more information on how to
> properly start the server."
>
> OS I am using: Windows 2000 Professional. I understand this message. I
> created a new user "postgres" with a password different than "postgres",
> and also gave the postgres user permission to c:\postgres (the
> installation folder). When I run "runas /noprofile /user:postgres cmd"
> and then run initdb, I get the above error message. This is with Postgres
> 8.1; in the past with Postgres 7.x I did not face this problem.
>
> One surprising observation: when I run initdb, the cmd shell shows "The
> files belonging to this database system will be owned by user pk00. This
> user must also own the server process."
> where pk00 is the account I logged in with on the machine.
>
> Question: How to configure initdb to use "postgres" instead of the "pk00"
> user? Anyone kindly help in this regard. Thank You.
>
> With warm regards,
> Premnath.k.n
> Siemens Communication Software, 2nd floor, Shankara Narayana Building-1,
> 25/1 Mahatma Gandhi Road, Bangalore - 560 001, India.
> Phone: Off: 91-80-2506 2400, 2559 4067 - 73 Extn: 6369 Fax: 91-80-2506
> 2406 Resi: 91-80-28436106 Mobile: 91-98445 19798
> Email: [EMAIL PROTECTED]
Re: [HACKERS] PostgreSQL 8.0.6 crash
Mark Woodward [EMAIL PROTECTED] writes: PostgreSQL promptly uses all available memory for the query and subsequently crashes. I'll bet a nickel this is on a Linux machine with OOM kill enabled. What does the postmaster log show --- or look in the kernel log to see if it mentions anything about an out-of-memory kill. freedb=# create table ucode as select distinct ucode from cdtitles group by ucode having count(ucode) > 1 ; server closed the connection unexpectedly What does EXPLAIN show as the plan for that? If it's a hash aggregate, try with enable_hashagg turned off. How many distinct ucode values are there in the table? regards, tom lane
Re: [HACKERS] User Defined Types in Java
Thomas Hallgren [EMAIL PROTECTED] writes: Tom Lane wrote: In particular, it seems unsafe to use a Java class as a PG UDT, because the method pointers wouldn't remain the same across backend runs. I'm not sure I understand what you mean. Doesn't a Java object contain a method-table pointer that is used at runtime to dispatch method calls on the object? If the object is dumped bitwise into a PG table, and then reloaded into another backend session (which maybe has loaded Java at different addresses), that pointer will be invalid. regards, tom lane
Re: [HACKERS] PostgreSQL 8.0.6 crash
Mark Woodward [EMAIL PROTECTED] writes: PostgreSQL promptly uses all available memory for the query and subsequently crashes. I'll bet a nickel this is on a Linux machine with OOM kill enabled. What does the postmaster log show --- or look in the kernel log to see if it mentions anything about an out-of-memory kill. That's a no-brainer. Maybe I should have rephrased the condition; crash may be the wrong word, it was definitely killed by out of memory. Sorry. freedb=# create table ucode as select distinct ucode from cdtitles group by ucode having count(ucode) > 1 ; server closed the connection unexpectedly What does EXPLAIN show as the plan for that? If it's a hash aggregate, try with enable_hashagg turned off. How many distinct ucode values are there in the table? There are over 1.7M distinct rows, about 200K non-distinct that I want to somehow remove. It does have hash aggregate:

freedb=# explain select distinct ucode from cdtitles group by ucode having count(ucode) > 1 ;
                                       QUERY PLAN
-----------------------------------------------------------------------------------------
 Unique  (cost=106536.32..106537.32 rows=200 width=32)
   ->  Sort  (cost=106536.32..106536.82 rows=200 width=32)
         Sort Key: ucode
         ->  HashAggregate  (cost=106527.68..106528.68 rows=200 width=32)
               Filter: (count(ucode) > 1)
               ->  Seq Scan on cdtitles  (cost=0.00..96888.12 rows=1927912 width=32)
(6 rows)

Well, shouldn't hash aggregate respect work memory limits?
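[Editor's note: the query above boils down to "find the values that occur more than once." The same GROUP BY / HAVING shape can be sketched outside PostgreSQL with hypothetical toy data, using Python's stdlib sqlite3 module; the table and values here are illustrative only, not from the thread:]

```python
import sqlite3

# Toy stand-in for the cdtitles table (hypothetical data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cdtitles (ucode TEXT)")
conn.executemany("INSERT INTO cdtitles VALUES (?)",
                 [("a",), ("b",), ("b",), ("c",), ("c",), ("c",)])

# Same shape as the thread's query: GROUP BY + HAVING count(...) > 1
# selects exactly the values that appear more than once.
dupes = conn.execute(
    "SELECT ucode FROM cdtitles GROUP BY ucode HAVING count(ucode) > 1"
).fetchall()
print(sorted(row[0] for row in dupes))  # -> ['b', 'c']
```

Note that with GROUP BY ucode the outer DISTINCT is redundant, which is also why the plan shows a Unique node sitting on top of the aggregate.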
Re: [HACKERS] Upcoming re-releases
Stephen Frost wrote: Oh, pah, I'm there already, as 'Snow-Man', and I've heard all about it. Sorry that Debian/stable releases haven't been coming out as frequently as they really should have been. We're working on that, honest! The only thing that I hate is that libpq defaults to searching for the local socket in /var/postgresql/ or thereabouts. It really drives me crazy and I've banned the libpq packages from my system. -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc.
Re: [HACKERS] Feature request - Add microsecond as a time unit for interval
David Tulloh [EMAIL PROTECTED] writes: To insert microseconds I have to use the following line, ($1*0.01 || ' seconds')::interval Actually, the preferred way to do that is to use the numeric-times-interval operator, eg

regression=# select 7 * '0.01 second'::interval;
  ?column?
-------------
 00:00:00.07
(1 row)

This generalizes to any scale factor you care to use, eg fortnights... so I don't see a pressing need to add microseconds. regards, tom lane
Re: [HACKERS] PostgreSQL 8.0.6 crash
Mark Woodward [EMAIL PROTECTED] writes: -> HashAggregate (cost=106527.68..106528.68 rows=200 width=32) Filter: (count(ucode) > 1) -> Seq Scan on cdtitles (cost=0.00..96888.12 rows=1927912 width=32) Well, shouldn't hash aggregate respect work memory limits? If the planner thought there were 1.7M distinct values, it wouldn't have used hash aggregate ... but it only thinks there are 200, which IIRC is the default assumption. Have you ANALYZEd this table lately? Meanwhile, I'd strongly recommend turning off OOM kill. That's got to be the single worst design decision in the entire Linux kernel. regards, tom lane
Re: [HACKERS] User Defined Types in Java
Tom Lane wrote: Thomas Hallgren [EMAIL PROTECTED] writes: Tom Lane wrote: In particular, it seems unsafe to use a Java class as a PG UDT, because the method pointers wouldn't remain the same across backend runs. I'm not sure I understand what you mean. Doesn't a Java object contain a method-table pointer that is used at runtime to dispatch method calls on the object? If the object is dumped bitwise into a PG table, and then reloaded into another backend session (which maybe has loaded Java at different addresses), that pointer will be invalid. Ah, now I understand. I'm not planning on using the binary image of the Java object as such. There's no API that would allow me to do that (JNI uses handles, not pointers). Java has a number of standards for how objects can be serialized/deserialized in a safe, JVM-independent way. If the class of the object is known at all times (as it will be for all UDTs), I can use a DataInputStream/DataOutputStream pair to read/write data. It's very similar to how the complex example type uses pg_sendxxx and pg_getmsgxxx functions. The Java implementation of that example will also use 16 bytes for storage. More complex types can use the standard Java serialization mechanism. It will waste some more space (must be declared variable in length) but it maintains data integrity over time through serialVersionUID's (a 64-bit number generated based on the constitution of the class). JDBC defines a standard that involves usage of three different interfaces: SQLData, SQLInput, and SQLOutput. I plan to support that also. Kind Regards, Thomas Hallgren
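[Editor's note: the point being made — serialize the field values, never the in-memory object image with its pointers — can be sketched in a language-neutral way. A hypothetical complex type stored as two 8-byte doubles, 16 bytes total, analogous to the example type above; Python's struct stands in for the DataOutputStream/DataInputStream pair:]

```python
import struct

# A "complex" value serialized as two IEEE-754 doubles (big-endian),
# mirroring a fixed 16-byte on-disk format with no embedded pointers.
def complex_to_bytes(re: float, im: float) -> bytes:
    return struct.pack(">dd", re, im)

def complex_from_bytes(data: bytes) -> tuple:
    return struct.unpack(">dd", data)

wire = complex_to_bytes(1.5, -2.25)
assert len(wire) == 16                            # fixed-size representation
assert complex_from_bytes(wire) == (1.5, -2.25)   # round-trips in any process
```

Because only field values cross the wire, the bytes remain valid no matter where a later backend session happens to load its runtime.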
Re: [HACKERS] PostgreSQL 8.0.6 crash
Mark Woodward [EMAIL PROTECTED] writes: -> HashAggregate (cost=106527.68..106528.68 rows=200 width=32) Filter: (count(ucode) > 1) -> Seq Scan on cdtitles (cost=0.00..96888.12 rows=1927912 width=32) Well, shouldn't hash aggregate respect work memory limits? If the planner thought there were 1.7M distinct values, it wouldn't have used hash aggregate ... but it only thinks there are 200, which IIRC is the default assumption. Have you ANALYZEd this table lately? I thought that I had, but I did CLUSTER at some point. Or maybe I didn't, I'm not sure. I have been working on a file reader/parser/importer program. I created and dropped the DB so many times it is hard to keep track. Still, I would say that it is extremely bad behavior for not having stats, wouldn't you think? Meanwhile, I'd strongly recommend turning off OOM kill. That's got to be the single worst design decision in the entire Linux kernel. How is this any different than FreeBSD having a default 512M process size limit? On FreeBSD, the process would have been killed earlier.
Re: [HACKERS] PostgreSQL 8.0.6 crash
Mark Woodward [EMAIL PROTECTED] writes: Still, I would say that it is extremely bad behavior for not having stats, wouldn't you think? Think of it as a kernel bug. Meanwhile, I'd strongly recommend turning off OOM kill. That's got to be the single worst design decision in the entire Linux kernel. How is this any different than FreeBSD having a default 512M process size limit? On FreeBSD, the process would have been killed earlier. No, the process would have been politely told it was out of memory, and would have told you the same. If the kernel's way of notifying a process that it's out of memory is SIGKILL, there is not a damn thing that we can do to operate robustly. regards, tom lane
Re: [HACKERS] PostgreSQL 8.0.6 crash
Mark Woodward [EMAIL PROTECTED] writes: Still, I would say that it is extremely bad behavior for not having stats, wouldn't you think? Think of it as a kernel bug. While I respect your viewpoint that the Linux kernel should not kill an offending process if the system runs out of memory, I sort of disagree in that OOM kill is a disaster preventer. It should be viewed as a last-ditch him-or-me choice the kernel needs to make, and it should not get into that position in the first place. Regardless, it is troubling that failing to have current stats can cause the system, with a large data set, to exceed working memory limits. I think it is still a bug. While it may manifest itself as a pg crash on Linux because of a feature with which you have issue, the fact remains that PG is exceeding its working memory limit. Should failing to run ANALYZE cause this behavior? If so, how does this get clearly documented? If not, can it be prevented? Meanwhile, I'd strongly recommend turning off OOM kill. That's got to be the single worst design decision in the entire Linux kernel. How is this any different than FreeBSD having a default 512M process size limit? On FreeBSD, the process would have been killed earlier. No, the process would have been politely told it was out of memory, and would have told you the same. If the kernel's way of notifying a process that it's out of memory is SIGKILL, there is not a damn thing that we can do to operate robustly. Let's not waste time on a Linux discussion. Linux and FreeBSD have their strengths, and a debate on the dubious merits of either is a long and contentious one. Both systems are fine, just with some subtle differences in design goals.
Re: [HACKERS] PostgreSQL 8.0.6 crash
Tom Lane wrote: Mark Woodward [EMAIL PROTECTED] writes: Still, I would say that it is extremely bad behavior for not having stats, wouldn't you think? Think of it as a kernel bug. Meanwhile, I'd strongly recommend turning off OOM kill. That's got to be the single worst design decision in the entire Linux kernel. How is this any different than FreeBSD having a default 512M process size limit? On FreeBSD, the process would have been killed earlier. No, the process would have been politely told it was out of memory, and would have told you the same. If the kernel's way of notifying a process that it's out of memory is SIGKILL, there is not a damn thing that we can do to operate robustly. And we have docco on it: http://www.postgresql.org/docs/current/static/kernel-resources.html#AEN18105 which I assume is still current. Back in October I mentioned the OOM killer to Andrew Morton - his reaction was a very pained look and a curt admonition: turn it off. cheers andrew
Re: [HACKERS] PostgreSQL 8.0.6 crash
Mark Woodward [EMAIL PROTECTED] writes: I think it is still a bug. While it may manifest itself as a pg crash on Linux because of a feature with which you have issue, the fact remains that PG is exceeding its working memory limit. The problem is that *we have no way to know what that limit is* --- short of exceeding it and being summarily killed. (BTW, the kernel doesn't know what the limit is either.) There is simply not any way to operate robustly under the OOM-kill regime. While I'll certainly acknowledge that it'd be nice if hashagg had spill-to-disk capability, that wouldn't alter the fundamental fact that if you want reliable behavior you MUST turn off OOM kill. There is not anything we can do at the database level to work around that kernel-level misdesign. regards, tom lane
Re: [HACKERS] pg_hba.conf alternative
Since what he is worried about is the ability of admins to get at the data by connecting to the postgres server (after changing pg_hba.conf), this will not make the slightest difference - the data would be decrypted before it ever got to the intruder. I was suggesting that pg_hba.conf could be stored in the same encrypting filesystem. -- Korry
Re: [HACKERS] User Defined Types in Java
On Thu, Feb 09, 2006 at 09:33:35AM -0500, Tom Lane wrote: Martijn van Oosterhout kleptog@svana.org writes: Actually, I think this whole automatic creation of a shell type is a bit silly anyway. Why not simply solve the problem directly like so: CREATE TYPE complex AS SHELL; One of the unwritten consequences of the way that it works now is that only superusers can clutter the catalogs with shell types. Not sure how important that is, but I suspect that the system is not all that robust against use of shell types where a completed type is expected. You'd have to go over a lot of code with a fine-tooth comb before putting this ability into the hands of ordinary users, else you'd be creating loopholes for DOS attacks (or worse). Would it be reasonable to set this up so you can only create a shell type within an explicit transaction and that you are required to define the type completely before commit? That would minimize the exposure to incomplete types. I don't know if the structure exists to support this (dynamic on-commit trigger). Having said that, I agree that this seems conceptually cleaner, though I'm not sure we could ever get rid of the old way because of backward compatibility issues. True. But this way allows us to remove the restriction of only allowing C functions for type input/output. Have a nice day, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a tool for doing 5% of the work and then sitting around waiting for someone else to do the other 95% so you can sue them.
Re: [HACKERS] pg_hba.conf alternative
korry wrote: Since what he is worried about is the ability of admins to get at the data by connecting to the postgres server (after changing pg_hba.conf), this will not make the slightest difference - the data would be decrypted before it ever got to the intruder. I was suggesting that pg_hba.conf could be stored in the same encrypting filesystem. Then how can it be changed? What if you need to allow access from, say, another user or another network? Oh, the admins have to change it ... In the end you have to trust your admins or fire them and hire some you do trust. cheers andrew
Re: [HACKERS] User Defined Types in Java
Martijn van Oosterhout kleptog@svana.org writes: On Thu, Feb 09, 2006 at 09:33:35AM -0500, Tom Lane wrote: You'd have to go over a lot of code with a fine-tooth comb before putting this ability into the hands of ordinary users, else you'd be creating loopholes for DOS attacks (or worse). Would it be reasonable to set this up so you can only create a shell type within an explicit transaction and that you are required to define the type completely before commit. I don't see any very easy way to enforce that, and it would break existing datatype-definition scripts if we did. However, since posting that it's occurred to me that we could do a lot to make the shell-type situation more robust. The problem with shell types is that the pg_type row is mostly left as zeroes, which means that any code that inspects the type without checking typisdefined is going to get a bogus value that may make it behave strangely. But we've since come up with a much better approach: see pseudotypes. We should set things up so that a shell type has valid but dummy entries in its pg_type row, including references to I/O functions that will just report an error if invoked. Also a shell type should be properly owned by its creator, which would let the creator drop it if it had been a mistake (right now, I think you can't get rid of it except by DELETE FROM pg_type :-(). With an arrangement like that, I'd feel much less worried about shell-type-related bugs. regards, tom lane
Re: [HACKERS] pg_hba.conf alternative
I was suggesting that pg_hba.conf could be stored in the same encrypting filesystem. Then how can it be changed? What if you need to allow access from, say, another user or another network? Oh, the admins have to change it ... Not all admins are equal... the admin that takes care of the database would obviously have the decrypt password for the encrypting filesystem. That admin (but not other admins) can change the pg_hba.conf file. -- Korry
Re: [HACKERS] User Defined Types in Java
Tom Lane wrote: Martijn van Oosterhout kleptog@svana.org writes: On Thu, Feb 09, 2006 at 09:33:35AM -0500, Tom Lane wrote: You'd have to go over a lot of code with a fine-tooth comb before putting this ability into the hands of ordinary users, else you'd be creating loopholes for DOS attacks (or worse). Would it be reasonable to set this up so you can only create a shell type within an explicit transaction and that you are required to define the type completely before commit. I don't see any very easy way to enforce that, and it would break existing datatype-definition scripts if we did. What do you think of my earlier suggestion? Skip all the 'create function' statements and just add the AS 'filename' LANGUAGE C to the CREATE TYPE. It could be implemented while maintaining backward compatibility I think? Regards, Thomas Hallgren
Re: [HACKERS] PostgreSQL 8.0.6 crash
Mark Woodward [EMAIL PROTECTED] writes: I think it is still a bug. While it may manifest itself as a pg crash on Linux because of a feature with which you have issue, the fact remains that PG is exceeding its working memory limit. The problem is that *we have no way to know what that limit is* --- short of exceeding it and being summarily killed. (BTW, the kernel doesn't know what the limit is either.) There is simply not any way to operate robustly under the OOM-kill regime. No, you misunderstand what I said: the working memory as defined in postgresql.conf. I don't care about the OS debate. While I'll certainly acknowledge that it'd be nice if hashagg had spill-to-disk capability, that wouldn't alter the fundamental fact that if you want reliable behavior you MUST turn off OOM kill. There is not anything we can do at the database level to work around that kernel-level misdesign. Again, regardless of OS used, hashagg will exceed working memory as defined in postgresql.conf. At issue is: would a lack of ANALYZE justify this behavior? If so, it should be documented.
Re: [HACKERS] pg_hba.conf alternative
korry wrote: I was suggesting that pg_hba.conf could be stored in the same encrypting filesystem. Then how can it be changed? What if you need to allow access from, say, another user or another network? Oh, the admins have to change it ... Not all admins are equal... the admin that takes care of the database would obviously have the decrypt password for the encrypting filesystem. That admin (but not other admins) can change the pg_hba.conf file. Why would you not simply set this up on a separate machine to which only the trusted admins had access? Most data centers I am familiar with use single-purpose machines anyway. If someone is trusted as root on your box they can screw you no matter what you do. Pretending otherwise is just folly. cheers andrew
Re: [HACKERS] Upcoming re-releases
* Alvaro Herrera ([EMAIL PROTECTED]) wrote: Stephen Frost wrote: Oh, pah, I'm there already, as 'Snow-Man', and I've heard all about it. Sorry that Debian/stable releases haven't been coming out as frequently as they really should have been. We're working on that, honest! The only thing that I hate is that libpq defaults to searching for the local socket in /var/postgresql/ or thereabouts. It really drives me crazy and I've banned the libpq packages from my system. Perhaps /var/run/postgresql/? Where do you think it should go...? Thanks, Stephen
Re: [HACKERS] pg_hba.conf alternative
Why would you not simply set this up on a separate machine to which only the trusted admins had access? Most data centers I am familiar with use single-purpose machines anyway. If someone is trusted as root on your box they can screw you no matter what you do. Pretending otherwise is just folly. Agreed - that would be a much better (easier and more secure) solution where practical. -- Korry
Re: [HACKERS] PostgreSQL 8.0.6 crash
Mark Woodward [EMAIL PROTECTED] writes: Again, regardless of OS used, hashagg will exceed working memory as defined in postgresql.conf. So? If you've got OOM kill enabled, it can zap a process whether it's strictly adhered to work_mem or not. The OOM killer is entirely capable of choosing a victim process whose memory footprint hasn't changed materially since it started (eg, the postmaster). regards, tom lane
[HACKERS] Server Programming in C: palloc() and pfree()
I'm having some problems when using pfree() in functions in C. Calling it from psql gives the error below with both versions of function insert() [1,2] if pfree() is enabled: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. conexão com servidor foi perdida (connection to the server was lost) The strange thing is that it doesn't happen with the function delstr() [3], which also has pfree(). What am I doing wrong?

// 1)
void insert(char *str, const int start, const char *piece)
{
    int lstr = strlen(str);
    int lnew = lstr + strlen(piece) + 1;
    char *temp = palloc(lnew);
    memset((void *) temp, 0, lnew);
    /*
    FILE *debug;
    debug = fopen("/dev/xconsole", "w");
    fprintf(debug, "insert('%s', %d, '%s')\n", str, start, piece);
    //fprintf(debug, "0) '%s'\n", temp);
    */
    if (start <= lstr + 1)
    {
        strncpy(temp, str, start - 1);
        strcat(temp, piece);
        char *ptr = str + start - 1;
        strcat(temp, ptr);
        strcpy(str, temp);
    }
//  pfree(temp);  // <-- here it doesn't work...
    /*
    fprintf(debug, "-> '%s'\n", str);
    fflush(debug);
    fclose(debug);
    */
}

// 2)
void insert(char *str, const int start, const char *piece)
{
    int i, j;
    char *temp = palloc(strlen(str) + strlen(piece) + 1);
    if (start - 1 <= strlen(str))
    {
        for (i = 0; i < start - 1; i++)
            temp[i] = str[i];
        for (j = i; j < strlen(piece) + i; j++)
            temp[j] = piece[j - i];
        for (; i < strlen(str); i++, j++)
            temp[j] = str[i];
        temp[j] = '\0';
        strcpy(str, temp);
    }
//  pfree(temp);  // doesn't work...
}

// 3)
void delstr(char *str, const int start, const int size)
{
    int i, j;
    char *temp = palloc(strlen(str) - size + 1);
    for (i = 0; (i < start - 1) && (i < strlen(str)); i++)
        temp[i] = str[i];
    for (j = start + size - 1; j < strlen(str); i++, j++)
        temp[i] = str[j];
    temp[i] = '\0';
    strcpy(str, temp);
    pfree(temp);  // <-- here it works!
}

-- Regards, Rodrigo Hjort GTI - Projeto PostgreSQL CELEPAR - Cia de Informática do Paraná http://www.pr.gov.br
Re: [HACKERS] streamlined standby procedure
On Thu, Feb 09, 2006 at 10:37:34AM +0100, Csaba Nagy wrote: option can only be set at server start or in the postgresql.conf Yeah, this is something that was actually discussed on -docs recently. I believe -HEAD was changed so that every parameter that used to have that text now says: option can be set in postgresql.conf or on the server command line. Maybe a standby-building tutorial is all that is needed... Having that would be very handy indeed. In fact, if you want to get rough notes put together I'd be happy to edit it into a finished product, though I'm not sure of the best place to put it. I could certainly post it somewhere on pervasive-postgres.com if nothing else... -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.com work: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461
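[Editor's note: the heart of such a tutorial is the restore script, which, as noted earlier in the thread, only needs a wait loop that blocks until the next WAL segment appears in the archive. A minimal sketch of that loop, with hypothetical paths and parameter names, not a production restore_command:]

```python
import os
import shutil
import time

def restore_wal(archive_dir: str, filename: str, dest_path: str,
                poll_seconds: float = 1.0) -> None:
    """Wait until the requested WAL segment shows up in the archive,
    then copy it to where the recovering server asked for it."""
    src = os.path.join(archive_dir, filename)
    while not os.path.exists(src):   # the standby blocks here between segments
        time.sleep(poll_seconds)
    shutil.copy(src, dest_path)
```

A real restore script would also need a way to break out of the loop when the standby is promoted, e.g. checking for a trigger file inside the loop.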
Re: [HACKERS] streamlined standby procedure
On Thu, Feb 09, 2006 at 04:44:20PM +0200, Marko Kreen wrote: On 2/7/06, Tom Lane [EMAIL PROTECTED] wrote: Andrew Rawnsley [EMAIL PROTECTED] writes: IMHO the #1 priority in the current PITR/WAL shipping system is to make the standby able to tolerate being shut down and restarted, i.e. actually having a true standby mode and not the current method of doing it only on startup. How is shutting down the standby a good idea? Seems like that will block the master too --- or at least result in WAL log files piling up rapidly. If the standby goes off-line, abandoning it and starting from a fresh base backup when you are ready to restart it seems like the most likely recovery path. For sure I don't see this as the #1 priority. For regular recovery it is indeed unnecessary. But I would also put this as #1 TODO for the long-running hot-standby case. The requirement to start all over makes the current setup rather cumbersome. What happens right now when you want to bring the standby up? Do you have to kill it out of recovery mode and re-start, forcing it to replay WAL again anyway? And #2 would be running read-only queries while in recovery :) That would be damn handy :) -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.com work: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461
Re: [HACKERS] PostgreSQL 8.0.6 crash
* Tom Lane ([EMAIL PROTECTED]) wrote: Mark Woodward [EMAIL PROTECTED] writes: Again, regardless of OS used, hashagg will exceed working memory as defined in postgresql.conf. So? If you've got OOM kill enabled, it can zap a process whether it's strictly adhered to work_mem or not. The OOM killer is entirely capable of choosing a victim process whose memory footprint hasn't changed materially since it started (eg, the postmaster). Unless I've missed something here, disabling the OOM killer doesn't really solve the problem here. What solves the problem is running ANALYZE, but it's still certainly the case that it would make some sense for the Postmaster, upon realizing that it's gone well beyond its work_mem boundary, to ideally switch plans to one which isn't going to exceed its work_mem limit. Less ideally, it could give up and issue an error to the user instead of running the box out of memory. I appreciate that this is probably not very easy to implement but I do believe the current situation could be improved in this regard. Thanks, Stephen
Re: [HACKERS] PostgreSQL 8.0.6 crash
Mark Woodward [EMAIL PROTECTED] writes: Again, regardless of OS used, hashagg will exceed working memory as defined in postgresql.conf. So? If you've got OOM kill enabled, it can zap a process whether it's strictly adhered to work_mem or not. The OOM killer is entirely capable of choosing a victim process whose memory footprint hasn't changed materially since it started (eg, the postmaster). Sorry, I must strongly disagree here. The postgresql.conf working mem is a VERY IMPORTANT setting; it is intended to limit the consumption of memory by the postgresql process. Often times PostgreSQL will work alongside other application servers on the same system, and in fact may be a sub-part of application servers on the same system. (This is, in fact, how it is used on one of my site servers.) Clearly, if the server will use 1000 times this number (set for 1024K, but exceeds 1G) this is broken, and it may cause other systems to fail or perform very poorly. If it is not something that can be fixed, it should be clearly documented.
Re: [HACKERS] Feature request - Add microsecond as a time unit for interval
On Thu, Feb 09, 2006 at 10:30:30AM -0500, Tom Lane wrote: David Tulloh [EMAIL PROTECTED] writes: To insert microseconds I have to use the following line, ($1*0.01 || ' seconds')::interval Actually, the preferred way to do that is to use the numeric-times-interval operator, eg

regression=# select 7 * '0.01 second'::interval;
  ?column?
-------------
 00:00:00.07
(1 row)

This generalizes to any scale factor you care to use, eg fortnights... so I don't see a pressing need to add microseconds. Something that's always struck me about intervals in PostgreSQL is how obtuse it is to generate one. Is there some reasonable way we could add support for something like interval('7 microseconds')? Or perhaps seconds(0.07)? -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.com work: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461
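[Editor's note: the constructor-style interface being asked for is exactly what most procedural languages expose directly; for comparison, here is the same number-times-base-unit idiom from Tom's reply, sketched with Python's stdlib datetime module:]

```python
from datetime import timedelta

# The scale-factor idiom from the thread: N * (one base unit).
seven_us = 7 * timedelta(microseconds=1)
assert seven_us == timedelta(microseconds=7)
assert seven_us.total_seconds() == 7e-06
```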
Re: [HACKERS] PostgreSQL 8.0.6 crash
On Thu, Feb 09, 2006 at 02:03:41PM -0500, Mark Woodward wrote: Mark Woodward [EMAIL PROTECTED] writes: Again, regardless of OS used, hashagg will exceed working memory as defined in postgresql.conf. So? If you've got OOM kill enabled, it can zap a process whether it's strictly adhered to work_mem or not. The OOM killer is entirely capable of choosing a victim process whose memory footprint hasn't changed materially since it started (eg, the postmaster). Sorry, I must strongly disagree here. The postgresql.conf working mem is a VERY IMPORTANT setting, it is intended to limit the consumption of memory by the postgresql process. Often times PostgreSQL will work along Actually, no, it's not designed for that at all. side other application servers on the same system, infact, may be a sub-part of application servers on the same system. (This is, in fact, how it is used on one of my site servers.) Clearly, if the server will use 1000 times this number (Set for 1024K, but exceeds 1G) this is broken, and it may cause other systems to fail or perform very poorly. If it is not something that can be fixed, it should be clearly documented. work_mem (integer) Specifies the amount of memory to be used by internal sort operations and hash tables before switching to temporary disk files. The value is specified in kilobytes, and defaults to 1024 kilobytes (1 MB). Note that for a complex query, several sort or hash operations might be running in parallel; each one will be allowed to use as much memory as this value specifies before it starts to put data into temporary files. Also, several running sessions could be doing such operations concurrently. So the total memory used could be many times the value of work_mem; it is necessary to keep this fact in mind when choosing the value. Sort operations are used for ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash joins, hash-based aggregation, and hash-based processing of IN subqueries. 
So it says right there that it's very easy to exceed work_mem by a very large amount. Granted, this is a very painful problem to deal with and will hopefully be changed at some point, but it's pretty clear as to how this works. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.com work: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] PostgreSQL 8.0.6 crash
On Thu, Feb 09, 2006 at 11:42:57AM -0500, Mark Woodward wrote: Mark Woodward [EMAIL PROTECTED] writes: Still, I would say that it is extremely bad behavior for not having stats, wouldn't you think? Think of it as a kernel bug. While I respect your viewpoint that the Linux kernel should not kill an offending process if the system runs out of memory, I sort of disagree in that OOM is a disaster preventer. It should be viewed as a last-ditch him-or-me choice the kernel needs to make and it should not get into that position in the first place. I've had processes run away on a FreeBSD box before, to the extent of running entirely out of swap and memory. Instead of random processes just dying for no apparent reason, I instead started getting a bunch of out-of-memory errors. No disaster, I just fixed the problem and life went on. Well, ok, the box did become rather unresponsive when my fix for the problem meant that all of a sudden there were about 950 perl processes trying to run at the same time. I wish I'd captured top showing 900+ runnable processes. But after a few minutes the processes started completing and exiting and everything was soon back to normal. I rather doubt Linux would survive that... -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.com work: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] PostgreSQL 8.0.6 crash
* Jim C. Nasby ([EMAIL PROTECTED]) wrote: On Thu, Feb 09, 2006 at 02:03:41PM -0500, Mark Woodward wrote: If it is not something that can be fixed, it should be clearly documented. work_mem (integer) Specifies the amount of memory to be used by internal sort operations and hash tables before switching to temporary disk files. The value is specified in kilobytes, and defaults to 1024 kilobytes (1 MB). Note that for a complex query, several sort or hash operations might be running in parallel; each one will be allowed to use as much memory as this value specifies before it starts to put data into temporary files. Also, several running sessions could be doing such operations concurrently. So the total memory used could be many times the value of work_mem; it is necessary to keep this fact in mind when choosing the value. Sort operations are used for ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash joins, hash-based aggregation, and hash-based processing of IN subqueries. So it says right there that it's very easy to exceed work_mem by a very large amount. Granted, this is a very painful problem to deal with and will hopefully be changed at some point, but it's pretty clear as to how this works. It also says that when it goes over, it'll spill to disk. Additionally, we're talking about one hash here, not multiple ones. It seems at least misleading as, if I understand correctly, Postgres isn't actually actively checking to see if the amount of memory used by an in-progress hash creation has gone over the limit but rather it guesses at how much memory will be used during the planning stage to decide if a hash plan is possible or not. That guess can certainly be wrong but there's nothing in place to handle the situation where the guess is wrong... Thanks, Stephen
Re: [HACKERS] PostgreSQL 8.0.6 crash
On Thu, Feb 09, 2006 at 02:03:41PM -0500, Mark Woodward wrote: Mark Woodward [EMAIL PROTECTED] writes: Again, regardless of OS used, hashagg will exceed working memory as defined in postgresql.conf. So? If you've got OOM kill enabled, it can zap a process whether it's strictly adhered to work_mem or not. The OOM killer is entirely capable of choosing a victim process whose memory footprint hasn't changed materially since it started (eg, the postmaster). Sorry, I must strongly disagree here. The postgresql.conf working mem is a VERY IMPORTANT setting, it is intended to limit the consumption of memory by the postgresql process. Often times PostgreSQL will work along Actually, no, it's not designed for that at all. I guess that's a matter of opinion. side other application servers on the same system, infact, may be a sub-part of application servers on the same system. (This is, in fact, how it is used on one of my site servers.) Clearly, if the server will use 1000 times this number (Set for 1024K, but exceeds 1G) this is broken, and it may cause other systems to fail or perform very poorly. If it is not something that can be fixed, it should be clearly documented. work_mem (integer) Specifies the amount of memory to be used by internal sort operations and hash tables before switching to temporary disk files. The value is specified in kilobytes, and defaults to 1024 kilobytes (1 MB). Note that for a complex query, several sort or hash operations might be running in parallel; each one will be allowed to use as much memory as this value specifies before it starts to put data into temporary files. Also, several running sessions could be doing such operations concurrently. So the total memory used could be many times the value of work_mem; it is necessary to keep this fact in mind when choosing the value. Sort operations are used for ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash joins, hash-based aggregation, and hash-based processing of IN subqueries. 
So it says right there that it's very easy to exceed work_mem by a very large amount. Granted, this is a very painful problem to deal with and will hopefully be changed at some point, but it's pretty clear as to how this works. Well, if you read that paragraph carefully, I'll admit that I was a little too literal in my statement applying it to the process and not specific functions within the process, but in the documentation: each one will be allowed to use as much memory as this value specifies before it starts to put data into temporary files. According to the documentation the behavior of hashagg is broken. It did not use up to this amount and then start to use temporary files, it used 1000 times this limit and was killed by the OS. I think it should be documented, as the behavior is unpredictable. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] [GENERAL] Sequences/defaults and pg_dump
On Wed, Feb 08, 2006 at 10:57:20PM -0500, Bruce Momjian wrote: TODO has: * %Disallow changing default expression of a SERIAL column Sure, the DROP SERIAL I proposed would rather change the data type to int by dropping the default and would delete referring pg_depend entries. Read it more as a kind of drop autoincrement functionality for this column. The problem I see (but you might see it differently) is that you can't drop this autoincrement stuff without also dropping the column once you forbid to change the default (yeah I know, changing the default is even worse and leaves you with incorrect dependencies). I assume an ALTER COLUMN ... TYPE INTEGER would drop the SERIAL part. So far it doesn't because it doesn't know the difference between serial and int. What about this proposal for serial columns: - DROP DEFAULT drops serial and removes dependencies - SET DEFAULT forbidden, issues a hint to DROP DEFAULT first Is it also desired to convert an int column to a serial column? (moving to -hackers) Joachim -- Joachim Wieland [EMAIL PROTECTED] C/ Usandizaga 12 1°B ICQ: 37225940 20002 Donostia / San Sebastian (Spain) GPG key available ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] PostgreSQL 8.0.6 crash
Stephen Frost [EMAIL PROTECTED] writes: Unless I've missed something here, disabling the OOM killer doesn't really solve the problem here. Well in a way it does. Postgres would get an out-of-memory error from malloc which it would handle properly and the world would be happy. Except not quite, since I think an out of memory error still means that backend exits instead of just that query failing. That means if you have an application running such as apache then all subsequent transactions on that connection fail too, instead of just the transaction that misbehaved. And as the other poster mentioned, having Postgres use up every available byte of memory isn't really very friendly to anything else running on the box. It doesn't seem like a bad idea to have a max_memory parameter that if a backend ever exceeded it would immediately abort the current transaction. That would let an application continue operating normally after getting an error. -- greg ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] PostgreSQL 8.0.6 crash
Greg Stark [EMAIL PROTECTED] writes: Except not quite, since I think an out of memory error still means that backend exits instead of just that query failing. Not at all! PG will recover from this perfectly well ... if it's given the opportunity, rather than being SIGKILLed. It doesn't seem like a bad idea to have a max_memory parameter that if a backend ever exceeded it would immediately abort the current transaction. See ulimit (or local equivalent). regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] [GENERAL] Sequences/defaults and pg_dump
Joachim Wieland wrote: On Wed, Feb 08, 2006 at 10:57:20PM -0500, Bruce Momjian wrote: TODO has: * %Disallow changing default expression of a SERIAL column Sure, the DROP SERIAL I proposed would rather change the data type to int by dropping the default and would delete referring pg_depend entries. Read it more as a kind of drop autoincrement functionality for this column. The problem I see (but you might see it differently) is that you can't drop this autoincrement stuff without also dropping the column once you forbid to change the default (yeah I know, changing the default is even worse and leaves you with incorrect dependencies). I assume an ALTER COLUMN ... TYPE INTEGER would drop the SERIAL part. So far it doesn't because it doesn't know the difference between serial and int. What about this proposal for serial columns: - DROP DEFAULT drops serial and removes dependencies - SET DEFAULT forbidden, issues a hint to DROP DEFAULT first Is it also desired to convert an int column to a serial column? I think the only sane solution is if a SERIAL column is changed to INTEGER, the default and dependencies are removed. Do you want a TODO for that? -- Bruce Momjian| http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] User Defined Types in Java
Thomas Hallgren [EMAIL PROTECTED] writes: What do you think of my earlier suggestion. Skip all the 'create function' statements and just add the AS 'filename' LANGUAGE C to the CREATE TYPE. Very little, as it makes unjustifiable assumptions about all the datatype's support functions being predictably propertied. (There's more than one possible signature, let alone any secondary properties such as volatility or other stuff we might think of in future.) I think it'd be unworkable from pg_dump's point of view, as well. regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] PostgreSQL 8.0.6 crash
Greg Stark wrote: Well in a way it does. Postgres would get an out-of-memory error from malloc which it would handle properly and the world would be happy. Except not quite, since I think an out of memory error still means that backend exits instead of just that query failing. Not at all -- the transaction is aborted, but the backend can continue working perfectly fine. -- Alvaro Herrerahttp://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] PostgreSQL 8.0.6 crash
* Tom Lane ([EMAIL PROTECTED]) wrote: Greg Stark [EMAIL PROTECTED] writes: It doesn't seem like a bad idea to have a max_memory parameter that if a backend ever exceeded it would immediately abort the current transaction. See ulimit (or local equivalent). As much as setting ulimit in shell scripts is fun, I have to admit that I really don't see it happening very much. Having Postgres set a ulimit for itself may not be a bad idea and would perhaps provide the least surprise for new users. Perhaps shared_buffers + 10*work_mem + maintenance_work_mem + max_stack_depth? Then errors from running out of memory could provide a 'HINT: Memory consumption went well over allowed work_mem, perhaps you need to run ANALYZE or raise work_mem?'. Just some thoughts, Stephen
Re: [HACKERS] [GENERAL] Sequences/defaults and pg_dump
Bruce Momjian pgman@candle.pha.pa.us writes: Is it also desired to convert an int column to a serial column? I think the only sane solution is if a SERIAL column is changed to INTEGER, the default and dependencies are removed. Do you want a TODO for that? If we are going to do something like that, I think we should take a hard look at the idea I floated of putting SERIAL back to a pure creation-time macro for type and default expression. This is getting to have way too much hidden behavior, and what we are buying for it is very little as of 8.1. regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] [GENERAL] Sequences/defaults and pg_dump
Tom Lane wrote: Bruce Momjian pgman@candle.pha.pa.us writes: Is it also desired to convert an int column to a serial column? I think the only sane solution is if a SERIAL column is changed to INTEGER, the default and dependencies are removed. Do you want a TODO for that? If we are going to do something like that, I think we should take a hard look at the idea I floated of putting SERIAL back to a pure creation-time macro for type and default expression. This is getting to have way too much hidden behavior, and what we are buying for it is very little as of 8.1. OK, but I was confused how 8.1 has improved the way SERIAL works. -- Bruce Momjian| http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] Upcoming re-releases
Stephen Frost wrote: * Alvaro Herrera ([EMAIL PROTECTED]) wrote: Stephen Frost wrote: Oh, pah, I'm there already, as 'Snow-Man' and I've heard all about it. Sorry that Debian/stable releases haven't been coming out as frequently as they really should have been. We're working on that, honest! The only thing that I hate is that libpq defaults to searching the local socket in /var/postgresql/ or thereabouts. It really drives me crazy and I've banned the libpq packages from my system. Perhaps /var/run/postgresql/? Where do you think it should go...? Where the upstream package puts it, /tmp ... But yeah, I know about the Debian Policy. -- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] Upcoming re-releases
Alvaro Herrera wrote: Stephen Frost wrote: * Alvaro Herrera ([EMAIL PROTECTED]) wrote: The only thing that I hate is that libpq defaults to searching the local socket in /var/postgresql/ or thereabouts. It really drives me crazy and I've banned the libpq packages from my system. Perhaps /var/run/postgresql/? Where do you think it should go...? Where the upstream package puts it, /tmp ... But yeah, I know about the Debian Policy. Maybe this should be a configure flag, just like the port number is. cheers andrew ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] [GENERAL] Sequences/defaults and pg_dump
Bruce Momjian pgman@candle.pha.pa.us writes: Tom Lane wrote: If we are going to do something like that, I think we should take a hard look at the idea I floated of putting SERIAL back to a pure creation-time macro for type and default expression. This is getting to have way too much hidden behavior, and what we are buying for it is very little as of 8.1. OK, but I was confused how 8.1 has improved the way SERIAL works. I already said this up-thread, but: a plain old DEFAULT nextval('foo') now has the properties that you can't drop sequence foo without dropping the default expression, and renaming the sequence isn't a problem. That takes care of the worst problems that we invented the SERIAL dependency for. If we dropped the special sequence-to-column dependency that SERIAL now adds, and got rid of the special pg_dump behavior for serials, we'd have less code instead of more and it would work a lot more transparently. The only thing we'd lose is that dropping a column originally declared as serial wouldn't implicitly drop the sequence. That's somewhat annoying but I'm not convinced that preserving that one thing is worth the amount of infrastructure that's getting built (and I hope you don't think that Joachim's proposal will be the end of it). Basically we're sticking more and more band-aids on a design that wasn't such a great idea to start with. regards, tom lane ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] PostgreSQL 8.0.6 crash
Stephen Frost [EMAIL PROTECTED] writes: * Tom Lane ([EMAIL PROTECTED]) wrote: Greg Stark [EMAIL PROTECTED] writes: It doesn't seem like a bad idea to have a max_memory parameter that if a backend ever exceeded it would immediately abort the current transaction. See ulimit (or local equivalent). As much as setting ulimit in shell scripts is fun, I have to admit that I really don't see it happening very much. For one thing it requires admin access to the startup scripts to arrange this. And it's always cluster-wide. Having a GUC parameter would mean it could be set per-session. Even if the GUC parameter were just implemented by calling setrlimit it might be useful. -- greg ---(end of broadcast)--- TIP 6: explain analyze is your friend
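Greg's suggestion that a per-session GUC could be implemented by just calling setrlimit() is easy to sketch in standalone C. This is only an illustration of the mechanism, not PostgreSQL code; the 256 MB cap and the helper name try_alloc_under_cap are invented for the example. With an address-space limit in place, an oversized malloc() fails cleanly with NULL, which a backend can turn into an ordinary error instead of attracting the OOM killer:

```c
#include <stdlib.h>
#include <sys/resource.h>

/* Cap this process's address space at `cap` bytes, then try to
 * malloc `request` bytes.  Returns 1 if the allocation failed
 * cleanly (the recoverable-error case), 0 if it succeeded,
 * -1 if the limit could not be set. */
int try_alloc_under_cap(size_t cap, size_t request)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_AS, &rl) != 0)
        return -1;
    rl.rlim_cur = cap;              /* lower only the soft limit */
    if (setrlimit(RLIMIT_AS, &rl) != 0)
        return -1;

    void *p = malloc(request);
    if (p == NULL)
        return 1;                   /* ENOMEM, handled gracefully */
    free(p);
    return 0;
}
```

Calling try_alloc_under_cap(256UL << 20, 1UL << 30) requests 1 GB against a 256 MB cap and should fail cleanly. This is essentially what `ulimit -v` does from a startup script; a GUC would just move the same setrlimit() call inside the backend so it could differ per session.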
Re: [HACKERS] [GENERAL] Sequences/defaults and pg_dump
Tom Lane wrote: The only thing we'd lose is that dropping a column originally declared as serial wouldn't implicitly drop the sequence. Wasn't that the primary purpose that the main coder for dependencies did the work for? AFAIR the fact that the sequence wasn't dropped was a big gotcha. Everyone was annoyed any time they wanted to experiment with creating and dropping a table. -- Alvaro Herrerahttp://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] Upcoming re-releases
Andrew Dunstan [EMAIL PROTECTED] writes: Maybe this should be a configure flag, just like the port number is. It is ... that isn't the issue, the problem is exactly that Debian chooses to exercise the option to make their installations different from everyone else's. regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] PostgreSQL 8.0.6 crash
Stephen Frost [EMAIL PROTECTED] writes: * Tom Lane ([EMAIL PROTECTED]) wrote: Greg Stark [EMAIL PROTECTED] writes: It doesn't seem like a bad idea to have a max_memory parameter that if a backend ever exceeded it would immediately abort the current transaction. See ulimit (or local equivalent). As much as setting ulimit in shell scripts is fun, I have to admit that I really don't see it happening very much. For one thing it requires admin access to the startup scripts to arrange this. And it's always cluster-wide. Having a GUC parameter would mean it could be set per-session. Even if the GUC parameter were just implemented by calling setrlimit it might be useful. I don't think it needs a new GUC parameter, just having hashagg respect work_mem would fix the problem. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
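"Having hashagg respect work_mem" would mean tracking actual memory use while the hash table is built and spilling partial results to a temp file once the budget is hit, instead of trusting the planner's row estimate. A toy sketch of that idea (nothing like the real executor: the 4-group BUDGET stands in for work_mem, a static array stands in for the temp file, and hashagg_count is an invented name):

```c
#define BUDGET 4                 /* max in-memory groups; stands in for work_mem */
#define SPILL_MAX 64

typedef struct { int key; long count; } Group;

static Group spilled[SPILL_MAX]; /* stands in for a temp file ("tape") */
static int nspilled;

static void spill(const Group *g, int n)
{
    for (int i = 0; i < n; i++)
        spilled[nspilled++] = g[i];
}

/* Count occurrences of `target` in `keys`, never holding more than
 * BUDGET groups in memory at once.  When the budget is exceeded we
 * spill the partial aggregates and keep going; a final pass merges
 * the partial counts for the requested key. */
long hashagg_count(const int *keys, int nkeys, int target)
{
    Group mem[BUDGET];
    int ngroups = 0;

    nspilled = 0;
    for (int i = 0; i < nkeys; i++) {
        int j;
        for (j = 0; j < ngroups; j++)
            if (mem[j].key == keys[i]) { mem[j].count++; break; }
        if (j == ngroups) {          /* a new group arrived */
            if (ngroups == BUDGET) { /* budget hit: spill, don't grow */
                spill(mem, ngroups);
                ngroups = 0;
            }
            mem[ngroups].key = keys[i];
            mem[ngroups].count = 1;
            ngroups++;
        }
    }
    spill(mem, ngroups);

    long total = 0;                  /* merge partial aggregates */
    for (int i = 0; i < nspilled; i++)
        if (spilled[i].key == target)
            total += spilled[i].count;
    return total;
}
```

The point of the sketch is only that enforcement has to happen at execution time, where actual consumption is known; the real executor work is considerably more involved, and as of this thread it had not been done.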
Re: [HACKERS] [GENERAL] Sequences/defaults and pg_dump
Alvaro Herrera [EMAIL PROTECTED] writes: Tom Lane wrote: The only thing we'd lose is that dropping a column originally declared as serial wouldn't implicitly drop the sequence. Wasn't that the primary purpose that the main coder for dependencies did the work for? My recollection is that the dependency for serials was added as an afterthought without too much consideration of the long-term implications. It was a cheap way of sort-of solving an immediate problem using a mechanism that we were putting in place anyway. But what we've got now is a misbegotten cross between the theory that a SERIAL is a unitary object you mustn't muck with the innards of, and the theory that SERIAL is just a macro that sets up an initial state you can ALTER to your heart's content later. IMHO we should make a choice between those plans and stick to it, not add more and more infrastructure to let you ALTER things you shouldn't be altering. Either a SERIAL is a black box or it isn't. If it is not to be a black box, we need to reduce rather than increase the amount of hidden semantics. regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] Server Programming in C: palloc() and pfree()
On Thu, Feb 09, 2006 at 04:16:51PM -0200, Rodrigo Hjort wrote: I'm having some problems when using pfree() on functions in C. Calling it on psql gives the exception below on both versions of function insert() [1,2] if pfree() is enabled: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. conexão com servidor foi perdida (connection to the server was lost) The strange thing is that it doesn't happen with the function delstr() [3], which has pfree(). What could I be doing wrong? You haven't said how you are calling the functions. For example, did you declare them as cstring or text? text is not null-terminated but a varlena struct. So if you use text you'll end up with buffer overruns. Could you provide a complete example with SQL? -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a tool for doing 5% of the work and then sitting around waiting for someone else to do the other 95% so you can sue them.
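Martijn's point that text is a length-prefixed varlena rather than a C string is the classic trap here. A standalone mock-up (not the real PostgreSQL varlena macros; fake_text, make_text, and text_to_cstring_copy are invented for the illustration) showing why strlen()/strcpy() on the payload overrun the buffer while copying by the stored length does not:

```c
#include <string.h>
#include <stdlib.h>

/* Simplified stand-in for PostgreSQL's varlena "text": a length
 * header followed by the payload bytes, with NO trailing '\0'. */
typedef struct {
    int  len;       /* total size including the header */
    char data[];    /* payload, not null-terminated */
} fake_text;

fake_text *make_text(const char *s)
{
    size_t n = strlen(s);
    fake_text *t = malloc(sizeof(int) + n);
    t->len = (int)(sizeof(int) + n);
    memcpy(t->data, s, n);          /* note: no '\0' is stored */
    return t;
}

/* Correct: trust the length header, never strlen() the payload.
 * strlen(t->data) would read past the allocation looking for a
 * terminator that was never written. */
char *text_to_cstring_copy(const fake_text *t)
{
    size_t n = t->len - sizeof(int);
    char *s = malloc(n + 1);
    memcpy(s, t->data, n);
    s[n] = '\0';
    return s;
}
```

In real server code the same discipline is spelled VARSIZE/VARDATA, and the result buffer would come from palloc() rather than malloc().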
Re: [HACKERS] Upcoming re-releases
On Thu, Feb 09, 2006 at 03:16:29PM -0500, Tom Lane wrote: Andrew Dunstan [EMAIL PROTECTED] writes: Maybe this should be a configure flag, just like the port number is. It is ... that isn't the issue, the problem is exactly that Debian chooses to exercise the option to make their installations different from everyone else's. FWIW, I prefer the Debian location. AFAICS the only rationale for putting it in /tmp is because it's always been there. I also agree with suggestions to move ssh and X11 sockets out of /tmp. /tmp should be for, well, temporary files... Have a nice day, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a tool for doing 5% of the work and then sitting around waiting for someone else to do the other 95% so you can sue them.
Re: [HACKERS] PostgreSQL 8.0.6 crash
On Thu, Feb 09, 2006 at 02:35:34PM -0500, Tom Lane wrote: Greg Stark [EMAIL PROTECTED] writes: Except not quite, since I think an out of memory error still means that backend exits instead of just that query failing. Not at all! PG will recover from this perfectly well ... if it's given the opportunity, rather than being SIGKILLed. FWIW, the problem is mainly from the situation where some process accesses a piece of memory that has been swapped out, but there is no memory available to swap the page in. Or write to a page marked copy-on-write. What do you do? There is no way to return -ENOMEM from a normal memory access and PostgreSQL wouldn't handle that anyway. When people talk about disabling the OOM killer, it doesn't stop the SIGKILL behaviour, it just causes the kernel to return -ENOMEM for malloc() much much earlier... (ie when you still actually have memory available). Have a nice day, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a tool for doing 5% of the work and then sitting around waiting for someone else to do the other 95% so you can sue them.
Re: [HACKERS] Upcoming re-releases
Martijn van Oosterhout kleptog@svana.org writes: FWIW, I prefer the Debian location. AFAICS the only rationale for putting it in /tmp is because it's always been there. Actually, it's because it's certain to be there and be accessible to unprivileged users. If we tried to change to something like /var/postgresql, then getting that directory made and properly permissioned would be an additional installation-time stumbling block in the way of newbies. That's not an issue for prepackaged builds that (at some level) require root privs to install, but I don't foresee it becoming the default for builds from source. Especially not given the client/server compatibility problems it'd create. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] PostgreSQL 8.0.6 crash
On Feb 9, 2006, at 11:22 AM, Stephen Frost wrote: * Tom Lane ([EMAIL PROTECTED]) wrote: Mark Woodward [EMAIL PROTECTED] writes: Again, regardless of OS used, hashagg will exceed working memory as defined in postgresql.conf. So? If you've got OOM kill enabled, it can zap a process whether it's strictly adhered to work_mem or not. The OOM killer is entirely capable of choosing a victim process whose memory footprint hasn't changed materially since it started (eg, the postmaster). Unless I've missed something here, disabling the OOM killer doesn't really solve the problem here. What solves the problem is running ANALYZE but it's still certainly the case that it would make some sense for the Postmaster, upon realizing that it's gone well beyond its work_mem boundary, to ideally switch plans to one which isn't going to exceed its work_mem limit. Less ideally, it could give up and issue an error to the user instead of running the box out of memory. So is the work_mem parameter that is set not strictly enforced? Is it more like a suggestion as to what it SHOULD use and not a hard limit on how much memory each process is ALLOWED to use? If his work_mem is set to 1 MB and that process is using 500 MB for tasks that are supposed to stay in work_mem then doesn't that mean that that limit is not really a hard limit but rather a suggestion? Rick ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] PostgreSQL 8.0.6 crash
Martijn van Oosterhout kleptog@svana.org writes: When people talk about disabling the OOM killer, it doesn't stop the SIGKILL behaviour, Yes it does, because the situation will never arise. it just causes the kernel to return -ENOMEM for malloc() much much earlier... (ie when you still actually have memory available). Given the current price of disk, there is no sane reason not to have enough swap space configured to make this not-a-problem. The OOM kill mechanism was a reasonable solution for running systems that were not expected to be too reliable anyway on small hardware, but if you're trying to run a 24/7 server you're simply incompetent if you don't disable it. regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] Server Programming in C: palloc() and pfree()
Rodrigo Hjort [EMAIL PROTECTED] writes: I'm having some problems when using pfree() on functions in C. I think your pfree is just the bearer of bad news, ie, it's the victim of a memory clobber that you've already executed. Take another look at your string manipulation --- that strncpy followed by strcat in particular looks pretty dangerous, because strncpy doesn't guarantee a trailing null. [ looks again... ] Hmm, not to mention that you are overwriting the input str, which is bad enough in itself, but you are doing so with a string longer than the original. regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
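Tom's strncpy() warning is worth spelling out: strncpy(dst, src, n) writes no terminator at all when strlen(src) >= n. A minimal sketch of the defensive pattern (the helper name safe_copy is invented for the example):

```c
#include <string.h>

/* Copy src into dst (dstsize bytes), truncating if needed and
 * always leaving dst null-terminated.  strncpy() alone does NOT
 * write a '\0' when src is dstsize-1 bytes or longer. */
void safe_copy(char *dst, size_t dstsize, const char *src)
{
    strncpy(dst, src, dstsize - 1);
    dst[dstsize - 1] = '\0';
}
```

Note that terminating the copy would not rescue the code Tom is describing: writing a longer string over the input str clobbers memory regardless, so the fix there is to allocate a fresh buffer of the required size (with palloc() in server code) and write into that instead.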
Re: [HACKERS] PostgreSQL 8.0.6 crash
Martijn van Oosterhout kleptog@svana.org writes: When people talk about disabling the OOM killer, it doesn't stop the SIGKILL behaviour, Yes it does, because the situation will never arise. it just causes the kernel to return -ENOMEM for malloc() much much earlier... (ie when you still actually have memory available). Given the current price of disk, there is no sane reason not to have enough swap space configured to make this not-a-problem. The OOM kill mechanism was a reasonable solution for running systems that were not expected to be too reliable anyway on small hardware, but if you're trying to run a 24/7 server you're simply incompetent if you don't disable it. And people say I have STRONG opinions. Don't hold back Tom, let us know what you really think. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] PostgreSQL 8.0.6 crash
On Friday 10 February 2006 00:53, Mark Woodward wrote: Martijn van Oosterhout kleptog@svana.org writes: When people talk about disabling the OOM killer, it doesn't stop the SIGKILL behaviour, Yes it does, because the situation will never arise. it just causes the kernel to return -ENOMEM for malloc() much much earlier... (ie when you still actually have memory available). Given the current price of disk, there is no sane reason not to have enough swap space configured to make this not-a-problem. The OOM kill mechanism was a reasonable solution for running systems that were not expected to be too reliable anyway on small hardware, but if you're trying to run a 24/7 server you're simply incompetent if you don't disable it. And people say I have STRONG opinions. Don't hold back Tom, let us know what you really think. Read http://linux-mm.org/OverCommitAccounting or file://usr/src/linux/Documentation/vm/overcommit-accounting It is a good idea to have enough swap space. If not, set vm.overcommit_memory=2 ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] PostgreSQL 8.0.6 crash
On Feb 9, 2006, at 12:49 PM, Mark Woodward wrote:
>> On Thu, Feb 09, 2006 at 02:03:41PM -0500, Mark Woodward wrote:
>>>> Mark Woodward <[EMAIL PROTECTED]> writes:
>>>>> Again, regardless of OS used, hashagg will exceed working memory as defined in postgresql.conf.
>>>>
>>>> So? If you've got OOM kill enabled, it can zap a process whether it's strictly adhered to work_mem or not. The OOM killer is entirely capable of choosing a victim process whose memory footprint hasn't changed materially since it started (eg, the postmaster).
>>>
>>> Sorry, I must strongly disagree here. The postgresql.conf working mem is a VERY IMPORTANT setting; it is intended to limit the consumption of memory by the postgresql process.
>>
>> Actually, no, it's not designed for that at all.
>
> I guess that's a matter of opinion.
>
>>> Oftentimes PostgreSQL will work alongside other application servers on the same system; in fact, it may be a sub-part of application servers on the same system. (This is, in fact, how it is used on one of my site servers.) Clearly, if the server will use 1000 times this number (set for 1024K, but exceeds 1G) this is broken, and it may cause other systems to fail or perform very poorly. If it is not something that can be fixed, it should be clearly documented.
>>
>> work_mem (integer)
>>
>>     Specifies the amount of memory to be used by internal sort operations and hash tables before switching to temporary disk files. The value is specified in kilobytes, and defaults to 1024 kilobytes (1 MB). Note that for a complex query, several sort or hash operations might be running in parallel; each one will be allowed to use as much memory as this value specifies before it starts to put data into temporary files. Also, several running sessions could be doing such operations concurrently. So the total memory used could be many times the value of work_mem; it is necessary to keep this fact in mind when choosing the value. Sort operations are used for ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash joins, hash-based aggregation, and hash-based processing of IN subqueries.
>>
>> So it says right there that it's very easy to exceed work_mem by a very large amount. Granted, this is a very painful problem to deal with and will hopefully be changed at some point, but it's pretty clear as to how this works.
>
> Well, if you read that paragraph carefully, I'll admit that I was a little too literal in my statement, applying it to the process and not specific functions within the process, but in the documentation: "each one will be allowed to use as much memory as this value specifies before it starts to put data into temporary files." According to the documentation the behavior of hashagg is broken. It did not use up to this amount and then start to use temporary files; it used 1000 times this limit and was killed by the OS. I think it should be documented as the behavior is unpredictable.

It seems to me that the solution for THIS INCIDENT is to run an analyze. That should fix the problem at hand. I have nothing to say about the OOM issue except that hopefully the analyze will prevent him from running out of memory at all.

However, if hashagg truly does not obey the limit that is supposed to be imposed by work_mem, then it really ought to be documented. Is there a misunderstanding here and it really does obey it? Or is hashagg an exception but the other work_mem-associated operations work fine? Or is it possible for them all to go out of bounds?

Even if you've got 100 terabytes of swap space, though, it seems like if your system is very heavy on reads then you would really want that single backend to start using up your disk space and leave your memory alone, so that most of your data can stay cached and largely unaffected by the problem of one backend. If your bottleneck is writing to the disk then it doesn't really seem to matter. You just need to make sure that a huge out-of-control hashagg never occurs.

If your disks get saturated with writes because of the hashagg of one backend, then all other processes that need to write a lot of info to disk are going to come to a grinding halt, queries are not going to complete quickly and will build up, and you will have a huge mess on your hands that will essentially prevent postgres from being able to do its job even if it doesn't actually die. In this situation disk bandwidth is a scarce commodity, and whether you let the OS handle it all with virtual memory or you let postgres swap everything out to disk for that one operation, you are still using disk to make up for a lack of RAM. At some point you've either got to stock up on enough RAM to run your queries properly or alter how your queries run to use less RAM. Having a process go out of control in resource usage is going to cause big problems one way or another.
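The remedy suggested above can be sketched as a short session (the table and column names here are made up for illustration; `work_mem` units are kilobytes in 8.x):

```sql
-- Hypothetical session: work_mem caps each individual sort/hash step
-- (hashagg being the known exception), and ANALYZE refreshes the
-- statistics whose misestimate let the hashagg balloon.
SET work_mem = '1024';            -- 1 MB per operation, not per backend
ANALYZE big_table;                -- fix the planner's row-count estimate
EXPLAIN SELECT col, count(*)
FROM big_table
GROUP BY col;                     -- check whether HashAggregate is still chosen
```

With accurate statistics the planner can fall back to a GroupAggregate over a sort when the estimated hash table would not fit in work_mem.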
Re: [HACKERS] PostgreSQL 8.0.6 crash
Rick Gigger <[EMAIL PROTECTED]> writes:
> However, if hashagg truly does not obey the limit that is supposed to be imposed by work_mem then it really ought to be documented. Is there a misunderstanding here and it really does obey it? Or is hashagg an exception but the other work_mem associated operations work fine? Or is it possible for them all to go out of bounds?

hashagg is the exception. It should be fixed, not documented, but no one's got round to that.

One point to consider is that if the planner's estimate is as far off as exhibited in the OP's example, a hashagg that does spill to disk is likely to take so long that he'll be back here complaining that the query never terminates ;-). In most practical situations, I think exceeding work_mem is really the best solution, as long as it's not by more than 10x or 100x. It's when the estimate is off by many orders of magnitude that you've got a problem. Running out of memory is not necessarily the worst response ... as long as the system doesn't kill the process in response to that.

			regards, tom lane
Re: [HACKERS] Feature request - Add microsecond as a time unit for
> This generalizes to any scale factor you care to use, eg fortnights... so I don't see a pressing need to add microseconds.

Perhaps an argument for adding microseconds to interval declarations is that you can extract them using extract()... Those two lists of allowed scales should be the same, no?

Chris
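The asymmetry Chris points at can be seen directly (a sketch; the literal and value are my own example, not from the post):

```sql
-- extract() already exposes a microseconds field, even though the list of
-- units accepted in interval declarations stops at larger granularities.
-- MICROSECONDS returns the seconds field, including fractional seconds,
-- multiplied by 1,000,000:
SELECT EXTRACT(microseconds FROM interval '1.5 seconds');   -- 1500000
```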
Re: [HACKERS] Backslashes in string literals
Kevin Grittner wrote:
> On Wed, Feb 1, 2006 at 10:50 am, in message [EMAIL PROTECTED], Bruce Momjian <pgman@candle.pha.pa.us> wrote:
>>> (1) I couldn't figure out the best way to obtain a value for standard_conforming_strings in the psql version of the scanner.
>>
>> The proper way to do (1) is to call libpq's pqSaveParameterStatus() from psql. Take a look for psql's session_username(). It is called every time the prompt is printed if the username is required. One great feature of using pqSaveParameterStatus() is that it reads server packets and keeps the tracked value updated for you without query overhead.
>
> My attempt to do as you suggest isn't working. It behaves as though the standard_strings() function I added to common.c is always returning false. (If I comment out the reference to the function, and the else clause, I can get psql to work with the "on" state; otherwise, no joy. The back end is working fine in all my tests.) I tried to mimic the technique in the existing functions. Can you give me a clue where I'm going wrong?

OK, I got it working. The fix is to add GUC_REPORT to guc.c for standard_conforming_strings. See the same flag on session_authorization. That will cause libpq to see any changes made to that variable. Sorry I didn't know that detail before.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  + If your life is a hard drive,      |  13 Roberts Road
  + Christ can be your backup.         |  Newtown Square, Pennsylvania 19073
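The client side of the mechanism being discussed can be sketched as follows. This is not the actual psql patch; it uses libpq's public accessor PQparameterStatus(), which reads the value libpq tracks from the server's ParameterStatus messages once the variable carries GUC_REPORT, so no query round-trip is needed:

```c
/* Sketch: querying a GUC_REPORT-ed server setting via libpq's tracked
 * parameter cache. Function name standard_strings() mirrors the one
 * mentioned in the thread; the body here is illustrative. */
#include <string.h>
#include <libpq-fe.h>

static int
standard_strings(PGconn *conn)
{
    /* Returns the client-side cached value; NULL if the server never
     * reported this parameter (e.g. pre-8.1 servers). */
    const char *val = PQparameterStatus(conn, "standard_conforming_strings");

    return val != NULL && strcmp(val, "on") == 0;
}
```

Because the cache is refreshed whenever the server reports a change (SET, per-database settings, etc.), this is safe to call on every prompt without overhead.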
Re: [HACKERS] streamlined standby procedure
On 2/9/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
> What happens right now when you want to bring the standby up? Do you have to kill it out of recovery mode and re-start, forcing it to replay WAL again anyway?

touch $LOGDIR/STOP

...

-- 
marko
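The sentinel-file trick above implies a restore script that polls for the next WAL segment and ends recovery when the STOP file appears. A minimal sketch (all names here are assumptions, not from the thread; recovery would invoke it as something like restore_command = 'wait_restore %f %p', with LOGDIR pointing at the WAL archive directory):

```shell
# Sketch of the wait-loop restore script idea. Returning non-zero tells
# the server there are no more segments, which ends recovery and brings
# the standby up without re-replaying WAL.
wait_restore() {
    wal="$1"    # requested WAL file name (%f)
    dest="$2"   # path the server wants it copied to (%p)
    while :; do
        # Operator touched the sentinel: report "no more WAL".
        [ -f "$LOGDIR/STOP" ] && return 1
        # Next segment has arrived: hand it to the server.
        if [ -f "$LOGDIR/$wal" ]; then
            cp "$LOGDIR/$wal" "$dest"
            return 0
        fi
        sleep 5
    done
}
```

To promote the standby, the operator simply runs `touch $LOGDIR/STOP`; the next poll returns failure and recovery finishes.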