[HACKERS] Where does catalog store CHAR length ?
Hi, the table pg_attribute has the attribute "attlen", and this attribute will store the attribute length. For example, for integer this value will be 4. But if the attribute type is Char(xx), this value will be -1. This is because it references the bychar in the pg_type table. So, where is the Char() length information stored? And when will the system check this string length constraint? Thanks for your help! Kao, Nchu Taiwan
[HACKERS] Foreign key data type mismatches
I was just annoyed to find out that a foreign key doesn't check whether the referenced column has a sufficiently similar data type, it only checks whether an = operator exists. This masks schema design errors and typos. Should this be tightened up, for example using the castability characteristics between the two data types? -- Peter Eisentraut [EMAIL PROTECTED] ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
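A minimal illustration of the behaviour described above (table and column names are hypothetical): the constraint is accepted because an = operator exists between integer and bigint, even though the types differ:

```sql
-- Hypothetical schema: the referencing column's type does not match the
-- referenced key's type, yet the constraint is accepted silently.
CREATE TABLE parent (id bigint PRIMARY KEY);
CREATE TABLE child (
    parent_id integer REFERENCES parent (id)  -- no type-mismatch warning
);
```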
Re: [HACKERS] Foreign key data type mismatches
Peter Eisentraut wrote: I was just annoyed to find out that a foreign key doesn't check whether the referenced column has a sufficiently similar data type, it only checks whether an = operator exists. This masks schema design errors and typos. Should this be tightened up, for example using the castability characteristics between the two data types? Sounds like a good idea. IMHO normally only *equal* data types should be referenced in a good data model design, but there's probably the need for some relaxed check option too. Regards, Andreas ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] initdb should create a warning message [was Re:
On Sun, 2003-11-30 at 23:18, Neil Conway wrote: Oliver Elphick [EMAIL PROTECTED] writes: The use of the word log in the directory name does tend to invite this error, and some have acted on it without asking first. I think initdb should put a README.IMPORTANT file in $PGDATA to say [...] If someone deletes something from $PGDATA without understanding what it is, they deserve what they get. People have a distressing tendency to want to shoot themselves in the foot; and the somewhat unfortunate naming of those files contributes to the problem. While it is satisfying to see stupidity properly rewarded, it is more neighbourly at least to attempt to protect a fool from his folly. It is also kinder to those who may be depending on him for the protection of their data. I do agree that we could stand to document the purpose of pg_clog and pg_xlog more clearly. However, this information belongs in the standard documentation, not scattered throughout $PGDATA. Then it needs to be stated very prominently. But the place to put a sign saying Dangerous cliff edge is beside the path that leads along it. -- Oliver Elphick[EMAIL PROTECTED] Isle of Wight, UK http://www.lfix.co.uk/oliver GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C Who is like unto thee, O LORD, among the gods? who is like thee, glorious in holiness, fearful in praises, doing wonders? Exodus 15:11 ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [HACKERS] Where does catalog store CHAR length ?
On Mon, Dec 01, 2003 at 06:31:56PM +0800, phd9110 wrote: the table pg_attribute have the attribute attlen , and this attribute will store the attribute length. For example , when integer , this value will be 4. But if attribute type is Char(xx), this value will be -1. This is because it reference the bychar in pg_type table. So, where is the Char() length information store ?? atttypmod is the maximum length + 4. -- Alvaro Herrera (alvherre[a]dcc.uchile.cl) El destino baraja y nosotros jugamos (A. Schopenhauer) ---(end of broadcast)--- TIP 8: explain analyze is your friend
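Alvaro's answer can be checked directly against the catalogs. A minimal sketch (table and column names hypothetical): for char(20), attlen is -1 and atttypmod is 20 + 4 = 24, the declared length plus the 4-byte varlena header:

```sql
CREATE TABLE demo (c char(20));

-- Look up the stored length information for the column:
SELECT attname, attlen, atttypmod
FROM pg_attribute
WHERE attrelid = 'demo'::regclass
  AND attname = 'c';
-- attname | attlen | atttypmod
-- c       |     -1 |        24
```

As for the second question, the length constraint itself is enforced when a value is coerced to char(n) during insert or update.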
Re: [HACKERS] default operator class: btree or hash
On Mon, Dec 01, 2003 at 08:12:54AM +0300, Dmitry G. Mastrukov wrote: I'm preparing a new release of the uniqueidentifier datatype. In the new release, in addition to the existing btree operator class, I've implemented a hash operator class. But which class should be the default for this datatype? Uniqueidentifier is used for identification, so the main operation with it is '=', and on machines with /dev/urandom the '<' and '>' operations have no meaning (the parts for date and time are randomized). Maybe the hash operator class should be the default for uniqueidentifier? btree is a better choice anyway because it allows for more concurrent operation and it's far more tested. -- Alvaro Herrera (alvherre[a]dcc.uchile.cl) No reniegues de lo que alguna vez creíste ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
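Marking the btree class as the default, per the advice above, would look roughly like this (the strategy numbers follow the usual btree convention; the support function name is hypothetical):

```sql
-- Sketch only: assumes the type and its comparison operators/functions
-- are already defined by the uniqueidentifier module.
CREATE OPERATOR CLASS uniqueidentifier_ops
    DEFAULT FOR TYPE uniqueidentifier USING btree AS
        OPERATOR 1 < ,
        OPERATOR 2 <= ,
        OPERATOR 3 = ,
        OPERATOR 4 >= ,
        OPERATOR 5 > ,
        FUNCTION 1 uniqueidentifier_cmp(uniqueidentifier, uniqueidentifier);
```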
Re: [HACKERS] Foreign key data type mismatches
Andreas Pflug wrote: Peter Eisentraut wrote: I was just annoyed to find out that a foreign key doesn't check whether the referenced column has a sufficiently similar data type, it only checks whether an = operator exists. This masks schema design errors and typos. Should this be tightened up, for example using the castability characteristics between the two data types? Sounds like a good idea. IMHO normally only *equal* data types should be referenced in a good data model design, but there's probably the need for some relaxed check option too. Yes, we already talked about throwing a warning if the primary/foreign key data types are mismatched. -- Bruce Momjian| http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [HACKERS] Foreign key data type mismatches
On Mon, 1 Dec 2003, Peter Eisentraut wrote: I was just annoyed to find out that a foreign key doesn't check whether the referenced column has a sufficiently similar data type, it only checks whether an = operator exists. This masks schema design errors and typos. Should this be tightened up, for example using the castability characteristics between the two data types? Maybe, but IIRC the spec only requires comparability between the types involved. Since we don't use the same rules as the spec for that, existence of equality comparison was treated as the closest match to the requirement at the time. ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
[HACKERS] Max number of rows in a table
--- Tom Lane [EMAIL PROTECTED] wrote: Actually you can only have 4 billion SQL commands per xid, because the CommandId datatype is also just 32 bits. I've never heard of anyone running into that limit, though. Wouldn't the above put a limit on the number of records one could have in a table? One has to use pg_dump/pg_restore to be able to upgrade between pgSql releases and/or to back up data. If one cannot back up data and/or upgrade between pg releases, then 4B records is the effective limit on the number of records in pgSql ... or am I missing something? Thanks __ Do you Yahoo!? Free Pop-Up Blocker - Get it now http://companion.yahoo.com/ ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [HACKERS] Max number of rows in a table
ow [EMAIL PROTECTED] writes: --- Tom Lane [EMAIL PROTECTED] wrote: Actually you can only have 4 billion SQL commands per xid, because the CommandId datatype is also just 32 bits. I've never heard of anyone running into that limit, though. Wouldn't the above put a limit on a number of records one could have in table? No. regards, tom lane ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [HACKERS] [GENERAL] Was: Triggers, Stored Procedures, PHP
Joe Conway [EMAIL PROTECTED] writes: How is a parameterized view any different than a set returning SQL function? In either case, you've got the same work to do to teach the optimizer how to understand it, no? Seems like the todo is just that: teach the optimizer how to do better with set-returning SQL functions. I find almost always that when I wish I had parameterized views, the view can be rewritten into more sophisticated views that push the parameterized constraint outside the view. The problem is that databases usually can't push the clause back inside. So parameterized views usually are a crutch for working around optimizer limitations, but a different limitation than you're thinking. For example, a parameterized view:

  create view view_1 as select count(*) from foo where x = $1

rewritten view and query using it:

  create view view_2 as select x, count(*) from foo group by x;
  select * from view_2 where x = ?

Actually in this case Postgres does fairly well. It does manage to use the index, though it still uses a GroupAggregate instead of a simple Aggregate node. The run-time is almost as fast as the straightforward query. -- greg ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [HACKERS] Max number of rows in a table
--- Tom Lane [EMAIL PROTECTED] wrote: Wouldn't the above put a limit on a number of records one could have in table? No. If I understand correctly, a table that has 4B+ rows cannot be restored after the dump and that, in turn, may/will affect the ability to upgrade to new versions of pgSql. This is somewhat similar to saying yes to the question Can I drive this car at 55 mph? and then forgetting to mention that the brakes will fail if the car reaches 55 mph. Thanks __ Do you Yahoo!? Free Pop-Up Blocker - Get it now http://companion.yahoo.com/ ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] Max number of rows in a table
On Mon, 1 Dec 2003, ow wrote: --- Tom Lane [EMAIL PROTECTED] wrote: Actually you can only have 4 billion SQL commands per xid, because the CommandId datatype is also just 32 bits. I've never heard of anyone running into that limit, though. Wouldn't the above put a limit on a number of records one could have in table? One have to use pg_dump/pg_restore to be able to upgrade between pgSql releases and/or to backup data. If one cannot backup data and/or upgrade between pg releases, then 4B records is the effective limit on the number of records in pgSql ... or am I missing something? I'd expect copy to be a single command, no matter how many rows were copied. ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] Partitions implementation with views
Jonathan Gardner [EMAIL PROTECTED] writes: Usually, it is data that is related to each other somehow. I guess an example would be for an internet host who wants to provide a common shopping cart functionality for all of its customers, but put their specific data on a specific partition for ease of management and query speed. They can't put the data into seperate databases because they also need to examine the complete data set occasionally. The most commonly cited example is date-based partitions. Separating financial records by fiscal year for example. So for example the current year may be on the super-fast raid 0+1 15k rpm SCSI disks, but the previous 7 years may be on your slow but big raid5 farm of IDE drives. And at year-end you want to create a new year, drop all the hundreds of gigabytes of data from the 7th year from the archive quickly, and move the current year to the archive. The common response is Use partial indexes. But I imagine that they want the ability to move partitions onto seperate OS partitions (hence the name, partition). Separate OS partitions is a reasonable use of partitioned tables, but the biggest advantage is being able to drop and load partitions very quickly, and without impacting performance at all. loading or dropping millions of records becomes a simple DDL operation instead of a huge DML operation. ... Now that view is the partitioned table for all intents and purposes. The partition tables are the partitions themselves. Is this what they are looking for, or is it something completely different? That's the idea, though the purpose of having it as a built-in feature is to hide all the details you're describing from the user. The user can already do all the above if they wanted to. And there's more work to do: 1) The optimizer needs to know about the partitions to be able to check the query to see if it needs all the partitions or only a small subset. 
Often partitioned tables are used when most queries need only a single partition, which drastically affects the costs for plans.

2) You want to be able to manage indexes across all the partitions in a single operation. Any index on the table where the leading columns of the index are the partition key can automatically create a separate index on every table.

3) You want DDL commands to move around the partitions in various ways: creating new partitions, moving tables into the partitioned table and moving partitions out into separate tables of their own, splitting a partition into two partitions, merging two into one, etc.

4) I always managed to avoid them, and they don't seem very useful to me, but Oracle also supports global indexes, which are indexes that span all the partitions without having the partition key as the leading columns. -- greg ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
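The hand-rolled approach the user can already do today might be sketched like this (all table and column names hypothetical): one table per fiscal year, each with a CHECK constraint on the partition key, unioned together by a view:

```sql
CREATE TABLE ledger_2002 (fy integer CHECK (fy = 2002), amount numeric);
CREATE TABLE ledger_2003 (fy integer CHECK (fy = 2003), amount numeric);

-- The view plays the role of the partitioned table:
CREATE VIEW ledger AS
    SELECT * FROM ledger_2002
    UNION ALL
    SELECT * FROM ledger_2003;

-- "Dropping a partition" is then a fast DDL operation:
-- DROP TABLE ledger_2002;
```

Without built-in support, though, a query like SELECT * FROM ledger WHERE fy = 2003 still scans every branch of the UNION, which is exactly the optimizer limitation described in point 1).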
Re: [HACKERS] initdb should create a warning message [was Re:
Oliver Elphick [EMAIL PROTECTED] writes: Then it needs to be stated very prominently. But the place to put a sign saying Dangerous cliff edge is beside the path that leads along it. The only way to make this prominent would be a file with the *name* THIS DIRECTORY CONTAINS CRITICAL DATA. Not a README with that message inside. -- greg ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [HACKERS] Max number of rows in a table
Stephan Szabo [EMAIL PROTECTED] writes: I'd expect copy to be a single command, no matter how many rows were copied. It might prevent you from using pg_dump --inserts ? -- greg ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [HACKERS] Max number of rows in a table
ow [EMAIL PROTECTED] writes: If I understand correctly, a table that has 4B+ rows cannot be restored after the dump and that, in turn, may/will affect the ability to upgrade to new versions of pgSql. You don't understand correctly. regards, tom lane ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [HACKERS] initdb should create a warning message [was Re:
Greg Stark wrote: Oliver Elphick [EMAIL PROTECTED] writes: Then it needs to be stated very prominently. But the place to put a sign saying Dangerous cliff edge is beside the path that leads along it. The only way to make this prominent would be a file with the *name* THIS DIRECTORY CONTAINS CRITICAL DATA. Not a README with that message inside. Renaming the directories is the only suggestion I've seen that makes sense. The others remind me of the warning that is now placed on coffee cup lids at fast food places: Caution, Contents May Be Hot. cheers andrew ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] Max number of rows in a table
--- Tom Lane [EMAIL PROTECTED] wrote: ow [EMAIL PROTECTED] writes: If I understand correctly, a table that has 4B+ rows cannot be restored after the dump and that, in turn, may/will affect the ability to upgrade to new versions of pgSql. You don't understand correctly. I see. Thanks __ Do you Yahoo!? Free Pop-Up Blocker - Get it now http://companion.yahoo.com/ ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [HACKERS] initdb should create a warning message [was Re:
On Mon, 2003-12-01 at 16:39, Andrew Dunstan wrote: Renaming the directories is the only suggestion I've seen that makes sense. The others remind me of the warning that is now placed on coffee cup lids at fast food places: Caution, Contents May Be Hot. I agree that renaming the directories is the best solution. -- Oliver Elphick[EMAIL PROTECTED] Isle of Wight, UK http://www.lfix.co.uk/oliver GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C Who is like unto thee, O LORD, among the gods? who is like thee, glorious in holiness, fearful in praises, doing wonders? Exodus 15:11 ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [HACKERS] Max number of rows in a table
Clinging to sanity, [EMAIL PROTECTED] (ow) mumbled into her beard: --- Tom Lane [EMAIL PROTECTED] wrote: Wouldn't the above put a limit on a number of records one could have in table? No. If I understand correctly, a table that has 4B+ rows cannot be restored after the dump and that, in turn, may/will affect the ability to upgrade to new versions of pgSql. This is somewhat similar to saying yes to the question Can I drive this car at 55 mph? and then forgetting to mention that the brakes will fail if the car reaches 55 mph.

No, you are NOT understanding correctly. Restoring a table from pg_dump generally involves _ONE_ command. For instance, the following is what pg_dump generates for my table, stocks:

COPY stocks (symbol, description, exchange) FROM stdin;
AADBX	AADBX	NYSE
AADEX	AADEX	NYSE
AAIEX	AAIEX	NYSE
BTS.A	BTS.A	TSX
CTSTK	TD Canadian Equity	CTE
CASH	CASH	TSX
CTAMER	TD AmeriGrowth RSP	CTE
CTASIA	TD AsiaGrowth RSP	CTE
CTEMER	TD Emerging Markets RSP	CTE
CTEURO	TD European Growth RSP	CTE
CTIBND	TD Global RSP Bond	CTE
FDIVX	FDIVX	NYSE
FDRXX	FDRXX	NYSE
FUSEX	FUSEX	NYSE
MOT	MOT	NYSE
NCX	NOVA Chemicals Corporation	TSX
NT	NT	NYSE
PCA	Petro Canada	TSX
RY	Royal Bank of Canada	TSX
TOC	Thomson Corporation	TSX
TRP	TransCanada PipeLines Limited	TSX
WORKVE	Working Ventures	OTHER
CTSPEC	TD SPECIAL EQUITY	CTE
CTUSEQ	TD US EQUITY	CTE
CTMMTD	MONEY MARKET PL	CTE
CTCBOND	TD Canadian Bond	CTE
\.

Recovery from this involves the SQL processor using ONE transaction ID, and ONE SQL statement. If there were 8 billion rows in the table, whatever other challenges there might be, it would still use ONE transaction ID and ONE SQL statement. What is there about "This involves just one SQL statement" that isn't making sense? -- wm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','acm.org'). http://www.ntlug.org/~cbbrowne/lisp.html Pagers are cases for holding dead batteries. -Richard Wolff ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [HACKERS] Max number of rows in a table
Greg Stark [EMAIL PROTECTED] writes: Stephan Szabo [EMAIL PROTECTED] writes: I'd expect copy to be a single command, no matter how many rows were copied. It might prevent you from using pg_dump --inserts ? Not even that, unless you *also* modified the dump output to wrap BEGIN/END around it. Otherwise each INSERT is a separate xid. (Of course you could definitely take a pretty long coffee break while waiting for a 4-billion-row table to be restored with INSERTs. Also I think it would be necessary to run VACUUM partway through to avoid transaction wraparound issues. pg_autovacuum could be expected to take care of that for you, if it were running. But in practice anyone sane would use COPY for this, anyway.) regards, tom lane ---(end of broadcast)--- TIP 8: explain analyze is your friend
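The distinction above can be sketched as follows (file path and table name hypothetical): a COPY-based restore is one SQL command consuming a single CommandId regardless of row count, which is why the 32-bit limits never come into play:

```sql
-- One transaction, ONE command, any number of rows:
BEGIN;
COPY bigtable FROM '/tmp/bigtable.dat';
COMMIT;

-- By contrast, pg_dump --inserts emits one INSERT per row. Without a
-- wrapping BEGIN/END each INSERT is its own transaction; wrapped in one
-- transaction, each INSERT consumes a CommandId, which is where the
-- 4-billion-commands-per-xid limit could in principle be reached.
```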
Re: [HACKERS] initdb should create a warning message [was Re:
Greg Stark writes: Oliver Elphick [EMAIL PROTECTED] writes: Then it needs to be stated very prominently. But the place to put a sign saying Dangerous cliff edge is beside the path that leads along it. The only way to make this prominent would be a file with the *name* THIS DIRECTORY CONTAINS CRITICAL DATA. Not a README with that message inside. That's exactly what I did, after some root came along and moved my pgdata away while the postmaster was running. The data was not that important in that case, but nevertheless I put a file with a name like NEVER_MOVE_THIS_DIRECTORY_WHILE_POSTMASTER_PROCESS_IS_RUNNING.txt in pgdata and wrote a few lines in that file on how to shut down the postmaster properly. But renaming pgdata to something like that would be even better and could be done already (if I'm right). Regards, Tilo ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
[HACKERS] Heading to Japan
I will be in Japan December 2-10 speaking about PostgreSQL. -- Bruce Momjian| http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
[HACKERS] remaining patches?
What's happening to the remaining patches that were held over for 7.5, e.g. mine which does some logging enhancements? cheers andrew ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [HACKERS] remaining patches?
Andrew Dunstan wrote: What's happening to the remaining patches that were held over for 7.5, e.g. mine which does some logging enhancements? I want to review that more thoroughly. It is still in the 7.5 queue. I am not done with that yet: http://momjian.postgresql.org/cgi-bin/pgpatches2 -- Bruce Momjian| http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [HACKERS] Partitions implementation with views
Greg Stark kirjutas E, 01.12.2003 kell 18:15: Separate OS partitions is a reasonable use of partitioned tables, but the biggest advantage is being able to drop and load partitions very quickly, and without impacting performance at all. loading or dropping millions of records becomes a simple DDL operation instead of a huge DML operation. How does that mix with foreign key checks ? -- Hannu ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] [CORE] Commit privs
Marc G. Fournier wrote: On Mon, 1 Dec 2003, Josh Berkus wrote: Hmmm. Robert spoke to an IP attorney off the record who felt differently. I think it would really be to our benefit to get an attorney to go on the record about this (which means a fee, unfortunately). Your opinion vs. my opinion doesn't really mean much since neither of us is a copyright patent attorney. One thing to note heavily here though is that the project is not in the US, so any attorney we deal with will have to be able to know how to deal with International Law, and not just US (which, I suspect, will probably be even pricier) ... there are several projects that don't maintain 'US mirrors' for reasons such as this, maybe we should trim off our US mirrors and let ppl download from the International mirrors instead ... Yes, I see that now: CREATE TABLE x ( y integer ) WITH OIDS; We need a solution to this. One idea is to use SET to change the default_with_oids setting when a table changes characteristics. -- Bruce Momjian| http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [HACKERS] [CORE] Commit privs
Sorry for this posting. I was trying to do too many things at once. I have posted the WITH OID email to the proper thread. -- Bruce Momjian| http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[HACKERS] rebuilding rpm for RH9 error
Hi all, I'm still experiencing problems trying to rebuild the RPM from the file postgresql-7.4-0.5PGDG.src.rpm. What I get is:

checking for library containing com_err... -lcom_err
checking for library containing krb5_encrypt... no
configure: error: could not find function 'krb5_encrypt' required for Kerberos 5
error: Bad exit status from /var/tmp/rpm-tmp.40171 (%build)

I've seen that the configure is done with --with-krb5=/usr. I also tried to install the RPM already built, but I obtain:

file /usr/include/sqltypes.h from install of postgresql-devel-7.4-0.5PGDG conflicts with file from package unixODBC-devel-2.2.3-6

Regards Gaetano Mendola ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [HACKERS] rebuilding rpm for RH9 error
I seen that the configure is done with: --with-krb5=/usr. make sure that you have krb5-devel installed. I also try to install the RPM already builded but I obtain: file /usr/include/sqltypes.h from install of postgresql-devel-7.4-0.5PGDG conflicts with file from package unixODBC-devel-2.2.3-6 Regards Gaetano Mendola ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html -- Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC Postgresql support, programming shared hosting and dedicated hosting. +1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com Mammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [PATCHES] [HACKERS] Index creation takes for ever
On Mon, 1 Dec 2003 00:02:54 -0500 (EST), Bruce Momjian [EMAIL PROTECTED] wrote: Tom Lane wrote: Bruce Momjian [EMAIL PROTECTED] writes: And if it doesn't help index creation speed, at least the resulting index has better correlation. ... which has been shown by the example in the original message:

Result without patch:

  ctid
  ----------
  (153,14)
  (306,23)
  (305,80)
  (152,91)
  (76,68)
  (38,34)
  (153,34)
  (305,50)
  (9,62)
  (305,40)
  (10 rows)

Result with patch:

  ctid
  ----------
  (0,5)
  (0,10)
  (0,15)
  (0,20)
  (0,25)
  (0,30)
  (0,35)
  (0,40)
  (0,45)
  (0,50)
  (10 rows)

And I think we all agree that better index correlation leads to better performance. I think this is a *very* dubious idea. It introduces a low-level implementation dependency into our sort behavior The patch modifies the static function comparetup_index() in tuplesort.c. The comment above this function says

/*
 * Routines specialized for IndexTuple case
 *
 * NOTE: actually, these are specialized for the btree case; [...]
 */

comparetup_index() compares two IndexTuples. The structure IndexTupleData consists basically of not much more than an ItemPointer, and the patch is not much more than adding a comparison of two ItemPointers. So how does the patch introduce a new low-level implementation dependency? Roger --- patch removed. Thanks. Could we agree on only removing the first five and a half lines of the comment in the patch? Servus Manfred ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [PATCHES] [HACKERS] Index creation takes for ever
Manfred Koizar [EMAIL PROTECTED] writes: comparetup_index() compares two IndexTuples. The structure IndexTupleData consists basically of not much more than an ItemPointer, and the patch is not much more than adding a comparison of two ItemPointers. So how does the patch introduce a new low level implementation dependency? Because it sorts on tuple position, which is certainly about as low level as you can get. More to the point, though, no evidence has been provided that this is a good idea. regards, tom lane ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [HACKERS] initdb mkdir_p() doesn't work
Patch applied. Thanks.

--- Andrew Dunstan wrote: Andrew Dunstan wrote: Peter Eisentraut wrote: Here is what I get: peter ~$ pg-install/bin/initdb pg-install/var/data ... creating directory pg-install/var/data ... initdb: failed No points for details in the error message here either. If I create pg-install/var first, then it works. I will check it out. I know I spent quite some time making sure this worked, but I might have missed something obvious. I wonder if it is platform specific? I don't remember why the code is the way it is. The failure appears to be before we ever get to mkdir_p(). I can't see any reason right now why we can't call mkdir_p() in all cases. The attached patch does that (and makes the code slightly simpler as a result). I tested it with one-element and two-element existent and nonexistent paths, and it appeared to work for all of them. cheers andrew

? .deps
? initdb
Index: initdb.c
===
RCS file: /projects/cvsroot/pgsql-server/src/bin/initdb/initdb.c,v
retrieving revision 1.11
diff -c -w -r1.11 initdb.c
*** initdb.c	17 Nov 2003 20:35:28 -	1.11
--- initdb.c	23 Nov 2003 19:46:56 -
***************
*** 797,803 ****
  mkdatadir(char *subdir)
  {
      char       *path;
-     int         res;

      path = xmalloc(strlen(pg_data) + 2 + (subdir == NULL ? 0 : strlen(subdir)));
--- 797,802 ----
*** 807,819 ****
      else
          strcpy(path, pg_data);
!     res = mkdir(path, 0700);
!     if (res == 0)
!         return true;
!     else if (subdir == NULL || errno != ENOENT)
!         return false;
!     else
!         return !mkdir_p(path, 0700);
  }
--- 806,812 ----
      else
          strcpy(path, pg_data);
!     return (mkdir_p(path, 0700) == 0);
  }

---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings -- Bruce Momjian| http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [HACKERS] initdb should create a warning message [was Re: [ADMIN] Size on Disk]
Oliver Elphick [EMAIL PROTECTED] writes: On Sun, 2003-11-30 at 23:18, Neil Conway wrote: I do agree that we could stand to document the purpose of pg_clog and pg_xlog more clearly. However, this information belongs in the standard documentation, not scattered throughout $PGDATA. Then it needs to be stated very prominently. But the place to put a sign saying Dangerous cliff edge is beside the path that leads along it. How about changing the names of those directories? regards, tom lane ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [HACKERS] initdb should create a warning message [was Re: [ADMIN]
Tom Lane wrote:

Oliver Elphick [EMAIL PROTECTED] writes: On Sun, 2003-11-30 at 23:18, Neil Conway wrote: I do agree that we could stand to document the purpose of pg_clog and pg_xlog more clearly. However, this information belongs in the standard documentation, not scattered throughout $PGDATA. Then it needs to be stated very prominently. But the place to put a sign saying Dangerous cliff edge is beside the path that leads along it.

How about changing the names of those directories?

I thought about that, but what would we call them? We could change xlog to wal, I guess. That might actually be clearer. clog could become xstatus or xactstatus or just xact.
Re: [HACKERS] initdb should create a warning message [was Re: [ADMIN]
Bruce Momjian wrote:

I thought about that, but what would we call them? We could change xlog to wal, I guess. That might actually be clearer. clog could become xstatus or xactstatus or just xact.

active_xdata
active_cdata

Mike Mascari [EMAIL PROTECTED]
Re: [HACKERS] Partitions implementation with views
Hannu Krosing [EMAIL PROTECTED] writes:

Greg Stark wrote (Mon, 01.12.2003 18:15): Separate OS partitions is a reasonable use of partitioned tables, but the biggest advantage is being able to drop and load partitions very quickly, and without impacting performance at all. Loading or dropping millions of records becomes a simple DDL operation instead of a huge DML operation.

How does that mix with foreign key checks?

That's a good question. I don't know how it mixes in Oracle, as the partitioned tables were always the ones at the end of the chain of references. That is, there were foreign key constraints on them, but no other tables referenced them. Perhaps that may be common, as partitioning is useful on BIG tables, and that makes them likely to be on the many side of all the one-to-many relationships. Or perhaps one often has lots of tables partitioned on the same key (like fiscal year) and designs things so you never have references across years. Then you would be able to disable constraints and drop the old year without risking any broken references.

As I say, I'm not sure, but from what I'm reading now it seems they don't mix at all well in Oracle. It looks like if you have any foreign key references from any non-empty tables to your partitioned table, then you're basically barred from removing any partitions. I guess you're expected to disable the constraints while you do the operation.

That does make a lot of sense if you think of a partitioned table as just a bunch of syntactic sugar over a view with a big union of all the partition tables. You can't expect the database to recheck everything whenever you want to redefine the view. Alternatively you can think of partitioned tables as a bunch of syntactic sugar over inherited tables, in which case it's no coincidence that foreign keys and inherited tables don't mix very well either. I do think it would be possible to design something better than just ruling them incompatible.
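To make the "syntactic sugar over a view" reading concrete, here is a minimal SQL sketch of the idea (all table and view names are illustrative, not taken from any real schema): one physical table per fiscal year behind a UNION ALL view, where dropping a partition is purely a DDL operation.

```sql
-- Hypothetical per-fiscal-year partitions behind a view.
CREATE TABLE invoice_2002 (invoice_FY integer, invoice_id integer);
CREATE TABLE invoice_2003 (invoice_FY integer, invoice_id integer);

CREATE VIEW invoice_all AS
    SELECT * FROM invoice_2002
    UNION ALL
    SELECT * FROM invoice_2003;

-- "Dropping the FY 2002 partition" is just a view redefinition
-- plus a DROP TABLE; no row-by-row DML is involved.
CREATE OR REPLACE VIEW invoice_all AS
    SELECT * FROM invoice_2003;
DROP TABLE invoice_2002;
```

Under this reading it is clear why foreign keys and partition drops clash: a constraint referencing invoice_all cannot be rechecked incrementally when the view is redefined out from under it.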
Presumably you would want an index on the foreign key target columns to make the constraints fast. Therefore presumably the partition key is the leading columns of the foreign key target columns. Therefore all you really need to verify the partition drop is safe is an index on the partition key columns in the referencing table, and you can do a simple index lookup to see if any records with the same leading columns exist to verify the foreign key reference.

So for example:

Table: invoice
--------------
 invoice_FY   integer
 invoice_id   integer
 ...
 invoice_pkey primary key, btree (invoice_FY, invoice_id)

Table: payment
--------------
 payment_FY   integer
 payment_id   integer
 ...
 invoice_FY   integer
 invoice_id   integer
 ...
 invoice_idx  btree (invoice_FY, invoice_id)
 $1 FOREIGN KEY (invoice_FY, invoice_id) REFERENCES invoice(invoice_FY, invoice_id) ON DELETE CASCADE

In this situation, when you drop a partition from invoice for FY 2002, only one fast lookup to check for EXISTS (select 1 WHERE invoice_FY = 2002) would be necessary.

-- greg
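Spelled out against the example schema above, that safety check could be a single indexed probe. The FROM clause and the output alias are filled in here as assumptions, since the original message elides them:

```sql
-- Does any payment row still reference the FY 2002 invoice partition?
-- The btree invoice_idx on (invoice_FY, invoice_id) lets this stop
-- at the first matching entry rather than scanning the table.
SELECT EXISTS (
    SELECT 1
    FROM payment
    WHERE invoice_FY = 2002
) AS fy2002_still_referenced;
```

Only if this returns false could the FY 2002 partition be dropped without orphaning any foreign key references.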