Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?
On Sun, Sep 15, 2013 at 8:05 PM, Jeff Janes jeff.ja...@gmail.com wrote:

> But note that the current behavior is worse in this regard. If you specify a scale of 4 at the column level, then it is not possible to distinguish between 5.000 and 5. on a per-value basis within that column. If the scale at the column level were taken as the maximum scale, not the only allowed one, then that distinction could be recorded. That behavior seems more sensible to me (metrologically speaking, regardless of ALTER TABLE performance aspects), but I don't see how to get there from here with acceptable compatibility breakage.

I think I'd probably agree with that in a green field, but as you say, I can't see accepting the backward-compatibility break at this point. After all, you can get variable precision in a single column by declaring it as unqualified numeric.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
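[Editor's note: the "unqualified numeric" behavior Robert describes is easy to see by analogy. Python's decimal module keeps a per-value scale the same way an unconstrained numeric column does; this is only an illustration, not server code.]

```python
from decimal import Decimal

# Like unconstrained NUMERIC, Decimal keeps each value's own scale:
# 5 and 5.000 are equal in value but carry different significance.
a = Decimal("5")
b = Decimal("5.000")

assert a == b                       # numerically equal
assert str(a) != str(b)             # but the recorded scale differs
assert b.as_tuple().exponent == -3  # 5.000 remembers three fractional digits
```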
Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?
wangs...@highgo.com.cn wrote:

> I modified the code for this situation. I consider it very simple. It will not modify the table file when only the scale has been increased.

Kevin Grittner kgri...@ymail.com wrote:

> This patch would allow data in a column which is not consistent with the column definition:
>
> test=# create table n (val numeric(5,2));
> CREATE TABLE
> test=# insert into n values ('123.45');
> INSERT 0 1
> test=# select * from n;
>   val
> --------
>  123.45
> (1 row)
>
> test=# alter table n alter column val type numeric(5,4);
> ALTER TABLE
> test=# select * from n;
>   val
> --------
>  123.45
> (1 row)
>
> Without your patch the ALTER TABLE command gets this error (as it should):
>
> test=# alter table n alter column val type numeric(5,4);
> ERROR:  numeric field overflow
> DETAIL:  A field with precision 5, scale 4 must round to an absolute value less than 10^1.

Thanks for your reply and test. I have added a new function named ATNumericColumnChangeRequiresCheck to check the data when the scale of a numeric column increases. Now the ALTER TABLE command raises this error:

postgres=# alter table tt alter COLUMN t1 type numeric (5,4);
ERROR:  numeric field overflow
DETAIL:  A field with precision 5, scale 4 must round to an absolute value less than 10^1.
STATEMENT:  alter table tt alter COLUMN t1 type numeric (5,4);

I packed a new patch with this modification. I think this "altering field type" model could cover every type in the database: make a different modification to a column's datatype for each situation. For example, when you modify the scale of a numeric column, if you consider 5.0 and 5.00 to be different, then the table file must be rewritten; otherwise it needn't be.

Wang Shuo
HighGo Software Co., Ltd.
September 16, 2013

diff -uNr b/src/backend/commands/tablecmds.c a/src/backend/commands/tablecmds.c
--- b/src/backend/commands/tablecmds.c	2013-08-31 17:11:00.529744869 +0800
+++ a/src/backend/commands/tablecmds.c	2013-09-16 16:33:49.527455560 +0800
@@ -367,7 +367,10 @@
 			AlteredTableInfo *tab, Relation rel,
 			bool recurse, bool recursing,
 			AlterTableCmd *cmd, LOCKMODE lockmode);
-static bool ATColumnChangeRequiresRewrite(Node *expr, AttrNumber varattno);
+static void ATNumericColumnChangeRequiresCheck(AlteredTableInfo *tab);
+static bool ATColumnChangeRequiresRewrite(Node *expr, AttrNumber varattno,
+							  int32 oldtypemod, int32 newtypemod,
+							  AlteredTableInfo *tab);
 static void ATExecAlterColumnType(AlteredTableInfo *tab, Relation rel,
 					  AlterTableCmd *cmd, LOCKMODE lockmode);
 static void ATExecAlterColumnGenericOptions(Relation rel, const char *colName,
@@ -7480,7 +7483,7 @@
 		newval->expr = (Expr *) transform;
 		tab->newvals = lappend(tab->newvals, newval);
-		if (ATColumnChangeRequiresRewrite(transform, attnum))
+		if (ATColumnChangeRequiresRewrite(transform, attnum, attTup->atttypmod, targettypmod, tab))
 			tab->rewrite = true;
 	}
 	else if (transform)
@@ -7520,6 +7523,102 @@
 }
 
 /*
+ * Check the data when the scale of a numeric column increases.
+ */
+static void
+ATNumericColumnChangeRequiresCheck(AlteredTableInfo *tab)
+{
+	Relation	oldrel;
+	TupleDesc	oldTupDesc;
+	int			i;
+	ListCell   *l;
+	EState	   *estate;
+	ExprContext *econtext;
+	bool	   *isnull;
+	TupleTableSlot *oldslot;
+	HeapScanDesc scan;
+	HeapTuple	tuple;
+	Snapshot	snapshot;
+	List	   *dropped_attrs = NIL;
+	ListCell   *lc;
+	Datum	   *values;
+
+	/*
+	 * Open the relation(s).  We have surely already locked the existing
+	 * table.
+	 */
+	oldrel = heap_open(tab->relid, NoLock);
+	oldTupDesc = tab->oldDesc;
+
+	for (i = 0; i < oldTupDesc->natts; i++)
+	{
+		if (oldTupDesc->attrs[i]->attisdropped)
+			dropped_attrs = lappend_int(dropped_attrs, i);
+	}
+
+	/*
+	 * Generate the constraint and default execution states
+	 */
+	estate = CreateExecutorState();
+
+	foreach(l, tab->newvals)
+	{
+		NewColumnValue *ex = lfirst(l);
+
+		/* expr already planned */
+		ex->exprstate = ExecInitExpr((Expr *) ex->expr, NULL);
+	}
+
+	econtext = GetPerTupleExprContext(estate);
+	oldslot = MakeSingleTupleTableSlot(oldTupDesc);
+
+	/* Preallocate values/isnull arrays */
+	i = oldTupDesc->natts;
+	values = (Datum *) palloc(i * sizeof(Datum));
+	isnull = (bool *) palloc(i * sizeof(bool));
+	memset(values, 0, i * sizeof(Datum));
+	memset(isnull, true, i * sizeof(bool));
+
+	/*
+	 * Scan through the rows, generating a new row if needed and then
+	 * checking all the constraints.
+	 */
+	snapshot = RegisterSnapshot(GetLatestSnapshot());
+	scan = heap_beginscan(oldrel, snapshot, 0, NULL);
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		foreach(lc, dropped_attrs)
+			isnull[lfirst_int(lc)] = true;
+
+		heap_deform_tuple(tuple, oldTupDesc, values,
Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?
On Fri, Sep 6, 2013 at 11:47 AM, Tom Lane t...@sss.pgh.pa.us wrote:

> Robert Haas robertmh...@gmail.com writes:
>> Sure, but the point is that 5. is not the same as 5.000 today. If you start whacking this around you'll be changing that behavior, I think.
>
> Yeah. And please note that no matter what the OP may think, a lot of people *do* consider that there's a useful distinction between 5.000 and 5. --- it might indicate the number of significant digits in a measurement, for example. I do not see us abandoning that just to make certain cases of ALTER TABLE faster.

But note that the current behavior is worse in this regard. If you specify a scale of 4 at the column level, then it is not possible to distinguish between 5.000 and 5. on a per-value basis within that column. If the scale at the column level were taken as the maximum scale, not the only allowed one, then that distinction could be recorded. That behavior seems more sensible to me (metrologically speaking, regardless of ALTER TABLE performance aspects), but I don't see how to get there from here with acceptable compatibility breakage.

My lesson from going over this thread is: just use numeric, not numeric(x,y), unless you are storing currency or need compatibility with a different database system (in which case, good luck with that).

Cheers,
Jeff
Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?
On 2013-09-06 01:41, Jeff Janes replied:

> On Wed, Sep 4, 2013 at 10:06 PM, wangs...@highgo.com.cn wrote:
>> On 2013-09-04 23:41, Jeff Janes replied:
>>> On Tue, Sep 3, 2013 at 9:08 PM, wangs...@highgo.com.cn wrote:
>>>> Hi, Hackers! I find that it takes a long time when I increase the scale of a numeric datatype. By checking the code, I found that's because it needs to rewrite that table's file. After checking that table's data file, I found that only the parameter n_header changed, and the data in the numeric field never changed. So I think it's not necessary to rewrite the table's file in this case. Anyone who has more ideas about this, please come talk about it!
>>>
>>> This was fixed in version 9.2. You must be using an older version.
>>
>> Thanks for your reply. To declare a column of type numeric, use the syntax NUMERIC(precision, scale). What I said is this scale, not yours.
>
> You're right, I had tested a change in precision, not in scale. Sorry.
>
> In order to avoid the rewrite, the code would have to be changed to look up the column definition and, if it specifies the scale, ignore the per-row n_header, looking at the n_header only if the column is NUMERIC with no precision or scale. That should conceptually be possible, but I don't know how hard it would be to implement--it sounds pretty invasive to me. Then if the column was altered from NUMERIC with scale to plain NUMERIC, it would have to rewrite the table to force the row-wise scale to match the old column-wise scale, whereas now that alter doesn't need a rewrite. I don't know if this would be an overall gain or not.

I modified the code for this situation. I consider it very simple: it will not modify the table file when only the scale has been increased.
I modified the code as follows:

static bool ATColumnChangeRequiresRewrite(Node *expr, AttrNumber varattno,
							  int32 oldtypemod, int32 newtypemod);

In the function ATExecAlterColumnType:

	if (ATColumnChangeRequiresRewrite(transform, attnum,
									  attTup->atttypmod, targettypmod))
		tab->rewrite = true;

In the function ATColumnChangeRequiresRewrite:

	else if (IsA(expr, FuncExpr))
	{
		int32		between = 0;

		/*
		 * Check whether funcresulttype == 1700 and funcid == 1703 when the
		 * user modifies the datatype.  If true, we know the user is
		 * modifying a numeric column; then we compute 'between'.
		 */
		if (((FuncExpr *) expr)->funcresulttype == 1700 &&
			((FuncExpr *) expr)->funcid == 1703)
			between = newtypemod - oldtypemod;

		/*
		 * If 'between' satisfies the following condition, we know the scale
		 * of the numeric was increased.
		 */
		if (between > 0 && between < 1001)
			return false;
		else
			return true;
	}

I packed a patch with this modification.

Wang Shuo
HighGo Software Co., Ltd.
September 6, 2013

diff -uNr b/src/backend/commands/tablecmds.c a/src/backend/commands/tablecmds.c
--- b/src/backend/commands/tablecmds.c	2013-08-31 17:11:00.529744869 +0800
+++ a/src/backend/commands/tablecmds.c	2013-09-04 11:20:28.797652760 +0800
@@ -367,7 +367,7 @@
 			AlteredTableInfo *tab, Relation rel,
 			bool recurse, bool recursing,
 			AlterTableCmd *cmd, LOCKMODE lockmode);
-static bool ATColumnChangeRequiresRewrite(Node *expr, AttrNumber varattno);
+static bool ATColumnChangeRequiresRewrite(Node *expr, AttrNumber varattno, int32 oldtypemod, int32 newtypemod);
 static void ATExecAlterColumnType(AlteredTableInfo *tab, Relation rel,
 					  AlterTableCmd *cmd, LOCKMODE lockmode);
 static void ATExecAlterColumnGenericOptions(Relation rel, const char *colName,
@@ -7480,7 +7480,7 @@
 		newval->expr = (Expr *) transform;
 		tab->newvals = lappend(tab->newvals, newval);
-		if (ATColumnChangeRequiresRewrite(transform, attnum))
+		if (ATColumnChangeRequiresRewrite(transform, attnum, attTup->atttypmod, targettypmod))
 			tab->rewrite = true;
 	}
 	else if (transform)
@@ -7530,7 +7530,7 @@
  * try to do that.
  */
 static bool
-ATColumnChangeRequiresRewrite(Node *expr, AttrNumber varattno)
+ATColumnChangeRequiresRewrite(Node *expr, AttrNumber varattno, int32 oldtypemod, int32 newtypemod)
 {
 	Assert(expr != NULL);
@@ -7549,6 +7549,18 @@
 			return true;
 		expr = (Node *) d->arg;
 	}
+	else if (IsA(expr, FuncExpr))
+	{
+		int32		between = 0;
+
+		if (((FuncExpr *) expr)->funcresulttype == 1700 &&
+			((FuncExpr *) expr)->funcid == 1703)
+			between = newtypemod - oldtypemod;
+
+		if (between > 0 && between < 1001)
+
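[Editor's note: for readers puzzling over the magic numbers above: NUMERIC's typmod is packed as ((precision << 16) | scale) + VARHDRSZ, so newtypemod - oldtypemod falling strictly between 0 and 1001 means the precision is unchanged and only the scale grew. A Python sketch of that arithmetic; the helper names are made up for illustration, not PostgreSQL functions.]

```python
VARHDRSZ = 4  # matches the constant used in the PostgreSQL headers

def make_numeric_typmod(precision, scale):
    """Pack (precision, scale) the way NUMERIC's typmod is encoded."""
    return ((precision << 16) | scale) + VARHDRSZ

def decode_numeric_typmod(typmod):
    """Unpack a NUMERIC typmod back into (precision, scale)."""
    tm = typmod - VARHDRSZ
    return (tm >> 16) & 0xFFFF, tm & 0xFFFF

def scale_only_increase(oldtypmod, newtypmod):
    """Reproduce the patch's test: 0 < new - old < 1001 holds exactly
    when the precision is unchanged and the scale grew (scale <= 1000),
    because any precision change shifts the difference by >= 1 << 16."""
    between = newtypmod - oldtypmod
    return 0 < between < 1001

old = make_numeric_typmod(5, 2)
new = make_numeric_typmod(5, 4)
assert scale_only_increase(old, new)                            # (5,2) -> (5,4)
assert not scale_only_increase(old, make_numeric_typmod(6, 2))  # precision change
```

Even when this test passes, though, Kevin Grittner's 123.45 example shows why skipping the data check is unsafe: raising the scale with precision fixed shrinks the number of allowed integral digits, so existing values can violate the new definition.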
Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?
wangs...@highgo.com.cn wrote:

> I modified the code for this situation. I consider it very simple. It will not modify the table file when only the scale has been increased.

This patch would allow data in a column which is not consistent with the column definition:

test=# create table n (val numeric(5,2));
CREATE TABLE
test=# insert into n values ('123.45');
INSERT 0 1
test=# select * from n;
  val
--------
 123.45
(1 row)

test=# alter table n alter column val type numeric(5,4);
ALTER TABLE
test=# select * from n;
  val
--------
 123.45
(1 row)

Without your patch the ALTER TABLE command gets this error (as it should):

test=# alter table n alter column val type numeric(5,4);
ERROR:  numeric field overflow
DETAIL:  A field with precision 5, scale 4 must round to an absolute value less than 10^1.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?
On Thu, Sep 5, 2013 at 8:53 PM, Alvaro Herrera alvhe...@2ndquadrant.com wrote:

> Greg Stark wrote:
>> The main difficulty is that Postgres is very extensible. So to implement this you need to think bigger than NUMERIC. It should also be possible to alter a column from varchar(5) to varchar(10), for example (but not the other way around).
>
> We already allow that. See commits 8f9fe6edce358f7904e0db119416b4d1080a83aa and 3cc0800829a6dda5347497337b0cf43848da4acf

Ah, nice. I missed that.

So the issue here is that NUMERIC has an additional concept of scale that is buried in the values, and that this scale is set based on the typmod that was in effect when the value was stored. If you change the typmod on the column, it currently rescales all the values in the table? There's even a comment to that effect on the commit you pointed at.

But I wonder if we could just declare that that's not what the scale typmod does: that it's just a maximum scale, and it's perfectly valid for NUMERIC data with lower scales to be stored in a column than the typmod says. In a way the current behaviour is like bpchar, but it would be nice if it were more like varchar.

--
greg
Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?
On 09/06/2013 07:57 PM, Robert Haas wrote:

> On Fri, Sep 6, 2013 at 12:34 PM, Greg Stark st...@mit.edu wrote:
>> But I wonder if we could just declare that that's not what the scale typmod does. That it's just a maximum scale but it's perfectly valid for NUMERIC data with lower scales to be stored in a column than the typmod says. In a way the current behaviour is like bpchar but it would be nice if it was more like varchar
>
> Sure, but the point is that 5. is not the same as 5.000 today. If you start whacking this around you'll be changing that behavior, I think.

So we already get it wrong by rewriting?

--
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ
Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?
On Fri, Sep 6, 2013 at 12:34 PM, Greg Stark st...@mit.edu wrote:

> But I wonder if we could just declare that that's not what the scale typmod does. That it's just a maximum scale but it's perfectly valid for NUMERIC data with lower scales to be stored in a column than the typmod says. In a way the current behaviour is like bpchar but it would be nice if it was more like varchar

Sure, but the point is that 5. is not the same as 5.000 today. If you start whacking this around you'll be changing that behavior, I think.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?
Robert Haas robertmh...@gmail.com writes:

> Sure, but the point is that 5. is not the same as 5.000 today. If you start whacking this around you'll be changing that behavior, I think.

Yeah. And please note that no matter what the OP may think, a lot of people *do* consider that there's a useful distinction between 5.000 and 5. --- it might indicate the number of significant digits in a measurement, for example. I do not see us abandoning that just to make certain cases of ALTER TABLE faster.

There was some upthread discussion about somehow storing the scale info at the column level rather than the individual-datum level. If we could do that, then it'd be possible to make this type of ALTER TABLE fast. However, the work involved to do that seems enormously out of proportion to the benefit, mainly because there just isn't any convenient way to trace a Datum to its source column, even assuming it's got one.

regards, tom lane
Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?
Greg Stark st...@mit.edu writes:

> But I wonder if we could just declare that that's not what the scale typmod does. That it's just a maximum scale but it's perfectly valid for NUMERIC data with lower scales to be stored in a column than the typmod says. In a way the current behaviour is like bpchar but it would be nice if it was more like varchar

BTW, note that if you want varying scale in a column, you can declare it as unconstrained numeric. So that case corresponds to text, whereas, as you rightly say, numeric(m,n) is more like bpchar(n). It's true there is nothing corresponding to varchar(n), but how much do you really need that case? The SQL standard didn't see fit to invent a variant of numeric that worked that way, so they at least aren't buying it.

regards, tom lane
Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?
On Fri, Sep 6, 2013 at 9:34 AM, Greg Stark st...@mit.edu wrote:

> But I wonder if we could just declare that that's not what the scale typmod does. That it's just a maximum scale but it's perfectly valid for NUMERIC data with lower scales to be stored in a column than the typmod says. In a way the current behaviour is like bpchar but it would be nice if it was more like varchar

I agree that this makes more sense than what is currently done. But are we going to break backwards compatibility to achieve it? Do the standards specify a behavior here?

Cheers,
Jeff
Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?
On Fri, Sep 6, 2013 at 2:34 PM, Hannu Krosing ha...@2ndquadrant.com wrote:

> On 09/06/2013 07:57 PM, Robert Haas wrote:
>> On Fri, Sep 6, 2013 at 12:34 PM, Greg Stark st...@mit.edu wrote:
>>> But I wonder if we could just declare that that's not what the scale typmod does. That it's just a maximum scale but it's perfectly valid for NUMERIC data with lower scales to be stored in a column than the typmod says. In a way the current behaviour is like bpchar but it would be nice if it was more like varchar
>>
>> Sure, but the point is that 5. is not the same as 5.000 today. If you start whacking this around you'll be changing that behavior, I think.
>
> So we already get it wrong by rewriting?

Ah, no, I don't think so. If you have 5.0 and lower the scale, it'll truncate off some of those zeroes to make it fit.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
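[Editor's note: the truncation Robert describes when lowering the scale can be mimicked with Python's decimal module; this is only an analogy to what the ALTER TABLE rewrite does, not server code.]

```python
from decimal import Decimal, ROUND_HALF_UP

# Rewriting 5.000 into a column with scale 1 rounds away trailing digits,
# analogous to the rescaling ALTER TABLE performs when the scale is lowered.
v = Decimal("5.000")
rescaled = v.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)

assert str(rescaled) == "5.0"   # the extra zeroes are gone
assert rescaled == v            # the numeric value is unchanged
```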
Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?
On 9/5/13 10:47 PM, Noah Misch wrote:

> On Thu, Sep 05, 2013 at 10:41:25AM -0700, Jeff Janes wrote:
>> In order to avoid the rewrite, the code would have to be changed to look up the column definition and if it specifies the scale, then ignore the per-row n_header, and look at the n_header only if the column is NUMERIC with no precision or scale. That should conceptually be possible, but I don't know how hard it would be to implement--it sounds pretty invasive to me. Then if the column was altered from NUMERIC with scale to be a plain NUMERIC, it would have to rewrite the table to enforce the row-wise scale to match the old column-wise scale. Where as now that alter doesn't need a re-write. I don't know if this would be an overall gain or not.
>
> Invasive indeed. The type-supplementary data would need to reach essentially everywhere we now convey a type OID. Compare the invasiveness of adding collation support. However, this is not the first time it would have been useful. We currently store a type OID in every array and composite datum. That's wasteful and would be unnecessary if we reliably marshalled similar information to all the code needing it. Given a few more use cases, the effort would perhaps start to look credible relative to the benefits.

Aren't there cases where PL/pgSQL gets hosed by this? Or even functions in general?

I also have a vague memory of some features that would benefit from being able to have typmod info available at a tuple level in a table, not just for the entire table. Unfortunately I can't remember why we wanted that... (Alvaro, do you recall? I'm pretty sure it's something we'd discussed at some point.)

--
Jim C. Nasby, Data Architect
j...@nasby.net
512.569.9461 (cell)
http://jim.nasby.net
Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?
On Wed, Sep 4, 2013 at 10:06 PM, wangs...@highgo.com.cn wrote:

> On 2013-09-04 23:41, Jeff Janes replied:
>> On Tue, Sep 3, 2013 at 9:08 PM, wangs...@highgo.com.cn wrote:
>>> Hi, Hackers! I find that it takes a long time when I increase the scale of a numeric datatype. By checking the code, I found that's because it needs to rewrite that table's file. After checking that table's data file, I found that only the parameter n_header changed, and the data in the numeric field never changed. So I think it's not necessary to rewrite the table's file in this case. Anyone who has more ideas about this, please come talk about it!
>>
>> This was fixed in version 9.2. You must be using an older version.
>
> Thanks for your reply. To declare a column of type numeric, use the syntax NUMERIC(precision, scale). What I said is this scale, not yours.

You're right, I had tested a change in precision, not in scale. Sorry.

In order to avoid the rewrite, the code would have to be changed to look up the column definition and, if it specifies the scale, ignore the per-row n_header, looking at the n_header only if the column is NUMERIC with no precision or scale. That should conceptually be possible, but I don't know how hard it would be to implement--it sounds pretty invasive to me. Then if the column was altered from NUMERIC with scale to plain NUMERIC, it would have to rewrite the table to force the row-wise scale to match the old column-wise scale, whereas now that alter doesn't need a rewrite. I don't know if this would be an overall gain or not.

Cheers,
Jeff
Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?
On Thu, Sep 5, 2013 at 6:41 PM, Jeff Janes jeff.ja...@gmail.com wrote:

> Then if the column was altered from NUMERIC with scale to be a plain NUMERIC, it would have to rewrite the table to enforce the row-wise scale to match the old column-wise scale. Where as now that alter doesn't need a re-write. I don't know if this would be an overall gain or not.

We've talked about cases like this in the past. It's mostly a SMOP, and I think it may already be on the TODO.

The main difficulty is that Postgres is very extensible. So to implement this you need to think bigger than NUMERIC. It should also be possible to alter a column from varchar(5) to varchar(10), for example (but not the other way around).

One way to do it would be to extend pg_type to have another column that specifies a function. That function would take the old and new typmod (which is what stores the 5 in varchar(5)) and tell the server whether it's a safe change to make without rechecking.

Another way might be to overload the cast functions, though they currently receive no information about the typmod. That might have the benefit of being able to handle things like varchar(5) -> text, though.

But it has to be that general. Any data type should be able to specify whether an old and new typmod are compatible.

--
greg
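[Editor's note: Greg's first proposal amounts to a per-type callback keyed on (oldtypmod, newtypmod). A toy Python sketch of the idea follows; all names here are hypothetical illustrations, and nothing like this registry exists verbatim in the server.]

```python
VARHDRSZ = 4  # varchar(n)'s typmod stores n + VARHDRSZ

def varchar_typmod_compatible(oldtypmod, newtypmod):
    """Widening varchar(n) is always binary-compatible; narrowing is not."""
    return newtypmod >= oldtypmod

# Hypothetical per-type registry, standing in for an extra pg_type column
# that would name each type's typmod-compatibility function.
typmod_compatibility = {"varchar": varchar_typmod_compatible}

def needs_rewrite(typename, oldtypmod, newtypmod):
    """Rewrite (or at least recheck) unless the type's callback says the
    typmod change is safe; types without a callback get the safe default."""
    check = typmod_compatibility.get(typename)
    return check is None or not check(oldtypmod, newtypmod)

assert not needs_rewrite("varchar", 5 + VARHDRSZ, 10 + VARHDRSZ)  # widen: safe
assert needs_rewrite("varchar", 10 + VARHDRSZ, 5 + VARHDRSZ)      # narrow: rewrite
assert needs_rewrite("numeric", 0, 0)  # no callback registered: safe default
```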
Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?
Greg Stark wrote:

> The main difficulty is that Postgres is very extensible. So to implement this you need to think bigger than NUMERIC. It should also be possible to alter a column from varchar(5) to varchar(10), for example (but not the other way around).

We already allow that. See commits 8f9fe6edce358f7904e0db119416b4d1080a83aa and 3cc0800829a6dda5347497337b0cf43848da4acf

--
Álvaro Herrera
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?
On Tue, Sep 3, 2013 at 9:08 PM, wangs...@highgo.com.cn wrote:

> Hi, Hackers! I find that it takes a long time when I increase the scale of a numeric datatype. By checking the code, I found that's because it needs to rewrite that table's file. After checking that table's data file, I found that only the parameter n_header changed, and the data in the numeric field never changed. So I think it's not necessary to rewrite the table's file in this case. Anyone who has more ideas about this, please come talk about it!

This was fixed in version 9.2. You must be using an older version.

Cheers,
Jeff