[GENERAL] Limit+Offset query wrong result in Postgres 9.0.3 ?
Hi. Is this a bug in Postgres? If yes, is it fixed in the latest release? The second query should return 2 rows instead of 1.

create table t(i int);
insert into t values(1);
insert into t values(2);
insert into t values(3);

pgdb=# select i from t order by i limit 9223372036854775806 offset 1;
 i
---
 2
 3
(2 rows)

pgdb=# select i from t order by i limit 9223372036854775807 offset 1;
 i
---
 2
(1 row)

My server version is postgres (PostgreSQL) 9.0.3.

Thanks in advance!
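It is suggestive that 9223372036854775807 is exactly the largest signed 64-bit integer, so OFFSET 1 pushes any internal limit-plus-offset computation past INT64_MAX. A minimal Python sketch of the suspected wraparound (this is my illustration of the arithmetic only, not PostgreSQL's actual code):

```python
INT64_MAX = 2**63 - 1   # 9223372036854775807, the LIMIT in the failing query
INT64_MIN = -2**63

def wrap_int64(n):
    """Emulate two's-complement signed 64-bit arithmetic."""
    return (n + 2**63) % 2**64 - 2**63

# LIMIT 9223372036854775806 OFFSET 1: the sum is exactly INT64_MAX,
# nothing wraps, and the query behaves as expected (2 rows).
assert wrap_int64((INT64_MAX - 1) + 1) == INT64_MAX

# LIMIT 9223372036854775807 OFFSET 1: the sum wraps to a negative value,
# so code that trusts it can stop the scan far too early (1 row).
assert wrap_int64(INT64_MAX + 1) == INT64_MIN
```

If that is indeed the cause, any LIMIT small enough that limit + offset stays within the int64 range avoids the symptom.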
Re: [GENERAL] Expensive log_line_prefix ?
Tom Lane escribió:
> Sergey Konoplev writes:
>> On Thu, Oct 11, 2012 at 12:19 PM, Sahagian, David wrote:
>>> Are there any "extra" costs to logging all this cool stuff ?
>>>
>>> log_line_prefix = '%m %a %u %p %c %m %v %x'
>>
>> The formatting cost is the only one, and it is so insignificant that it
>> is not worth bothering with.
>
> IIRC, %m would imply a gettimeofday call, which might be expensive
> depending on your OS and hardware. I think all the other info is
> "just sitting around", though.

Yeah, and having two %m means two gettimeofday calls. Also, %c sort of implies %p, so having both is probably just a waste.

One other point is that it's probably a good idea to put a %q in there, so that processes that don't have things like %v and %x don't have to print a bunch of zeros (and also to prevent %a and %u from adding spaces unnecessarily).

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
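Putting those suggestions together, a prefix along these lines (an illustrative setting, not a recommendation) drops the duplicate %m, prefers %c over the redundant %p, and places %q before the per-session fields:

```
# postgresql.conf sketch: one timestamp, session id instead of pid, and %q
# so background processes stop the prefix before the session-only fields.
log_line_prefix = '%m %q%a %u %c %v %x '
```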
Re: [GENERAL] Postgres DB Migration from 8.3 to 9.1
> We have our production environment database server in Postgres 8.3
> version. We have planned to upgrade to the latest version, 9.1. Dump from
> 8.3 and restore in Postgres 9.1 takes more than 5 hours. Any other quick
> method to upgrade from 8.3 to 9.1? We need to reduce our downtime below
> 1 hour. Any possibilities?

You should consider 9.2. Of course, you need to test your application against 9.2 before migrating, because, as you mentioned, you are on 8.3, which your application may be tied to. There are some changes to functions, string handling, etc. which are more advanced in 9.2.

Once you agree that the application is good to go with 9.2, think about the option with the least (or at least an affordable) downtime. I would say use pg_upgrade, which has minimal downtime; I believe pg_upgrade supports upgrades from 8.3 onwards. If you need no downtime, or truly minimal downtime, then prefer Slony-I (trigger-based replication).

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/
Re: [GENERAL] Postgres DB Migration from 8.3 to 9.1
On 10/11/12 7:15 PM, Vishalakshi Navaneethakrishnan wrote:
> We have our production environment database server in Postgres 8.3
> version. We have planned to upgrade to the latest version, 9.1. Dump from
> 8.3 and restore in Postgres 9.1 takes more than 5 hours. Any other quick
> method to upgrade from 8.3 to 9.1? We need to reduce our downtime below
> 1 hour. Any possibilities?

is it the dump or the restore taking the lion's share of that time?

I don't know if pg_upgrade supports 8.3, but that would be one approach. getting it set up to work correctly can require some tinkering, but once you have that sorted, you start with a base backup of the 8.3 file system, and pg_upgrade 'converts' it to the newer version. you need both runtimes set up side by side so either can be run, as pg_upgrade will need to start the old version in order to dump its metadata catalogs prior to migrating the data files. if you put both data directories on the same file system, it can use hard linking to 'move' the datafiles.

--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast
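As a sketch of what that looks like (all paths here are hypothetical; substitute the bin and data directories of your own installations), with both sets of binaries installed side by side:

```
# pg_upgrade needs both old and new server binaries; --link uses hard
# links instead of copying, so both data directories must be on the same
# file system. Run with both clusters stopped.
pg_upgrade \
  --old-bindir=/usr/pgsql-8.3/bin \
  --new-bindir=/usr/pgsql-9.1/bin \
  --old-datadir=/var/lib/pgsql/8.3/data \
  --new-datadir=/var/lib/pgsql/9.1/data \
  --link
```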
Re: [GENERAL] Postgres DB Migration from 8.3 to 9.1
On 12/10/12 15:15, Vishalakshi Navaneethakrishnan wrote:
> Hi Friends,
>
> We have our production environment database server in Postgres 8.3
> version. We have planned to upgrade to the latest version, 9.1. Dump from
> 8.3 and restore in Postgres 9.1 takes more than 5 hours. Any other quick
> method to upgrade from 8.3 to 9.1? We need to reduce our downtime below
> 1 hour. Any possibilities?
>
> Thanks in advance.
>
> --
> Best Regards,
> Vishalakshi.N

Have you considered migrating to 9.2? It has some performance and other useful enhancements over 9.1. I suspect that testing on early versions of 9.2.2 might be a good idea, with the intent of migrating to 9.2.2 when it is ready for production use.

Cheers,
Gavin
Re: [GENERAL] AS s(a) vs. AS a
Thalis Kalfigkopoulos writes:
> I see in the docs the following:
> SELECT current_date + s.a AS dates FROM generate_series(0,14,7) AS s(a);
> Trying it as:
> SELECT current_date + a AS dates FROM generate_series(0,14,7) AS a;
> yields exactly the same result.
> Does the finer granularity/expressiveness offer some extra
> functionality in some cases or is it a matter of style?

IIRC, it's important for functions returning composite types. For a function returning a scalar, "AS a" is equivalent to "AS a(a)".

regards, tom lane
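A sketch of the composite case (the function and names here are invented for illustration): when the function returns more than one column, the alias list is what lets you rename the individual columns, which a bare "AS a" cannot do.

```sql
-- A set-returning function with a composite (two-column) result:
CREATE FUNCTION minmax() RETURNS TABLE (lo int, hi int)
  LANGUAGE sql AS $$ SELECT 1, 10 $$;

-- "AS m" only renames the table alias; "AS m(a, b)" renames the columns too:
SELECT m.a, m.b FROM minmax() AS m(a, b);
```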
[GENERAL] Postgres DB Migration from 8.3 to 9.1
Hi Friends,

We have our production environment database server in Postgres 8.3 version. We have planned to upgrade to the latest version, 9.1. Dump from 8.3 and restore in Postgres 9.1 takes more than 5 hours. Is there any other quick method to upgrade from 8.3 to 9.1? We need to reduce our downtime below 1 hour. Any possibilities?

Thanks in advance.

--
Best Regards,
Vishalakshi.N
[GENERAL] AS s(a) vs. AS a
Hi all.

I see in the docs the following:

SELECT current_date + s.a AS dates FROM generate_series(0,14,7) AS s(a);

Trying it as:

SELECT current_date + a AS dates FROM generate_series(0,14,7) AS a;

yields exactly the same result. Does the finer granularity/expressiveness offer some extra functionality in some cases, or is it a matter of style?

regards,
--Thalis
Re: [GENERAL] How to raise index points when equal and like is used with gist ?
Condor writes:
> explain analyze SELECT * FROM table WHERE phone LIKE '12%' AND
> firstname = 'OLEG' AND middlename || lastname LIKE '%KUZNICOV%IGORU%';
>
>                              QUERY PLAN
> ----------------------------------------------------------------------
>  Bitmap Heap Scan on abonats_tbl  (cost=1638.89..1816.65 rows=1
>      width=601) (actual time=219.793..219.793 rows=0 loops=1)
>    Recheck Cond: ((firstname = 'OLEG'::text) AND (phone ~~ '12%'::text))
>    Filter: ((middlename || lastname) ~~ '%KUZNICOV%IGORU%'::text)
>    Rows Removed by Filter: 65
>    ->  BitmapAnd  (cost=1638.89..1638.89 rows=45 width=0) (actual
>          time=219.197..219.197 rows=0 loops=1)
>          ->  Bitmap Index Scan on table_firstname_idx  (cost=0.00..34.42
>                rows=1690 width=0) (actual time=0.867..0.867 rows=1732 loops=1)
>                Index Cond: (firstname = 'OLEG'::text)
>          ->  Bitmap Index Scan on table_phonegist_idx  (cost=0.00..1604.22
>                rows=33995 width=0) (actual time=217.639..217.639 rows=33256 loops=1)
>                Index Cond: (phone ~~ '12%'::text)
>  Total runtime: 220.426 ms

Are you sure that server is 9.2? Because that looks like a planner bug we squelched some time ago, wherein it was way too enthusiastic about adding more indexes to a BitmapAnd. If it is 9.2, please send a self-contained test case, that is, some test data (and settings, if you're using nondefault ones) that makes it do this.

regards, tom lane
Re: [GENERAL] non-integer constant in ORDER BY: why exactly, and documentation?
> I think the argument was that it's almost certainly a mistake, so we're
> more helpful by throwing an error than by silently executing a query
> that probably won't do what the user was expecting. In this particular
> example, it seems quite likely that the programmer meant "foo" (ie a
> quoted column reference) and got the quote style wrong ...

I guess it depends what you mean by mistake. In this case, here was the actual code involved:

if ($GLOBALS['AG_DEMO_MODE']) {
    $label_field = "'XX, XXX'";
} else {
    $label_field = $object . '_name(' . $id_field . ')';  // e.g., client_name(client_id)
}
$op .= selectto('objectPickerPickList', $obj_opt)
    . do_pick_sql("SELECT $id_field AS value,$label_field AS label FROM "
        . $def['table'] . " ORDER BY $label_field") ...

So yes, there are lots of workarounds (and thanks all for the suggestions), including for this case just "ORDER BY 2". And there surely are better ways to code this, but finding areas for potential improvement is a target-rich environment, and one usually in need of prioritization. In this case, there's no reason the code above _couldn't_ have been adequately functional, had not some well-meaning software gotten in the way by trying to watch out for me... ;)

Then again, my personal Postgres score of times it has helped me versus times it has not is probably about 1.5 million to 7, so don't hear this as a giant grumble or complaint. I really was more curious than anything...

Cheers,
Ken

On Thu, Oct 11, 2012 at 2:06 PM, Tom Lane wrote:
> Ken Tanzer writes:
>> Hi. I recently ran a query that generated the same error as this:
>> SELECT * FROM generate_series(1,10) ORDER BY 'foo';
>> ERROR: non-integer constant in ORDER BY
>>
>> I am curious though about why this "limitation" exists. I get that
>> integer constants are reserved for sorting by column numbers. But if
>> Postgres already knows that it's a non-integer constant, why not let
>> it go through with the (admittedly pointless) ordering?
>
> I think the argument was that it's almost certainly a mistake, so we're
> more helpful by throwing an error than by silently executing a query
> that probably won't do what the user was expecting. In this particular
> example, it seems quite likely that the programmer meant "foo" (ie a
> quoted column reference) and got the quote style wrong ...
>
> regards, tom lane

--
AGENCY Software
A data system that puts you in control
http://agency-software.org/
ken.tan...@agency-software.org
(253) 245-3801
Re: [GENERAL] Expensive log_line_prefix ?
Sergey Konoplev writes:
> On Thu, Oct 11, 2012 at 12:19 PM, Sahagian, David wrote:
>> Are there any "extra" costs to logging all this cool stuff ?
>>
>> log_line_prefix = '%m %a %u %p %c %m %v %x'
>
> The formatting cost is the only one, and it is so insignificant that it
> is not worth bothering with.

IIRC, %m would imply a gettimeofday call, which might be expensive depending on your OS and hardware. I think all the other info is "just sitting around", though.

regards, tom lane
Re: [GENERAL] How to raise index points when equal and like is used with gist ?
On Thu, Oct 11, 2012 at 2:23 AM, Condor wrote:
> explain analyze SELECT * FROM table WHERE phone LIKE '12%' AND
> firstname = 'OLEG' AND middlename || lastname LIKE '%KUZNICOV%IGORU%';
>
>                              QUERY PLAN
> ----------------------------------------------------------------------
>  Bitmap Heap Scan on abonats_tbl  (cost=1638.89..1816.65 rows=1
>      width=601) (actual time=219.793..219.793 rows=0 loops=1)
>    Recheck Cond: ((firstname = 'OLEG'::text) AND (phone ~~ '12%'::text))
>    Filter: ((middlename || lastname) ~~ '%KUZNICOV%IGORU%'::text)
>    Rows Removed by Filter: 65
>    ->  BitmapAnd  (cost=1638.89..1638.89 rows=45 width=0) (actual
>          time=219.197..219.197 rows=0 loops=1)
>          ->  Bitmap Index Scan on table_firstname_idx  (cost=0.00..34.42
>                rows=1690 width=0) (actual time=0.867..0.867 rows=1732 loops=1)
>                Index Cond: (firstname = 'OLEG'::text)
>          ->  Bitmap Index Scan on table_phonegist_idx  (cost=0.00..1604.22
>                rows=33995 width=0) (actual time=217.639..217.639 rows=33256 loops=1)
>                Index Cond: (phone ~~ '12%'::text)
>  Total runtime: 220.426 ms
>
> My question is: Is there any way to make postgresql first search on the
> field with the equality condition (I have an index there) and only then
> filter the result on the other conditions, first the gist index and then
> the rest? I think maybe I should play with the index costs.

What about dropping the table_phonegist_idx index? Is it used somewhere else?

ps. BTW, how do you cope with the pg_trgm ASCII alphanumeric restriction? Transliteration?

--
Sergey Konoplev
a database and software architect
http://www.linkedin.com/in/grayhemp

Jabber: gray...@gmail.com Skype: gray-hemp Phone: +14158679984
Re: [GENERAL] Expensive log_line_prefix ?
On 10/11/12 2:57 PM, Sergey Konoplev wrote:
> On Thu, Oct 11, 2012 at 12:19 PM, Sahagian, David wrote:
>> Are there any "extra" costs to logging all this cool stuff ?
>>
>> log_line_prefix = '%m %a %u %p %c %m %v %x'
>
> The formatting cost is the only one, and it is so insignificant that it
> is not worth bothering with.

i'd think the time spent writing the extra output to the log file would be the most significant part of it, along with any disk IO contention if those logs are on the same physical storage as the database.

--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast
Re: [GENERAL] Disabling inferred "group by" columns in 9.1?
Ok. Thanks for your help.

On Oct 11, 2012, at 3:31 PM, Tom Lane wrote:
> "Evan D. Hoffman" writes:
>> Because we have both 9.0 and 9.1 running, and a query that succeeds
>> under 9.1 was failing under 9.0. Can I assume the answer then is "no"?
>
> There are any number of reasons a query might work in version N and not
> in N-1. If you've got requirements like that, you should be developing
> on the oldest version you're worried about supporting ...
>
> regards, tom lane
Re: [GENERAL] Expensive log_line_prefix ?
On Thu, Oct 11, 2012 at 12:19 PM, Sahagian, David wrote:
> Are there any "extra" costs to logging all this cool stuff ?
>
> log_line_prefix = '%m %a %u %p %c %m %v %x'

The formatting cost is the only one, and it is so insignificant that it is not worth bothering with.

--
Sergey Konoplev
a database and software architect
http://www.linkedin.com/in/grayhemp

Jabber: gray...@gmail.com Skype: gray-hemp Phone: +14158679984
Re: [GENERAL] non-integer constant in ORDER BY: why exactly, and documentation?
On Thu, Oct 11, 2012 at 2:07 PM, A.M. wrote:
> On Oct 11, 2012, at 4:48 PM, Ken Tanzer wrote:
> select * from generate_series(1,10) order by coalesce('foo');

Another workaround is

select bar.* from generate_series(1,3) as bar, (values ('foo')) as foo order by foo;

or even simpler

select * from generate_series(1,3) order by (values ('foo'));

or my favorite

select * from generate_series(1,3) order by 'foo'::text;

--
Sergey Konoplev
a database and software architect
http://www.linkedin.com/in/grayhemp

Jabber: gray...@gmail.com Skype: gray-hemp Phone: +14158679984
Re: [GENERAL] moving from MySQL to pgsql
On 10/11/12 2:07 AM, Vineet Deodhar wrote:
> To give an example, I have tables for storing master records (year
> master, security master, etc.) for which pkid TINYINT is just
> sufficient. These pkid's are used as fk constraints in tables for
> storing business transactions. The no. of rows in business transactions
> tables is in millions. Here, I NEED to worry about the storage space
> occupied by the pkid fields.

IMHO, it's not significant to save a byte per field unless there are 30 or 100 of these fields in a single row.

--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast
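For what it's worth, Postgres has no TINYINT; a sketch of the closest mapping (the table and column names here are invented for illustration) uses smallint at 2 bytes per value, and even that saving is often absorbed by row alignment padding:

```sql
-- smallint is 2 bytes, int is 4; there is no 1-byte integer type.
CREATE TABLE year_master (
    pkid  smallint PRIMARY KEY,
    label text NOT NULL
);
```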
Re: [GENERAL] non-integer constant in ORDER BY: why exactly, and documentation?
On Oct 11, 2012, at 4:48 PM, Ken Tanzer wrote:
> Hi. I recently ran a query that generated the same error as this:
>
> SELECT * FROM generate_series(1,10) ORDER BY 'foo';
> ERROR: non-integer constant in ORDER BY
> LINE 1: SELECT * FROM generate_series(1,10) ORDER BY 'foo';
>
> The query was generated by an app (and the result somewhat inadvertent),
> so it was easy enough to change, and I'm not asking here about a
> practical problem.
>
> I am curious though about why this "limitation" exists. I get that
> integer constants are reserved for sorting by column numbers. But if
> Postgres already knows that it's a non-integer constant, why not let it
> go through with the (admittedly pointless) ordering?
>
> Also, I couldn't see that this was explicitly mentioned in the
> documentation. The relevant pieces seemed to be:
>
> "Each expression can be the name or ordinal number of an output column
> (SELECT list item), or it can be an arbitrary expression formed from
> input-column values."
>
> followed closely by:
>
> "It is also possible to use arbitrary expressions in the ORDER BY
> clause, including columns that do not appear in the SELECT output list."
> (http://www.postgresql.org/docs/9.1/static/sql-select.html#SQL-ORDERBY)
>
> And looking at the expressions page
> (http://www.postgresql.org/docs/8.4/static/sql-expressions.html), the
> first type of value expression is a "constant or literal expression."
> So nothing seems to explicitly rule out a literal ORDER BY.
>
> I'm not sure if it would do violence to something I'm missing, but would
> the following combined statement work for the documentation?
>
> "Each expression can be the name or ordinal number of an output column
> (SELECT list item), or it can be an arbitrary expression. The expression
> can include column values--whether they appear in the SELECT output list
> or not. An expression may not, however, consist solely of a non-integer
> constant. And an integer constant will be interpreted as the ordinal
> number of an output column."

Apparently, the parser tries to pull a column index out of any constant appearing in that position. It can be trivially worked around:

select * from generate_series(1,10) order by coalesce('foo');

but that doesn't help if your query is automatically generated.

Cheers,
M
Re: [GENERAL] non-integer constant in ORDER BY: why exactly, and documentation?
Ken Tanzer writes:
> Hi. I recently ran a query that generated the same error as this:
> SELECT * FROM generate_series(1,10) ORDER BY 'foo';
> ERROR: non-integer constant in ORDER BY
>
> I am curious though about why this "limitation" exists. I get that
> integer constants are reserved for sorting by column numbers. But if
> Postgres already knows that it's a non-integer constant, why not let it
> go through with the (admittedly pointless) ordering?

I think the argument was that it's almost certainly a mistake, so we're more helpful by throwing an error than by silently executing a query that probably won't do what the user was expecting. In this particular example, it seems quite likely that the programmer meant "foo" (ie a quoted column reference) and got the quote style wrong ...

regards, tom lane
Re: [GENERAL] non-integer constant in ORDER BY: why exactly, and documentation?
From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Ken Tanzer
Sent: Thursday, October 11, 2012 4:49 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] non-integer constant in ORDER BY: why exactly, and documentation?

> Hi. I recently ran a query that generated the same error as this:
>
> SELECT * FROM generate_series(1,10) ORDER BY 'foo';
> ERROR: non-integer constant in ORDER BY
> LINE 1: SELECT * FROM generate_series(1,10) ORDER BY 'foo';
>
> The query was generated by an app (and the result somewhat inadvertent),
> so it was easy enough to change, and I'm not asking here about a
> practical problem.
>
> I am curious though about why this "limitation" exists. I get that
> integer constants are reserved for sorting by column numbers. But if
> Postgres already knows that it's a non-integer constant, why not let it
> go through with the (admittedly pointless) ordering?
>
> Also, I couldn't see that this was explicitly mentioned in the
> documentation. The relevant pieces seemed to be:
>
> "Each expression can be the name or ordinal number of an output column
> (SELECT list item), or it can be an arbitrary expression formed from
> input-column values."
>
> followed closely by:
>
> "It is also possible to use arbitrary expressions in the ORDER BY
> clause, including columns that do not appear in the SELECT output list."
> (http://www.postgresql.org/docs/9.1/static/sql-select.html#SQL-ORDERBY)
>
> And looking at the expressions page
> (http://www.postgresql.org/docs/8.4/static/sql-expressions.html), the
> first type of value expression is a "constant or literal expression."
> So nothing seems to explicitly rule out a literal ORDER BY.
>
> I'm not sure if it would do violence to something I'm missing, but would
> the following combined statement work for the documentation?
>
> "Each expression can be the name or ordinal number of an output column
> (SELECT list item), or it can be an arbitrary expression. The expression
> can include column values--whether they appear in the SELECT output list
> or not. An expression may not, however, consist solely of a non-integer
> constant. And an integer constant will be interpreted as the ordinal
> number of an output column."

I would categorize this under "help people avoid shooting themselves in the foot". A possible situation is that the user meant to use double quotes to specify an identifier but instead used single quotes. Since a literal constant would not impact the sort order, the planner should either discard it silently or throw an exception. The exception is preferred, since the presence of a constant literal likely means whatever generated the query is broken and should be fixed.

The documentation tweak is probably overkill given the rarity of the issue and the fact that the system generates an appropriate error message when it does occur.

David J.
[GENERAL] non-integer constant in ORDER BY: why exactly, and documentation?
Hi. I recently ran a query that generated the same error as this:

SELECT * FROM generate_series(1,10) ORDER BY 'foo';
ERROR: non-integer constant in ORDER BY
LINE 1: SELECT * FROM generate_series(1,10) ORDER BY 'foo';

The query was generated by an app (and the result somewhat inadvertent), so it was easy enough to change, and I'm not asking here about a practical problem.

I am curious though about why this "limitation" exists. I get that integer constants are reserved for sorting by column numbers. But if Postgres already knows that it's a non-integer constant, why not let it go through with the (admittedly pointless) ordering?

Also, I couldn't see that this was explicitly mentioned in the documentation. The relevant pieces seemed to be:

"Each expression can be the name or ordinal number of an output column (SELECT list item), or it can be an arbitrary expression formed from input-column values."

followed closely by:

"It is also possible to use arbitrary expressions in the ORDER BY clause, including columns that do not appear in the SELECT output list."
(http://www.postgresql.org/docs/9.1/static/sql-select.html#SQL-ORDERBY)

And looking at the expressions page (http://www.postgresql.org/docs/8.4/static/sql-expressions.html), the first type of value expression is a "constant or literal expression." So nothing seems to explicitly rule out a literal ORDER BY.

I'm not sure if it would do violence to something I'm missing, but would the following combined statement work for the documentation?

"Each expression can be the name or ordinal number of an output column (SELECT list item), or it can be an arbitrary expression. The expression can include column values--whether they appear in the SELECT output list or not. An expression may not, however, consist solely of a non-integer constant. And an integer constant will be interpreted as the ordinal number of an output column."

Thanks in advance.

Ken

--
AGENCY Software
A data system that puts you in control
http://agency-software.org/
ken.tan...@agency-software.org
(253) 245-3801
Re: [GENERAL] Disabling inferred "group by" columns in 9.1?
"Evan D. Hoffman" writes: > Because we have both 9.0 and 9.1 running, and a query that succeeds > under 9.1 was failing under 9.0. Can I assume the answer then is > "no"? There are any number of reasons a query might work in version N and not in N-1. If you've got requirements like that, you should be developing on the oldest version you're worried about supporting ... regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] Expensive log_line_prefix ?
In 9.1.x, are there any "extra" costs to logging all this cool stuff ?

log_line_prefix = '%m %a %u %p %c %m %v %x'

Are any of these expensive ? Or is the only cost the number of bytes that the textual representation of their values takes in the written log file ?

Thanks,
-dvs-
Re: [GENERAL] Disabling inferred "group by" columns in 9.1?
Evan D. Hoffman wrote on 11.10.2012 20:54:
> Is there any way to disable the guessing of missing columns in a
> group-by (the feature added in 9.1)?
> Because we have both 9.0 and 9.1 running, and a query that succeeds
> under 9.1 was failing under 9.0. Can I assume the answer then is "no"?

If you cannot upgrade the 9.0 installation, the only way to solve that problem is to develop the statements against 9.0.
Re: [GENERAL] Disabling inferred "group by" columns in 9.1?
Because we have both 9.0 and 9.1 running, and a query that succeeds under 9.1 was failing under 9.0. Can I assume the answer then is "no"?

On Thu, Oct 11, 2012 at 2:38 PM, Tom Lane wrote:
> "Evan D. Hoffman" writes:
>> Is there any way to disable the guessing of missing columns in a
>> group-by (the feature added in 9.1)?
>
> Why would you want to do that? There's no "guessing" involved: the
> behavior is always correct, and it is required by the SQL standard.
>
> regards, tom lane
Re: [GENERAL] Disabling inferred "group by" columns in 9.1?
"Evan D. Hoffman" writes: > Is there any way to disable the guessing of missing columns in a > group-by (the feature added in 9.1)? Why would you want to do that? There's no "guessing" involved: the behavior is always correct, and it is required by the SQL standard. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Storing large files in multiple schemas: BLOB or BYTEA
> I believe the general consensus around here is to not do that, if you
> can avoid it. File systems are much better equipped to handle files of
> that magnitude, especially when it comes to retrieving them, scanning
> through their contents, or really, any access pattern aside from simple
> storage.
>
> You're better off storing the blob on disk somewhere and storing a row
> that refers to its location. Either key pieces for a naming scheme or
> the full path.
>
> This is especially true if you mean to later access that data with PHP.
>
> --
> Shaun Thomas

Using files stored outside the database creates all sorts of problems. For starters, you lose ACID guarantees. I would prefer to keep them in the database. We did a lot of experiments with Large Objects, and they really worked fine (we stored hundreds of LOBs ranging from a few MB up to 1GB). Postgres does a really good job with Large Objects. If it were not for the pg_dump problem, I would not hesitate to use LOBs.
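For contrast, the bytea option from the subject line keeps the data in an ordinary column (the table and column names here are illustrative), so pg_dump covers it like any other table data; its practical ceiling is 1 GB per value, versus 2 GB for a large object:

```sql
-- bytea values live in the row (TOASTed when large) and get the same
-- transactional guarantees and pg_dump coverage as any other column.
CREATE TABLE attachments (
    id       serial PRIMARY KEY,
    filename text NOT NULL,
    content  bytea NOT NULL
);
```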
[GENERAL] Disabling inferred "group by" columns in 9.1?
Is there any way to disable the guessing of missing columns in a group-by (the feature added in 9.1)? I haven't found any option, but I might not be searching for the right thing.

http://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_9.1#SQL_and_PL.2FPgSQL_features
Re: [GENERAL] moving from MySQL to pgsql
On Thu, Oct 11, 2012 at 9:38 AM, Gavin Flower wrote:
> On 12/10/12 04:39, Merlin Moncure wrote:
>> On Thu, Oct 11, 2012 at 4:44 AM, Vineet Deodhar wrote:
>>> Thanks all for your replies. This is my first experience with postgres
>>> mailing list. Hats Off to the active community of pgsql. This has
>>> definitely raised my confidence level with postgres.
>>
>> thanks. we like emails that start off 'moving to pgsql from xxx'.
>>
>> If you want a 24x7 crack dedicated support team, claim to be
>> frustrated with postgres and evaluating migration to SQL Server. :-).
>>
>> merlin
>
> Surely we should let people migrate to SQL Server from postgres - as it
> will be its own punishment? :-)

I'll take SQL Server over Oracle any day of the week though..
Re: [GENERAL] moving from MySQL to pgsql
On 12/10/12 04:39, Merlin Moncure wrote:
> On Thu, Oct 11, 2012 at 4:44 AM, Vineet Deodhar wrote:
>> Thanks all for your replies. This is my first experience with postgres
>> mailing list. Hats Off to the active community of pgsql. This has
>> definitely raised my confidence level with postgres.
>
> thanks. we like emails that start off 'moving to pgsql from xxx'.
>
> If you want a 24x7 crack dedicated support team, claim to be
> frustrated with postgres and evaluating migration to SQL Server. :-).
>
> merlin

Surely we should let people migrate to SQL Server from postgres - as it will be its own punishment? :-)

Cheers,
Gavin
Re: [GENERAL] OpenSCG 9.0.10 package
On Thu, Oct 11, 2012 at 8:56 AM, Scott Mead wrote:
> Hey Ian,
>
> On Thu, Oct 11, 2012 at 10:52 AM, Ian Harding wrote:
>> I know this is the wrong place, but I tried creating an account on
>> their site to contact them and it does not work.
>
> I'll take a look at that right away, sorry it happened!
>
>> The 9.0.10 package throws an error when I try to install it that it
>> has an unsatisfied dependency on libpython. Since it brings its own
>> libraries, I'm not sure why that would be, but there it is. Hoping
>> someone from openscg sees this...
>>
>> rpm -ivh postgres-9.0.10-1.x86_64.openscg.rpm
>> warning: postgres-9.0.10-1.x86_64.openscg.rpm: Header V3 DSA/SHA1
>> Signature, key ID 38379aec: NOKEY
>> error: Failed dependencies:
>> libpython2.4.so.1.0()(64bit) is needed by postgres90-9.0.10-1.x86_64
>
> Which linux distribution / version are you installing on? If you have a
> package manager that can help with dependency resolution (like yum), can
> you use that to either install the postgres RPM or just the libpython
> package?

The machine is fenced off from the internet and I'm pretty ignorant about package managers, but I have been installing the other OpenSCG packages with plain old rpm and it's been working. I thought they didn't have any external dependencies, so I could get away with that. Also, I do have the libpython package installed; it's just a bit newer than what the package wants.

The Linux distro is called Scientific Linux, and uname -a looks like this:

Linux db02 2.6.32-131.6.1.el6.x86_64 #1 SMP Tue Jul 12 17:14:50 CDT 2011 x86_64 x86_64 x86_64 GNU/Linux

Thanks!
Re: [GENERAL] OpenSCG 9.0.10 package
Hey Ian, On Thu, Oct 11, 2012 at 10:52 AM, Ian Harding wrote: > I know this is the wrong place, but I tried creating an account on > their site to contact them and it does not work. > > I'll take a look at that right away, sorry it happened! > The 9.0.10 package throws an error when I try to install it that it > has an unsatisfied dependency on libpython. Since it brings its own > libraries, I'm not sure why that would be, but there it is. Hoping > someone from openscg sees this... > > rpm -ivh postgres-9.0.10-1.x86_64.openscg.rpm > warning: postgres-9.0.10-1.x86_64.openscg.rpm: Header V3 DSA/SHA1 > Signature, key ID 38379aec: NOKEY > error: Failed dependencies: > libpython2.4.so.1.0()(64bit) is needed by postgres90-9.0.10-1.x86_64 > Which linux distribution / version are you installing on ? If you have a package manager that can help with dependency resolution (like yum), can you use that to either install the postgres RPM or just the lib python package? --Scott OpenSCG > > > Also... > > ls /usr/lib64/libpython2.6.so.1.0 > /usr/lib64/libpython2.6.so.1.0 > > > although I'm pretty sure that doesn't matter. > > - Ian > > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general >
Re: [GENERAL] moving from MySQL to pgsql
On Thu, Oct 11, 2012 at 4:44 AM, Vineet Deodhar wrote: > Thanks all for your replies. > This is my first experience with postgres mailing list. > Hats Off to the active community of pgsql. > This has definitely raised my confidence level with postgres. thanks. we like emails that start off 'moving to pgsql from xxx'. If you want a 24x7 crack dedicated support team, claim to be frustrated with postgres and evaluating migration to SQL Server. :-). merlin -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] OpenSCG 9.0.10 package
I know this is the wrong place, but I tried creating an account on their site to contact them and it does not work. The 9.0.10 package throws an error when I try to install it that it has an unsatisfied dependency on libpython. Since it brings its own libraries, I'm not sure why that would be, but there it is. Hoping someone from openscg sees this... rpm -ivh postgres-9.0.10-1.x86_64.openscg.rpm warning: postgres-9.0.10-1.x86_64.openscg.rpm: Header V3 DSA/SHA1 Signature, key ID 38379aec: NOKEY error: Failed dependencies: libpython2.4.so.1.0()(64bit) is needed by postgres90-9.0.10-1.x86_64 Also... ls /usr/lib64/libpython2.6.so.1.0 /usr/lib64/libpython2.6.so.1.0 although I'm pretty sure that doesn't matter. - Ian -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Index only scan
Thanks all for the help and insights. I will continue to read up on the details of partitioning and pgpool. best regards, Lars On Thu, Oct 11, 2012 at 2:47 AM, Gavin Flower wrote: > On 11/10/12 12:41, Tom Lane wrote: >> >> Gavin Flower writes: >>> >>> On 11/10/12 01:03, Lars Helge Øverland wrote: My question is: Would it be feasible and/or possible to implement index only scans in a way that it could take advantage of several, single-column indexes? For example, a query spanning columns a, b, c could take advantage of 3 single-column indexes put on columns a, b, c. >>> >>> Index only scans do use multiple indexes of single fields where >>> appropriate. Here the planner determined it only needed to scan 2 of >>> the 3 relevant single field indexes. >> >> But your example isn't an index-only scan ... it's a plain old bitmap >> scan, and so it does touch the heap. >> >> The difficulty with what Lars proposes is that there's no way to scan >> the different indexes "in sync" --- each one will be ordered according >> to its own column order. In principle I guess we could read out the >> index data and do a join using the ctid's, but it's far from clear that >> such a thing would be worth the trouble. >> >> regards, tom lane > > Thanks for the correction! > > Cheers, > Gavin -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] moving from MySQL to pgsql
On 10/11/2012 05:07 PM, Vineet Deodhar wrote: On Thu, Oct 11, 2012 at 1:12 PM, Craig Ringer wrote: The difference between SMALLINT and BOOLEAN (or TINYINT if Pg supported it) is 1 byte per column. If you had 30 smallint columns and quite a few million rows it might start making a difference, but it's *really* not worth obsessing about. Unless you have high-column-count tables that contain nothing but lots of integers of range 0-255 there's no point caring. -- Craig Ringer To give an example, I have tables for storing master records (year master, security master, etc.) for which pkid TINYINT is just sufficient. These pkid's are used as fk constraints in tables for storing business transactions. The no. of rows in business transactions tables is in millions. Here, I NEED to worry about the storage space occupied by the pkid fields. AFAIK in most situations alignment requirements will mean you won't gain any space in those situations anyway. I would be truly amazed if you saw more than something like a 1% difference in size due to this; it'll be *massively* outweighed by all the other differences. You're optimising prematurely. See if it's a problem in practice, and if it is look into using a custom data type (warning: lots of work) or some other approach. -- Craig Ringer -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] moving from MySQL to pgsql
Thanks all for your replies. This is my first experience with postgres mailing list. Hats Off to the active community of pgsql. This has definitely raised my confidence level with postgres. --- Vineet
Re: [GENERAL] Storing large files in multiple schemas: BLOB or BYTEA
On Wed, Oct 10, 2012 at 10:56 PM, Craig Ringer wrote: > On 10/11/2012 01:35 PM, tigran2-postg...@riatest.com wrote: > >> Using files stored outside the database creates all sorts of problems. >> For starters you lose ACID guarantees. I would prefer to keep them in >> database. We did a lot of experiments with Large Objects and they really >> worked fine (stored hundreds of LOBs ranging from a few MB up to 1GB). >> Postgres does a really good job with Large Objects. If it was not the >> pg_dump problem I would not hesitate to use LOBs. >> > > Yeah, a pg_dump mode that dumped everything but large objects would be > nice. > Remembering when that was the only way pg_dump worked and it caused plenty of problems. But yeah, --only-lobs and --no-lobs might be nice switches. > > Right now I find storing large objects in the DB such a pain from a backup > management point of view that I avoid it where possible. > > > I'm now wondering about the idea of implementing a pg_dump option that > dumped large objects into a directory tree like > lobs/[loid]/[lob_md5] > and wrote out a restore script that loaded them using `lo_import`. > Thinking of the problems that occurred when we used to require lobs to be backed up to binary archive formats > During dumping temporary copies could be written to something like > lobs/[loid]/.tmp. with the md5 being calculated on the fly as the byte > stream is read. If the dumped file had the same md5 as the existing one > it'd just delete the tempfile; otherwise the tempfile would be renamed to > the calculated md5. > > > That way incremental backup systems could manage the dumped LOB tree > without quite the same horrible degree of duplication as is currently faced > when using lo in the database with pg_dump. > How do incremental backup systems work with lots of data anyway with pg_dump? I would think that's not the approach I would take to incremental backups in PostgreSQL. > > A last_modified timestamp on `pg_largeobject_metadata` would be even > better, allowing the cost of reading and discarding rarely-changed large > objects to be avoided. > It might be interesting to look at the issue of large objects from a total backup perspective. I do wonder though where the end would be. You could have 500MB text fields and those might pose backup issues as well. I suppose with better LOB support on backups, it would give additional options for management. Best Wishes, Chris Travers
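For readers following the large-object round trip being discussed here: it can be exercised directly with the built-in `lo_import`/`lo_export` functions. A minimal sketch (the file paths and OID are hypothetical):

```sql
-- Import a file as a large object; lo_import returns the new large object's OID.
SELECT lo_import('/tmp/report.pdf');

-- Supposing the call above returned OID 16401, write the object back out:
SELECT lo_export(16401, '/tmp/report_copy.pdf');
```

Note that the server-side `lo_import`/`lo_export` read and write files on the *database server's* filesystem; from psql, the client-side commands `\lo_import` and `\lo_export` operate on the client machine instead, which is usually what a backup script wants.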
Re: [GENERAL] moving from MySQL to pgsql
On Thu, Oct 11, 2012 at 3:04 PM, Craig Ringer wrote: > > AFAIK in most situations alignment requirements will mean you won't gain > any space in those situations anyway. > > I would be truly amazed if you saw more than something like a 1% > difference in size due to this; it'll be *massively* outweighed by all the > other differences. You're optimising prematurely. See if it's a problem in > practice, and if it is look into using a custom data type (warning: lots of > work) or some other approach. > -- > Craig Ringer > > OK. As per your advice, let me put smallint into practice and analyze the storage requirement. Thanks a lot. -- Vineet -- Best regards, Vineet Deodhar 0 9270012422( Sales) 0 9420 6767 41 / 0 844 6268 488 (Service) 0233-2300136 (Back-Office)
Re: [GENERAL] auto-increment field : in a simple way
On 10/11/2012 05:11 PM, Vineet Deodhar wrote: On Thu, Oct 11, 2012 at 12:56 PM, Scott Marlowe wrote: Can't you just add this to your create table: CREATE TABLE tablename ( colname SERIAL , check (colname>0 and colname < 32768)); ); With this constraint, whether the storage space requirement would reduce? OR Is it just for validation of data? It's purely validation and has no effect on storage size. -- Craig Ringer -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] auto-increment field : in a simple way
On Thu, Oct 11, 2012 at 5:11 PM, Vineet Deodhar wrote: > On Thu, Oct 11, 2012 at 12:56 PM, Scott Marlowe > wrote: > >> >> Can't you just add this to your create table: >> >> >> CREATE TABLE tablename ( >>colname SERIAL >> , check (colname>0 and colname < 32768)); >> ); >> >> >> > With this constraint, whether the storage space requirement would reduce? > OR > Is it just for validation of data? > > --- Vineet > Well, there's smallserial when creating tables... create table test(a smallserial); NOTICE: CREATE TABLE will create implicit sequence "test_a_seq" for serial column "test.a" CREATE TABLE Table "public.test" Column | Type |Modifiers +--+-- a | smallint | not null default nextval('test_a_seq'::regclass) -- JC de Villa
[GENERAL] How to raise index points when equal and like is used with gist ?
Hello, I have a problem with a query and index scan based on the pg_trgm module. Here are a few examples: The first example uses equality: explain analyze SELECT * FROM table WHERE firstname = 'OLEG' AND middlename || lastname LIKE '%KUZNICOV%IGORU%'; QUERY PLAN - Bitmap Heap Scan on abonats_tbl (cost=34.42..6043.65 rows=1 width=601) (actual time=2.885..14.062 rows=1 loops=1) Recheck Cond: (firstname = 'OLEG'::text) Filter: ((middlename || lastname) ~~ '%KUZNICOV%IGORU%'::text) Rows Removed by Filter: 1731 -> Bitmap Index Scan on table_firstname_idx (cost=0.00..34.42 rows=1690 width=0) (actual time=0.699..0.699 rows=1732 loops=1) Index Cond: (firstname = 'OLEG'::text) Total runtime: 14.126 ms (7 rows) But if I add one LIKE that goes through the gist index, the result is not filtered the way I want; here is an example: explain analyze SELECT * FROM table WHERE phone LIKE '12%' AND firstname = 'OLEG' AND middlename || lastname LIKE '%KUZNICOV%IGORU%'; QUERY PLAN - Bitmap Heap Scan on abonats_tbl (cost=1638.89..1816.65 rows=1 width=601) (actual time=219.793..219.793 rows=0 loops=1) Recheck Cond: ((firstname = 'OLEG'::text) AND (phone ~~ '12%'::text)) Filter: ((middlename || lastname) ~~ '%KUZNICOV%IGORU%'::text) Rows Removed by Filter: 65 -> BitmapAnd (cost=1638.89..1638.89 rows=45 width=0) (actual time=219.197..219.197 rows=0 loops=1) -> Bitmap Index Scan on table_firstname_idx (cost=0.00..34.42 rows=1690 width=0) (actual time=0.867..0.867 rows=1732 loops=1) Index Cond: (firstname = 'OLEG'::text) -> Bitmap Index Scan on table_phonegist_idx (cost=0.00..1604.22 rows=33995 width=0) (actual time=217.639..217.639 rows=33256 loops=1) Index Cond: (phone ~~ '12%'::text) Total runtime: 220.426 ms My question is: Is there any way to make PostgreSQL first search on the field with the equality condition (I have an index there) and only then filter the result on the other conditions, first the gist one and then the rest? I think maybe I should play with the index costs. I'm using postgresql 9.2.1 x86_64 Regards, C.
-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
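One workaround worth testing against the query above (a sketch reusing the poster's table and column names): in PostgreSQL 9.2 a CTE acts as an optimization fence, so the cheap indexed equality scan can be forced to run first, with the expensive trigram conditions applied only to its small output:

```sql
-- The CTE is an optimization fence in 9.2: the planner materializes
-- its result before the outer WHERE runs, so the LIKE filters only
-- see the ~1700 rows matched by the indexed equality condition.
WITH candidates AS (
    SELECT *
    FROM abonats_tbl
    WHERE firstname = 'OLEG'   -- uses table_firstname_idx
)
SELECT *
FROM candidates
WHERE phone LIKE '12%'
  AND middlename || lastname LIKE '%KUZNICOV%IGORU%';
```

Separately, since pg_trgm opclasses support expression indexes, a trigram index on the concatenation itself (e.g. `CREATE INDEX ON abonats_tbl USING gist ((middlename || lastname) gist_trgm_ops);`) may let the last filter use an index as well; whether that beats the fenced CTE depends on the data distribution, so both are worth an EXPLAIN ANALYZE.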
Re: [GENERAL] auto-increment field : in a simple way
On Thu, Oct 11, 2012 at 12:56 PM, Scott Marlowe wrote: > > Can't you just add this to your create table: > > > CREATE TABLE tablename ( >colname SERIAL > , check (colname>0 and colname < 32768)); > ); > > > With this constraint, whether the storage space requirement would reduce? OR Is it just for validation of data? --- Vineet
Re: [GENERAL] moving from MySQL to pgsql
On Thu, Oct 11, 2012 at 1:12 PM, Craig Ringer wrote: > The difference between SMALLINT and BOOLEAN (or TINYINT if Pg supported > it) is 1 byte per column. If you had 30 smallint columns and quite a few > million rows it might start making a difference, but it's *really* not > worth obsessing about. Unless you have high-column-count tables that > contain nothing but lots of integers of range 0-255 there's no point caring. > > -- > Craig Ringer > > To give an example, I have tables for storing master records (year master, security master, etc.) for which pkid TINYINT is just sufficient. These pkid's are used as fk constraints in tables for storing business transactions. The no. of rows in business transactions tables is in millions. Here, I NEED to worry about the storage space occupied by the pkid fields. -- Vineet
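The setup being described here, small master tables referenced from multi-million-row transaction tables, can be sketched with SMALLINT keys plus a range CHECK to emulate TINYINT's domain (the table and column names below are illustrative, not from the poster's schema):

```sql
-- Master table: a smallint PK with a CHECK emulates TINYINT's 0-255 range.
CREATE TABLE year_master (
    pkid  smallint PRIMARY KEY CHECK (pkid BETWEEN 0 AND 255),
    label text NOT NULL
);

-- Transaction table: the FK column is also smallint, so each of the
-- millions of rows stores 2 bytes for the key (before alignment padding)
-- instead of the 4 an integer would take.
CREATE TABLE business_txn (
    txn_id  bigserial PRIMARY KEY,
    year_id smallint NOT NULL REFERENCES year_master (pkid),
    amount  numeric(12,2)
);
```

As Craig notes in this thread, alignment padding often absorbs the theoretical 2-byte saving, so it is worth checking actual row sizes with `pg_column_size()` on representative rows before deciding the narrower key is worth it.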
Re: [GENERAL] auto-increment field : in a simple way
On 10/11/2012 03:04 PM, Vineet Deodhar wrote: user_id smallint NOT NULL DEFAULT nextval('user_id_seq') I'm kind of puzzled about why you'd want to use a serial on a field that can contain at most 65,536 entries anyway. If you're only going to have a max of 65,536 entries then the space saving over INTEGER is at most 65536*2 = 131072 bytes or 128kb for the table. Planning on creating many thousands of these tables? If you're not, then it isn't worth caring. If you are, then you're automating it so you won't mind doing it the longhand way. If you're creating few enough tables that you care about the syntax of defining an unusually small data type for a generated primary key, you're creating few enough that the space doesn't actually matter. -- Craig Ringer -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] moving from MySQL to pgsql
On 10/11/2012 02:07 PM, Vineet Deodhar wrote: On Thu, Oct 11, 2012 at 5:26 AM, Ondrej Ivanič wrote: Hi, On 10 October 2012 19:47, Vineet Deodhar wrote: > 3) Can I simulate MySQL's TINYINT data-type (using maybe the custom data > type or something else) What do you exactly mean? Do you care about storage requirements or constraints? The smallest numeric type in postgres is smallint: range is +/- 32K and you need two bytes. You can use check constraint to restrict the range (postgres doesn't have signed / unsigned types): create table T ( tint_signed smallint check ( tint_signed >= -128 and tint_signed <= 127 ), tint_unsigned smallint check ( tint_unsigned >= 0 and tint_unsigned <= 255 ) ) Yes. Considering the storage requirements , I am looking for TINYINT kind of data type. The storage difference between `SMALLINT` and a `TINYINT` would be ... tiny, given the space taken up by tuple headers, etc. As it is, a row containing four SMALLINT columns is 32 bytes, vs 40 bytes for INTEGER columns or 28 for BOOLEAN. regress=# SELECT pg_column_size( (BOOLEAN 't', BOOLEAN 't', BOOLEAN 'f', BOOLEAN 'f') ); pg_column_size 28 (1 row) regress=# SELECT pg_column_size( (SMALLINT '2', SMALLINT '3', SMALLINT '4', SMALLINT '5') ); pg_column_size 32 (1 row) regress=# SELECT pg_column_size( (INTEGER '2', INTEGER '3', INTEGER '4', INTEGER '5') ); pg_column_size 40 (1 row) The difference between SMALLINT and BOOLEAN (or TINYINT if Pg supported it) is 1 byte per column. If you had 30 smallint columns and quite a few million rows it might start making a difference, but it's *really* not worth obsessing about. Unless you have high-column-count tables that contain nothing but lots of integers of range 0-255 there's no point caring. -- Craig Ringer -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] auto-increment field : in a simple way
On Thu, Oct 11, 2012 at 1:04 AM, Vineet Deodhar wrote: > I wish to know regarding auto-increment field. > I learn that the required table definition would be something like -- > > CREATE TABLE tablename ( > colname SERIAL > ); > > For more granular control over the size of field, I need to do the > following--- > > CREATE SEQUENCE user_id_seq; > > CREATE TABLE user ( > > user_id smallint NOT NULL DEFAULT nextval('user_id_seq') > > ); > ALTER SEQUENCE user_id_seq OWNED BY user.user_id; > > > > I am not pinpointing MySQL v/s Postgres; but since I am accustomed to using > simply "AUTOINCREMENT" in MySQL, > I find this a bit cumbersome procedure. > > 1] Is there any simpler way of doing this? > > 2] Whether the development team has this point in their TO DO list for > future release of postgres? Can't you just add this to your create table: CREATE TABLE tablename ( colname SERIAL , check (colname>0 and colname < 32768)); ); Also I can't imagine this problem being common enough to justify much work to provide a "smallserial" type etc. What about when we want a sequence with a different increment by, start, min/max, cycle, or cache value? Any idea for making things a bit different here should probably address all of those possibilities. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] auto-increment field : in a simple way
I wish to know regarding auto-increment fields. I gather that the required table definition would be something like -- CREATE TABLE tablename ( colname SERIAL ); For more granular control over the size of the field, I need to do the following--- CREATE SEQUENCE user_id_seq; CREATE TABLE user ( user_id smallint NOT NULL DEFAULT nextval('user_id_seq') ); ALTER SEQUENCE user_id_seq OWNED BY user.user_id; I am not pinpointing MySQL v/s Postgres; but since I am accustomed to simply using "AUTOINCREMENT" in MySQL, I find this a bit of a cumbersome procedure. 1] Is there any simpler way of doing this? 2] Does the development team have this point on their TODO list for a future release of postgres? Thanks, Vineet