Re: [HACKERS] Remove behaviour of postmaster -o

2006-05-08 Thread Tom Lane
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Not sure why Peter didn't continue working on it.

> I'm still working on the postmaster/postgres merge.  But the behavior of 
> postmaster -o is not going to be removed.  That TODO item might be 
> appropriate in a release or three at best.

I think the point though is that -o becomes a no-op: whether you put -o
in front of some options won't matter anymore, because they'll be
interpreted the same either way.

regards, tom lane



[HACKERS] Inheritance, Primary Keys and Foreign Keys

2006-05-08 Thread Albert Cervera Areny
Hi,
I'm developing an object persistence framework for which I'd love to have
better support for inheritance in PostgreSQL. I can already map subclasses
with the current inheritance facilities, but the problem is with primary and
foreign keys.

There's a TODO item for implementing indexes that hold information contained
in different tables, but that seems to be difficult because of the need to
create a new index structure. The new structure shouldn't be used by tables
that have no child tables, because it would hold a pointer to the appropriate
table for each entry and would thus be redundant in those cases. What's more,
as Tom Lane pointed out in a previous thread, this would cause locking
problems wherever a lock on a table is needed, since locking a table means
locking its indexes.

In my particular case (I don't know about the SQL standard or other cases),
it'd be enough if, when a child table is created (rough sketch below):
- A primary key is created in the child table with the same columns as
the super table's primary key.
- A trigger is created in the new table that ensures that this primary key
value doesn't already exist in the super table.
- A trigger is created in the super table that ensures that this primary key
value doesn't already exist in its sub tables.
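
Something along these lines, with made-up table and trigger names (untested,
and it assumes plpgsql is installed):

CREATE TABLE person (
    id      integer PRIMARY KEY,
    name    text
);

CREATE TABLE employee (
    salary  numeric
) INHERITS (person);

-- (1) same primary key columns on the child table
ALTER TABLE employee ADD PRIMARY KEY (id);

-- (2) trigger on the child table: reject ids already present in the
--     super table itself
CREATE FUNCTION employee_id_not_in_person() RETURNS trigger AS $$
BEGIN
    IF EXISTS (SELECT 1 FROM ONLY person p WHERE p.id = NEW.id) THEN
        RAISE EXCEPTION 'id % already exists in person', NEW.id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER employee_id_unique BEFORE INSERT OR UPDATE ON employee
    FOR EACH ROW EXECUTE PROCEDURE employee_id_not_in_person();

-- (3) the mirror trigger on person would run the same check against each
--     child table, e.g. SELECT 1 FROM ONLY employee e WHERE e.id = NEW.id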

As I'm not an expert at all, I don't know whether this would cause side
effects or whether it's a good enough solution for the general problem. I
also don't know how multiple inheritance of tables with primary keys should
be handled (maybe all super tables would have to share the same primary key).

For foreign keys, it seems as if simply having the checks select FROM the
referenced table instead of the current FROM ONLY would give the expected
(by me :) behaviour.

I'm very interested in improving inheritance support in PostgreSQL, and I'm
willing to learn the current design and implementation in order to do it
myself, or to help wherever possible. So I'd like to hear your ideas, or any
problems you see with this solution (if it's a solution at all :)

Thanks in advance!




Re: [HACKERS] BTree on-disk page ordering

2006-05-08 Thread Tom Lane
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> The mention of the changes to the btree scan code in the latest weekly
> news got me curious so I started looking at the 'executive summary'
> (read as: README) of the patch changes for both the scan patch and the
> btbulkdelete patch. If my understanding is correct, vacuum will only see
> a speed improvement when an index's on-disk storage order has a low
> correlation to index order. Is there any way to see what that
> correlation is on a running system? I'm wondering if anyone has checked
> to see what kind of performance impact a highly out-of-order index has
> on index scans.

There's nothing built in.  If you feel like hacking something, I've
attached a truly ugly tool that I've used once or twice in the past to
debug broken indexes.  It's got some smarts about detecting inconsistent
index structure and it looks like this latest version was meant to dump
out the index keys of a particular index.  You could modify it to just
scan the level-zero pages and compute some statistics about their
ordering.
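
For instance, something like this (untested sketch, reusing the GetPage()
helper and page-layout macros already in the attached file) would report how
often a leaf page's right-link points to a physically later block:

static void
check_leaf_ordering(void)
{
    BlockNumber blk;
    long        nleaf = 0;
    long        forward = 0;

    /* block 0 is the metapage, so start at block 1 */
    for (blk = 1; blk < nblocks; blk++)
    {
        Page         page = GetPage(blk);
        BTPageOpaque opaque = (BTPageOpaque) PageGetSpecialPointer(page);

        /* only count level-zero (leaf) pages */
        if (!P_ISLEAF(opaque))
            continue;
        nleaf++;
        /* right-link pointing to a higher block number = "in order" */
        if (opaque->btpo_next == P_NONE || opaque->btpo_next > blk)
            forward++;
    }
    if (nleaf > 0)
        printf("%ld leaf pages, right-link points forward on %ld (%.1f%%)\n",
               nleaf, forward, 100.0 * forward / nleaf);
}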

regards, tom lane

/*
 * Usage: checkindex filename
 *
 * Note: we read in the entire index file, hence this does not work well
 * for indexes bigger than available memory
 */
#include "postgres.h"

#include 

#include "access/nbtree.h"

static char *buffers;
static BlockNumber nblocks;

#define GetPage(p)  ((Page) (buffers + (p) * BLCKSZ))

static BlockNumber rootblk;
static uint32 rootlevel;

bool assert_enabled = true;


static void
check_metapage(void)
{
Page        page = GetPage(0);
BTMetaPageData *metad;
BTPageOpaque metaopaque;

metad = BTPageGetMeta(page);
if (metad->btm_magic != BTREE_MAGIC)
fprintf(stderr, "Bogus magic %lu\n", (long) metad->btm_magic);
if (metad->btm_version != BTREE_VERSION)
fprintf(stderr, "Bogus version %lu\n", (long) 
metad->btm_version);
rootblk = metad->btm_root;
rootlevel = metad->btm_level;

metaopaque = (BTPageOpaque) PageGetSpecialPointer(page);
if (metaopaque->btpo_flags != BTP_META)
fprintf(stderr, "Bogus metapage flags 0x%x\n", 
metaopaque->btpo_flags);
}

static void
check_rootpage(void)
{
Page        page;
BTPageOpaque rootopaque;

if (rootblk <= 0 || rootblk >= nblocks)
{
fprintf(stderr, "Bogus root block # %lu\n", (long) rootblk);
return;
}
page = GetPage(rootblk);

rootopaque = (BTPageOpaque) PageGetSpecialPointer(page);
if (rootopaque->btpo_flags != BTP_ROOT)
fprintf(stderr, "Bogus rootpage flags 0x%x\n", 
rootopaque->btpo_flags);
if (rootopaque->btpo.level != rootlevel)
fprintf(stderr, "Bogus rootpage level %u, expected %u\n",
rootopaque->btpo.level, rootlevel);
if (rootopaque->btpo_prev != P_NONE)
fprintf(stderr, "Bogus rootpage left-link 0x%x\n",
rootopaque->btpo_prev);
if (rootopaque->btpo_next != P_NONE)
fprintf(stderr, "Bogus rootpage right-link 0x%x\n",
rootopaque->btpo_next);
}

static int
print_text_key(unsigned char *tdata, int tsize)
{
int orig_tsize = tsize;
int dsize;

Assert(tsize >= 4);
dsize = *((int *) tdata);
tdata += 4;
tsize -= 4;
dsize -= 4;
Assert(dsize >= 0);
Assert(tsize >= dsize);

while (dsize-- > 0)
{
printf("%c", *tdata);
tdata++;
tsize--;
}
printf("\t");

// kluge alignment
while ((long) tdata % MAXIMUM_ALIGNOF)
{
tdata++;
tsize--;
}
return orig_tsize - tsize;
}

static int
print_int_key(unsigned char *tdata, int tsize)
{
Assert(tsize >= 4);
printf("%d\t", *((int *) tdata));
return 4;
}

static void
print_item(Page page, BlockNumber blk, OffsetNumber off)
{
BTPageOpaque opaque;
BTItem  btitem;
IndexTuple  itup;
ItemPointer ip;

opaque = (BTPageOpaque) PageGetSpecialPointer(page);

btitem = (BTItem) PageGetItem(page, PageGetItemId(page, off));
itup = &(btitem->bti_itup);
ip = &itup->t_tid;

/* no key in upper-level first data item */
if (P_ISLEAF(opaque) || off != P_FIRSTDATAKEY(opaque))
{
int tsize;
unsigned char *tdata;
int dsize;

tsize = IndexTupleSize(itup) - sizeof(IndexTupleData);
tdata = (unsigned char *) itup + sizeof(IndexTupleData);

dsize = print_int_key(tdata, tsize);
tdata += dsize; tsize -= dsize;

dsize = print_int_key(tdata, tsize);

[HACKERS] BTree on-disk page ordering

2006-05-08 Thread Jim C. Nasby
The mention of the changes to the btree scan code in the latest weekly
news got me curious so I started looking at the 'executive summary'
(read as: README) of the patch changes for both the scan patch and the
btbulkdelete patch. If my understanding is correct, vacuum will only see
a speed improvement when an index's on-disk storage order has a low
correlation to index order. Is there any way to see what that
correlation is on a running system? I'm wondering if anyone has checked
to see what kind of performance impact a highly out-of-order index has
on index scans.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [HACKERS] Remove behaviour of postmaster -o

2006-05-08 Thread Peter Eisentraut
Tom Lane wrote:
> http://archives.postgresql.org/pgsql-patches/2006-01/msg00239.php
>
> Not sure why Peter didn't continue working on it.

I'm still working on the postmaster/postgres merge.  But the behavior of 
postmaster -o is not going to be removed.  That TODO item might be 
appropriate in a release or three at best.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



Re: [HACKERS] Number of dimensions of an array parameter

2006-05-08 Thread Thomas Hallgren

Martijn van Oosterhout wrote:
> On Mon, May 08, 2006 at 07:31:14PM +0200, Thomas Hallgren wrote:
>> Would it be hard to enforce a real check? The implementation could use
>> GUC settings like 'enforce_array_dimensions' and 'enforce_array_lengths'
>> that could be set to false for the legacy implementations that rely on
>> the current behavior. I know Tom added the ability to have NULL values
>> in the arrays. Perhaps now is the time to improve the type semantics as
>> well?
>
> The big problem is: where do you store the number of declared dimensions?
> It's not stored anywhere, so there's nowhere to check against either.
> If we can fix that first we might get to the checking part.
>
> test=# create function foo(int[][]) returns int4 as 'select 1' language sql;
> CREATE FUNCTION
> test=# \df foo
>                    List of functions
>  Result data type | Schema | Name | Argument data types
> ------------------+--------+------+---------------------
>  integer          | public | foo  | integer[]
> (1 row)

Let each type have its own entry in pg_type, i.e. let int[] and int[][] be
two distinct types (like int and int[] already are). In addition, perhaps
introduce a new syntax that denotes 'arbitrary number of dimensions' and let
that too be a distinct type.


Regards,
Thomas Hallgren






Re: [HACKERS] Number of dimensions of an array parameter

2006-05-08 Thread Thomas Hallgren

Tom Lane wrote:
> Thomas Hallgren <[EMAIL PROTECTED]> writes:
>> Would it be hard to enforce a real check? The implementation could use
>> GUC settings like 'enforce_array_dimensions' and 'enforce_array_lengths'
>> that could be set to false for the legacy implementations that rely on
>> the current behavior.
>
> The fact that it doesn't exactly match Java semantics does not make it
> "legacy behavior".  I don't agree that it's a bug; I think it's a
> feature, precisely because many functions can work on arrays of
> different dimensions.  Why should we change to make PL/Java happier,
> when it will move us further away from the semantics of, say, PL/R?

Would it really? The way I see it, the choice of language is irrelevant.
Either you support dimensions or you don't. The way PostgreSQL does it,
you get the impression that it is supported while in fact it's not. I
can't see how anyone would consider that a feature. If you want the
ability to use an arbitrary number of dimensions, then you should have a
syntax that supports that particular use case. An int[][] cannot be
anything but a two-dimensional int array. Not in my book anyway. That
opinion has nothing to do with Java.

> I think reasonable choices for PL/Java would be to reject
> multidimensional array arguments, or to silently ignore the
> dimensionality and treat the data as 1-D in storage order
> (as I think plperl for instance already does).

I agree. That's the way I'll do it.

Regards,
Thomas Hallgren




Re: [HACKERS] Pragma linking?

2006-05-08 Thread Magnus Hagander
> > It will work just fine when you don't, as long as you include the
> > directory where the lib file is in your LIB path. Which is the same
> > way the linker commandline option works. I don't see how that makes
> > it useless, though.
>
> If you have to include (the equivalent of) -L in your link
> command anyway, I don't see where being able to leave off -l
> buys much.

You don't, as long as you either stick the LIB file in the default
library directory, or modify the environment variable LIB to include
wherever you stick the LIB file. Both of which could be handled by an
installer.

//Magnus



Re: [HACKERS] Pragma linking?

2006-05-08 Thread Tom Lane
"Magnus Hagander" <[EMAIL PROTECTED]> writes:
> It will work just fine when you don't, as long as you include the
> directory where the lib file is in your LIB path. Which is the same way
> the linker commandline option works. I don't see how that makes it
> useless, though.

If you have to include (the equivalent of) -L in your link command
anyway, I don't see where being able to leave off -l buys much.

regards, tom lane



Re: [HACKERS] Number of dimensions of an array parameter

2006-05-08 Thread Tom Lane
Thomas Hallgren <[EMAIL PROTECTED]> writes:
> Would it be hard to enforce a real check? The implementation could use 
> GUC settings like 'enforce_array_dimensions' and 'enforce_array_lengths' 
> that could be set to false for the legacy implementations that rely on 
> the current behavior.

The fact that it doesn't exactly match Java semantics does not make it
"legacy behavior".  I don't agree that it's a bug; I think it's a
feature, precisely because many functions can work on arrays of
different dimensions.  Why should we change to make PL/Java happier,
when it will move us further away from the semantics of, say, PL/R?

I think reasonable choices for PL/Java would be to reject
multidimensional array arguments, or to silently ignore the
dimensionality and treat the data as 1-D in storage order
(as I think plperl for instance already does).

regards, tom lane



Re: [HACKERS] bug? non working casts for domain

2006-05-08 Thread elein
I'll see what I can do about expanding my requirements/test 
cases.  Casting was not in my original test cases.  
What else have I missed?  Copy domain gripes to [EMAIL PROTECTED]

--elein
[EMAIL PROTECTED]



On Sat, May 06, 2006 at 10:19:39PM -0400, Tom Lane wrote:
> Bruce Momjian  writes:
> > The error is coming from parse_expr.c::typecast_expression, and its call
> > to typenameTypeId().  I wish I understood how we do domains better to
> > fix this properly.  Anyone?
> 
> The reason the cast isn't found is that find_coercion_pathway() strips
> off the domains before it ever even looks in pg_cast.  We can't simply
> remove that logic without breaking things (notably, the ability to cast
> between a domain and its base type).  I think it would be a mistake to
> consider this behavior in isolation anyway --- it's fairly tightly tied
> to the way that domains are handled (or, mostly, ignored) in
> operator/function lookup.  See recent gripes from Elein.
> 
> If someone can put together a coherent proposal for how domains should
> be dealt with in operator/function resolution, I'm all ears.
> 
>   regards, tom lane
> 



Re: [HACKERS] XLOG_BLCKSZ vs. wal_buffers table

2006-05-08 Thread Mark Wong
On Mon, 08 May 2006 19:08:59 +0100
Simon Riggs <[EMAIL PROTECTED]> wrote:

> On Fri, 2006-05-05 at 16:00 -0700, Mark Wong wrote:
> > On Tue, 02 May 2006 10:52:38 +0100
> > Simon Riggs <[EMAIL PROTECTED]> wrote:
> > 
> > > On Sun, 2006-04-30 at 22:14 -0700, Mark Wong wrote:
> > > > I would have gotten this out sooner but I'm having trouble with our
> > > > infrastructure.  Here's a link to a table of data I've started putting
> > > > together regarding XLOG_BLCKSZ and wal_buffers on a 4-way Opteron
> > > > system:
> > > > http://developer.osdl.org/markw/pgsql/xlog_blcksz.html
> > > > 
> > > > There are a couple of holes in the table but I think it shows enough
> > > > evidence to say that with dbt2 having a larger XLOG_BLCKSZ improves the
> > > > overall throughput of the test.
> > > > 
> > > > I'm planning on continuing to increase XLOG_BLCKSZ and wal_buffers to
> > > > determine when the throughput starts to level out or drop off, and then
> > > > start experimenting with varying BLCKSZ.  Let me know if there are other
> > > > things that would be more interesting to experiment with first.
> > > 
> > > IMHO you should be testing with higher wal_buffers settings. ISTM likely
> > > that the improved performance is due to there being more buffer space,
> > > rather than actually improving I/O. Setting wal_buffers to something
> > > fairly high say 4096 would completely remove any such effect so we are
> > > left with a view on the I/O.
> > 
> > I ran another few tests at the 600 scale factor just in case I was
> > getting close to peaking at 500 warehouses.  (Link above has updated
> > data.)  With wal_buffers set to 4096 the difference between 2048, 8192,
> > and 32768 seem negligible.  Some of the disks are at 90% utilization so
> > perhaps I need to take a close look to make sure none of the other
> > system resources are pegged.
> 
> The profiles are fairly different though.
> 
> Could you turn full_page_writes = off and do a few more tests? I think
> the full-page writes are swamping the xlog and masking the performance we
> might see for normal small xlog writes.
> I'd try XLOG_BLCKSZ = 4096 and 8192 to start with. Thanks.

Ok, will get on it.

> (Is VACUUM running at the start of these tests?)

VACUUM is run immediately after the database tables are loaded.  I've
been reloading the database prior to each test.

Mark



Re: [HACKERS] XLOG_BLCKSZ vs. wal_buffers table

2006-05-08 Thread Simon Riggs
On Fri, 2006-05-05 at 16:00 -0700, Mark Wong wrote:
> On Tue, 02 May 2006 10:52:38 +0100
> Simon Riggs <[EMAIL PROTECTED]> wrote:
> 
> > On Sun, 2006-04-30 at 22:14 -0700, Mark Wong wrote:
> > > I would have gotten this out sooner but I'm having trouble with our
> > > infrastructure.  Here's a link to a table of data I've started putting
> > > together regarding XLOG_BLCKSZ and wal_buffers on a 4-way Opteron
> > > system:
> > >   http://developer.osdl.org/markw/pgsql/xlog_blcksz.html
> > > 
> > > There are a couple of holes in the table but I think it shows enough
> > > evidence to say that with dbt2 having a larger XLOG_BLCKSZ improves the
> > > overall throughput of the test.
> > > 
> > > I'm planning on continuing to increase XLOG_BLCKSZ and wal_buffers to
> > > determine when the throughput starts to level out or drop off, and then
> > > start experimenting with varying BLCKSZ.  Let me know if there are other
> > > things that would be more interesting to experiment with first.
> > 
> > IMHO you should be testing with higher wal_buffers settings. ISTM likely
> > that the improved performance is due to there being more buffer space,
> > rather than actually improving I/O. Setting wal_buffers to something
> > fairly high say 4096 would completely remove any such effect so we are
> > left with a view on the I/O.
> 
> I ran another few tests at the 600 scale factor just in case I was
> getting close to peaking at 500 warehouses.  (Link above has updated
> data.)  With wal_buffers set to 4096 the difference between 2048, 8192,
> and 32768 seem negligible.  Some of the disks are at 90% utilization so
> perhaps I need to take a close look to make sure none of the other
> system resources are pegged.

The profiles are fairly different though.

Could you turn full_page_writes = off and do a few more tests? I think
the full-page writes are swamping the xlog and masking the performance we
might see for normal small xlog writes.
I'd try XLOG_BLCKSZ = 4096 and 8192 to start with. Thanks.

(Is VACUUM running at the start of these tests?)

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com




Re: [HACKERS] Number of dimensions of an array parameter

2006-05-08 Thread Martijn van Oosterhout
On Mon, May 08, 2006 at 07:31:14PM +0200, Thomas Hallgren wrote:
> Would it be hard to enforce a real check? The implementation could use 
> GUC settings like 'enforce_array_dimensions' and 'enforce_array_lengths' 
> that could be set to false for the legacy implementations that rely on 
> the current behavior. I know Tom added the ability to have NULL values 
> in the arrays. Perhaps now is the time to improve the type semantics as 
> well?

The big problem is: where do you store the number of declared dimensions?
It's not stored anywhere, so there's nowhere to check against either.
If we can fix that first we might get to the checking part.

test=# create function foo(int[][]) returns int4 as 'select 1' language sql;
CREATE FUNCTION
test=# \df foo
   List of functions
 Result data type | Schema | Name | Argument data types 
------------------+--------+------+---------------------
 integer          | public | foo  | integer[]
(1 row)

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [HACKERS] Number of dimensions of an array parameter

2006-05-08 Thread Rod Taylor
> thhal=# CREATE DOMAIN twodims as int[][];
> CREATE DOMAIN

While still not perfect, you can use a CHECK constraint on the domain to
enforce dimension.

It's not perfect because domain constraints are not enforced in all
locations in versions earlier than 8.2. Adding extra explicit casts can
often work around that though.

ru=# create domain twodims as int[][] check(array_dims(value) =
'[1:2][1:2]');

ru=# select
array_dims('{{{1,2},{3,4}},{{5,3},{9,9}}}'::twodims);
ERROR:  value for domain twodims violates check constraint
"twodims_check"

ru=# select array_dims('{{1,2},{3,4}}'::twodims);
 array_dims 
------------
 [1:2][1:2]
(1 row)

If you want to be fancy, use something like this:

check(array_dims(value) ~ '^\\[1:\\d+\\]\\[1:\\d+\\]$');


-- 




Re: [HACKERS] Pragma linking?

2006-05-08 Thread Magnus Hagander
> > For VC++ you can add a #pragma directive to the header files to
> > direct the compiler/linker to link with a specific library.
>
> Count on Microsoft to invent stupid "features" :-(.

I guess stupid is a relative matter - I find it quite handy.

> If the directive includes a full path then we can't put it in our 
> standard headers, and if it doesn't then it's useless.

It can optionally include the full path. Which we can't use, of course.

It will work just fine when you don't, as long as you include the
directory where the lib file is in your LIB path. Which is the same way
the linker commandline option works. I don't see how that makes it
useless, though.

//Magnus



Re: [HACKERS] Pragma linking?

2006-05-08 Thread Tom Lane
"Magnus Hagander" <[EMAIL PROTECTED]> writes:
> For VC++ you can add a #pragma directive to the header files to direct
> the compiler/linker to link with a specific library.

Count on Microsoft to invent stupid "features" :-(.  If the directive
includes a full path then we can't put it in our standard headers, and
if it doesn't then it's useless.

regards, tom lane



Re: [HACKERS] Number of dimensions of an array parameter

2006-05-08 Thread Stefan Kaltenbrunner
Thomas Hallgren wrote:
> I can create a function that takes a two dimension int array:
> 
>  CREATE FUNCTION twodims(int[][]) RETURNS void AS ...
> 
> but there's nothing stopping me from calling this function with an
> arbitrary number of dimensions on the array.
> 
> I'd like to map a parameter like the one above to a corresponding
> representation in Java (it would be int[][] there too). As it turns out,
> I can't do that. PostgreSQL will not store any information that can tell
> me how many dimensions were used in the declaration, i.e. it's
> impossible to write a language VALIDATOR that, based on the information
> in pg_proc, builds a signature where the number of dimensions is reflected.
> 
> This leaves me with two choices:
> 
> Only allow arrays with one dimension unless the parameter is of a domain
> type (domains are apparently stored with the actual number of
> dimensions). Any call that uses an array parameter with more than one
> dimension will yield an exception.
>   --OR--
> Always map to Object[] instead of mapping to the correct type. This
> will work since an array in Java is also an Object and all primitive
> types can be represented as objects (i.e. int can be a
> java.lang.Integer). The strong typing and the ability to use primitives
> are lost however.
> 
> I'm leaning towards #1 and hoping that PostgreSQL will enhance the
> parameter type declarations to include the dimensions in future releases.
> 
> ... After some more testing ...
> 
> Unfortunately, I run into problems even when I use domains. Consider the
> following:
> 
> thhal=# CREATE DOMAIN twodims as int[][];
> CREATE DOMAIN
> thhal=# SELECT typndims FROM pg_type WHERE typname = 'twodims';
>  typndims
> ----------
>         2
> (1 row)
> 
> thhal=# SELECT
> array_dims('{{{1,2,3},{3,4,3}},{{5,3,2},{9,9,9}}}'::twodims);
>    array_dims
> -----------------
>  [1:2][1:2][1:3]
> (1 row)
> 
> IMO, there is something seriously wrong here. Clearly the number of
> dimensions is a property of the type. Any array with a different number
> of dimensions should yield an error or at least be coerced into the
> right number of dimensions.

while it would be nice to improve that - it is actually documented quite
clearly.

http://www.postgresql.org/docs/current/static/arrays.html has:

"However, the current implementation does not enforce the array size
limits — the behavior is the same as for arrays of unspecified length.

Actually, the current implementation does not enforce the declared
number of dimensions either. Arrays of a particular element type are all
considered to be of the same type, regardless of size or number of
dimensions. So, declaring number of dimensions or sizes in CREATE TABLE
is simply documentation, it does not affect run-time behavior. "



Stefan



[HACKERS] Pragma linking?

2006-05-08 Thread Magnus Hagander
When working through my cleanup of the vc++ build scripts, I came across
a handy feature. I originally thought it'd cut down the size of my
scripts, and it does - but not very much. However, it might be handy
elsewhere. I have no idea if this feature is available for other
platforms/compilers.


For VC++ you can add a #pragma directive to the header files to direct
the compiler/linker to link with a specific library. This could be used
to automatically link with libpq whenever libpq-fe.h is brought in, and
similar things for the backend (links to postgres.exe) and ecpg.
(Naturally, there'd be a #define you can set to have it *not* do this,
for special cases). That would make it even easier to build client and
server projects - if you use MSVC++.
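
Roughly like this, at the bottom of libpq-fe.h (just a sketch; the opt-out
define name is made up, and it assumes the import library is installed as
libpq.lib somewhere on the LIB path):

#if defined(_MSC_VER) && !defined(PG_NO_AUTOLINK)
/* ask the MSVC linker to pull in the libpq import library automatically */
#pragma comment(lib, "libpq.lib")
#endif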

What do you think? If it's good, I can incorporate it in the next
version of the VC++ patch.

//Magnus



Re: [HACKERS] Number of dimensions of an array parameter

2006-05-08 Thread Thomas Hallgren

Stefan Kaltenbrunner wrote:
> while it would be nice to improve that - it is actually documented quite
> clearly.
>
> http://www.postgresql.org/docs/current/static/arrays.html has:
>
> "However, the current implementation does not enforce the array size
> limits — the behavior is the same as for arrays of unspecified length.
>
> Actually, the current implementation does not enforce the declared
> number of dimensions either. Arrays of a particular element type are all
> considered to be of the same type, regardless of size or number of
> dimensions. So, declaring number of dimensions or sizes in CREATE TABLE
> is simply documentation, it does not affect run-time behavior."

A documented flaw is much better than an undocumented one, but it's still
a flaw, and a pretty bad one at that. It's like having a compiler that
doesn't complain when you define a C function that takes an int** and
then pass it an int*.

Would it be hard to enforce a real check? The implementation could use
GUC settings like 'enforce_array_dimensions' and 'enforce_array_lengths'
that could be set to false for the legacy implementations that rely on
the current behavior. I know Tom added the ability to have NULL values
in the arrays. Perhaps now is the time to improve the type semantics as
well?


Regards,
Thomas Hallgren





Re: [HACKERS] Number of dimensions of an array parameter

2006-05-08 Thread Tom Lane
Thomas Hallgren <[EMAIL PROTECTED]> writes:
> Only allow arrays with one dimension unless the parameter is of a domain 
> type (domains are apparently stored with the actual number of 
> dimensions).

No, they don't enforce dimensionality any more than ordinary array
columns do.  typndims and attndims are both effectively just booleans:
is it an array or not?

> IMO, there is something seriously wrong here. Clearly the number of 
> dimensions is a property of the type.

[ shrug... ] That's debatable.  You could just as well argue that the
exact array size should be enforced by the type system.

regards, tom lane



[HACKERS] Number of dimensions of an array parameter

2006-05-08 Thread Thomas Hallgren

I can create a function that takes a two dimension int array:

 CREATE FUNCTION twodims(int[][]) RETURNS void AS ...

but there's nothing stopping me from calling this function with an 
arbitrary number of dimensions on the array.


I'd like to map a parameter like the one above to a corresponding 
representation in Java (it would be int[][] there too). As it turns out, 
I can't do that. PostgreSQL will not store any information that can tell 
me how many dimensions were used in the declaration, i.e. it's 
impossible to write a language VALIDATOR that, based on the information 
in pg_proc, builds a signature where the number of dimensions is reflected.


This leaves me with two choices:

Only allow arrays with one dimension unless the parameter is of a domain 
type (domains are apparently stored with the actual number of 
dimensions). Any call that uses an array parameter with more than one 
dimension will yield an exception.

  --OR--
Always map to Object[] instead of mapping to the correct type. This 
will work since an array in Java is also an Object and all primitive 
types can be represented as objects (i.e. int can be a 
java.lang.Integer). The strong typing and the ability to use primitives 
are lost however.


I'm leaning towards #1 and hoping that PostgreSQL will enhance the 
parameter type declarations to include the dimensions in future releases.


... After some more testing ...

Unfortunately, I run into problems even when I use domains. Consider the 
following:


thhal=# CREATE DOMAIN twodims as int[][];
CREATE DOMAIN
thhal=# SELECT typndims FROM pg_type WHERE typname = 'twodims';
 typndims 
----------
        2
(1 row)

thhal=# SELECT array_dims('{{{1,2,3},{3,4,3}},{{5,3,2},{9,9,9}}}'::twodims);
   array_dims    
-----------------
 [1:2][1:2][1:3]
(1 row)

IMO, there is something seriously wrong here. Clearly the number of 
dimensions is a property of the type. Any array with a different number 
of dimensions should yield an error or at least be coerced into the 
right number of dimensions.


Kind Regards,
Thomas Hallgren

