Re: [HACKERS] the un-vacuumable table

2008-07-03 Thread Andrew Hammond
On Thu, Jul 3, 2008 at 3:47 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Andrew Hammond" <[EMAIL PROTECTED]> writes:
>> On Thu, Jul 3, 2008 at 2:35 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>>> The whole thing is pretty mystifying, especially the ENOSPC write
>>> failure on what seems like it couldn't have been a full disk.
>
>> Yes, I've passed along the task of explaining why PG thought the disk
>> was full to the sysadmin responsible for the box. I'll post the answer
>> here, when and if we have one.
>
> I just noticed something even more mystifying: you said that the ENOSPC
> error occurred once a day during vacuuming.

Actually, the ENOSPC happened once. After that first error, we got

vacuumdb: vacuuming of database "adecndb" failed: ERROR:  failed to
re-find parent key in "ledgerdetail_2008_03_idx2" for deletion target
page 64767

repeatedly.

> That doesn't make any
> sense, because a write error would leave the shared buffer still marked
> dirty, and so the next checkpoint would try to write it again.  If
> there's a persistent write error on a particular block, you should see
> it being complained of at least once per checkpoint interval.
>
> If you didn't see that, it suggests that the ENOSPC was transient,
> which isn't unreasonable --- but why would it recur for the exact
> same block each night?
>
> Have you looked into the machine's kernel log to see if there is any
> evidence of low-level distress (hardware or filesystem level)?  I'm
> wondering if ENOSPC is being reported because it is the closest
> available errno code, but the real problem is something different than
> the error message text suggests.  Other than the errno the symptoms
> all look quite a bit like a bad-sector problem ...

I will pass this along to the sysadmin in charge of this box.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Truncated queries when select * from pg_stat_activity - wishlist / feature request

2008-07-03 Thread Alvaro Herrera
Dave Witt wrote:

> ..but Postgres only shows the first ~1000 chars from each queued/running  
> query.  I know this is a long-standing issue (from reading other  
> posts/sites), but I'd like to throw in my vote for showing the entire  
> query, in some future version (easier said than done, I know).  After 5  
> months of Postgres use, this is by far my biggest wishlist item.

Heh, it was just done last month.

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Truncated queries when select * from pg_stat_activity - wishlist / feature request

2008-07-03 Thread Dave Witt

Hi Everyone,

My first posting to the group..

I came from a mysql background (~7 years, small/mid database, about 
55gig), where I could do "SHOW FULL PROCESSLIST;" to see all the queries 
currently queued/running on the system.  I found it a very useful feature.


The Postgres equivalent is (roughly) "SELECT * FROM pg_stat_activity;".

..but Postgres only shows the first ~1000 chars from each queued/running 
query.  I know this is a long-standing issue (from reading other 
posts/sites), but I'd like to throw in my vote for showing the entire 
query, in some future version (easier said than done, I know).  After 5 
months of Postgres use, this is by far my biggest wishlist item.


Thanks, congratulations, great work, etc...!

-Dave

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCHES] Removal of the patches email list

2008-07-03 Thread Russell Smith
Bruce Momjian wrote:
> We have come to agreement that there is no longer a need for a separate
> 'patches' email list --- the size of patches isn't a significant issue
> anymore, and tracking threads between the patches and hackers lists is
> confusing.
>
> I propose we close the patches list and tell everyone to start using
> only the hackers list.  This will require email server changes and web
> site updates, and some people who are only subscribed to patches have to
> figure out if they want to subscribe to hackers.
>
> I have CC'ed hackers, patches, and www because this does affect all
> those lists.
>
>   
I think this is a good idea, and was expecting it to have happened
already.  Is there any timeline or consensus on when this is going to happen?

Regards

Russell Smith

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Git Repository for WITH RECURSIVE and others

2008-07-03 Thread Robert Haas
I just had this same problem.

Perhaps the wiki and http://git.postgresql.org/static/serviceinfo.html
should also be updated with the working (i.e. http) URL?

...Robert

On Thu, Jul 3, 2008 at 12:56 AM, Yoshiyuki Asaba <[EMAIL PROTECTED]> wrote:
> Hi,
>
> From: Abhijit Menon-Sen <[EMAIL PROTECTED]>
> Subject: Re: [HACKERS] Git Repository for WITH RECURSIVE and others
> Date: Thu, 3 Jul 2008 09:18:17 +0530
>
>> At 2008-07-03 11:16:49 +0900, [EMAIL PROTECTED] wrote:
>> >
>> >   # WITH RECURSIVE repository
>> >   % git-clone git://git.postgresql.org/git/~davidfetter/postgresql/.git
>> >   Initialized empty Git repository in /home/y-asaba/x/postgresql/.git/
>> >   fatal: The remote end hung up unexpectedly
>>
>> Run git-clone http://git.postgresql.org/git/~davidfetter/postgresql/.git
>> instead. "git://..." apparently doesn't work on that repository (I don't
>> know why not).
>
> Thanks for the advice. I could get the repository via HTTP.
>
> Regards,
> --
> Yoshiyuki Asaba
> [EMAIL PROTECTED]
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] the un-vacuumable table

2008-07-03 Thread Tom Lane
"Andrew Hammond" <[EMAIL PROTECTED]> writes:
> On Thu, Jul 3, 2008 at 2:35 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>> The whole thing is pretty mystifying, especially the ENOSPC write
>> failure on what seems like it couldn't have been a full disk.

> Yes, I've passed along the task of explaining why PG thought the disk
> was full to the sysadmin responsible for the box. I'll post the answer
> here, when and if we have one.

I just noticed something even more mystifying: you said that the ENOSPC
error occurred once a day during vacuuming.  That doesn't make any
sense, because a write error would leave the shared buffer still marked
dirty, and so the next checkpoint would try to write it again.  If
there's a persistent write error on a particular block, you should see
it being complained of at least once per checkpoint interval.

If you didn't see that, it suggests that the ENOSPC was transient,
which isn't unreasonable --- but why would it recur for the exact
same block each night?

Have you looked into the machine's kernel log to see if there is any
evidence of low-level distress (hardware or filesystem level)?  I'm
wondering if ENOSPC is being reported because it is the closest
available errno code, but the real problem is something different than
the error message text suggests.  Other than the errno the symptoms
all look quite a bit like a bad-sector problem ...

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CommitFest rules

2008-07-03 Thread Dave Page
On Thu, Jul 3, 2008 at 10:37 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Merlin Moncure" <[EMAIL PROTECTED]> writes:
>> Are you suggesting that omission of a patch on the 'fest' page means
>> that you are bumped from the fest?
>
> No, if you had submitted the patch on time then the correct next step
> is to get it added to the fest page; I don't think that should be
> controversial.  But the reviewers aren't gonna review it if it's not
> listed on that page...

Right, but the author should take some responsibility for ensuring the
patch is listed on time. What we don't want is forgotten patches
getting added at the last minute, right as the CommitFest manager is
wrapping things up having got 95% of the patches reviewed and the
other 5% in progress.

-- 
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] the un-vacuumable table

2008-07-03 Thread Andrew Hammond
On Thu, Jul 3, 2008 at 2:35 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Andrew Hammond" <[EMAIL PROTECTED]> writes:
>> Does anyone else have any suggestions about what I can do to diagnose this?
>
> The whole thing is pretty mystifying, especially the ENOSPC write
> failure on what seems like it couldn't have been a full disk.

Yes, I've passed along the task of explaining why PG thought the disk
was full to the sysadmin responsible for the box. I'll post the answer
here, when and if we have one.

>> Jun 27 15:54:31 qadb2 postgres[92519]: [44-1] PANIC:  could not open
>> relation 1663/16386/679439393: No such file or directory
>
> I don't think anyone asked before --- after the restore fails with the
> above, does the directory $PGDATA/base/16386/ exist?  Although WAL
> recovery should attempt to create missing files, I think it won't
> try to create missing directories.

The directory exists (and the 679439393 file does not).

Andrew

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CommitFest rules

2008-07-03 Thread Tom Lane
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> Are you suggesting that omission of a patch on the 'fest' page means
> that you are bumped from the fest?

No, if you had submitted the patch on time then the correct next step
is to get it added to the fest page; I don't think that should be
controversial.  But the reviewers aren't gonna review it if it's not
listed on that page...

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] the un-vacuumable table

2008-07-03 Thread Tom Lane
"Andrew Hammond" <[EMAIL PROTECTED]> writes:
> Does anyone else have any suggestions about what I can do to diagnose this?

The whole thing is pretty mystifying, especially the ENOSPC write
failure on what seems like it couldn't have been a full disk.

> Jun 27 15:54:31 qadb2 postgres[92519]: [44-1] PANIC:  could not open
> relation 1663/16386/679439393: No such file or directory

I don't think anyone asked before --- after the restore fails with the
above, does the directory $PGDATA/base/16386/ exist?  Although WAL
recovery should attempt to create missing files, I think it won't
try to create missing directories.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CommitFest rules

2008-07-03 Thread Merlin Moncure
On Thu, Jul 3, 2008 at 3:45 PM, Dave Page <[EMAIL PROTECTED]> wrote:
> On Thu, Jul 3, 2008 at 8:34 PM, Alex Hunsaker <[EMAIL PROTECTED]> wrote:
>
>> The one I advised be added (the Auto Explain patch) was posted on Mar
>> 29, 2008 (http://archives.postgresql.org/pgsql-hackers/2008-03/msg01214.php),
>> re-posted on Jun 30th, and then an updated patch today... It only
>> being the 3rd, I figured it still met the criteria. If that's not the
>> case, I apologize.
>
> You'll note I didn't actually mention any specific patches.
>
> The CommitFests are designed to be short and quick to complete. We
> need *all* patches to be listed (not just posted to a list sometime in
> the past couple of months) before it starts, otherwise organisation of
> reviewers and completion of the Fest becomes chaotic for all and a
> nightmare task for the CommitFest manager.

Are you suggesting that omission of a patch on the 'fest' page means
that you are bumped from the fest?   Note that Tom asked on July 1 if
there were any missing entries that hadn't been plucked out of the
lists (there were).

If that's the case, that would suggest that it is the patch author's
responsibility to update the fest entry (or at least, make sure it
was updated).  That may already be the case -- I'm not sure -- and I'm
not saying that's wrong, but if you are going to adopt a strict policy
(which btw I have no problem with) it's probably better to remind
stragglers to get with the program a week or so before the fest begins,
especially on the last fest of the release cycle.

merlin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] the un-vacuumable table

2008-07-03 Thread Andrew Hammond
Does anyone else have any suggestions about what I can do to diagnose this?

Do I need to re-initdb or can I reasonably keep running with the existing db?

A

On Mon, Jun 30, 2008 at 7:20 PM, Andrew Hammond
<[EMAIL PROTECTED]> wrote:
> On Fri, Jun 27, 2008 at 8:14 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>> "Andrew Hammond" <[EMAIL PROTECTED]> writes:
>>> (I thought this line was interesting)
>>> Jun 27 15:54:31 qadb2 postgres[92519]: [44-1] PANIC:  could not open
>>> relation 1663/16386/679439393: No such file or directory
>>
>>> I googled to find out what the numbers 1663/16386/679439393 from the
>>> PANIC message mean, but no luck.
>>
>> tablespaceOID/databaseOID/relfilenode.  Looks like just some random user
>> table.  Not clear why this would be a crash, *especially* since WAL
>> recovery is generally willing to create nonexistent files.  Is this
>> reproducible?
>
> Yes, both when I just tried to restart the recovery:
>
> Jun 30 16:04:43 qadb2 postgres[20797]: [27-1] DEBUG:  invoking
> IpcMemoryCreate(size=92938240)
> Jun 30 16:04:43 qadb2 postgres[20797]: [28-1] DEBUG:  max_safe_fds =
> 983, usable_fds = 1000, already_open = 7
> Jun 30 16:04:43 qadb2 postgres[20798]: [29-1] LOG:  database system
> was interrupted while in recovery at 2008-06-27 15:54:31 PDT
> Jun 30 16:04:43 qadb2 postgres[20798]: [29-2] HINT:  This probably
> means that some data is corrupted and you will have to use the last
> backup for recovery.
> Jun 30 16:04:43 qadb2 postgres[20798]: [30-1] LOG:  starting archive recovery
> Jun 30 16:04:43 qadb2 postgres[20798]: [31-1] LOG:  restore_command =
> "cp -p /usr/tmp/2008-06-25_wals/%f %p"
> Jun 30 16:04:43 qadb2 postgres[20798]: [32-1] DEBUG:  executing
> restore command "cp -p /usr/tmp/2008-06-25_wals/0001.history
> pg_xlog/RECOVERYHISTORY"
> Jun 30 16:04:43 qadb2 postgres[20798]: [33-1] DEBUG:  could not
> restore file "0001.history" from archive: return code 256
> Jun 30 16:04:43 qadb2 postgres[20798]: [34-1] DEBUG:  executing
> restore command "cp -p
> /usr/tmp/2008-06-25_wals/000101D60078.0055F0B8.backup
> Jun 30 16:04:43 qadb2 postgres[20798]: [34-2]  pg_xlog/RECOVERYHISTORY"
> Jun 30 16:04:43 qadb2 postgres[20798]: [35-1] LOG:  restored log file
> "000101D60078.0055F0B8.backup" from archive
> Jun 30 16:04:43 qadb2 postgres[20798]: [36-1] DEBUG:  executing
> restore command "cp -p
> /usr/tmp/2008-06-25_wals/000101D60078
> pg_xlog/RECOVERYXLOG"
> Jun 30 16:04:44 qadb2 postgres[20797]: [29-1] DEBUG:  forked new
> backend, pid=20805 socket=8
> Jun 30 16:04:44 qadb2 postgres[20805]: [29-1] LOG:  connection
> received: host=[local]
> Jun 30 16:04:44 qadb2 postgres[20805]: [30-1] FATAL:  the database
> system is starting up
> Jun 30 16:04:44 qadb2 postgres[20805]: [31-1] DEBUG:  proc_exit(0)
> Jun 30 16:04:44 qadb2 postgres[20805]: [32-1] DEBUG:  shmem_exit(0)
> Jun 30 16:04:44 qadb2 postgres[20805]: [33-1] DEBUG:  exit(0)
> Jun 30 16:04:44 qadb2 postgres[20797]: [30-1] DEBUG:  reaping dead processes
> Jun 30 16:04:44 qadb2 postgres[20797]: [31-1] DEBUG:  server process
> (PID 20805) exited with exit code 0
> Jun 30 16:04:44 qadb2 postgres[20798]: [37-1] LOG:  restored log file
> "000101D60078" from archive
> Jun 30 16:04:44 qadb2 postgres[20798]: [38-1] LOG:  checkpoint record
> is at 1D6/7855F0B8
> Jun 30 16:04:44 qadb2 postgres[20798]: [39-1] LOG:  redo record is at
> 1D6/7855F0B8; undo record is at 0/0; shutdown FALSE
> Jun 30 16:04:44 qadb2 postgres[20798]: [40-1] LOG:  next transaction
> ID: 397171279; next OID: 679516596
> Jun 30 16:04:44 qadb2 postgres[20798]: [41-1] LOG:  next MultiXactId:
> 857318; next MultiXactOffset: 1718141
> Jun 30 16:04:44 qadb2 postgres[20798]: [42-1] LOG:  automatic recovery
> in progress
> Jun 30 16:04:44 qadb2 postgres[20798]: [43-1] LOG:  redo starts at 
> 1D6/7855F108
>
> Jun 30 16:04:45 qadb2 postgres[20798]: [44-1] PANIC:  could not open
> relation 1663/16386/679439393: No such file or directory
>
> Jun 30 16:04:45 qadb2 postgres[20797]: [32-1] DEBUG:  reaping dead processes
> Jun 30 16:04:45 qadb2 postgres[20797]: [33-1] LOG:  startup process
> (PID 20798) was terminated by signal 6
> Jun 30 16:04:45 qadb2 postgres[20797]: [34-1] LOG:  aborting startup
> due to startup process failure
> Jun 30 16:04:45 qadb2 postgres[20797]: [35-1] DEBUG:  proc_exit(1)
> Jun 30 16:04:45 qadb2 postgres[20797]: [36-1] DEBUG:  shmem_exit(1)
> Jun 30 16:04:45 qadb2 postgres[20797]: [37-1] DEBUG:  exit(1)
>
>
>
> And also when I tried to wipe the slate clean and recover it freshly.
>
> Jun 30 19:11:59 qadb2 postgres[23091]: [1-1] DEBUG:  postmaster:
> PostmasterMain: initial environ dump:
> Jun 30 19:11:59 qadb2 postgres[23091]: [2-1] DEBUG:
> -
> Jun 30 19:11:59 qadb2 postgres[23091]: [3-1] DEBUG: USER=pgsql
> Jun 30 19:11:59 qadb2 postgres[23091]: [4-1] DEBUG: MAIL=/var/mail/pgsql
> Jun 30 19:11:59 qadb2 postgres[23091]: [5-1] DEBUG:
> LD_LIBRARY_PATH=:/usr/lo

Re: [HACKERS] CommitFest rules

2008-07-03 Thread Dave Page
On Thu, Jul 3, 2008 at 8:34 PM, Alex Hunsaker <[EMAIL PROTECTED]> wrote:

> The one I advised be added (the Auto Explain patch) was posted on Mar
> 29, 2008 (http://archives.postgresql.org/pgsql-hackers/2008-03/msg01214.php),
> re-posted on Jun 30th, and then an updated patch today... It only
> being the 3rd, I figured it still met the criteria. If that's not the
> case, I apologize.

You'll note I didn't actually mention any specific patches.

The CommitFests are designed to be short and quick to complete. We
need *all* patches to be listed (not just posted to a list sometime in
the past couple of months) before it starts, otherwise organisation of
reviewers and completion of the Fest becomes chaotic for all and a
nightmare task for the CommitFest manager.

-- 
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CommitFest rules

2008-07-03 Thread Alex Hunsaker
On Thu, Jul 3, 2008 at 12:44 PM, Dave Page <[EMAIL PROTECTED]> wrote:
> it concerns me that despite it being day 3 of the July commit fest,
> people are still being advised to add new items to the wiki page.
>
> To make the idea work, we need to stick to the rules we defined when
> we came up with the concept - specifically, no new patches once the
> fest begins!
>
> So please - new patches to the September  page!
>
> Regards, Dave

The one I advised be added (the Auto Explain patch) was posted on Mar
29, 2008 (http://archives.postgresql.org/pgsql-hackers/2008-03/msg01214.php),
re-posted on Jun 30th, and then an updated patch today... It only
being the 3rd, I figured it still met the criteria. If that's not the
case, I apologize.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] gsoc, text search selectivity and dllist enhancments

2008-07-03 Thread Tom Lane
Jan Urbański <[EMAIL PROTECTED]> writes:
> attached are two patches against HEAD. The smaller one is meant to be 
> commited - it adds some functions that manipulate double-linked lists, 
> namely inserting a new cell after or before another cell and swapping 
> two adjacent cells.

> The gzipped one is WIP for my GSoC project. I've reworked the algorithm 
> for determing most common lexemes.

I looked over this a bit.  I'm not excited about adding functionality to
Dllist --- that data structure is barely used at all in the backend,
and I think a better case could be made for getting rid of it than
adding code to it.  The analyze patch doesn't change my mind on the
point, because I don't think that Dllist is really helping you there
anyway.  The data structure I'd suggest is a simple array of pointers
to the underlying hash table entries.  Since you have a predetermined
maximum number of lexemes to track, you can just palloc the array once
--- you don't need the expansibility properties of a list.  The only
operations you need are "add an entry at the end" (if you keep the
array sorted by descending count not ascending), "remove the end
entry", and "swap adjacent entries", all of which are actually cheaper
on an array than on a Dllist.

Another point is that you don't really need the array to be sorted all
the time.  Instead of using what is basically an O(N^2) incremental
sort, you could consider applying qsort() when you do need it to be
sorted, which is at the end or when the table overflows and you need to
discard some entries.  If you are willing to discard multiple entries
per overflow event, this could be quite cheap --- although I think in
the worst case where there are many overflows, it'd go back to being
O(N^2).  Note BTW that adding a count=1 entry at the end cannot make the
array unsorted if it was sorted before.  The only event that renders the
array unsorted is increasing an item's count to more than the count of
its predecessor --- so it might be worth keeping a predecessor pointer
in each hashtable entry that identifies its predecessor as of the time
of the last array sort, so you could check on-the-fly and avoid
unnecessary re-sorts.  I'm not totally sure that this idea is a win,
but it seems worth investigating.
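
To make that concrete, here is a minimal self-contained sketch of the array
approach (this is not the patch code; TrackedLexeme, track_lexeme and
MAX_TRACKED are invented names, and a real implementation would record each
entry's array position in its hash table entry rather than doing the linear
search used here):

#include <stdio.h>

typedef struct TrackedLexeme
{
	const char *lexeme;
	int			count;
} TrackedLexeme;

#define MAX_TRACKED 4			/* predetermined maximum number of lexemes */

static TrackedLexeme *track[MAX_TRACKED];	/* palloc'd once in real code */
static int	ntracked = 0;

static void
track_lexeme(TrackedLexeme *e)
{
	int			i;

	/*
	 * Already tracked?  Bump the count and restore descending-count order
	 * by swapping adjacent entries ("bubbling" the entry up).
	 */
	for (i = 0; i < ntracked; i++)
	{
		if (track[i] == e)
		{
			e->count++;
			while (i > 0 && track[i]->count > track[i - 1]->count)
			{
				TrackedLexeme *tmp = track[i - 1];

				track[i - 1] = track[i];
				track[i] = tmp;
				i--;
			}
			return;
		}
	}

	/*
	 * New entry with count = 1: adding it at the end cannot unsort the
	 * array.  If the array is full, discard the smallest (last) entry.
	 */
	e->count = 1;
	if (ntracked == MAX_TRACKED)
		ntracked--;				/* "remove the end entry" */
	track[ntracked++] = e;		/* "add an entry at the end" */
}

int
main(void)
{
	TrackedLexeme words[] = {
		{"foo", 0}, {"bar", 0}, {"baz", 0}, {"quux", 0}, {"zot", 0}
	};
	const int	input[] = {0, 1, 0, 2, 0, 1, 3, 4, 1, 0};
	int			i;

	for (i = 0; i < (int) (sizeof(input) / sizeof(input[0])); i++)
		track_lexeme(&words[input[i]]);

	for (i = 0; i < ntracked; i++)
		printf("%s: %d\n", track[i]->lexeme, track[i]->count);
	return 0;
}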

Other than that, the analyze patch looked generally sane to me,
and I think you're on the right track.  Please rework and resubmit.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CommitFest rules

2008-07-03 Thread Merlin Moncure
On Thu, Jul 3, 2008 at 2:44 PM, Dave Page <[EMAIL PROTECTED]> wrote:
> it concerns me that despite it being day 3 of the July commit fest,
> people are still being advised to add new items to the wiki page.
>
> To make the idea work, we need to stick to the rules we defined when
> we came up with the concept - specifically, no new patches once the
> fest begins!
>
> So please - new patches to the September  page!

One of the two patches added post July 1 was libpq event hooks, which
was a holdover from the June fest -- it wasn't a 'late breaking
patch', just a procedural oversight.  It was put there immediately
after reading the 'did we miss any patches' thread on -hackers.  Can't
speak as to the other patch.

merlin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CommitFest rules

2008-07-03 Thread Joshua D. Drake


On Thu, 2008-07-03 at 20:06 +0100, Dave Page wrote:
> On Thu, Jul 3, 2008 at 8:02 PM, Marko Kreen <[EMAIL PROTECTED]> wrote:
> > On 7/3/08, Dave Page <[EMAIL PROTECTED]> wrote:
> >> it concerns me that despite it being day 3 of the July commit fest,
> >>  people are still being advised to add new items to the wiki page.

> >>  So please - new patches to the September  page!
> >
> > But updates to existing patches should be ok?
> 
> Yes.

Perhaps this would be helpful:

http://wiki.postgresql.org/wiki/CommitFest:Help

Joshua D. Drake


> 
> 
> -- 
> Dave Page
> EnterpriseDB UK: http://www.enterprisedb.com
> 


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CommitFest rules

2008-07-03 Thread Dave Page
On Thu, Jul 3, 2008 at 8:02 PM, Marko Kreen <[EMAIL PROTECTED]> wrote:
> On 7/3/08, Dave Page <[EMAIL PROTECTED]> wrote:
>> it concerns me that despite it being day 3 of the July commit fest,
>>  people are still being advised to add new items to the wiki page.
>>
>>  To make the idea work, we need to stick to the rules we defined when
>>  we came up with the concept - specifically, no new patches once the
>>  fest begins!
>>
>>  So please - new patches to the September  page!
>
> But updates to existing patches should be ok?

Yes.


-- 
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CommitFest rules

2008-07-03 Thread Marko Kreen
On 7/3/08, Dave Page <[EMAIL PROTECTED]> wrote:
> it concerns me that despite it being day 3 of the July commit fest,
>  people are still being advised to add new items to the wiki page.
>
>  To make the idea work, we need to stick to the rules we defined when
>  we came up with the concept - specifically, no new patches once the
>  fest begins!
>
>  So please - new patches to the September  page!

But updates to existing patches should be ok?

-- 
marko

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Solaris ident authentication using unix domain sockets

2008-07-03 Thread Garick Hamlin
On Thu, Jul 03, 2008 at 02:01:22PM -0400, Tom Lane wrote:
> Garick Hamlin <[EMAIL PROTECTED]> writes:
> >   I have a patch that I have been using to support postgresql's
> > notion of ident authentication when using unix domain sockets on
> > Solaris.  This patch basically just adds support for using
> > getpeerucred() on Solaris so unix sockets and ident auth work just
> > like they do on Linux and elsewhere.
> 
> Cool.
> 
> > + #if defined(HAVE_GETPEERUCRED)
> > + #include <ucred.h>
> > + #endif
> 
> But this is not cool.  There might be systems out there that have
> getpeerucred() but not <ucred.h>, and this coding would cause a compile
> failure (even if they actually wouldn't be trying to use getpeerucred()
> because they have some other way to do it).  You need an explicit
> configure probe for the header file too, I think.
Ok, I can fix that.
> 
> Also, what is the rationale for putting this before the
> HAVE_STRUCT_CMSGCRED case instead of after?  Again, that seems like it
> could cause unexpected behavioral changes on platforms that work fine
> now (consider possibility that getpeerucred is there but broken).
Good point, it should be the other way.
> 
> regards, tom lane

Thanks,

Garick

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] CommitFest rules

2008-07-03 Thread Dave Page
it concerns me that despite it being day 3 of the July commit fest,
people are still being advised to add new items to the wiki page.

To make the idea work, we need to stick to the rules we defined when
we came up with the concept - specifically, no new patches once the
fest begins!

So please - new patches to the September  page!

Regards, Dave

-- 
Dave Page
EnterpriseDB UK:   http://www.enterprisedb.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Solaris ident authentication using unix domain sockets

2008-07-03 Thread Tom Lane
Garick Hamlin <[EMAIL PROTECTED]> writes:
>   I have a patch that I have been using to support postgresql's
> notion of ident authentication when using unix domain sockets on
> Solaris.  This patch basically just adds support for using 
> getpeerucred() on Solaris so unix sockets and ident auth work just
> like they do on Linux and elsewhere.

Cool.

> + #if defined(HAVE_GETPEERUCRED) 
> + #include <ucred.h>
> + #endif

But this is not cool.  There might be systems out there that have
getpeerucred() but not <ucred.h>, and this coding would cause a compile
failure (even if they actually wouldn't be trying to use getpeerucred()
because they have some other way to do it).  You need an explicit
configure probe for the header file too, I think.

Also, what is the rationale for putting this before the
HAVE_STRUCT_CMSGCRED case instead of after?  Again, that seems like it
could cause unexpected behavioral changes on platforms that work fine
now (consider possibility that getpeerucred is there but broken).

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Auto-explain patch

2008-07-03 Thread Alex Hunsaker
On Thu, Jul 3, 2008 at 10:58 AM, Dean Rasheed <[EMAIL PROTECTED]> wrote:
>
> Here is an updated version of the patch, with a debug_explain_min_duration
> parameter to allow explaining of just slow-running queries. I've also 
> incorporated
> a couple of Simon Riggs' suggestions for formatting the output better.
>
> Do I need to post this to -patches, or is that now obsolete?

Yes, it's obsolete now; instead, add it to July's commit fest at
http://wiki.postgresql.org/wiki/CommitFest:July.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Resolving polymorphic functions with relateddatatypes

2008-07-03 Thread Pavel Stehule
>
> What I'd be inclined to think about is making
> check_generic_type_consistency and related functions allow the
> arguments matched to ANYELEMENT to be of different actual types
> so long as select_common_type could determine a unique type to
> coerce them all to.  It'd take some refactoring (notably, because
> select_common_type wants to throw error on failure, and because
> there'd have to be a way to pass back the type that was selected
> for use later).

+1
It's the same as the current implementation of the coalesce, least, and
greatest functions, which works well. And with this change and with variadic
functions we can move these functions out of the parser.

Regards
Pavel Stehule



>
>regards, tom lane
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Solaris ident authentication using unix domain sockets

2008-07-03 Thread Garick Hamlin
Hi,
I have a patch that I have been using to support postgresql's
notion of ident authentication when using unix domain sockets on
Solaris.  This patch basically just adds support for using 
getpeerucred() on Solaris so unix sockets and ident auth work just
like they do on Linux and elsewhere.

This was my first attempt at wrestling with automake.  I've
tested that it builds properly after the patch is applied and autoreconf is
run on RHEL4/Linux/x86.  I am currently using the patch on Solaris 10 /
x86.

Garick

diff -cr postgresql_CVS/configure.in postgresql/configure.in
*** postgresql_CVS/configure.in Tue Jun 24 15:52:30 2008
--- postgresql/configure.in Tue Jun 24 15:57:22 2008
***
*** 1095,1101 
  AC_FUNC_ACCEPT_ARGTYPES
  PGAC_FUNC_GETTIMEOFDAY_1ARG
  
! AC_CHECK_FUNCS([cbrt dlopen fcvt fdatasync getpeereid getrlimit memmove poll 
pstat readlink setproctitle setsid sigprocmask symlink sysconf towlower utime 
utimes waitpid wcstombs])
  
  AC_CHECK_DECLS(fdatasync, [], [], [#include <unistd.h>])
  AC_CHECK_DECLS(posix_fadvise, [], [], [#include <fcntl.h>])
--- 1095,1101 
  AC_FUNC_ACCEPT_ARGTYPES
  PGAC_FUNC_GETTIMEOFDAY_1ARG
  
! AC_CHECK_FUNCS([getpeerucred cbrt dlopen fcvt fdatasync getpeereid getrlimit 
memmove poll pstat readlink setproctitle setsid sigprocmask symlink sysconf 
towlower utime utimes waitpid wcstombs])
  
  AC_CHECK_DECLS(fdatasync, [], [], [#include <unistd.h>])
  AC_CHECK_DECLS(posix_fadvise, [], [], [#include <fcntl.h>])
diff -cr postgresql_CVS/src/backend/libpq/hba.c 
postgresql/src/backend/libpq/hba.c
*** postgresql_CVS/src/backend/libpq/hba.c  Tue Jun 24 15:52:32 2008
--- postgresql/src/backend/libpq/hba.c  Tue Jun 24 15:53:00 2008
***
*** 25,30 
--- 25,33 
  #include <sys/uio.h>
  #include <sys/ucred.h>
  #endif
+ #if defined(HAVE_GETPEERUCRED) 
+ #include <ucred.h>
+ #endif
  #include <netinet/in.h>
  #include <arpa/inet.h>
  #include <unistd.h>
***
*** 1500,1505 
--- 1503,1539 
strlcpy(ident_user, pass->pw_name, IDENT_USERNAME_MAX + 1);
  
return true;
+ #elif defined(HAVE_GETPEERUCRED) /* Solaris > 10 */
+   uid_t   uid;
+   gid_t   gid;
+   struct passwd   *pass;
+   int ucred_ok=1;
+   ucred_t *ucred = NULL;
+   if (getpeerucred(sock, &ucred) == -1)
+   ucred_ok = 0;
+   if (ucred_ok && (uid = ucred_geteuid(ucred)) == -1 )
+   ucred_ok = 0;
+   if (ucred_ok && (gid = ucred_getrgid(ucred)) == -1 )
+   ucred_ok = 0;
+   if (ucred)
+   ucred_free(ucred);
+   if (!ucred_ok) {
+   /* We didn't get a valid credentials struct. */
+   ereport(LOG,
+   (errmsg("could not get peer credentials: %s",
+   strerror(errno))));
+   return false;
+   }
+   pass = getpwuid(uid);
+   if (pass == NULL)
+   {
+   ereport(LOG,
+   (errmsg("local user with ID %d does not exist",
+   (int) uid)));
+   return false;
+   }
+   strlcpy(ident_user, pass->pw_name, IDENT_USERNAME_MAX + 1);
+   return true;
  #elif defined(HAVE_STRUCT_CMSGCRED) || defined(HAVE_STRUCT_FCRED) || 
(defined(HAVE_STRUCT_SOCKCRED) && defined(LOCAL_CREDS))
struct msghdr msg;
  

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] A Windows x64 port of PostgreSQL

2008-07-03 Thread Mark Mielke

Tom Lane wrote:

To get a working WIN64 port it'd be necessary to go around and replace
long with size_t/ssize_t in the places where it matters --- but there
are not 450 of them, I don't think.  And I'd advise not touching the
places that use int; that will just bloat the patch and make it harder
to review, without actually buying any functionality


Plus - changing them all to 64-bit integers, even for cases that will not 
ever require > 32-bit values, is likely to be slower in all cases 
except for those that can be optimized to use only registers. I 
would use "int" by choice for any size that will never extend beyond 1 
GB, as it is likely to perform the best.
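
As a small standalone illustration of the data-model issue (this is not
PostgreSQL code): Win64 uses the LLP64 model, where long stays 32 bits while
pointers and size_t are 64 bits, which is why size_t/ssize_t rather than long
is wanted wherever a size can exceed 2^31:

#include <stdio.h>
#include <stddef.h>

int
main(void)
{
	/* On LP64 Unix: int=4, long=8, void*=8, size_t=8.
	 * On Win64 (LLP64): int=4, long=4, void*=8, size_t=8,
	 * so "long" can no longer hold an object size or a pointer. */
	printf("sizeof(int)    = %u\n", (unsigned) sizeof(int));
	printf("sizeof(long)   = %u\n", (unsigned) sizeof(long));
	printf("sizeof(void *) = %u\n", (unsigned) sizeof(void *));
	printf("sizeof(size_t) = %u\n", (unsigned) sizeof(size_t));
	return 0;
}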


Cheers,
mark

--
Mark Mielke <[EMAIL PROTECTED]>


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Auto-explain patch

2008-07-03 Thread Dean Rasheed

Here is an updated version of the patch, with a debug_explain_min_duration
parameter to allow explaining of just slow-running queries. I've also 
incorporated
a couple of Simon Riggs' suggestions for formatting the output better.

Do I need to post this to -patches, or is that now obsolete?

Regards, Dean


> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> CC: pgsql-hackers@postgresql.org
> Subject: RE: [HACKERS] Auto-explain patch
> Date: Wed, 2 Jul 2008 19:42:06 +
>
>
>> It's certainly not useful to *me* in its current form. It would
>> produce way too much (useless) output. However, if it were tied to
>> log_min_duration_statement so I get auto explains for long running
>> queries... That would be very useful indeed. Even if it has to
>> explain everything just to toss out the explain if it did not meet
>> log_min_duration_statement. Unless I missed something and that's
>> exactly what it does?
>
> Thanks for the feedback. I agree, it does need a way to limit the
> output, and target just the slow-running queries.
>
> I also remember now the problem I had last time:- since this debug
> output is produced at a lower level than the other statement logging
> (so it can explain *all* SQL executed, not just top-level statements), it
> is difficult to control using the normal statement logging parameters.
>
> It would be easy to add another parameter, debug_explain_min_duration,
> specific to this option, to limit it to slow low-level queries.
>
> This would allow setting debug_explain_min_duration to be smaller than
> log_min_duration_statement, which makes sense, since the latter
> controls logging of top-level statements which may result in multiple
> low-level queries.
>
> Doing it this way would mean instrumenting all queries, but only
> explaining the slow ones, when debug_explain_plan is on.
> I'll have a play and see how it goes...
>
> Regards, Dean
>

*** ./doc/src/sgml/config.sgml.orig	2008-03-11 16:59:09.0 +
--- ./doc/src/sgml/config.sgml	2008-07-03 14:20:15.0 +0100
***
*** 2674,2679 
--- 2674,2700 
 

  
+  
+   debug_explain_min_duration (integer)
+   
+debug_explain_min_duration configuration parameter
+   
+   
+
+ This option, together with ,
+ enables logging of debug messages explaining all SQL queries which
+ run for at least the specified number of milliseconds. Setting this
+ to zero (the default) will cause all statement execution plans to be
+ explained, when  is on.
+
+ 
+
+ When  is off, no statements
+ are explained, and this parameter has no effect.
+
+   
+  
+ 
   
silent_mode (boolean)

***
*** 2794,2799 
--- 2815,2822 
debug_print_rewritten (boolean)
debug_print_plan (boolean)
debug_pretty_print (boolean)
+   
+debug_explain_plan (boolean)

 debug_print_parse configuration parameter

***
*** 2806,2811 
--- 2829,2837 

 debug_pretty_print configuration parameter

+   
+debug_explain_plan configuration parameter
+   

 
  These parameters enable various debugging output to be emitted.
***
*** 2813,2824 
  the resulting parse tree, the query rewriter output, or the
  execution plan.  debug_pretty_print indents
  these displays to produce a more readable but much longer
! output format.  client_min_messages or
  log_min_messages must be
  DEBUG1 or lower to actually send this output
  to the client or the server log, respectively.
  These parameters are off by default.
 

   
  
--- 2839,2867 
  the resulting parse tree, the query rewriter output, or the
  execution plan.  debug_pretty_print indents
  these displays to produce a more readable but much longer
! output format.  debug_explain_plan prints
! the plan for each executed query in the same format as
! EXPLAIN ANALYZE. This includes queries executed from
! within functions.  client_min_messages or
  log_min_messages must be
  DEBUG1 or lower to actually send this output
  to the client or the server log, respectively.
  These parameters are off by default.
 
+ 
+
+ 
+  The reports produced by debug_explain_plan
+  are produced at a lower level in the database, as each query
+  is execut

Re: [HACKERS] PATCH: CITEXT 2.0

2008-07-03 Thread Alvaro Herrera
David E. Wheeler wrote:
> On Jul 3, 2008, at 00:19, Teodor Sigaev wrote:
>
>>> Hash opclass is 5 times simpler than btree one :)
>>
>> CREATE FUNCTION citext_hash(mchar)
>> RETURNS int4
>> AS 'MODULE_PATHNAME'
>> LANGUAGE C IMMUTABLE RETURNS NULL ON NULL INPUT;
>>
>> CREATE OPERATOR CLASS citext_ops
>> DEFAULT FOR TYPE mchar USING hash AS
>>  OPERATOR1   =  (citext, citext),
>>  FUNCTION1   citext_hash(citext);
>
> Thanks. What would citext_hash() look like? I don't see a text_hash() to 
> borrow from anywhere in src/.

See hash_any().  I assume the difficulty is making sure that
hash("FOO") = hash("foo") ...
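
For illustration, a rough sketch of what that could look like, assuming a
naive single-byte downcasing (a real citext_hash must fold case exactly the
way citext_cmp does, locale- and multibyte-aware, or the hash opclass will
disagree with the btree opclass):

#include "postgres.h"

#include <ctype.h>

#include "access/hash.h"
#include "fmgr.h"

PG_FUNCTION_INFO_V1(citext_hash);

Datum
citext_hash(PG_FUNCTION_ARGS)
{
	text	   *txt = PG_GETARG_TEXT_PP(0);
	char	   *str = VARDATA_ANY(txt);
	int			len = VARSIZE_ANY_EXHDR(txt);
	char	   *folded = palloc(len);
	Datum		result;
	int			i;

	/* naive per-byte case folding, for illustration only */
	for (i = 0; i < len; i++)
		folded[i] = tolower((unsigned char) str[i]);

	result = hash_any((unsigned char *) folded, len);

	pfree(folded);
	PG_FREE_IF_COPY(txt, 0);	/* worthwhile in a hash support function */
	PG_RETURN_DATUM(result);
}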

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] A Windows x64 port of PostgreSQL

2008-07-03 Thread Tom Lane
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> Present us the actual problems as you discover them, and we will find a 
> solution.  Right now we are just guessing.

>> There seems to be two problems that affect 64-bit POSIX systems too:

> Well, 64-bit POSIX works just fine, so unless you can present an actual 
> failure in practice, I suggest you do not worry about this.

I think the main thing Ken is missing is that there are large swaths of
the system that don't deal in objects larger than 1Gb, and thus do not
have any need of 64-bit sizes.  In the places where it actually matters,
we use long or size_t.

To get a working WIN64 port it'd be necessary to go around and replace
long with size_t/ssize_t in the places where it matters --- but there
are not 450 of them, I don't think.  And I'd advise not touching the
places that use int; that will just bloat the patch and make it harder
to review, without actually buying any functionality.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PATCH: CITEXT 2.0

2008-07-03 Thread David E. Wheeler

On Jul 3, 2008, at 00:19, Teodor Sigaev wrote:


Hash opclass is 5 times simpler than btree one :)


CREATE FUNCTION citext_hash(mchar)
RETURNS int4
AS 'MODULE_PATHNAME'
LANGUAGE C IMMUTABLE RETURNS NULL ON NULL INPUT;

CREATE OPERATOR CLASS citext_ops
DEFAULT FOR TYPE mchar USING hash AS
 OPERATOR1   =  (citext, citext),
 FUNCTION1   citext_hash(citext);


Thanks. What would citext_hash() look like? I don't see a text_hash()  
to borrow from anywhere in src/.


Thanks,

David


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Resolving polymorphic functions with relateddatatypes

2008-07-03 Thread Tom Lane
Gregory Stark <[EMAIL PROTECTED]> writes:
> I think what you're suggesting is making integer and floating point constants
> like 0 and 0.1 be treated as "unknown" or perhaps a different kind of unknown,
> "unknown integral type" and "unknown numeric type".

No, that would be a pretty dangerous way to go about it, because it
would have side-effects on all sorts of queries whether or not they
made any use of polymorphic functions.  Plus, it would only fix the
issue for numeric-group types, but the same thing would come up if
you had, say, NVL(text, varchar).

What I'd be inclined to think about is making
check_generic_type_consistency and related functions allow the
arguments matched to ANYELEMENT to be of different actual types
so long as select_common_type could determine a unique type to
coerce them all to.  It'd take some refactoring (notably, because
select_common_type wants to throw error on failure, and because
there'd have to be a way to pass back the type that was selected
for use later).
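
To make the intended resolution rule concrete, here is a toy standalone
illustration (invented example code, not parse_coerce.c; the little
numeric-family lattice below stands in for the real implicit-coercion rules
that select_common_type applies):

#include <stdio.h>

typedef enum { SMALLINT_T, INTEGER_T, BIGINT_T, NUMERIC_T, INVALID_T } TypeId;

/* true if "from" can be implicitly promoted to "to" (enum ordered by width) */
static int
can_promote(TypeId from, TypeId to)
{
	return from <= to;
}

/* pick the narrowest type every argument can be promoted to */
static TypeId
common_type(const TypeId *args, int nargs)
{
	TypeId		result = args[0];
	int			i;

	for (i = 1; i < nargs; i++)
	{
		if (can_promote(args[i], result))
			continue;			/* already covered */
		if (can_promote(result, args[i]))
			result = args[i];	/* widen the result */
		else
			return INVALID_T;	/* no common type: report an error */
	}
	return result;
}

int
main(void)
{
	/* nvl(bigint_col, 42): actual argument types are bigint and integer */
	TypeId		nvl_args[] = {BIGINT_T, INTEGER_T};

	/* prints 2, i.e. BIGINT_T: both arguments get coerced to bigint */
	printf("%d\n", (int) common_type(nvl_args, 2));
	return 0;
}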

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PATCH: CITEXT 2.0

2008-07-03 Thread David E. Wheeler

On Jul 2, 2008, at 22:14, Tom Lane wrote:


Note that this sort of stuff will mostly be fixed by pg_indent,
whether or not David does anything about it.  But certainly
conforming to the project style to begin with will cause less
pain to reviewers' eyeballs.


Yeah, I'll change it. I'm JAPH, so kind of made up the formatting as I  
went, though I did try to copy the style in varlena.c.



+// PostgreSQL 8.2 Magic.
+#ifdef PG_MODULE_MAGIC
+PG_MODULE_MAGIC;
+#endif


Here however is an outright bug: we do not allow // comments, because we
still support ANSI-spec compilers that don't recognize them.


Forgot about that. I'll change it for the next version.


btree cmp functions are allowed to return int32 negative, zero, or
positive.  There is *not* any requirement that negative or positive
results be exactly -1 or +1.  However, if you are comparing values
that are int32 or wider then you can't just return their difference
because it might overflow.


Thanks for the explanation. I'll make sure that they're both int32.
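
As a note to self, a tiny standalone example of the overflow point (nothing
citext-specific): returning "a - b" from a comparator can overflow for
int32-or-wider inputs, so the comparison should be written out explicitly:

#include <limits.h>
#include <stdio.h>

/* Safe for any int32 inputs: any negative/zero/positive result is fine. */
static int
my_cmp(int a, int b)
{
	if (a < b)
		return -1;
	if (a > b)
		return 1;
	return 0;
}

int
main(void)
{
	/* "INT_MIN - 1" would overflow and typically come out positive,
	 * i.e. the wrong sign; the explicit comparison gets it right. */
	printf("%d\n", my_cmp(INT_MIN, 1));		/* prints -1 */
	return 0;
}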

The "leak" is irrelevant for larger/smaller.  The only place where it's
actually useful to do PG_FREE_IF_COPY is in a btree or hash index
support function.  In other cases you can assume that you're being
called in a memory context that's too short-lived for it to matter.


So would that be for any function used by

CREATE OPERATOR CLASS citext_ops
DEFAULT FOR TYPE CITEXT USING btree AS
OPERATOR1   <  (citext, citext),
OPERATOR2   <= (citext, citext),
OPERATOR3   =  (citext, citext),
OPERATOR4   >= (citext, citext),
OPERATOR5   >  (citext, citext),
FUNCTION1   citext_cmp(citext, citext);

? (And then the btree operator and function to be added, of course.)





5) There are several commented out lines in CREATE OPERATOR
statement mostly related to NEGATOR. Is there some reason for that?


I copied it from the original citext.sql. Not sure what effect it  
has.


http://developer.postgresql.org/pgdocs/postgres/xoper-optimization.html


Thanks. Sounds like it'd be valuable to have them in there. I'll add  
tests, as well.


Best,

David


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCHES] pg_dump lock timeout

2008-07-03 Thread Tom Lane
daveg <[EMAIL PROTECTED]> writes:
> On Thu, Jul 03, 2008 at 11:15:10AM +0300, Marko Kreen wrote:
>> - The statement_timeout is set back with "statement_timeout = default"
>> Maybe it would be better to do "= 0" here?  Although such decision
>> would go outside the scope of the patch, I see no sense having
>> any other statement_timeout for actual dumping.

> I'd prefer to leave whatever policy is otherwise in place alone.

The policy in place in CVS HEAD is that pg_dump explicitly sets
statement_timeout to 0.  Setting it to default would break that,
and will certainly not be accepted.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Switching between terminals

2008-07-03 Thread Gregory Stark

"cinu" <[EMAIL PROTECTED]> writes:

Could anyone please tell me where I am going wrong and if there is a way I can 
get the same behaviour that I am getting while I am executing through the psql 
prompt.

a) you might try hitting return occasionally in your email :)

b) you maybe need to put a SELECT pg_sleep(10) between the two queries in the
first file you run so that it hasn't updated both tables and exited before the
second one even starts. But I'm just guessing since you haven't sent the
actual files you're running.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL 
training!

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Switching between terminals

2008-07-03 Thread Csaba Nagy
On Thu, 2008-07-03 at 19:56 +0530, cinu wrote:
> Could anyone please tell me where I am going wrong and if there is a
> way I can get the same behaviour that I am getting while I am
> executing through the psql prompt.

Your mistake is that you think a transaction is related to your
terminal, but it is in fact tied to the psql session you are running...

Your first example is running one psql instance per terminal, hence one
transaction per terminal, while in your second example the transaction
is terminated each time psql finishes running. Basically what you're
asking for is to keep a transaction opened by one session (the first
psql execution), connect to it with the second session (the second
psql call), and continue the transaction which was opened by the first
one... which I'm pretty sure is wrong to want. It is likely possible to
do (using PREPARE TRANSACTION), but even likelier that it is the wrong
thing to do in normal circumstances. If you say what you really want
to do, I bet you'll get a lot more useful advice...

Cheers,
Csaba.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Switching between terminals

2008-07-03 Thread cinu
Hi All,

I am having an issue with a deadlock scenario in PostgreSQL 8.3.1. I have the
following database postgres; what I do is create two tables t1 and t2 in this
database, with the following fields: t1(a_id smallint, fn character(20),
ln character(20), rt smallint) and t2(c_id smallint, c_name character(20)).
The connection to the "postgres" database is established through two terminals.

From the 1st terminal I give the following command:
1) begin transaction; update t2 set c_name = 'lock' where c_id = 1;
From the 2nd terminal I give the following command:
2) begin transaction; update t1 set ln = 'lock' where a_id = 1;
Then I come back to the 1st terminal and execute the following:
3) update t1 set ln = 'lock' where a_id = 1;
Then I come to the 2nd terminal and execute the following:
4) update t2 set c_name = 'lock' where c_id = 1;

When I come out I get the following error message:
ERROR:  deadlock detected
DETAIL:  Process 15171 waits for ShareLock on transaction 12738; blocked by
process 15183.
Process 15183 waits for ShareLock on transaction 12739; blocked by process
15171.

This is perfectly fine, but what I am trying to achieve is this: I am putting
the above four queries in 4 different .sql files and executing them in the
same way as displayed above, using two different terminals; please refer to
the sequence below.

From the 1st terminal I give the following command:
1) psql -f dl11.sql -U postgres -d postgres
From the 2nd terminal I give the following command:
2) psql -f dl21.sql -U postgres -d postgres
Then I come back to the 1st terminal and execute the following:
3) psql -f dl12.sql -U postgres -d postgres
Then I come to the 2nd terminal and execute the following:
4) psql -f dl22.sql -U postgres -d postgres

I should be getting the same message about deadlock detection, but I am unable
to get that. Could anyone please tell me where I am going wrong, and if there
is a way I can get the same behaviour that I am getting while executing
through the psql prompt.

Thanks in advance. Waiting for a reply.

Regards,
Cinu





Re: [HACKERS] Adding variables for segment_size, wal_segment_size and block sizes

2008-07-03 Thread Bernd Helmle
--On Monday, June 30, 2008 18:47:33 -0400 Bruce Momjian <[EMAIL PROTECTED]> 
wrote:




I'd like to implement them if we agree on them


Bernd, have you made any progress on this?


Here's a patch for this. I'll add it to the commit fest wiki page if it's 
okay for you.


--
 Thanks

   Bernd

*** a/doc/src/sgml/config.sgml
--- b/doc/src/sgml/config.sgml
***
*** 4759,4764  dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir'
--- 4759,4807 

   
  
+  
+   segment_size (integer)
+   
+segment_size configuration parameter
+   
+   
+
+ Reports the number of pages which can be stored within a file segment.  
+ The total physical size of a segment file in bytes can be determined by multiplying
+ the block_size parameter with segment_size.
+
+   
+  
+ 
+  
+   wal_block_size (integer)
+   
+wal_block_size configuration parameter
+   
+   
+
+ Reports the size of a write ahead log disk block.  It is determined by the value
+ of XLOG_BLCKSZ when building the server. The default
+ value is 8192 bytes. wal_block_size influences the total physical
+ size of a write ahead log segment. See  for more information.
+
+   
+  
+ 
+  
+   wal_segment_size (integer)
+   
+wal_segment_size configuration parameter
+   
+   
+
+ Reports the number of pages within a write ahead log segment file. wal_segment_size multiplied with wal_block_size gives the total physical size of a write ahead
+ log segment file in bytes.
+
+   
+  
+ 
   
integer_datetimes (boolean)

*** a/src/backend/utils/misc/guc.c
--- b/src/backend/utils/misc/guc.c
***
*** 29,34 
--- 29,35 
  #include "access/transam.h"
  #include "access/twophase.h"
  #include "access/xact.h"
+ #include "access/xlog_internal.h"
  #include "catalog/namespace.h"
  #include "commands/async.h"
  #include "commands/prepare.h"
***
*** 355,360  static int	max_function_args;
--- 356,364 
  static int	max_index_keys;
  static int	max_identifier_length;
  static int	block_size;
+ static int  segment_size;
+ static int  wal_block_size;
+ static int  wal_segment_size;
  static bool integer_datetimes;
  
  /* should be static, but commands/variable.c needs to get at these */
***
*** 1731,1736  static struct config_int ConfigureNamesInt[] =
--- 1735,1774 
  	},
  
  	{
+ 		{"segment_size", PGC_INTERNAL, PRESET_OPTIONS,
+ 		gettext_noop("Shows the number of pages per disk file."),
+ 		NULL,
+ 		GUC_UNIT_BLOCKS | GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE
+ 		},
+ 		&segment_size,
+ 		RELSEG_SIZE,
+ 		RELSEG_SIZE,
+ 		RELSEG_SIZE, NULL, NULL
+ 	},
+ 
+ 	{
+ 		{"wal_block_size", PGC_INTERNAL, PRESET_OPTIONS,
+ 			gettext_noop("Shows the write ahead log block size."),
+ 			NULL,
+ 			GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE
+ 		},
+ 		&wal_block_size,
+ 		XLOG_BLCKSZ, XLOG_BLCKSZ, XLOG_BLCKSZ, NULL, NULL
+ 	},
+ 
+ 	{
+ 		{"wal_segment_size", PGC_INTERNAL, PRESET_OPTIONS,
+ 			gettext_noop("Shows the number of pages per write ahead log segment."),
+ 			NULL,
+ 			GUC_UNIT_XBLOCKS | GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE
+ 		},
+ 		&wal_segment_size,
+ 		(XLOG_SEG_SIZE / XLOG_BLCKSZ), 
+ 		(XLOG_SEG_SIZE / XLOG_BLCKSZ), 
+ 		(XLOG_SEG_SIZE / XLOG_BLCKSZ), NULL, NULL
+ 	},
+ 
+ 	{
  		{"autovacuum_naptime", PGC_SIGHUP, AUTOVACUUM,
  			gettext_noop("Time to sleep between autovacuum runs."),
  			NULL,

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Command execution

2008-07-03 Thread Aaron Spiteri

Thanks for pointers, that has cleared a few things up for me.

On 03/07/2008, at 11:27 PM, Csaba Nagy wrote:


On Thu, 2008-07-03 at 23:15 +1000, Aaron Spiteri wrote:

Inside foo there was a INSERT and UPDATE, and the INSERT failed but
the UPDATE succeeded would the UPDATE be rolled back?


Just to add to the other answers, if the INSERT is before the UPDATE in
the function, the function execution stops when the INSERT fails, and so
the UPDATE will never be executed in the first place...

Cheers,
Csaba.





--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Command execution

2008-07-03 Thread Csaba Nagy
On Thu, 2008-07-03 at 23:15 +1000, Aaron Spiteri wrote:
> Inside foo there was a INSERT and UPDATE, and the INSERT failed but  
> the UPDATE succeeded would the UPDATE be rolled back?

Just to add to the other answers, if the INSERT is before the UPDATE in
the function, the function execution stops when the INSERT fails, and so
the UPDATE will never be executed in the first place...

Cheers,
Csaba.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Command execution

2008-07-03 Thread A. Kretschmer
On Thu, 03.07.2008 at 23:15:33 +1000, Aaron Spiteri wrote the following:
> Hi guys,
> 
> I have been following the mailing list and reading the source code  
> for a little while,  and was wandering if someone could fill in the  
> gaps for me.  Does PostgresQL view updates and inserts performed in a  
> function as part of the same transaction or are they considered  
> separate transactions:
> 
> For instance say I had a function name foo and ran the command:
> 
> SELECT foo();
> 
> Inside foo there was a INSERT and UPDATE, and the INSERT failed but  
> the UPDATE succeeded would the UPDATE be rolled back?

The whole function runs inside a single transaction. If the INSERT fails, the
complete call to foo fails. In other words: yes, the UPDATE is rolled back as well.
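
A minimal sketch (hypothetical table and function names, assuming plpgsql is
installed), with the UPDATE placed before the failing INSERT so that there is
something to roll back:

CREATE TABLE t (id integer PRIMARY KEY, note text);
INSERT INTO t VALUES (1, 'original');

CREATE FUNCTION foo() RETURNS void AS $$
BEGIN
    UPDATE t SET note = 'changed' WHERE id = 1;   -- succeeds
    INSERT INTO t VALUES (1, 'duplicate');        -- fails: primary key violation
END;
$$ LANGUAGE plpgsql;

SELECT foo();                     -- the whole call errors out
SELECT note FROM t WHERE id = 1;  -- still 'original': the UPDATE was rolled back as well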


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Resolving polymorphic functions with relateddatatypes

2008-07-03 Thread Simon Riggs

On Thu, 2008-07-03 at 13:54 +0100, Gregory Stark wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
> 
> > On Thu, 2008-07-03 at 14:11 +0300, Heikki Linnakangas wrote:
> >
> >> > What I'd like it to do is to recognise that the 0 should be cast
> >> > implicitly to another datatype within the same family. I want and expect
> >> >  nvl(char_column, 0)
> >> > to fail, but I expect the various numeric/integer types we have to play
> >> > nicely together without tears.
> >> 
> >> So, it would be analogous to the 'unknown' type, but for numeric 
> >> literals instead of text literals. Seems reasonable. It still wouldn't 
> >> allow nvl(1::bigint, 2::int4), though, just as the unknown type doesn't 
> >> help with nvl('foo'::text, 'bar'::varchar).
> >
> > Well, it would be nice if we could work with the unknown type also, but
> > I don't expect that's meaningful.
> 
> Postgres's way of spelling constants of unknown type is to put them in single
> quotes. That is, 'foo' isn't a character string in Postgres, it's *any* kind
> of constant with an unknown type. So this would work:
> 
> nvl(numeric_column, '0')
> 
> I think what you're suggesting is making integer and floating point constants
> like 0 and 0.1 be treated as "unknown" or perhaps a different kind of unknown,
> "unknown integral type" and "unknown numeric type".
> 
> Personally I think the way it works now is weird too, but it's been that way
> forever and changing it would be a pretty massive behaviour change.

Well, I can work around the problem; it just seems like there shouldn't
be one.

I'm OK with a massive behaviour change (like 8.3) as long as it's a
controllable option.

By far the biggest behaviour change is to get the rest of the world to
work the way we do. People write (and *have written*) SQL that doesn't
work this way.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Command execution

2008-07-03 Thread Aaron Spiteri

Hi guys,

I have been following the mailing list and reading the source code
for a little while, and was wondering if someone could fill in the
gaps for me.  Does PostgreSQL view updates and inserts performed in a
function as part of the same transaction, or are they considered
separate transactions?


For instance, say I had a function named foo and ran the command:

SELECT foo();

If inside foo there were an INSERT and an UPDATE, and the INSERT failed but
the UPDATE succeeded, would the UPDATE be rolled back?


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCHES] pg_dump lock timeout

2008-07-03 Thread daveg
On Thu, Jul 03, 2008 at 11:15:10AM +0300, Marko Kreen wrote:
> On 5/11/08, daveg <[EMAIL PROTECTED]> wrote:
> >  Attached is a patch to add a commandline option to pg_dump to limit how 
> > long
> >  pg_dump will wait for locks during startup.
> 
> My quick review:
> 
> - It does not seem important enough to waste a short option on.
>   Having only long option should be enough.

Agreed. I'll change it.
 
> - It would be more polite to do SET LOCAL instead SET.
>   (Eg. it makes safer to use pg_dump through pooler.)

Also agreed. Thanks.

> - The statement_timeout is set back with "statement_timeout = default"
>   Maybe it would be better to do "= 0" here?  Although such decision
>   would go outside the scope of the patch, I see no sense having
>   any other statement_timeout for actual dumping.

I'd prefer to leave whatever policy is otherwise in place alone. I can see
use cases for either having or not having a timeout for pg_dump, but it does
seem  outside the scope of this patch.

Thanks for your review and comments.

-dg

-- 
David Gould   [EMAIL PROTECTED]  510 536 1443510 282 0869
If simplicity worked, the world would be overrun with insects.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Resolving polymorphic functions with relateddatatypes

2008-07-03 Thread Gregory Stark
"Simon Riggs" <[EMAIL PROTECTED]> writes:

> On Thu, 2008-07-03 at 14:11 +0300, Heikki Linnakangas wrote:
>
>> > What I'd like it to do is to recognise that the 0 should be cast
>> > implicitly to another datatype within the same family. I want and expect
>> >  nvl(char_column, 0)
>> > to fail, but I expect the various numeric/integer types we have to play
>> > nicely together without tears.
>> 
>> So, it would be analogous to the 'unknown' type, but for numeric 
>> literals instead of text literals. Seems reasonable. It still wouldn't 
>> allow nvl(1::bigint, 2::int4), though, just as the unknown type doesn't 
>> help with nvl('foo'::text, 'bar'::varchar).
>
> Well, it would be nice if we could work with the unknown type also, but
> I don't expect that's meaningful.

Postgres's way of spelling constants of unknown type is to put them in single
quotes. That is, 'foo' isn't a character string in Postgres, it's *any* kind
of constant with an unknown type. So this would work:

nvl(numeric_column, '0')

I think what you're suggesting is making integer and floating point constants
like 0 and 0.1 be treated as "unknown" or perhaps a different kind of unknown,
"unknown integral type" and "unknown numeric type".

Personally I think the way it works now is weird too, but it's been that way
forever and changing it would be a pretty massive behaviour change.
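
A short sketch, assuming orafce's nvl(anyelement, anyelement) is installed and
numeric_col is a numeric column of some hypothetical table t:

SELECT nvl(numeric_col, 0)   FROM t;   -- ERROR: function nvl(numeric, integer) does not exist
SELECT nvl(numeric_col, '0') FROM t;   -- should work: '0' is an unknown-type literal
                                       -- and is resolved to numeric to match the
                                       -- first argument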

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's RemoteDBA services!

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] A Windows x64 port of PostgreSQL

2008-07-03 Thread Peter Eisentraut
On Thursday, 3 July 2008, Ken Camann wrote:
> > Anyway, back to the immediate problem.  What would probably make sense
> > to try as a first step is something like
> >
> > #ifndef WIN64
> > typedef unsigned long Datum;/* XXX sizeof(long) >= sizeof(void *) */
> > #else
> > typedef unsigned long long Datum;   /* Microsoft's out in left field
> > */ #endif
> >
> > and see how many warnings that eliminates ...
>
> It's a question about style.  If Microsoft Visual C really is the only
> one like this, then I guess there is no harm in #ifdef _WIN64 instead
> of #ifdef (some other name that captures the peculiarity of what is
> happening but isn't MSFT dependent).  win32.h (not written by me)
> already defines SIZEOF_SIZE_T and SIZEOF_LONG_INT (or something like
> that)...It might be a better idea to use those two.

Style is something that we can worry about later, once we know how the code is 
supposed to behave.  Coding Datum to be >= sizeof(void *) _and_ >= 
sizeof(long) and whatever else isn't that hard to do stylishly later on.

> But the thing is, this isn't the only issue.  There is the fact that
> "int" appears in c.h for memory offsets and not long.  As long as I
> might have to change a whole lot of this stuff to make exceptions for
> Windows, I was wondering what the community thinks of the way this is
> all currently handled and what advice they might have for me when
> changing stuff.

Present us with the actual problems as you discover them, and we will find a 
solution.  Right now we are just guessing.

> There seems to be two problems that affect 64-bit POSIX systems too:

Well, 64-bit POSIX works just fine, so unless you can present an actual 
failure in practice, I suggest you do not worry about this.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Resolving polymorphic functions with related datatypes

2008-07-03 Thread Simon Riggs

On Thu, 2008-07-03 at 14:11 +0300, Heikki Linnakangas wrote:

> > What I'd like it to do is to recognise that the 0 should be cast
> > implicitly to another datatype within the same family. I want and expect
> >  nvl(char_column, 0)
> > to fail, but I expect the various numeric/integer types we have to play
> > nicely together without tears.
> 
> So, it would be analogous to the 'unknown' type, but for numeric 
> literals instead of text literals. Seems reasonable. It still wouldn't 
> allow nvl(1::bigint, 2::int4), though, just as the unknown type doesn't 
> help with nvl('foo'::text, 'bar'::varchar).

Well, it would be nice if we could work with the unknown type also, but
I don't expect that's meaningful.

Yet
func(column_of_typeX, constant)
ought to be able to resolve correctly when
* no function exists with signature
func(typeX, typeY)
* yet there exists a function
func(anyelement, anyelement)
* and an implicit cast exists typeY => typeX
(assuming constant is normally resolved to typeY)

> > If we can do it for indexes, can we do it for polymorphic functions also
> > when there is no matching function?
> 
> Umm, what do indexes have to do with this?

Nothing, except that we solved implicit casting for that situation, so
perhaps it is possible for this situation...

Anyway, just posting for reference. Workarounds exist, just wanted to
make sure the issue was mentioned.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Attaching and using the Postgres shared memory segment

2008-07-03 Thread Heikki Linnakangas

Paul van den Bogaard wrote:
> Since these data structures are for collecting information I was able to
> create a new function that retrieves its information from these data
> structures and returns them as a result from a query.
>
> However this is too intrusive. Since everything is in shared memory it
> should be possible for an external, yet to be created, process to attach
> to the shared memory segment (read only mode only) to collect these data
> structures.


That sounds dangerous. I'm seeing problems with postmaster stop and 
restart if there's an external process attached to the postgres shared 
memory segment.


> I have the address mapping of the shared memory (doing stuff on Solaris
> only). I can pass this information to InitShmemAccess (ipc/shmem.c).
> There is however a missing setting for ShmemIndex. This one can be
> initialised by calling InitShmemIndex(). The only thing that makes me
> wonder is its need to use ShmemIndexLock. Although this is just an enum,
> and therefore an index in some array created in CreateLWLock
> (lwlock.c), I do not see how I can get access to this lock.


You can't acquire a lightweight lock safely outside a real backend. 
You'd need to have a valid PGPROC entry, at least.


I would suggest forgetting about that idea and sticking to your original 
approach of a function that returns the data as a result from a query. 
Or, create a completely separate shared memory block for your own data, 
using plain Solaris shmem functions.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Resolving polymorphic functions with related datatypes

2008-07-03 Thread Heikki Linnakangas

Simon Riggs wrote:

> I'm using the nvl() function from the orafce package. It is defined as a
> polymorphic function so its function signature is
>   nvl(anyelement, anyelement)
>
> Now if I try to use the function in this very typical way
>   nvl(numeric_col, 0)
>
> we get
>
>  ERROR: function nvl(numeric, integer) does not exist
>
> The same error occurs if we have nvl(smallint, integer) etc
>
> This is a real shame 'cos polymorphic functions ought to be a great way
> of saving development time and catalog space, yet they seem to fall down
> a hole without implicit casting.
>
> What I'd like it to do is to recognise that the 0 should be cast
> implicitly to another datatype within the same family. I want and expect
>  nvl(char_column, 0)
> to fail, but I expect the various numeric/integer types we have to play
> nicely together without tears.


So, it would be analogous to the 'unknown' type, but for numeric 
literals instead of text literals. Seems reasonable. It still wouldn't 
allow nvl(1::bigint, 2::int4), though, just as the unknown type doesn't 
help with nvl('foo'::text, 'bar'::varchar).



> If we can do it for indexes, can we do it for polymorphic functions also
> when there is no matching function?


Umm, what do indexes have to do with this?
--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [WIP] patch - Collation at database level

2008-07-03 Thread Zdenek Kotala

Tom Lane wrote:

> Gregory Stark <[EMAIL PROTECTED]> writes:
>> Out of curiosity, what is a "user-defined collation"? Are there SQL statements
>> to go around declaring what order code points should be sorted in? That seems
>> like it would be... quite tedious!
>
> Hm, that's a good point.  SQL99 has
>
>   <collation definition> ::=
>       CREATE COLLATION <collation name> FOR
>           <character set specification>
>         FROM <existing collation name>
>         [ <pad characteristic> ]
>
>   <existing collation name> ::= <collation name>
>
>   <pad characteristic> ::=
>       NO PAD
>     | PAD SPACE
>
> which seems pretty stupid if you ask me --- all the mechanism required
> to manage a new object type, just to enable PAD SPACE or not?
> (Especially when PAD SPACE itself is an utterly broken, useless concept
> ... but I digress.)  You might as well just provide all the standard
> collations in both variants and be done with it.
>
> The statement looks the same in last year's 200n draft, so it's not
> like they were just about to add some more capability.


The proposed syntax of CREATE COLLATION is:

CREATE COLLATION <collation name> FOR <character set specification>
FROM <existing collation name> [ STRCOLFN <function name> ]
[  ] [  ] [  ]
[ LCCOLLATE <lc_collate> ] [ LCCTYPE <lc_ctype> ]

Which extends ANSI specification.



> We might be best off to treat collations like index access methods,
> ie, they're theoretically add-able but there's no infrastructure for
> managing them, and what's expected is that all the ones you need are
> created by initdb.


I think you cannot create all collations at bootstrap. You can only create a record 
for the actual LC_COLLATION, because there is no standard way to obtain a 
complete list of supported collations, and there is also a problem if you install 
new locales after initdb.


When I looked at other DBs (MS SQL, MySQL, DB2, Firebird), only Firebird 
supports a CREATE COLLATION command. The other databases have a hard-coded list of 
locales. A hard-coded solution means using some lib (e.g. ICU) with unified names, 
or keeping a locale name mapping for all supported OSes.


I personally prefer an open solution where I can create my own collation and specify 
a collation function to handle it.
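
Purely for illustration (hypothetical names, and using only the parts of the
proposed syntax shown above), creating a collation might then look something like:

CREATE COLLATION cs_custom FOR "UTF8" FROM "default"
    LCCOLLATE 'cs_CZ.UTF-8' LCCTYPE 'cs_CZ.UTF-8';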



Zdenek





--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Resolving polymorphic functions with related datatypes

2008-07-03 Thread Simon Riggs

On Thu, 2008-07-03 at 12:22 +0200, Pavel Stehule wrote:
> 2008/7/3 Simon Riggs <[EMAIL PROTECTED]>:
> > I'm using the nvl() function from the orafce package. It is defined as a
> > polymorphic function so its function signature is
> >  nvl(anyelement, anyelement)
> >
> > Now if I try to use the function in this very typical way
> >  nvl(numeric_col, 0)
> >
> > we get
> >
> >  ERROR: function nvl(numeric, integer) does not exist
> >
> > The same error occurs if we have nvl(smallint, integer) etc
> >
> > This is a real shame 'cos polymorphic functions ought to be a great way
> > of saving development time and catalog space, yet they seem to fall down
> > a hole without implicit casting.
> >
> > What I'd like it to do is to recognise that the 0 should be cast
> > implicitly to another datatype within the same family. I want and expect
> >  nvl(char_column, 0)
> > to fail, but I expect the various numeric/integer types we have to play
> > nicely together without tears.
> >
> > If we can do it for indexes, can we do it for polymorphic functions also
> > when there is no matching function?
> >
> 
> +1
> 
> there is similar problem with literal constant.

as well as NULL itself, which doesn't have a type when attempting to
resolve to anyelement.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Resolving polymorphic functions with related datatypes

2008-07-03 Thread Pavel Stehule
2008/7/3 Simon Riggs <[EMAIL PROTECTED]>:
> I'm using the nvl() function from the orafce package. It is defined as a
> polymorphic function so its function signature is
>  nvl(anyelement, anyelement)
>
> Now if I try to use the function in this very typical way
>  nvl(numeric_col, 0)
>
> we get
>
>  ERROR: function nvl(numeric, integer) does not exist
>
> The same error occurs if we have nvl(smallint, integer) etc
>
> This is a real shame 'cos polymorphic functions ought to be a great way
> of saving development time and catalog space, yet they seem to fall down
> a hole without implicit casting.
>
> What I'd like it to do is to recognise that the 0 should be cast
> implicitly to another datatype within the same family. I want and expect
>  nvl(char_column, 0)
> to fail, but I expect the various numeric/integer types we have to play
> nicely together without tears.
>
> If we can do it for indexes, can we do it for polymorphic functions also
> when there is no matching function?
>

+1

there is similar problem with literal constant.

Pavel Stehule

> --
>  Simon Riggs   www.2ndQuadrant.com
>  PostgreSQL Training, Services and Support
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Attaching and using the Postgres shared memory segment

2008-07-03 Thread Paul van den Bogaard
To look into an idea I currently have, I need (and implemented) a new  
piece of memory that resides in (Postgres) shared memory.  My data  
structures are in place and the new database seems to be running fine, even  
under high load :-)


Since these data structures are for collecting information I was able  
to create a new function that retrieves its information from these  
data structures and returns them as a result from a query.


However this is too intrusive. Since everything is in shared memory it  
should be possible for an external, yet to be created, process to  
attach to the shared memory segment (read only mode only) to collect  
these data structures.


However, when I read the source I feel a little stuck. Kind of a chicken-  
and-egg situation, perhaps.


I have the address mapping of the shared memory (doing stuff on Solaris  
only). I can pass this information to InitShmemAccess (ipc/shmem.c).  
There is however a missing setting for ShmemIndex. This one can be  
initialised by calling InitShmemIndex(). The only thing that makes me  
wonder is its need to use ShmemIndexLock. Although this is just an enum,  
and therefore an index in some array created in CreateLWLock  
(lwlock.c), I do not see how I can get access to this lock. Is it "just"  
there, since obviously the whole Postgres shared memory is allocated,  
initialised and likely already heavily used?
Do the locks map to a "well-known address" so that I can do without further  
in-process initialisation? Or is there another routine (or two ...)  
that I need to call in order to make things work?


Advise on how to proceed or pointers to docs in which this stuff is  
explained are highly appreciated.


Thanks
Paul.



-
Paul van den Bogaard   [EMAIL PROTECTED]
ISV-E  -- ISV Engineering, Opensource Engineering group

Sun Microsystems, Inc      phone:     +31 334 515 918
Saturnus 1                 extension: x (70)15918
3824 ME Amersfoort         mobile:    +31 651 913 354
The Netherlands            fax:       +31 334 515 001



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP patch: reducing overhead for repeat de-TOASTing

2008-07-03 Thread Mark Cave-Ayland

Gregory Stark wrote:

> Well at least it caught the bug that Mark was performance testing with a
> --enable-cassert build :/

True ;)  I appreciated that there would be some overhead, but I didn't 
think it would be that much. This was mainly since I seem to remember 
there was talk a while back of enabling some assertions in production 
builds.



ATB,

Mark.

--
Mark Cave-Ayland
Sirius Corporation - The Open Source Experts
http://www.siriusit.co.uk
T: +44 870 608 0063

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Resolving polymorphic functions with related datatypes

2008-07-03 Thread Simon Riggs
I'm using the nvl() function from the orafce package. It is defined as a
polymorphic function so its function signature is 
  nvl(anyelement, anyelement)

Now if I try to use the function in this very typical way
  nvl(numeric_col, 0)

we get

 ERROR: function nvl(numeric, integer) does not exist

The same error occurs if we have nvl(smallint, integer) etc

This is a real shame 'cos polymorphic functions ought to be a great way
of saving development time and catalog space, yet they seem to fall down
a hole without implicit casting.

What I'd like it to do is to recognise that the 0 should be cast
implicitly to another datatype within the same family. I want and expect
 nvl(char_column, 0)
to fail, but I expect the various numeric/integer types we have to play
nicely together without tears.

If we can do it for indexes, can we do it for polymorphic functions also
when there is no matching function?

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP patch: reducing overhead for repeat de-TOASTing

2008-07-03 Thread Gregory Stark

>> I'm inclined to think that we'd better turn that off by default,
>> since it's not looking like it's catching anything new.

Well at least it caught the bug that Mark was performance testing with a
--enable-cassert build :/


-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's PostGIS support!

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP patch: reducing overhead for repeat de-TOASTing

2008-07-03 Thread Mark Cave-Ayland

Tom Lane wrote:

> OK, I've reproduced the test case locally.  I believe that when you
> say "worse", you mean "worse than 8.3", right?  And you did tell me
> offlist that you were testing with --enable-cassert.  CVS HEAD has
> very substantially greater cassert overhead because of the
> randomize_memory addition --- oprofile output for this test looks like
>
> samples  %image name   symbol name
> 1239580  78.7721  postgres randomize_mem
> 143544    9.1218  libc-2.7.so  memcpy
> 48039 3.0528  libc-2.7.so  memset
> 13838 0.8794  postgres LWLockAcquire
> 12176 0.7738  postgres index_getnext
> 11697 0.7433  postgres LWLockRelease
> 10406 0.6613  postgres hash_search_with_hash_value
> 4739  0.3012  postgres toast_fetch_datum
> 4099  0.2605  postgres _bt_checkkeys
> 3905  0.2482  postgres AllocSetAlloc
> 3751  0.2384  postgres PinBuffer
> 3545  0.2253  postgres UnpinBuffer
>
> I'm inclined to think that we'd better turn that off by default,
> since it's not looking like it's catching anything new.

Yes, I suspect that's probably it. I applied the patch straight to CVS 
tip as I wasn't aware of any changes that would affect the unpatched 
result, but I was obviously wrong ;)


(cut)

> On the whole I'm still feeling pretty discouraged about this patch ...

At the very least we have some more information on how an eventual 
solution should work, and a test case to help analyse the effectiveness 
of any potential solution.



ATB,

Mark.

--
Mark Cave-Ayland
Sirius Corporation - The Open Source Experts
http://www.siriusit.co.uk
T: +44 870 608 0063

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCHES] pg_dump lock timeout

2008-07-03 Thread Marko Kreen
On 5/11/08, daveg <[EMAIL PROTECTED]> wrote:
>  Attached is a patch to add a commandline option to pg_dump to limit how long
>  pg_dump will wait for locks during startup.

My quick review:

- It does not seem important enough to waste a short option on.
  Having only long option should be enough.

- It would be more polite to do SET LOCAL instead of SET; see the sketch
  after this list.  (Eg. it makes it safer to use pg_dump through a pooler.)

- The statement_timeout is set back with "statement_timeout = default"
  Maybe it would be better to do "= 0" here?  Although such decision
  would go outside the scope of the patch, I see no sense having
  any other statement_timeout for actual dumping.
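
For what it's worth, a quick sketch (values are just an example) of why SET LOCAL
is the friendlier form: the setting reverts as soon as the transaction ends, so it
cannot leak into whatever a pooler hands the connection to next.

BEGIN;
SET LOCAL statement_timeout = 30000;   -- milliseconds; in effect only inside this transaction
-- take pg_dump's ACCESS SHARE locks here ...
COMMIT;
SHOW statement_timeout;                -- back to the previous session value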

-- 
marko

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Concurrent Restores

2008-07-03 Thread Volkan YAZICI
Hi,

[I've searched archives for the subject, but couldn't find a related
discussion. If there is any, sorry for duplication.]

We're migrating nearly a dozen MSSQL servers of ~100GiB per
cluster. For this purpose, we dump the MSSQL data to COPY files using a Java
program. We have database schemas for PostgreSQL that are equivalent to
their corresponding ones on the MSSQL side. The problem is, while we're
creating primary key, foreign key and index relations, I'm manually
partitioning the related SQL files into separate files so that they can be
run concurrently and make better use of the CPUs. One can argue that
concurrent processes will consume more disk I/O in this scheme and cause an
I/O bottleneck this time. But as far as I monitored the system statistics,
during concurrent restoration the operation is CPU bound in our situation,
not disk I/O bound. (Thanks, SAN!)

pg_dump is capable of dumping objects with respect to their dependency
relations. It'd be really awesome if pg_dump could also handle
parallelizing primary key, foreign key and index creation queries into
separate files. Would such a thing be possible? Comments?


Regards.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PATCH: CITEXT 2.0

2008-07-03 Thread Teodor Sigaev
> CREATE FUNCTION citext_hash(*citext*)
>
> DEFAULT FOR TYPE *citext* USING hash AS

Oops, citext of course.

--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] A Windows x64 port of PostgreSQL

2008-07-03 Thread Mark Mielke
A bit long - the summary is that "intptr_t" should probably be used, 
assuming I understand the problem this thread is talking about:


Ken Camann wrote:

1. An object in memory can have size "Size" (= size_t).  So it's big
(maybe 8 bytes).
2. An index into the buffer containing that object has index "Index"
(= int).  So it's smaller (maybe 4 bytes).  Now you can't index your big
object, unless sizeof(size_t) = sizeof(int).  But sizeof(size_t) must
be at least 8 bytes on just about any 64-bit system.  And sizeof(int)
is still 4 most of the time, right?


I believe one of the mistakes here is an assumption that "int" is always 
the correct type to use for an index. This is not correct. "int" will be 
a type that is probably the most efficient word size for the target 
machine, and since "int" is usually ~32 bits these days, it will have a 
range that is sufficient for most common operations, therefore, it is 
commonly used. But, the C and C++ specifications do not define that an 
index into an array is of type "int". Rather, they defined E1[E2] as 
*((E1) + (E2)), and then the + operator is defined such that if one 
operand E1 is a pointer and operand E2 is an integer type, the result 
will be a pointer to the E2th element of E1 with the same pointer type 
as E1. "integer type" is not "int". It is any integer type. If the 
useful range of the array is 256 values, a "char" is acceptable for use 
as a "char" is an integer type. The optimizer might promote the "char" 
to a 32-bit or 64-bit machine register before calculating the result of 
the addition, but this is irrelevant to the definition of the C language.


I think one could successfully argue that ptrdiff_t is the correct value 
to use for an array index that might use a range larger than "int" on a 
machine where sizeof(int) < sizeof(void*). ptrdiff_t represents the 
difference between two pointers. If P and Q are void* and I is 
ptrdiff_t, and Q - P = I, then &P[I] = Q. Though, I think it might be 
easier to use size_t. If I is of type size_t, and P = malloc(I), then 
P[0] ... P[I-1] are guaranteed to be addressable using a size_t.


There is also the usable range, even on a machine with sizeof(size_t) of 
64 bits. I don't think any existing machine can actually address 64-bits 
worth of continuous memory. 48-bits perhaps. Technically, sizeof(size_t) 
does not need to be sizeof(void*), and in fact, the C standard has this 
to say: "The types used for size_t and ptrdiff_t should not have an 
integer conversion rank greater than that of signed long int unless the 
implementation supports objects large enough to make this necessary." It 
doesn't define sizeof(size_t) in terms of sizeof(void*).


The C standard defines long int as:
"Their implementation-defined values shall be equal or greater in 
magnitude (absolute value) to those shown, with the same sign.

...
— minimum value for an object of type long int
LONG_MIN -2147483647 // −(2**31 − 1)
— maximum value for an object of type long int
LONG_MAX +2147483647 // 2**31 − 1"

Based upon this definition, it appears that Windows 64 is compatible 
with the standard. That GCC took a different route that is also 
compatible with the standard is inconvenient, but a reality that should 
be dealt with.


More comments from the C standard on this issue: "Any pointer type may 
be converted to an integer type. Except as previously specified, the 
result is implementation-defined. If the result cannot be represented in 
the integer type, the behavior is undefined. The result need not be in 
the range of values of any integer type."


The "portable" answer to this problem, is supposed to be intptr_t:
"7.18.1.4 Integer types capable of holding object pointers
The following type designates a signed integer type with the property 
that any valid
pointer to void can be converted to this type, then converted back to 
pointer to void,

and the result will compare equal to the original pointer:
intptr_t
The following type designates an unsigned integer type with the property 
that any valid
pointer to void can be converted to this type, then converted back to 
pointer to void,

and the result will compare equal to the original pointer:
uintptr_t
These types are optional."

If Windows 64 has this type (not sure - I don't use Windows 64), then I 
believe intptr_t is the portable way to solve this problem. Note, 
though, that intptr_t does not guarantee that it can hold every integer 
value. For example, on a 32-bit platform, sizeof(intptr_t) might be 32 
bits, and sizeof(long long) might be 64 bits. There is also this 
portable type:

" 7.18.1.5 Greatest-width integer types
The following type designates a signed integer type capable of 
representing any value of

any signed integer type:
intmax_t
The following type designates an unsigned integer type capable of 
representing any value

of any unsigned integer type:
uintmax_t
These types are required."

I think this means that if PostgreSQL were to be designed to support all 
IS

Re: [HACKERS] PATCH: CITEXT 2.0

2008-07-03 Thread Teodor Sigaev
> Douglass book, though I probably missed it. Anyone got a link for me to
> read to make it happen?

The hash opclass is five times simpler than the btree one :)

CREATE FUNCTION citext_hash(mchar)
RETURNS int4
AS 'MODULE_PATHNAME'
LANGUAGE C IMMUTABLE RETURNS NULL ON NULL INPUT;

CREATE OPERATOR CLASS citext_ops
DEFAULT FOR TYPE mchar USING hash AS
  OPERATOR    1   =  (citext, citext),
  FUNCTION    1   citext_hash(citext);
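
For completeness, once the citext type and such an opclass are installed, a hash
index becomes possible (hypothetical table and column names here):

CREATE TABLE users (email citext);
CREATE INDEX users_email_hash_idx ON users USING hash (email);
-- usable for case-insensitive equality lookups, e.g.
SELECT * FROM users WHERE email = '[EMAIL PROTECTED]';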


--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers