Re: [HACKERS] Why does LOG have higher priority than ERROR and WARNING?

2009-09-14 Thread Peter Eisentraut
On Mon, 2009-09-14 at 09:16 +0900, Itagaki Takahiro wrote:
 Another matter is that we use the LOG level both for important
 activity logging and for mere performance or query logging. Maybe
 we should have used another log level (PERFORMANCE?) for the
 latter case, with a priority lower than WARNING and LOG.

Ideally, LOG messages are messages that you explicitly requested using
various log_* parameters.  If you need more control, we could
conceivably add more of those.




Re: [HACKERS] Elementary dependency look-up

2009-09-14 Thread Greg Smith

On Wed, 9 Sep 2009, decibel wrote:

What might be more useful is a view that takes the guesswork out of using 
pg_depend. Namely, convert (ref)classid into a catalog table name (or better 
yet, what type of object it is), (ref)objid into an actual object name, and 
(ref)objsubid into a real name.


Here's V1 of a depend unraveler I needed recently and that's saved me a 
bunch of time:


SELECT
  c1.oid as relid,
  n1.nspname || '.' || c1.relname as relation,
  c1.relkind,
  CASE
WHEN c1.relkind='r' THEN 'table'
WHEN c1.relkind='i' THEN 'index'
WHEN c1.relkind='S' THEN 'sequence'
WHEN c1.relkind='v' THEN 'view'
WHEN c1.relkind='c' THEN 'composite'
WHEN c1.relkind='t' THEN 'TOAST'
ELSE '?'
  END as kind,
  c2.oid as refrelid,
  n2.nspname || '.' || c2.relname as dependency,
  c2.relkind as refrelkind,
  CASE
WHEN c2.relkind='r' THEN 'table'
WHEN c2.relkind='i' THEN 'index'
WHEN c2.relkind='S' THEN 'sequence'
WHEN c2.relkind='v' THEN 'view'
WHEN c2.relkind='c' THEN 'composite'
WHEN c2.relkind='t' THEN 'TOAST'
ELSE '?'
  END as refkind
FROM
  pg_depend d,
  pg_class c1,
  pg_namespace n1,
  pg_class c2,
  pg_namespace n2
WHERE
  d.objid = c1.oid AND
  c1.relnamespace = n1.oid AND
  n1.nspname NOT IN('information_schema', 'pg_catalog') AND
  n1.nspname !~ '^pg_toast' AND

  d.refobjid = c2.oid AND
  c2.relnamespace = n2.oid AND
  n2.nspname NOT IN('information_schema', 'pg_catalog') AND
  n2.nspname !~ '^pg_toast' AND

  c1.oid != c2.oid

GROUP BY n1.nspname,c1.relname,c1.oid,c1.relkind,
 n2.nspname,c2.relname,c2.oid,c2.relkind
ORDER BY n1.nspname,c1.relname;

I could throw this on the Wiki as a code snippet if anyone else wanted to 
tinker with it.
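
As a side note (not part of the snippet above, just a hedged sketch), for a quick
one-off check on a single table the regclass casts can save the joins entirely;
the table name here is made up:

SELECT d.objid::regclass AS dependent, d.deptype
  FROM pg_depend d
 WHERE d.refobjid = 'public.my_table'::regclass
   AND d.classid = 'pg_class'::regclass;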


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD



Re: [HACKERS] Why does LOG have higher priority than ERROR and WARNING?

2009-09-14 Thread Magnus Hagander
On Mon, Sep 14, 2009 at 02:16, Itagaki Takahiro
itagaki.takah...@oss.ntt.co.jp wrote:

 Tom Lane t...@sss.pgh.pa.us wrote:

 Itagaki Takahiro itagaki.takah...@oss.ntt.co.jp writes:
  Can I reorder them to ERROR > WARNING > LOG?

 No.  That was an intentional decision.  LOG is for stuff that we
 really want to get logged, in most cases.  ERROR is very often not
 that interesting, and WARNING even more so.

 I think that decision reflects a hacker's viewpoint. Many times I see
 that DBAs are interested only in WARNING, ERROR and FATAL, but often
 ignore LOG messages. We should use the WARNING level for really important
 messages -- and the priority of WARNING should also be higher than LOG.

 Another matter is that we use the LOG level both for important
 activity logging and for mere performance or query logging. Maybe
 we should have used another log level (PERFORMANCE?) for the
 latter case, with a priority lower than WARNING and LOG.

I think the requirement you're talking about is the same one I was
describing when I said I wanted a logging "source" concept. Which is
basically that an ERROR from a user query or stored procedure is often not
interesting at all to the DBA - but it is to the developer. But an
ERROR from the background writer or a low-level routine is *very*
interesting to the DBA. Basically, the log levels mean completely
different things depending on where they're coming from.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] syslog_line_prefix

2009-09-14 Thread Magnus Hagander
On Mon, Sep 14, 2009 at 02:43, Itagaki Takahiro
itagaki.takah...@oss.ntt.co.jp wrote:
 Here is a patch to add a GUC parameter syslog_line_prefix.
 It adds prefixes to syslog and eventlog messages. We still have
 log_line_prefix, which will be used only for stderr logs.

 The documentation has a tip saying that log_line_prefix is not needed for
 syslog, but we had better have independent settings
 if we set log_destination to 'stderr, syslog'.

 http://developer.postgresql.org/pgdocs/postgres/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT
 | Tip:  Syslog produces its own time stamp and process ID
 | information, so you probably do not want to use those escapes
 | if you are logging to syslog.

I'm not sure I like this as a GUC. We're going to end up with a lot of
different GUCs, and every time we add a new log destination (admittedly
not often, of course), that number increases even further. And GUCs really
don't provide the level of flexibility you'd like to have. I've
been thinking (long-term) in the direction of a separate config file,
since that could contain an arbitrary number of lines, with rules on
them (somewhat like pg_hba.conf, maybe). You'd do the matching on
things like error level and destination, and then specify a bunch of
flags. Or potentially do it on error level and contents, filtering
which destinations get it.

Forcing it into the GUC framework seems like a limiting long-term strategy.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] opportunistic tuple freezing

2009-09-14 Thread Jeff Davis
On Mon, 2009-08-17 at 10:22 -0400, Tom Lane wrote:
 As always with patches that are meant to improve performance,
 some experimental evidence would be a good thing.

I haven't had time to performance test this patch yet, and it looks like
it will take a significant amount of effort to do so. I'm focusing on my
other work, so I don't know if this one is going to be in shape for the
September commitfest.

If someone is interested in doing some performance testing for this
patch, let me know. I still think it has potential.

Regards,
Jeff Davis




Re: [HACKERS] WIP: generalized index constraints

2009-09-14 Thread Jeff Davis
On Sun, 2009-09-13 at 19:08 +1000, Brendan Jurd wrote:
 The September CF starts in a couple of days, so this patch is in
 danger of missing the boat.

Thanks for keeping track. I accomplished a significant amount today, so
there's still hope for 9/15.

I will most likely just focus on the core functionality so that I have
something complete and reviewable.

 The unresolved points seem to be:
 
  * What to do about INCLUDING INDEXES EXCLUDING CONSTRAINTS --
 Postgres gets this wrong for unique indexes currently.  Should we
 persist with the existing behaviour or fix it as part of this patch?
 My personal feeling was +1 for fixing it in this patch.

I don't think that it should make a difference whether EXCLUDING
CONSTRAINTS is specified or omitted. There is no [INCLUDING|EXCLUDING]
CONSTRAINTS option in the standard, but for the other LIKE options,
EXCLUDING is implied when INCLUDING is not specified.

So, I think we have to make a decision:
 1. If INCLUDING CONSTRAINTS is specified, but not INCLUDING INDEXES,
do we: copy the indexes silently; or emit a nice message; or throw 
an ERROR?
 2. What if INCLUDING INDEXES is specified, but not INCLUDING 
CONSTRAINTS?

  * Should we emit some sort of message when the user specifies
 INCLUDING INDEXES or INCLUDING CONSTRAINTS but not both?  I didn't
 have strong feelings about this one, but there were some differing
 thoughts about what log level to use.  I thought NOTICE, but Alvaro
 reckons we've got too many of those already.  Tom mentioned the
 suggested (but unimplemented) NOVICE level, which seems like a good
 move but doesn't resolve the problem of what to do in this patch.  One
 option would be to add a message at the NOTICE level with a TODO to
 downgrade it to NOVICE if/when that becomes available.

I don't think either of these things is a huge amount of work; they are
mostly just decisions that need to be made. I'll start off implementing
whatever is easiest/cleanest, and we'll continue the discussion.

Regards,
Jeff Davis






Re: [HACKERS] Issues for named/mixed function notation patch

2009-09-14 Thread Pavel Stehule
Hello Robert,

2009/9/14 Robert Haas robertmh...@gmail.com:
 On Mon, Aug 24, 2009 at 3:19 PM, Pavel Stehule pavel.steh...@gmail.com 
 wrote:
 I reworked the patch to respect the mentioned issues. This patch still
 implements mixed notation - I think this notation is really
 important. All the other points I respect. The behavior is unchanged; I fixed
 some bugs and enhanced the regression tests.

 This does not compile.


Please, can you try this version? I hope it is in commitfest form
too. I didn't make any changes, but it could have been broken. I compiled
the attached patch today without problems, on Fedora 11. If you still
have problems, please send me the log.

Thank You
Pavel

 ...Robert



mnnotation.diff.gz
Description: GNU Zip compressed data



[HACKERS] Patch LWlocks instrumentation

2009-09-14 Thread Pierre Frédéric Caillaud


A little bit of a reply to Jeff's email about WALInsertLock.

This patch instruments LWLocks; it is controlled by the following
#defines in lwlock.c:


LWLOCK_STATS
LWLOCK_TIMING_STATS

It is an upgrade of current lwlocks stats.

When active, at backend exit, it will display stats as shown below (here,  
we have a parallel COPY with 4 concurrent processes into the same table,  
on a 4 core machine).
If the (rather wide) sample output is mangled in your mail client, I've  
attached it as a separate text file.


 Lock stats for PID 22403
   PID  Lock  ShAcq  ShWait  ShWaitT  ShHeldT  ExAcq   ExWait   ExWaitT            ExHeldT           Name
 22403     7      0       0     0.00     0.00    252   730338   24.02 ( 53.49 %)   7.25 ( 16.14 %)   WALInsert
 22403     8      0       0     0.00     0.00  19501       73    3.48 (  7.75 %)   0.40 (  0.88 %)   WALWrite

 Lock stats for PID 22404
   PID  Lock  ShAcq  ShWait  ShWaitT  ShHeldT  ExAcq   ExWait   ExWaitT            ExHeldT           Name
 22404     7      0       0     0.00     0.00    252   724683   23.34 ( 51.59 %)   8.24 ( 18.20 %)   WALInsert
 22404     8      0       0     0.00     0.00  19418       90    4.37 (  9.67 %)   0.44 (  0.97 %)   WALWrite

 Lock stats for PID 22402
   PID  Lock  ShAcq  ShWait  ShWaitT  ShHeldT  ExAcq   ExWait   ExWaitT            ExHeldT           Name
 22402     7      0       0     0.00     0.00    252   735958   24.06 ( 52.73 %)   8.05 ( 17.63 %)   WALInsert
 22402     8      0       0     0.00     0.00  19154       97    4.21 (  9.22 %)   0.39 (  0.85 %)   WALWrite

 Lock stats for PID 22400
   PID  Lock  ShAcq  ShWait  ShWaitT  ShHeldT  ExAcq   ExWait   ExWaitT            ExHeldT           Name
 22400     7      0       0     0.00     0.00    252   736265   25.50 ( 55.59 %)   6.74 ( 14.70 %)   WALInsert
 22400     8      0       0     0.00     0.00  19391       66    2.95 (  6.42 %)   0.39 (  0.85 %)   WALWrite



Here we see that PID 22400 spent :

25.50 s waiting to get exclusive on WALInsert
6.74  s while holding exclusive on WALInsert

The percentages represent the fraction of time relative to the backend
process' lifetime.
Here, I've exited the processes right after committing the transactions,
but if you use psql and want accurate percentages, you'll need to exit quickly
after the query you benchmark.


Here, for example, backends spend more than 50% of their time waiting on  
WALInsert...


lwlock_instrumentation.patch
Description: Binary data


lwlock_instrumentation.sample
Description: Binary data



Re: [HACKERS] Encoding issues in console and eventlog on win32

2009-09-14 Thread Heikki Linnakangas
Itagaki Takahiro wrote:
 We can choose a database encoding different from the platform-dependent
 one, but postgres writes server logs in the database encoding.
 As a result, server logs are filled with broken characters.
 
 The problem can occur on all platforms; however, there is a solution
 for win32. Since Windows supports writing logs as wide characters, we can
 convert log texts to UTF-8 and then to UTF-16 and pass them to WriteConsoleW()
 and ReportEventW().
 
 Especially in Japan, encoding troubles on Windows are unavoidable
 because postgres doesn't support Shift-JIS as a database encoding,
 which is the native encoding for the Japanese edition of Windows.
 
 If we also want to support the same functionality on non-win32 platforms,
 we might need a non-throwing version of pg_do_encoding_conversion():
 
 log_message_to_write = pg_do_encoding_conversion_nothrow(
 log_message_in_database_encoding,
 GetDatabaseEncoding() /* as src_encoding */,
 GetPlatformEncoding() /* as dst_encoding */)
 
 and pass the result to stderr and syslog. But that requires major rewrites
 of the conversion functions, so I'd like to submit a solution only for win32
 for now. Also, the issue is not so serious on non-win32 platforms because
 we can choose UTF-8 or EUC_* on those platforms.

Something like that seems reasonable for the Windows event log; that is
clearly supposed to be written using a specific encoding. With the log
files, we're more free to do what we want, and IMHO we shouldn't put a
Windows-specific hack there because as you say we have the same problem
on all platforms.

There's no guarantee that conversion to UTF-8 won't fail, so this isn't
totally risk-free on Windows either. Theoretically, MultiByteToWideChar
could fail too (the patch neglects to check for that), although I
suppose it can't really happen for UTF-8 to UTF-16 conversion.

Can't we use MultiByteToWideChar() to convert directly to the required
encoding, avoiding the double conversion?

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



[HACKERS] Bulk Inserts

2009-09-14 Thread Pierre Frédéric Caillaud


I've done a little experiment with bulk inserts.

= heap_bulk_insert()

Behaves like heap_insert except it takes an array of tuples (HeapTuple  
*tups, int ntups).


- Grabs a page (same as heap_insert)

- While holding exclusive lock, inserts as many tuples as it can on the  
page.

- Either the page gets full
- Or we run out of tuples.

- Generate xlog: choice between
  - Full xlog mode:
    - if we inserted more than 10 tuples (totally bogus heuristic), log the entire page
    - else, log individual tuples as heap_insert does
  - Light xlog mode:
    - if the page was empty, only xlog a new empty-page record, not the page contents
    - else, log fully
    - heap_sync() at the end

- Release the page
- If we still have tuples to insert, repeat.

Am I right in assuming that :

1)
- If the page was empty,
- and log archiving isn't used,
- and the table is heap_sync()'d at the end,
=> only a new empty page record needs to be created, and then the page can
be completely filled?


2)
- If the page isn't empty
- or log archiving is used,
=> logging either the inserted tuples or the entire page is enough to
guarantee persistence?


(I used kill -9 to test it, recovery seems to work).

Test on a concurrent COPY, 4 threads, on a table with 8 INT columns.

* 8.5 HEAD : Total Time 44 s
* Bulk inserts, Full XLog : Total Time 24 s
* Bulk inserts, Light XLog : Total Time 10 s

Quite a bit faster... I presume with more CPUs it would scale.
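
For reference, a sketch of the kind of setup used for this test (column names and
file paths are assumptions, not taken from the original run):

CREATE TABLE bulk_test (c1 int, c2 int, c3 int, c4 int,
                        c5 int, c6 int, c7 int, c8 int);
-- run one of these per concurrent session, each with its own data file:
COPY bulk_test FROM '/tmp/bulk_test_1.dat';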

I'm not posting the patch because it's quite ugly (especially the part to  
store tuples in copy.c and bulk-insert them, I should probably have used a  
tuplestore...)
I think the tuples need to be stored and then bulk-inserted because the  
exclusive lock on the buffer can't be held for a long time.



Lock stats (from the patch I just posted) :

* 8.5 HEAD : Total Time 44 s

 Lock stats for PID 28043
   PID  Lock  ShAcq  ShWait  ShWaitT  ShHeldT  ExAcq   ExWait   ExWaitT            ExHeldT           Name
 28043     7      0       0     0.00     0.00    252   804378   23.59 ( 53.11 %)   7.38 ( 16.61 %)   WALInsert
 28043     8      0       0     0.00     0.00  25775       32    2.91 (  6.54 %)   0.90 (  2.02 %)   WALWrite

 Lock stats for PID 28044
   PID  Lock  ShAcq  ShWait  ShWaitT  ShHeldT  ExAcq   ExWait   ExWaitT            ExHeldT           Name
 28044     7      0       0     0.00     0.00    252   802515   22.26 ( 50.11 %)   8.70 ( 19.59 %)   WALInsert
 28044     8      0       0     0.00     0.00  25620       42    4.00 (  9.01 %)   1.12 (  2.52 %)   WALWrite

 Lock stats for PID 28045
   PID  Lock  ShAcq  ShWait  ShWaitT  ShHeldT  ExAcq   ExWait   ExWaitT            ExHeldT           Name
 28045     7      0       0     0.00     0.00    252   799145   22.47 ( 50.32 %)   8.72 ( 19.52 %)   WALInsert
 28045     8      0       0     0.00     0.00  25725       38    4.08 (  9.14 %)   1.05 (  2.35 %)   WALWrite

 Lock stats for PID 28042
   PID  Lock  ShAcq  ShWait  ShWaitT  ShHeldT  ExAcq   ExWait   ExWaitT            ExHeldT           Name
 28042     7      0       0     0.00     0.00    252   809477   23.49 ( 52.44 %)   7.89 ( 17.62 %)   WALInsert
 28042     8      0       0     0.00     0.00  25601       37    3.27 (  7.31 %)   1.05 (  2.34 %)   WALWrite



* Bulk inserts, Full XLog : Total Time 24 s

 Lock stats for PID 32486
   PID  Lock  ShAcq  ShWait  ShWaitT  ShHeldT  ExAcq   ExWait   ExWaitT            ExHeldT           Name
 32486     7      0       0     0.00     0.00  23765     1128    9.22 ( 38.98 %)   4.05 ( 17.14 %)   WALInsert
 32486     8      0       0     0.00     0.00  21120       19    2.64 ( 11.17 %)   1.32 (  5.59 %)   WALWrite

 Lock stats for PID 32484
   PID  Lock  ShAcq  ShWait  ShWaitT  ShHeldT  ExAcq   ExWait   ExWaitT            ExHeldT           Name
 32484     7      0       0     0.00     0.00  23865     1083    9.87 ( 41.68 %)   2.87 ( 12.11 %)   WALInsert
 32484     8      0       0     0.00     0.00  21105       11    1.68 (  7.11 %)   1.09 (  4.62 %)   WALWrite
 32484  8508      0       0     0.00     0.00      1        1    0.19 (  0.81 %)   0.00 (  0.00 %)
 32484 18846      0       0     0.00     0.00      1        1    0.25 (  1.05 %)   0.00 (  0.00 %)


 Lock stats for PID 32485
PID

Re: [HACKERS] [PATCH] 8.5 plpgsql change for named notation: treat word following AS keyword as label v3

2009-09-14 Thread Robert Haas
On Mon, Sep 14, 2009 at 11:02 AM, Steve Prentice prent...@cisco.com wrote:
 On Sep 13, 2009, at 10:22 PM, Pavel Stehule wrote:

 2009/9/14 Tom Lane t...@sss.pgh.pa.us:

 Robert Haas robertmh...@gmail.com writes:

 So, I guess I'm sadly left feeling that we should probably reject this
 patch.  Anyone want to argue otherwise?

 +1.  I'm really hoping to get something done about the plpgsql parsing
 situation before 8.5 is out, so this should be a dead end anyway.


 I have a WIP patch for integrating the main SQL parser into plpgsql. I'll
 send it this weekend.

 I certainly don't mind the patch getting rejected and agree that refactoring
 the plpgsql parser is probably the best approach to this issue. However, I
 think it would be more than a little strange to ship the named notation
 feature without a solution for this problem. For reference, the problem is
 that the function below causes a compile error because of the way plpgsql
 blindly does variable replacement:

 create function fun1(pDisplayName text) returns void as $$
 begin
    perform fun2(pDisplayName as pDisplayName);
 -- Above line compiles as:
 --  SELECT  fun2( $1  as  $1 )
 end
 $$ language plpgsql;

Yeah but we already have this problem.  Right now, it typically
happens because of some statement of the form SELECT ... AS ...; this
just adds one more case where it can happen, and I doubt it's any more
common than the case we already struggle with.

But at any rate Tom is planning a fix for 8.5, so I don't think
there's any need to get excited just yet.  If Tom doesn't get his
stuff finished by January, we can revisit the issue then.

...Robert



Re: [HACKERS] Streaming Replication patch for CommitFest 2009-09

2009-09-14 Thread Heikki Linnakangas
Greg Smith wrote:
 Putting on my DBA hat for a minute, the first question I see people
 asking is "how do I measure how far behind the slaves are?".  Presumably
 you can get that out of pg_controldata; my first question is whether
 that's complete enough information?  If not, what else should be monitored?

 I don't think running that program is going to fly for a production-quality
 integrated replication setup, though.  The UI admins are going to want
 is one that allows querying this easily via a standard database query.  Most
 monitoring systems can issue psql queries but not necessarily run a
 remote binary.  I think that parts of pg_controldata need to get
 exposed via some number of built-in UDFs instead, and whatever new
 internal state makes sense too.  I could help out writing those, if
 someone more familiar with the replication internals can help me nail
 down a spec on what to watch.

Yep, assuming for a moment that hot standby goes into 8.5, status
functions that return such information are the natural interface. It
should be trivial to write them as soon as hot standby and streaming
replication are in place.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] [PATCH] 8.5 plpgsql change for named notation: treat word following AS keyword as label v3

2009-09-14 Thread Robert Haas
On Mon, Sep 14, 2009 at 11:56 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
 2009/9/14 Steve Prentice prent...@cisco.com:
 On Sep 13, 2009, at 10:22 PM, Pavel Stehule wrote:

 2009/9/14 Tom Lane t...@sss.pgh.pa.us:

 Robert Haas robertmh...@gmail.com writes:

 So, I guess I'm sadly left feeling that we should probably reject this
 patch.  Anyone want to argue otherwise?

 +1.  I'm really hoping to get something done about the plpgsql parsing
 situation before 8.5 is out, so this should be a dead end anyway.


 I have a WIP patch for integrating the main SQL parser into plpgsql. I'll
 send it this weekend.

 I certainly don't mind the patch getting rejected and agree that refactoring
 the plpgsql parser is probably the best approach to this issue. However, I
 think it would be more than a little strange to ship the named notation
 feature without a solution for this problem. For reference, the problem is
 that the function below causes a compile error because of the way plpgsql
 blindly does variable replacement:

 create function fun1(pDisplayName text) returns void as $$
 begin
    perform fun2(pDisplayName as pDisplayName);
 -- Above line compiles as:
 --  SELECT  fun2( $1  as  $1 )
 end
 $$ language plpgsql;


 I am sure this will be solved in the next commitfest. This
 problem is related only to plpgsql. The other PL languages are fine,
 because they don't try to emulate the SQL parser.

And the emphasis here is on try.

...Robert



Re: [HACKERS] [PATCH] 8.5 plpgsql change for named notation: treat word following AS keyword as label v3

2009-09-14 Thread Pavel Stehule
2009/9/14 Steve Prentice prent...@cisco.com:
 On Sep 13, 2009, at 10:22 PM, Pavel Stehule wrote:

 2009/9/14 Tom Lane t...@sss.pgh.pa.us:

 Robert Haas robertmh...@gmail.com writes:

 So, I guess I'm sadly left feeling that we should probably reject this
 patch.  Anyone want to argue otherwise?

 +1.  I'm really hoping to get something done about the plpgsql parsing
 situation before 8.5 is out, so this should be a dead end anyway.


 I have a WIP patch for integrating the main SQL parser into plpgsql. I'll
 send it this weekend.

 I certainly don't mind the patch getting rejected and agree that refactoring
 the plpgsql parser is probably the best approach to this issue. However, I
 think it would be more than a little strange to ship the named notation
 feature without a solution for this problem. For reference, the problem is
 that the function below causes a compile error because of the way plpgsql
 blindly does variable replacement:

 create function fun1(pDisplayName text) returns void as $$
 begin
    perform fun2(pDisplayName as pDisplayName);
 -- Above line compiles as:
 --  SELECT  fun2( $1  as  $1 )
 end
 $$ language plpgsql;


I am sure this will be solved in the next commitfest. This
problem is related only to plpgsql. The other PL languages are fine,
because they don't try to emulate the SQL parser.

Pavel

 -Steve




Re: [HACKERS] Streaming Replication patch for CommitFest 2009-09

2009-09-14 Thread Andrew Dunstan



Greg Smith wrote:
This is looking really neat now, making async replication really solid 
first before even trying to move on to sync is the right way to go 
here IMHO.


I agree with both of those sentiments.

One question I have is what level of traffic is involved between the
master and the slave. I know a number of people have found the traffic
involved in shipping log files to be a pain, and thus we get things
like pglesslog.


cheers

andrew



Re: [HACKERS] [PATCH] 8.5 plpgsql change for named notation: treat word following AS keyword as label v3

2009-09-14 Thread Steve Prentice

On Sep 13, 2009, at 10:22 PM, Pavel Stehule wrote:


2009/9/14 Tom Lane t...@sss.pgh.pa.us:

Robert Haas robertmh...@gmail.com writes:
So, I guess I'm sadly left feeling that we should probably reject  
this

patch.  Anyone want to argue otherwise?


+1.  I'm really hoping to get something done about the plpgsql  
parsing

situation before 8.5 is out, so this should be a dead end anyway.



I have a WIP patch for integrating the main SQL parser into plpgsql. I'll
send it this weekend.


I certainly don't mind the patch getting rejected and agree that  
refactoring the plpgsql parser is probably the best approach to this  
issue. However, I think it would be more than a little strange to ship  
the named notation feature without a solution for this problem. For  
reference, the problem is that the function below causes a compile  
error because of the way plpgsql blindly does variable replacement:


create function fun1(pDisplayName text) returns void as $$
begin
perform fun2(pDisplayName as pDisplayName);
-- Above line compiles as:
--  SELECT  fun2( $1  as  $1 )
end
$$ language plpgsql;

-Steve



Re: [HACKERS] Rough draft: easier translation of psql help

2009-09-14 Thread Josh Berkus
Peter,

 This is what the attached patch produces.
 
 Comments?

This is how other projects handle translation of these kinds of strings.

-- 
Josh Berkus
PostgreSQL Experts Inc.
www.pgexperts.com



Re: [HACKERS] Streaming Replication patch for CommitFest 2009-09

2009-09-14 Thread Kevin Grittner
Greg Smith gsm...@gregsmith.com wrote:
 
 Putting on my DBA hat for a minute, the first question I see people
 asking is "how do I measure how far behind the slaves are?".
 Presumably you can get that out of pg_controldata; my first question
 is whether that's complete enough information?  If not, what else
 should be monitored?
 
 I don't think running that program is going to fly for a
 production-quality integrated replication setup, though.  The UI admins
 are going to want is one that allows querying this easily via a
 standard database query.  Most monitoring systems can issue psql
 queries but not necessarily run a remote binary.  I think that parts of
 pg_controldata need to get exposed via some number of built-in UDFs
 instead, and whatever new internal state makes sense too.  I could
 help out writing those, if someone more familiar with the
 replication internals can help me nail down a spec on what to watch.
 
IMO, it would be best if the status could be sent via NOTIFY.  In my
experience, this results in monitoring which both has less overhead
and is more current.  We tend to be almost as interested in metrics on
throughput as lag.  Backlogged volume can be interesting, too, if it's
available.
 
-Kevin



Re: [HACKERS] Patch LWlocks instrumentation

2009-09-14 Thread Jeff Janes
2009/9/14 Pierre Frédéric Caillaud li...@peufeu.com


 A little bit of a reply to Jeff's email about WALInsertLock.

 This patch instruments LWLocks, it is controlled with the following
 #define's in lwlock.c :

 LWLOCK_STATS
 LWLOCK_TIMING_STATS

 It is an upgrade of current lwlocks stats.


Hi Pierre,

Have you looked at the total execution time with and without the
LWLOCK_TIMING_STATS?

I've implemented something similar to this myself (only without attempting
to make it portable or otherwise worthy of submitting as a general-interest
patch). What I found is that attempting to time every hold
substantially increased the overall run time (which I worry distorts
the reported times; cue bad Heisenberg analogies).  The problem is that
gettimeofday is slow, and on some multi-processor systems it is a global
point of serialization, making it even slower.  I decided to time only the
time spent blocked waiting for a lock, and not the time spent holding it.
This way you only call gettimeofday twice if you actually need to block, and
not at all if you immediately get the lock.  This had a much smaller effect
on runtime, and the info produced was sufficient for my purposes.

Not that this changes your conclusion.  With or without that distortion I
completely believe that WALInsertLock is the bottleneck of parallel bulk
copy into unindexed tables.  I just can't find anything else for which it is
the primary bottleneck.  I think the only real solution for bulk copy is to
call XLogInsert less often.  For example, it could build blocks in local
memory, then when done copy them into shared buffers and toss each entire
block into WAL in one call.  Easier said than implemented, of course.

Cheers,

Jeff


Re: [HACKERS] Re: [COMMITTERS] Can not create more than 32766 databases in ufs file system.

2009-09-14 Thread Josh Berkus

 So the question I would ask goes more like do you really need 32K
 databases in one installation?  Have you considered using schemas
 instead?  Databases are, by design, pretty heavyweight objects.
 
 I agree, but at the same time, we might: a) update our documentation to
 indicate it depends on the filesystem, and b) consider how we might
 work around this limit (and if we feel the effort to be worth it).

I don't feel it's worth the effort.

I can think of lots of hosted application configurations where one might
need 33K tables.  Note that PostgreSQL *already* handles this better
than Oracle or MySQL do -- I know at least one case where our ability to
handle large numbers of tables was a reason for migration from Oracle to
PostgreSQL.

However, I can think of no legitimate reason to need 33K active
databases in a single instance.  I think someone has confused databases
with schemas ... or even with tables.  Filemaker developer, maybe?  Or
maybe it's 10 active databases and 32.99K archived ones ... in which case
they should be dumped to compressed backups and dropped.

-- 
Josh Berkus
PostgreSQL Experts Inc.
www.pgexperts.com



Re: [HACKERS] Streaming Replication patch for CommitFest 2009-09

2009-09-14 Thread Greg Smith
This is looking really neat now; making async replication really solid
first, before even trying to move on to sync, is the right way to go here
IMHO.  I just cleaned up the docs on the Wiki page; when this patch is
closer to being committed I officially volunteer to do the same on the
internal SGML docs; someone should nudge me when the patch is at that
point if I don't take care of it before then.


Putting on my DBA hat for a minute, the first question I see people asking
is "how do I measure how far behind the slaves are?".  Presumably you can
get that out of pg_controldata; my first question is whether that's
complete enough information?  If not, what else should be monitored?


I don't think running that program is going to fly for a production-quality
integrated replication setup, though.  The UI admins are going to want
is one that allows querying this easily via a standard database query.  Most
monitoring systems can issue psql queries but not necessarily run a remote
binary.  I think that parts of pg_controldata need to get exposed via
some number of built-in UDFs instead, and whatever new internal state
makes sense too.  I could help out writing those, if someone more familiar
with the replication internals can help me nail down a spec on what to
watch.
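
To make the idea concrete, this is the sort of thing a monitoring system would
like to be able to run; the function names here are invented, nothing like them
exists yet:

-- hypothetical built-in UDFs exposing pg_controldata-style state via SQL
SELECT pg_last_received_wal_location() AS received,
       pg_last_replayed_wal_location() AS replayed;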


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD



Re: [HACKERS] Disable and enable of table and column constraints

2009-09-14 Thread Kevin Grittner
Martijn van Oosterhout klep...@svana.org wrote:
 FWIW, I find the ability in Slony to configure triggers so they work
 or not depending on the replication role to be extremely useful.
 Absolutely a major positive feature.
 
Yeah, as a general rule it doesn't make sense to try to enforce
constraints on a replication *target*.  Check and report, perhaps, but
you don't normally want to error out on anything which you know was
actually applied to the source database.  It's even worse for some
classes of triggers which generate derived data; you don't want the
replication to generate one value and then a trigger on the
replication target to try to do the same.  A count, for example, could
easily wind up with an off by one error much of the time.
 
-Kevin



Re: [HACKERS] Timestamp to time_t

2009-09-14 Thread Kevin Grittner
Scott Mohekey scott.mohe...@telogis.com wrote:
 
 What is the relationship between Timestamp and TimestampTz?
 
TIMESTAMP WITHOUT TIME ZONE does not identify a moment in time without
first associating it with a time zone.  When Daylight Saving Time
ends, the same TIMESTAMP WITHOUT TIME ZONE values will recur, without
any way to distinguish them from those from the previous hour.
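 
A small illustration (the time zone and date are picked arbitrarily; in the US,
DST ended on 2009-11-01):
 
SET timezone = 'America/New_York';
-- the same local value occurs twice that night and cannot be told apart:
SELECT '2009-11-01 01:30'::timestamp;
-- with time zone, the two instants are distinct, one hour apart:
SELECT '2009-11-01 01:30 EDT'::timestamptz,
       '2009-11-01 01:30 EST'::timestamptz;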
 
The only use case I have been able to think of, personally, for
TIMESTAMP WITHOUT TIME ZONE is a release date for, say, a Harry Potter
book or a World of Warcraft expansion set, where, as the given moment
arrives in each time zone, stores in that time zone can begin to sell
the given work.
 
I suspect there's probably one or two other valid uses, but most uses
are just mistakes, waiting to be exposed.  For almost every reasonable
use, the right data type is TIMESTAMP WITH TIME ZONE.
 
-Kevin



Re: [HACKERS] RfD: more powerful any types

2009-09-14 Thread decibel

On Sep 14, 2009, at 12:13 AM, Pavel Stehule wrote:

2009/9/13 decibel deci...@decibel.org:

On Sep 12, 2009, at 5:54 PM, Andrew Dunstan wrote:


decibel wrote:


Speaking of concatenation...

Something I find sorely missing in plpgsql is the ability to put
variables inside of a string, ie:

DECLARE
v_table text := ...
v_sql text;
BEGIN
v_sql := "SELECT * FROM $v_table";

Of course, I'm assuming that if it was easy to do that it would be done
already... but I thought I'd just throw it out there.



Then use a language that supports variable interpolation in strings, like
plperl, plpythonu, plruby ... instead of plpgsql.



Which makes executing SQL much, much harder.

At least if we get sprintf, dealing with strings might become a bit
easier...


This feature is nice - but very dangerous - it is the easiest way to make
an application vulnerable to SQL injection!



How is it any worse than what people can already do? Anyone who isn't  
aware of the dangers of SQL injection has already screwed themselves.  
You're basically arguing that they would put a variable inside of  
quotes, but they would never use ||.

--
Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828





Re: [HACKERS] Streaming Replication patch for CommitFest 2009-09

2009-09-14 Thread Heikki Linnakangas
Kevin Grittner wrote:
 Greg Smith gsm...@gregsmith.com wrote:
 I don't think running that program is going to fly for a production-
 quality integrated replication setup, though.  The UI admins are
 going to want is one that allows querying this easily via a standard
 database query.  Most monitoring systems can issue psql queries but
 not necessarily run a remote binary.  I think that parts of
 pg_controldata need to get exposed via some number of built-in UDFs
 instead, and whatever new internal state makes sense too.  I could
 help out writing those, if someone more familiar with the
 replication internals can help me nail down a spec on what to watch.
  
 IMO, it would be best if the status could be sent via NOTIFY.

To where?

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] clang's static checker report.

2009-09-14 Thread Grzegorz Jaskiewicz


On 14 Sep 2009, at 06:04, Tom Lane wrote:

Looks like the clang guys still have some work to do.



Thanks Tom, reported to clang dev's .


Meanwhile, since quite a lot of stuff went in over the weekend and since
yesterday, there's a new report at:


http://zlew.org/postgresql_static_check/scan-build-2009-09-14-1/




Re: [HACKERS] Patch for automating partitions in PostgreSQL 8.4 Beta 2

2009-09-14 Thread Grzegorz Jaskiewicz

Does anyone know what the latest is on that patch?
To be honest, this was the thing I was looking forward to most in
8.5 ... (and probably not just me).





Re: [HACKERS] Elementary dependency look-up

2009-09-14 Thread decibel

On Sep 14, 2009, at 1:36 AM, Greg Smith wrote:

  CASE
WHEN c1.relkind='r' THEN 'table'
WHEN c1.relkind='i' THEN 'index'
WHEN c1.relkind='S' THEN 'sequence'
WHEN c1.relkind='v' THEN 'view'
WHEN c1.relkind='c' THEN 'composite'
WHEN c1.relkind='t' THEN 'TOAST'
ELSE '?'
  END as kind,



I think part of this patch should be providing a function or  
something that converts things like pg_class.relkind into a useful  
string. I know I've created a function that does that (though, I  
return a cased string, since it's easier to run it through lower than  
to try and case it after the fact). I'm not sure if a function is the  
best way to do this or if a table or view would be better (something  
you could join to). One benefit of a table or view is that you could  
provide both cased and lower versions of the names.
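
A minimal sketch of such a helper (the function name and the exact label wording
are just illustrative):

CREATE OR REPLACE FUNCTION relkind_description(kind "char") RETURNS text AS $$
    SELECT CASE $1
             WHEN 'r' THEN 'table'
             WHEN 'i' THEN 'index'
             WHEN 'S' THEN 'sequence'
             WHEN 'v' THEN 'view'
             WHEN 'c' THEN 'composite type'
             WHEN 't' THEN 'TOAST table'
             ELSE '?'
           END;
$$ LANGUAGE sql IMMUTABLE STRICT;

-- usage: SELECT relname, relkind_description(relkind) FROM pg_class LIMIT 10;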

--
Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828





Re: [HACKERS] RfD: more powerful any types

2009-09-14 Thread Pavel Stehule
2009/9/14 decibel deci...@decibel.org:
 On Sep 14, 2009, at 12:13 AM, Pavel Stehule wrote:

 2009/9/13 decibel deci...@decibel.org:

 On Sep 12, 2009, at 5:54 PM, Andrew Dunstan wrote:

 decibel wrote:

 Speaking of concatenation...

 Something I find sorely missing in plpgsql is the ability to put
 variables inside of a string, ie:

 DECLARE
 v_table text := ...
 v_sql text;
 BEGIN
  v_sql := "SELECT * FROM $v_table";

 Of course, I'm assuming that if it was easy to do that it would be done
 already... but I thought I'd just throw it out there.


 Then use a language that supports variable interpolation in strings,
 like
 plperl, plpythonu, plruby  instead of plpgsql.


 Which makes executing SQL much, much harder.

 At least if we get sprintf dealing with strings might become a bit
 easier...

 This feature is nice - but very dangerous - it is the easiest way to make
 an application vulnerable to SQL injection!


 How is it any worse than what people can already do? Anyone who isn't aware
 of the dangers of SQL injection has already screwed themselves. You're
 basically arguing that they would put a variable inside of quotes, but they
 would never use ||.

Simply put - people use the functions quote_literal or quote_ident.
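
For example (a hedged sketch; the table is whatever the caller names, and the
"name" column is made up):

CREATE OR REPLACE FUNCTION rows_named(p_table text, p_name text) RETURNS bigint AS $$
DECLARE
    v_count bigint;
BEGIN
    -- quote_ident() protects the identifier, quote_literal() the value
    EXECUTE 'SELECT count(*) FROM ' || quote_ident(p_table)
         || ' WHERE name = ' || quote_literal(p_name)
      INTO v_count;
    RETURN v_count;
END;
$$ LANGUAGE plpgsql;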

regards
Pavel Stehule

 --
 Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
 Give your computer some brain candy! www.distributed.net Team #1828






Re: [HACKERS] RfD: more powerful any types

2009-09-14 Thread Merlin Moncure
On Mon, Sep 14, 2009 at 1:42 PM, Pavel Stehule pavel.steh...@gmail.com wrote:
 How is it any worse than what people can already do? Anyone who isn't aware
 of the dangers of SQL injection has already screwed themselves. You're
 basically arguing that they would put a variable inside of quotes, but they
 would never use ||.

 simply - people use functions quote_literal or quote_ident.

you still have use of those functions:
execute sprintf('select * from %s', quote_ident($1));

sprintf is no more or less dangerous than || operator.

merlin



Re: [HACKERS] Streaming Replication patch for CommitFest 2009-09

2009-09-14 Thread Kevin Grittner
Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:
 Kevin Grittner wrote:
 
 IMO, it would be best if the status could be sent via NOTIFY.
 
 To where?
 
To registered listeners?
 
I guess I should have worded that as "it would be best if a change in
replication status could be signaled via NOTIFY" -- does that satisfy,
or am I missing your point entirely?
 
-Kevin



Re: [HACKERS] Streaming Replication patch for CommitFest 2009-09

2009-09-14 Thread Heikki Linnakangas
Fujii Masao wrote:
 Here is the latest version of Streaming Replication (SR) patch.

The first thing that caught my eye is that I don't think "replication"
should be a real database. Rather, it should be a keyword in
pg_hba.conf, like the existing "all", "sameuser", "samerole" keywords
that you can put into the database column.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] RfD: more powerful any types

2009-09-14 Thread Pavel Stehule
2009/9/14 Merlin Moncure mmonc...@gmail.com:
 On Mon, Sep 14, 2009 at 1:42 PM, Pavel Stehule pavel.steh...@gmail.com 
 wrote:
 How is it any worse than what people can already do? Anyone who isn't aware
 of the dangers of SQL injection has already screwed themselves. You're
 basically arguing that they would put a variable inside of quotes, but they
 would never use ||.

 simply - people use functions quote_literal or quote_ident.

 you still have use of those functions:
 execute sprintf('select * from %s', quote_ident($1));

 sprintf is no more or less dangerous than || operator.

Sure. I was commenting on a different feature:

some := 'select * from $1'

regards
Pavel

p.s. In this case, I am not sure which is more readable:

execute 'select * from ' || quote_ident($1)

reads quite well too.





 merlin




Re: [HACKERS] Patch for automating partitions in PostgreSQL 8.4 Beta 2

2009-09-14 Thread Emmanuel Cecchet

Grzegorz Jaskiewicz wrote:

Anyone knows what's the latest on that patch ?
To be honest, this was the thing that I was looking forward most in 
8.5 ... (and probably not only me alone).
We are also interested in integrating our autopartitioning patch for 
COPY with that implementation. I can help with the partitioning 
implementation and/or testing of that feature since this is of interest 
for Aster too.


Emmanuel

--
Emmanuel Cecchet
Aster Data Systems
Web: http://www.asterdata.com




Re: [HACKERS] new version of PQconnectdb was:(Re: [HACKERS] Determining client_encoding from client locale)

2009-09-14 Thread Jaime Casanova
On Thu, Sep 10, 2009 at 12:01 AM, Jaime Casanova
jcasa...@systemguards.com.ec wrote:
 On Mon, Jul 6, 2009 at 10:00 AM, Heikki
 Linnakangasheikki.linnakan...@enterprisedb.com wrote:

 Could we
 have a version of PQconnectdb() with an API more suited for setting the
 params programmatically? The PQsetdbLogin() approach doesn't scale as
 parameters are added/removed in future versions, but we could have
 something like this:

 PGconn *PQconnectParams(const char **params)

 Where params is an array with an even number of parameters, forming
 key/value pairs. Usage example:


I extracted the functions to connect that Heikki put in psql in his
patch for determining client_encoding from the client locale, and put them in
libpq, so I followed the PQconnectdbParams(*params[]) approach.

I put the new function at the end of the exports.txt file; is there a
reason to renumber the exports to put it at the beginning with the
other PQconnectdb functions?

This patch still lacks documentation; I will add it in the next few days,
but I want to know if you have any comments about this...

-- 
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157
Index: src/bin/psql/command.c
===
RCS file: /home/postgres/pgrepo/pgsql/src/bin/psql/command.c,v
retrieving revision 1.206
diff -c -r1.206 command.c
*** src/bin/psql/command.c	11 Jun 2009 14:49:07 -	1.206
--- src/bin/psql/command.c	14 Sep 2009 17:34:00 -
***
*** 1239,1246 
  
  	while (true)
  	{
! 		n_conn = PQsetdbLogin(host, port, NULL, NULL,
! 			  dbname, user, password);
  
  		/* We can immediately discard the password -- no longer needed */
  		if (password)
--- 1239,1254 
  
  	while (true)
  	{
! 		const char *params[] = {
! 			"host", host,
! 			"port", port,
! 			"dbname", dbname,
! 			"user", user,
! 			"password", password,
! 			NULL, NULL
! 		};
! 
! 		n_conn = PQconnectdbParams(params);
  
  		/* We can immediately discard the password -- no longer needed */
  		if (password)
Index: src/bin/psql/startup.c
===
RCS file: /home/postgres/pgrepo/pgsql/src/bin/psql/startup.c,v
retrieving revision 1.156
diff -c -r1.156 startup.c
*** src/bin/psql/startup.c	5 Apr 2009 04:19:58 -	1.156
--- src/bin/psql/startup.c	14 Sep 2009 17:33:43 -
***
*** 171,181 
  	/* loop until we have a password if requested by backend */
  	do
  	{
  		new_pass = false;
! 		pset.db = PQsetdbLogin(options.host, options.port, NULL, NULL,
! 					options.action == ACT_LIST_DB && options.dbname == NULL ?
! 					"postgres" : options.dbname,
! 					options.username, password);
  
  		if (PQstatus(pset.db) == CONNECTION_BAD &&
  			PQconnectionNeedsPassword(pset.db) &&
--- 171,189 
  	/* loop until we have a password if requested by backend */
  	do
  	{
+ 		const char *params[] = {
+ 			"host", options.host,
+ 			"port", options.port,
+ 			"dbname", (options.action == ACT_LIST_DB &&
+ 				options.dbname == NULL) ? "postgres" : options.dbname,
+ 			"user", options.username,
+ 			"password", password,
+ 			NULL, NULL
+ 		};
+ 
  		new_pass = false;
! 
! 		pset.db = PQconnectdbParams(params);
  
  		if (PQstatus(pset.db) == CONNECTION_BAD &&
  			PQconnectionNeedsPassword(pset.db) &&
Index: src/interfaces/libpq/exports.txt
===
RCS file: /home/postgres/pgrepo/pgsql/src/interfaces/libpq/exports.txt,v
retrieving revision 1.23
diff -c -r1.23 exports.txt
*** src/interfaces/libpq/exports.txt	31 Mar 2009 01:41:27 -	1.23
--- src/interfaces/libpq/exports.txt	14 Sep 2009 17:33:03 -
***
*** 153,155 
--- 153,156 
  PQfireResultCreateEvents  151
  PQconninfoParse   152
  PQinitOpenSSL 153
+ PQconnectdbParams		  154
Index: src/interfaces/libpq/fe-connect.c
===
RCS file: /home/postgres/pgrepo/pgsql/src/interfaces/libpq/fe-connect.c,v
retrieving revision 1.376
diff -c -r1.376 fe-connect.c
*** src/interfaces/libpq/fe-connect.c	24 Jul 2009 17:58:31 -	1.376
--- src/interfaces/libpq/fe-connect.c	14 Sep 2009 17:34:49 -
***
*** 211,216 
--- 211,219 
  	"GSS-library", "", 7},		/* sizeof("gssapi") = 7 */
  #endif
  
+ 	{"appname", NULL, NULL, NULL,
+ 	"Client-application", "", 45},
+ 
  	/* Terminating entry --- MUST BE LAST */
  	{NULL, NULL, NULL, NULL,
  	NULL, NULL, 0}
***
*** 283,288 
--- 286,333 
   */
  
  /*
+  *  PQconnectdbParams
+  */
+ PGconn *
+ PQconnectdbParams(const char * const *params)
+ {
+ 	PGconn *ret;
+ 	PQExpBufferData buf;
+ 
+ 	initPQExpBuffer(&buf);
+  
+ 	while(*params)
+ 	{
+ 		const char *option = params[0];
+ 		const char *value  = params[1];
+  
+ 		if (value != NULL)
+ 		{
+ 			/* write option name */
+ 		appendPQExpBuffer(&buf, "%s = '", option);
+ 	

Re: [HACKERS] new version of PQconnectdb was:(Re: [HACKERS] Determining client_encoding from client locale)

2009-09-14 Thread Andrew Chernow

Jaime Casanova wrote:

On Thu, Sep 10, 2009 at 12:01 AM, Jaime Casanova
jcasa...@systemguards.com.ec wrote:

On Mon, Jul 6, 2009 at 10:00 AM, Heikki
Linnakangasheikki.linnakan...@enterprisedb.com wrote:

Could we
have a version of PQconnectdb() with an API more suited for setting the
params programmatically? The PQsetdbLogin() approach doesn't scale as
parameters are added/removed in future versions, but we could have
something like this:

PGconn *PQconnectParams(const char **params)

Where params is an array with an even number of parameters, forming
key/value pairs. Usage example:



i extracted the functions to connect that Heikki put on psql in his
patch for determining client_encoding from client locale and put it in
libpq so i follow the PQconnectdbParams(* params[]) approach.


I was following this and never saw any firm decision on the prototype
for this function, although I can say the single-argument version did
not appear to win any votes.


The posts below agreed on a two-argument version with parallel arrays
(keywords, values):


http://archives.postgresql.org/pgsql-hackers/2009-09/msg00533.php
http://archives.postgresql.org/pgsql-hackers/2009-09/msg00559.php

There is also the idea of passing an array of structs floating around,
either as a NULL-terminated list or with an additional argument specifying
the element count.


--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/



Re: [HACKERS] COPY enhancements

2009-09-14 Thread Emmanuel Cecchet

Greg Smith wrote:

On Fri, 11 Sep 2009, Emmanuel Cecchet wrote:

I guess the problem with extra or missing columns is to make sure 
that you know exactly which data belongs to which column so that you 
don't put data in the wrong columns which is likely to happen if this 
is fully automated.


Allowing the extra column case is easy:  everwhere in copy.c you find 
the error message extra data after last expected column, just ignore 
the overflow fields rather than rejecting the line just based on 
that.  And the default information I mentioned you might want to 
substitute for missing columns is already being collected by the code 
block with the comment Get default info if needed.
If I understand it correctly, you expect the garbage to be after the last
column. But what if the extra or missing column is somewhere up front or
in the middle? Sometimes a type conflict will help you detect the
problem; sometimes you will just insert garbage. This might call for
another mechanism that logs the lines that are automatically 'adjusted',
so that any mistake made during this automated process can be rolled back.


Emmanuel

--
Emmanuel Cecchet
Aster Data Systems
Web: http://www.asterdata.com




Re: [HACKERS] generic copy options

2009-09-14 Thread Emmanuel Cecchet
This looks good. Should I try to elaborate on that for the patch with
error logging and autopartitioning in COPY?


manu

Robert Haas wrote:

Here's a half-baked proof of concept for the above approach.  This
probably needs more testing than I've given it, and I haven't
attempted to fix the psql parser or update the documentation, but it's
at least an outline of a solution.  I did patch all the regression
tests to use the new syntax, so you can look at that part of the patch
to get a flavor for it.  If this is broadly acceptable I can attempt
to nail down the details, or someone else is welcome to pick it up.
It's on my git repo as well, as usual.
  


--
Emmanuel Cecchet
Aster Data Systems
Web: http://www.asterdata.com




Re: [HACKERS] new version of PQconnectdb was:(Re: [HACKERS] Determining client_encoding from client locale)

2009-09-14 Thread Jaime Casanova
On Mon, Sep 14, 2009 at 1:34 PM, Andrew Chernow a...@esilo.com wrote:
 Jaime Casanova wrote:

 i extracted the functions to connect that Heikki put on psql in his
 patch for determining client_encoding from client locale and put it in
 libpq so i follow the PQconnectdbParams(* params[]) approach.

[...]

 The below posts agreed on a two argument version of parallel arrays
 (keywords, values):

 http://archives.postgresql.org/pgsql-hackers/2009-09/msg00533.php
 http://archives.postgresql.org/pgsql-hackers/2009-09/msg00559.php


Actually, Tom said: "it's hard to be sure which way is
actually more convenient without having tried coding some likely
calling scenarios both ways."

So I tried one scenario. :)  Do you think it is worth the trouble to try the
other approach? I could make the patch if someone is interested...
Personally, I think it will cause more problems than it solves, because you
have to be sure your arrays stay in sync with each other...

 There is also the idea of passing an array of structs floating around, NULL
 terminated list or include an additional argument specifying element count.


That's one more variable in the equation, more unnecessary complexity, and
another source of errors, IMO...

-- 
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] new version of PQconnectdb was:(Re: [HACKERS] Determining client_encoding from client locale)

2009-09-14 Thread Tom Lane
Jaime Casanova jcasa...@systemguards.com.ec writes:
 i put the new function at the end of the exports.txt file, there's a
 reason to renumber the exports to put it at the beginning with the
 other PQconnectdb function?

Exports.txt numbers do not change.  EVER.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch LWlocks instrumentation

2009-09-14 Thread Pierre Frédéric Caillau d



Have you looked at the total execution time with and without the
LWLOCK_TIMING_STATS?


	It didn't show any significant overhead on the little COPY test I made.  
On selects, it probably does (just like EXPLAIN ANALYZE), but I didn't  
test.
	It is not meant to be always active, it's a #define, so I guess it would  
be OK though.


	I'm going to modify it according to your suggestions and repost it (why  
didn't I do that first ?...)



Not that this changes your conclusion.  With or without that distortion I
completely believe that WALInsertLock is the bottleneck of parallel bulk
copy into unindexed tables.  I just can't find anything else it is a  
primary

bottleneck on.  I think the only real solution for bulk copy is to call
XLogInsert less often.  For example, it could build blocks in local  
memory,

then when done copy it into the shared buffers and then toss the entire
block into WAL in one call.  Easier said than implemented, of course.


Actually,

http://archives.postgresql.org/pgsql-hackers/2009-09/msg00806.php

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] generic copy options

2009-09-14 Thread Robert Haas
On Mon, Sep 14, 2009 at 2:51 PM, Emmanuel Cecchet m...@asterdata.com wrote:
 This looks good. Should I try to elaborate on that for the patch with error
 logging and autopartitioning in COPY?

That makes sense to me.  You shouldn't need to do anything else in
gram.y; whatever you want to add should just involve changing copy.c.
If not, please post the details.

We also need to fix the psql end of this, and the docs...  any
interest in taking a crack at either of those?

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] new version of PQconnectdb was:(Re: [HACKERS] Determining client_encoding from client locale)

2009-09-14 Thread Andrew Chernow

Jaime Casanova wrote:

On Mon, Sep 14, 2009 at 1:34 PM, Andrew Chernow a...@esilo.com wrote:

Jaime Casanova wrote:

i extracted the functions to connect that Heikki put on psql in his
patch for determining client_encoding from client locale and put it in
libpq so i follow the PQconnectdbParams(* params[]) approach.

[...]

The below posts agreed on a two argument version of parallel arrays
(keywords, values):

http://archives.postgresql.org/pgsql-hackers/2009-09/msg00533.php
http://archives.postgresql.org/pgsql-hackers/2009-09/msg00559.php



actually, Tom said: "it's hard to be sure which way is
actually more convenient without having tried coding some likely
calling scenarios both ways."



Aahhh, correct you are Daniel son :)


Personally, I think it will cause more problems than it solves, because you
have to be sure your arrays keep the right relationship between them...



A strict relationship exists either way.


There is also the idea, floating around, of passing an array of structs, either as a
NULL-terminated list or with an additional argument specifying the element count.



one more variable in the equation, more unnecessary complexity and
another source of errors, IMO...


one more variable or one more element, both of which cause problems if 
omitted/incorrect.


const char *params[] =
  {"host", "blah.com", "port", "6262", NULL, NULL};

// compiler enforces relationship
const PGoption opts[] =
  {{"host", "blah.com"}, {"port", "6262"}, {NULL, NULL}};

IMHO, the struct approach seems like a cleaner solution.

Any chance of using a term other than "params"?  Maybe "options" or 
"props"?


--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bulk Inserts

2009-09-14 Thread Pierre Frédéric Caillau d


Replying to myself...

Jeff suggested to build pages in local memory and insert them later in the  
table. This is what's used in CLUSTER for instance, I believe.


It has some drawbacks though :

- To insert the tuples in indexes, the tuples need tids, but if you build  
the page in local memory, you don't know on which page they will be until  
after allocating the page, which will probably be done after the page is  
built, so it's a bit of a chicken and egg problem.


- It only works on new pages. Pages which are not empty, but have free  
space, cannot be written in this way.


The little experiment I made yesterday does not have these drawbacks, 
since it allocates pages in the standard way; it simply inserts many 
tuples in one operation instead of just one. If the page happens to be 
empty, it's even better, but it's not necessary. If your table has lots 
of free space, it will be used.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] generic copy options

2009-09-14 Thread Emmanuel Cecchet

Robert Haas wrote:

On Mon, Sep 14, 2009 at 2:51 PM, Emmanuel Cecchet m...@asterdata.com wrote:
  

This looks good. Should I try to elaborate on that for the patch with error
logging and autopartitioning in COPY?



That makes sense to me.  You shouldn't need to do anything else in
gram.y; whatever you want to add should just involve changing copy.c.
If not, please post the details.
  

Ok, I'll keep you posted.

We also need to fix the psql end of this, and the docs...  any
interest in taking a crack at either of those?
  

I can certainly help with the doc.
I have never looked at the psql code but that could be a good way to get 
started on that. If you can point me at where to look, I'll give it a 
try.


Emmanuel

--
Emmanuel Cecchet
Aster Data Systems
Web: http://www.asterdata.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] DefaultACLs

2009-09-14 Thread Petr Jelinek

Josh Berkus wrote:

But if I understood Tom's suggestions correctly then his approach does
not solve this at all, since every one of those users with CREATE TABLE
privileges would have to also set the same DEFAULT PRIVILEGES and the DBA
would have no say in the matter.



This latter approach benefits nobody.  If defaults can't be set by the
DBA centrally, the feature is useless.
  

I agree; however, I assume I understood Tom properly since he didn't reply.

So I've been working on a solution that I am happy with (which does not 
mean anybody else will be, though).
I created a new version with the syntax devised by Tom, one which is user 
centric, but also one with which the DBA can control default privileges.

The attached patch adds syntax in this format:
ALTER DEFAULT PRIVILEGES [ IN SCHEMA schema_name(s) ] [ FOR ROLE 
role_name(s) ] GRANT privileges ON object_type TO role(s);
Obviously it sets default privileges for new objects of the given object 
type created by the role(s) specified with the FOR ROLE clause (is that 
syntax ok?) and inside the specified schema(s).
If the user omits IN SCHEMA it applies database-wide. Database-wide settings 
are used only if there is nothing specified for the current schema (i.e. no 
cascading/inheritance).
If FOR ROLE is omitted then the privileges are set for the current role. 
Only superusers and users with the ADMIN privilege granted on the role (we 
might want to add a specific privilege for this, but ADMIN seems suitable 
to me) can use the FOR ROLE clause.

The order of FOR ROLE and IN SCHEMA clauses does not matter.
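
For illustration, an invocation following the syntax described above might
look like this (schema, role, and privilege names are invented, and the
object_type keywords are only guesses at what the patch accepts):

ALTER DEFAULT PRIVILEGES IN SCHEMA app FOR ROLE app_owner
    GRANT SELECT ON TABLES TO reporting;

-- Without IN SCHEMA the setting would apply database-wide, and without
-- FOR ROLE it would apply to the current role:
ALTER DEFAULT PRIVILEGES
    GRANT EXECUTE ON FUNCTIONS TO app_users;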

Some of my thoughts on the changed behavior of the patch:
There is no need to be the schema owner anymore in this implementation since 
the privileges are handled quite differently.
There are no longer issues about who should be the grantor (discussed on IRC 
only; there was a problem that the schema owner as grantor didn't seem logical, 
and we didn't know the owner at the time we created default privileges).
Also, there is no longer a problem with what the template for privileges 
should be, because we now know the owner of the object at default-privileges 
creation time (which we didn't before, as it was all schema based), so we can 
use the standard template used by GRANT.

The whole thing is more consistent with GRANT.
The patch is also a bit smaller :)
It's not as easy to do the original idea of setting default privileges 
for a schema for all users with CREATE privilege on the schema, but it can 
still be done; one just has to update the default privileges every time 
somebody is granted that privilege, and the DBA can still have control over 
it all.


Hopefully this will at least inspire some more discussion on the matter.

--
Regards
Petr Jelinek (PJMODOS)



defacl-2009-09-14.diff.gz
Description: Unix tar archive

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest Code Sprint with PUGs

2009-09-14 Thread gabrielle
On Fri, Jul 10, 2009 at 4:06 PM, Josh Berkus j...@agliodbs.com wrote:
 Gabrielle of PDXPUG volunteered that PUG to hold a code sprint in
 coordination with a commitfest sometime soon.  For that event, the PDXPUG
 members would take on a dozen or so patches, compile and review them and
 submit the results.

All systems are go!

Date: Sept 15
Time: 6pm - 9pm PDT, which is GMT -7.

We have 6 attendees.  Robert Haas is going to choose 4 patches for us.
 Can somebody volunteer to hang out on IRC in case we have questions?

Thanks!

gabrielle

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] COPY enhancements

2009-09-14 Thread Andrew Dunstan



Emmanuel Cecchet wrote:

Greg Smith wrote:

On Fri, 11 Sep 2009, Emmanuel Cecchet wrote:

I guess the problem with extra or missing columns is to make sure 
that you know exactly which data belongs to which column so that you 
don't put data in the wrong columns which is likely to happen if 
this is fully automated.


Allowing the extra column case is easy: everywhere in copy.c you find 
the error message "extra data after last expected column", just ignore 
the overflow fields rather than rejecting the line just based on that.  
And the default information I mentioned you might want to substitute 
for missing columns is already being collected by the code block with 
the comment "Get default info if needed".
If I understand it correctly, you expect the garbage to be after the last 
column. But what if the extra or missing column is somewhere up front 
or in the middle? Sometimes you might have a type conflict that will 
help you detect the problem, sometimes you will just insert garbage. 
This might call for another mechanism that logs the lines that are 
automatically 'adjusted', so that you can roll back any mistake that 
might happen during this automated process.






Garbage off to the right is exactly the case that we have. Judging from 
what I'm hearing, a number of other people are in the same situation.


Nobody suggests that a facility to ignore extra columns will handle 
every case. It will handle what increasingly appears to be a common case.
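
A made-up illustration of the distinction, for a two-column target table
and tab-delimited text-format input:

CREATE TABLE target (id int, name text);

-- 1<TAB>foo                 loads fine today
-- 2<TAB>bar<TAB>leftover    currently rejected with "extra data after last
--                           expected column"; ignoring trailing fields would
--                           load (2, 'bar') and drop 'leftover'
-- 3<TAB>leftover<TAB>baz    would also load under that rule, but as
--                           (3, 'leftover'), silently dropping 'baz'
--                           (the misalignment case Emmanuel is worried about)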


cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Streaming Replication patch for CommitFest 2009-09

2009-09-14 Thread Simon Riggs

On Mon, 2009-09-14 at 20:24 +0900, Fujii Masao wrote:

 The latest patch has overcome those problems:

Well done. I hope to look at it myself in a few days time.

-- 
 Simon Riggs   www.2ndQuadrant.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP - syslogger infrastructure changes

2009-09-14 Thread Guillaume Smet
Hi Magnus,

On Mon, Sep 14, 2009 at 9:41 PM, Magnus Hagander mag...@hagander.net wrote:
 First, the patch removes the logging_collector parameter and basically
 assumes that logging_collector is always on.

Alvaro committed this a few days ago:
http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/doc/src/sgml/config.sgml?r1=1.225r2=1.226

Any consequences?

 Second, data transfer from the backends is now in the form of
 structured data, and the actual logging message is put together in the
 syslogger (today, it's put together in the backend and just sent as a
 string to the syslogger). Yes, this means that we will likely send
 more data than what's eventually going to be logged, since all fields
 don't go out (except with CVS logging, I guess). But you usually don't
 send *that* much data in the log.

I don't know if it will make a real difference but some of us log
quite a lot of queries.

-- 
Guillaume

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP - syslogger infrastructure changes

2009-09-14 Thread Magnus Hagander
On Mon, Sep 14, 2009 at 21:56, Guillaume Smet guillaume.s...@gmail.com wrote:
 Hi Magnus,

 On Mon, Sep 14, 2009 at 9:41 PM, Magnus Hagander mag...@hagander.net wrote:
 First, the patch removes the logging_collector parameter and basically
 assumes that logging_collector is always on.

 Alvaro committed this a few days ago:
 http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/doc/src/sgml/config.sgml?r1=1.225r2=1.226

 Any consequences?

At this point, not really. If you log to syslog, it still goes
directly to syslog, just like before, without passing through the logging
collector. That is something worth considering in the future, though.



 Second, data transfer from the backends is now in the form of
 structured data, and the actual logging message is put together in the
 syslogger (today, it's put together in the backend and just sent as a
 string to the syslogger). Yes, this means that we will likely send
 more data than what's eventually going to be logged, since all fields
 don't go out (except with CVS logging, I guess). But you usually don't
 send *that* much data in the log.

 I don't know if it will make a real difference but some of us log
 quite a lot of queries.

Yeah, one of the main reasons is to be able to do that using CSV (not
CVS, sorry about that typo) to stick it into logging parsers, and then
have the regular logfile available for DBA reading as well.

Anyway, we'll have to do some performance testing as well, obviously -
but I haven't done any of that yet.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] generic copy options

2009-09-14 Thread Robert Haas
On Mon, Sep 14, 2009 at 3:25 PM, Emmanuel Cecchet m...@asterdata.com wrote:
 Robert Haas wrote:

 On Mon, Sep 14, 2009 at 2:51 PM, Emmanuel Cecchet m...@asterdata.com
 wrote:


 This looks good. Should I try to elaborate on that for the patch with
 error logging and autopartitioning in COPY?


 That makes sense to me.  You shouldn't need to do anything else in
 gram.y; whatever you want to add should just involve changing copy.c.
 If not, please post the details.

 Ok, I'll keep you posted.

 We also need to fix the psql end of this, and the docs...  any
 interest in taking a crack at either of those?

 I can certainly help with the doc.

If you have the time to revise the docs to describe this new syntax,
that would be great.

 I have never looked at the psql code but that could be a good way to get
 started on that. If you can point me at where to look, I'll give it a
 try.

I don't know either off the top of my head, but I'll go look for it
when I get a chance.

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP - syslogger infrastructure changes

2009-09-14 Thread Tom Lane
Magnus Hagander mag...@hagander.net writes:
 First, the patch removes the logging_collector parameter and basically
 assumes that logging_collector is always on.

I don't find that to be a good idea, and you certainly have not made
a case why we should change it.  I can't see any reason why pushing
functionality out of backends and downstream to the syslogger process
is an improvement.  What it's more likely to do is create a processing
bottleneck and a single point of failure.

 ... Given that the syslogger is now
 always started, those that actually *want* logging to stderr (which I
 claim is a low number of people, but that's a different story) will
 have it go through the syslogger and then to the stderr of syslogger.

That design doesn't work because there is then *no* way to recover from
a syslogger crash.  You no longer have access to the original stderr
file once the postmaster has redirected stderr to syslogger.  We can
live with that so long as syslogger's stderr output isn't very
interesting, but not if it's going to be the main log channel.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] new version of PQconnectdb was:(Re: [HACKERS] Determining client_encoding from client locale)

2009-09-14 Thread Jaime Casanova
On Mon, Sep 14, 2009 at 1:55 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Jaime Casanova jcasa...@systemguards.com.ec writes:
 i put the new function at the end of the exports.txt file, there's a
 reason to renumber the exports to put it at the beginning with the
 other PQconnectdb function?

 Exports.txt numbers do not change.  EVER.


I didn't find any info about it, not even in the sources... should we
document that we need to put some functions in that file, and for what
reasons?

Actually, I was very confused when psql failed to compile, until I
understood I needed to put the function in that file.

-- 
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] new version of PQconnectdb was:(Re: [HACKERS] Determining client_encoding from client locale)

2009-09-14 Thread Jaime Casanova
On Mon, Sep 14, 2009 at 2:20 PM, Andrew Chernow a...@esilo.com wrote:
 Jaime Casanova wrote:

 On Mon, Sep 14, 2009 at 1:34 PM, Andrew Chernow a...@esilo.com wrote:

 Jaime Casanova wrote:

 i extracted the functions to connect that Heikki put on psql in his
 patch for determining client_encoding from client locale and put it in
 libpq so i follow the PQconnectdbParams(* params[]) approach.

 [...]

 The below posts agreed on a two argument version of parallel arrays
 (keywords, values):

 http://archives.postgresql.org/pgsql-hackers/2009-09/msg00533.php
 http://archives.postgresql.org/pgsql-hackers/2009-09/msg00559.php


 actually, Tom said: "it's hard to be sure which way is
 actually more convenient without having tried coding some likely
 calling scenarios both ways."


 Aahhh, correct you are Daniel son :)


??? don't understand you ???

 Personally, I think it will cause more problems than it solves, because you
 have to be sure your arrays keep the right relationship between them...


 A strict relationship exists either way.

[...]

 IMHO, the struct approach seems like a cleaner solution.


i agree

 Any chance of using a term other than "params"?  Maybe "options" or "props"?


I don't have any problems with "options"

-- 
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] generic copy options

2009-09-14 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Mon, Sep 14, 2009 at 3:25 PM, Emmanuel Cecchet m...@asterdata.com wrote:
 I have never looked at the psql code but that could be a good way to get
 started on that. If you can point me at where to look, I'll give it a
 try.

 I don't know either off the top of my head, but I'll go look for it
 when I get a chance.

src/bin/psql/copy.c ...

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] new version of PQconnectdb was:(Re: [HACKERS] Determining client_encoding from client locale)

2009-09-14 Thread Andrew Chernow

actually, Tom said: "it's hard to be sure which way is
actually more convenient without having tried coding some likely
calling scenarios both ways."


Aahhh, correct you are Daniel son :)



??? don't understand you ???


From the movie Karate Kid; oops, it should be "Daniel San".  I was trying 
to be cute but that apparently failed :(


--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] new version of PQconnectdb was:(Re: [HACKERS] Determining client_encoding from client locale)

2009-09-14 Thread Tom Lane
Jaime Casanova jcasa...@systemguards.com.ec writes:
 On Mon, Sep 14, 2009 at 1:55 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Exports.txt numbers do not change.  EVER.

 I didn't find any info about it, not even in the sources... should we
 document that we need to put some functions in that file, and for what
 reasons?

Every function that is meant to be exported from libpq.
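
For the archives: the file is just a two-column list of exported function
names and their ordinals, and a new function gets the next unused number
appended at the end. The ordinal below is purely illustrative:

# src/interfaces/libpq/exports.txt (excerpt format; the number is made up)
PQconnectdbParams         160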

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] new version of PQconnectdb was:(Re: [HACKERS] Determining client_encoding from client locale)

2009-09-14 Thread Jaime Casanova
On Mon, Sep 14, 2009 at 3:31 PM, Andrew Chernow a...@esilo.com wrote:
 actually, Tom said: "it's hard to be sure which way is
 actually more convenient without having tried coding some likely
 calling scenarios both ways."

 Aahhh, correct you are Daniel son :)


 ??? don't understand you ???

 From the movie Karate Kid; oops, it should be "Daniel San".


ah! got it... ;)

-- 
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest Code Sprint with PUGs

2009-09-14 Thread David E. Wheeler

On Sep 14, 2009, at 12:37 PM, gabrielle wrote:


All systems are go!

Date: Sept 15
Time: 6pm - 9pm PDT, which is GMT -7.

We have 6 attendees.  Robert Haas is going to choose 4 patches for us.
Can somebody volunteer to hang out on IRC in case we have questions?


I might be able to make it. I expect to have the hstore patch to  
review (I reviewed it last fest).


David

--
David E. Wheeler david.whee...@pgexperts.com
Associate, PostgreSQL Experts, Inc.
Phone: 1-888-PG-EXPRT x504
http://www.pgexperts.com/


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] DefaultACLs

2009-09-14 Thread Josh Berkus
Petr,

 It's not as easy to do the original idea of setting default privileges
 for a schema for all users with CREATE privilege on the schema, but it can
 still be done; one just has to update the default privileges every time
 somebody is granted that privilege, and the DBA can still have control over
 it all.

Sounds like a good solution.  Thanks for persisting with this.


-- 
Josh Berkus
PostgreSQL Experts Inc.
www.pgexperts.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Feature Request: JSON input for hstore

2009-09-14 Thread Josh Berkus
Hackers,

I've just had a feature request from a client that we come up with a way
to enable JSON input into hstore.  This would make hstore much more
useful for software developers.

First question: would this go in the language driver, libpq, or the
backend?  Thoughts?

-- 
Josh Berkus
PostgreSQL Experts Inc.
www.pgexperts.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Timestamp to time_t

2009-09-14 Thread Scott Mohekey
I think the issue is that we treat TIMESTAMP WITHOUT TIME ZONE as TIMESTAMP
at GMT. We then convert it to a user's local timezone within application
code.
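
A minimal illustration of the difference between the two types (the
timestamps and zone names below are picked arbitrarily):

SET TimeZone = 'UTC';
SELECT '2009-09-14 12:00:00'::timestamp    AS without_tz,  -- no zone attached
       '2009-09-14 12:00:00'::timestamptz  AS with_tz;     -- absolute instant, 12:00 UTC

SET TimeZone = 'America/Los_Angeles';
SELECT '2009-09-14 12:00:00+00'::timestamptz;  -- displayed as 2009-09-14 05:00:00-07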
Scott Mohekey
Systems/Application Specialist – OnTrack – Telogis, Inc.
www.telogis.com  www.telogis.co.nz
+1 949 625-4115 ext. 207 (USA)  +64 3339 2825 x207 (NZ)

Leading Global Platform for Location Based Services
--
This e-mail, and any attachments, is intended only for use by the
addressee(s) named herein and may contain legally privileged and/or
confidential information.  It is the property of Telogis.  If you are not
the intended recipient of this e-mail, you are hereby notified that any
dissemination, distribution or copying of this e-mail, any attachments
thereto, and use of the information contained, is strictly prohibited.  If
you have received this e-mail in error, please notify the sender and
permanently delete the original and any copy there of.


On Tue, Sep 15, 2009 at 5:29 AM, Kevin Grittner kevin.gritt...@wicourts.gov
 wrote:

 Scott Mohekey scott.mohe...@telogis.com wrote:

  What is the relationship between Timestamp and TimestampTz?

 TIMESTAMP WITHOUT TIME ZONE does not identify a moment in time without
 first associating it with a time zone.  When Daylight Saving Time
 ends, the same TIMESTAMP WITHOUT TIME ZONE values will recur, without
 any way to distinguish them from those from the previous hour.

 The only use case I have been able to think of, personally, for
 TIMESTAMP WITHOUT TIME ZONE is a release date for, say, a Harry Potter
 book or World of Warcraft expansion set, where, as the given moment
 arrives in each time zone, stores in that time zone can begin to sell
 the given work.

 I suspect there's probably one or two other valid uses, but most uses
 are just mistakes, waiting to be exposed.  For almost every reasonable
 use, the right data type is TIMESTAMP WITH TIME ZONE.

 -Kevin



Re: [HACKERS] Commitfest Code Sprint with PUGs

2009-09-14 Thread David E. Wheeler

On Sep 14, 2009, at 3:22 PM, Robert Haas wrote:

I might be able to make it. I expect to have the hstore patch to
review (I reviewed it last fest).


It has not been resubmitted for this CommitFest.


He told me a few hours ago that he was working to get it done. Looks  
like there's about 90 mins to go. ;-)



I don't necessarily try to assign people back to the same patches - I
think sometimes a fresh pair of eyes is useful.  But I'm not dead set
on changing it up, either.


Yeah, but I *love* this patch! :-)

David

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest Code Sprint with PUGs

2009-09-14 Thread Robert Haas
On Mon, Sep 14, 2009 at 4:41 PM, David E. Wheeler da...@kineticode.com wrote:
 On Sep 14, 2009, at 12:37 PM, gabrielle wrote:

 All systems are go!

 Date: Sept 15
 Time: 6pm - 9pm PDT, which is GMT -7.

 We have 6 attendees.  Robert Haas is going to choose 4 patches for us.
 Can somebody volunteer to hang out on IRC in case we have questions?

 I might be able to make it. I expect to have the hstore patch to review (I
 reviewed it last fest).

It has not been resubmitted for this CommitFest.

I don't necessarily try to assign people back to the same patches - I
think sometimes a fresh pair of eyes is useful.  But I'm not dead set
on changing it up, either.

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Feature Request: JSON input for hstore

2009-09-14 Thread David Fetter
On Mon, Sep 14, 2009 at 03:14:57PM -0700, Josh Berkus wrote:
 Hackers,
 
 I've just had a feature request from a client that we come up with a
 way to enable JSON input into hstore.  This would make hstore much
 more useful for software developers.
 
 First question: would this go in the language driver, libpq, or the
 backend?  Thoughts?

The backend, ideally in some pluggable way.  XML, under proper
circumstances, should work, as should YAML, etc.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] revised hstore patch

2009-09-14 Thread Andrew Gierth
Latest hstore patch with provision for in-place upgrading.

-- 
Andrew (irc:RhodiumToad)



hstore-20090914.patch.gz
Description: hstore patch

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Feature Request: JSON input for hstore

2009-09-14 Thread Andrew Dunstan



David Fetter wrote:

On Mon, Sep 14, 2009 at 03:14:57PM -0700, Josh Berkus wrote:
  

Hackers,

I've just had a feature request from a client that we come up with a
way to enable JSON input into hstore.  This would make hstore much
more useful for software developers.

First question: would this go in the language driver, libpq, or the
backend?  Thoughts?



The backend, ideally in some pluggable way.  XML, under proper
circumstances, should work, as should YAML, etc.


  


The way the standard specifies SQL/XML requires grammar support. That 
means it can't really be pluggable. We've been over this before.


cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Feature Request: JSON input for hstore

2009-09-14 Thread Tom Lane
David Fetter da...@fetter.org writes:
 On Mon, Sep 14, 2009 at 03:14:57PM -0700, Josh Berkus wrote:
 I've just had a feature request from a client that we come up with a
 way to enable JSON input into hstore.  This would make hstore much
 more useful for software developers.
 
 First question: would this go in the language driver, libpq, or the
 backend?  Thoughts?

 The backend, ideally in some pluggable way.  XML, under proper
 circumstances, should work, as should YAML, etc.

Uh, hstore is a contrib module.  Why aren't we talking about just some
more code in the contrib module?  Adding anything to support a contrib
module in, say, libpq seems right out.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] RfD: more powerful any types

2009-09-14 Thread decibel


On Sep 14, 2009, at 1:02 PM, Pavel Stehule wrote:


2009/9/14 Merlin Moncure mmonc...@gmail.com:
On Mon, Sep 14, 2009 at 1:42 PM, Pavel Stehule  
pavel.steh...@gmail.com wrote:
How is it any worse than what people can already do? Anyone who isn't aware
of the dangers of SQL injection has already screwed themselves. You're
basically arguing that they would put a variable inside of quotes, but they
would never use ||.


simply - people use functions quote_literal or quote_ident.


you still have the use of those functions:
execute sprintf('select * from %s', quote_ident($1));

sprintf is no more or less dangerous than the || operator.


Sure. I was commenting on a different feature:

some := 'select * from $1'

regards
Pavel

P.S. In this case, I am not sure which is more readable:

execute 'select * from ' || quote_ident($1)

is quite readable too.



Ahh... the problem is one of fixating on an example instead of the  
overall use case.


More examples...

RETURN 'Your account is now $days_overdue days overdue. Please  
contact your account manager ($manager_name) to ...';


And an example of how readability would certainly be improved...

sql := $$INSERT INTO cnu_stats.$$ || v_field_name || $$( $$ || v_field_name || $$ )
    SELECT DISTINCT $$ || v_field_name || $$
    FROM chunk t
    WHERE NOT EXISTS( SELECT * FROM cnu_stats.$$ || v_field_name || $$ s
                      WHERE s.$$ || v_field_name || $$ = t.$$ || v_field_name || $$ )$$

becomes

sql := $$INSERT INTO cnu_stats.${v_field_name} ( ${v_field_name} )
    SELECT DISTINCT $v_field_name
    FROM chunk t
    WHERE NOT EXISTS( SELECT * FROM cnu_stats.${v_field_name} s
                      WHERE s.${v_field_name} = t.${v_field_name} )$$


Granted, that example wouldn't be too bad with sprintf, but only  
because everything is referencing the same field.

--
Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Feature Request: JSON input for hstore

2009-09-14 Thread Jeff Davis
On Mon, 2009-09-14 at 15:14 -0700, Josh Berkus wrote:
 Hackers,
 
 I've just had a feature request from a client that we come up with a way
 to enable JSON input into hstore.  This would make hstore much more
 useful for software developers.
 
 First question: would this go in the language driver, libpq, or the
 backend?  Thoughts?

What's wrong with just using a variant of the type input function? With
a parameterized insert, it doesn't seem like it's really placing much of
a burden on the application.

Or am I missing the use case?
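
For example, something like this already works with a parameterized insert
and hstore's normal input syntax (the table and column are invented):

CREATE TABLE things (id serial PRIMARY KEY, attrs hstore);

-- $1 bound to text such as:  'a=>1, b=>2'
INSERT INTO things (attrs) VALUES ($1::hstore);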

Regards,
Jeff Davis


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Feature Request: JSON input for hstore

2009-09-14 Thread Robert Haas

On Sep 14, 2009, at 7:05 PM, Jeff Davis pg...@j-davis.com wrote:


On Mon, 2009-09-14 at 15:14 -0700, Josh Berkus wrote:

Hackers,

I've just had a feature request from a client that we come up with a way
to enable JSON input into hstore.  This would make hstore much more
useful for software developers.

First question: would this go in the language driver, libpq, or the
backend?  Thoughts?


What's wrong with just using a variant of the type input function?  With
a parameterized insert, it doesn't seem like it's really placing much of
a burden on the application.


Agreed, I was thinking the same thing.

...Robert

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest Code Sprint with PUGs

2009-09-14 Thread Robert Haas
On Mon, Sep 14, 2009 at 6:25 PM, David E. Wheeler da...@kineticode.com wrote:
 I don't necessarily try to assign people back to the same patches - I
 think sometimes a fresh pair of eyes is useful.  But I'm not dead set
 on changing it up, either.

 Yeah, but I *love* this patch! :-)

Fair enough.  Sold.

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Feature Request: JSON input for hstore

2009-09-14 Thread David E. Wheeler

On Sep 14, 2009, at 4:32 PM, Robert Haas wrote:

What's wrong with just using a variant of the type input function?  With
a parameterized insert, it doesn't seem like it's really placing much of
a burden on the application.


Agreed, I was thinking the same thing.


And the type output function, too. Not sure how you'd configure that,  
though.


David

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Feature Request: JSON input for hstore

2009-09-14 Thread Robert Haas
On Mon, Sep 14, 2009 at 7:42 PM, David E. Wheeler da...@kineticode.com wrote:
 On Sep 14, 2009, at 4:32 PM, Robert Haas wrote:

 What's wrong with just using a variant of the type input function? With
 a parameterized insert, it doesn't seem like it's really placing much of
 a burden on the application.

 Agreed, I was thinking the same thing.

 And the type output function, too. Not sure how you'd configure that,
 though.

Well, I don't think you really need them to BE the type
input/output functions.  You just need

hstore_to_json(hstore) returns text
json_to_hstore(text) returns hstore

Insert them into your query where appropriate.
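
Assuming such conversion functions were added to the contrib module, usage
might look like this (the "things" table and its hstore column are invented):

INSERT INTO things (attrs) VALUES (json_to_hstore($1));   -- $1 carries JSON text

SELECT hstore_to_json(attrs) AS attrs_json
  FROM things
 WHERE id = $1;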

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Patch for 1-byte buffer overflow in libpq PQencryptPassword

2009-09-14 Thread ljb
A trivial little fix for PostgreSQL-8.4.1.

Calling the libpq function PQencryptPassword(password, "") doesn't make
a lot of sense (empty string for username). But if you do, it results
in a 1-byte buffer overflow in pg_md5_encrypt().  (This is in
backend/libpq/md5.c, but it's client, not backend.)

This is because pg_md5_encrypt(password, salt, salt_len, buf) with
salt_len=0 allocates a buffer crypt_buf of size strlen(password), then
uses strcpy to copy the password in there. The null byte at the end of
the password overruns the end of the allocated buffer.

(Found during pgtclng testing, looking for the cause of an error
on WinXP only, which turned out to have nothing to do with this.)

Two possible suggested fixes to src/backend/libpq/md5.c, pg_md5_encrypt():
1) Allocate crypt_buf to (passwd_len + 1 + salt_len)
2) Use memcpy(crypt_buf, passwd, passwd_len) not strcpy(crypt_buf, passwd).

I like fix #2 better, although fix #1 avoids a weirdness with
PQencryptPassword("", "") calling malloc(0) with platform-dependent
results (which was the problem I was chasing with pgtclng).

Patch below is for fix #2.

--- postgresql-8.4.1/src/backend/libpq/md5.c.bak	2009-01-01 12:23:42.0 -0500
+++ postgresql-8.4.1/src/backend/libpq/md5.c	2009-09-13 11:21:59.0 -0400
@@ -324,7 +324,7 @@
 	 * Place salt at the end because it may be known by users trying to crack
 	 * the MD5 output.
 	 */
-	strcpy(crypt_buf, passwd);
+	memcpy(crypt_buf, passwd, passwd_len);
 	memcpy(crypt_buf + passwd_len, salt, salt_len);
 
 	strcpy(buf, "md5");

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Feature Request: JSON input for hstore

2009-09-14 Thread Andrew Dunstan



David Fetter wrote:

On Mon, Sep 14, 2009 at 06:55:52PM -0400, Andrew Dunstan wrote:
  

David Fetter wrote:


On Mon, Sep 14, 2009 at 03:14:57PM -0700, Josh Berkus wrote:
  

Hackers,

I've just had a feature request from a client that we come up with
a way to enable JSON input into hstore.  This would make hstore
much more useful for software developers.

First question: would this go in the language driver, libpq, or
the backend?  Thoughts?


The backend, ideally in some pluggable way.  XML, under proper
circumstances, should work, as should YAML, etc.
  

The way the standard specifies SQL/XML requires grammar support.
That  means it can't really be pluggable. We've been over this
before.



I haven't found anything about hstore in the standard, so I'm not sure
how this applies.


  


You're the one that mentioned the backend being pluggable w.r.t. XML 
among other things.


In any case, this is academic. It has become clear in off-list 
discussion that support for JSON input isn't really what the requestor 
needs. What he needs is a way to translate a Perl hashref to an hstore 
literal and vice versa, and Andrew Gierth has written some nice routines 
in Perl to do just that.


cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch for 1-byte buffer overflow in libpq PQencryptPassword

2009-09-14 Thread Tom Lane
ljb ljb1...@pobox.com writes:
 Two possible suggested fixes to src/backend/libpq/md5.c, pg_md5_encrypt():
 1) Allocate crypt_buf to (passwd_len + 1 + salt_len)
 2) Use memcpy(crypt_buf, passwd, passwd_len) not strcpy(crypt_buf, passwd).

 I like fix #2 better, although fix #1 avoids a weirdness with
 PQencryptPassword("", "") calling malloc(0) with platform-dependent
 results (which was the problem I was chasing with pgtclng).

Hmm ... I'm inclined to do both.  I agree that the memcpy coding is
cleaner than strcpy when we don't actually care about adding a trailing
null.  But malloc(0) is unportable and best avoided.
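
Something like this, perhaps: a standalone sketch of the allocation-and-copy
part only, with an invented helper name, and not the actual committed change:

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

static bool
build_crypt_buf(const char *passwd, const char *salt, size_t salt_len,
                char **out, size_t *out_len)
{
    size_t  passwd_len = strlen(passwd);

    /* fix #1: the +1 keeps malloc() away from a zero-byte request */
    char   *crypt_buf = malloc(passwd_len + salt_len + 1);

    if (crypt_buf == NULL)
        return false;

    /* fix #2: copy exactly passwd_len bytes; no trailing NUL is wanted here */
    memcpy(crypt_buf, passwd, passwd_len);
    memcpy(crypt_buf + passwd_len, salt, salt_len);

    *out = crypt_buf;
    *out_len = passwd_len + salt_len;
    return true;
}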

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] CommitFest 2009-09: Now In Progress

2009-09-14 Thread Robert Haas
commitfest.postgresql.org is updated, please submit new patches at
https://commitfest.postgresql.org/action/commitfest_view/open

Initial reviewing assignments have been sent to -rrreviewers.  If you
didn't get one due to an oversight on my part, please drop me a note!

Thanks,

...Robert

[P.S. I learned my lesson - last CF the equivalent email said that the
CF was closed, which of course was not what I meant at all.]

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CommitFest 2009-09: Now In Progress

2009-09-14 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 commitfest.postgresql.org is updated, please submit new patches at
 https://commitfest.postgresql.org/action/commitfest_view/open

Anyone else finding that logging in at commitfest.postgresql.org fails?
I get

Internal Server Error

The server encountered an internal error or misconfiguration and was unable to 
complete your request.

Please contact the server administrator, webmas...@postgresql.org and inform 
them of the time the error occurred, and anything you might have done that may 
have caused the error.

More information about this error may be available in the server error
log.


regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CommitFest 2009-09: Now In Progress

2009-09-14 Thread Stephen Frost
* Tom Lane (t...@sss.pgh.pa.us) wrote:
 Robert Haas robertmh...@gmail.com writes:
  commitfest.postgresql.org is updated, please submit new patches at
  https://commitfest.postgresql.org/action/commitfest_view/open
 
 Anyone else finding that logging in at commitfest.postgresql.org fails?
 I get
 
 Internal Server Error

Yes.  The auth server appears to be down (wwwmaster).  Much discussion
has ensued on IRC, but none of us are admins. :/

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] CommitFest 2009-09: Now In Progress

2009-09-14 Thread Robert Haas
On Mon, Sep 14, 2009 at 9:30 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 commitfest.postgresql.org is updated, please submit new patches at
 https://commitfest.postgresql.org/action/commitfest_view/open

 Anyone else finding that logging in at commitfest.postgresql.org fails?
 I get

 Internal Server Error

 The server encountered an internal error or misconfiguration and was unable 
 to complete your request.

 Please contact the server administrator, webmas...@postgresql.org and inform 
 them of the time the error occurred, and anything you might have done that 
 may have caused the error.

 More information about this error may be available in the server error
 log.

Yep, we were just discussing on IRC.  It seems wwwmaster.postgresql.org
is down (unpingable).  If you were already logged in, you could work
just fine, but you can't get logged in, because there's no way to
authenticate you.  (You know all those people trying to impersonate
Tom Lane)

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bulk Inserts

2009-09-14 Thread Jeff Janes
2009/9/14 Pierre Frédéric Caillaud li...@peufeu.com


 I've done a little experiment with bulk inserts.

 = heap_bulk_insert()

 Behaves like heap_insert except it takes an array of tuples (HeapTuple
 *tups, int ntups).

 - Grabs a page (same as heap_insert)

 - While holding exclusive lock, inserts as many tuples as it can on the
 page.
- Either the page gets full
- Or we run out of tuples.

 - Generate xlog : choice between
- Full Xlog mode :
- if we inserted more than 10 tuples (totaly bogus
 heuristic), log the entire page
- Else, log individual tuples as heap_insert does


Does that heuristic change the timings much?  If not, it seems like it would
be better to keep it simple and always do the same thing, like log the tuples
(if it is done under one WALInsertLock, which I am assuming it is..)


- Light log mode :
- if page was empty, only xlog a new empty page record,
 not page contents
- else, log fully
- heap_sync() at the end

 - Release the page
 - If we still have tuples to insert, repeat.

 Am I right in assuming that :

 1)
 - If the page was empty,
 - and log archiving isn't used,
 - and the table is heap_sync()'d at the end,
 = only a new empty page record needs to be created, then the page can be
 completely filled ?


Do you even need the new empty page record?  I think a zero page will be
handled correctly next time it is read into shared buffers, won't it?  But I
guess it is needed to avoid problems with partial page writes that would
leave the page in a state that is neither all zeros nor consistent.



 2)
 - If the page isn't empty
 - or log archiving is used,
 = logging either the inserted tuples or the entire page is OK to guarantee
 persistence ?


If the entire page is logged, would it have to be marked as not removable by
the log compression tool?  Or can the tool recreate the needed delta?

Jeff


Re: [HACKERS] CommitFest 2009-09: Now In Progress

2009-09-14 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 Yep, we were just discussing on IRC.  It seems wwwmaster.postgresql.org
 is down (unpingable).

Looks like someone fixed it.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bulk Inserts

2009-09-14 Thread Jeff Janes
2009/9/14 Pierre Frédéric Caillaud li...@peufeu.com


 Replying to myself...

 Jeff suggested to build pages in local memory and insert them later in the
 table. This is what's used in CLUSTER for instance, I believe.

 It has some drawbacks though :

 - To insert the tuples in indexes, the tuples need tids, but if you build
 the page in local memory, you don't know on which page they will be until
 after allocating the page, which will probably be done after the page is
 built, so it's a bit of a chicken and egg problem.


Yes, I did not consider that to be a problem because I did not think it
would be used on indexed tables.  I figured that the gain from doing bulk
inserts into the table would be so diluted by the still-bottle-necked index
maintenance that it was OK not to use this optimization for indexed tables.



 - It only works on new pages. Pages which are not empty, but have free
 space, cannot be written in this way.


My original thought was based on the idea of still using heap_insert, but
with a modified form of bistate which would hold the exclusive lock and not
just a pin.  If heap_insert is being driven by the unmodified COPY code,
then it can't guarantee that COPY won't stall on a pipe read or something,
and so probably shouldn't hold an exclusive lock while filling the block.
That is why I decided a local buffer would be better, as the exclusive lock
is really a no-op and wouldn't block anyone.  But if you are creating a new
heap_bulk_insert and modifying the COPY to go with it, then you can
guarantee it won't stall from the driving end, instead.

 Whether any of these approaches will be maintainable enough to be
integrated into the code base is another matter.  It seems like there is
already a lot of discussion going on around various permutations of copy
options.

Cheers,

Jeff


Re: [HACKERS] Resjunk sort columns, Heikki's index-only quals patch, and bug #5000

2009-09-14 Thread Robert Haas
On Mon, Sep 14, 2009 at 5:41 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 Heikki Linnakangas wrote:
 Tom Lane wrote:
 It strikes me that in the cases where it wouldn't be necessary to
 compute junk sort-key columns, it would be because we were scanning an
 index that includes those values.  So if the plan were set up to pull
 those values from the index and return them, then we'd not have to add
 this extra complexity to grouping_planner --- the argument that it's not
 worth it to get rid of the junk columns comes back into play.  Moreover,
 such an ability would also mean that if the user *does* ask for the
 sort column value as output (ie it's not resjunk), we can still satisfy
 the query from the index without recomputing the expensive function.

 So this is where we come to the connection to Heikki's index-only-quals
 patch.  As submitted, that code is only able to use an index column in
 a scan qual, it's not able to return it as part of the scan result.
 This example makes it clear that that definition is missing a large
 part of the potential benefit of an index value extraction capability.

 To be able to do anything along that line would require some more work
 in the executor and a *lot* more work in the planner, and I'm honestly
 not sure what the planner part of it would look like.

 I think we should separate the Heap Fetch operation from the IndexScan.

 I've been hacking on that approach. It's quite unfinished, but before I
 spend any more time on it, I'd like to get some feedback on the overall
 design.

 The attached patch can create plans where quals are checked and joins
 are performed using values from indexes only, and the heap tuples are
 fetched only for matching rows. Passes regression tests, but code is
 quite ugly at points. Cost estimation is bogus. The patch builds on the
 indexam-api-changes patch I posted earlier, which is also attached. I
 haven't yet done the changes to that patch that were discussed.

 I haven't done any performance testing. The overhead of an extra
 executor node for each index scan could slow down simple queries, we
 might need to compensate that somehow, maybe reintroduce a fastpath
 combined IndexScan+HeapFetch node.  I'm also afraid the extra work I've
 pushed to the stage where Paths are constructed could slow down planning
 quite a bit if you have a lot of indexes.


 Path nodes now carry a targetlist. That's because when you have a path like:

  HeapFetch
   - Join
     ...

 You won't have all the columns of the join rel available at the join
 node yet, because they will be fetched in the HeapFetch node above. The
 targetlist in Path nodes reflect that, and the targetlist of the final
 Plan nodes are created from the targetlists in the Path nodes instead of
 the ones in RelOptInfos.

 Per earlier discussion, I changed the way index tuple fetching works in
 B-tree, so that it can now be relied on. Matching index tuples are
 copied to backend-local memory when the scan steps on a page.

 Var nodes that refer to index columns (indexquals and the new index-only
 filters) now have a new field, varindexno, set. While we could've
 continued with the old representation, now that we have more expressions
 that refer to index vars instead of heap vars, this makes debugging easier.

Hi, I'm reviewing this patch for the 2009-09 CommitFest.

It doesn't seem to compile.

make[4]: Entering directory `/home/rhaas/pgsql-git/src/backend/optimizer/path'
gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing
-fwrapv -g -I../../../../src/include -D_GNU_SOURCE
-I/usr/include/libxml2   -c -o joinpath.o joinpath.c -MMD -MP -MF
.deps/joinpath.Po
joinpath.c: In function ‘can_bubbleup’:
joinpath.c:170: warning: implicit declaration of function ‘make_indexonly_expr’
joinpath.c: In function ‘bubbleup_step’:
joinpath.c:187: warning: implicit declaration of function ‘makeVar’
joinpath.c:188: error: ‘SelfItemPointerAttributeNumber’ undeclared
(first use in this function)
joinpath.c:188: error: (Each undeclared identifier is reported only once
joinpath.c:188: error: for each function it appears in.)
joinpath.c:189: error: ‘TIDOID’ undeclared (first use in this function)
joinpath.c:189: warning: assignment makes pointer from integer without a cast

Actually, before I even tried compiling this, I was looking through
the joinpath.c changes, since that is an area of the code with which I
have some familiarity.  As I'm sure you're aware, the lack of
commenting makes it quite difficult to understand what this is trying
to do, and the functions are poorly named.  It isn't self-explanatory
what bubbling up means, even in the limited context of joinpath.c.

Leaving that aside, I think that the approach here is likely wrong;
the decision about when to perform a heap fetch doesn't seem to be
based on cost, which I think it needs to be.  Consider A IJ B, with
the scan over A implemented as an index scan.  It 

Re: [HACKERS] [BUGS] BUG #5053: domain constraints still leak

2009-09-14 Thread Tom Lane
[ probably time to move this thread to -hackers ]

There is some moderately interesting reading material in section
4.17.4 Domain constraints of SQL:2008.  In particular, it appears to
me that the standard goes out of its way to NOT claim that every value
that is of a domain type satisfies the domain's constraints.  It looks
to me that the implementation they have in mind is that domain
constraints are to be checked:

(1) when a value is assigned to a *table* column having that domain type;

(2) when a value is converted to that domain type by an *explicit*
cast construct;

(3) nowhere else.

If I'm reading this right, it sidesteps most of the concerns we have
been worrying about here, at the cost of being perhaps more surprising
and less useful than one would expect.  It would also mean that a lot
of our existing domain behavior is wrong.  I think there is ammunition
here for an argument that, in effect, values in flight in expression
or query evaluation should always be considered to be of base types,
and domain constraints should only be checked when assigning to a
persistent storage location such as a table field or plpgsql variable
(plus the special case for CAST constructs).
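
If that reading is right, the expected behavior would look roughly like
this (the domain and table are invented for illustration):

CREATE DOMAIN posint AS int CHECK (VALUE > 0);
CREATE TABLE t (x posint);

INSERT INTO t VALUES (-1);    -- (1) checked: assignment to a table column fails
SELECT CAST(-1 AS posint);    -- (2) checked: explicit cast fails
-- (3) a value merely "in flight" in an expression would not be re-checked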

regards, tom lane



Re: [HACKERS] Encoding issues in console and eventlog on win32

2009-09-14 Thread Itagaki Takahiro

Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:

 Can't we use MultiByteToWideChar() to convert directly to the required
 encoding, avoiding the double conversion?

Here is an updated version of the patch.
I use direct conversion in pgwin32_toUTF16() if a corresponding codepage
is available. If it is not, I still fall back to the double conversion.

Now pgwin32_toUTF16() is exported from mbutil.c. I used the function
in the following places, although the main target of the patch is the eventlog.

  * WriteConsoleW() - write the unredirected stderr log.
  * ReportEventW()  - write the eventlog.
  * CreateFileW()   - open non-ASCII filenames (e.g. COPY TO/FROM 'mb-path').

This approach is only available on Windows, because other platforms don't
support locale-independent, wide-character-based system calls. Other
platforms would require a different approach, but even then we would still
be better off with win32-specific routines, because UTF-16 is the native
encoding on Windows.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center



eventlog-20090915.patch
Description: Binary data



Re: [HACKERS] [BUGS] BUG #5053: domain constraints still leak

2009-09-14 Thread Andrew Gierth
 Tom == Tom Lane t...@sss.pgh.pa.us writes:

 Tom [ probably time to move this thread to -hackers ]

 Tom There is some moderately interesting reading material in section
 Tom 4.17.4 Domain constraints of SQL:2008.  In particular, it
 Tom appears to me that the standard goes out of its way to NOT claim
 Tom that every value that is of a domain type satisfies the
 Tom domain's constraints.  It looks to me that the implementation
 Tom they have in mind is that domain constraints are to be checked:

 Tom (1) when a value is assigned to a *table* column having that
 Tom domain type;

 Tom (2) when a value is converted to that domain type by an
 Tom *explicit* cast construct;

 Tom (3) nowhere else.

By my reading it's a bit more involved than that. In particular, if
you cast from one rowtype to another, that seems to be defined in terms
of individual casts of each column, so CAST(ROW(null) AS rowtype) where
rowtype has one column of a not-null domain type would still count as an
explicit cast to the domain.

But there's a kicker: in Subclause 6.12, cast specification, in the
General Rules is:

  2) Case:
 a) If the cast operand specifies NULL, then the result of CS is
the null value and no further General Rules of this Subclause
are applied.
 b) If the cast operand specifies an empty specification, then
the result of CS is an empty collection of declared type TD
and no further General Rules of this Subclause are applied.
 c) If SV is the null value, then the result of CS is the null
value and no further General Rules of this Subclause are
applied.

That "no further General Rules" clause implies (assuming it's not a
blatant mistake in the spec) that this rule is therefore skipped in
the case of nulls:

 21) If the cast specification contains a domain name and that
 domain name refers to a domain that contains a domain
 constraint and if TV does not satisfy the check constraint
 definition simply contained in the domain constraint, then an
 exception condition is raised: integrity constraint violation.

Which would imply that you can cast a NULL to a domain type even if
that would violate a constraint. Which would pretty much leave
actual assignment to storage as being the only place for the check
to happen.
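
Put concretely, a minimal sketch of that reading (hypothetical object names;
whether current PostgreSQL matches this is exactly what's at issue):

CREATE DOMAIN dnotnull AS int CHECK (VALUE IS NOT NULL);
CREATE TABLE t3 (x dnotnull);

-- per General Rules 2a/2c above, a null operand means no further General
-- Rules apply, so GR 21 is skipped and this would return a null of type
-- dnotnull rather than raising an error:
SELECT CAST(NULL AS dnotnull);

-- ...while assignment to storage would still be checked:
INSERT INTO t3 VALUES (NULL);   -- integrity constraint violation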

-- 
Andrew (irc:RhodiumToad)



Re: [HACKERS] Streaming Replication patch for CommitFest 2009-09

2009-09-14 Thread Fujii Masao
Hi,

On Tue, Sep 15, 2009 at 12:47 AM, Greg Smith gsm...@gregsmith.com wrote:
 Putting on my DBA hat for a minute, the first question I see people asking
 is "how do I measure how far behind the slaves are?"  Presumably you can
 get that out of pg_controldata; my first question is whether that's complete
 enough information.  If not, what else should be monitored?

Currently the progress of replication is shown only in the PS display, so
the following three steps are necessary to measure the gap between the
servers:

1. Execute pg_current_xlog_location() to check how far the primary has
   written WAL.
2. Execute 'ps' to check how far the standby has written WAL.
3. Compare the above results.

This is very messy. A more user-friendly monitoring feature is needed,
and developing one is a TODO item for a later CommitFest.

I'm thinking of something like pg_standbys_xlog_location(), which returns
one row per standby server, showing the pid of the walsender, the host
name / port number / user OID of the standby, and the locations up to which
the standby has written and flushed WAL. A DBA could then measure the gap
with a single query on the primary, combining pg_current_xlog_location()
and pg_standbys_xlog_location(). Thoughts?
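
For example, something along these lines would then be possible (only a
sketch: pg_standbys_xlog_location() and its output columns are just a
proposal at this point, so all the names below are hypothetical):

-- hypothetical function and column names, per the proposal above
SELECT pg_current_xlog_location() AS primary_write_location,
       s.procpid         AS walsender_pid,
       s.client_host,
       s.client_port,
       s.usesysid,
       s.write_location  AS standby_write_location,
       s.flush_location  AS standby_flush_location
FROM pg_standbys_xlog_location() AS s;

That would keep the whole measurement on the primary, with no need to run
'ps' on each standby.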

But the problem might be what happens after the primary has gone down.
The current write location of the primary can no longer be checked via
pg_current_xlog_location(), and might need to be calculated from the WAL
files on the primary. Is a tool which performs such a calculation
necessary?

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] Streaming Replication patch for CommitFest 2009-09

2009-09-14 Thread Fujii Masao
Hi,

On Tue, Sep 15, 2009 at 1:06 AM, Andrew Dunstan and...@dunslane.net wrote:
 One question I have is what is the level of traffic involved between the
 master and the slave. I know numbers of people have found the traffic
 involved in shipping of log files to be a pain, and thus we get things like
 pglesslog.

That is almost the same as the WAL write traffic on the primary. In fact,
the contents of the WAL files written on the standby are exactly the same
as those on the primary. Currently SR provides no compression of the
traffic. Should we introduce something like walsender_hook/walreceiver_hook
to cooperate with add-on compression programs like pglesslog?

If you always use PITR instead of normal recovery, full_page_writes = off
might be another solution.
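
To get a rough feel for how much traffic to expect, one way is simply to
sample the WAL position on the primary over an interval and compare the two
values (a sketch using an existing function; converting the xlog locations
to a byte count is left to the reader or a small script):

-- run this twice, some minutes apart; the distance between the two
-- locations is approximately the WAL (and hence SR) volume generated
SELECT now() AS sampled_at,
       pg_current_xlog_location() AS wal_write_location;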

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] WIP - syslogger infrastructure changes

2009-09-14 Thread Itagaki Takahiro

Magnus Hagander mag...@hagander.net wrote:

 Second, data transfer from the backends is now in the form of
 structured data, and the actual logging message is put together in the
 syslogger

I'd like to suggest the opposite approach -- per-backend log files.
The syslogger appending all logs to a single file will become a performance
bottleneck when we want to write a lot of logs. Per-backend log files
like postgres.backend-id.log would help there.


Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center





Re: [HACKERS] CommitFest 2009-09: Now In Progress

2009-09-14 Thread Peter Eisentraut
On mån, 2009-09-14 at 21:14 -0400, Robert Haas wrote:
 [P.S. I learned my lesson - last CF the equivalent email said that the
 CF was closed, which of course was not what I meant at all.]

Yeah, except is it just me or is this "open" terminology equally weird?
Isn't the 2009-09 fest the one that is open right now?





Re: [HACKERS] RfD: more powerful any types

2009-09-14 Thread Pavel Stehule
2009/9/15 decibel deci...@decibel.org:

 On Sep 14, 2009, at 1:02 PM, Pavel Stehule wrote:

 2009/9/14 Merlin Moncure mmonc...@gmail.com:

 On Mon, Sep 14, 2009 at 1:42 PM, Pavel Stehule pavel.steh...@gmail.com
 wrote:

 How is it any worse than what people can already do? Anyone who isn't
 aware
 of the dangers of SQL injection has already screwed themselves. You're
 basically arguing that they would put a variable inside of quotes, but
 they
 would never use ||.

 simply - people use functions quote_literal or quote_ident.

 you still have use of those functions:
 execute sprintf('select * from %s', quote_ident($1));

 sprintf is no more or less dangerous than || operator.

 Sure. I was commenting on a different feature:

 some := 'select * from $1'

 regards
 Pavel

 p.s. In this case, I am not sure which is more readable:

 execute 'select * from ' || quote_ident($1)

 is quite readable too.


 Ahh... the problem is one of fixating on an example instead of the overall
 use case.

 More examples...

 RETURN 'Your account is now $days_overdue days overdue. Please contact your
 account manager ($manager_name) to ...';

 And an example of how readability would certainly be improved...

 sql := $$INSERT INTO cnu_stats.$$ || v_field_name || $$( $$ || v_field_name
 || $$ )
    SELECT DISTINCT $$ || v_field_name || $$
        FROM chunk t
        WHERE NOT EXISTS( SELECT * FROM cnu_stats.$$ || v_field_name || $$ s
 WHERE s.$$
            || v_field_name || $$ = t.$$ || v_field_name || $$ )$$


It isn't fair :) -- why do you use $$ without single quotes? And this
case would still be vulnerable to SQL injection. Maybe you or I know
what SQL injection means, but beginners know nothing, and such people
will write the following bad code and be happy:

sql := $$SELECT * FROM '${table_name}'$$

But this code is wrong!

 becomes

 sql := $$INSERT INTO cnu_stats.${v_field_name} ( ${v_field_name} )
    SELECT DISTINCT $v_field_name
        FROM chunk t
        WHERE NOT EXISTS( SELECT * FROM cnu_stats.${v_field_name} s
                              WHERE s.${v_field_name} = t.${v_field_name} )$$

 Granted, that example wouldn't be too bad with sprintf, but only because
 everything is referencing the same field.

Really, I dislike bash-like syntax in SQL. As I see it, SQL is a
language for normal people - that is the reason why it's verbose and
English-like. Bash is software for UNIX hackers. If we borrow features
from other languages, then bash, C and Perl should be the last ones
(I know these languages well and use them a lot). I think there are
better languages to borrow from, such as Ada, SQL/PSM and Python.

regards
Pavel Stehule

 --
 Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
 Give your computer some brain candy! www.distributed.net Team #1828






Re: [HACKERS] CommitFest 2009-09: Now In Progress

2009-09-14 Thread Stefan Kaltenbrunner

Stephen Frost wrote:

* Tom Lane (t...@sss.pgh.pa.us) wrote:

Robert Haas robertmh...@gmail.com writes:

commitfest.postgresql.org is updated, please submit new patches at
https://commitfest.postgresql.org/action/commitfest_view/open

Anyone else finding that logging in at commitfest.postgresql.org fails?
I get

Internal Server Error


Yes.  The auth server appears to be down (wwwmaster).  Much discussion
has ensued on IRC, but none of us are admins. :/


being an admin only helps so much - especially if it's the network that 
breaks down and not something on the server...



Stefan



Re: [HACKERS] CommitFest 2009-09: Now In Progress

2009-09-14 Thread Stefan Kaltenbrunner

Tom Lane wrote:

Robert Haas robertmh...@gmail.com writes:

Yep, we were just discussing on IRC.  It seems wwwmaster.postgreql.org
is down (unpingable).


Looks like someone fixed it.


Yeah, we had a major network breakdown starting at ~02:29 CEST which got
fixed at ~04:00 CEST.



Stefan




Re: [HACKERS] WIP - syslogger infrastructure changes

2009-09-14 Thread Magnus Hagander
On 15 sep 2009, at 07.21, Itagaki Takahiro itagaki.takah...@oss.ntt.co.jp wrote:

 Magnus Hagander mag...@hagander.net wrote:

  Second, data transfer from the backends is now in the form of
  structured data, and the actual logging message is put together in the
  syslogger

 I'd like to suggest the opposite approach -- per-backend log files.
 The syslogger appending all logs to a single file will become a
 performance bottleneck when we want to write a lot of logs. Per-backend
 log files like postgres.backend-id.log would help there.

That would make the log more or less completely useless for any of the
situations I've been in.

I can see each backend writing its own log, certainly, but keeping the
logs in separate files makes them useless without post-processing, which
in most cases means useless for day-to-day work.

/Magnus

