Re: [HACKERS] Exclude pg_largeobject from pg_dump

2016-11-29 Thread Guillaume Lelarge
2016-11-29 17:12 GMT+01:00 Stephen Frost :

> Guillaume, Amul,
>
> * Amul Sul (sula...@gmail.com) wrote:
> > The following review has been posted through the commitfest application:
> > make installcheck-world:  tested, passed
> > Implements feature:   tested, passed
> > Spec compliant:   not tested
> > Documentation:tested, passed
> >
> > Patch v6 looks good to me, passing to committer.
> >
> > Thanks !
> >
> > The new status of this patch is: Ready for Committer
>
> I've pushed this patch.  In the future if you are working with the
> pg_dump TAP tests and add a new 'run', be sure to update the 'tests' to
> indicate which tests should 'like' or 'unlike' that run.  If you have
> any questions, feel free to ask.
>
>
Thank you. And thanks Amul for the review.


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


[HACKERS] Typo in psql-ref.sgml

2016-03-06 Thread Guillaume Lelarge
Hi,

While translating the 9.5 ref/psql-ref.sgml, I found this:

and variables shows help about about
psql configuration variables

The word "about" is written twice. Sounds like a nice typo to me :)

See attached patch (for 9.5 and HEAD).

Thanks.


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com
diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index 6d0cb3d..f2ea63b 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -597,7 +597,7 @@ EOF
   explained: commands describes psql's
   backslash commands; options describes the command-line
   options that can be passed to psql;
-  and variables shows help about about psql configuration
+  and variables shows help about psql configuration
   variables.
   
   
@@ -2734,7 +2734,7 @@ testdb= \setenv LESS -imx4F
 explained: commands describes psql's
 backslash commands; options describes the command-line
 options that can be passed to psql;
-and variables shows help about about psql configuration
+and variables shows help about psql configuration
 variables.
 
 

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] New email address

2015-11-23 Thread Guillaume Lelarge
On 24 Nov 2015 at 01:05, "Michael Paquier"  wrote:
>
> On Tue, Nov 24, 2015 at 3:41 AM, Kevin Grittner  wrote:
> > Yahoo recently changed their DMARC policy, and after some
> > investigation and a support case with Yahoo, it is now clear that
> > their email systems can no longer be used with the postgresql.org
> > lists.  I've migrated from kgri...@ymail.com to kgri...@gmail.com.
>
> Something to be aware of as well: I noticed that sometimes your emails
> coming from @ymail.com were flagged as spam by gmail. People should be
> careful of that if they use it.

+1, happened a lot actually.


Re: [HACKERS] custom function for converting human readable sizes to bytes

2015-11-22 Thread Guillaume Lelarge
On 22 Nov 2015 at 21:29, "Pavel Stehule"  wrote:
>
>
>
> 2015-11-22 21:19 GMT+01:00 Jim Nasby :
>>
>> On 11/22/15 2:11 PM, Pavel Stehule wrote:
>>>
>>> What about pg_size(text), pg_size(value bigint, unit text) ?
>>
>>
>> I like it, though I'd make it numeric or float. pg_size(3.5, 'GB')
certainly seems like a reasonable use case...
>
>
> yes, good note.
>

What about pg_size_unpretty()?
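As a sketch of what such a function has to do (the regex and unit table below are this example's own, not any committed code; the feature eventually landed in core as pg_size_bytes() in 9.6, with a different implementation):

```python
import re

# Binary units, matching pg_size_pretty()'s convention (1 kB = 1024 bytes).
UNITS = {"bytes": 1, "kb": 1024, "mb": 1024**2, "gb": 1024**3, "tb": 1024**4}

def size_to_bytes(text: str) -> int:
    """Parse a human-readable size such as '3.5 GB' into a byte count."""
    m = re.fullmatch(r"\s*(\d+(?:\.\d+)?)\s*([A-Za-z]+)?\s*", text)
    if not m:
        raise ValueError(f"invalid size: {text!r}")
    value = float(m.group(1))
    unit = (m.group(2) or "bytes").lower()
    if unit not in UNITS:
        raise ValueError(f"unknown unit: {m.group(2)!r}")
    return int(value * UNITS[unit])

print(size_to_bytes("3.5 GB"))  # 3758096384
```

Accepting a fractional value covers Jim's point above: pg_size(3.5, 'GB') is only useful if the value isn't restricted to bigint.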


Re: [HACKERS] pageinspect patch, for showing tuple data

2015-11-17 Thread Guillaume Lelarge
Hi,

On 12 Nov 2015 at 1:05 AM, "Michael Paquier"  wrote:
>
> On Thu, Nov 12, 2015 at 12:41 AM, Nikolay Shaplov
>  wrote:
> > In a message of 28 October 2015 16:57:36, Michael Paquier wrote:
> >> On Sat, Oct 17, 2015 at 1:48 AM, Michael Paquier wrote:
> >> > On Sat, Oct 17, 2015 at 5:15 AM, Nikolay Shaplov wrote:
> >> >> Or it's ready to commit, and just not marked this way?
> >> >
> >> > No, I don't think we have reached this state yet.
> >> >
> >> >> I am going to make report based on this patch in Vienna. It would be
> >> >> nice to have it committed until then :)
> >> >
> >> > This is registered in this November's CF and the current one is not
> >> > completely wrapped up, so I haven't rushed into looking at it.
> >>
> >> So, I have finally been able to look at this patch in more details,
> >> resulting in the attached. I noticed a couple of things that should be
> >> addressed, mainly:
> >> - addition of a new routine text_to_bits to perform the reverse
> >> operation of bits_to_text. This was previously part of
> >> tuple_data_split, I think that it deserves its own function.
> >> - split_tuple_data should be static
> >> - t_bits_str should not be a text, rather a char* fetched using
> >> text_to_cstring(PG_GETARG_TEXT_PP(4)). This way there is no need to
> >> perform extra calculations with VARSIZE and VARHDRSZ
> >> - split_tuple_data can directly use the relation OID instead of the
> >> tuple descriptor of the relation
> >> - t_bits was leaking memory. For correctness I think that it is better
> >> to free it after calling split_tuple_data.
> >> - PG_DETOAST_DATUM_COPY allocates some memory, this was leaking as
> >> well in raw_attr actually. I refactored the code such as a bytea* is
> >> used and always freed when allocated to avoid leaks. Removing raw_attr
> >> actually simplified the code a bit.
> >> - I simplified the docs, that was largely too verbose in my opinion.
> >> - Instead of using VARATT_IS_1B_E and VARTAG_EXTERNAL, using
> >> VARATT_IS_EXTERNAL and VARATT_IS_EXTERNAL_ONDISK seems more adapted to
> >> me, those other ones are much more low-level and not really spread in
> >> the backend code.
> >> - Found some typos in the code, the docs and some comments. I reworked
> >> the error messages as well.
> >> - The code format was not really in line with the project guidelines.
> >> I fixed that as well.
> >> - I removed heap_page_item_attrs for now to get this patch in a
> >> bare-bone state. Though I would not mind if this is re-added (I
> >> personally don't think that's much necessary in the module
> >> actually...).
> >> - The calculation of the length of t_bits using HEAP_NATTS_MASK is
> >> more correct as you mentioned earlier, so I let it in its state.
> >> That's actually more accurate for error handling as well.
> >> That's everything I recall I have. How does this look?
> > You've completely rewritten everything ;-)
> >
> > Let everything be the way you wrote. This code is better than mine.
> >
> > With one exception. I really need the heap_page_item_attrs function. I
used it in
> > my Tuples Internals presentation
> > https://github.com/dhyannataraj/tuple-internals-presentation
> > and I am 100% sure that this function is needed for educational
purposes, and
> > this function should be as simple as possible, so students can use it
without
> > extra effort.
>
> Fine. That's your patch after all.
>
> > I still think that the documentation should be more verbose than your
> > version, but I can accept your version.
>
> I am not sure that's necessary, pageinspect is for developers.
>

FWIW, I agree that pageinspect is mostly for devs. Still, as I said to
Nikolay after his talk at pgconf.eu, it's a nice tool for people who like
to know what's going on deep inside PostgreSQL.

So +1 for that nice feature.

> > Who is going to add heap_page_item_attrs to your patch? me or you?
>
> I recall some parts of the code I still did not like much :) I'll grab
> some room to have an extra look at it.


Re: [HACKERS] Annotating pg_dump/pg_restore

2015-08-20 Thread Guillaume Lelarge
2015-08-20 18:43 GMT+02:00 Kevin Burke bu...@shyp.com:

 Hi,
 Normally I try to annotate incoming queries, to make it easier to diagnose
 slow ones. For example:

 -- Users.findByPhoneNumber
 SELECT * FROM 

 The pg_dump and pg_restore commands issue a COPY with no possibility of
 adding a comment. It would be useful to know who or what exactly is
 performing a COPY against a database - maybe a nightly backup script, maybe
 a developer copying a table.

 I was wondering if you could have a command line flag that let you attach
 a comment to the query?


You already have the application name. You just need to log it.
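To make that concrete, here is a minimal sketch. Both pieces are standard PostgreSQL facilities (log_line_prefix's %a escape and libpq's PGAPPNAME environment variable); the prefix string and the nightly-backup name are just examples:

```shell
# postgresql.conf: include the application name (%a) in every log line.
#   log_line_prefix = '%m [%p] app=%a '
#
# pg_dump already reports "pg_dump" as its application_name by default;
# a backup script can set a more telling one through libpq's PGAPPNAME:
PGAPPNAME=nightly-backup pg_dump -Fc mydb > /backups/mydb.dump
```

Log entries for the dump's COPY statements then carry app=nightly-backup, distinguishing them from a developer's ad-hoc pg_dump run.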


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


Re: [HACKERS] Bypassing SQL lexer and parser

2015-07-06 Thread Guillaume Lelarge
On 6 Jul 2015 at 7:16 PM, Данила Поярков d...@dannote.net wrote:

 Hello!

 What is the best starting point into the PostgreSQL internal APIs for operating
directly on the storage (performing basic INSERTs, UPDATEs, SELECTs and
simple JOINs by hand)? I'm looking for something similar to the MySQL Cluster
NDB API or the InnoDB internal API (the late HailDB and Embedded InnoDB).

 Does PostgreSQL support any types of plugins/extensions other
than FDWs and custom data types? I mean, is it possible to start another
daemon within Postgres without even slightly modifying the main codebase?


That sounds a lot like a background worker.

 In case you are wondering why anyone could need something like that: I'm
looking for a way to implement a small subset of HTSQL (see [
http://htsql.org/], of course not with the whole HTTP protocol) natively in
one of popular RDBMS without extra URL-to-SQL conversion. Yes, I know that
a lot of hard work was done on making query plans, optimizing, etc., but I
still really wish to do this for some very specific needs. I'm not going to
completely replace the SQL and thus will be happy to do those manipulations
on a SQL 2008-compliant DBMS.


Good luck with that :-)

-- 
Guillaume


Re: [HACKERS] [PROPOSAL] VACUUM Progress Checker.

2015-07-01 Thread Guillaume Lelarge
On 2 Jul 2015 at 7:28 AM, Simon Riggs si...@2ndquadrant.com wrote:

 On 2 July 2015 at 03:00, Rahila Syed rahilasye...@gmail.com wrote:


 Yes, I suggest just a single column on pg_stat_activity called
pct_complete

 Reporting remaining time can also be crucial for deciding whether to
continue or abort a VACUUM.
 The same has been suggested in the thread below,

 http://www.postgresql.org/message-id/13072.1284826...@sss.pgh.pa.us

 trace_completion_interval = 5s (default)

 Every interval, we report the current % complete for any operation that
supports it. We just show NULL if the current operation has not reported
anything or never will.

 We do this for VACUUM first, then we can begin adding other operations
as we work out how (for that operation).

 Thank you for explaining. This design seems good to me, except for adding
more than one column (percent_complete, remaining_time)


 It is attractive to have a remaining_time column, or a
predicted_completion_timestamp, but those columns are prediction
calculations rather than actual progress reports. I'm interested in seeing
a report that relates to actual progress made.


Agreed.

 Predicted total work required is also interesting, but is a much less
trustworthy figure.


And it is something a client app or an extension can compute. No need to
put this in core as long as we have the actual progress.
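A hypothetical sketch of what such a client-side computation could look like, assuming the pct_complete column proposed in this thread (it was never added in this form; the column name and the linear extrapolation are purely illustrative):

```sql
-- Extrapolate remaining time from elapsed time and reported progress.
-- pct_complete is the column proposed above; it does NOT exist in any
-- released pg_stat_activity.
SELECT pid,
       pct_complete,
       now() - query_start AS elapsed,
       (now() - query_start) * (100 - pct_complete) / pct_complete
         AS estimated_remaining
FROM pg_stat_activity
WHERE pct_complete IS NOT NULL
  AND pct_complete > 0;
```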

 I think we'll need to get wider input about the user interface for this
feature.



 if required, to pg_stat_activity can be less user-intuitive than having a
separate view for VACUUM.


 I think it is a mistake to do something just for VACUUM.

 Monitoring software will look at pg_stat_activity. I don't think we
should invent a separate view for progress statistics because it will cause
users to look in two places rather than just one. Reporting progress is
fairly cheap instrumentation, calculating a prediction of completion time
might be expensive.


+1

 Having said that, monitoring systems currently use a polling mechanism to
retrieve status data. They look at information published by the backend. We
don't currently have a mechanism to defer publication of expensive
monitoring information until requested by the monitoring system. If you
have a design for how that might work then say so, otherwise we need to
assume a simple workflow: the backend publishes whatever it chooses,
whenever it chooses and then that is made available via the monitoring
system via views.


 Your current design completely misses the time taken to scan indexes,
which is significant.

 There might be a justification to put this out of core, but measuring
progress of VACUUM wouldn't be it, IMHO.

 --
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] Publish autovacuum informations

2015-06-04 Thread Guillaume Lelarge
2015-01-05 17:44 GMT+01:00 Guillaume Lelarge guilla...@lelarge.info:

 2015-01-05 17:40 GMT+01:00 Robert Haas robertmh...@gmail.com:

 On Wed, Dec 31, 2014 at 12:46 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  I'd be all right with putting the data structure declarations in a file
  named something like autovacuum_private.h, especially if it carried an
  annotation that if you depend on this, don't be surprised if we break
  your code in future.

 Works for me.  I am not in general surprised when we do things that
 break my code, or anyway, the code that I'm responsible for
 maintaining.  But I think it makes sense to segregate this into a
 separate header file so that we are clear that it is only exposed for
 the benefit of extension authors, not so that other things in the core
 system can touch it.


 I'm fine with that too. I'll try to find some time to work on that.


So I took a look at this this week. I discovered, with the help of a
coworker, that I can already use the AutoVacuumShmem pointer and read the
struct. Unfortunately, it doesn't give me as much details as I would have
liked. The list of databases and tables aren't in shared memory. They are
local to the process that uses them. Putting them in shared memory (if at
all possible) would imply a much bigger patch than I was willing to write
right now.

Thanks anyway for the help.


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


Re: [HACKERS] pg_xlog - pg_xjournal?

2015-06-02 Thread Guillaume Lelarge
On 2 Jun 2015 at 6:37 AM, Michael Nolan htf...@gmail.com wrote:

 Why not take a simpler approach and create a zero length file in
directories that should not be fiddled with by non-experts using a file
name something like DO.NOT.DELETE.THESE.FILES?

 No, it won't prevent the incredibly stupid from doing incredibly stupid
things, nothing will.

Sounds much simpler and smarter to me than the other solutions.


Re: [HACKERS] RFC: Remove contrib entirely

2015-05-29 Thread Guillaume Lelarge
On 29 May 2015 at 8:10 AM, Pavel Stehule pavel.steh...@gmail.com wrote:

 Hi

 I am not sure if PGXN can substitute for contrib - mainly due to deployment -
it doesn't help with MS Windows. Installing the necessary software for
compilation there is terrible.


I agree it's hard to compile an extension on Windows, but that's already
what we have. And I'm sure EDB will put all the interesting contrib modules in
their Windows installer to help users. They already go way further than any
Linux packages do.

 Regards

 Pavel

 2015-05-28 18:19 GMT+02:00 Joshua D. Drake j...@commandprompt.com:


 Hello,

 This is a topic that has come up in various ways over the years. After
the long thread on pg_audit, I thought it might be time to bring it up
again.

 Contrib according to the docs is:

 These include porting tools, analysis utilities, and plug-in features
that are not part of the core PostgreSQL system, mainly because they
address a limited audience or are too experimental to be part of the main
source tree. This does not preclude their usefulness.

 It has also been mentioned many times over the years that contrib is a
holding tank for technology that would hopefully be pushed into core
someday.

 What I am suggesting:

 1. Analyze the current contrib modules for inclusion into -core. A few
of these are pretty obvious:

 pg_stat_statements
 citext
 postgres_fdw
 hstore
 pg_crypto
 [...]

 I am sure there will be plenty of fun to be had with what should or
shouldn't be merged into core. I think if we argue about the guidelines of
how to analyze what should be in core versus the merits of any particular
module, life will be easier. Here are some for a start:

 A. Must have been in contrib for at least two releases
 B. Must have visible community (and thus use case)

 2. Push the rest out into a .Org project called contrib. Let those who
are interested in the technology work on them or use them. This project
since it is outside of core proper can work just like other extension
projects. Alternately, allow the maintainers push them wherever they like
(Landscape, Github, Savannah, git.postgresql.org ...).

 Why I am suggesting this:

 1. Less code to maintain in core
 2. Eliminates the mysticism of contrib
 3. Removal of experimental code from core
 4. Most of the distributions package contrib separately anyway
 5. Some of core is extremely small use case (sepgsql, tsearch2, lo ...)
 6. Finding utilities for PostgreSQL used to be harder. It is rather dumb
simple teenage snapchat user easy now.
 8. Isn't this what pgxs is for?
 9. Everybody hates cleaning the closet until the end result.
 10. Several of these modules would make PostgreSQL look good anyway
(default case insensitive index searching with citext? It is a gimme)
 11. Contrib has been getting smaller and smaller. Let's cut the cord.
 12. Isn't this the whole point of extensions?

 Sincerely,

 jD

 --
 Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
 PostgreSQL Centered full stack support, consulting and development.
 Announcing I'm offended is basically telling the world you can't
 control your own emotions, so everyone else should do it for you.






Re: [HACKERS] RFC: Remove contrib entirely

2015-05-29 Thread Guillaume Lelarge
On 29 May 2015 at 8:01 AM, Fabien COELHO coe...@cri.ensmp.fr wrote:


 FWIW, I don't mind which one we put in core and which one we put out of
 core. But I like Joshua's idea of getting rid of contribs and pushing
them
 out as any other extensions.


 Hmmm.

 I like the contrib directory as a living example of how to do an
extension directly available in the source tree. It also allows testing
in-tree that the extension mechanism works. So I think it should be kept, at
least with a minimum set of dummy examples for this purpose, even if all
current extensions are moved out.


Agreed.

 Also, removing a feature is a regression, and someone is always bound to
complain... What is the real benefit? ISTM that it is a solution that fixes
no important problem. Reaching a consensus about what to move here or there
will consume valuable time that could be spent on more important tasks...
Is it worth it?


It would be less confusing for users. Contrib modules seem to be first
class extensions, casting doubt on other extensions. But the fact that they
aren't in core makes them not fully trusted by some users. Trying to explain
all that in a training session is a PITA. It would be much less confusing if
they were either in core or in their own repository.

 Also, moving things into the PostgreSQL main sources makes pg heavier for all
and benefits only some, so I think that some careful filtering must be
done before moving large contribs there. I guess this is part of the
argumentation.


Agreed.


Re: [HACKERS] RFC: Remove contrib entirely

2015-05-28 Thread Guillaume Lelarge
On 29 May 2015 at 5:33 AM, Joshua D. Drake j...@commandprompt.com wrote:


 On 05/28/2015 08:10 PM, Stephen Frost wrote:

 JD,


 This seems reasonable to me.  It's in line with the recent move from
 contrib to bin.  It'll just be quite a bit bigger of an undertaking.
 (50 threads to discuss the merits of each module separately?)  Maybe
 start by picking the top 5 and sort those out.


 The thing is, we don't have that many to argue about now, in fact:


 Alright, I'll bite. :)


 I knew somebody eventually would ;)



 F.1. adminpack


 Need it - pgAdmin can't sensibly install it or even include it in some
 way, and it'd be *very* painful to not have it for a lot of users.


Painful? The adminpack allows pgAdmin to change the config files remotely,
with a UI that doesn't make it easy, to say the least. You may well trash
your pg_hba.conf file and not be able to connect again after reloading. It
also allows you to read your log files remotely... if they only contain UTF8
characters (which doesn't happen much with my French customers). And
loading a 1GB log file is definitely painful.

I'd say it doesn't give much (though that doesn't mean I want it to be
dropped), but I'm pretty much telling my customers to drop it whenever I
can.

 Fair enough, although keep in mind we aren't telling people pgAdmin isn't
useful. We are just pushing it out of core. Who runs from source except
developers? Distributions would take care of this for us.


Yeah. The way I see this is that distributions would make packages for each
extension. And I don't see the difference between doing a

yum install postgresql94-contrib

And a

yum install postgresql94-adminpack

for example, except I would have to run various yum install commands to
install every extensions I need, but this is much better for me than
bloating my system with extensions I never use (earthdistance comes to mind
for example).

FWIW, I don't mind which one we put in core and which one we put out of
core. But I like Joshua's idea of getting rid of contribs and pushing them
out as any other extensions.

 F.2. auth_delay


 Should be a core feature.  Having this in a contrib module is silly.


 +1

 F.3. auto_explain


 Move to extension directory in the repo.


 +1



 F.4. btree_gin
 F.5. btree_gist


 Both of these should simply be in core.


 +1



 F.6. chkpass
 F.7. citext
 F.8. cube


 Push out and/or keep it in contrib in repo.


 Agreed except citext which I think should install by default.



 F.9. dblink


 Move to extension directory in the repo.


 Agreed.


 F.10. dict_int
 F.11. dict_xsyn


 Looks like these are just examples?  Maybe move to an 'examples'
 directory, or into src/test/modules, or keep in contrib.


 Agreed.


 F.12. earthdistance


 Depends on cube, so, same as whatever happens there.  I don't think
 extensions-in-repo should depend on contrib modules, as a rule.

 F.13. file_fdw
 F.14. fuzzystrmatch
 F.15. hstore


 Move to extension directory in the repo.


 Disagree, hstore should be in core. Rest, fine.



 F.16. intagg


 Obsolute, per the docs.  Push out and deal with the break, or keep it in
 contrib in repo.


 Spelling mistake aside ;) agreed


 F.17. intarray


 Move to extension directory in the repo.


 Agreed


 F.18. isn
 F.19. lo
 F.20. ltree
 F.21. pageinspect


 Move to extension directory in the repo.


 Except for maybe pageinspect, agreed.


 F.22. passwordcheck


 Should be an in-core capability and not shoved off into an extension.


 Agreed


 F.23. pg_buffercache


 Pull it into core.


 Agreed


 F.24. pgcrypto


 Move to extension directory in the repo.


 Sure.


 F.25. pg_freespacemap


 Should be in core.


 Agreed.


 F.26. pg_prewarm
 F.27. pgrowlocks


 Should be in core.


 With a change to pg_rowlocks, agreed.


 F.28. pg_stat_statements


 I'd actually prefer that this be in core, but I'd be alright with it
 being in extension directory in the repo.


 Agreed just not enabled by default.


 F.29. pgstattuple
 F.30. pg_trgm


 Should be in core.


 Agreed.



 F.31. postgres_fdw


 Move to extension directory in the repo.
 (actually, I'd be fine with both this and file_fdw being included in
 core..  I'm just not 100% sure how that'd look)


 I think they should be in core, not all FDWs of course but file and
postgres are kind of obvious to me.


 F.32. seg
 F.33. sepgsql


 Move to extension directory in the repo.


 Agreed.


 F.34. spi


 Maybe pull some into core..  or maybe all, or move to an extension.


 No opinion.


 F.35. sslinfo


 Should be in core.


 Agreed.


 F.36. tablefunc


 My gut reaction is that it should be in core for crosstab(), but David's
 talking about implementing PIVOT, so..


 Easy... give it 1 more release. If we get PIVOT, then we don't need it,
if we don't... all the better for us.


 F.37. tcn


 Should be in core, imv, but not a position I hold very strongly.


 no opinion



 F.38. test_decoding


 Should be in src/test/modules, or maybe some 'examples' dir.


Re: [HACKERS] pg_dump's aborted transactions

2015-02-04 Thread Guillaume Lelarge
2015-02-04 6:37 GMT+01:00 Tom Lane t...@sss.pgh.pa.us:

 Stephen Frost sfr...@snowman.net writes:
  All,
We recently had a client complain that check_postgres' commitratio
check would alert about relatively unused databases.  As it turns
out, the reason for this is because they automate running pg_dump
against their databases (surely a good thing..), but pg_dump doesn't
close out its transaction cleanly, leading to rolled back
transactions.

At first blush, at least, this strikes me as an oversight which we
should probably fix and possibly backpatch.

 No, somebody should fix check_postgres to count rollbacks as well as
 commits as activity (as they obviously are).


Well, actually, no. This is a commit ratio, not an activity counter, not
even a transaction count.

The formula right now is:

round(100.*sd.xact_commit/(sd.xact_commit+sd.xact_rollback), 2)

which, AFAICT, is correct.

The fact that the OP uses it to know if there's activity on his databases
can get him false positives if he has no actual activity, except for dumps.

I might be wrong, but there is nothing to fix in check_postgres (at
least for this issue ;) ). It's the user's expectation that needs fixing :)
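To see why the alert fires, plug some numbers into that formula. The sketch below (Python, with made-up counter values) mirrors the computation; a pg_dump run ends in roughly one rolled-back transaction, which is negligible on a busy database but dominates an idle one:

```python
# The commitratio computation, as in check_postgres:
#   round(100 * xact_commit / (xact_commit + xact_rollback), 2)
def commit_ratio(xact_commit: int, xact_rollback: int) -> float:
    return round(100.0 * xact_commit / (xact_commit + xact_rollback), 2)

# Busy database, a month of nightly dumps: the aborted transactions
# barely register.
print(commit_ratio(100000, 30))  # 99.97

# Idle database whose only traffic is the nightly dump: every dump's
# rollback counts against a tiny commit total, hence the false alarm.
print(commit_ratio(30, 30))      # 50.0
```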

 This is not an oversight, it's 100% intentional.  The reason pg_dump
 aborts rather than commits is to make entirely sure that it does not
 commit any changes to the database.  I would be against removing that
 safety feature, considering that pg_dump is typically run as superuser.
 We have frequently worried about security exploits that involve hijacking
 superuser activities, and this behavior provides at least a small
 increment of safety against such holes.


+1


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


Re: [HACKERS] Publish autovacuum informations

2015-01-05 Thread Guillaume Lelarge
2015-01-05 17:40 GMT+01:00 Robert Haas robertmh...@gmail.com:

 On Wed, Dec 31, 2014 at 12:46 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  I'd be all right with putting the data structure declarations in a file
  named something like autovacuum_private.h, especially if it carried an
  annotation that if you depend on this, don't be surprised if we break
  your code in future.

 Works for me.  I am not in general surprised when we do things that
 break my code, or anyway, the code that I'm responsible for
 maintaining.  But I think it makes sense to segregate this into a
 separate header file so that we are clear that it is only exposed for
 the benefit of extension authors, not so that other things in the core
 system can touch it.


I'm fine with that too. I'll try to find some time to work on that.

Thanks.


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


Re: [HACKERS] Maximum number of WAL files in the pg_xlog directory

2014-12-30 Thread Guillaume Lelarge
Sorry for my very late answer. It's been a tough month.

2014-11-27 0:00 GMT+01:00 Bruce Momjian br...@momjian.us:

 On Mon, Nov  3, 2014 at 12:39:26PM -0800, Jeff Janes wrote:
  It looked to me that the formula, when descending from a previously
 stressed
  state, would be:
 
  greatest((1 + checkpoint_completion_target) * checkpoint_segments,
  wal_keep_segments) + 1 +
  2 * checkpoint_segments + 1

 I don't think we can assume checkpoint_completion_target is at all
 reliable enough to base a maximum calculation on, assuming anything
 above the maximum is cause of concern and something to inform the admins
 about.

 Assuming checkpoint_completion_target is 1 for maximum purposes, how
 about:

 max(2 * checkpoint_segments, wal_keep_segments) + 2 *
 checkpoint_segments + 2


Seems like something I could agree on. At least, it makes sense, and it works
for my customers. Although I'm wondering why + 2, and not + 1. It seems
Jeff and you agree on this, so I may have misunderstood something.
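For reference, Bruce's bound works out as follows (a worst-case estimate that treats checkpoint_completion_target as 1; the Python below is just the arithmetic, with 16 MB as the standard segment size):

```python
# Upper bound on the number of WAL segments in pg_xlog, per the formula
# above (pre-9.5 settings: checkpoint_segments, wal_keep_segments).
def max_wal_segments(checkpoint_segments: int, wal_keep_segments: int) -> int:
    return (max(2 * checkpoint_segments, wal_keep_segments)
            + 2 * checkpoint_segments + 2)

# Old defaults (checkpoint_segments=3, wal_keep_segments=0), 16 MB segments:
segs = max_wal_segments(3, 0)
print(segs, "segments,", segs * 16, "MB")  # 14 segments, 224 MB
```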


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


Re: [HACKERS] BUG: *FF WALs under 9.2 (WAS: .ready files appearing on slaves)

2014-12-30 Thread Guillaume Lelarge
2014-12-12 14:58 GMT+01:00 Heikki Linnakangas hlinnakan...@vmware.com:

 On 12/10/2014 04:32 PM, Dennis Kögel wrote:

 Hi,

 Am 04.09.2014 um 17:50 schrieb Jehan-Guillaume de Rorthais 
 j...@dalibo.com:

 Since few months, we occasionally see .ready files appearing on some
 slave
 instances from various context. The two I have in mind are under 9.2.x.
 […]
 So it seems for some reasons, these old WALs were forgotten by the
 restartpoint mechanism when they should have been recylced/deleted.


 Am 08.10.2014 um 11:54 schrieb Heikki Linnakangas 
 hlinnakan...@vmware.com:

  1. Where do the FF files come from? In 9.2, FF-segments are not supposed
  to be created, ever. […]
 2. Why are the .done files sometimes not being created?




 We’ve encountered behaviour which seems to match what has been described
 here: On Streaming Replication slaves, there is an odd piling up of old
 WALs and .ready files in pg_xlog, going back several months.

 The fine people on IRC have pointed me to this thread, and have
 encouraged me to revive it with our observations, so here we go:

 Environment:

 Master,  9.2.9
 |- Slave S1, 9.2.9, on the same network as the master
 '- Slave S2, 9.2.9, some 100 km away (occassional network hickups; *not*
 a cascading replication)

 wal_keep_segments M=100 S1=100 S2=30
 checkpoint_segments M=100 S1=30 S2=30
 wal_level hot_standby (all)
 archive_mode on (all)
 archive_command on both slaves: /bin/true
 archive_timeout 600s (all)


 - On both slaves, we have „ghost“ WALs and corresponding .ready files
 (currently 600 of each on S2, slowly becoming a disk space problem)

 - There’s always gaps in the ghost WAL names, often roughly 0x20, but not
 always

 - The slave with the „bad“ network link has significantly more of these
 files, which suggests that disturbances of the Streaming Replication
 increase chances of triggering this bug; OTOH, the presence of a name gap
 pattern suggests the opposite

 - We observe files named *FF as well


 As you can see in the directory listings below, this setup is *very* low
 traffic, which may explain the pattern in WAL name gaps (?).

 I’ve listed the entries by time, expecting to easily match WALs to their
 .ready files.
 There sometimes is an interesting delay between the WAL’s mtime and the
 .ready file — especially for *FF, where there’s several days between the
 WAL and the .ready file.

 - Master:   http://pgsql.privatepaste.com/52ad612dfb
 - Slave S1: http://pgsql.privatepaste.com/58b4f3bb10
 - Slave S2: http://pgsql.privatepaste.com/a693a8d7f4


 I’ve only skimmed through the thread; my understanding is that there were
 several patches floating around, but nothing was committed.
 If there’s any way I can help, please let me know.


  Yeah. It wasn't totally clear how all this should work, so I got
  distracted with other stuff and dropped the ball; sorry.

 I'm thinking that we should change the behaviour on master so that the
 standby never archives any files from older timelines, only the new one
 that it generates itself. That will solve the immediate problem of old WAL
 files accumulating, and bogus .ready files appearing in the standby.
 However, it will not solve the bigger problem of how do you ensure that all
 WAL files are archived, when you promote a standby server. There is no
 guarantee on that today anyway, but this will make it even less reliable,
 because it will increase the chances that you miss a file on the old
 timeline in the archive, after promoting. I'd argue that that's a good
 thing; it makes the issue more obvious, so you are more likely to encounter
 it in testing, and you won't be surprised in an emergency. But I've started
 a new thread on that bigger issue, hopefully we'll come up with a solution (
 http://www.postgresql.org/message-id/548af1cb.80...@vmware.com).

 Now, what do we do with the back-branches? I'm not sure. Changing the
 behaviour in back-branches could cause nasty surprises. Perhaps it's best
 to just leave it as it is, even though it's buggy.


As long as master is fixed, I don't actually care. But I agree with Dennis
that it's hard to see what's been committed with all the different issues
found, and if any commits were done, in which branch. I'd like to be able
to tell my customers: update to this minor release to see if it's fixed,
but I can't even do that.


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


Re: [HACKERS] Maximum number of WAL files in the pg_xlog directory

2014-12-30 Thread Guillaume Lelarge
2014-12-30 18:45 GMT+01:00 Jeff Janes jeff.ja...@gmail.com:

 On Tue, Dec 30, 2014 at 12:35 AM, Guillaume Lelarge 
 guilla...@lelarge.info wrote:

 Sorry for my very late answer. It's been a tough month.

 2014-11-27 0:00 GMT+01:00 Bruce Momjian br...@momjian.us:

 On Mon, Nov  3, 2014 at 12:39:26PM -0800, Jeff Janes wrote:
  It looked to me that the formula, when descending from a previously
 stressed
  state, would be:
 
  greatest((1 + checkpoint_completion_target) * checkpoint_segments,
  wal_keep_segments) + 1 +
  2 * checkpoint_segments + 1

 I don't think we can assume checkpoint_completion_target is at all
 reliable enough to base a maximum calculation on, assuming anything
 above the maximum is cause of concern and something to inform the admins
 about.

 Assuming checkpoint_completion_target is 1 for maximum purposes, how
 about:

 max(2 * checkpoint_segments, wal_keep_segments) + 2 *
 checkpoint_segments + 2


 Seems something I could agree on. At least, it makes sense, and it works
 for my customers. Although I'm wondering why + 2, and not + 1. It seems
 Jeff and you agree on this, so I may have misunderstood something.


 From hazy memory, one +1 comes from the currently active WAL file, which
 exists but is not counted towards either wal_keep_segments nor towards
 recycled files.  And the other +1 comes from the formula for how many
 recycled files to retain, which explicitly has a +1 in it.



OK, that seems much better. Thanks, Jeff.
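For reference, the upper bound Bruce proposed (and Jeff's explanation of the two +1s) can be sketched as a small helper. This just mirrors the formula agreed on above; it is not code from PostgreSQL itself:

```python
def max_wal_files(checkpoint_segments, wal_keep_segments):
    """Upper bound on WAL segment files in pg_xlog, per the discussion
    above: checkpoint_completion_target is assumed to be 1 for maximum
    purposes, and the +2 accounts for the currently active segment plus
    the +1 built into the recycling formula (XLOGfileslop)."""
    return max(2 * checkpoint_segments, wal_keep_segments) \
        + 2 * checkpoint_segments + 2

# With the default checkpoint_segments = 3 and wal_keep_segments = 0,
# the ceiling is max(6, 0) + 6 + 2 = 14 segment files.
```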



-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


[HACKERS] Publish autovacuum informations

2014-12-29 Thread Guillaume Lelarge
Hey,

There are times when I need more information about the autovacuum
processes.

I'd love to know what each worker is currently doing. I can get something
like this from the pg_stat_activity view but it doesn't give me as much
information as the WorkerInfoData struct.

I'd also love to have more information on the contents of the tables list
(how many tables still to process, which table next, what kind of
processing they'll get, etc... kinda what you have in the autovac_table
struct).

All in all, I want to get information that is typically stored in shared
memory, handled by the autovacuum launcher and autovacuum workers. I first
thought I could get that by writing some C functions embedded in an
extension. But it doesn't seem to me that I can access this part of the
shared memory from a C function. If I'm wrong, I'd love to get a pointer on
how to do this.

Otherwise, I wonder what would be more welcome: making the shared memory
struct handles available outside of the autovacuum processes (and then
building an extension to decode the information I need), or adding functions
in core to get access to this information (in that case, no need for an
extension)?

Thanks.

Regards.


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


Re: [HACKERS] Publish autovacuum informations

2014-12-29 Thread Guillaume Lelarge
2014-12-29 17:03 GMT+01:00 Tom Lane t...@sss.pgh.pa.us:

 Guillaume Lelarge guilla...@lelarge.info writes:
  All in all, I want to get informations that are typically stored in shared
  memory, handled by the autovacuum launcher and autovacuum workers. I first
  thought I could get that by writing some C functions embedded in an
  extension. But it doesn't seem to me I can access this part of the shared
  memory from a C function. If I'm wrong, I'd love to get a pointer on how to
  do this.

  Otherwise, I wonder what would be more welcome: making the shared memory
  structs handles available outside of the autovacuum processes (and then
  build an extension to decode the informations I need), or adding functions
  in core to get access to this information (in that case, no need for an
  extension)?

 Either one of those approaches would cripple our freedom to change those
 data structures; which we've done repeatedly in the past and will surely
 want to do again.  So I'm pretty much -1 on exposing them.


I don't see how that's going to deny us the right to change any structs. If
they are in-core functions, we'll just have to update them.  If they are
extension functions, then the developer of those functions would simply
need to update his code.


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


Re: [HACKERS] Misunderstanding on the FSM README file

2014-12-08 Thread Guillaume Lelarge
2014-12-07 15:07 GMT+01:00 Heikki Linnakangas hlinnakan...@vmware.com:

 On 12/07/2014 02:03 PM, Guillaume Lelarge wrote:

 Hi,

 I've been reading the FSM README file lately
 (src/backend/storage/freespace/README), and I'm puzzled by one of the
 graphs (the binary tree structure of an FSM file). Here it is:

       4
    4     2
   3 4   0 2   <- This level represents heap pages

 Shouldn't the last line be:
 4 3   2 0

 (ie, highest number of free space on the left node, lowest on the right
 one)

 Probably just nitpicking, but still, I'm wondering if I missed something
 out.


 No, that's not how it works. Each number at the bottom level corresponds
 to a particular heap page. The first number would be heap page #0 (which
 has 3 units of free space), the second heap page #1 (with 4 units of free
 space) and so forth. Each node on the upper levels stores the maximum of
 its two children.


Oh OK. Thanks Heikki, that makes perfect sense.
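Heikki's description can be sketched in a few lines of Python. This is only a toy model of the FSM binary tree to illustrate the max-propagation, not the actual C implementation:

```python
def build_fsm_levels(leaves):
    """Toy model of the FSM binary tree: the bottom level holds the free
    space of each heap page, and every upper node stores the maximum of
    its (up to two) children, all the way up to the root."""
    levels = [list(leaves)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        # pair up children; a lone child at the end is its own parent
        levels.append([max(prev[i:i + 2]) for i in range(0, len(prev), 2)])
    return levels[::-1]  # root level first

# The README's example: heap pages with 3, 4, 0, 2 units of free space
# gives levels [[4], [4, 2], [3, 4, 0, 2]].
```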


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


[HACKERS] Misunderstanding on the FSM README file

2014-12-07 Thread Guillaume Lelarge
Hi,

I've been reading the FSM README file lately
(src/backend/storage/freespace/README), and I'm puzzled by one of the graphs
(the binary tree structure of an FSM file). Here it is:

      4
   4     2
  3 4   0 2   <- This level represents heap pages

Shouldn't the last line be:
4 3   2 0

(ie, highest number of free space on the left node, lowest on the right one)

Probably just nitpicking, but still, I'm wondering if I missed something
out.

Thanks.


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


Re: [HACKERS] Maximum number of WAL files in the pg_xlog directory

2014-11-02 Thread Guillaume Lelarge
Hi,

On Oct 15, 2014 at 22:25, Guillaume Lelarge guilla...@lelarge.info wrote:

 2014-10-15 22:11 GMT+02:00 Jeff Janes jeff.ja...@gmail.com:

 On Fri, Aug 8, 2014 at 12:08 AM, Guillaume Lelarge 
guilla...@lelarge.info wrote:

 Hi,

 As part of our monitoring work for our customers, we stumbled upon an
issue with our customers' servers who have a wal_keep_segments setting
higher than 0.

 We have a monitoring script that checks the number of WAL files in the
pg_xlog directory, according to the setting of three parameters
(checkpoint_completion_target, checkpoint_segments, and wal_keep_segments).
We usually add a percentage to the usual formula:

 greatest(
   (2 + checkpoint_completion_target) * checkpoint_segments + 1,
   checkpoint_segments + wal_keep_segments + 1
 )


 I think the first bug is even having this formula in the documentation
to start with, and in trying to use it.


 I agree. But we have customers asking how to compute the right size for
their WAL file system partitions. Right size is usually a euphemism for
smallest size, and they usually tend to get it wrong, leading to huge
issues. And I'm not even speaking of monitoring, and alerting.

 A way to avoid this issue is probably to erase the formula from the
documentation, and find a new way to explain them how to size their
partitions for WALs.

 Monitoring is another matter, and I don't really think a monitoring
solution should count the WAL files. What actually really matters is the
database availability, and that is covered with having enough disk space in
the WALs partition.

 and will normally not be more than...

 This may be normal for a toy system.  I think that the normal state
for any system worth monitoring is that it has had load spikes at some
point in the past.


 Agreed.


 So it is the next part of the doc, which describes how many segments it
climbs back down to upon recovering from a spike, which is the important
one.  And that doesn't mention wal_keep_segments at all, which surely
cannot be correct.


 Agreed too.


 I will try to independently derive the correct formula from the code, as
you did, without looking too much at your derivation  first, and see if we
get the same answer.


 Thanks. I look forward to reading what you found.

 What seems clear to me right now is that no one has a sane explanation of
the formula. Though yours definitely made sense, it didn't seem to be what
the code does.


Did you find time to work on this? Any news?

Thanks.


Re: [HACKERS] Maximum number of WAL files in the pg_xlog directory

2014-10-15 Thread Guillaume Lelarge
2014-10-15 22:11 GMT+02:00 Jeff Janes jeff.ja...@gmail.com:

 On Fri, Aug 8, 2014 at 12:08 AM, Guillaume Lelarge guilla...@lelarge.info
  wrote:

 Hi,

 As part of our monitoring work for our customers, we stumbled upon an
 issue with our customers' servers who have a wal_keep_segments setting
 higher than 0.

 We have a monitoring script that checks the number of WAL files in the
 pg_xlog directory, according to the setting of three parameters
 (checkpoint_completion_target, checkpoint_segments, and wal_keep_segments).
 We usually add a percentage to the usual formula:

 greatest(
   (2 + checkpoint_completion_target) * checkpoint_segments + 1,
   checkpoint_segments + wal_keep_segments + 1
 )


 I think the first bug is even having this formula in the documentation to
 start with, and in trying to use it.


I agree. But we have customers asking how to compute the right size for
their WAL file system partitions. Right size is usually a euphemism for
smallest size, and they usually tend to get it wrong, leading to huge
issues. And I'm not even speaking of monitoring, and alerting.

A way to avoid this issue is probably to erase the formula from the
documentation, and find a new way to explain them how to size their
partitions for WALs.

Monitoring is another matter, and I don't really think a monitoring
solution should count the WAL files. What actually really matters is the
database availability, and that is covered with having enough disk space in
the WALs partition.

and will normally not be more than...

 This may be normal for a toy system.  I think that the normal state for
 any system worth monitoring is that it has had load spikes at some point in
 the past.


Agreed.


 So it is the next part of the doc, which describes how many segments it
 climbs back down to upon recovering from a spike, which is the important
 one.  And that doesn't mention wal_keep_segments at all, which surely
 cannot be correct.


Agreed too.


 I will try to independently derive the correct formula from the code, as
 you did, without looking too much at your derivation  first, and see if we
 get the same answer.


Thanks. I look forward to reading what you found.

What seems clear to me right now is that no one has a sane explanation of
the formula. Though yours definitely made sense, it didn't seem to be what
the code does.


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


Re: [HACKERS] Maximum number of WAL files in the pg_xlog directory

2014-08-24 Thread Guillaume Lelarge
On Aug 8, 2014 at 09:08, Guillaume Lelarge guilla...@lelarge.info wrote:

 Hi,

 As part of our monitoring work for our customers, we stumbled upon an
issue with our customers' servers who have a wal_keep_segments setting
higher than 0.

 We have a monitoring script that checks the number of WAL files in the
pg_xlog directory, according to the setting of three parameters
(checkpoint_completion_target, checkpoint_segments, and wal_keep_segments).
We usually add a percentage to the usual formula:

 greatest(
   (2 + checkpoint_completion_target) * checkpoint_segments + 1,
   checkpoint_segments + wal_keep_segments + 1
 )

 And we have lots of alerts from the script for customers who set their
wal_keep_segments setting higher than 0.

 So we started to question this sentence of the documentation:

 There will always be at least one WAL segment file, and will normally not
be more than (2 + checkpoint_completion_target) * checkpoint_segments + 1
or checkpoint_segments + wal_keep_segments + 1 files.

 (http://www.postgresql.org/docs/9.3/static/wal-configuration.html)

 While doing some tests, it appears it would be more something like:

 wal_keep_segments + (2 + checkpoint_completion_target) *
checkpoint_segments + 1

 But after reading the source code (src/backend/access/transam/xlog.c),
the right formula seems to be:

 wal_keep_segments + 2 * checkpoint_segments + 1

 Here is how we went to this formula...

 CreateCheckPoint(..) is responsible, among other things, for deleting and
recycling old WAL files. From src/backend/access/transam/xlog.c, master
branch, line 8363:

 /*
  * Delete old log files (those no longer needed even for previous
  * checkpoint or the standbys in XLOG streaming).
  */
 if (_logSegNo)
 {
 KeepLogSeg(recptr, _logSegNo);
 _logSegNo--;
 RemoveOldXlogFiles(_logSegNo, recptr);
 }

 KeepLogSeg(...) function takes care of wal_keep_segments. From
src/backend/access/transam/xlog.c, master branch, line 8792:

 /* compute limit for wal_keep_segments first */
 if (wal_keep_segments > 0)
 {
     /* avoid underflow, don't go below 1 */
     if (segno <= wal_keep_segments)
         segno = 1;
     else
         segno = segno - wal_keep_segments;
 }

 IOW, the segment number (segno) is decremented according to the setting
of wal_keep_segments. segno is then sent back to CreateCheckPoint(...) via
_logSegNo. The RemoveOldXlogFiles() gets this segment number so that it can
remove or recycle all files before this segment number. This function gets
the number of WAL files to recycle with the XLOGfileslop constant, which is
defined as:

 /*
  * XLOGfileslop is the maximum number of preallocated future XLOG segments.
  * When we are done with an old XLOG segment file, we will recycle it as a
  * future XLOG segment as long as there aren't already XLOGfileslop future
  * segments; else we'll delete it.  This could be made a separate GUC
  * variable, but at present I think it's sufficient to hardwire it as
  * 2*CheckPointSegments+1.  Under normal conditions, a checkpoint will
free
  * no more than 2*CheckPointSegments log segments, and we want to recycle
all
  * of them; the +1 allows boundary cases to happen without wasting a
  * delete/create-segment cycle.
  */
 #define XLOGfileslop    (2*CheckPointSegments + 1)

 (in src/backend/access/transam/xlog.c, master branch, line 100)

 IOW, PostgreSQL will keep wal_keep_segments WAL files before the current
WAL file, and then there may be 2*CheckPointSegments + 1 recycled ones.
Hence the formula:

 wal_keep_segments + 2 * checkpoint_segments + 1

 And this is what we usually find in our customers' servers. We may find
more WAL files, depending on the write activity of the cluster, but in
average, we get this number of WAL files.

 AFAICT, the documentation is wrong about the usual number of WAL files in
the pg_xlog directory. But I may be wrong, in which case, the documentation
isn't clear enough for me, and should be fixed so that others can't
misinterpret it like I may have done.

 Any comments? did I miss something, or should we fix the documentation?

 Thanks.


Ping?


Re: [HACKERS] PostgreSQL vs oracle doing 1 million sqrts am I doing it wrong?

2014-08-09 Thread Guillaume Lelarge
Hi,

On Aug 9, 2014 at 05:57, Ramirez, Danilo danilo.rami...@hmhco.com wrote:

 Thanks to all for the great info.  We are new to postgresql and this
discussion has both instructed us and increased our respect for the
database and the community.

 I am seeing a behavior that I don’t understand and hopefully you guys can
clear it up.

 I am using AWS postgresql db.m3.2xlarge and using pgadmin III 1.18
comparing against AWS oracle on db.m3.2xlarge using sql developer and TOAD.

 I am running a query with 30 tables in the from clause, getting 137
 columns back (this is our most basic query; they get a lot more
 complex). It returns 4800 rows.

 In oracle 1st run takes 3.92 seconds, 2nd .38 seconds.  Scrolling to end
 takes an extra 1.5 seconds for a total of 5.5.

 Using pgadmin, I run the query.  Looking at the lower right hand I can
see the time going up.  It stops at 8200 ms or close to it every time, then
it takes an extra 6 seconds before it displays the rows on the screen.
 2nd, 3rd, etc. runs all take about  same amount of time 8 sec plus 6 sec

 I then changed it to return only 1 column back.   In oracle/sqldeveloper
identical behavior as before, same time.  In postgresql it now goes down to
1.8 seconds for 1st, 2nd, etc. runs.

 I then change it so that I am asking for the sum of 1 column.  In oracle
time goes down to .2 seconds and postgresql now goes down to .2 seconds
also.

 I then change it back to get the full result set and behavior goes back
to original, oracle .38 since its cached, postgresql 8 seconds.


Are you sure those 8 seconds are PostgreSQL? I'd guess it's more something
like PostgreSQL taking very little time and pgAdmin spending around 8
seconds displaying it. What I mean is, pgAdmin uses really slow UI
components, and the time it shows is the time to execute the query and
display the data. IOW, you shouldn't use it to benchmark. You'd be better
off using psql. Or, much better, you should set log_min_duration_statement
to 0 and see exactly how much time PostgreSQL needs to execute the query.
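To illustrate that suggestion: with log_min_duration_statement = 0, each statement's server-side time shows up in the log as `duration: N ms`, which is easy to extract. A sketch (the exact line layout depends on your log_line_prefix setting):

```python
import re

# matches the "duration: 8.123 ms" fragment PostgreSQL logs when
# log_min_duration_statement is set to 0
DURATION_RE = re.compile(r"duration: ([0-9.]+) ms")

def server_side_durations(log_lines):
    """Pull server-side execution times (in ms) out of PostgreSQL log
    lines, ignoring lines that carry no duration."""
    return [float(m.group(1))
            for line in log_lines
            if (m := DURATION_RE.search(line))]

# e.g. 'LOG:  duration: 8.123 ms  statement: SELECT ...' -> [8.123]
```

Comparing these numbers against what the client shows separates query execution time from UI rendering time.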


[HACKERS] Maximum number of WAL files in the pg_xlog directory

2014-08-08 Thread Guillaume Lelarge
Hi,

As part of our monitoring work for our customers, we stumbled upon an issue
with our customers' servers who have a wal_keep_segments setting higher
than 0.

We have a monitoring script that checks the number of WAL files in the
pg_xlog directory, according to the setting of three parameters
(checkpoint_completion_target, checkpoint_segments, and wal_keep_segments).
We usually add a percentage to the usual formula:

greatest(
  (2 + checkpoint_completion_target) * checkpoint_segments + 1,
  checkpoint_segments + wal_keep_segments + 1
)

And we have lots of alerts from the script for customers who set their
wal_keep_segments setting higher than 0.

So we started to question this sentence of the documentation:

There will always be at least one WAL segment file, and will normally not
be more than (2 + checkpoint_completion_target) * checkpoint_segments + 1
or checkpoint_segments + wal_keep_segments + 1 files.

(http://www.postgresql.org/docs/9.3/static/wal-configuration.html)

While doing some tests, it appears it would be more something like:

wal_keep_segments + (2 + checkpoint_completion_target) *
checkpoint_segments + 1

But after reading the source code (src/backend/access/transam/xlog.c), the
right formula seems to be:

wal_keep_segments + 2 * checkpoint_segments + 1

Here is how we went to this formula...

CreateCheckPoint(..) is responsible, among other things, for deleting and
recycling old WAL files. From src/backend/access/transam/xlog.c, master
branch, line 8363:

/*
 * Delete old log files (those no longer needed even for previous
 * checkpoint or the standbys in XLOG streaming).
 */
if (_logSegNo)
{
KeepLogSeg(recptr, _logSegNo);
_logSegNo--;
RemoveOldXlogFiles(_logSegNo, recptr);
}

KeepLogSeg(...) function takes care of wal_keep_segments. From
src/backend/access/transam/xlog.c, master branch, line 8792:

/* compute limit for wal_keep_segments first */
if (wal_keep_segments > 0)
{
    /* avoid underflow, don't go below 1 */
    if (segno <= wal_keep_segments)
        segno = 1;
    else
        segno = segno - wal_keep_segments;
}

IOW, the segment number (segno) is decremented according to the setting of
wal_keep_segments. segno is then sent back to CreateCheckPoint(...) via
_logSegNo. The RemoveOldXlogFiles() gets this segment number so that it can
remove or recycle all files before this segment number. This function gets
the number of WAL files to recycle with the XLOGfileslop constant, which is
defined as:

/*
 * XLOGfileslop is the maximum number of preallocated future XLOG segments.
 * When we are done with an old XLOG segment file, we will recycle it as a
 * future XLOG segment as long as there aren't already XLOGfileslop future
 * segments; else we'll delete it.  This could be made a separate GUC
 * variable, but at present I think it's sufficient to hardwire it as
 * 2*CheckPointSegments+1.  Under normal conditions, a checkpoint will free
 * no more than 2*CheckPointSegments log segments, and we want to recycle
all
 * of them; the +1 allows boundary cases to happen without wasting a
 * delete/create-segment cycle.
 */
#define XLOGfileslop    (2*CheckPointSegments + 1)

(in src/backend/access/transam/xlog.c, master branch, line 100)

IOW, PostgreSQL will keep wal_keep_segments WAL files before the current
WAL file, and then there may be 2*CheckPointSegments + 1 recycled ones.
Hence the formula:

wal_keep_segments + 2 * checkpoint_segments + 1

And this is what we usually find in our customers' servers. We may find
more WAL files, depending on the write activity of the cluster, but in
average, we get this number of WAL files.
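A monitoring check along those lines can be sketched as follows. These are hypothetical helper names, not our actual monitoring script; the formula is the one derived above from xlog.c:

```python
import os
import re

# WAL segment files are named with 24 uppercase hex digits
# (timeline + log + segment); .ready/.done status files don't match
WAL_NAME_RE = re.compile(r"^[0-9A-F]{24}$")

def count_wal_files(pg_xlog_dir):
    """Count WAL segment files in a pg_xlog directory, ignoring
    archive status files and any other entries."""
    return sum(1 for name in os.listdir(pg_xlog_dir)
               if WAL_NAME_RE.match(name))

def expected_max(checkpoint_segments, wal_keep_segments):
    """The usual ceiling derived above: wal_keep_segments files kept
    before the current one, plus up to 2*checkpoint_segments + 1
    recycled segments (XLOGfileslop)."""
    return wal_keep_segments + 2 * checkpoint_segments + 1
```

An alert would fire when count_wal_files() stays above expected_max() (plus whatever margin you allow) for too long.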

AFAICT, the documentation is wrong about the usual number of WAL files in
the pg_xlog directory. But I may be wrong, in which case, the documentation
isn't clear enough for me, and should be fixed so that others can't
misinterpret it like I may have done.

Any comments? did I miss something, or should we fix the documentation?

Thanks.


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


[HACKERS] Quick doc fix

2014-08-07 Thread Guillaume Lelarge
Hi,

Still translating the 9.4 manual, and found another typo. Patch attached.

Thanks.


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com
diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml
index d3fcb82..cf174f0 100644
--- a/doc/src/sgml/maintenance.sgml
+++ b/doc/src/sgml/maintenance.sgml
@@ -528,7 +528,7 @@
 the <structfield>relfrozenxid</> column of a table's
 <structname>pg_class</> row contains the freeze cutoff XID that was used
 by the last whole-table <command>VACUUM</> for that table.  All rows
-inserted by transactions with XIDs XIDs older than this cutoff XID are
+inserted by transactions with XIDs older than this cutoff XID are
 guaranteed to have been frozen.  Similarly,
 the <structfield>datfrozenxid</> column of a database's
 <structname>pg_database</> row is a lower bound on the unfrozen XIDs

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] config.sgml referring to unix_socket_directories on older releases

2014-07-26 Thread Guillaume Lelarge
Hi,

While updating the french translation of the latest releases, I stumbled
upon a small issue on the config.sgml file.

It talks about unix_socket_directories whereas this parameter only appears
with the 9.3 release. It should probably be replaced with
unix_socket_directory for all releases where this has been commited (8.4 to
9.2). The patch attached does this. It applies cleanly on all releases
(with a hunk though).

Thanks.

Regards.


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index d47dd9c..9f23e8c 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -473,7 +473,7 @@ SET ENABLE_SEQSCAN TO OFF;
 <para>
 This parameter is irrelevant on systems, notably Solaris as of Solaris
 10, that ignore socket permissions entirely.  There, one can achieve a
-similar effect by pointing <varname>unix_socket_directories</> to a
+similar effect by pointing <varname>unix_socket_directory</> to a
 directory having search permission limited to the desired audience.
 This parameter is also irrelevant on Windows, which does not have
 Unix-domain sockets.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: how to find the order of joins from Explain command XML plan output in PostgreSQL

2014-07-10 Thread Guillaume Lelarge
On Jul 9, 2014 at 20:36, David G Johnston david.g.johns...@gmail.com wrote:

 csrajmohan wrote
  EXPLAIN (format XML) command in PostgreSQL 9.3.4 gives the plan chosen
  by
  the optimizer in XML format. In my program, I have to extract certain
data
  about optimizer plan from this XML output. I am using *LibXML2* library
  for
  parsing the XML. I had successfully extracted information about which
  relations are involved and what joins are used by parsing the XML. But
  I am *unable
  to extract the* *order of joining the relations from the XML output*. I
  conceptually understood that the reverse level order traversal of binary
  tree representation of the XML plan will give correct ordering of joins
  applied. But I could not figure out how do I get that from the XML? Does
  libXML2 support anything of this sort? If not how should I proceed to
  tackle this?

 So, since nothing better has been forthcoming in your other two posts on
 this topic I'll just say that likely you will have much better luck using
 SAX-based processing as opposed to DOM-based processing.  I seriously
doubt
 native/core PostgreSQL facilities will allow you to do what you desire.

 As you said, hierarchy and physical output order determines the order of
 joining within the planner so you have to capture and track such
relational
 information during your processing - which is made much easier if you
simply
 traverse the output node-by-node exactly as a SAX based parser does.

 Though pgAdminIII has a visual query display that you might look at for
 inspiration.


FWIW, pgadmin's visual explain doesn't (yet?) use XML or json or yaml
output.


Re: [HACKERS] Weird behaviour with the new MOVE clause of ALTER TABLESPACE

2014-05-10 Thread Guillaume Lelarge
On Fri, 2014-05-09 at 17:16 -0400, Stephen Frost wrote:
 Guillaume,
 
 * Guillaume Lelarge (guilla...@lelarge.info) wrote:
  Should information_schema tables be moved and not pg_catalog ones? it
  doesn't seem consistent to me.
 
 The catalog tables are moved by changing the database's tablespace, eg:
 
 ALTER DATABASE ... SET TABLESPACE
 
 That also moves any objects which are not assigned to a specific
 tablespace.
 
 The question ends up being just which side of is it part of the
 catalog, or not? the information schema falls on to.  For this case, I
 had considered those to *not* be part of the catalog as they can be
 moved independently of the ALTER DATABASE ... SET TABLESPACE.
 
 This is happily documented:
 
System catalogs will not be moved by this command; individuals wishing to
move a whole database should use ALTER DATABASE, or call ALTER TABLE on the
individual system catalogs.  Note that relations in
<literal>information_schema</literal> will be moved, just as any other
normal database objects, if the user is the superuser or considered an
owner of the relations in <literal>information_schema</literal>.
 

Thanks for the explanation. I should have RTFM before complaining. Sorry
for the noise :)


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Weird behaviour with the new MOVE clause of ALTER TABLESPACE

2014-05-09 Thread Guillaume Lelarge
Hey,

I was working on adding support to the new MOVE clause of the ALTER
TABLESPACE statement to pgAdmin when I noticed this issue. See this
example:

Fresh git compilation, and new database on a new cluster:

$ createdb b1
$ psql b1
psql (9.4devel)
Type help for help.

b1=# CREATE TABLESPACE ts1 LOCATION '/opt/postgresql/tablespaces/ts94';
CREATE TABLESPACE
b1=# SELECT count(*) FROM pg_class c
  JOIN pg_tablespace t ON c.reltablespace=t.oid
  AND spcname='pg_default';
 count 
---
 0
(1 row)

b1=# SELECT count(*) FROM pg_class c
  JOIN pg_tablespace t ON c.reltablespace=t.oid AND spcname='ts1';
 count 
---
 0
(1 row)

b1=# SELECT count(*) FROM pg_class c WHERE c.reltablespace=0;
 count 
---
   268
(1 row)

So, 268 objects in the default tablespace (which happens to be
pg_default) and none in ts1 (that's correct, it was just created).

Now, we move all objects from pg_default to ts1. My expectation was that
all user objects would be afterwards in the ts1 tablespace. And here is
what happens:

b1=# ALTER TABLESPACE pg_default MOVE ALL TO ts1;
ALTER TABLESPACE
b1=# SELECT count(*) FROM pg_class c
  JOIN pg_tablespace t ON c.reltablespace=t.oid
  AND spcname='pg_default';
 count 
---
 0
(1 row)

b1=# SELECT count(*) FROM pg_class c
  JOIN pg_tablespace t ON c.reltablespace=t.oid AND spcname='ts1';
 count 
---
21
(1 row)

b1=# SELECT count(*) FROM pg_class c WHERE c.reltablespace=0;
 count 
---
   247
(1 row)

I have 21 objects in ts1 and 247 stayed in the default tablespace. I'm
not sure what I should find weird: that some objects were moved, or that
not all objects were moved :)

What's weirder is the objects themselves:

b1=# SELECT relkind, relname FROM pg_class c
  JOIN pg_tablespace t ON c.reltablespace=t.oid AND spcname='ts1'
  ORDER BY 1, 2;
 relkind | relname 
---------+----------------------
 i   | pg_toast_12619_index
 i   | pg_toast_12624_index
 i   | pg_toast_12629_index
 i   | pg_toast_12634_index
 i   | pg_toast_12639_index
 i   | pg_toast_12644_index
 i   | pg_toast_12649_index
 r   | sql_features
 r   | sql_implementation_info
 r   | sql_languages
 r   | sql_packages
 r   | sql_parts
 r   | sql_sizing
 r   | sql_sizing_profiles
 t   | pg_toast_12619
 t   | pg_toast_12624
 t   | pg_toast_12629
 t   | pg_toast_12634
 t   | pg_toast_12639
 t   | pg_toast_12644
 t   | pg_toast_12649
(21 rows)

In other words, all information_schema tables (and their toast tables
and the toast indexes) were moved. Why only them? AFAICT, there are no
other information_schema tables, only views which obviously are not
concerned by the ALTER TABLESPACE statement.

Should information_schema tables be moved and not pg_catalog ones? it
doesn't seem consistent to me.

I probably miss something obvious.

Thanks for any pointer.


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] psql blows up on BOM character sequence

2014-03-23 Thread Guillaume Lelarge
On Sat, 2014-03-22 at 11:23 -0500, Jim Nasby wrote:
 On 3/21/14, 8:13 PM, David E. Wheeler wrote:
  On Mar 21, 2014, at 2:16 PM, Andrew Dunstan and...@dunslane.net wrote:
 
  Surely if it were really a major annoyance, someone would have sent code 
  to fix it during the last 4 years and more since the above.
 
  I suspect it's a minor annoyance :-)
 
  But by all means add it to the TODO list if it's not there already.
 
  I have cleaned up many a BOM added to files that made psql blow up. I think 
  PGAdmin III was a culprit, though I’m not sure (I don’t use it; I cleaned up 
  after coworkers who do).
 
 Yes, my coworker that figured out what the problem was said the culprit here 
 is actually pgAdmin. :(

Just a quick comment on this. Yes, pgAdmin always added a BOM in every
SQL file it wrote. That being said, since 2010, pgAdmin has an option
that allows the user to request the BOM writing (IOW, it's disabled by
default for new installations of pgAdmin). See, in the
preferences/options window, the Query tool / Query file sub-item, and
the "Write BOM for UTF files" checkbox. Make sure it's unchecked.

Either your coworker has an older release (that would be older than 1.14
IIRC), or he didn't change the setting to make it work like he would
like.
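
For the record, the sequence psql trips over is just three bytes. Below is a
minimal sketch in plain C (not pgAdmin's or psql's actual code) of detecting
and skipping the UTF-8 BOM (0xEF 0xBB 0xBF) at the start of a buffer:

```c
#include <stddef.h>

/* Return 1 if the buffer starts with the UTF-8 byte order mark
 * (0xEF 0xBB 0xBF), the sequence that confuses psql at the start
 * of a script file. */
int
has_utf8_bom(const unsigned char *buf, size_t len)
{
	return len >= 3 && buf[0] == 0xEF && buf[1] == 0xBB && buf[2] == 0xBF;
}

/* Return a pointer past the BOM (if any), so callers can simply skip it. */
const unsigned char *
skip_utf8_bom(const unsigned char *buf, size_t len)
{
	return has_utf8_bom(buf, len) ? buf + 3 : buf;
}
```

A tool cleaning up such files would read the first three bytes and, when the
BOM is present, start processing at the returned pointer instead.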


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com





Re: [HACKERS] Problem with background worker

2013-03-24 Thread Guillaume Lelarge
On Wed, 2013-03-20 at 13:13 -0300, Alvaro Herrera wrote:
 Marc Cousin wrote:
  On 20/03/2013 16:33, Alvaro Herrera wrote:
 
  Ah.  The reason for this problem is that the statement start time (which
  also sets the transaction start time, when it's the first statement) is
  set by postgres.c, not the transaction-control functions in xact.c.  So
  you'd need to add a SetCurrentStatementStartTimestamp() call somewhere
  in your loop.
  
  Yes, that works. Thanks a lot !
  
  Maybe this should be added to the worker_spi example ?
 
 Yeah, I think I need to go over the postgres.c code and figure out what
 else needs to be called.  I have a pending patch from Guillaume to
 improve worker_spi some more; I'll add this bit too.
 

Happy to know that you still remember it :) Thanks.


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com





Re: [HACKERS] [COMMITTERS] pgsql: Move pqsignal() to libpgport.

2013-03-17 Thread Guillaume Lelarge
On Sun, 2013-03-17 at 16:06 +, Tom Lane wrote:
 Move pqsignal() to libpgport.
 
 We had two copies of this function in the backend and libpq, which was
 already pretty bogus, but it turns out that we need it in some other
 programs that don't use libpq (such as pg_test_fsync).  So put it where
 it probably should have been all along.  The signal-mask-initialization
 support in src/backend/libpq/pqsignal.c stays where it is, though, since
 we only need that in the backend.
 

Hi,

When I try to compile HEAD right after this commit, I have this issue
with pg_receivexlog:

pg_receivexlog.c: In function ‘main’:
pg_receivexlog.c:425:11: error: ‘SIGINT’ undeclared (first use in this
function)
pg_receivexlog.c:425:11: note: each undeclared identifier is reported
only once for each function it appears in

The attached patch fixes this. Not sure it's the right fix though...


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com
diff --git a/src/bin/pg_basebackup/pg_receivexlog.c b/src/bin/pg_basebackup/pg_receivexlog.c
index e65f127..91caf66 100644
--- a/src/bin/pg_basebackup/pg_receivexlog.c
+++ b/src/bin/pg_basebackup/pg_receivexlog.c
@@ -20,6 +20,7 @@
 #include "streamutil.h"
 
 #include <dirent.h>
+#include <signal.h>
 #include <sys/stat.h>
 #include <sys/types.h>
 #include <unistd.h>



Re: [HACKERS] [COMMITTERS] pgsql: Move pqsignal() to libpgport.

2013-03-17 Thread Guillaume Lelarge
On Sun, 2013-03-17 at 14:13 -0400, Tom Lane wrote:
 Guillaume Lelarge guilla...@lelarge.info writes:
  On Sun, 2013-03-17 at 16:06 +, Tom Lane wrote:
  Move pqsignal() to libpgport.
 
  When I try to compile HEAD right after this commit, I have this issue
  with pg_receivexlog:
 
 Oddly, I didn't see that on the machine I was testing on --- it must
 have something else pulling in signal.h there.  Fixed.
 

Thanks.


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com





Re: [HACKERS] Behaviour of bgworker with SIGHUP

2013-01-03 Thread Guillaume Lelarge
On Mon, 2012-12-31 at 17:44 -0300, Alvaro Herrera wrote:
 Alvaro Herrera wrote:
  Guillaume Lelarge wrote:
   On Mon, 2012-12-31 at 11:03 -0300, Alvaro Herrera wrote:
  
I think this (have a config option, and have SIGHUP work as expected)
would be useful to demo in worker_spi, if you care to submit a patch.
   
   Yeah, I would love to. Reading the code of worker_spi, we could add one
   or three parameters: a naptime, and the schema name for both bgprocesses.
   Would one be enough, or do you prefer all three?
  
  I got no problem with three.
 
 Actually, it occurs to me that it might be useful to demonstrate having
 the number of processes be configurable: so we could use just two
 settings, naptime and number of workers.  Have each worker just use a
 hardcoded schema, say worker_spi_%d or something like that.
 

Here you go.

worker_spi.naptime is the naptime between two checks.
worker_spi.total_workers is the number of workers to launch at
postmaster start time. The first one can change with a sighup, the last
one obviously needs a restart.


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com
diff --git a/contrib/worker_spi/worker_spi.c b/contrib/worker_spi/worker_spi.c
index 6da747b..4b6de45 100644
--- a/contrib/worker_spi/worker_spi.c
+++ b/contrib/worker_spi/worker_spi.c
@@ -35,12 +35,16 @@
 #include "lib/stringinfo.h"
 #include "utils/builtins.h"
 #include "utils/snapmgr.h"
+#include "tcop/utility.h"
 
 PG_MODULE_MAGIC;
 
 void	_PG_init(void);
 
+static bool	got_sighup = false;
 static bool	got_sigterm = false;
+static int	worker_spi_naptime = 1;
+static int	worker_spi_total_workers = 2;
 
 
 typedef struct worktable
@@ -65,6 +69,7 @@ static void
 worker_spi_sighup(SIGNAL_ARGS)
 {
 	elog(LOG, "got sighup!");
+	got_sighup = true;
 	if (MyProc)
 		SetLatch(&MyProc->procLatch);
 }
@@ -176,13 +181,22 @@ worker_spi_main(void *main_arg)
 		 */
 		rc = WaitLatch(&MyProc->procLatch,
 					   WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
-					   1000L);
+					   worker_spi_naptime * 1000L);
 		ResetLatch(&MyProc->procLatch);
 
 		/* emergency bailout if postmaster has died */
 		if (rc & WL_POSTMASTER_DEATH)
 			proc_exit(1);
 
+		/*
+		 * In case of a sighup, just reload the configuration.
+		 */
+		if (got_sighup)
+		{
+			got_sighup = false;
+			ProcessConfigFile(PGC_SIGHUP);
+		}
+
 		StartTransactionCommand();
 		SPI_connect();
 		PushActiveSnapshot(GetTransactionSnapshot());
@@ -225,11 +239,40 @@ _PG_init(void)
 {
 	BackgroundWorker	worker;
 	worktable		   *table;
+	unsigned int		i;
+	char				name[20];
+
+	/* get the configuration */
+	DefineCustomIntVariable("worker_spi.naptime",
+							"Duration between each check (in seconds).",
+							NULL,
+							&worker_spi_naptime,
+							1,
+							1,
+							INT_MAX,
+							PGC_SIGHUP,
+							0,
+							NULL,
+							NULL,
+							NULL);
+	DefineCustomIntVariable("worker_spi.total_workers",
+							"Number of workers.",
+							NULL,
+							&worker_spi_total_workers,
+							2,
+							1,
+							100,
+							PGC_POSTMASTER,
+							0,
+							NULL,
+							NULL,
+							NULL);
 
 	/* register the worker processes.  These values are common for both */
 	worker.bgw_flags = BGWORKER_SHMEM_ACCESS |
 		BGWORKER_BACKEND_DATABASE_CONNECTION;
 	worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
+	worker.bgw_restart_time = BGW_NEVER_RESTART;
 	worker.bgw_main = worker_spi_main;
 	worker.bgw_sighup = worker_spi_sighup;
 	worker.bgw_sigterm = worker_spi_sigterm;
@@ -242,22 +285,17 @@ _PG_init(void)
 	 * memory in the child process; and if we fork and then exec, the exec'd
 	 * process will run this code again, and so the memory is also valid there.
 	 */
-	table = palloc(sizeof(worktable));
-	table->schema = pstrdup("schema1");
-	table->name = pstrdup("counted");
+	for (i = 1; i <= worker_spi_total_workers; i++)
+	{
+		sprintf(name, "worker %d", i);
+		worker.bgw_name = pstrdup(name);
 
-	worker.bgw_name = "SPI worker 1";
-	worker.bgw_restart_time = BGW_NEVER_RESTART;
-	worker.bgw_main_arg = (void *) table;
-	RegisterBackgroundWorker(&worker);
-
-	/* Values for the second worker */
-	table = palloc(sizeof(worktable));
-	table->schema = pstrdup("our schema2");
-	table->name = pstrdup("counted rows");
-
-	worker.bgw_name = "SPI worker 2";
-	worker.bgw_restart_time = 2;
-	worker.bgw_main_arg = (void *) table;
-	RegisterBackgroundWorker(&worker);
+		table = palloc(sizeof(worktable));
+		sprintf(name, "schema%d", i);
+		table->schema = pstrdup(name);
+		table->name = pstrdup("counted");
+		worker.bgw_main_arg = (void *) table;
+
+		RegisterBackgroundWorker(&worker);
+	}
 }



Re: [HACKERS] Behaviour of bgworker with SIGHUP

2012-12-31 Thread Guillaume Lelarge
On Mon, 2012-12-31 at 11:03 -0300, Alvaro Herrera wrote:
 Guillaume Lelarge wrote:
  Hi,
  
  Today, I tried to make fun with the new background worker processes in
  9.3, but I found something disturbing, and need help to go further.
 
 Thanks.
 
  Is it the job of the function (pointed to by bgw_sighup) to get the new
  config values from the postmaster? And if so, how can I get these new
  values?
 
 You probably want to have the sighup handler set a flag, and then call
 ProcessConfigFile(PGC_SIGHUP) in your main loop when the flag is set.  
 Search for got_SIGHUP in postgres.c.
 

Thanks for the tip. It works great.

 I think this (have a config option, and have SIGHUP work as expected)
 would be useful to demo in worker_spi, if you care to submit a patch.
 

Yeah, I would love to. Reading the code of worker_spi, we could add one
or three parameters: a naptime, and the schema name for both bgprocesses.
Would one be enough, or do you prefer all three?

  I thought the configuration reloading would work just like a shared
  library but it doesn't seem so.
 
 Yeah, you need to handle that manually, because you're running your own
 process now.
 

That makes sense, thanks.
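
The got_SIGHUP pattern Alvaro describes can be sketched outside the backend in
plain C: the handler only sets a flag, and the main loop notices the flag and
does the actual work. Here reload_config() is a hypothetical stand-in for
ProcessConfigFile(PGC_SIGHUP), which cannot run safely inside a signal handler:

```c
#include <signal.h>

/* Handler-visible flag: sig_atomic_t is the only object a handler may
 * portably write to, so the handler does nothing else. */
volatile sig_atomic_t got_sighup = 0;
int reload_count = 0;

void
sighup_handler(int signo)
{
	(void) signo;
	got_sighup = 1;		/* no real work in the handler */
}

void
reload_config(void)		/* stand-in for ProcessConfigFile(PGC_SIGHUP) */
{
	reload_count++;
}

/* One turn of the worker's main loop: consume the flag, then reload. */
int
run_one_iteration(void)
{
	if (got_sighup)
	{
		got_sighup = 0;
		reload_config();
	}
	return reload_count;
}
```

The same shape appears in the backend's postgres.c: the handler sets
got_SIGHUP, and the loop calls ProcessConfigFile() when it sees the flag.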


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com





Re: [HACKERS] Behaviour of bgworker with SIGHUP

2012-12-31 Thread Guillaume Lelarge
On Mon, 2012-12-31 at 12:54 -0300, Alvaro Herrera wrote:
 Guillaume Lelarge wrote:
  On Mon, 2012-12-31 at 11:03 -0300, Alvaro Herrera wrote:
 
   I think this (have a config option, and have SIGHUP work as expected)
   would be useful to demo in worker_spi, if you care to submit a patch.
  
  Yeah, I would love to. Reading the code of worker_spi, we could add one
  or three parameters: a naptime, and the schema name for both bgprocesses.
  Would one be enough, or do you prefer all three?
 
 I got no problem with three.
 

OK, will do on wednesday.

Thanks.


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com





[HACKERS] Behaviour of bgworker with SIGHUP

2012-12-28 Thread Guillaume Lelarge
Hi,

Today, I tried to make fun with the new background worker processes in
9.3, but I found something disturbing, and need help to go further.

My code is available at https://github.com/gleu/stats_recorder. If you
take a look, it is basically a copy of Alvaro's worker_spi contrib
module with a few changes. It compiles and installs OK.

With this code, when I change my config option (stats_recorder.naptime),
I see that PostgreSQL gets the new value, but my background worker
process doesn't. See these log lines:

LOG:  stats recorder, worker_spi_main loop, stats_recorder_naptime is 1
LOG:  stats recorder, worker_spi_main loop, stats_recorder_naptime is 1
LOG:  received SIGHUP, reloading configuration files
LOG:  parameter stats_recorder.naptime changed to 5
LOG:  stats recorder, worker_spi_sighup
LOG:  stats recorder, worker_spi_main loop, stats_recorder_naptime is 1
LOG:  stats recorder, worker_spi_main loop, stats_recorder_naptime is 1

Is it the job of the function (pointed to by bgw_sighup) to get the new
config values from the postmaster? And if so, how can I get these new
values?

I thought the configuration reloading would work just like in a shared
library, but it doesn't seem so. I wondered if it was because I had the
sighup function (initialized with bgw_sighup), so I got rid of it. The
new behaviour was actually more surprising as it launched _PG_init each
time I did a pg_ctl reload.

LOG:  stats recorder, worker_spi_main loop, stats_recorder_naptime is 1
LOG:  stats recorder, worker_spi_main loop, stats_recorder_naptime is 1
LOG:  received SIGHUP, reloading configuration files
LOG:  stats_recorder, _PG_init
FATAL:  cannot create PGC_POSTMASTER variables after startup
LOG:  worker process: stats recorder (PID 5435) exited with exit code 1

Is it the expected behaviour?

Thanks.

Regards.


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com





Re: [HACKERS] Bug in -c CLI option of pg_dump/pg_restore

2012-10-20 Thread Guillaume Lelarge
On Sat, 2012-10-20 at 14:28 -0400, Tom Lane wrote:
 I wrote:
  It looks like I broke this in commit
  4317e0246c645f60c39e6572644cff1cb03b4c65, because I removed this from
  _tocEntryRequired():
 
  -   /* Ignore DATABASE entry unless we should create it */
  -   if (!ropt->createDB && strcmp(te->desc, "DATABASE") == 0)
  -   return 0;
 
 Actually, on closer look, this change provides the foundation needed to
 do more than just fix this bug.  We can also make the combination
 pg_dump -C -c work sanely, which it never has before.  I propose that
 we fix this with the attached patch (plus probably some documentation
 changes, though I've not looked yet to see what the docs say about it).
 With this fix, the output for -C -c looks like
 
 DROP DATABASE regression;
 CREATE DATABASE regression WITH ...
 ALTER DATABASE regression OWNER ...
 \connect regression
 ... etc ...
 
 which seems to me to be just about exactly what one would expect.
 
 The patch also gets rid of a kluge in PrintTOCSummary, which was needed
 because of the old coding in _tocEntryRequired(), but no longer is.
 

Thanks a lot.


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com





Re: [HACKERS] Bug in -c CLI option of pg_dump/pg_restore

2012-10-18 Thread Guillaume Lelarge
On Thu, 2012-10-18 at 12:19 -0300, Alvaro Herrera wrote:
 Robert Haas wrote:
  On Tue, Oct 16, 2012 at 10:31 AM, Guillaume Lelarge
  guilla...@lelarge.info wrote:
   Any comments on this?
  
  I'm not sure I'd want to back-patch this, since it is a behavior
  change, but I do think it's probably a good idea to change it for 9.3.
 
 Hm, but the bug is said to happen only in 9.2, so if we don't backpatch
 we would leave 9.2 alone exhibiting this behavior.
 

Yeah, Alvaro got it right. The behaviour changed in 9.2. This patch
needs to be applied on 9.2 and master, nothing else. If the patch is
good enough though...


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com





Re: [HACKERS] Bug in -c CLI option of pg_dump/pg_restore

2012-10-16 Thread Guillaume Lelarge
On Sat, 2012-10-13 at 16:47 +0200, Guillaume Lelarge wrote:
 Hi,
 
 One of my colleagues, Jehan-Guillaume de Rorthais, found a weird
 behaviour of the -c command line option in the pg_restore tool while
 doing a training session. Here are the steps he followed:
 
 createdb foo
 add a few objects in foo
 pg_dump -Fc foo > foo.dump
 createdb bar
 pg_restore -c -d bar foo.dump
 
 bar contains the same objects as foo (nothing unusual here), but... foo
 is no longer present. Actually, if you use the -c command line option,
 you get a DROP DATABASE statement. To me, it feels like a quite
 terrible bug.
 
 It's quite easy to reproduce. Just create a database, and use pg_dump
 with the -c option:
 
 createdb foo
 pg_dump -s -c foo | grep DATABASE
 
 and you end up with this:
 
 DROP DATABASE foo;
 
 I tried from 8.3 till 9.2, and only 9.2 has this behaviour.
 
 You'll find attached a patch that fixes this issue. Another colleague,
 Gilles Darold, tried it in every possible way, and it works. I'm not
 sure the test I added makes it a very good patch, but it fixes the bug.
 

Any comments on this?


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com





[HACKERS] Bug in -c CLI option of pg_dump/pg_restore

2012-10-13 Thread Guillaume Lelarge
Hi,

One of my colleagues, Jehan-Guillaume de Rorthais, found a weird
behaviour of the -c command line option in the pg_restore tool while
doing a training session. Here are the steps he followed:

createdb foo
add a few objects in foo
pg_dump -Fc foo > foo.dump
createdb bar
pg_restore -c -d bar foo.dump

bar contains the same objects as foo (nothing unusual here), but... foo
is no longer present. Actually, if you use the -c command line option,
you get a DROP DATABASE statement. To me, it feels like a quite
terrible bug.

It's quite easy to reproduce. Just create a database, and use pg_dump
with the -c option:

createdb foo
pg_dump -s -c foo | grep DATABASE

and you end up with this:

DROP DATABASE foo;

I tried from 8.3 till 9.2, and only 9.2 has this behaviour.

You'll find attached a patch that fixes this issue. Another colleague,
Gilles Darold, tried it in every possible way, and it works. I'm not
sure the test I added makes it a very good patch, but it fixes the bug.

Regards.


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index c7ef9a6..d1bd454 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -456,8 +456,8 @@ RestoreArchive(Archive *AHX)
 		{
 			AH-currentTE = te;
 
-			/* We want anything that's selected and has a dropStmt */
-			if (((te->reqs & (REQ_SCHEMA | REQ_DATA)) != 0) && te->dropStmt)
+			/* We want anything but database that's selected and has a dropStmt */
+			if (((te->reqs & (REQ_SCHEMA | REQ_DATA)) != 0) && strcmp(te->desc, "DATABASE") != 0 && te->dropStmt)
 			{
 				ahlog(AH, 1, "dropping %s %s\n", te->desc, te->tag);
 /* Select owner and schema as necessary */



Re: [HACKERS] pg_tablespace.spclocation column removed in 9.2

2012-06-25 Thread Guillaume Lelarge
Hi Pavel,

On Mon, 2012-06-25 at 08:26 +0300, Pavel Golub wrote:
 Hello, Pgsql-bugs.
 
 According to the Moving tablespaces thread started by Bruce
 http://archives.postgresql.org/pgsql-docs/2011-12/msg3.php
 pg_tablespace.spclocation column is removed in the 9.2beta. However
 this breaks backward compatibility for a bunch of products, e.g.
 pgAdmin, phpPgAdmin, PgMDD etc.
 
 I'm not sure this is the best choice. Because each application with
 tablespace support will need additional check now to determine what
 way to use for obtaining tablespace location:
 pg_get_tablespace_location(oid) or tablespace.spclocation
 
 I'm aware of problems caused by this hard coded column. My proposal is
 to convert pg_tablespace to system view may be?
 

I don't see why it causes you so much trouble. You should already have
many locations in your code where you need to check the version to be
compatible with the latest major releases. I know pgAdmin does. So I
guess that one more is not a big deal.

And this change in PostgreSQL helps a lot DBAs who want to move
tablespaces (not really common work AFAIK, I agree).


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com




Re: [HACKERS] RANGE type, and its subtype parameter

2012-04-21 Thread Guillaume Lelarge
On Sat, 2012-04-21 at 07:03 +0530, Amit Kapila wrote:
 If I understood correctly the following query should give your answer:
 Select opcintype from pg_opclass where opcname = 'operator class name';

You're right, and my question was wrong. I finally found the SQL query I
was looking for.

Thanks.


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com




[HACKERS] RANGE type, and its subtype parameter

2012-04-20 Thread Guillaume Lelarge
Hi,

I'm working on adding support for range types in pgAdmin, and I have a
really hard time understanding the subtype parameter of a range type.
How can I find all the types associated with a specific operator class?
I'm pretty sure it's a really dumb question, but I'm completely lost
here.

Thanks.

Regards.


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com




Re: [HACKERS] Why can't I use pgxs to build a plpgsql plugin?

2012-04-16 Thread Guillaume Lelarge
On Mon, 2012-04-16 at 13:09 +0300, Heikki Linnakangas wrote:
 On 13.04.2012 19:17, Guillaume Lelarge wrote:
  On Thu, 2012-04-12 at 12:28 +0300, Heikki Linnakangas wrote:
  On 08.04.2012 11:59, Guillaume Lelarge wrote:
  There could be a good reason which would explain why we can't (or don't
  want to) do this, but I don't see it right now.
 
  Me neither, except a general desire to keep internals hidden. I propose
  the attached.
 
  Sounds good to me. I would love to see this happening in 9.2.
 
 Ok, committed. I fixed the .PHONY line as Tom pointed out, and changed 
 MSVC install.pm to also copy the header file.
 

Thanks.


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com




Re: [HACKERS] Why can't I use pgxs to build a plpgsql plugin?

2012-04-13 Thread Guillaume Lelarge
On Thu, 2012-04-12 at 12:28 +0300, Heikki Linnakangas wrote:
 On 08.04.2012 11:59, Guillaume Lelarge wrote:
  Hi,
 
  I recently wrote a plpgsql plugin. I wanted to enable the use of pgxs,
  to make it easier to compile the plugin, but I eventually found that I
  can't do that because the plpgsql.h file is not available in the include
  directory.
 
  I'm wondering if we shouldn't put the header files of plpgsql source
  code in the include directory. It would help compiling the PL/pgsql
  debugger, and profiler (and of course my own plugin).
 
 Yep, I just bumped into this myself, while trying to make pldebugger 
 module compilable with pgxs.
 
  There could be a good reason which would explain why we can't (or don't
  want to) do this, but I don't see it right now.
 
 Me neither, except a general desire to keep internals hidden. I propose 
 the attached.
 

Sounds good to me. I would love to see this happening in 9.2.

Thanks, Heikki.


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com




Re: [HACKERS] [COMMITTERS] pgsql: Add new replication mode synchronous_commit = 'write'.

2012-04-13 Thread Guillaume Lelarge

On 04/13/2012 08:15 PM, Kevin Grittner wrote:

Robert Haas robertmh...@gmail.com wrote:


In my view, remote_write seems a lot more clear than write


+1

I sure didn't understand it to mean remote_write when I read the
subject line.



Neither did I. So definitely +1.


--
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



[HACKERS] Why can't I use pgxs to build a plpgsql plugin?

2012-04-08 Thread Guillaume Lelarge
Hi,

I recently wrote a plpgsql plugin. I wanted to enable the use of pgxs,
to make it easier to compile the plugin, but I eventually found that I
can't do that because the plpgsql.h file is not available in the include
directory.

I'm wondering if we shouldn't put the header files of plpgsql source
code in the include directory. It would help compiling the PL/pgsql
debugger, and profiler (and of course my own plugin).

There could be a good reason which would explain why we can't (or don't
want to) do this, but I don't see it right now.

Thanks.

Regards.


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com




Re: [HACKERS] Bug in intarray?

2012-02-17 Thread Guillaume Lelarge
On Thu, 2012-02-16 at 19:27 -0500, Tom Lane wrote:
 Guillaume Lelarge guilla...@lelarge.info writes:
  This query:
  SELECT ARRAY[-1,3,1] & ARRAY[1, 2];
  should give {1} as a result.
 
  But, on HEAD (and according to his tests, on 9.0.6 and 9.1.2), it
  appears to give an empty array.
 
 Definitely a bug, and I'll bet it goes all the way back.
 
  Digging on this issue, another user (Julien Rouhaud) made an interesting
  comment on this line of code:
 
  if (i + j == 0 || (i + j > 0 && *(dr - 1) != db[j]))
 
  (line 159 of contrib/intarray/_int_tool.c, current HEAD)
 
  Apparently, the code tries to check the current value of the right side
  array with the previous value of the resulting array. Which clearly
  cannot work if there is no previous value in the resulting array.
 
  So I worked on a patch to fix this, as I think it is a bug (but I may be
  wrong). Patch is attached and fixes the issue AFAICT.
 
 Yeah, this code is bogus, but it's also pretty unreadable.  I think
 it's better to get rid of the inconsistently-used pointer arithmetic
 and the fundamentally wrong/irrelevant test on i+j, along the lines
 of the attached.
 

Completely agree.

Thank you.


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com




[HACKERS] Bug in intarray?

2012-02-16 Thread Guillaume Lelarge
Hi,

On a french PostgreSQL web forum, one of our users asked about a curious
behaviour of the intarray extension.

This query:
  SELECT ARRAY[-1,3,1] & ARRAY[1, 2];
should give {1} as a result.

But, on HEAD (and according to his tests, on 9.0.6 and 9.1.2), it
appears to give an empty array.

Digging on this issue, another user (Julien Rouhaud) made an interesting
comment on this line of code:

if (i + j == 0 || (i + j > 0 && *(dr - 1) != db[j]))

(line 159 of contrib/intarray/_int_tool.c, current HEAD)

Apparently, the code tries to check the current value of the right side
array with the previous value of the resulting array. Which clearly
cannot work if there is no previous value in the resulting array.

So I worked on a patch to fix this, as I think it is a bug (but I may be
wrong). Patch is attached and fixes the issue AFAICT.
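
To make the off-by-one concrete, here is a plain-C sketch (not the backend
code itself) of the corrected logic: intersecting two sorted int arrays with
duplicate removal, where an explicit "result still empty" check replaces the
unsafe peek at *(dr - 1):

```c
#include <stddef.h>

/* Intersection of two sorted int arrays, duplicates removed, mirroring
 * what inner_int_inter() computes.  The fix under discussion amounts to
 * "also emit the value when nothing has been written to the result yet",
 * expressed here as n == 0 instead of dereferencing dr - 1. */
size_t
sorted_inter(const int *da, size_t na, const int *db, size_t nb, int *dr)
{
	size_t		i = 0, j = 0, n = 0;

	while (i < na && j < nb)
	{
		if (da[i] < db[j])
			i++;
		else if (da[i] == db[j])
		{
			/* emit if result is empty, or value differs from last emitted */
			if (n == 0 || dr[n - 1] != db[j])
				dr[n++] = db[j];
			i++;
			j++;
		}
		else
			j++;
	}
	return n;
}
```

With the broken guard, the very first match can be skipped, which is exactly
why ARRAY[-1,3,1] & ARRAY[1,2] came back empty instead of {1}.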

Thanks.


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com
diff --git a/contrib/intarray/_int_tool.c b/contrib/intarray/_int_tool.c
index 79f018d..4d7a1f2 100644
--- a/contrib/intarray/_int_tool.c
+++ b/contrib/intarray/_int_tool.c
@@ -159,7 +159,7 @@ inner_int_inter(ArrayType *a, ArrayType *b)
 			i++;
 		else if (da[i] == db[j])
 		{
-			if (i + j == 0 || (i + j > 0 && *(dr - 1) != db[j]))
+			if (i + j == 0 || (i + j > 0 && (dr - ARRPTR(r)) == 0) || (i + j > 0 && *(dr - 1) != db[j]))
 *dr++ = db[j];
 			i++;
 			j++;



Re: [HACKERS] Standalone synchronous master

2011-12-26 Thread Guillaume Lelarge
On Mon, 2011-12-26 at 16:23 +0100, Magnus Hagander wrote:
 On Mon, Dec 26, 2011 at 15:59, Alexander Björnhagen
 alex.bjornha...@gmail.com wrote:
  Basically I like this whole idea, but I'd like to know why do you think 
  this functionality is required?
 
  How should a synchronous master handle the situation where all
  standbys have failed ?
 
  Well, I think this is one of those cases where you could argue either
  way. Someone caring more about high availability of the system will
  want to let the master continue and just raise an alert to the
  operators. Someone looking for an absolute guarantee of data
  replication will say otherwise.
 
 If you don't care about the absolute guarantee of data, why not just
 use async replication? It's still going to replicate the data over to
 the client as quickly as it can - which in the end is the same level
 of guarantee that you get with this switch set, isn't it?
 
  This setup does still guarantee that if the master fails, then you can
  still fail over to the standby without any possible data loss because
  all data is synchronously replicated.
 
 Only if you didn't have a network hitch, or if your slave was down.
 
 Which basically means it doesn't *guarantee* it.
 

It doesn't guarantee it, but it increases the master's availability.
That's the kind of customization some users would like to have. Though I
find it weird to introduce another GUC there. Why not add a new enum
value to synchronous_commit, such as local_only_if_slaves_unavailable
(yeah, the enum value is completely stupid, but you get my point).


-- 
Guillaume
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com
  PostgreSQL Sessions #3: http://www.postgresql-sessions.org




Re: [HACKERS] GSoC 2011 - Mentors? Projects?

2011-03-28 Thread Guillaume Lelarge
On 26/03/2011 02:43, Tomas Vondra wrote:
 On 26.3.2011 02:05, Joshua Berkus wrote:
 Tomas,

 I spoke to a teacher from a local university last week, mainly as we
 were looking for a place where a local PUG could meet regularly. I
 realized this could be a good opportunity to head-hunt some students
 to
 participate in this GSoC. Are we still interested in new students?

 Yes, please!   We have had students from Charles University several
 times before, and would be glad to have more.  The wiki page has
 links to the information about the program.  Talk to Zdenek if you
 have more questions.

 
 I know Zdenek was mentoring some students in the previous years, but
 he's been a bit hard to reach recently. And the deadline is near.
 
 I've read some info about the program on a wiki, but I'm not sure what
 should the students do. Let's say they will read the list of project
 ideas on the wiki, and they'll choose one or two of them. What should
 they do next? Should they write to the pgsql-students mailing list?
 

They could write to the pgsql-students list. There are already some
threads about items to work on.

 I guess most of the students won't have much experience with PostgreSQL,
 and most of the ideas is described just very briefly, so they'll need
 help with the proposal.
 

Sure. Two lists AFAICT, pgsql-students and pgsql-hackers.


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [HACKERS] Comments on SQL/Med objects

2011-03-23 Thread Guillaume Lelarge
Le 23/03/2011 17:53, Tom Lane a écrit :
 Robert Haas robertmh...@gmail.com writes:
 On Tue, Mar 22, 2011 at 6:23 PM, Guillaume Lelarge
 guilla...@lelarge.info wrote:
 While working on adding support for SQL/Med objects to pgAdmin, I'm
 quite surprised to see there is no way to add comments to SQL/Med
 objects. Is this on purpose or is it just something that was simply missed?
 
 I think it's an oversight.  We should probably fix this.
 
 Yeah, I had a private TODO about that.  I'd like to see if we can
 refactor the grammar to eliminate some of the duplication there
 as well as the potential for oversights of this sort.  I believe
 that USER MAPPINGs are missing from ObjectType as well as a bunch
 of other basic places ...
 

OK, great. Thanks for your answers.


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



[HACKERS] Comments on SQL/Med objects

2011-03-22 Thread Guillaume Lelarge
Hi,

While working on adding support for SQL/Med objects to pgAdmin, I'm
quite surprised to see there is no way to add comments to SQL/Med
objects. Is this on purpose or is it just something that was simply missed?

Thanks.


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [HACKERS] and it's not a bunny rabbit, either

2011-01-01 Thread Guillaume Lelarge
Le 01/01/2011 06:05, Robert Haas a écrit :
 On Fri, Dec 31, 2010 at 8:48 AM, Peter Eisentraut pete...@gmx.net wrote:
 On tor, 2010-12-30 at 11:03 -0500, Robert Haas wrote:
 No, quite the opposite.  With the other approach, you needed:

 constraints cannot be used on views
 constraints cannot be used on composite types
 constraints cannot be used on TOAST tables
 constraints cannot be used on indexes
 constraints cannot be used on foreign tables

 With this, you just need:

 constraints can only be used on tables

 At the beginning of this thread you said that the error messages should
 focus on what you tried to do, not what you could do instead.
 
 Yeah, and I still believe that.  I'm having difficulty coming up with
 a workable approach, though.  It would be simple enough if we could
 write:
 
 /* translator: first %s is a feature, second %s is a relation type */
 %s cannot be used on %s
 
 ...but I think this is likely to cause some translation headaches.
 

Actually, this is simply not translatable in some languages. We had the
same issue on pgAdmin, and we resolved this by having quite a big number
of new strings to translate. It's harder one time for the translator, but
it results in a much better experience for the user.

 Also, in this particular case, the user could very well assume that a
 TOAST table or a foreign table is a table.
 
 There's a limited amount we can do about confused users, but it is
 true that the negative phrasing is better for that case.
 

It's at least better for the translator.


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [HACKERS] and it's not a bunny rabbit, either

2011-01-01 Thread Guillaume Lelarge
Le 01/01/2011 16:00, Robert Haas a écrit :
 On Sat, Jan 1, 2011 at 9:53 AM, Guillaume Lelarge
 guilla...@lelarge.info wrote:
 Le 01/01/2011 06:05, Robert Haas a écrit :
 On Fri, Dec 31, 2010 at 8:48 AM, Peter Eisentraut pete...@gmx.net wrote:
 On tor, 2010-12-30 at 11:03 -0500, Robert Haas wrote:
 No, quite the opposite.  With the other approach, you needed:

 constraints cannot be used on views
 constraints cannot be used on composite types
 constraints cannot be used on TOAST tables
 constraints cannot be used on indexes
 constraints cannot be used on foreign tables

 With this, you just need:

 constraints can only be used on tables

 At the beginning of this thread you said that the error messages should
 focus on what you tried to do, not what you could do instead.

 Yeah, and I still believe that.  I'm having difficulty coming up with
 a workable approach, though.  It would be simple enough if we could
 write:

 /* translator: first %s is a feature, second %s is a relation type */
 %s cannot be used on %s

 ...but I think this is likely to cause some translation headaches.

 Actually, this is simply not translatable in some languages. We had the
 same issue on pgAdmin, and we resolved this by having quite a big number
 of new strings to translate. Harder one time for the translator, but
 results in a much better experience for the user.
 
 Is it in any better if we write one string per feature, like this:
 
 constraints cannot be used on %s
 triggers cannot be used on %s
 
 ...where %s is a plural object type (views, foreign tables, etc.).
 

If %s were a singular object, it would be an issue for French. But for
the plural form, it won't be an issue. Not sure it would be the same in
other languages. IIRC from my student years, German could have an issue
here.


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [HACKERS] pg_primary_conninfo

2010-12-28 Thread Guillaume Lelarge
Le 28/12/2010 16:34, Tom Lane a écrit :
 Magnus Hagander mag...@hagander.net writes:
 On Dec 28, 2010 3:58 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 What's the use case?  And aren't there security reasons to NOT expose
 that?  It might contain a password for instance.
 
 Good point - should be made superuser only.
 
 I'm still wondering what's the actual use-case for exposing this inside
 SQL.  Those with a legitimate need-to-know can look at the slave
 server's config files, no?
 

This is something I wanted to have in 9.0 when I coded some Hot Standby
features in pgAdmin. Knowing which IP the master is on would let pgAdmin
offer to register the master node for the user.

It's also interesting to get the lag between master and slave. As soon as
I'm connected to a slave, I could connect to the master and compute the
lag between them, something I can't do right now in pgAdmin.


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [HACKERS] pg_primary_conninfo

2010-12-28 Thread Guillaume Lelarge
Le 28/12/2010 17:36, Tom Lane a écrit :
 Guillaume Lelarge guilla...@lelarge.info writes:
 Le 28/12/2010 16:34, Tom Lane a écrit :
 I'm still wondering what's the actual use-case for exposing this inside
 SQL.  Those with a legitimate need-to-know can look at the slave
 server's config files, no?
 
 This is something I wanted to have in 9.0 when I coded some Hot Standby
 features in pgAdmin. Knowing which IP the master is on would let pgAdmin
 offer to register the master node for the user.
 
 It's also interesting to get the lag between master and slave. As soon as
 I'm connected to a slave, I could connect to the master and compute the
 lag between them, something I can't do right now in pgAdmin.
 
 The proposed primary_conninfo seems like a pretty awful solution to
 those problems, though.
 

I would say not the best one, but better than what I have now :)

 1. It'll have to be restricted to superusers, therefore ordinary
 users on the slave can't actually make use of it.
 

pgAdmin's users usually connect as superusers.

 2. It's not what you want, since you don't want to connect as the
 replication user.  Therefore, you'd have to start by parsing out
 the parts you do need.  Expecting every client to include conninfo
 parsing logic doesn't seem cool to me.
 
 I can see the point of, say, a primary_host_address() function returning
 inet, which would be way better on both those dimensions than the
 current proposal.  But I'm not sure what else would be needed.
 

Yeah, it would be better that way. I'm actually interested in Magnus's
patch because, during the 9.0 development phase, I had planned to parse
primary_conninfo, until I found I could not get this value with SHOW or
current_setting().

But, actually, what I really need is the host and port. This way, I could
connect to the master node with the same user and password that were
used on the slave node.


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [HACKERS] pg_primary_conninfo

2010-12-28 Thread Guillaume Lelarge
Le 28/12/2010 17:50, Gurjeet Singh a écrit :
 On Tue, Dec 28, 2010 at 11:36 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 

 I can see the point of, say, a primary_host_address() function returning
 inet, which would be way better on both those dimensions than the
 current proposal.  But I'm not sure what else would be needed.


 +1, since it bypasses security risks associated with exposing
 username/password.
 
 Ability to see port number will be a useful addition.
 
 Another case to consider is what if slave is connected to a local server
 over unix-domain sockets? Returning NULL might make it ambiguous with the
 case where the instance has been promoted out of standby.
 

The host should be the socket file path.


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [HACKERS] pg_primary_conninfo

2010-12-28 Thread Guillaume Lelarge
Le 28/12/2010 18:12, Robert Haas a écrit :
 On Dec 28, 2010, at 10:34 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 I'm still wondering what's the actual use-case for exposing this inside
 SQL.  Those with a legitimate need-to-know can look at the slave
 server's config files, no?
 
 SQL access is frequently more convenient, though.  Although maybe now that 
 we've made recovery.conf use the GUC lexer we oughta continue in that vein 
 and expose those parameters as PGC_INTERNAL GUCs rather than inventing a new 
 function for it...
 

That was the first thing I wanted. Knowing the trigger file, for example,
would be quite useful for pgAdmin and pgPool.


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [HACKERS] pg_primary_conninfo

2010-12-28 Thread Guillaume Lelarge
Le 28/12/2010 19:30, Tom Lane a écrit :
 Gurjeet Singh singh.gurj...@gmail.com writes:
 On Tue, Dec 28, 2010 at 12:12 PM, Robert Haas robertmh...@gmail.com wrote:
 SQL access is frequently more convenient, though.  Although maybe now that
 we've made recovery.conf use the GUC lexer we oughta continue in that vein
 and expose those parameters as PGC_INTERNAL GUCs rather than inventing a new
 function for it...
 
 +1 for SQL access, but exposing it via pg_settings opens up the security
 problem as there might be sensitive info in those GUCs.
 
 IIRC we do have a GUC property that hides the value from non-superusers,
 so we could easily have a GUC that is equivalent to the proposed
 pg_primary_conninfo function.  Of course this does nothing for my
 objections to the function.  Also, I'm not sure how we'd deal with the
 state-dependency aspect of it (ie, value changes once you exit recovery
 mode).
 

We already have superuser-only GUCs:

b1=> show data_directory;
ERROR:  must be superuser to examine data_directory

We only need to do the same for primary_conninfo and trigger_file (as I
remember it, they are the only ones needing this).


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [HACKERS] A small update for postgresql.conf.sample

2010-09-27 Thread Guillaume Lelarge
Le 27/09/2010 15:18, Robert Haas a écrit :
 2010/9/27 Devrim GÜNDÜZ dev...@gunduz.org:
 Attached is a small patch that adds a few comments for the settings that
 require restart. Applicable for 9.0+.
 
 I'm not sure this is worth back-patching, but I've committed it to the
 master branch.
 

+1 for backpatching.

Otherwise, the absence of a "requires restart" comment doesn't mean
anything (i.e., it doesn't tell you whether a restart is required or not).

Actually, I don't see any reason not to backpatch it.


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [HACKERS] antisocial things you can do in git (but not CVS)

2010-07-24 Thread Guillaume Lelarge
Le 21/07/2010 09:53, Dave Page a écrit :
 On Tue, Jul 20, 2010 at 8:12 PM, Peter Eisentraut pete...@gmx.net wrote:
 My preference would be to stick to a style where we identify the
 committer using the author tag and note the patch author, reviewers,
 whether the committer made changes, etc. in the commit message.  A
 single author field doesn't feel like enough for our workflow, and
 having a mix of authors and committers in the author field seems like
 a mess.

 Well, I had looked forward to actually putting the real author into the
 author field.
 
 I hadn't realised that was possible until Guillaume did so on his
 first commit to the new pgAdmin GIT repo. It seems to work nicely:
 
 http://git.postgresql.org/gitweb?p=pgadmin3.git;a=commit;h=08e2826d90129bd4e4b3b7462bab682dd6a703e4
 

It's one of the nice things with git. So, I'm eager to use it with the
pgAdmin repo.


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [HACKERS] accentuated letters in text-search

2010-07-21 Thread Guillaume Lelarge
Le 21/07/2010 23:23, Andreas Joseph Krogh a écrit :
 [...]
 I was googling for how to create a text-seach-config with the following
 properties:
 - Map unicode accentuated letters to an un-accentuated equivalent
 - No stop-words
 - Lowercase all words
 
 And came over this from -general:
 http://www.techienuggets.com/Comments?tx=106813
 
 Then after some more googling I found this:
 http://www.sai.msu.su/~megera/wiki/unaccent
 
 Any reason the unaccent dict. and function did not make it in 9.0?
 

Well, AFAICT, it's available in 9.0:

  http://www.postgresql.org/docs/9.0/static/unaccent.html


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [HACKERS] SHOW TABLES

2010-07-15 Thread Guillaume Lelarge
Le 15/07/2010 17:48, Joshua D. Drake a écrit :
 On Thu, 2010-07-15 at 16:20 +0100, Simon Riggs wrote:
 On Thu, 2010-07-15 at 11:05 -0400, Tom Lane wrote:
 Simon Riggs si...@2ndquadrant.com writes:
 The biggest turn off that most people experience when using PostgreSQL
 is that psql does not support memorable commands.

 I would like to implement the following commands as SQL, allowing them
 to be used from any interface.

 SHOW TABLES
 SHOW COLUMNS
 SHOW DATABASES

 This has been discussed before, and rejected before.  Please see
 archives.

 Many years ago. I think it's worth revisiting now in light of the number
 of people now joining the PostgreSQL community and the greater
 prevalence other ways of doing it. The world has changed, we have not.

 I'm not proposing any change in function, just a simpler syntax to allow
 the above information to be available, for newbies.

 Just for the record, I've never ever met anyone that said Oh, this \d
 syntax makes so much sense. I'm a real convert to Postgres now you've
 shown me this. The reaction is always the opposite one; always
 negative. Which detracts from our efforts elsewhere.
 
 I have to agree with Simon here. \d is ridiculous for the common user.
 
 SHOW TABLES, SHOW COLUMNS makes a lot of sense. Just has something like
 DESCRIBE TABLE foo makes a lot more sense than \d.
 

And would you add the complete syntax? I mean:

  SHOW [OPEN] TABLES [FROM db_name] [LIKE 'pattern']

I'm wondering what one can do with the [FROM db_name] clause :)


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [HACKERS] Cannot cancel the change of a tablespace

2010-07-01 Thread Guillaume Lelarge
Le 30/06/2010 06:53, Guillaume Lelarge a écrit :
 Le 30/06/2010 05:25, Tom Lane a écrit :
 Robert Haas robertmh...@gmail.com writes:
 On Tue, Jun 29, 2010 at 9:42 PM, Bruce Momjian br...@momjian.us wrote:
 So this is not something we want fixed for 9.0, as indicated by Simon?
 I don't see the patch on the commit-fest page yet.

 I tend to think we should fix it for 9.0, but could be talked out of
 it if someone has a compelling argument to make.

 Er, maybe I lost count, but I thought you were the one objecting to
 the patch.

 
 You're right. Robert questioned the use of CHECK_FOR_INTERRUPTS() in
 code available in the src/port directory. I don't see what issue could
 result from this. He also said that whatever would be committed should be
 back-patched.
 
 I can still add it for the next commit fest, I just don't want this
 patch to get lost. Though I won't be able to do this before getting back
 from work.
 

Finally, I added it to the next commit fest. Robert can work on it
before if he wants to (or has the time).

https://commitfest.postgresql.org/action/patch_view?id=331


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [HACKERS] Cannot cancel the change of a tablespace

2010-07-01 Thread Guillaume Lelarge
Le 01/07/2010 17:54, Robert Haas a écrit :
 On Thu, Jul 1, 2010 at 10:18 AM, Robert Haas robertmh...@gmail.com wrote:
 On Thu, Jul 1, 2010 at 5:30 AM, Guillaume Lelarge
 guilla...@lelarge.info wrote:
 On Tue, Jun 29, 2010 at 9:42 PM, Bruce Momjian br...@momjian.us wrote:
 So this is not something we want fixed for 9.0, as indicated by Simon?
 I don't see the patch on the commit-fest page yet.

 Finally, I added it to the next commit fest. Robert can work on it
 before if he wants to (or has the time).

 I'd been avoiding working on this because Simon had said he was going
 to commit it, but I can pick it up.  I've committed and back-patched
 (to 8.0, as 7.4 does not have tablespaces) the fix for ALTER TABLE ..
 SET TABLESPACE.  I'll take a look at the rest of it as well.
 
 It looks like we have two reasonable choices here:
 
 - We could backpatch this only to 8.4, where ALTER DATABASE .. SET
 TABLESPACE was introduced.
 
 - Or, since this also makes it easier to interrupt CREATE DATABASE new
 TEMPLATE = some_big_database, we could back-patch it all the way to
 8.1, which is the first release where we use copydir() rather than
 invoking cp -r (except on Windows, where copydir() has always been
 used, but releases  8.2 aren't supported on Windows anyway).
 
 Since I can't remember anyone complaining about difficulty
 interrupting CREATE DATABASE, I'm inclined to go back only to 8.4, and
 will do that a bit later.
 

I agree that a backpatch to 8.4 seems enough.


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [HACKERS] Cannot cancel the change of a tablespace

2010-07-01 Thread Guillaume Lelarge
Le 01/07/2010 22:13, Robert Haas a écrit :
 On Thu, Jul 1, 2010 at 12:11 PM, Guillaume Lelarge
 guilla...@lelarge.info wrote:
 Le 01/07/2010 17:54, Robert Haas a écrit :
 On Thu, Jul 1, 2010 at 10:18 AM, Robert Haas robertmh...@gmail.com wrote:
 On Thu, Jul 1, 2010 at 5:30 AM, Guillaume Lelarge
 guilla...@lelarge.info wrote:
 On Tue, Jun 29, 2010 at 9:42 PM, Bruce Momjian br...@momjian.us 
 wrote:
 So this is not something we want fixed for 9.0, as indicated by Simon?
 I don't see the patch on the commit-fest page yet.

 Finally, I added it to the next commit fest. Robert can work on it
 before if he wants to (or has the time).

 I'd been avoiding working on this because Simon had said he was going
 to commit it, but I can pick it up.  I've committed and back-patched
 (to 8.0, as 7.4 does not have tablespaces) the fix for ALTER TABLE ..
 SET TABLESPACE.  I'll take a look at the rest of it as well.

 It looks like we have two reasonable choices here:

 - We could backpatch this only to 8.4, where ALTER DATABASE .. SET
 TABLESPACE was introduced.

 - Or, since this also makes it easier to interrupt CREATE DATABASE new
 TEMPLATE = some_big_database, we could back-patch it all the way to
 8.1, which is the first release where we use copydir() rather than
 invoking cp -r (except on Windows, where copydir() has always been
 used, but releases  8.2 aren't supported on Windows anyway).

 Since I can't remember anyone complaining about difficulty
 interrupting CREATE DATABASE, I'm inclined to go back only to 8.4, and
 will do that a bit later.


 I agree that a backpatch to 8.4 seems enough.
 
 Done.
 

Thanks, Robert.


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [HACKERS] Cannot cancel the change of a tablespace

2010-06-29 Thread Guillaume Lelarge
Le 23/06/2010 23:29, Guillaume Lelarge a écrit :
 Le 23/06/2010 22:54, Tom Lane a écrit :
 Robert Haas robertmh...@gmail.com writes:
 On Mon, Jun 21, 2010 at 12:46 PM, Guillaume Lelarge
 guilla...@lelarge.info wrote:
 I added a CHECK_FOR_INTERRUPTS call in the copy_relation_data(),
 copy_dir(), and copy_file() functions. Works for me on ALTER TABLE ...
 SET TABLESPACE and ALTER DATABASE ... SET TABLESPACE, in 9.0 and 8.4.

 Adding a CHECK_FOR_INTERRUPTS() to copy_relation_data seems like it
 ought to be OK (though I haven't tested), but copydir() is in
 src/port, and I fear that putting CHECK_FOR_INTERRUPTS() in there
 might cause problems.

 copydir.c is already backend-specific thanks to all the ereport calls.
 If we ever tried to make it usable in frontend code, we could easily
 deal with CHECK_FOR_INTERRUPTS() via #ifndef FRONTEND --- changing the
 error management would be far more painful.

 
 I'm not sure I get it right. Do I need to do something on the patch so
 that it can get committed?
 

Still not sure what to do right now for this patch :)

Could it be applied, or should I work on it? (And if the latter, what
should I do?)

Thanks.



-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [HACKERS] Cannot cancel the change of a tablespace

2010-06-29 Thread Guillaume Lelarge
Le 30/06/2010 05:25, Tom Lane a écrit :
 Robert Haas robertmh...@gmail.com writes:
 On Tue, Jun 29, 2010 at 9:42 PM, Bruce Momjian br...@momjian.us wrote:
 So this is not something we want fixed for 9.0, as indicated by Simon?
 I don't see the patch on the commit-fest page yet.
 
 I tend to think we should fix it for 9.0, but could be talked out of
 it if someone has a compelling argument to make.
 
 Er, maybe I lost count, but I thought you were the one objecting to
 the patch.
 

You're right. Robert questioned the use of CHECK_FOR_INTERRUPTS() in
code available in the src/port directory. I don't see what issue could
result from this. He also said that whatever would be committed should be
back-patched.

I can still add it for the next commit fest, I just don't want this
patch to get lost. Though I won't be able to do this before getting back
from work.

Thanks.


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [HACKERS] Cannot cancel the change of a tablespace

2010-06-23 Thread Guillaume Lelarge
Le 23/06/2010 22:54, Tom Lane a écrit :
 Robert Haas robertmh...@gmail.com writes:
 On Mon, Jun 21, 2010 at 12:46 PM, Guillaume Lelarge
 guilla...@lelarge.info wrote:
 I added a CHECK_FOR_INTERRUPTS call in the copy_relation_data(),
 copy_dir(), and copy_file() functions. Works for me on ALTER TABLE ...
 SET TABLESPACE and ALTER DATABASE ... SET TABLESPACE, in 9.0 and 8.4.
 
 Adding a CHECK_FOR_INTERRUPTS() to copy_relation_data seems like it
 ought to be OK (though I haven't tested), but copydir() is in
 src/port, and I fear that putting CHECK_FOR_INTERRUPTS() in there
 might cause problems.
 
 copydir.c is already backend-specific thanks to all the ereport calls.
 If we ever tried to make it usable in frontend code, we could easily
 deal with CHECK_FOR_INTERRUPTS() via #ifndef FRONTEND --- changing the
 error management would be far more painful.
 

I'm not sure I get it right. Do I need to do something on the patch so
that it can get committed?


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [HACKERS] system views for walsender activity

2010-06-22 Thread Guillaume Lelarge
Le 22/06/2010 06:40, Takahiro Itagaki a écrit :
 [...]
 Tom Lane t...@sss.pgh.pa.us wrote:
 
 I'm of the opinion that this is a 9.1 problem.  It needs more thought
 than we can put into it now --- one obvious question is what about
 monitoring on the slave side?  Another is who should be able to see the
 data?
 
 Sure. We should research user's demands for monitoring and management
 of replication. I'll report some voices from users as of this moment:
 
 * Managers often ask DBAs How long standby servers are behind the master?
   We should provide such methods for DBAs. We have pg_xlog_location()
   functions, but they should be improved for:
 - The returned values are xxx/yyy texts, but more useful information
   is the difference of two values. Subtraction functions are required.
 - For easier management, the master server should provide not only
   sent/flush locations but also received/replayed locations for each
   standby servers. Users don't want to access both master and slaves.
 
 * Some developers want to pause and restart replication from the master
   server. They're going to use replication for application version
   managements. They'll pause all replications, and test their new features
   at the master, and restart replication to spread the changes to slaves.
 

I agree on these two.

Something I found lacking when I added support for Hot Standby /
Streaming Replication in pgAdmin (that was a really small patch, there
was not a lot to do) was that one cannot get the actual value of each
recovery.conf parameter. Try a SHOW primary_conninfo; and it will
juste reply that primary_conninfo is an unknown parameter. I already
talked about this to Heikki, but didn't get a chance to actually look at
the code.


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [HACKERS] system views for walsender activity

2010-06-22 Thread Guillaume Lelarge
Le 22/06/2010 12:42, Simon Riggs a écrit :
 On Tue, 2010-06-22 at 12:19 +0200, Guillaume Lelarge wrote:
 Shamefully simple: I only added some information about the server's
 properties. See
 http://www.pgadmin.org/images/visualtour12/visualtour08.jpg. We only
 display whether the server is in recovery, and the
 result of the two admin functions (receive and replay location).
 
 If you store the is-in-Recovery result you could set the .enabled
 property of many of the dialog boxes. I think its going to be painful
 for people to attempt to submit a DDL command and get an error.
 

That's what I first thought. But it would be weird to disable the OK
button of the property dialogs only for hot standby servers, and not
when a user doesn't have the permission. At least, that was the
reasoning I had at the time.

 Too bad the other admin functions aren't there, I could have used them
 (and hope to do so in 9.1). Too bad also we cannot know the primary
 server from a connection to the slave (that's why I would love to get
 the value of
 primary_conninfo, to find the alias/IP of the primary server).
 
 Agreed
 

:)


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [HACKERS] system views for walsender activity

2010-06-22 Thread Guillaume Lelarge
Le 22/06/2010 11:41, Simon Riggs a écrit :
 On Tue, 2010-06-22 at 09:54 +0200, Guillaume Lelarge wrote:
 I added support for Hot Standby /
 Streaming Replication in pgAdmin (that was a really small patch, there
 was not a lot to do)
 
 Well done.
 
 Does this mean that pgAdmin has a read only mode now?
 

Nope, it does not really have one. Though I intend to work on making
pgAdmin more aware of the actual rights of the connected user (letting
him open the create-table dialog when we already know he cannot create
a table is an issue, at least to me).

 What are the details of that support? I couldn't easily see the commits
 in the pgadmin list.
 

Shamefully simple: I only added some information about the server's
properties. See
http://www.pgadmin.org/images/visualtour12/visualtour08.jpg. We only
display whether the server is in recovery, and the
result of the two admin functions (receive and replay location). Too bad
the other admin functions aren't there, I could have used them (and hope
to do so in 9.1). Too bad also we cannot know the primary server from a
connection to the slave (that's why I would love to get the value of
primary_conninfo, to find the alias/IP of the primary server).


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



[HACKERS] Cannot cancel the change of a tablespace

2010-06-21 Thread Guillaume Lelarge
Hi,

Today, I tried to cancel the change of a tablespace for a table (ALTER
TABLE ... SET TABLESPACE). I got "Cancel request sent", but the query
continued and finally succeeded. That was a big problem for my customer,
so I wanted to look into it. I took a look at the source code and found
that we didn't check for interrupts in this part of the code. I added
the checks, and it now behaves as I wanted.

I added a CHECK_FOR_INTERRUPTS call in the copy_relation_data(),
copy_dir(), and copy_file() functions. Works for me on ALTER TABLE ...
SET TABLESPACE and ALTER DATABASE ... SET TABLESPACE, in 9.0 and 8.4.

Not sure we really want that change, and it doesn't feel like a bug to
me. Should I add it to the next commitfest?

Comments?


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com
Index: src/backend/commands/tablecmds.c
===
RCS file: /opt/cvsroot_postgresql/pgsql/src/backend/commands/tablecmds.c,v
retrieving revision 1.330
diff -c -p -c -r1.330 tablecmds.c
*** src/backend/commands/tablecmds.c	28 Apr 2010 16:10:41 -	1.330
--- src/backend/commands/tablecmds.c	21 Jun 2010 16:33:30 -
*** copy_relation_data(SMgrRelation src, SMg
*** 7049,7054 
--- 7049,7057 
  
  	for (blkno = 0; blkno < nblocks; blkno++)
  	{
+ /* If we got a cancel signal during the copy of the data, quit */
+ CHECK_FOR_INTERRUPTS();
+ 
  		smgrread(src, forkNum, blkno, buf);
  
  		/* XLOG stuff */
Index: src/port/copydir.c
===
RCS file: /opt/cvsroot_postgresql/pgsql/src/port/copydir.c,v
retrieving revision 1.36
diff -c -p -c -r1.36 copydir.c
*** src/port/copydir.c	1 Mar 2010 14:54:00 -	1.36
--- src/port/copydir.c	21 Jun 2010 16:33:30 -
***
*** 23,28 
--- 23,29 
  #include <sys/stat.h>
  
  #include "storage/fd.h"
+ #include "miscadmin.h"
  
  /*
   *	On Windows, call non-macro versions of palloc; we can't reference
*** copydir(char *fromdir, char *todir, bool
*** 67,72 
--- 68,76 
  
  	while ((xlde = ReadDir(xldir, fromdir)) != NULL)
  	{
+ /* If we got a cancel signal during the copy of the directory, quit */
+ CHECK_FOR_INTERRUPTS();
+ 
  		struct stat fst;
  
  		if (strcmp(xlde->d_name, ".") == 0 ||
*** copy_file(char *fromfile, char *tofile)
*** 172,177 
--- 176,184 
  	 */
  	for (offset = 0;; offset += nbytes)
  	{
+ /* If we got a cancel signal during the copy of the file, quit */
+ CHECK_FOR_INTERRUPTS();
+ 
  		nbytes = read(srcfd, buffer, COPY_BUF_SIZE);
  		if (nbytes < 0)
  			ereport(ERROR,

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Using the new libpq connection functions in PostgreSQL binaries

2010-01-31 Thread Guillaume Lelarge
Hi,

I worked on a patch to make PostgreSQL binaries use the new
PQconnectdbParams() libpq functions. I tried to mimic the way Joe Conway
changed my previous patch.

I know I'm way over the deadline for this commitfest. I couldn't do it
before because my previous patch (in this commitfest) proposed two
methods for the new connection functions (a one-array method and a
two-arrays method). Joe chose the two-arrays method. Anyway, I would
understand if it gets postponed to the first commitfest for 9.1.

Regards.


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com
Index: contrib/oid2name/oid2name.c
===
RCS file: /opt/cvsroot_postgresql/pgsql/contrib/oid2name/oid2name.c,v
retrieving revision 1.36
diff -c -p -c -r1.36 oid2name.c
*** contrib/oid2name/oid2name.c	11 Jun 2009 14:48:51 -	1.36
--- contrib/oid2name/oid2name.c	31 Jan 2010 01:36:38 -
*** sql_conn(struct options * my_opts)
*** 301,306 
--- 301,308 
  	PGconn	   *conn;
  	char	   *password = NULL;
  	bool		new_pass;
+ 	const char *keywords[] = {"host", "port", "dbname", "user",
+ 			  "password", "application_name", NULL};
  
  	/*
  	 * Start the connection.  Loop until we have a password if requested by
*** sql_conn(struct options * my_opts)
*** 308,321 
  	 */
  	do
  	{
  		new_pass = false;
! 		conn = PQsetdbLogin(my_opts->hostname,
! 			my_opts->port,
! 			NULL,		/* options */
! 			NULL,		/* tty */
! 			my_opts->dbname,
! 			my_opts->username,
! 			password);
  		if (!conn)
  		{
  			fprintf(stderr, "%s: could not connect to database %s\n",
--- 310,327 
  	 */
  	do
  	{
+ const char *values[] = {
+   my_opts->hostname,
+   my_opts->port,
+   my_opts->dbname,
+   my_opts->username,
+   password,
+   "oid2name",
+   NULL
+   };
+ 
  		new_pass = false;
! conn = PQconnectdbParams(keywords, values);
  		if (!conn)
  		{
  			fprintf(stderr, "%s: could not connect to database %s\n",
Index: contrib/pgbench/pgbench.c
===
RCS file: /opt/cvsroot_postgresql/pgsql/contrib/pgbench/pgbench.c,v
retrieving revision 1.96
diff -c -p -c -r1.96 pgbench.c
*** contrib/pgbench/pgbench.c	6 Jan 2010 01:30:03 -	1.96
--- contrib/pgbench/pgbench.c	31 Jan 2010 01:41:45 -
*** doConnect(void)
*** 345,350 
--- 345,352 
  	PGconn	   *conn;
  	static char *password = NULL;
  	bool		new_pass;
+ 	const char *keywords[] = {"host", "port", "options", "tty", "dbname", "user",
+ 			  "password", "application_name", NULL};
  
  	/*
  	 * Start the connection.  Loop until we have a password if requested by
*** doConnect(void)
*** 352,361 
  	 */
  	do
  	{
  		new_pass = false;
! 
! 		conn = PQsetdbLogin(pghost, pgport, pgoptions, pgtty, dbName,
! 			login, password);
  		if (!conn)
  		{
  			fprintf(stderr, "Connection to database \"%s\" failed\n",
--- 354,373 
  	 */
  	do
  	{
+ const char *values[] = {
+   pghost,
+   pgport,
+   pgoptions,
+   pgtty,
+   dbName,
+   login,
+   password,
+   "pgbench",
+   NULL
+   };
+ 
  		new_pass = false;
! conn = PQconnectdbParams(keywords, values);
  		if (!conn)
  		{
  			fprintf(stderr, "Connection to database \"%s\" failed\n",
Index: contrib/vacuumlo/vacuumlo.c
===
RCS file: /opt/cvsroot_postgresql/pgsql/contrib/vacuumlo/vacuumlo.c,v
retrieving revision 1.44
diff -c -p -c -r1.44 vacuumlo.c
*** contrib/vacuumlo/vacuumlo.c	2 Jan 2010 16:57:33 -	1.44
--- contrib/vacuumlo/vacuumlo.c	31 Jan 2010 01:44:55 -
*** vacuumlo(char *database, struct _param *
*** 70,75 
--- 70,77 
  	int			i;
  	static char *password = NULL;
  	bool		new_pass;
+ 	const char *keywords[] = {"host", "port", "dbname", "user",
+ 			  "password", "application_name", NULL};
  
  	if (param->pg_prompt == TRI_YES && password == NULL)
  		password = simple_prompt("Password: ", 100, false);
*** vacuumlo(char *database, struct _param *
*** 80,94 
  	 */
  	do
  	{
  		new_pass = false;
! 
! 		conn = PQsetdbLogin(param->pg_host,
! 			param->pg_port,
! 			NULL,
! 			NULL,
! 			database,
! 			param->pg_user,
! 			password);
  		if (!conn)
  		{
  			fprintf(stderr, "Connection to database \"%s\" failed\n",
--- 82,99 
  	 */
  	do
  	{
+ const char *values[] = {
+   param->pg_host,
+   param->pg_port,
+   database,
+   param->pg_user,
+   password,
+   "vacuumlo",
+   NULL
+   };
+ 
  		new_pass = false;
!

Re: [HACKERS] Using the new libpq connection functions in PostgreSQL binaries

2010-01-31 Thread Guillaume Lelarge
On 31/01/2010 13:39, Magnus Hagander wrote:
 On Sun, Jan 31, 2010 at 09:34, Guillaume Lelarge guilla...@lelarge.info 
 wrote:
 Hi,

 I worked on a patch to make PostgreSQL binaries use the new
 PQconnectdbParams() libpq functions. I tried to mimic the way Joe Conway
 changed my previous patch.

 I know I'm way over the deadline for this commitfest. I couldn't do it
 before because my previous patch (on this commit fest) proposed two
 methods to do the new connection functions (a one array method, and a
 two-arrays method). Joe chose the two arrays method. Anyways, I would
 understand if it gets postponed to the first commitfest for 9.1.
 
 I think this can reasonably be seen as the final step of that patch,
 rather than a completely new feature. Please add it to this CF - we
 can always remove it if too many others object ;)
 

Done (https://commitfest.postgresql.org/action/patch_view?id=278). Thanks.


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com



Re: [HACKERS] Using the new libpq connection functions in PostgreSQL binaries

2010-01-31 Thread Guillaume Lelarge
On 31/01/2010 17:35, Tom Lane wrote:
 Guillaume Lelarge guilla...@lelarge.info writes:
 
   */
  do
  {
 + const char *values[] = {
 +   my_opts->hostname,
 +   my_opts->port,
 +   my_opts->dbname,
 +   my_opts->username,
 +   password,
 +   "oid2name",
 +   NULL
 +   };
 + 
  new_pass = false;
 
 Is that really legal C89 syntax?

I don't really know. gcc (4.4.1 release) didn't complain about it,
whereas it did emit an invalid-syntax warning when I had the
new_pass = false; statement before the array declaration.

 I seem to recall that array
 constructors can only be used for static assignments with older
 compilers.
 
 Also, as a matter of style, I find it pretty horrid that this isn't
 immediately adjacent to the keywords array that it MUST match.
 

I don't find that horrid. AFAICT, that's the only advantage of the
two-arrays method. By the way, it's that kind of code (the keywords
declaration separated from the values declaration) that got committed in
the previous patch
(http://archives.postgresql.org/pgsql-committers/2010-01/msg00398.php).
I merely used the same code for the other binaries.


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com



Re: [HACKERS] Patch: libpq new connect function (PQconnectParams)

2010-01-28 Thread Guillaume Lelarge
On 28/01/2010 07:32, Joe Conway wrote:
 On 01/26/2010 02:55 PM, Guillaume Lelarge wrote:
 On 26/01/2010 19:43, Joe Conway wrote:
 On 01/25/2010 03:21 PM, Guillaume Lelarge wrote:
 I didn't put any documentation before knowing which one will be chosen.
 So we still need to work on the manual.

 Please send the documentation as a separate patch. Once I have that I
 will commit the posted patch, barring any objections in the meantime.

 You'll find it attached with this mail. Please read it carefully, my
 written english is not that good.
 
 Final committed patch attached.
 
 One last code correction -- in psql/startup.c the original patch defines
 the keywords array in the body of the code, rather than at the top of
 the block.
 
 Minor improvements ( hopefully ;-)) to the documentation as well.
 

Thanks a lot.


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com



Re: [HACKERS] Patch: libpq new connect function (PQconnectParams)

2010-01-26 Thread Guillaume Lelarge
On 26/01/2010 19:43, Joe Conway wrote:
 On 01/25/2010 03:21 PM, Guillaume Lelarge wrote:
 I didn't put any documentation before knowing which one will be chosen.
 So we still need to work on the manual.
 
 Please send the documentation as a separate patch. Once I have that I
 will commit the posted patch, barring any objections in the meantime.
 

You'll find it attached to this mail. Please read it carefully; my
written English is not that good.

Thanks.


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com
Index: doc/src/sgml/libpq.sgml
===
RCS file: /opt/cvsroot_postgresql/pgsql/doc/src/sgml/libpq.sgml,v
retrieving revision 1.295
diff -c -p -c -r1.295 libpq.sgml
*** doc/src/sgml/libpq.sgml	21 Jan 2010 14:58:52 -	1.295
--- doc/src/sgml/libpq.sgml	26 Jan 2010 22:52:52 -
***
*** 56,62 
 one time.  (One reason to do that is to access more than one
 database.)  Each connection is represented by a
 structnamePGconn/indextermprimaryPGconn// object, which
!is obtained from the function functionPQconnectdb/ or
 functionPQsetdbLogin/.  Note that these functions will always
 return a non-null object pointer, unless perhaps there is too
 little memory even to allocate the structnamePGconn/ object.
--- 56,63 
 one time.  (One reason to do that is to access more than one
 database.)  Each connection is represented by a
 structnamePGconn/indextermprimaryPGconn// object, which
!is obtained from the function functionPQconnectdb/,
!functionPQconnectdbParams/ or
 functionPQsetdbLogin/.  Note that these functions will always
 return a non-null object pointer, unless perhaps there is too
 little memory even to allocate the structnamePGconn/ object.
***
*** 91,125 
  
 variablelist
  varlistentry
!  termfunctionPQconnectdb/functionindextermprimaryPQconnectdb///term
   listitem
para
 Makes a new connection to the database server.
  
 synopsis
! PGconn *PQconnectdb(const char *conninfo);
 /synopsis
/para
  
para
 This function opens a new database connection using the parameters taken
!from the string literalconninfo/literal.  Unlike functionPQsetdbLogin/ below,
 the parameter set can be extended without changing the function signature,
!so use of this function (or its nonblocking analogues functionPQconnectStart/
 and functionPQconnectPoll/function) is preferred for new application programming.
/para
  
para
!The passed string
!can be empty to use all default parameters, or it can contain one or more
!parameter settings separated by whitespace.
!Each parameter setting is in the form literalkeyword = value/literal.
!Spaces around the equal sign are optional.
!To write an empty value or a value containing
!spaces, surround it with single quotes, e.g.,
!literalkeyword = 'a value'/literal.
!Single quotes and backslashes within the value must be escaped with a
!backslash, i.e., literal\'/literal and literal\\/literal.
/para
  
para
--- 92,121 
  
 variablelist
  varlistentry
!  termfunctionPQconnectdbParams/functionindextermprimaryPQconnectdbParams///term
   listitem
para
 Makes a new connection to the database server.
  
 synopsis
! PGconn *PQconnectdbParams(const char **keywords, const char **values);
 /synopsis
/para
  
para
 This function opens a new database connection using the parameters taken
!from two arrays.  The first one, literalkeywords/literal, is defined
!as an array of strings, each one being a keyword.  The second one,
!literalvalues/literal, gives the value for each keyword. Unlike
!functionPQsetdbLogin/ below,
 the parameter set can be extended without changing the function signature,
!so use of this function (or its nonblocking analogues functionPQconnectStartParams/
 and functionPQconnectPoll/function) is preferred for new application programming.
/para
  
para
!The passed arrays can be empty to use all default parameters, or it can
!contain one or more parameter settings.
/para
  
para
***
*** 478,483 
--- 474,518 
  /varlistentry
  
  varlistentry
+  termfunctionPQconnectdb/functionindextermprimaryPQconnectdb///term
+  listitem
+   para
+Makes a new connection to the database server.
+ 
+synopsis
+ PGconn *PQconnectdb(const char *conninfo);
+/synopsis
+   /para
+ 
+   para
+This function opens a new database connection using the parameters taken
+from the string literalconninfo/literal.
+   /para
+ 
+   para

Re: [HACKERS] Patch: libpq new connect function (PQconnectParams)

2010-01-25 Thread Guillaume Lelarge
On 26/01/2010 00:04, Joe Conway wrote:
 I'm reviewing the patch posted here:
   http://archives.postgresql.org/pgsql-hackers/2010-01/msg01579.php
 for this commitfest item:
   https://commitfest.postgresql.org/action/patch_view?id=259
 

First, thanks for reviewing my patch.

 Patch attached - a few minor changes:
 -
 1) Updated to apply cleanly against cvs tip

Sorry about this. I already updated it twice. I didn't think a new
update was needed.

 2) Improved comments

Sure.

 3) Moved much of what was in PQconnectStartParams() to a new
conninfo_array_parse() to be more consistent with existing code

You're right. It also makes the code more readable and understandable. I
should have done that.

 Questions/comments:
 ---
 a) Do we want an analog to PQconninfoParse(), e.g.
PQconninfoParseParams()? If not, it isn't worth keeping use_defaults
as an argument to conninfo_array_parse().

No, I don't think so. I can't find a use case for it.

 b) I refrained from further consolidation even though there is room.
For example, I considered leaving only the real parsing code in
conninfo_parse(), and having it return keywords and values arrays.
If we did that, the rest of the code could be modified to accept
keywords and values instead of conninfo, and therefore shared. I was
concerned about the probably small performance hit to the existing
code path. Thoughts?
 c) Obviously I liked the two-arrays approach better -- any objections
to that?

No objection. I prefer the other one, but it's just not that important.

I didn't put in any documentation before knowing which one would be
chosen, so we still need to work on the manual.

Thanks again.


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com



Re: [HACKERS] further explain changes

2010-01-24 Thread Guillaume Lelarge
On 24/01/2010 06:06, Jaime Casanova wrote:
 On Sat, Jan 23, 2010 at 10:08 PM, Robert Haas robertmh...@gmail.com wrote:

 I was also thinking about the possibility of adding a new option
 called output and making that control whether the Output line gets
 printed.  It's kind of annoying to use EXPLAIN (ANALYZE, VERBOSE)
 
 why not let it go in ANALYZE, just as the sort info
 

Yes, it would be more consistent. Other than that, this patch is quite
interesting.


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com



Re: [HACKERS] Application name patch - v3

2010-01-21 Thread Guillaume Lelarge
On 15/01/2010 18:53, Guillaume Lelarge wrote:
 On 08/01/2010 23:22, Guillaume Lelarge wrote:
 On 07/01/2010 19:13, Robert Haas wrote:
 On Thu, Jan 7, 2010 at 10:33 AM, Guillaume Lelarge
 guilla...@lelarge.info wrote:
 On 04/01/2010 22:36, Guillaume Lelarge wrote:
 On 29/12/2009 14:12, Guillaume Lelarge wrote:
 On 29/12/2009 00:03, Guillaume Lelarge wrote:
 On 28/12/2009 22:59, Tom Lane wrote:
 Guillaume Lelarge guilla...@lelarge.info writes:
 Le 28/12/2009 17:06, Tom Lane a écrit :
 I think we were stalled on the question of whether to use one array
 or two parallel arrays.  Do you want to try coding up a sample usage
 of each possibility so we can see which one seems more useful?

 I'm interested in working on this. But I don't find the thread that 
 talk
 about this.

 Try here
 http://archives.postgresql.org/message-id/4aae8ccf.9070...@esilo.com


 Thanks. I've read all the new version of PQconnectdb and Determining
 client_encoding from client locale threads. I think I understand the
 goal. Still need to re-read this one
 (http://archives.postgresql.org/message-id/6222.1253734...@sss.pgh.pa.us)
  and
 completely understand it (will probably need to look at the code, at
 least the PQconnectdb one). But I'm definitely working on this.


 If I try to sum up my readings so far, this is what we still have to do:

 1. try the one-array approach
PGconn *PQconnectParams(const char **params)

 2. try the two-arrays approach
PGconn *PQconnectParams(const char **keywords, const char **values)

 Instead of doing a wrapper around PQconnectdb, we need to refactor the
 whole function, so that we can get rid of the parsing of the conninfo
 string (which is quite complicated).

 Using psql as an example would be a good idea, AFAICT.

 Am I right? did I misunderstand or forget something?


 I supposed I was right since no one yelled at me :)

 I worked on this tonight. You'll find two patches attached, one for the
 one-array approach, one for the two-arrays approach. I know some more
 factoring can be done (at least, the get the fallback resources...
 part). I'm OK to do them. I just need to know if I'm on the right track.


 Hmmm... sorry but... can i have some comments on these two patches, please?

 I would suggest adding your patch(es) to:

 https://commitfest.postgresql.org/action/commitfest_view/open

 Probably just one entry for the two of them would be most appropriate.


 Done. Thanks.

 
 New patches because the old ones didn't apply anymore, due to recent CVS
 commits.
 

New patches for same reason.


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com
Index: src/bin/psql/startup.c
===
RCS file: /opt/cvsroot_postgresql/pgsql/src/bin/psql/startup.c,v
retrieving revision 1.158
diff -c -p -c -r1.158 startup.c
*** src/bin/psql/startup.c	2 Jan 2010 16:57:59 -	1.158
--- src/bin/psql/startup.c	4 Jan 2010 21:04:13 -
*** main(int argc, char *argv[])
*** 171,181 
  	/* loop until we have a password if requested by backend */
  	do
  	{
! 		new_pass = false;
! 		pset.db = PQsetdbLogin(options.host, options.port, NULL, NULL,
! 	options.action == ACT_LIST_DB  options.dbname == NULL ?
! 	options.action == ACT_LIST_DB && options.dbname == NULL ?
! 			   "postgres" : options.dbname,
  
  		if (PQstatus(pset.db) == CONNECTION_BAD 
  			PQconnectionNeedsPassword(pset.db) 
--- 171,190 
  	/* loop until we have a password if requested by backend */
  	do
  	{
! const char *params[] = {
!   "host", options.host,
!   "port", options.port,
!   "dbname", (options.action == ACT_LIST_DB &&
!    options.dbname == NULL) ? "postgres" : options.dbname,
!   "user", options.username,
!   "password", password,
!   "application_name", pset.progname,
!   NULL, NULL
!   };
! 
! new_pass = false;
! 
! pset.db = PQconnectdbParams(params);
  
  		if (PQstatus(pset.db) == CONNECTION_BAD 
  			PQconnectionNeedsPassword(pset.db) 
Index: src/interfaces/libpq/exports.txt
===
RCS file: /opt/cvsroot_postgresql/pgsql/src/interfaces/libpq/exports.txt,v
retrieving revision 1.24
diff -c -p -c -r1.24 exports.txt
*** src/interfaces/libpq/exports.txt	21 Jan 2010 14:58:53 -	1.24
--- src/interfaces/libpq/exports.txt	21 Jan 2010 19:40:50 -
*** PQconninfoParse   152
*** 155,157 
--- 155,159 
  PQinitOpenSSL 153
  PQescapeLiteral   154
  PQescapeIdentifier155
+ PQconnectdbParams 156
+ PQconnectStartParams  157
Index: src/interfaces/libpq/fe-connect.c
===
RCS file: /opt/cvsroot_postgresql/pgsql/src/interfaces/libpq/fe-connect.c,v
retrieving revision 1.384
diff -c -p -c -r1.384 fe

Re: [HACKERS] Application name patch - v3

2010-01-15 Thread Guillaume Lelarge
On 08/01/2010 23:22, Guillaume Lelarge wrote:
 On 07/01/2010 19:13, Robert Haas wrote:
 On Thu, Jan 7, 2010 at 10:33 AM, Guillaume Lelarge
 guilla...@lelarge.info wrote:
 On 04/01/2010 22:36, Guillaume Lelarge wrote:
 On 29/12/2009 14:12, Guillaume Lelarge wrote:
 On 29/12/2009 00:03, Guillaume Lelarge wrote:
 On 28/12/2009 22:59, Tom Lane wrote:
 Guillaume Lelarge guilla...@lelarge.info writes:
 Le 28/12/2009 17:06, Tom Lane a écrit :
 I think we were stalled on the question of whether to use one array
 or two parallel arrays.  Do you want to try coding up a sample usage
 of each possibility so we can see which one seems more useful?

 I'm interested in working on this. But I don't find the thread that 
 talk
 about this.

 Try here
 http://archives.postgresql.org/message-id/4aae8ccf.9070...@esilo.com


 Thanks. I've read all the new version of PQconnectdb and Determining
 client_encoding from client locale threads. I think I understand the
 goal. Still need to re-read this one
 (http://archives.postgresql.org/message-id/6222.1253734...@sss.pgh.pa.us)
  and
 completely understand it (will probably need to look at the code, at
 least the PQconnectdb one). But I'm definitely working on this.


 If I try to sum up my readings so far, this is what we still have to do:

 1. try the one-array approach
PGconn *PQconnectParams(const char **params)

 2. try the two-arrays approach
PGconn *PQconnectParams(const char **keywords, const char **values)

 Instead of doing a wrapper around PQconnectdb, we need to refactor the
 whole function, so that we can get rid of the parsing of the conninfo
 string (which is quite complicated).

 Using psql as an example would be a good idea, AFAICT.

 Am I right? did I misunderstand or forget something?


 I supposed I was right since no one yelled at me :)

 I worked on this tonight. You'll find two patches attached, one for the
 one-array approach, one for the two-arrays approach. I know some more
 factoring can be done (at least, the get the fallback resources...
 part). I'm OK to do them. I just need to know if I'm on the right track.


 Hmmm... sorry but... can i have some comments on these two patches, please?

 I would suggest adding your patch(es) to:

 https://commitfest.postgresql.org/action/commitfest_view/open

 Probably just one entry for the two of them would be most appropriate.

 
 Done. Thanks.
 

New patches because the old ones didn't apply anymore, due to recent CVS
commits.


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com
Index: src/bin/psql/startup.c
===
RCS file: /opt/cvsroot_postgresql/pgsql/src/bin/psql/startup.c,v
retrieving revision 1.158
diff -c -p -c -r1.158 startup.c
*** src/bin/psql/startup.c	2 Jan 2010 16:57:59 -	1.158
--- src/bin/psql/startup.c	4 Jan 2010 21:04:13 -
*** main(int argc, char *argv[])
*** 171,181 
  	/* loop until we have a password if requested by backend */
  	do
  	{
! 		new_pass = false;
! 		pset.db = PQsetdbLogin(options.host, options.port, NULL, NULL,
! 	options.action == ACT_LIST_DB && options.dbname == NULL ?
! 			   "postgres" : options.dbname,
! 			   options.username, password);
  
  		if (PQstatus(pset.db) == CONNECTION_BAD 
  			PQconnectionNeedsPassword(pset.db) 
--- 171,190 
  	/* loop until we have a password if requested by backend */
  	do
  	{
! const char *params[] = {
!   "host", options.host,
!   "port", options.port,
!   "dbname", (options.action == ACT_LIST_DB &&
!    options.dbname == NULL) ? "postgres" : options.dbname,
!   "user", options.username,
!   "password", password,
!   "application_name", pset.progname,
!   NULL, NULL
!   };
! 
! new_pass = false;
! 
! pset.db = PQconnectdbParams(params);
  
  		if (PQstatus(pset.db) == CONNECTION_BAD 
  			PQconnectionNeedsPassword(pset.db) 
Index: src/interfaces/libpq/exports.txt
===
RCS file: /opt/cvsroot_postgresql/pgsql/src/interfaces/libpq/exports.txt,v
retrieving revision 1.23
diff -c -p -c -r1.23 exports.txt
*** src/interfaces/libpq/exports.txt	31 Mar 2009 01:41:27 -	1.23
--- src/interfaces/libpq/exports.txt	4 Jan 2010 20:51:13 -
*** PQresultSetInstanceData   150
*** 153,155 
--- 153,157 
  PQfireResultCreateEvents  151
  PQconninfoParse   152
  PQinitOpenSSL 153
+ PQconnectdbParams 154
+ PQconnectStartParams  155
Index: src/interfaces/libpq/fe-connect.c
===
RCS file: /opt/cvsroot_postgresql/pgsql/src/interfaces/libpq/fe-connect.c,v
retrieving revision 1.383
diff -c -p -c -r1.383 fe-connect.c
*** src/interfaces/libpq/fe-connect.c	15 Jan 2010 09:19:10 -	1.383
--- src

Re: [HACKERS] Application name patch - v3

2010-01-08 Thread Guillaume Lelarge
On 07/01/2010 19:13, Robert Haas wrote:
 On Thu, Jan 7, 2010 at 10:33 AM, Guillaume Lelarge
 guilla...@lelarge.info wrote:
 On 04/01/2010 22:36, Guillaume Lelarge wrote:
 On 29/12/2009 14:12, Guillaume Lelarge wrote:
 On 29/12/2009 00:03, Guillaume Lelarge wrote:
 On 28/12/2009 22:59, Tom Lane wrote:
 Guillaume Lelarge guilla...@lelarge.info writes:
 Le 28/12/2009 17:06, Tom Lane a écrit :
 I think we were stalled on the question of whether to use one array
 or two parallel arrays.  Do you want to try coding up a sample usage
 of each possibility so we can see which one seems more useful?

 I'm interested in working on this. But I don't find the thread that talk
 about this.

 Try here
 http://archives.postgresql.org/message-id/4aae8ccf.9070...@esilo.com


 Thanks. I've read all the new version of PQconnectdb and Determining
 client_encoding from client locale threads. I think I understand the
 goal. Still need to re-read this one
 (http://archives.postgresql.org/message-id/6222.1253734...@sss.pgh.pa.us) 
 and
 completely understand it (will probably need to look at the code, at
 least the PQconnectdb one). But I'm definitely working on this.


 If I try to sum up my readings so far, this is what we still have to do:

 1. try the one-array approach
PGconn *PQconnectParams(const char **params)

 2. try the two-arrays approach
PGconn *PQconnectParams(const char **keywords, const char **values)

 Instead of doing a wrapper around PQconnectdb, we need to refactor the
 whole function, so that we can get rid of the parsing of the conninfo
 string (which is quite complicated).

 Using psql as an example would be a good idea, AFAICT.

 Am I right? did I misunderstand or forget something?


 I supposed I was right since no one yelled at me :)

 I worked on this tonight. You'll find two patches attached, one for the
 one-array approach, one for the two-arrays approach. I know some more
 factoring can be done (at least, the get the fallback resources...
 part). I'm OK to do them. I just need to know if I'm on the right track.


 Hmmm... sorry but... can i have some comments on these two patches, please?
 
 I would suggest adding your patch(es) to:
 
 https://commitfest.postgresql.org/action/commitfest_view/open
 
 Probably just one entry for the two of them would be most appropriate.
 

Done. Thanks.


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com



Re: [HACKERS] Application name patch - v3

2010-01-07 Thread Guillaume Lelarge
On 04/01/2010 22:36, Guillaume Lelarge wrote:
 On 29/12/2009 14:12, Guillaume Lelarge wrote:
 On 29/12/2009 00:03, Guillaume Lelarge wrote:
 On 28/12/2009 22:59, Tom Lane wrote:
 Guillaume Lelarge guilla...@lelarge.info writes:
 Le 28/12/2009 17:06, Tom Lane a écrit :
 I think we were stalled on the question of whether to use one array
 or two parallel arrays.  Do you want to try coding up a sample usage
 of each possibility so we can see which one seems more useful?

 I'm interested in working on this. But I don't find the thread that talk
 about this.

 Try here
 http://archives.postgresql.org/message-id/4aae8ccf.9070...@esilo.com


 Thanks. I've read all the new version of PQconnectdb and Determining
 client_encoding from client locale threads. I think I understand the
 goal. Still need to re-read this one
 (http://archives.postgresql.org/message-id/6222.1253734...@sss.pgh.pa.us) 
 and
 completely understand it (will probably need to look at the code, at
 least the PQconnectdb one). But I'm definitely working on this.


 If I try to sum up my readings so far, this is what we still have to do:

 1. try the one-array approach
PGconn *PQconnectParams(const char **params)

 2. try the two-arrays approach
PGconn *PQconnectParams(const char **keywords, const char **values)

 Instead of doing a wrapper around PQconnectdb, we need to refactor the
 whole function, so that we can get rid of the parsing of the conninfo
 string (which is quite complicated).

 Using psql as an example would be a good idea, AFAICT.

 Am I right? did I misunderstand or forget something?

 
 I supposed I was right since no one yelled at me :)
 
 I worked on this tonight. You'll find two patches attached, one for the
 one-array approach, one for the two-arrays approach. I know some more
 factoring can be done (at least, the get the fallback resources...
 part). I'm OK to do them. I just need to know if I'm on the right track.
 

Hmmm... sorry but... can i have some comments on these two patches, please?


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com



Re: [HACKERS] Application name patch - v3

2010-01-04 Thread Guillaume Lelarge
On 29/12/2009 14:12, Guillaume Lelarge wrote:
 On 29/12/2009 00:03, Guillaume Lelarge wrote:
 On 28/12/2009 22:59, Tom Lane wrote:
 Guillaume Lelarge guilla...@lelarge.info writes:
 On 28/12/2009 17:06, Tom Lane wrote:
 I think we were stalled on the question of whether to use one array
 or two parallel arrays.  Do you want to try coding up a sample usage
 of each possibility so we can see which one seems more useful?

 I'm interested in working on this. But I can't find the thread that talks
 about this.

 Try here
 http://archives.postgresql.org/message-id/4aae8ccf.9070...@esilo.com


 Thanks. I've read all the "new version of PQconnectdb" and "Determining
 client_encoding from client locale" threads. I think I understand the
 goal. Still need to re-read this one
 (http://archives.postgresql.org/message-id/6222.1253734...@sss.pgh.pa.us) and
 completely understand it (will probably need to look at the code, at
 least the PQconnectdb one). But I'm definitely working on this.

 
 If I try to sum up my readings so far, this is what we still have to do:
 
 1. try the one-array approach
PGconn *PQconnectParams(const char **params)
 
 2. try the two-arrays approach
PGconn *PQconnectParams(const char **keywords, const char **values)
 
 Instead of doing a wrapper around PQconnectdb, we need to refactor the
 whole function, so that we can get rid of the parsing of the conninfo
 string (which is quite complicated).
 
 Using psql as an example would be a good idea, AFAICT.
 
 Am I right? Did I misunderstand or forget something?
 

I suppose I was right since no one yelled at me :)

I worked on this tonight. You'll find two patches attached, one for the
one-array approach, one for the two-arrays approach. I know some more
factoring can be done (at least, the "get the fallback resources..."
part). I'm OK to do them. I just need to know if I'm on the right track.


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com
Index: src/bin/psql/startup.c
===
RCS file: /opt/cvsroot_postgresql/pgsql/src/bin/psql/startup.c,v
retrieving revision 1.158
diff -c -p -c -r1.158 startup.c
*** src/bin/psql/startup.c	2 Jan 2010 16:57:59 -	1.158
--- src/bin/psql/startup.c	4 Jan 2010 21:04:13 -
*** main(int argc, char *argv[])
*** 171,181 
  	/* loop until we have a password if requested by backend */
  	do
  	{
! 		new_pass = false;
! 		pset.db = PQsetdbLogin(options.host, options.port, NULL, NULL,
! 				options.action == ACT_LIST_DB && options.dbname == NULL ?
! 				"postgres" : options.dbname,
! 			   options.username, password);
  
  		if (PQstatus(pset.db) == CONNECTION_BAD &&
  			PQconnectionNeedsPassword(pset.db) &&
--- 171,190 
  	/* loop until we have a password if requested by backend */
  	do
  	{
! 		const char *params[] = {
! 			"host", options.host,
! 			"port", options.port,
! 			"dbname", (options.action == ACT_LIST_DB &&
! 			           options.dbname == NULL) ? "postgres" : options.dbname,
! 			"user", options.username,
! 			"password", password,
! 			"application_name", pset.progname,
! 			NULL, NULL
! 		};
! 
! new_pass = false;
! 
! pset.db = PQconnectdbParams(params);
  
  		if (PQstatus(pset.db) == CONNECTION_BAD &&
  			PQconnectionNeedsPassword(pset.db) &&
Index: src/interfaces/libpq/exports.txt
===
RCS file: /opt/cvsroot_postgresql/pgsql/src/interfaces/libpq/exports.txt,v
retrieving revision 1.23
diff -c -p -c -r1.23 exports.txt
*** src/interfaces/libpq/exports.txt	31 Mar 2009 01:41:27 -	1.23
--- src/interfaces/libpq/exports.txt	4 Jan 2010 20:51:13 -
*** PQresultSetInstanceData   150
*** 153,155 
--- 153,157 
  PQfireResultCreateEvents  151
  PQconninfoParse   152
  PQinitOpenSSL 153
+ PQconnectdbParams 154
+ PQconnectStartParams  155
Index: src/interfaces/libpq/fe-connect.c
===
RCS file: /opt/cvsroot_postgresql/pgsql/src/interfaces/libpq/fe-connect.c,v
retrieving revision 1.382
diff -c -p -c -r1.382 fe-connect.c
*** src/interfaces/libpq/fe-connect.c	2 Jan 2010 16:58:11 -	1.382
--- src/interfaces/libpq/fe-connect.c	4 Jan 2010 20:54:12 -
*** static bool connectOptions2(PGconn *conn
*** 259,264 
--- 259,265 
  static int	connectDBStart(PGconn *conn);
  static int	connectDBComplete(PGconn *conn);
  static PGconn *makeEmptyPGconn(void);
+ static void fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
  static void freePGconn(PGconn *conn);
  static void closePGconn(PGconn *conn);
  static PQconninfoOption *conninfo_parse(const char *conninfo,
*** pgthreadlock_t pg_g_threadlock = default
*** 299,304 
--- 300,337

Re: [HACKERS] Application name patch - v3

2009-12-29 Thread Guillaume Lelarge
On 29/12/2009 00:03, Guillaume Lelarge wrote:
 On 28/12/2009 22:59, Tom Lane wrote:
 Guillaume Lelarge guilla...@lelarge.info writes:
 On 28/12/2009 17:06, Tom Lane wrote:
 I think we were stalled on the question of whether to use one array
 or two parallel arrays.  Do you want to try coding up a sample usage
 of each possibility so we can see which one seems more useful?

 I'm interested in working on this. But I can't find the thread that talks
 about this.

 Try here
 http://archives.postgresql.org/message-id/4aae8ccf.9070...@esilo.com

 
 Thanks. I've read all the "new version of PQconnectdb" and "Determining
 client_encoding from client locale" threads. I think I understand the
 goal. Still need to re-read this one
 (http://archives.postgresql.org/message-id/6222.1253734...@sss.pgh.pa.us) and
 completely understand it (will probably need to look at the code, at
 least the PQconnectdb one). But I'm definitely working on this.
 

If I try to sum up my readings so far, this is what we still have to do:

1. try the one-array approach
   PGconn *PQconnectParams(const char **params)

2. try the two-arrays approach
   PGconn *PQconnectParams(const char **keywords, const char **values)

Instead of doing a wrapper around PQconnectdb, we need to refactor the
whole function, so that we can get rid of the parsing of the conninfo
string (which is quite complicated).

Using psql as an example would be a good idea, AFAICT.

Am I right? Did I misunderstand or forget something?


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com



Re: [HACKERS] Application name patch - v3

2009-12-28 Thread Guillaume Lelarge
On 28/12/2009 10:07, Dave Page wrote:
 On Sun, Dec 27, 2009 at 11:15 PM, Guillaume Lelarge
 guilla...@lelarge.info wrote:
 On 13/11/2009 12:11, Dave Page wrote:
 [...]
 What about pg_dump/psql setting fallback_application_name?

 Per Tom, I'm waiting on the possible new array-based libpq connect API
 which will make a conversion of those utilities from PQsetdbLogin a
 lot cleaner than moving to PQconnectdb (and all the ugly connection
 string building that would require).


 Is it still to be done? I don't see psql or pg_dump set an application
 name on alpha 3. There are also pg_restore, vacuumdb, reindexdb, etc.
 
 Yes, still waiting on the new API.
 

Is there something I can do to make this move forward?


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com



Re: [HACKERS] Application name patch - v3

2009-12-28 Thread Guillaume Lelarge
On 28/12/2009 17:06, Tom Lane wrote:
 Guillaume Lelarge guilla...@lelarge.info writes:
 On 28/12/2009 10:07, Dave Page wrote:
 Yes, still waiting on the new API.
 
 Is there something I can do to make this move forward?
 
 I think we were stalled on the question of whether to use one array
 or two parallel arrays.  Do you want to try coding up a sample usage
 of each possibility so we can see which one seems more useful?
 

I'm interested in working on this, but I can't find the thread that talks
about this. I feel pretty dumb, but I re-read every mail in the "Application
name patch - v2", "Application name patch - v3", and "Application name
patch - v4" threads. I also re-read the "Client application name"
thread. The only mail I see that relates to the new API is the one from
Dave (the one I answered today).

So, can someone point me to the thread that deals with this new
array-based libpq connect API? Or can someone explain it to me?

Thanks.


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com



Re: [HACKERS] Application name patch - v3

2009-12-28 Thread Guillaume Lelarge
On 28/12/2009 22:59, Tom Lane wrote:
 Guillaume Lelarge guilla...@lelarge.info writes:
 On 28/12/2009 17:06, Tom Lane wrote:
 I think we were stalled on the question of whether to use one array
 or two parallel arrays.  Do you want to try coding up a sample usage
 of each possibility so we can see which one seems more useful?
 
 I'm interested in working on this. But I can't find the thread that talks
 about this.
 
 Try here
 http://archives.postgresql.org/message-id/4aae8ccf.9070...@esilo.com
 

Thanks. I've read all the "new version of PQconnectdb" and "Determining
client_encoding from client locale" threads. I think I understand the
goal. Still need to re-read this one
(http://archives.postgresql.org/message-id/6222.1253734...@sss.pgh.pa.us) and
completely understand it (will probably need to look at the code, at
least the PQconnectdb one). But I'm definitely working on this.


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com



Re: [HACKERS] Application name patch - v3

2009-12-27 Thread Guillaume Lelarge
On 13/11/2009 12:11, Dave Page wrote:
 [...]
 What about pg_dump/psql setting fallback_application_name?
 
 Per Tom, I'm waiting on the possible new array-based libpq connect API
 which will make a conversion of those utilities from PQsetdbLogin a
 lot cleaner than moving to PQconnectdb (and all the ugly connection
 string building that would require).
 

Is it still to be done? I don't see psql or pg_dump set an application
name on alpha 3. There are also pg_restore, vacuumdb, reindexdb, etc.


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com



Re: [HACKERS] Time to run initdb is mostly figure-out-the-timezone work

2009-12-18 Thread Guillaume Lelarge
On 18/12/2009 18:07, Tom Lane wrote:
 On current Fedora 11, there is a huge difference in initdb time if you
 have TZ set versus if you don't: I get about 18 seconds versus less than
 four.
 
 $ time initdb
 ... blah blah blah ...
 
 real	0m17.953s
 user	0m6.490s
 sys	0m10.935s
 $ rm -rf $PGDATA
 $ export TZ=GMT
 $ time initdb
 ... blah blah blah ...
 
 real	0m3.767s
 user	0m2.997s
 sys	0m0.784s
 $ 
 
 The reason for this is that initdb launches the postmaster many times
 (at least 14) and each one of those launches results in a search of
 every file in the timezone database, if we don't have a TZ value to
 let us identify the timezone immediately.
 
 Now this hardly matters to end users who seldom do initdb, but from a
 developer's perspective it would be awfully nice if initdb took less
 time.  If other people can reproduce similar behavior, I think it
 would be worth the trouble to have initdb forcibly set the TZ or PGTZ
 variable while it runs.

I have the exact same issue:

guilla...@laptop:~$ time initdb
The files of this cluster will be owned by user "guillaume".
[...]
real	0m7.972s
user	0m3.588s
sys	0m3.444s
guilla...@laptop:~$ export TZ=GMT
guilla...@laptop:~$ rm -rf t1
guilla...@laptop:~$ time initdb
[...]
real	0m1.828s
user	0m1.436s
sys	0m0.368s


This is on Ubuntu 9.10.

Quite impressive. I think I'll add an alias (alias initdb='TZ=GMT initdb').


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com



Re: [HACKERS] pgAdmin III: timestamp displayed in what time zone?

2009-12-15 Thread Guillaume Lelarge
On Tuesday, December 15, 2009 at 06:30:15, Greg Smith wrote:
 [...]
  BTW, this list is listed as the list for tech questions in the pgAdmin
  tips, therefore if you don't want to be disturbed, you might want to
  remove it from the pgAdmin tips.
 
 When I look at http://www.pgadmin.org/support/ for example it suggests
 the right list.  I only see this one listed in the Translation section,
 as the place to ask to get added to the translators list.  Does anyone
 know where the tips section suggesting people send tech questions to
 pgsql-hackers he's referring to is at?  That seems like it should be
 cleaned up to point to the pgAdmin list instead if that's floating
 around there somewhere.
 

Sorry Greg, it's fixed now.

 Fred, if you could recall exactly what path you followed to end up here
 and let us know, we can try to keep someone else from being confused and
 instead directed to the right place more directly.  It would be a nice
 gesture on your part to end our conversation here having done something
 useful, rather than with you just venting at me.
 

Yeah, it would be interesting to know.


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com



[HACKERS] Patch to change a pg_restore message

2009-11-19 Thread Guillaume Lelarge
Hi,

pg_restore --help gives this message for the --no-tablespaces parameter:

  --no-tablespaces do not dump tablespace assignments

The message should say "restore", not "dump". You'll find a patch attached
that fixes this issue.

Thanks.

Regards.


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com
Index: src/bin/pg_dump/pg_restore.c
===
RCS file: /opt/cvsroot_postgresql/pgsql/src/bin/pg_dump/pg_restore.c,v
retrieving revision 1.100
diff -c -p -c -r1.100 pg_restore.c
*** src/bin/pg_dump/pg_restore.c	11 Jun 2009 14:49:07 -	1.100
--- src/bin/pg_dump/pg_restore.c	19 Nov 2009 15:41:23 -
*** usage(const char *progname)
*** 430,436 
  	printf(_("  --no-data-for-failed-tables\n"
  			"do not restore data of tables that could not be\n"
  			"created\n"));
! 	printf(_("  --no-tablespaces do not dump tablespace assignments\n"));
  	printf(_("  --role=ROLENAME  do SET ROLE before restore\n"));
  	printf(_("  --use-set-session-authorization\n"
  			"use SET SESSION AUTHORIZATION commands instead of\n"
--- 430,436 
  	printf(_("  --no-data-for-failed-tables\n"
  			"do not restore data of tables that could not be\n"
  			"created\n"));
! 	printf(_("  --no-tablespaces do not restore tablespace assignments\n"));
  	printf(_("  --role=ROLENAME  do SET ROLE before restore\n"));
  	printf(_("  --use-set-session-authorization\n"
  			"use SET SESSION AUTHORIZATION commands instead of\n"



Re: [HACKERS] Patch to change a pg_restore message

2009-11-19 Thread Guillaume Lelarge
On Thursday, November 19, 2009 at 23:05:16, Peter Eisentraut wrote:
 On Thu, 2009-11-19 at 16:47 +0100, Guillaume Lelarge wrote:
  pg_restore --help gives this message for the --no-tablespaces
  parameter:
 
--no-tablespaces do not dump tablespace assignments
 
  The message should say "restore", not "dump". You'll find a patch
  attached that fixes this issue.
 
 Fixed in 8.4 and 8.5.
 

Thanks Peter.


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com



Re: [HACKERS] Show schema size with \dn+

2009-10-28 Thread Guillaume Lelarge
On Wednesday, October 28, 2009 at 15:11:31, Anders Steinlein wrote:
 Is there any interest in expanding \dn+ to show schema size, similar
 to table sizes using \dt+ in 8.4? We use separate schemas for each
 user, so this would allow us to quickly look up the sizes of each
 user's data.
 
 I have little experience with C and none with the PostgreSQL code base
 -- where should I look to have a go at this?
 

I would say source file src/bin/psql/describe.c, function listSchemas.
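
For context, the number a \dn+ column would show can already be computed in SQL. A rough sketch (my own query, not what psql emits): it sums pg_total_relation_size() over plain tables only, since that function already includes each table's indexes and TOAST data.

```sql
-- Per-schema on-disk size, largest first.
SELECT n.nspname AS schema_name,
       pg_size_pretty(sum(pg_total_relation_size(c.oid))) AS total_size
FROM pg_class c
     JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'  -- plain tables; indexes/TOAST counted via pg_total_relation_size
GROUP BY n.nspname
ORDER BY sum(pg_total_relation_size(c.oid)) DESC;
```

Note this ignores objects with no storage (views, functions, and so on), which is presumably what a size column would do anyway.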


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com


