Re: [HACKERS] pg_receivexlog does not report flush position with --synchronous

2016-08-23 Thread Gabriele Bartolini
Hi Simon and Michael,

2016-08-23 10:39 GMT+02:00 Simon Riggs <si...@2ndquadrant.com>:
>
> Agreed, but I'd move all the comments above the block.
>

That's fine with me.

Thanks,
Gabriele
--
 Gabriele Bartolini - 2ndQuadrant Italia - Director
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it


[HACKERS] pg_receivexlog does not report flush position with --synchronous

2016-08-22 Thread Gabriele Bartolini
Hi guys,

  while adding synchronous WAL streaming to Barman, I noticed that
pg_receivexlog does not become a synchronous receiver unless a replication
slot is specified in addition to --synchronous (even when its
application_name is listed in synchronous_standby_names). I was a bit
surprised by this behaviour.

  By reading the pg_receivexlog documentation, I assumed that:

1) if I set application_name properly for pg_receivexlog (let's say
'barman_receive_wal')
2) then I set synchronous_standby_names so that barman_receive_wal is first
in the list
3) then I run pg_receivexlog with --synchronous

  I would find the pg_receivexlog in 'sync' state in the
pg_stat_replication view on the master.

  However, I kept seeing the 'async' state. After reading the
documentation once more, I noticed that '--synchronous' was also mentioned
under the '--slot-name' option, but the explanation - at least to me - was
not very clear.

  I tried again by creating a replication slot and passing it to
pg_receivexlog, and this time I could see 'sync' as the streaming state.
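
For the record, this is roughly the sequence that finally produced the
'sync' state (just a sketch: slot name, directory and standby name are made
up for the example):

# on the master (illustrative names)
psql -c "ALTER SYSTEM SET synchronous_standby_names = 'barman_receive_wal'"
psql -c "SELECT pg_reload_conf()"
psql -c "SELECT pg_create_physical_replication_slot('barman')"

# on the receiving side
PGAPPNAME=barman_receive_wal pg_receivexlog -D /srv/barman/wals \
    --slot=barman --synchronous

# back on the master
psql -x -c "SELECT application_name, sync_state FROM pg_stat_replication"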

  Looking at the code in more detail, I see that, unless a replication slot
is used, pg_receivexlog does not report the flush position (a precaution
that was probably taken before '--synchronous' existed). Please find
attached a very short patch which - in case a replication slot is not in
use but --synchronous is - reports the flush position.

   I am not sure whether this is a bug or not. In any case, I guess we
should probably improve the documentation - it's a bit late here, so maybe
I can try again tomorrow with a fresher mind. :)

Thanks,
Gabriele
--
 Gabriele Bartolini - 2ndQuadrant Italia - Director
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it


0001-pg_receivexlog-does-not-report-flush-position-with-s.patch
Description: Binary data



Re: [HACKERS] File based Incremental backup v8

2015-03-07 Thread Gabriele Bartolini
Hi Bruce,

2015-03-08 5:37 GMT+11:00 Bruce Momjian br...@momjian.us:

 Desirability - Design - Implement - Test - Review - Commit

 This patch has continued in development without getting agreement on
 its Desirability or Design, meaning we are going to continue going back
 to those points until there is agreement.  Posting more versions of this
 patch is not going to change that.


Could you please elaborate on that?

I actually think the approach that has been followed is what makes open
source and collaborative development work. The initial idea was based on a
timestamp approach which, thanks to the input of several developers, led
Marco to develop LSN-based checks and move the feature implementation
forward.

The numbers that Marco has posted clearly show that a lot of users will
benefit from this file-based approach for incremental backup through
pg_basebackup.

As far as I see it, the only missing bit is the pg_restorebackup tool which
is quite trivial - given the existing prototype in Python.

Thanks,
Gabriele


Re: [HACKERS] File based Incremental backup v8

2015-03-07 Thread Gabriele Bartolini
Hi Robert,

2015-03-07 2:57 GMT+11:00 Robert Haas robertmh...@gmail.com:

 By the way, unless I'm missing something, this patch only seems to
 include the code to construct an incremental backup, but no tools
 whatsoever to do anything useful with it once you've got it.


As stated previously, Marco is writing a tool called pg_restorebackup (the
Python prototype has already been posted) to be included in core. I am in
Australia now and not in the office so I cannot confirm it, but I am pretty
sure he had already written it and was about to send it to the list.

He's been trying to gather more data - see the numbers he has sent - in
order to show that even a file-based approach is useful.

  I think that's 100% unacceptable.


I agree, that's why pg_restorebackup written in C is part of this patch.
See: https://wiki.postgresql.org/wiki/Incremental_backup


 Users need to be able to manipulate
 PostgreSQL backups using either standard operating system tools or
 tools provided with PostgreSQL.  Some people may prefer to use
 something like repmgr or pitrtools or omniptr in addition, but that
 shouldn't be a requirement for incremental backup to be usable.


Not at all. I believe those tools will have to use pg_basebackup and
pg_restorebackup. If they want to use the streaming replication protocol
directly, they will be responsible for making sure that - if the protocol
changes - they adapt their technology.

Agile development is good, but that does not mean you can divide a big
 project into arbitrarily small chunks.  At some point the chunks are
 too small to be sure that the overall direction is right, and/or
 individually useless.


The goal has always been to provide file-based incremental backup. I can
assure you that this has always been our compass and the direction to follow.

I repeat that, thanks to pg_restorebackup, this patch will transparently let
users benefit from incremental backup even when it is later moved to an
internal block-level logic. Users will continue to execute pg_basebackup
and pg_restorebackup without caring whether - for example in 9.5 - it is
file-based (saving between 50% and 70% of space and time) or block-level -
for example in 9.6.

My proposal is that Marco provides pg_restorebackup according to the
initial plan - a matter of hours/days.

Cheers,
Gabriele


Re: [HACKERS] File based Incremental backup v8

2015-03-06 Thread Gabriele Bartolini
Hi Robert,

2015-03-06 3:10 GMT+11:00 Robert Haas robertmh...@gmail.com:

 But I agree with Fujii to the extent that I see little value in
 committing this patch in the form proposed.  Being smart enough to use
 the LSN to identify changed blocks, but then sending the entirety of
 every file anyway because you don't want to go to the trouble of
 figuring out how to revise the wire protocol to identify the
 individual blocks being sent and write the tools to reconstruct a full
 backup based on that data, does not seem like enough of a win.


I believe the main point is to look at it from a user interface point of
view. If/when we switch to block-level incremental support, this will be
completely transparent to the end user, even if we start with a file-level
approach based on an LSN check.

The win is already determined by the average space/time gained by users of
VLDBs with a good chunk of read-only data. Our Barman users with incremental
backup (released recently - its algorithm is comparable to the file-level
backup proposed by Marco) see, on average, a data deduplication ratio
between 50% and 70% of the cluster size.

A tangible example is depicted here, with Navionics saving 8.2TB a week
thanks to this approach (and 17 hours instead of 50 for backup time):
http://blog.2ndquadrant.com/incremental-backup-barman-1-4-0/

However, even smaller databases will benefit. It is clear that very small
databases, as well as frequently updated ones, won't be interested in
incremental backup, but that has never been the use case for this feature.

I believe that if we still think this approach is not worth it, we are
making a big mistake. The way I see it, this patch follows an agile
approach and is an important step towards block-level incremental backup.


 As Fujii says, if we ship this patch as written, people will just keep
 using the timestamp-based approach anyway.


I think that allowing users to back up incrementally through streaming
replication (even if based on files) will give system and database
administrators more flexibility in their disaster recovery solutions.

Thanks,
Gabriele


Re: [HACKERS] [RFC] Incremental backup v3: incremental PoC

2015-01-14 Thread Gabriele Bartolini
Hi Marco,

  thank you for sending an updated patch. I am writing down a report of
this initial (and partial) review.

IMPORTANT: This patch is not complete, as stated by Marco. See the
Conclusions section for my proposed TODO list.

== Patch application

I have been able to successfully apply your patch and compile it.
Regression tests passed.

== Initial run

I have created a fresh new instance of PostgreSQL and activated streaming
replication to be used by pg_basebackup. I have done a pgbench run with
scale 100.

I have taken a full consistent backup with pg_basebackup (in plain format):

pg_basebackup -v -F p -D $BACKUPDIR/backup-$(date '+%s') -x


I have been able to verify that the backup_profile is correctly placed in
the destination PGDATA directory. Here is an excerpt:

POSTGRESQL BACKUP PROFILE 1
START WAL LOCATION: 0/358 (file 00010003)
CHECKPOINT LOCATION: 0/38C
BACKUP METHOD: streamed
BACKUP FROM: master
START TIME: 2015-01-14 10:07:07 CET
LABEL: pg_basebackup base backup
FILE LIST
\N  \N  t   1421226427  206 backup_label
\N  \N  t   1421225508  88  postgresql.auto.conf
...


As suggested by Marco, I have manually taken the LSN from this file (next
version must do this automatically).
I have then executed pg_basebackup and activated the incremental feature by
using the LSN from the previous backup, as follows:

LSN=$(awk '/^START WAL/{print $4}' backup_profile)

pg_basebackup -v -F p -D $BACKUPDIR/backup-$(date '+%s') -I $LSN -x


This operation took much less time than the previous one, and the resulting
backup is much smaller (I have not performed any operation in the meantime):

du -hs backup-1421226*
1,5G    backup-1421226427
17M     backup-1421226427


I have done some checks on the file system and then used the prototype of
recovery script in Python written by Marco.

./recover.py backup-1421226427 backup-1421226427 new-data

The cluster started successfully. I then ran a pg_dump of the pgbench
database and was able to reload it on the initial cluster.

== Conclusions

The first run of this patch seems promising.

While the discussion on the LSN map continues (which is mainly an
optimisation of this patch), I would really like to see this patch progress
as it would be a killer feature in several contexts (not in every context).

In this very period we are releasing file-based incremental backup for
Barman, and customers using the alpha version are experiencing on average a
deduplication ratio between 50% and 70%. This is, for example, an excerpt of
barman show-backup from one of our customers (a daily saving of 550GB is
not bad):

 Base backup information:
Disk usage   : 1.1 TiB (1.1 TiB with WALs)
Incremental size : 564.6 GiB (-50.60%)
...

My opinion, Marco, is that for version 5 of this patch, you:

1) update the information on the wiki (it is outdated - I know you have
been busy with LSN map optimisation)
2) modify pg_basebackup in order to accept a directory (or tar file) and
automatically detect the LSN from the backup profile
3) add the documentation regarding the backup profile and pg_basebackup

Once we have all of this, we can continue trying the patch. Some unexplored
paths are:

* tablespace usage
* tar format
* performance impact (in both read-only and heavily updated contexts)
* consistency checks

I would then leave for version 6 the pg_restorebackup utility (unless you
want to do everything at once).

One limitation of the current recovery script is that it cannot accept
multiple incremental backups (it just accepts three parameters: base
backup, incremental backup and merge destination). Maybe you can change the
syntax as follows:

./recover.py DESTINATION BACKUP_1 BACKUP_2 [BACKUP_3, ...]

Thanks a lot for working on this.

I am looking forward to continuing the review.

Ciao,
Gabriele
--
 Gabriele Bartolini - 2ndQuadrant Italia - Managing Director
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it

2015-01-13 17:21 GMT+01:00 Marco Nenciarini marco.nenciar...@2ndquadrant.it:

 On 13/01/15 12:53, Gabriele Bartolini wrote:
  Hi Marco,
 
   could you please send an updated version of the patch against the current
  HEAD in order to facilitate reviewers?
 

 Here is the updated patch for incremental file based backup.

 It is based on the current HEAD.

 I'm now working to the client tool to rebuild a full backup starting
 from a file based incremental backup.

 Regards,
 Marco

 --
 Marco Nenciarini - 2ndQuadrant Italy
 PostgreSQL Training, Services and Support
 marco.nenciar...@2ndquadrant.it | www.2ndQuadrant.it



Re: [HACKERS] [RFC] Incremental backup v3: incremental PoC

2015-01-13 Thread Gabriele Bartolini
Hi Marco,

  could you please send an updated version of the patch against the current
HEAD in order to facilitate reviewers?

Thanks,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia - Managing Director
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it

2015-01-07 11:00 GMT+01:00 Marco Nenciarini marco.nenciar...@2ndquadrant.it:

 On 06/01/15 14:26, Robert Haas wrote:
  I suggest leaving this out altogether for the first version.  I can
  think of three possible ways that we can determine which blocks need
  to be backed up.  One, just read every block in the database and look
  at the LSN of each one.  Two, maintain a cache of LSN information on a
  per-segment (or smaller) basis, as you suggest here.  Three, scan the
  WAL generated since the incremental backup and summarize it into a
  list of blocks that need to be backed up.  This last idea could either
  be done when the backup is requested, or it could be done as the WAL
  is generated and used to populate the LSN cache.  In the long run, I
  think some variant of approach #3 is likely best, but in the short
  run, approach #1 (scan everything) is certainly easiest.  While it
  doesn't optimize I/O, it still gives you the benefit of reducing the
  amount of data that needs to be transferred and stored, and that's not
  nothing.  If we get that much working, we can improve things more
  later.
 

 Hi,
 The patch now uses the approach #1, but I've just sent a patch that uses
 the #2 approach.

 54ad016e.9020...@2ndquadrant.it

 Regards,
 Marco

 --
 Marco Nenciarini - 2ndQuadrant Italy
 PostgreSQL Training, Services and Support
 marco.nenciar...@2ndquadrant.it | www.2ndQuadrant.it




Re: [HACKERS] [RFC] Incremental backup v2: add backup profile to base backup

2014-10-06 Thread Gabriele Bartolini
Hello,

2014-10-06 17:51 GMT+02:00 Marco Nenciarini marco.nenciar...@2ndquadrant.it:

 I agree that a full backup does not need to include a profile.

 I've added the option to require the profile even for a full backup, as
 it can be useful for backup software. We could remove the option and
 build the profile only during incremental backups, if required. However,
 I would avoid the need to scan the whole backup to know the size of
 the recovered data directory, hence the backup profile.


I really like this approach.

I think we should give users the ability to ship a profile file even in
the case of a full backup (disabled by default).

Thanks,
Gabriele


Re: [HACKERS] Proposal: Incremental Backup

2014-08-12 Thread Gabriele Bartolini
Hi Claudio,

2014-08-12 15:25 GMT+02:00 Claudio Freire klaussfre...@gmail.com:
 Still not safe. Checksum collisions do happen, especially in big data sets.

Can I ask you what you are currently using for backing up large data
sets with Postgres?

Thanks,
Gabriele




Re: [HACKERS] Proposal: Incremental Backup

2014-08-07 Thread Gabriele Bartolini
Hi Marco,

 With the current full backup procedure they are backed up, so I think
 that having them backed up with an rsync-like algorithm is what a user
 would expect from an incremental backup.

Exactly. I think a simple, flexible and robust method for file-based
incremental backup is all we need. I am confident it could be done for
9.5.

I would like to second every single word Simon said. Block-level
incremental backup (with Robert's proposal) is definitely the ultimate
goal for effective and efficient physical backups. I see file-level
incremental backup as a very good compromise, a sort of intermediate
release which could nonetheless bring a lot of benefits to our user
base, for years to come too.

Thanks,
Gabriele




Re: [HACKERS] Proposal: Incremental Backup

2014-08-05 Thread Gabriele Bartolini
Hi Claudio,

  I think there has been a misunderstanding. I agree with you (and I
think Marco does too) that the LSN is definitely a component to consider
in this process. We will come up with an alternative proposal which
considers LSNs either today or tomorrow. ;)

Thanks,
Gabriele
--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it


2014-08-04 20:30 GMT+02:00 Claudio Freire klaussfre...@gmail.com:
 On Mon, Aug 4, 2014 at 5:15 AM, Gabriele Bartolini
 gabriele.bartol...@2ndquadrant.it wrote:
I really like the proposal of working on a block level incremental
 backup feature and the idea of considering LSN. However, I'd suggest
 to see block level as a second step and a goal to keep in mind while
 working on the first step. I believe that file-level incremental
 backup will bring a lot of benefits to our community and users anyway.

 Thing is, I don't see how the LSN method is that much harder than an
 on-disk bitmap. In-memory bitmap IMO is just a recipe for disaster.

 Keeping a last-updated-LSN for each segment (or group of blocks) is
 just as easy as keeping a bitmap, and far more flexible and robust.

 The complexity and cost of safely keeping the map up to date is what's
 in question here, but as was pointed out before, there's no really safe
 alternative. Neither modification times nor checksums (nor in-memory
 bitmaps IMV) are really safe enough for backups, so you really want to
 use something like the LSN. It's extra work, but it opens up a world of
 possibilities.




Re: [HACKERS] Proposal: Incremental Backup

2014-08-04 Thread Gabriele Bartolini
Hi guys,

  sorry to jump into the middle of the conversation. I have been
reading with much interest all that's been said above. However, the
goal of this patch is to give users another option when performing
backups, especially when large databases are involved.

   I really like the proposal of working on a block level incremental
backup feature and the idea of considering LSN. However, I'd suggest
to see block level as a second step and a goal to keep in mind while
working on the first step. I believe that file-level incremental
backup will bring a lot of benefits to our community and users anyway.

  I base this statement on our daily experience. We have the honour (and
the duty) of managing - probably - some of the largest Postgres
databases in the world. We currently rely on rsync to copy database
pages. Performing a full backup in 2 days instead of 9 completely
changes a company's disaster recovery policies. Or even in 2 hours
instead of 6.

My 2 cents,
Gabriele
--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it


2014-08-01 19:05 GMT+02:00 Claudio Freire klaussfre...@gmail.com:
 On Fri, Aug 1, 2014 at 1:43 PM, desmodemone desmodem...@gmail.com wrote:



 2014-08-01 18:20 GMT+02:00 Claudio Freire klaussfre...@gmail.com:

 On Fri, Aug 1, 2014 at 12:35 AM, Amit Kapila amit.kapil...@gmail.com
 wrote:
  c) the map is not crash-safe by design, because it is needed only by
  incremental backup to track which blocks need to be backed up, not for
  consistency or recovery of the whole cluster, so maintaining it is not a
  heavy cost for the whole cluster. We could think of an option (but it's
  heavy) to write it to a file at every flush to have a crash-safe map, but
  I don't think it's that useful. I think it's acceptable, and probably it's
  better to force that, i.e. to say: if your db crashes, you need a full
  backup,
 
  I am not sure this assumption is right/acceptable; how can
  we say that in such a case users will be okay with taking a full backup?
  In general, taking a full backup is a very heavy operation and we should
  try to avoid such a situation.


 Besides, the one taking the backup (ie: script) may not be aware of
 the need to take a full one.

 It's a bad design to allow broken backups at all, IMNSHO.


 Hi Claudio,
  thanks for your observation.
 First: this is the case of a database crash, and that's not something that
 happens every day or every week. It's something that happens in rare
 conditions, or at least that is my experience. If it happens very often,
 there are probably other problems.

 Not so much. In this case, the software design isn't software-crash
 safe, it's not that it's not hardware-crash safe.

 What I mean, is that an in-memory bitmap will also be out of sync if
 you kill -9 (or if one of the backends is killed by the OOM), or if it
 runs out of disk space too.

 Normally, a simple restart fixes it because pg will do crash recovery
 just fine, but now the bitmap is out of sync, and further backups are
 broken. It's not a situation I want to face unless there's a huge
 reason to go for such a design.

 If you make it so that the commit includes flipping the bitmap, it can
 be done cleverly enough to avoid too much overhead (though it will
 have some), and you now have it so that any to-be-touched block is now
 part of the backup. You just apply all the bitmap changes in batch
 after a checkpoint, before syncing to disk, and before erasing the WAL
 segments. Simple, relatively efficient, and far more robust than an
 in-memory thing.

 Still, it *can* double checkpoint I/O in the worst case, and it's not
 an unfathomable case either.

 Second: to avoid the problem of knowing whether the db needs a full backup
 to rebuild the map, we could write the backup reference in the map header
 (with an id and an LSN reference, for example), so that if someone or
 something tries to do an incremental backup after a crash, the map header
 will have no full backup listed [because it will be empty] and will
 automatically switch to a full one. I think that after a crash it's good
 practice to do a full backup, to check whether there are problems on files
 or on filesystems, but if I am wrong I am happy to learn :).

 After a crash I do not do a backup, I do a verification of the data
 (VACUUM and some data consistency checks usually), lest you have a
 useless backup. The backup goes after that.

 But I'm not a DBA guru.

 Remember that I propose a map in RAM to reduce the impact on performance,
 but we could add an option to leave the choice to the user: if you want a
 crash-safe map, a map file is also updated at every flush; if not, the
 map stays in RAM.

 I think the performance impact of a WAL-linked map isn't so big as to
 prefer the possibility of broken backups. I wouldn't even allow it.

 It's not free, making it crash safe, but it's not that expensive
 either

Re: [HACKERS] [PATCH] Support for pg_stat_archiver view

2014-02-01 Thread Gabriele Bartolini
Hi Michael and Fujii,

On 01/02/14 17:46, Fujii Masao wrote:
 I think that it's OK to add that as a TODO item. There might be
 systems where the speed of WAL archiving is slower than
 that of WAL generation at peak time. The DBA of such a system
 might want to monitor the size of the archive queue.
I agree that it is an interesting thing to do. The reason I didn't
introduce it in the first place was that I did not want to make too many
changes in this first attempt.
 We can implement this by just counting the files with a .ready
 extension in the pg_xlog/archive_status directory. Or we can also
 implement that by adding a new counter field to the pg_stat_archiver
 structure, incrementing it whenever a .ready file is created, and
 decrementing it whenever .ready is changed to .done.
I would love to give it a shot at the next opportunity.
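
In the meantime, the archive queue can be estimated from the shell along
these lines (a rough sketch; the data directory path is illustrative):

# number of WAL segments still waiting to be archived
ls /var/lib/pgsql/data/pg_xlog/archive_status/*.ready 2>/dev/null | wc -l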

Thanks,
Gabriele

-- 
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it





Re: [HACKERS] [PATCH] Relocation of tablespaces in pg_basebackup

2014-01-10 Thread Gabriele Bartolini
Hi Steeve,

On 09/01/14 22:38, Steeve Lennmark wrote:
 I'm a barman user myself so that was actually my initial thought.

Ah! Very good!
 Unless there is some kind of hidden internal that I've missed, I don't see
 a way to convert an OID (I only have the OID and the path at this stage) to
 a tablespace name. This solution, even though not optimal, is a lot
 better than my initial one where I used the OID directly.

Try:

SELECT spcname, oid, pg_tablespace_location(oid) FROM pg_tablespace

Thanks,
Gabriele

-- 
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it





Re: [HACKERS] [PATCH] Relocation of tablespaces in pg_basebackup

2014-01-09 Thread Gabriele Bartolini
Hi Steeve,
 On 09/01/14 22:10, Steeve Lennmark wrote:

 That's a much better solution, I attached a patch with the updated code.

 # SELECT oid, pg_tablespace_location(oid) FROM pg_tablespace;
 [...]
  16388 | /tmp/tblspc1
  16389 | /tmp/tblspc2

I'd suggest a solution similar to the one we have adopted in Barman (if
you don't know it: www.pgbarman.org), that is:

--tablespace NAME:LOCATION [--tablespace NAME:LOCATION ...]

I prefer this over using the location on the master, as that might change
over time (at least more frequently than the tablespace name) and across
servers.

 $ pg_basebackup -Xs -D backup/data -T /tmp/tblspc1:$(pwd)/backup/t1 -T
 /tmp/tblspc2:$(pwd)/backup/t2

With the above example, it would become:

$ pg_basebackup -Xs -D backup/data -T tblspc1:$(pwd)/backup/t1 -T
tblspc2:$(pwd)/backup/t2

Thanks,
Gabriele

-- 
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it





Re: [HACKERS] [PATCH] Support for pg_stat_archiver view

2014-01-06 Thread Gabriele Bartolini
Hi Fabrizio,

On 05/01/14 20:46, Fabrizio Mello wrote:
 I haven't seen your code yet, but I would like to know if it is possible to
 implement this view as an extension.
I wanted to do it as an extension - so that I could backport it to
previous versions of Postgres.

However, I do not think it is possible, given that the code that is
aware of the archiving events lies in pgarch.c.

Ciao,
Gabriele

-- 
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it





Re: [HACKERS] [PATCH] Support for pg_stat_archiver view

2014-01-05 Thread Gabriele Bartolini
On 05/01/14 13:52, Magnus Hagander wrote:
 Yes, that's what I'm talking about. And just increment it by 1.
Done. I am attaching version 2 of the patch, which now implements only
one function (pg_stat_get_archiver()) and adds:

* failed attempts
* WAL of the last failed attempt
* time of the last failed attempt
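
With those fields in place, a quick monitoring check could boil down to
something like this (just a sketch; the column names are the ones defined
in the attached patch):

psql -x -c "SELECT archived_wals, failed_attempts, last_failed_wal, last_failed_wal_time FROM pg_stat_archiver"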

Thanks for your inputs.

Ciao,
Gabriele

-- 
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 4ec6981..0094c19 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -270,6 +270,14 @@ postgres: replaceableuser/ replaceabledatabase/ 
replaceablehost/ re
  /row
 
  row
+  
entrystructnamepg_stat_archiver/indextermprimarypg_stat_archiver/primary/indexterm/entry
+  entryOne row only, showing statistics about the
+   WAL archiver process's activity. See
+   xref linkend=pg-stat-archiver-view for details.
+ /entry
+ /row
+
+ row
   
entrystructnamepg_stat_bgwriter/indextermprimarypg_stat_bgwriter/primary/indexterm/entry
   entryOne row only, showing statistics about the
background writer process's activity. See
@@ -648,6 +656,64 @@ postgres: replaceableuser/ replaceabledatabase/ 
replaceablehost/ re
/para
   /note
 
+  table id=pg-stat-archiver-view xreflabel=pg_stat_archiver
+   titlestructnamepg_stat_archiver/structname View/title
+
+   tgroup cols=3
+thead
+row
+  entryColumn/entry
+  entryType/entry
+  entryDescription/entry
+ /row
+/thead
+
+tbody
+ row
+  entrystructfieldarchived_wals//entry
+  entrytypebigint/type/entry
+  entryNumber of WAL files that have been successfully archived/entry
+ /row
+ row
+  entrystructfieldlast_archived_wal//entry
+  entrytypetext/type/entry
+  entryName of the last successfully archived WAL file/entry
+ /row
+ row
+  entrystructfieldlast_archived_wal_time//entry
+  entrytypetimestamp with time zone/type/entry
+  entryTime of the last successful archival operation/entry
+ /row
+ row
+  entrystructfieldfailed_attempts//entry
+  entrytypebigint/type/entry
+  entryNumber of failed attempts for archiving WAL files/entry
+ /row
+ row
+  entrystructfieldlast_failed_wal//entry
+  entrytypetext/type/entry
+  entryName of the WAL file of the last failed archival operation/entry
+ /row
+ row
+  entrystructfieldlast_failed_wal_time//entry
+  entrytypetimestamp with time zone/type/entry
+  entryTime of the last failed archival operation/entry
+ /row
+ row
+  entrystructfieldstats_reset//entry
+  entrytypetimestamp with time zone/type/entry
+  entryTime at which these statistics were last reset/entry
+ /row
+/tbody
+/tgroup
+  /table
+
+  para
+   The structnamepg_stat_archiver/structname view will always have a
+   single row, containing data about the archiver process of the cluster.
+  /para
+
+
   table id=pg-stat-bgwriter-view xreflabel=pg_stat_bgwriter
titlestructnamepg_stat_bgwriter/structname View/title
 
@@ -1613,6 +1679,8 @@ postgres: replaceableuser/ replaceabledatabase/ 
replaceablehost/ re
argument (requires superuser privileges).
Calling literalpg_stat_reset_shared('bgwriter')/ will zero all the
counters shown in the structnamepg_stat_bgwriter/ view.
+   Calling literalpg_stat_reset_shared('archiver')/ will zero all the
+   counters shown in the structnamepg_stat_archiver/ view.
   /entry
  /row
 
diff --git a/src/backend/catalog/system_views.sql 
b/src/backend/catalog/system_views.sql
index 575a40f..5ea8c87 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -672,6 +672,17 @@ CREATE VIEW pg_stat_xact_user_functions AS
 WHERE P.prolang != 12  -- fast check to eliminate built-in functions
   AND pg_stat_get_xact_function_calls(P.oid) IS NOT NULL;
 
+CREATE VIEW pg_stat_archiver AS
+SELECT
+s.archived_wals,
+s.last_archived_wal,
+s.last_archived_wal_time,
+s.failed_attempts,
+s.last_failed_wal,
+s.last_failed_wal_time,
+s.stats_reset
+FROM pg_stat_get_archiver() s;
+
 CREATE VIEW pg_stat_bgwriter AS
 SELECT
 pg_stat_get_bgwriter_timed_checkpoints() AS checkpoints_timed,
diff --git a/src/backend/postmaster/pgarch.c b/src/backend/postmaster/pgarch.c
index 2bb572e..60f957c 100644
--- a/src/backend/postmaster/pgarch.c
+++ b/src/backend/postmaster/pgarch.c
@@ -36,6 +36,7 @@
 #include access/xlog_internal.h
 #include libpq/pqsignal.h
 #include miscadmin.h
+#include pgstat.h
 #include postmaster/fork_process.h
 #include postmaster/pgarch.h
 #include postmaster/postmaster.h
@@ -46,6 +47,7 @@
 #include storage/pmsignal.h
 #include utils/guc.h
 #include utils/ps_status.h

Re: [HACKERS] [PATCH] Support for pg_stat_archiver view

2014-01-04 Thread Gabriele Bartolini
Hi Magnus,

On 04/01/14 13:25, Magnus Hagander wrote:
 My first reaction was that exactly those two things were missing. And
 then I read your whole email :)

:)

 With those two, I think it would make much sense to have a view like
 this.

Ok, I will prepare version 2 with those.

 I'd suggest making the view on top of an SRF like pg_stat_replication
 and pg_stat_activity (for example), instead of a whole lot of separate
 function calls like the older stats views.

Ok, good idea.

 in pgarch_ArchiveDone() you seem to be increasing the m_archived_vals
 value for each call and then sending it off. And then you add that
 number in the stats collector. Isn't that going to add the wrong
 number in the end - after a while, the archiver is going to send 100
 when it has just sent one file? ISTM that pgstat_recv_archiver
 should just do ++ on the value?

You are right. The purpose was to set it to 1 in ArchiveDone (I might
have missed that change), so that I can manage the failed counters in
the same way. I will fix this in version 2.

 Oh, and you need to change the format id number of the stats file.

I have not found any instruction on how to set it. I assume you are
talking about this:

PGSTAT_FILE_FORMAT_ID    0x01A5BC9B

Any suggestion is welcome.
 
 There's a quick review for you ;) I think it's definitely worthwhile
 with those things fixed (and a proper review, that was just a quick
 once-over)

Thanks for that. It already means a lot that you too agree it is worth it.

Ciao,
Gabriele

-- 
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it





[HACKERS] [PATCH] Support for pg_stat_archiver view

2014-01-03 Thread Gabriele Bartolini
Hello,

  please find attached the patch that adds basic support for the
pg_stat_archiver system view, which allows users that have continuous
archiving procedures in place to keep track of some important metrics
and information.

  Currently, pg_stat_archiver displays:

* archived_wals: number of successfully archived WAL files since start
(or the last reset)
* last_archived_wal: last successfully archived WAL file
* last_archived_wal_time: timestamp of the latest successful WAL archival
* stats_reset: time of last stats reset

  This is an example of output:

postgres=# select * from pg_stat_archiver ;
-[ RECORD 1 ]--+--
archived_wals  | 1
last_archived_wal  | 00010001
last_archived_wal_time | 2014-01-04 01:01:08.858648+01
stats_reset| 2014-01-04 00:59:25.895034+01

  Similarly to pg_stat_bgwriter, it is possible to reset statistics just
for this context by calling the pg_stat_reset_shared('archiver') function.
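
For example, resetting only the archiver counters is then a matter of:

psql -c "SELECT pg_stat_reset_shared('archiver')"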

  The patch is here for discussion and has been prepared against HEAD.
It includes also changes in the documentation and the rules.out test.

  I plan to add further information to the pg_stat_archiver view,
including the number of failed archival attempts and the WAL file and
timestamp of the latest failure. However, before proceeding, I'd like to
get some feedback on this small patch as well as advice on possible
regression tests to be added.

  Thank you.

Cheers,
Gabriele

-- 
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 4ec6981..6d45972 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -270,6 +270,14 @@ postgres: replaceableuser/ replaceabledatabase/ 
replaceablehost/ re
  /row
 
  row
+  
entrystructnamepg_stat_archiver/indextermprimarypg_stat_archiver/primary/indexterm/entry
+  entryOne row only, showing statistics about the
+   WAL archiver process's activity. See
+   xref linkend=pg-stat-archiver-view for details.
+ /entry
+ /row
+
+ row
   
entrystructnamepg_stat_bgwriter/indextermprimarypg_stat_bgwriter/primary/indexterm/entry
   entryOne row only, showing statistics about the
background writer process's activity. See
@@ -648,6 +656,49 @@ postgres: replaceableuser/ replaceabledatabase/ 
replaceablehost/ re
/para
   /note
 
+  table id=pg-stat-archiver-view xreflabel=pg_stat_archiver
+   titlestructnamepg_stat_archiver/structname View/title
+
+   tgroup cols=3
+thead
+row
+  entryColumn/entry
+  entryType/entry
+  entryDescription/entry
+ /row
+/thead
+
+tbody
+ row
+  entrystructfieldarchived_wals//entry
+  entrytypebigint/type/entry
+  entryNumber of WAL files that have been successfully archived/entry
+ /row
+ row
+  entrystructfieldlast_archived_wal//entry
+  entrytypetext/type/entry
+  entryName of the last successfully archived WAL file/entry
+ /row
+ row
+  entrystructfieldlast_archived_wal_time//entry
+  entrytypetimestamp with time zone/type/entry
+  entryTime of the last successful archival operation/entry
+ /row
+ row
+  entrystructfieldstats_reset//entry
+  entrytypetimestamp with time zone/type/entry
+  entryTime at which these statistics were last reset/entry
+ /row
+/tbody
+/tgroup
+  /table
+
+  para
+   The structnamepg_stat_archiver/structname view will always have a
+   single row, containing data about the archiver process of the cluster.
+  /para
+
+
   table id=pg-stat-bgwriter-view xreflabel=pg_stat_bgwriter
titlestructnamepg_stat_bgwriter/structname View/title
 
@@ -1613,6 +1664,8 @@ postgres: replaceableuser/ replaceabledatabase/ 
replaceablehost/ re
argument (requires superuser privileges).
Calling literalpg_stat_reset_shared('bgwriter')/ will zero all the
counters shown in the structnamepg_stat_bgwriter/ view.
+   Calling literalpg_stat_reset_shared('archiver')/ will zero all the
+   counters shown in the structnamepg_stat_archiver/ view.
   /entry
  /row
 
diff --git a/src/backend/catalog/system_views.sql 
b/src/backend/catalog/system_views.sql
index 575a40f..3a8d7b4 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -672,6 +672,13 @@ CREATE VIEW pg_stat_xact_user_functions AS
 WHERE P.prolang != 12  -- fast check to eliminate built-in functions
   AND pg_stat_get_xact_function_calls(P.oid) IS NOT NULL;
 
+CREATE VIEW pg_stat_archiver AS
+SELECT
+pg_stat_get_archiver_archived_wals() AS archived_wals,
+pg_stat_get_archiver_last_archived_wal() AS last_archived_wal,
+pg_stat_get_archiver_last_archived_wal_time() AS 
last_archived_wal_time,
+pg_stat_get_archiver_stat_reset_time

[HACKERS] Italian PGDay 2012, Call for papers is now open

2012-08-27 Thread Gabriele Bartolini
The sixth edition of the Italian PostgreSQL Day (PGDay.IT 2012) will be 
held on November 23 in Prato, Tuscany.


The International Call for Papers is now open. Talks and presentations 
in English are accepted.


Information in English for paper submission is available at:
http://2012.pgday.it/call-for-papers/international-call-for-papers/


For any kind of information, do not hesitate to contact the organising 
committee via email at pgday2...@itpug.org. Thank you.


Cheers,
Gabriele

Gabriele Bartolini
Italian PostgreSQL Users Group (ITPUG), President





Re: [HACKERS] [PATCH] Support for foreign keys with arrays

2012-08-01 Thread Gabriele Bartolini

On 30/07/12 19:16, Gabriele Bartolini wrote:

And it can be also interchanged with Array element Foreign Key.

As promised, we have sent a patch for the Array ELEMENT foreign key 
support.


We are discontinuing this thread here and will continue discussing the
former Foreign Keys with Arrays/EACH Foreign Key feature support from here:
http://archives.postgresql.org/pgsql-hackers/2012-08/msg00011.php


Thank you,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it




Re: [HACKERS] [PATCH] Support for foreign keys with arrays

2012-07-30 Thread Gabriele Bartolini

Hi guys,

   it is time to give this patch another go. I would like to thank
everyone for the suggestions and ideas expressed on this list.


   We are happy that Part 0 of the patch has been committed 
(array_remove() and array_replace() functions), which will be useful in 
Part 2 (too early now to talk about it). Let's not rush though and 
focus on Part 1 of the patch. :)
   First, I would like to find a unique and general term for this 
feature. We started with Foreign keys with arrays and ended up with 
EACH foreign keys. Following Peter's suggestion, we will use the 
ELEMENT keyword (so that maybe in the future we can extend the usage). 
Our proposals are:


* Array Foreign Key
* Foreign Key Arrays
* ELEMENT Foreign Keys
* ...

   Which one is your favourite?

   Secondly, we have decided to split the patch we proposed back in
March into two smaller patches. The most important goal of Part 1 is to
find a generally accepted syntax. By removing ACTION handling from Part 
1 (see limitations below), we believe that the community will be able 
to contribute more to driving future directions and requirements. Based 
on Peter's comments, we would like to propose the use of the ELEMENT 
keyword, rather than the EACH keyword proposed in March. You can find 
three examples at the bottom of this email.


   Finally, Part 1 of this patch will have these limitations:

* Only one ELEMENT column allowed in a multi-column key (same as the
patch proposed in March)

* Supported actions:
 * NO ACTION
 * RESTRICT

Cheers,
Gabriele


Example 1: inline usage

CREATE TABLE drivers (
driver_id integer PRIMARY KEY,
first_name text,
last_name text,
...
);

CREATE TABLE races (
race_id integer PRIMARY KEY,
title text,
race_day DATE,
...
final_positions integer[] ELEMENT REFERENCES drivers
);



Example 2: with FOREIGN KEY

CREATE TABLE races (
race_id integer PRIMARY KEY,
title text,
race_day DATE,
...
final_positions integer[],
FOREIGN KEY (ELEMENT final_positions) REFERENCES drivers
);



Example 3: with ALTER TABLE

CREATE TABLE races (
race_id integer PRIMARY KEY,
title text,
race_day DATE,
...
final_positions integer[]
);

ALTER TABLE races ADD FOREIGN KEY (ELEMENT final_positions) REFERENCES 
drivers;


--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it



Re: [HACKERS] [PATCH] Support for foreign keys with arrays

2012-07-30 Thread Gabriele Bartolini

On 30/07/12 19:11, Alvaro Herrera wrote:
I was going to say the same, except I had ELEMENT as a capitalized
word in my mind (and in the docs it'd be within <literal>).

So: Array ELEMENT Foreign Key

+1 for me

And it can also be interchanged with Array element Foreign Key.

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it




Re: [HACKERS] Patch pg_is_in_backup()

2012-06-14 Thread Gabriele Bartolini

Hello Gilles,

   thank you very much for your patience (and thank you Marco for 
supporting me). I apologise for the delay.


   I have retested the updated patch and it works fine for me. As far as
I am concerned, it is ready for committer.


Cheers,
Gabriele


On 14/06/12 11:36, Marco Nenciarini wrote:

Hi Gilles,

unfortunately Gabriele has been very busy recently and he asked me to
check again the status of this patch for this commit fest. In order to
speed up the application of the patch, I am sending an updated version
which correctly compiles with current head. Gabriele will then proceed
with the review.

Thank you,
Marco






--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it



Re: [HACKERS] [PATCH] Support for foreign keys with arrays

2012-06-13 Thread Gabriele Bartolini

Hi Noah,

On 10/06/12 22:53, Noah Misch wrote:

This has bitrotted; please refresh.

Also, please evaluate Peter's feedback:
http://archives.postgresql.org/message-id/1333693277.32606.9.ca...@vanquo.pezone.net


Our goal is to work on this patch starting from the next commit fest.

What we are about to do for this commit fest is to split the previous 
patch and send a small one just for the array_remove() and 
array_replace() functions.


Then we will sit down and organise the development of the feature 
according to Peter's feedback. It is important indeed that we find a 
commonly accepted terminology and syntax for this feature.


I hope this sounds like a reasonable plan. Thank you.

Cheers,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it




Re: [HACKERS] Patch pg_is_in_backup()

2012-05-02 Thread Gabriele Bartolini

Hi Gilles,

   Sorry for the delay.

On 03/04/12 14:21, Gilles Darold wrote:

+1, this is also my point of view.


   I have looked at the patch that contains both pg_is_in_backup() and 
pg_backup_start_time().


   From a functional point of view it looks fine to me. I was thinking
of adding a BackupInProgress() call at the beginning of
pg_backup_start_time(), but the AllocateFile() function already makes
sure the file exists.


   I have performed some basic testing of both functions and tried to
inject invalid characters into the start time field of the backup_label
file; the server handles it (with an exception). Cool.


   I spotted some formatting issues though, in particular with indentation
and multi-line comments. Some lines are longer than 80 characters.


   Please resubmit with these cosmetic changes and it is fine with me. 
Thank you.


Cheers,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it




Re: [HACKERS] Patch pg_is_in_backup()

2012-03-31 Thread Gabriele Bartolini

Hi Gilles,

   first and foremost, sorry for jumping into this thread so late. I read
all the previous discussions and I'd be happy to help you with this patch.


Agreed and sorry for the response delay. I've attached 2 patches here, 
the first one is the same as before with just the renaming of the 
function into pg_is_in_exclusive_backup().


   My quick response: I really like the idea of having a function that
simply returns a boolean value. To be honest, I would have called it
pg_is_in_backup() as you originally did - after all, I do not execute
pg_start_exclusive_backup() or pg_stop_exclusive_backup(). It is more
intuitive and pairs with pg_is_in_recovery(). I have never found any
mention whatsoever of exclusive backup in the documentation, and I am
afraid the term would generate confusion.


   However, I leave this up to the rest of the community's judgement 
(here is my opinion).


   I also agree that a function that returns the timestamp of the start
of the backup might be useful. My opinion about the 'exclusive' keyword
applies here too (I would simply name it pg_backup_start_time()).
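
With the names I am suggesting, usage would simply look like this (a quick
sketch):

psql -c "SELECT pg_is_in_backup(), pg_backup_start_time()"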


   Finally, I tried to apply the patch, but it looks like we need a new
version that applies against current HEAD. If you can do that, I am happy
to assist with the review.


   Have a good weekend.

Thank you,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it




Re: [HACKERS] [PATCH] Support for foreign keys with arrays

2012-03-26 Thread Gabriele Bartolini

Hello Tom,


 I started to look at this patch a bit.  I'm quite confused by the fact
 that some, but not all, of the possible FK action types now come in an
 EACH variant.  This makes no sense at all to me.  ISTM that EACH is a
 property of the FK constraint as a whole, that is that it says the
 constraint is from array elements on the referencing side to column
 values on the referenced side, rather than the normal case of column
 values to column values.


The specification that Gianni posted applies only to v5 of the patch.
The original idea was indeed to have the whole foreign key defined
with an EACH property (initially we were actually thinking of the ARRAY 
keyword following your advice, then for grammar reasons we opted for EACH).
However, during the actual development we faced some difficulties with 
multi-column foreign keys.
Through discussions on this list and with the reviewer we opted to allow 
the EACH keyword at column level.
We started with the case where at most one column is EACH, which is 
easier to understand.
The case of two or more EACH columns in the same foreign key has been 
left open for future development.



 Why would the possible actions be affected, and why only these?


We had to add the EACH variant to two actions (EACH CASCADE and EACH
SET NULL), in order to give users the flexibility to choose the operation
to be performed when one or more elements are deleted or updated in the
referenced table.
Some users indeed might prefer that, in case a referenced row is
deleted, the whole referencing row is deleted (therefore they'd use the
standard CASCADE action). Others might simply require that references to
that row are removed from the referencing array (therefore they'd use the
variant EACH CASCADE action). The same concept applies to SET NULL (the
whole array is set to NULL) and EACH SET NULL (referencing elements are
set to NULL).


Thank you.

Cheers,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it



Re: [HACKERS] [PATCH] Support for foreign keys with arrays

2012-03-15 Thread Gabriele Bartolini

Hello,

On 15/03/12 05:03, Marco Nenciarini wrote:

please find attached v4 of the EACH Foreign Key patch (formerly known
also as Foreign Key Array).

Please find attached version v4b, which replaces v4: it fixes a bug in
array_replace(), adds further regression tests for array_replace(), and
fixes a few typos in the documentation.


Thank you,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it



EACH-foreign-key.v4b.patch.bz2
Description: BZip2 compressed data



Re: [HACKERS] [PATCH] Support for foreign keys with arrays

2012-02-21 Thread Gabriele Bartolini

Hello Erik,

On 21/02/12 16:22, Erik Rijkers wrote:

(I reply to an older message but I did use the newest patch, version 3)

I wanted to have a look at v3 of this patch today, but it seems it won't apply 
and compile anymore.


As Peter pointed out, it is due to a new Boolean field added in the 
pg_proc catalog. I have updated our patch to set this value to false by 
default.



I'd like to try this out a bit; could you see if you can fix it?


Thank you so much for dedicating your time to reviewing this patch.

I have attached version 3b (note that it is just a very small change to
version 3 of the patch), which passes all tests.


Cheers,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it



EACH-foreign-key-constraints-aka-foreign-key-arrays.v3b.patch.bz2
Description: BZip2 compressed data



Re: [HACKERS] Dry-run mode for pg_archivecleanup

2012-02-06 Thread Gabriele Bartolini

Ciao Alvaro,

On 06/02/12 16:02, Alvaro Herrera wrote:

FWIW I committed this last week, though I changed the debug message
wording slightly -- hope that's OK; it can be adjusted of course.

http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=b2e431a4db81a735d1474c4d1565a20b835878c9



Beautiful! :)

Thank you very much.

Ciao,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it




Re: [HACKERS] Dry-run mode for pg_archivecleanup

2012-01-31 Thread Gabriele Bartolini

Hi Robert,

sorry for the delay.

On 27/01/12 15:47, Robert Haas wrote:
This email thread seems to have trailed off without reaching a
conclusion. The patch is marked as Waiting on Author in the CommitFest
application, but I'm not sure that's accurate. Can we try to nail this
down?
Here is my final version, which incorporates Josh's comments. I have also
added debug information to be printed in case '-d' is given.
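
For example, with the patch applied, usage would look like this (the
archive path and WAL file name are illustrative):

# list the files that would be removed, without deleting anything
pg_archivecleanup -n /mnt/server/archive 000000010000000000000010

# same, with additional debug output on stderr
pg_archivecleanup -d -n /mnt/server/archive 000000010000000000000010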


I will update the CommitFest website.

Thanks,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it

*** a/contrib/pg_archivecleanup/pg_archivecleanup.c
--- b/contrib/pg_archivecleanup/pg_archivecleanup.c
***
*** 36,41  const char *progname;
--- 36,42 
  
  /* Options and defaults */
  bool  debug = false;  /* are we debugging? */
+ bool  dryrun = false; /* are we performing a dry-run 
operation? */
  
  char *archiveLocation;/* where to find the archive? */
  char *restartWALFileName; /* the file from which we can restart restore */
***
*** 119,124  CleanupPriorWALFiles(void)
--- 120,137 
{
snprintf(WALFilePath, MAXPGPATH, %s/%s,
 archiveLocation, xlde-d_name);
+ 
+   if (dryrun)
+   {
+   /* Prints the name of the file and
+* skips the actual removal of the file 
*/
+   fprintf(stdout, %s\n, WALFilePath);
+   if (debug)
+   fprintf(stderr, %s: dry-run 
mode suggests removing file \%s\\n,
+   progname, 
WALFilePath);
+   continue;
+   }
+ 
if (debug)
fprintf(stderr, %s: removing file 
\%s\\n,
progname, WALFilePath);
***
*** 205,210  usage(void)
--- 218,224 
printf(  %s [OPTION]... ARCHIVELOCATION OLDESTKEPTWALFILE\n, 
progname);
printf(\nOptions:\n);
printf(  -d generates debug output (verbose mode)\n);
+   printf(  -n shows the names of the files that would 
have been removed (dry-run)\n);
printf(  --help show this help, then exit\n);
printf(  --version  output version information, then exit\n);
printf(\n
***
*** 241,253  main(int argc, char **argv)
}
}
  
!   while ((c = getopt(argc, argv, d)) != -1)
{
switch (c)
{
case 'd':   /* Debug mode */
debug = true;
break;
default:
fprintf(stderr, Try \%s --help\ for more 
information.\n, progname);
exit(2);
--- 255,270 
}
}
  
!   while ((c = getopt(argc, argv, dn)) != -1)
{
switch (c)
{
case 'd':   /* Debug mode */
debug = true;
break;
+   case 'n':   /* Dry-Run mode */
+   dryrun = true;
+   break;
default:
				fprintf(stderr, "Try \"%s --help\" for more information.\n", progname);
exit(2);
*** a/doc/src/sgml/pgarchivecleanup.sgml
--- b/doc/src/sgml/pgarchivecleanup.sgml
***
*** 98,103  pg_archivecleanup:  removing file 
archive/00010037000E
--- 98,112 
    </listitem>
   </varlistentry>
  
+  <varlistentry>
+   <term><option>-n</option></term>
+   <listitem>
+    <para>
+     Print the names of the files that would have been removed on <filename>stdout</> (performs a dry run).
+    </para>
+   </listitem>
+  </varlistentry>
+ 
  </variablelist>
 </para>
  

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Dry-run mode for pg_archivecleanup

2012-01-31 Thread Gabriele Bartolini

Hi Alvaro,

On 27/01/12 23:55, Alvaro Herrera wrote:

> If you do that, please make sure you use two complete separate strings
> instead of building one from spare parts. This bit is missing the _()
> stuff though.

Currently pg_archivecleanup does not comply with localisation standards.
I can put this on the todo list, however. Let me know what you think.


Cheers,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Support for foreign keys with arrays

2012-01-31 Thread Gabriele Bartolini

Hi Noah,

On 21/01/12 21:42, Noah Misch wrote:

> On Sat, Jan 14, 2012 at 08:18:48PM +0100, Marco Nenciarini wrote:
> I greatly like that name; it would still make sense for other
> aggregate types, should we ever expand its use. Please complete the
> name change: the documentation, catalog entries, etc should all call
> them something like "each foreign key constraints" (I don't
> particularly like that exact wording).

Ok, we'll go with "EACH Foreign Key Constraints", but I would allow the
synonym "Foreign Key Array", especially in the documentation.

> How about: FOREIGN KEY(col_a, EACH col_b, col_c) REFERENCES pktable
> (a, b, c)

We really like this syntax. However, as Simon also suggested, we would
switch to this syntax but stick to a simpler implementation for 9.2. We
will then be able to expand the functionality from 9.3 onwards, while
keeping the same syntax.

> To complete the ARRAY -> EACH transition, I would suggest names like
> CASCADE EACH/SET EACH NULL.

Sounds perfect.

Marco will go through all your comments and will send version 3 shortly.

Thank you,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Dry-run mode for pg_archivecleanup

2012-01-15 Thread Gabriele Bartolini

Hi Josh,

On 15/01/12 01:13, Josh Kupershmidt wrote:

> I have signed on to review this patch for the 2012-01 CF. The patch
> applies cleanly, includes the necessary documentation, and implements
> a useful feature.

Thank you for dedicating your time to this matter.

> I think the actual debugging line:
>
> +   fprintf(stdout, "%s\n", WALFilePath);

My intention was to have the file names as the output of the command, so
that they can easily be piped to another script. Hence my first choice
was the stdout channel, also considering that pg_archivecleanup in
dry-run mode is harmless and does not touch the contents of the directory.
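
For example (illustrative paths and segment name), the dry-run output
can be piped straight into another command:

pg_archivecleanup -n /mnt/standby/archive 000000010000000000000010 | xargs ls -l

This lists the files that a real run would remove, without touching them.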

> Oh, and I think the "removing file ..." debug message above should not
> be printed in dryrun-mode, lest we confuse the admin.

Yes, I agree with you here.

Let me know what you think about my reasons for the stdout channel, then 
I will send v2.


Thanks again!

Ciao,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Dry-run mode for pg_archivecleanup

2011-12-11 Thread Gabriele Bartolini

Hi guys,

  I have added the '-n' option to pg_archivecleanup, which performs a
dry run and outputs the names of the files to be removed to stdout
(making it possible to pipe the list to another process).


  Please find the small patch attached. I am submitting it to the CommitFest.

Thanks,
Gabriele

--
  Gabriele Bartolini - 2ndQuadrant Italia
  PostgreSQL Training, Services and Support
  gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it


*** a/contrib/pg_archivecleanup/pg_archivecleanup.c
--- b/contrib/pg_archivecleanup/pg_archivecleanup.c
***
*** 36,41  const char *progname;
--- 36,42 
  
  /* Options and defaults */
  bool  debug = false;  /* are we debugging? */
+ bool  dryrun = false; /* are we performing a dry-run 
operation? */
  
  char *archiveLocation;/* where to find the archive? */
  char *restartWALFileName; /* the file from which we can restart restore */
***
*** 123,128  CleanupPriorWALFiles(void)
--- 124,136 
			fprintf(stderr, "%s: removing file \"%s\"\n",
					progname, WALFilePath);
  
+			if (dryrun)
+			{
+				/* Prints the name of the file and
+				 * skips the actual removal of the file */
+				fprintf(stdout, "%s\n", WALFilePath);
+				continue;
+			}
rc = unlink(WALFilePath);
if (rc != 0)
{
***
*** 205,210  usage(void)
--- 213,219 
	printf("  %s [OPTION]... ARCHIVELOCATION OLDESTKEPTWALFILE\n", progname);
	printf("\nOptions:\n");
	printf("  -d generates debug output (verbose mode)\n");
+	printf("  -n shows the names of the files that would have been removed (dry-run)\n");
	printf("  --help show this help, then exit\n");
	printf("  --version  output version information, then exit\n");
	printf("\n"
***
*** 241,253  main(int argc, char **argv)
}
}
  
!   while ((c = getopt(argc, argv, "d")) != -1)
{
switch (c)
{
case 'd':   /* Debug mode */
debug = true;
break;
default:
				fprintf(stderr, "Try \"%s --help\" for more information.\n", progname);
exit(2);
--- 250,265 
}
}
  
!   while ((c = getopt(argc, argv, "dn")) != -1)
{
switch (c)
{
case 'd':   /* Debug mode */
debug = true;
break;
+   case 'n':   /* Dry-Run mode */
+   dryrun = true;
+   break;
default:
				fprintf(stderr, "Try \"%s --help\" for more information.\n", progname);
exit(2);
*** a/doc/src/sgml/pgarchivecleanup.sgml
--- b/doc/src/sgml/pgarchivecleanup.sgml
***
*** 98,103  pg_archivecleanup:  removing file 
archive/00010037000E
--- 98,112 
    </listitem>
   </varlistentry>
  
+  <varlistentry>
+   <term><option>-n</option></term>
+   <listitem>
+    <para>
+     Print the names of the files that would have been removed on <filename>stdout</> (performs a dry run).
+    </para>
+   </listitem>
+  </varlistentry>
+ 
  </variablelist>
 </para>
  

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Support for foreign keys with arrays

2011-12-10 Thread Gabriele Bartolini

Hi Noah,

thanks for your feedback.

On 20/11/11 14:05, Noah Misch wrote:

> What about making ON UPDATE CASCADE an error?  That way, we can say that
> ARRAY <action> always applies to array elements, and plain <action> always
> applies to entire rows.
>
> SET DEFAULT should now be fine to allow.  It's ARRAY SET DEFAULT, in your new
> terminology, that wouldn't make sense.


I have tried to combine your ideas with Gianni's and come to a
compromise which I hope you can both agree on.


The reason I would be inclined to leave CASCADE acting on rows (rather
than on array elements, as Gianni suggests) is backward compatibility
(for people who are already using referential integrity based on array
values). For the same reason, I am not sure whether we should raise an
error on update, but I will leave this for later.


So, here is a summary:

----------------+---------+---------
                |   ON    |   ON    |
 Action         | DELETE  | UPDATE  |
----------------+---------+---------
 CASCADE        |   Row   |  Error  |
 SET NULL       |   Row   |   Row   |
 SET DEFAULT    |   Row   |   Row   |
 ARRAY CASCADE  | Element | Element |
 ARRAY SET NULL | Element | Element |
 NO ACTION      |    -    |    -    |
 RESTRICT       |    -    |    -    |
----------------+---------+---------

If that's fine with you guys, Marco and I will refactor the development 
based on these assumptions.
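
To make the table above concrete, here is a sketch of how the proposed
actions could look in DDL (hypothetical tables; ARRAY CASCADE and
ARRAY SET NULL follow the proposal above and are not implemented yet):

CREATE TABLE tags (
    tag_id integer PRIMARY KEY,
    label  text
);

-- Each element of tag_ids would reference tags(tag_id).
-- Proposed: ON DELETE ARRAY CASCADE removes the deleted tag_id from the
-- array, while plain ON DELETE CASCADE deletes the whole referencing row
-- (and plain ON UPDATE CASCADE raises an error, per the table above).
CREATE TABLE posts (
    post_id integer PRIMARY KEY,
    tag_ids integer[] REFERENCES tags
        ON DELETE ARRAY CASCADE
        ON UPDATE ARRAY CASCADE
);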


Thanks,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Support for foreign keys with arrays

2011-11-20 Thread Gabriele Bartolini

Hi Noah,

thanks for your invaluable review, full of useful and thorough comments
and notes. Marco and I will add your proposed tests as soon as possible
(most likely after the Italian PGDay, which is this week). However,
given the feedback received from other developers too (including Tom),
I would first concentrate on defining the syntax and how referential
integrity actions should work.


On 17/11/11 05:28, Noah Misch wrote:

> Removing values from the array seems best to me. There's no doubt
> about what ON UPDATE CASCADE should do, and having ON DELETE CASCADE
> excise individual array elements is consistent with that. It's less
> clear for SET NULL, but I'd continue with a per-element treatment. I'd
> continue to forbid SET DEFAULT. However, Jeff Davis did expect ON
> DELETE CASCADE to remove entire rows:
> http://archives.postgresql.org/message-id/1288119207.15279.24.camel@jdavis-ux.asterdata.local
> So, perhaps the behavior needs to be user-selectable.
I would agree with what Tom is saying here, given that the SQL specs do
not say anything about this feature. We could leave the standard
REFERENCES keyword handling the array value as it does now. If a user
wants to take advantage of in-array referential integrity, we could
implement the special keyword ARRAY REFERENCES, as Tom proposes (or a
similar keyword).


Consequently, we need to agree on what the actions for delete and update
operations are. In the case of ARRAY REFERENCES, I would be inclined to
keep the row-scope meaning for the CASCADE and SET NULL actions, while
disallowing the SET DEFAULT action (as Noah suggests too). At the same
time, I would add two actions for ARRAY REFERENCES which process array
elements:


* ARRAY CASCADE
* ARRAY SET NULL

(Of course, you are welcome to propose a better naming convention.)
This table summarises the scope of the actions.


----------------+---------+---------
                |   ON    |   ON    |
 Action         | DELETE  | UPDATE  |
----------------+---------+---------
 CASCADE        |   Row   | Element |
 SET NULL       |   Row   |   Row   |
 ARRAY CASCADE  | Element | Element |
 ARRAY SET NULL | Element | Element |
 SET DEFAULT    |  Error  |  Error  |
 NO ACTION      |    -    |    -    |
 RESTRICT       |    -    |    -    |
----------------+---------+---------

For instance, with "ARRAY REFERENCES ... ON DELETE CASCADE", I would
expect the whole row to be deleted (as Jeff et al. say). However, if I
specify "ARRAY REFERENCES ... ON DELETE ARRAY CASCADE", I would expect
the matching elements in the referencing array to be removed.
Similarly, "ARRAY REFERENCES ... ON DELETE SET NULL" will set the row
to NULL, whereas "ARRAY REFERENCES ... ON DELETE ARRAY SET NULL" will
set individual elements in the referencing array to NULL.
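
As a data-level sketch of the distinction (hypothetical data; only
NO ACTION/RESTRICT exist in the submitted patch, the other outcomes
describe the proposed semantics):

CREATE TABLE countries (
    country_id integer PRIMARY KEY,
    name       text
);

CREATE TABLE people (
    person_id       integer PRIMARY KEY,
    citizenship_ids integer[] REFERENCES countries
);

INSERT INTO countries VALUES (1, 'Italy'), (2, 'France');
INSERT INTO people    VALUES (10, ARRAY[1, 2]);

DELETE FROM countries WHERE country_id = 2;
-- ON DELETE CASCADE (row scope):      row 10 in people is deleted.
-- ON DELETE ARRAY CASCADE (element):  row 10 survives, citizenship_ids = {1}.
-- ON DELETE ARRAY SET NULL (element): citizenship_ids becomes {1,NULL}.
-- With the submitted patch (NO ACTION/RESTRICT only), the DELETE is rejected.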


In the case of updates, SET NULL and ARRAY SET NULL work in the same way
(updating the whole row or the single elements, respectively). CASCADE
and ARRAY CASCADE are synonyms, as they would both work on individual
elements (which is the action that makes more sense anyway).


I believe that, before we proceed with one implementation or another, it
is important that we discuss this sort of thing and agree on a possible
long-term path (so that we can organise intermediate deliverables).


Thanks,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] [PATCH] Support for foreign keys with arrays

2011-11-04 Thread Gabriele Bartolini
This patch adds basic support for arrays in foreign keys, by allowing a
referencing column to be defined as an array of elements having the same
type as the referenced column in the referenced table.
Every NOT NULL element in the referencing array is matched against the
referenced table.



Example:

CREATE TABLE pt (
  id INTEGER PRIMARY KEY,
  ...
);

CREATE TABLE ft (
  id SERIAL PRIMARY KEY,
  pids INTEGER[] REFERENCES pt,
  ...
);


This patch is for discussion and has been built against HEAD.
It compiles and passes all regression tests (including specific ones -
see the src/test/regress/sql/foreign_key.sql file).
Empty arrays, multi-dimensional arrays, duplicate elements and NULL
values are allowed.


We had to enforce some limitations, because there is not yet a clear and
universally accepted behaviour and strategy.
For example, consider the ON DELETE action on the above tables: when a
record is deleted from the 'pt' table, should we remove the whole
referencing row or just the matching values from the array?

We hope we can start a discussion from here.

Current limitations:

* Only arrays of the same type as the primary key in the referenced 
table are supported

* multi-column foreign keys are not supported (only single column)
* Only RESTRICT and NO ACTION methods for referential integrity 
enforcement are currently supported


TODO:
* Improve check for empty arrays, which might interfere with SSI (see below)
* Verify interaction with serializable transactions

AUTHORS:
* Gabriele Bartolini gabriele.bartol...@2ndquadrant.it
* Marco Nenciarini marco.nenciar...@2ndquadrant.it

Cheers,
Gabriele (and Marco)

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it

*** a/doc/src/sgml/ddl.sgml
--- b/doc/src/sgml/ddl.sgml
***
*** 764,769  CREATE TABLE order_items (
--- 764,796 
  the last table.
 /para
  
+para
+ Another option you have with foreign keys is to use a referencing column
+ which is an array of elements with the same type as the referenced column
+ in the related table. This feature, also known as firsttermforeign key 
arrays/firstterm,
+ is described in the following example:
+ 
+ programlisting
+ CREATE TABLE countries (
+ country_id integer PRIMARY KEY,
+ name text,
+ ...
+ );
+ 
+ CREATE TABLE people (
+ person_id integer PRIMARY KEY,
+ first_name text,
+ last_name text,
+ ...
+ citizenship_ids integer[] REFERENCES countries
+ );
+ /programlisting
+ 
+ The above example lists in an array the citizenships held by
+ a person and enforces referential integrity checks.
+ 
+/para
+ 
 indexterm
  primaryCASCADE/primary
  secondaryforeign key action/secondary
***
*** 852,857  CREATE TABLE order_items (
--- 879,891 
 /para
  
 para
+ When working with foreign key arrays, you are currently limited
+ to literalRESTRICT/literal and literalNO ACTION/literal
+ options, as the default behaviour for the other cases is not
+ clearly and universally determined yet.
+/para
+ 
+para
  More information about updating and deleting data is in xref
  linkend=dml.
 /para
*** a/doc/src/sgml/ref/create_table.sgml
--- b/doc/src/sgml/ref/create_table.sgml
***
*** 576,581  CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] 
TABLE [ IF NOT EXI
--- 576,587 
   /para
  
   para
+   If the referencing column is an array of elements of the same type as
+   the referenced column in the referenced table, the value of each element
+   of the array will be matched against some row of the referenced table.
+  /para
+ 
+  para
A value inserted into the referencing column(s) is matched against the
values of the referenced table and referenced columns using the
given match type.  There are three match types: literalMATCH
***
*** 634,640  CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] 
TABLE [ IF NOT EXI
   para
Delete any rows referencing the deleted row, or update the
value of the referencing column to the new value of the
!   referenced column, respectively.
   /para
  /listitem
 /varlistentry
--- 640,647 
   para
Delete any rows referencing the deleted row, or update the
value of the referencing column to the new value of the
!   referenced column, respectively. Foreign key arrays are not
!   supported by this action (as the behaviour is not easily 
determined).
   /para
  /listitem
 /varlistentry
***
*** 643,649  CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] 
TABLE [ IF NOT EXI
  termliteralSET NULL/literal/term
  listitem
   para
!   Set the referencing column(s

[HACKERS] Italian PGDay 2011, Call for papers is now open

2011-09-02 Thread Gabriele Bartolini
The fifth edition of the Italian PostgreSQL Day (PGDay.IT 2011) will be 
held on November 25 in Prato.


It will cover topics for users, developers and contributors to 
PostgreSQL, as well as decision and policy makers. For more information 
about the conference, please visit the website at http://2011.pgday.it/.


Proposals for talks are now being accepted. Please note that talks will 
be accepted in both English and Italian. This call for papers is 
targeted for English speakers.


Each session will last 30 minutes, and may be on any topic related to 
PostgreSQL. Suggested topic areas include:


 * Developing applications for PostgreSQL
 * Administering large scale PostgreSQL installations
 * Case studies of PostgreSQL deployments
 * PostgreSQL tools and utilities
 * PostgreSQL hacking
 * Community  user groups
 * Tuning the server
 * Migrating from other systems
 * Scaling/replication
 * Benchmarking  hardware
 * PostgreSQL related products

Of course, proposals for talks on other PostgreSQL-related topics will
also be gladly accepted. A limited number of longer, 60-minute slots
will be available, especially aimed at practical tutorials. Please
indicate clearly in your submission if you wish to give a 60-minute talk.


It is also possible for you to give a tutorial on advanced use,
administration or development of a PostgreSQL cluster. Every tutorial
must have a live demo and didactic material for attendees, and must be
no longer than 60 minutes. All the material required for the tutorial
itself (e.g., slides, code snippets, howtos, etc.) is the responsibility
of the lecturer(s).


Finally, there will be a session of five minute lightning talks. A 
separate call for proposals will be made for these further on.


The submission deadline is October 16th, 2011. Selected speakers will be 
notified on October 18th, 2011. Please email your proposals to 
pgday2...@itpug.org, and include the following information:


 * Name
 * Company (if applicable)
 * Biography (a paragraph about you and your involvement with PostgreSQL)
 * Presentation title
 * Presentation abstract
 * Specific requests

The proposals will be considered by a committee that will finalise the 
schedule to be published nearer the conference date. If your proposal 
has been accepted, you will be informed by email.


Submission form and more detailed information are available as a PDF 
document here: 
http://2011.pgday.it/sites/default/files/PGDay2011-CFP-EN_0.pdf


We look forward to hearing from you, and seeing you in Prato in November!

Gabriele Bartolini
Italian PostgreSQL Users Group (ITPUG), President



[HACKERS] Dry Run mode for pg_archivecleanup

2011-06-27 Thread Gabriele Bartolini

Hi guys,

   I have added the '-n' option to pg_archivecleanup, which performs a
dry run and outputs the names of the files to be removed to stdout
(making it possible to pipe the list to another process). Please
find the small patch attached.


Thanks,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it

From 25fcf05ff787dae2b0d62de423a363f8597f1d42 Mon Sep 17 00:00:00 2001
From: Gabriele Bartolini gabriele.bartol...@2ndquadrant.it
Date: Mon, 27 Jun 2011 12:27:09 +0200
Subject: [PATCH] Add dryrun option to pg_archivecleanup

---
 contrib/pg_archivecleanup/pg_archivecleanup.c |   13 -
 doc/src/sgml/pgarchivecleanup.sgml|9 +
 2 files changed, 21 insertions(+), 1 deletions(-)

diff --git a/contrib/pg_archivecleanup/pg_archivecleanup.c 
b/contrib/pg_archivecleanup/pg_archivecleanup.c
index dd8a451..0bbec92 100644
--- a/contrib/pg_archivecleanup/pg_archivecleanup.c
+++ b/contrib/pg_archivecleanup/pg_archivecleanup.c
@@ -36,6 +36,7 @@ const char *progname;
 
 /* Options and defaults */
 bool   debug = false;  /* are we debugging? */
+bool   dryrun = false; /* are we performing a dry-run 
operation? */
 
 char  *archiveLocation;/* where to find the archive? */
 char  *restartWALFileName; /* the file from which we can restart restore */
@@ -123,6 +124,12 @@ CleanupPriorWALFiles(void)
			fprintf(stderr, "%s: removing file \"%s\"\n",
					progname, WALFilePath);
 
+			if (dryrun) {
+				/* Prints the name of the file and
+				 * skips the actual removal of the file */
+				fprintf(stdout, "%s\n", WALFilePath);
+				continue;
+			}
rc = unlink(WALFilePath);
if (rc != 0)
{
@@ -205,6 +212,7 @@ usage(void)
	printf("  %s [OPTION]... ARCHIVELOCATION OLDESTKEPTWALFILE\n", progname);
	printf("\nOptions:\n");
	printf("  -d generates debug output (verbose mode)\n");
+	printf("  -n shows the names of the files that would have been removed (dry-run)\n");
	printf("  --help show this help, then exit\n");
	printf("  --version  output version information, then exit\n");
	printf("\n"
@@ -241,13 +249,16 @@ main(int argc, char **argv)
}
}
 
-	while ((c = getopt(argc, argv, "d")) != -1)
+	while ((c = getopt(argc, argv, "dn")) != -1)
{
switch (c)
{
case 'd':   /* Debug mode */
debug = true;
break;
+   case 'n':   /* Dry-Run mode */
+   dryrun = true;
+   break;
default:
				fprintf(stderr, "Try \"%s --help\" for more information.\n", progname);
exit(2);
diff --git a/doc/src/sgml/pgarchivecleanup.sgml 
b/doc/src/sgml/pgarchivecleanup.sgml
index ddffa32..8148c53 100644
--- a/doc/src/sgml/pgarchivecleanup.sgml
+++ b/doc/src/sgml/pgarchivecleanup.sgml
@@ -98,6 +98,15 @@ pg_archivecleanup:  removing file 
archive/00010037000E
   </listitem>
  </varlistentry>
 
+ <varlistentry>
+  <term><option>-n</option></term>
+  <listitem>
+   <para>
+    Print the names of the files that would have been removed on <filename>stdout</> (performs a dry run).
+   </para>
+  </listitem>
+ </varlistentry>
+
 </variablelist>
</para>
 
-- 
1.7.4.1


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposed patch: Smooth replication during VACUUM FULL

2011-05-09 Thread Gabriele Bartolini

On 09/05/11 09:14, Simon Riggs wrote:

>> Anyway, I hope I can give you more detailed information tomorrow. Thanks.
>
> Did you find anything else of note, or is your patch ready to commit?


Unfortunately I did not have much time to run further tests.

The ones I have done so far show that it mostly works (see attached 
graph), but there are some unresolved spikes that will require further 
work in 9.2.


Cheers,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it

attachment: smooth-replication-vacuum-full-s50-600.png
-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposed patch: Smooth replication during VACUUM FULL

2011-05-02 Thread Gabriele Bartolini

On 02/05/11 18:20, Simon Riggs wrote:

> I'm sure Gabriele can add those things as well - that also looks like
> another 1 line change.


Yes, today we have performed some tests with that patch as well
(version 2 is attached). Version 2 of the patch (which includes the
change Tom suggested on Saturday) smooths the process even more.


You can look at the attached graph for now - even though we are
currently relaunching a test with all 3 versions from scratch
(unpatched, patch v1 and patch v2) on a larger data set, in order to
confirm this behaviour.



> I'm just observing that the patch as-is appears effective and I feel
> it is important.


Exactly. One important thing to note is that, with the vacuum delay
being honoured, VACUUM FULL operations in a SyncRep scenario also take
less time, as the load is distributed more evenly over time.


You can easily spot in the graphs the point where VACUUM FULL
terminates; after that it is just a matter of flushing the accumulated
WAL out to the standby.


Anyway, I hope I can give you more detailed information tomorrow. Thanks.

Cheers,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it

attachment: vacuum_full_delay-v2.png

diff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c
index bcc7d1e..fa3d22f 100644
--- a/src/backend/commands/cluster.c
+++ b/src/backend/commands/cluster.c
@@ -894,7 +894,8 @@ copy_heap_data(Oid OIDNewHeap, Oid OIDOldHeap, Oid 
OIDOldIndex,
Buffer  buf;
boolisdead;
 
-   CHECK_FOR_INTERRUPTS();
+   /* Launches vacuum delay */
+   vacuum_delay_point();
 
if (indexScan != NULL)
{
@@ -1012,7 +1013,7 @@ copy_heap_data(Oid OIDNewHeap, Oid OIDOldHeap, Oid 
OIDOldIndex,
HeapTuple   tuple;
boolshouldfree;
 
-   CHECK_FOR_INTERRUPTS();
+   vacuum_delay_point();
 
tuple = tuplesort_getheaptuple(tuplesort, true, 
shouldfree);
if (tuple == NULL)

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Proposed patch: Smooth replication during VACUUM FULL

2011-04-30 Thread Gabriele Bartolini

Hi guys,

I have noticed that during VACUUM FULL on reasonably big tables,
replication lag climbs. In order to smooth out the replication lag, I
propose the attached patch, which enables vacuum delay for VACUUM FULL.


Please find the patch attached, and more information on this specific
issue below.


Cheers,
Gabriele


== Scenario

I have set up a simple SyncRep scenario with one master and one standby
on the same server.

On the master I have set vacuum_cost_delay = 10 milliseconds.

I have created a scale-50 pgbench database, which produces a 640MB
pgbench_accounts table (about 82k pages). I have then launched a
60-second pgbench run with 4 concurrent clients, with the goal of making
some changes to the pgbench table (approximately 1800 changes on my laptop).
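
For reference, the workload was generated roughly as follows (the
database name is illustrative):

pgbench -i -s 50 bench      # initialise a scale-50 pgbench database
pgbench -c 4 -T 60 bench    # 60 seconds of activity with 4 concurrent clients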


== Problem observed

Replication lag climbs during VACUUM FULL.

== Proposed change

Enable vacuum delay for VACUUM FULL (and CLUSTER).

== Test

I have then launched a VACUUM FULL operation on the pgbench_accounts 
table and measured the lag in bytes every 5 seconds, by calculating the 
difference between the current location and the sent location.
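
For reference, the sampling query looked roughly like this (a sketch: it
assumes a server where pg_xlog_location_diff() is available; on older
releases the byte difference has to be computed by hand from the hex WAL
positions):

SELECT application_name,
       pg_xlog_location_diff(pg_current_xlog_location(), sent_location) AS lag_bytes
  FROM pg_stat_replication;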


Here is a table with lag values. The first column (sec) is the sampling 
time (every 5 seconds for the sake of simplicity here), the second 
column (mlag) is the master lag on the current HEAD instance, the third 
column (mlagpatch) is the lag measured on the patched Postgres instance.


 sec |   mlag    | mlagpatch
-----+-----------+-----------
  0  |  1896424  |0
  5  | 15654912  |  4055040
 10  |  8019968  | 13893632
 15  | 16850944  |  4177920
 20  | 10969088  | 21102592
 25  | 11468800  |  2277376
 30  |  7995392  | 13893632
 35  | 14811136  | 20660224
 40  |  6127616  |0
 45  |  6914048  |  5136384
 50  |  5996544  | 13500416
 55  | 14155776  |  9043968
 60  | 23298048  | 11722752
 65  | 15400960  | 18202624
 70  | 17858560  | 28049408
 75  |  8560640  | 34865152
 80  | 19628032  | 33161216
 85  | 25526272  | 39976960
 90  | 23183360  | 23683072
 95  | 23265280  |   303104
100  | 24346624  |  3710976
105  | 24813568  |0
110  | 32587776  |  7651328
115  | 42827776  | 12369920
120  | 50167808  | 14991360
125  | 60260352  |  3850240
130  | 62750720  |  5160960
135  | 68255744  |  9355264
140  | 60653568  | 14336000
145  | 68780032  | 16564224
150  | 74342400  |  5398528
155  | 84639744  | 11321344
160  | 92741632  | 16302080
165  | 70123520  | 20234240
170  | 13606912  | 23248896
175  | 20586496  | 29278208
180  | 16482304  |  1900544
185  |0  |0

As you can see, replication lag on HEAD's PostgreSQL reaches 92MB (160 
seconds) before starting to decrease (when the operation terminates).
The test result is consistent with the expected behaviour of cost-based 
vacuum delay.


--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it

diff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c
index 191ef54..f10ae3c 100644
--- a/src/backend/commands/cluster.c
+++ b/src/backend/commands/cluster.c
@@ -877,7 +877,8 @@ copy_heap_data(Oid OIDNewHeap, Oid OIDOldHeap, Oid 
OIDOldIndex,
Buffer  buf;
boolisdead;
 
-   CHECK_FOR_INTERRUPTS();
+   /* Launches vacuum delay */
+   vacuum_delay_point();
 
if (indexScan != NULL)
{

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Uppercase SGML entity declarations

2011-04-04 Thread Gabriele Bartolini

Hi,

  I have made very small modifications to a few files in the
documentation directory, which involve SGML entity declarations.
Currently they are all written in lowercase; the patch makes them
uppercase. This won't affect SGML parsing, as SGML is case insensitive,
even though it is good practice to write SGML declarations in uppercase.


   An example is:

<!entity abort  system "abort.sgml">

   written

<!ENTITY abort  SYSTEM "abort.sgml">


   This patch also makes every one of these declarations XML compliant.

   The list of files that have been changed is:

doc/src/sgml/Makefile
doc/src/sgml/filelist.sgml
doc/src/sgml/postgres.sgml
doc/src/sgml/ref/allfiles.sgml
doc/src/sgml/standalone-install.sgml

   Thanks.

Cheers,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it

diff --git a/doc/src/sgml/Makefile b/doc/src/sgml/Makefile
index 3041a53..c5078a8 100644
--- a/doc/src/sgml/Makefile
+++ b/doc/src/sgml/Makefile
@@ -126,8 +126,8 @@ bookindex.sgml: HTML.index
 # changes.
 version.sgml: $(top_srcdir)/configure
	{ \
-	  echo "<!entity version \"$(VERSION)\">"; \
-	  echo "<!entity majorversion \"$(MAJORVERSION)\">"; \
+	  echo "<!ENTITY version \"$(VERSION)\">"; \
+	  echo "<!ENTITY majorversion \"$(MAJORVERSION)\">"; \
	} > $@
 
 features-supported.sgml: 
$(top_srcdir)/src/backend/catalog/sql_feature_packages.txt 
$(top_srcdir)/src/backend/catalog/sql_features.txt
@@ -220,9 +220,9 @@ HISTORY.html: generate_history.pl $(wildcard 
$(srcdir)/release*.sgml)
rm tempfile_HISTORY.sgml
 
 regress_README.html: regress.sgml
-   ( echo '!doctype chapter PUBLIC -//OASIS//DTD DocBook V4.2//EN ['; \
- echo '!entity % standalone-ignore IGNORE'; \
- echo '!entity % standalone-include INCLUDE ]'; \
+   ( echo '!DOCTYPE chapter PUBLIC -//OASIS//DTD DocBook V4.2//EN ['; \
+ echo '!ENTITY % standalone-ignore IGNORE'; \
+ echo '!ENTITY % standalone-include INCLUDE ]'; \
  cat $ ) tempfile_regress_README.sgml
$(JADE.text) -V nochunks tempfile_regress_README.sgml  $@
rm tempfile_regress_README.sgml
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 1a60796..ed39e0b 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -1,176 +1,176 @@
 !-- doc/src/sgml/filelist.sgml --
 
-!entity historySYSTEM history.sgml
-!entity info   SYSTEM info.sgml
-!entity intro  SYSTEM intro.sgml
-!entity legal  SYSTEM legal.sgml
-!entity notation   SYSTEM notation.sgml
-!entity problems   SYSTEM problems.sgml
+!ENTITY historySYSTEM history.sgml
+!ENTITY info   SYSTEM info.sgml
+!ENTITY intro  SYSTEM intro.sgml
+!ENTITY legal  SYSTEM legal.sgml
+!ENTITY notation   SYSTEM notation.sgml
+!ENTITY problems   SYSTEM problems.sgml
 
 !-- tutorial --
-!entity advanced   SYSTEM advanced.sgml
-!entity query  SYSTEM query.sgml
-!entity start  SYSTEM start.sgml
+!ENTITY advanced   SYSTEM advanced.sgml
+!ENTITY query  SYSTEM query.sgml
+!ENTITY start  SYSTEM start.sgml
 
 !-- currently unused, but contains some interesting information --
-!entity sqlSYSTEM sql.sgml
+!ENTITY sqlSYSTEM sql.sgml
 
 !-- user's guide --
-!entity array  SYSTEM array.sgml
-!entity datatype   SYSTEM datatype.sgml
-!entity ddlSYSTEM ddl.sgml
-!entity dmlSYSTEM dml.sgml
-!entity func   SYSTEM func.sgml
-!entity indicesSYSTEM indices.sgml
-!entity mvcc   SYSTEM mvcc.sgml
-!entity performSYSTEM perform.sgml
-!entity queriesSYSTEM queries.sgml
-!entity rowtypes   SYSTEM rowtypes.sgml
-!entity syntax SYSTEM syntax.sgml
-!entity textsearch SYSTEM textsearch.sgml
-!entity typeconv   SYSTEM typeconv.sgml
+!ENTITY array  SYSTEM array.sgml
+!ENTITY datatype   SYSTEM datatype.sgml
+!ENTITY ddlSYSTEM ddl.sgml
+!ENTITY dmlSYSTEM dml.sgml
+!ENTITY func   SYSTEM func.sgml
+!ENTITY indicesSYSTEM indices.sgml
+!ENTITY mvcc   SYSTEM mvcc.sgml
+!ENTITY performSYSTEM perform.sgml
+!ENTITY queriesSYSTEM queries.sgml
+!ENTITY rowtypes   SYSTEM rowtypes.sgml
+!ENTITY syntax SYSTEM syntax.sgml
+!ENTITY textsearch SYSTEM textsearch.sgml
+!ENTITY typeconv   SYSTEM typeconv.sgml
 
 !-- administrator's guide --
-!entity backupSYSTEM backup.sgml
-!entity charset   SYSTEM charset.sgml
-!entity client-auth   SYSTEM client-auth.sgml
-!entity diskusage SYSTEM diskusage.sgml
-!entity high-availability  SYSTEM high-availability.sgml
-!entity installation  SYSTEM installation.sgml
-!entity installw  SYSTEM install-windows.sgml
-!entity maintenance   SYSTEM maintenance.sgml
-!entity manage-ag SYSTEM manage-ag.sgml
-!entity monitoringSYSTEM monitoring.sgml
-!entity regress   SYSTEM regress.sgml
-!entity recovery-config SYSTEM recovery-config.sgml
-!entity runtime   SYSTEM

Re: [HACKERS] Uppercase SGML entity declarations

2011-04-04 Thread Gabriele Bartolini

Hi Robert (and Tom),

On 04/04/11 16:57, Robert Haas wrote:

> Forgive me for asking what may seem like a stupid question, but what's
> not XML compliant about them now, and why do we care?  The text is
> only ever going to parse as SGML (not XML) so I guess I don't see why
> it matters.  I don't really object to the proposed patch but I guess
> I'm not sure what it accomplishes.


My apologies, I should have explained it better from the start.

Here is the context. I am coordinating the Italian translation of the
documentation, and we are about to create a branch for 9.1. The
publishing mechanism is based on DocBook XML (like the French one), as
we are facing extreme difficulties publishing with SGML tools.


My intention was to start by changing some simple documentation files in
order to make our conversion process from SGML to XML smoother, while
keeping the SGML compatibility of the original documentation intact. I
was also trying to make the ENTITY declarations consistent throughout
the documents.


I hope this gives you a clearer explanation.

Cheers,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [DOCS] [HACKERS] Uppercase SGML entity declarations

2011-04-04 Thread Gabriele Bartolini

On 04/04/11 18:37, Tom Lane wrote:

> AFAIK, the main stumbling block for that is that XML doesn't allow
> abbreviated close tags (ie, <foo>whatever</>).  Which is something that
> we are not likely to give up.  So I'm not sure of the point of changing
> something as trivial as entity declaration casing.  You're going to end
> up having to fork the documentation anyway, or at least feed it through
> an SGML to XML converter.  So why not fix the entity casing then?

We do that. However, given that the files that have been changed are
edited very rarely, the scope of the patch is limited, and compatibility
with SGML is kept, I thought changing this at the source level would be
worth proposing and submitting.


Ciao,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [DOCS] [HACKERS] Uppercase SGML entity declarations

2011-04-04 Thread Gabriele Bartolini

On 04/04/11 22:26, Robert Haas wrote:

> I think you still need to update Solution.pm to match.


Here it is, including the change of 3 'Id' attributes (I made them lowercase).

Thanks,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index bb5527c..9266558 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1,6 +1,6 @@
 <!-- doc/src/sgml/config.sgml -->
 
-<chapter Id="runtime-config">
+<chapter id="runtime-config">
   <title>Server Configuration</title>
 
   <indexterm>
diff --git a/doc/src/sgml/recovery-config.sgml 
b/doc/src/sgml/recovery-config.sgml
index 7f515dc..144b880 100644
--- a/doc/src/sgml/recovery-config.sgml
+++ b/doc/src/sgml/recovery-config.sgml
@@ -1,6 +1,6 @@
 <!-- doc/src/sgml/recovery-config.sgml -->
 
-<chapter Id="recovery-config">
+<chapter id="recovery-config">
   <title>Recovery Configuration</title>
 
   <indexterm>
diff --git a/doc/src/sgml/typeconv.sgml b/doc/src/sgml/typeconv.sgml
index 71ddc0f..f233e9d 100644
--- a/doc/src/sgml/typeconv.sgml
+++ b/doc/src/sgml/typeconv.sgml
@@ -1,6 +1,6 @@
 <!-- doc/src/sgml/typeconv.sgml -->
 
-<chapter Id="typeconv">
+<chapter id="typeconv">
 <title>Type Conversion</title>
 
 <indexterm zone="typeconv">
diff --git a/src/tools/msvc/Solution.pm b/src/tools/msvc/Solution.pm
index e1fe4e9..1a7b94d 100644
--- a/src/tools/msvc/Solution.pm
+++ b/src/tools/msvc/Solution.pm
@@ -389,8 +389,8 @@ EOF
 
 	open(O, ">doc/src/sgml/version.sgml") || croak "Could not write to version.sgml\n";
 	print O <<EOF;
-<!entity version "$self->{strver}">
-<!entity majorversion "$self->{majorver}">
+<!ENTITY version "$self->{strver}">
+<!ENTITY majorversion "$self->{majorver}">
 EOF
 close(O);
 }

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Italian PGDay 2010, Call for papers

2010-10-08 Thread Gabriele Bartolini



The fourth edition of the Italian PostgreSQL Day (PGDay.IT 2010) will be 
held on December 10 in Rome.


It will cover topics for users, developers and contributors to 
PostgreSQL, as well as decision and policy makers. For more information 
about the conference, please visit the website at http://2010.pgday.it/.


Proposals for talks are now being accepted. Please note that talks will 
be accepted in both English and Italian. This call for papers is 
targeted for English speakers.


Each session will last 30 minutes, and may be on any topic related to 
PostgreSQL. Suggested topic areas include:


   * Developing applications for PostgreSQL
   * Administering large scale PostgreSQL installations
   * Case studies of PostgreSQL deployments
   * PostgreSQL tools and utilities
   * PostgreSQL hacking
   * Community  user groups
   * Tuning the server
   * Migrating from other systems
   * Scaling/replication
   * Benchmarking  hardware
   * PostgreSQL related products

Of course, proposals for talks on other PostgreSQL-related topics will
also be gladly accepted. A limited number of longer, 60-minute slots
will be available, especially aimed at practical tutorials. Please
indicate clearly in your submission if you wish to give a 60-minute talk.


It is also possible for you to give a tutorial on advanced use,
administration or development of a PostgreSQL cluster. Every tutorial
must have a live demo and didactic material for attendees, and must be
no longer than 60 minutes. All the material required for the tutorial
itself (e.g., slides, code snippets, howtos, etc.) is the responsibility
of the lecturer(s).


Finally, there will be a session of five minute lightning talks. A 
separate call for proposals will be made for these further on.


The submission deadline is October 24th, 2010. Selected speakers will be 
notified on October 27th, 2010. Please email your proposals to 
pgday2...@itpug.org, and include the 
following information:


   * Name
   * Company (if applicable)
   * Biography (a paragraph about you and your involvement with PostgreSQL)
   * Presentation title
   * Presentation abstract
   * Specific requests

The proposals will be considered by a committee that will finalise the 
schedule to be published nearer the conference date. If your proposal 
has been accepted, you will be informed by email.


We look forward to hearing from you, and seeing you in Rome in December!

Gabriele Bartolini
Italian PostgreSQL Users Group (ITPUG), President



Re: [HACKERS] psycopg2 license changed

2010-02-15 Thread Gabriele Bartolini

Ciao Federico,

Federico Di Gregorio wrote:

> I just wanted all interested people to know that psycopg2 2.0.14, to be
> released in the next few days, will be under the LGPL3 + OpenSSL
> exception (example code and tests under the LGPL3 alone because they are
> never linked to OpenSSL).

Thank you so much for your contribution!

> I hope this makes everybody happy, have fun,

This is great news. I also want to point out that our valuable ITPUG
member Daniele Varrazzo has started to write some documentation about
psycopg2, which can be found here: http://initd.org/psycopg/docs/


It would be good if we could also update our wiki to include this
resource.


Ciao,
Gabriele

--
Gabriele Bartolini - 2ndQuadrant Italia
PostgreSQL Training, Services and Support
gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Confusion over Python drivers

2010-02-08 Thread Gabriele Bartolini

Hi there,

Greg Smith wrote:

> Looks like the first action item is to talk with the Psycopg people
> about their license.
>
> Oh: and I'm going to take care of this.  License changes can be a
> very sensitive topic and I'm told that discussion probably needs to
> happen in Italian too; I can arrange that.


I can try and help with this issue, given my role in the Italian
PostgreSQL community and in PostgreSQL business with 2ndQuadrant Italia.
I have met Psycopg's developer a couple of times at open-source
conferences. I have great respect for his work and his contributions to
the open-source community, and I will be very happy to try and explain
the situation to him.


I will keep you posted.

Ciao,
Gabriele

--
Gabriele Bartolini - 2ndQuadrant Italia
PostgreSQL Training, Services and Support
gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] MySQL-ism help patch for psql

2010-01-20 Thread Gabriele Bartolini
I would personally emulate \d and take the chance to show a funny
warning, something like: "hey, it's not MySQL!" or similar. I am sure
we will find something appropriate. :)


Sent from my iPhone

On 20 Jan 2010, at 16:30, Kevin Grittner kevin.gritt...@wicourts.gov
wrote:



Dimitri Fontaine dfonta...@hi-media.com wrote:


I'll give my vote to Peter's idea that show tables; should better
act as if you typed \d.


I guess we don't need a tables GUC.  Show all wouldn't include it?
Would we require a semicolon?  Do we support \d-style globs?

Still seems kinda messy.  +1 for help to show the PostgreSQL command
as a guess for what they want to do.  -1 for MySQL emulation.

-Kevin

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Mentors needed urgently for SoC PostgreSQL Student Internships

2009-03-27 Thread Gabriele Bartolini

Ciao Josh,

Josh Berkus wrote:

> What this all hinges on is getting some really solid mentors who have
> projects they'd like students to work on, and can commit
> unconditionally to having 5 hours a week or more, over a 3-month
> period, to work with the student.

Thanks for letting us know. However, for this year we (as 2ndQuadrant)
have only planned to collaborate with some Italian universities,
starting with the University of Pisa (I spoke to their IT students last
Monday). I don't think we can dedicate more time to mentoring in the
short term (that's a pity, I know). :(


However, thanks again for keeping us informed.

Ciao,
Gabriele

--
Gabriele Bartolini - 2ndQuadrant Italia
PostgreSQL Training, Services and Support
gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers