Re: [HACKERS] directory archive format for pg_dump

2010-12-16 Thread Greg Smith
Moving on to the directory archive part of this patch, the feature seems
to work as advertised; here's a quick test case:


createdb pgbench
pgbench -i -s 1 pgbench
pg_dump -F d -f test pgbench
pg_restore -k test
pg_restore -l test
createdb copy
pg_restore -d copy test

The copy made that way looked good.  There's a good chunk of code in the 
patch that revolves around BLOB support.  We need to get someone who is 
more familiar with those than me to suggest some tests for that part 
before this gets committed.  If you could suggest how to test that code, 
that would be helpful.


There are a number of small things that I'd like to see improved in a new
rev of this code:


pg_dump:  the help message for --file needs to mention that it is
overloaded to also specify the output directory.


pg_dump:  the documentation for --file should say the directory is 
created, and must not exist when you start.  The code catches this well, 
but that expectation is not clear until you try it.


pg_restore:  the help message "check the directory archive" would be
clearer as "check an archive in directory format".


There are some tab vs. space whitespace inconsistencies in the 
documentation added.


The comments at the beginning of functions could be more consistent.  
Early parts of the code have a header for each function that's 
extensive.  Maybe even a bit more than needed.  I'm not sure why it's 
important to document here which of these functions is 
optional/mandatory for example, and getting rid of just those would trim 
a decent number of lines out of the patch.  But then at the end, none of
the new functions added are documented at all.  Some of those are
near trivial, but it would be better to have at least a small
descriptive header for them.


The comment header at the beginning of pg_backup_directory is a bit 
weird.  I guess Philip Warner should still be credited as the author of 
the code this was based on, but it's weird seeing a new file
attributed solely to him.  Also, there's an XXX in the identification 
field there that should be filled in with the file name.


There's your feedback for this round.  I hope we'll see an updated patch 
from you as part of the next CommitFest.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services and Support   www.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books




Re: [HACKERS] directory archive format for pg_dump

2010-12-16 Thread Heikki Linnakangas

On 16.12.2010 12:12, Greg Smith wrote:

Moving on to the directory archive part of this patch, the feature seems
to work as advertised; here's a quick test case:

createdb pgbench
pgbench -i -s 1 pgbench
pg_dump -F d -f test pgbench
pg_restore -k test
pg_restore -l test
createdb copy
pg_restore -d copy test

The copy made that way looked good. There's a good chunk of code in the
patch that revolves around BLOB support. We need to get someone who is
more familiar with those than me to suggest some tests for that part
before this gets committed. If you could suggest how to test that code,
that would be helpful.

There are a number of small things that I'd like to see improved in a new
rev of this code
...


In addition to those:

The check functionality seems orthogonal; it should be split off into
a separate patch. It would possibly be useful to perform sanity
checks on an archive in custom format too, and the directory format 
works just as well without it.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] directory archive format for pg_dump

2010-12-16 Thread Heikki Linnakangas

On 16.12.2010 17:23, Heikki Linnakangas wrote:

On 16.12.2010 12:12, Greg Smith wrote:

There are a number of small things that I'd like to see improved in a new
rev of this code
...


In addition to those:
...


One more thing: the motivation behind this patch is to allow parallel
pg_dump in the future, so we should make sure this patch caters well
for that.


As soon as we have parallel pg_dump, the next big thing is going to be 
parallel dump of the same table using multiple processes. Perhaps we 
should prepare for that in the directory archive format, by allowing the 
data of a single table to be split into multiple files. That way 
parallel pg_dump is simple, you just split the table in chunks of 
roughly the same size, say 10GB each, and launch a process for each 
chunk, writing to a separate file.


It should be a quite simple add-on to the current patch, but will make 
life so much easier for parallel pg_dump. It would also be helpful to 
work around file size limitations on some filesystems.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] directory archive format for pg_dump

2010-12-16 Thread Robert Haas
On Thu, Dec 16, 2010 at 12:48 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 One more thing: the motivation behind this patch is to allow parallel
 pg_dump in the future, so we should make sure this patch caters well for
 that.

 As soon as we have parallel pg_dump, the next big thing is going to be
 parallel dump of the same table using multiple processes. Perhaps we should
 prepare for that in the directory archive format, by allowing the data of a
 single table to be split into multiple files. That way parallel pg_dump is
 simple, you just split the table in chunks of roughly the same size, say
 10GB each, and launch a process for each chunk, writing to a separate file.

 It should be a quite simple add-on to the current patch, but will make life
 so much easier for parallel pg_dump. It would also be helpful to work around
 file size limitations on some filesystems.

Sounds reasonable.  Are you planning to do this and commit?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] directory archive format for pg_dump

2010-12-16 Thread Heikki Linnakangas

On 16.12.2010 19:58, Robert Haas wrote:

On Thu, Dec 16, 2010 at 12:48 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com  wrote:

One more thing: the motivation behind this patch is to allow parallel
pg_dump in the future, so we should make sure this patch caters well for
that.

As soon as we have parallel pg_dump, the next big thing is going to be
parallel dump of the same table using multiple processes. Perhaps we should
prepare for that in the directory archive format, by allowing the data of a
single table to be split into multiple files. That way parallel pg_dump is
simple, you just split the table in chunks of roughly the same size, say
10GB each, and launch a process for each chunk, writing to a separate file.

It should be a quite simple add-on to the current patch, but will make life
so much easier for parallel pg_dump. It would also be helpful to work around
file size limitations on some filesystems.


Sounds reasonable.  Are you planning to do this and commit?


I'll defer to Joachim, assuming he has the time & energy.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] directory archive format for pg_dump

2010-12-16 Thread Joachim Wieland
On Thu, Dec 16, 2010 at 12:48 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 As soon as we have parallel pg_dump, the next big thing is going to be
 parallel dump of the same table using multiple processes. Perhaps we should
 prepare for that in the directory archive format, by allowing the data of a
 single table to be split into multiple files. That way parallel pg_dump is
 simple, you just split the table in chunks of roughly the same size, say
 10GB each, and launch a process for each chunk, writing to a separate file.

How exactly would you "just split the table in chunks of roughly the
same size"? Which queries should pg_dump send to the backend? If it
just sends a bunch of WHERE queries, the server would still scan the
same data several times since each pg_dump client would result in a
seqscan over the full table.

Ideally pg_dump should be able to query for all data in only one
relation segment so that each segment is scanned by only one backend
process. However this requires backend support and we would be sending
queries that we'd not want clients other than pg_dump to send...

If you were thinking about WHERE queries to get equally sized
partitions, how would we deal with unindexed and/or non-numerical data
in a large table?


Joachim



Re: [HACKERS] directory archive format for pg_dump

2010-12-16 Thread Heikki Linnakangas

On 16.12.2010 20:33, Joachim Wieland wrote:

On Thu, Dec 16, 2010 at 12:48 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com  wrote:

As soon as we have parallel pg_dump, the next big thing is going to be
parallel dump of the same table using multiple processes. Perhaps we should
prepare for that in the directory archive format, by allowing the data of a
single table to be split into multiple files. That way parallel pg_dump is
simple, you just split the table in chunks of roughly the same size, say
10GB each, and launch a process for each chunk, writing to a separate file.


How exactly would you "just split the table in chunks of roughly the
same size"?


Check pg_class.relpages, and divide that evenly across the processes. 
That should be good enough.
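
Just to illustrate the arithmetic (a sketch only; nothing like this
exists in the patch, the names are made up):

#include <stdint.h>

/*
 * Split a table's [0, nblocks) pages into nworkers half-open ranges of
 * near-equal size; worker i would scan blocks [first, next).
 */
static void
workerBlockRange(uint32_t nblocks, uint32_t nworkers, uint32_t worker,
				 uint32_t *first, uint32_t *next)
{
	*first = (uint32_t) (((uint64_t) nblocks * worker) / nworkers);
	*next = (uint32_t) (((uint64_t) nblocks * (worker + 1)) / nworkers);
}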



Which queries should pg_dump send to the backend? If it
just sends a bunch of WHERE queries, the server would still scan the
same data several times since each pg_dump client would result in a
seqscan over the full table.


Hmm, I was thinking of SELECT * FROM table WHERE ctid BETWEEN ? AND ?, 
but we don't support TidScans for ranges. Perhaps we could add that.
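
To make that concrete, the per-worker query could be built along these
lines (a sketch only, assuming such a range-capable TID scan existed;
the COPY wrapper and the helper name are made up, not patch code):

#include "pqexpbuffer.h"

/*
 * Build one worker's data query from a half-open block range
 * [firstBlock, nextBlock), assuming the server could use a TID range
 * scan for the ctid qualifier.
 */
static void
buildChunkQuery(PQExpBuffer q, const char *qualTblName,
				unsigned int firstBlock, unsigned int nextBlock)
{
	appendPQExpBuffer(q,
					  "COPY (SELECT * FROM %s "
					  "WHERE ctid >= '(%u,1)'::tid AND ctid < '(%u,1)'::tid) "
					  "TO STDOUT",
					  qualTblName, firstBlock, nextBlock);
}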


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] directory archive format for pg_dump

2010-12-16 Thread Tom Lane
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
 On 16.12.2010 20:33, Joachim Wieland wrote:
 How exactly would you just split the table in chunks of roughly the
 same size ?

 Check pg_class.relpages, and divide that evenly across the processes. 
 That should be good enough.

Not even close ... relpages could be badly out of date.  If you believe
it, you could fail to dump data that's in further-out pages.  We'd need
to move pg_relpages() or some equivalent into core to make this
workable.

 Which queries should pg_dump send to the backend?

 Hmm, I was thinking of SELECT * FROM table WHERE ctid BETWEEN ? AND ?, 
 but we don't support TidScans for ranges. Perhaps we could add that.

Yeah, that seems probably workable, given an up-to-date idea of the
possible block range.

regards, tom lane



Re: [HACKERS] directory archive format for pg_dump

2010-12-16 Thread Robert Haas
On Thu, Dec 16, 2010 at 2:29 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
 On 16.12.2010 20:33, Joachim Wieland wrote:
 How exactly would you just split the table in chunks of roughly the
 same size ?

 Check pg_class.relpages, and divide that evenly across the processes.
 That should be good enough.

 Not even close ... relpages could be badly out of date.  If you believe
 it, you could fail to dump data that's in further-out pages.  We'd need
 to move pg_relpages() or some equivalent into core to make this
 workable.

 Which queries should pg_dump send to the backend?

 Hmm, I was thinking of SELECT * FROM table WHERE ctid BETWEEN ? AND ?,
 but we don't support TidScans for ranges. Perhaps we could add that.

 Yeah, that seems probably workable, given an up-to-date idea of the
 possible block range.

So how bad would it be if we committed this new format without support
for splitting large relations into multiple files, or with some stub
support that never actually gets used, and fixed this later?  Because
this is starting to sound like a bigger project than I think we ought
to be requiring for this patch.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] directory archive format for pg_dump

2010-12-16 Thread Heikki Linnakangas

On 16.12.2010 22:13, Robert Haas wrote:

So how bad would it be if we committed this new format without support
for splitting large relations into multiple files, or with some stub
support that never actually gets used, and fixed this later?  Because
this is starting to sound like a bigger project than I think we ought
to be requiring for this patch.


Would probably be fine, as long as we don't paint ourselves into a corner.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] directory archive format for pg_dump

2010-12-16 Thread Andrew Dunstan



On 12/16/2010 03:13 PM, Robert Haas wrote:

So how bad would it be if we committed this new format without support
for splitting large relations into multiple files, or with some stub
support that never actually gets used, and fixed this later?  Because
this is starting to sound like a bigger project than I think we ought
to be requiring for this patch.


I don't think we have to have that in the first go at all. Parallel dump 
could be extremely useful without it. I haven't looked closely, but I 
assume there will still be an archive version recorded somewhere. When 
we change the archive format, bump the version number.


cheers

andrew



Re: [HACKERS] directory archive format for pg_dump

2010-12-16 Thread Tom Lane
Andrew Dunstan and...@dunslane.net writes:
 On 12/16/2010 03:13 PM, Robert Haas wrote:
 So how bad would it be if we committed this new format without support
 for splitting large relations into multiple files, or with some stub
 support that never actually gets used, and fixed this later?  Because
 this is starting to sound like a bigger project than I think we ought
 to be requiring for this patch.

 I don't think we have to have that in the first go at all. Parallel dump 
 could be extremely useful without it. I haven't looked closely, but I 
 assume there will still be an archive version recorded somewhere. When 
 we change the archive format, bump the version number.

Sure, but it's worth thinking about the feature now.  If there are
format tweaks to be made, it might be less painful to make them now
instead of later, even if actual support for the feature isn't there.
(I agree I don't want to try to implement it just yet.)

regards, tom lane



Re: [HACKERS] directory archive format for pg_dump

2010-12-16 Thread Andres Freund
On Thursday 16 December 2010 19:33:10 Joachim Wieland wrote:
 On Thu, Dec 16, 2010 at 12:48 PM, Heikki Linnakangas
 
 heikki.linnakan...@enterprisedb.com wrote:
  As soon as we have parallel pg_dump, the next big thing is going to be
  parallel dump of the same table using multiple processes. Perhaps we
  should prepare for that in the directory archive format, by allowing the
  data of a single table to be split into multiple files. That way
  parallel pg_dump is simple, you just split the table in chunks of
  roughly the same size, say 10GB each, and launch a process for each
  chunk, writing to a separate file.
 
 How exactly would you just split the table in chunks of roughly the
 same size ? Which queries should pg_dump send to the backend? If it
 just sends a bunch of WHERE queries, the server would still scan the
 same data several times since each pg_dump client would result in a
 seqscan over the full table.
I would suggest implementing support for tidscans and doing it in segment
size...

Andres



Re: [HACKERS] directory archive format for pg_dump

2010-12-16 Thread Heikki Linnakangas

On 17.12.2010 00:29, Andres Freund wrote:

On Thursday 16 December 2010 19:33:10 Joachim Wieland wrote:

On Thu, Dec 16, 2010 at 12:48 PM, Heikki Linnakangas

heikki.linnakan...@enterprisedb.com  wrote:

As soon as we have parallel pg_dump, the next big thing is going to be
parallel dump of the same table using multiple processes. Perhaps we
should prepare for that in the directory archive format, by allowing the
data of a single table to be split into multiple files. That way
parallel pg_dump is simple, you just split the table in chunks of
roughly the same size, say 10GB each, and launch a process for each
chunk, writing to a separate file.


How exactly would you just split the table in chunks of roughly the
same size ? Which queries should pg_dump send to the backend? If it
just sends a bunch of WHERE queries, the server would still scan the
same data several times since each pg_dump client would result in a
seqscan over the full table.

I would suggest implementing support for tidscans and doing it in segment
size...


I don't think there's any particular gain from matching the server's 
data file segment size, although 1GB does sound like a good chunk size 
for this too.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] directory archive format for pg_dump

2010-12-16 Thread Andres Freund
On Thursday 16 December 2010 23:34:02 Heikki Linnakangas wrote:
 On 17.12.2010 00:29, Andres Freund wrote:
  On Thursday 16 December 2010 19:33:10 Joachim Wieland wrote:
  On Thu, Dec 16, 2010 at 12:48 PM, Heikki Linnakangas
  
  heikki.linnakan...@enterprisedb.com  wrote:
  As soon as we have parallel pg_dump, the next big thing is going to be
  parallel dump of the same table using multiple processes. Perhaps we
  should prepare for that in the directory archive format, by allowing
  the data of a single table to be split into multiple files. That way
  parallel pg_dump is simple, you just split the table in chunks of
  roughly the same size, say 10GB each, and launch a process for each
  chunk, writing to a separate file.
  
  How exactly would you just split the table in chunks of roughly the
  same size ? Which queries should pg_dump send to the backend? If it
  just sends a bunch of WHERE queries, the server would still scan the
  same data several times since each pg_dump client would result in a
  seqscan over the full table.
  
  I would suggest implementing support for tidscans and doing it in
  segment size...
 
 I don't think there's any particular gain from matching the server's
 data file segment size, although 1GB does sound like a good chunk size
 for this too.
It's noticeably more efficient to read from different files in different
processes than to have them all hammering the same file.

Andres



Re: [HACKERS] directory archive format for pg_dump

2010-12-16 Thread Andrew Dunstan



On 12/16/2010 03:52 PM, Tom Lane wrote:

Andrew Dunstan and...@dunslane.net writes:

On 12/16/2010 03:13 PM, Robert Haas wrote:

So how bad would it be if we committed this new format without support
for splitting large relations into multiple files, or with some stub
support that never actually gets used, and fixed this later?  Because
this is starting to sound like a bigger project than I think we ought
to be requiring for this patch.

I don't think we have to have that in the first go at all. Parallel dump
could be extremely useful without it. I haven't looked closely, but I
assume there will still be an archive version recorded somewhere. When
we change the archive format, bump the version number.

Sure, but it's worth thinking about the feature now.  If there are
format tweaks to be made, it might be less painful to make them now
instead of later, even if actual support for the feature isn't there.
(I agree I don't want to try to implement it just yet.)





Yeah, OK. Well, time is getting short but (hand waving wildly) I think 
we could probably get by with just adding a member to the TOC for the 
section number of the entry (set it to 0 for non TABLE DATA TOC 
entries). The section number could be built into the file name in 
directory format. For now that number would always be 1 for TABLE DATA 
members.


This has intriguing possibilities for parallel restore of custom format 
dumps too. It could be very useful to be able to restore a single table 
in parallel, if we had more than one TABLE DATA member per table.


I'm deliberately just addressing infrastructure issues rather than how 
we actually generate multiple sections of data for a single table 
(especially if we want to do that in parallel).
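
To make the naming part concrete, something like this is all I mean (a
sketch only; the ".dat" suffix and the function name are assumptions,
not anything that exists in the patch):

#include <stdio.h>

/*
 * Build the data file name for the directory format from the TOC
 * entry's dump ID plus the new section number (always 1 for TABLE DATA
 * entries for now, 0 for everything else).
 */
static void
sectionFileName(char *buf, size_t buflen, int dumpId, int section)
{
	snprintf(buf, buflen, "%d.%d.dat", dumpId, section);
}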


cheers

andrew



Re: [HACKERS] directory archive format for pg_dump

2010-12-07 Thread Joachim Wieland
On Thu, Dec 2, 2010 at 2:52 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 Ok, committed, with some small cleanup since the last patch I posted.

 Could you update the directory-format patch on top of the committed version,
 please?

Thanks for committing the first part. Here is the updated and rebased
directory-format patch.

Joachim


pg_dump-directory-rebased.diff.gz
Description: GNU Zip compressed data



Re: [HACKERS] directory archive format for pg_dump

2010-12-03 Thread Heikki Linnakangas

On 02.12.2010 23:12, Alvaro Herrera wrote:

Excerpts from Heikki Linnakangas's message of jue dic 02 16:52:27 -0300 2010:

Ok, committed, with some small cleanup since the last patch I posted.


I think the comments on _ReadBuf and friends need to be updated, since
they are not just for headers and TOC stuff anymore.  I'm not sure if
they were already outdated before your patch ...


These routines are only used to read & write headers & TOC

Hmm, ReadInt calls _ReadByte, and PrintData used to call ReadInt, so it
was indirectly called for things other than headers and TOC
already. Unless you consider the headers to include the length integer
in each data block. I'm inclined to just remove that sentence.


I also note that the _Clone and _DeClone functions are a bit misplaced.
There's a big "END OF FORMAT CALLBACKS" comment earlier in the file, but
_Clone and _DeClone are such callbacks. I'll move them to the right place.


PS. Thanks for the cleanup you did yesterday.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] directory archive format for pg_dump

2010-12-02 Thread Heikki Linnakangas

Ok, committed, with some small cleanup since the last patch I posted.

Could you update the directory-format patch on top of the committed 
version, please?


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] directory archive format for pg_dump

2010-12-02 Thread Alvaro Herrera
Excerpts from Heikki Linnakangas's message of jue dic 02 16:52:27 -0300 2010:
 Ok, committed, with some small cleanup since the last patch I posted.

I think the comments on _ReadBuf and friends need to be updated, since
they are not just for headers and TOC stuff anymore.  I'm not sure if
they were already outdated before your patch ...

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] directory archive format for pg_dump

2010-12-01 Thread Heikki Linnakangas

On 29.11.2010 22:21, Heikki Linnakangas wrote:

On 29.11.2010 07:11, Joachim Wieland wrote:

On Mon, Nov 22, 2010 at 3:44 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:

* wrap long lines
* use extern in function prototypes in header files
* inline some functions like _StartDataCompressor, _EndDataCompressor,
_DoInflate/_DoDeflate that aren't doing anything but call some other
function.


So here is a new round of patches. It turned out that the feature to
allow to also restore files from a different dump and with a different
compression required some changes in the compressor API. And in the
end I didn't like all the #ifdefs either and made a less #ifdef-rich
version using function pointers.


Ok. The separate InitCompressorState() and AllocateCompressorState()
functions seem unnecessary. As the code stands, there's little
performance gain from re-using the same CompressorState, just
re-initializing it, and I can't see any other justification for them
either.

I combined those, and the Free/Flush steps, and did a bunch of other
editorializations and cleanups. Here's an updated patch, also available
in my git repository at
git://git.postgresql.org/git/users/heikki/postgres.git, branch
pg_dump-dir. I'm going to continue reviewing this later, tomorrow
hopefully.


Here's another update. I changed things quite heavily. I didn't see the 
point of having the Alloc+Free functions for uncompressing, because the 
ReadDataFromArchive processed the whole input stream in one go anyway. 
So the new API consists of four functions, AllocateCompressor, 
WriteDataToArchive and EndCompressor for writing, and 
ReadDataFromArchive for reading.


Also, I reverted the zlib buffer size from 64k to 4k. If you want to 
raise that, let's discuss that separately.


Please let me know what you think of this version, or if you spot any 
bugs. I'll keep working on this, I'm hoping to get this into committable 
shape by the end of the week.


The pg_backup_directory patch naturally won't apply over this anymore. 
Once we have the compress_io part in shape, that will need to be fixed.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] directory archive format for pg_dump

2010-12-01 Thread Heikki Linnakangas

On 01.12.2010 16:03, Heikki Linnakangas wrote:

On 29.11.2010 22:21, Heikki Linnakangas wrote:

I combined those, and the Free/Flush steps, and did a bunch of other
editorializations and cleanups. Here's an updated patch, also available
in my git repository at
git://git.postgresql.org/git/users/heikki/postgres.git, branch
pg_dump-dir. I'm going to continue reviewing this later, tomorrow
hopefully.


Here's another update.


Forgot attachment. This is also available in the above git repo.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
*** a/src/bin/pg_dump/Makefile
--- b/src/bin/pg_dump/Makefile
***
*** 20,26  override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
  
  OBJS=	pg_backup_archiver.o pg_backup_db.o pg_backup_custom.o \
  	pg_backup_files.o pg_backup_null.o pg_backup_tar.o \
! 	dumputils.o $(WIN32RES)
  
  KEYWRDOBJS = keywords.o kwlookup.o
  
--- 20,26 
  
  OBJS=	pg_backup_archiver.o pg_backup_db.o pg_backup_custom.o \
  	pg_backup_files.o pg_backup_null.o pg_backup_tar.o \
! 	dumputils.o compress_io.o $(WIN32RES)
  
  KEYWRDOBJS = keywords.o kwlookup.o
  
*** /dev/null
--- b/src/bin/pg_dump/compress_io.c
***
*** 0 
--- 1,404 
+ /*-
+  *
+  * compress_io.c
+  *   Routines for archivers to write an uncompressed or compressed data
+  *   stream.
+  *
+  * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
+  * Portions Copyright (c) 1994, Regents of the University of California
+  *
+  *  The interface for writing to an archive consists of three functions:
+  *  AllocateCompressor, WriteDataToArchive and EndCompressor. First you call
+  *  AllocateCompressor, then write all the data by calling WriteDataToArchive
+  *  as many times as needed, and finally EndCompressor. WriteDataToArchive
+  *  and EndCompressor will call the WriteFunc that was provided to
+  *  AllocateCompressor for each chunk of compressed data.
+  *
+  *  The interface for reading an archive consists of just one function:
+  *  ReadDataFromArchive. ReadDataFromArchive reads the whole compressed input
+  *  stream, by repeatedly calling the given ReadFunc. ReadFunc returns the
+  *  compressed data chunk at a time, and ReadDataFromArchive decompresses it
+  *  and passes the decompressed data to ahwrite(), until ReadFunc returns 0
+  *  to signal EOF.
+  *
+  *  The interface is the same for compressed and uncompressed streams.
+  *
+  *
+  * IDENTIFICATION
+  * src/bin/pg_dump/compress_io.c
+  *
+  *-
+  */
+ 
+ #include "compress_io.h"
+ 
+ static const char *modulename = gettext_noop("compress_io");
+ 
+ static void ParseCompressionOption(int compression, CompressorAlgorithm *alg,
+    int *level);
+ 
+ /* Routines that are private to a specific compressor (static functions) */
+ #ifdef HAVE_LIBZ
+ /* Routines that support zlib compressed data I/O */
+ static void InitCompressorZlib(CompressorState *cs, int level);
+ static void DeflateCompressorZlib(ArchiveHandle *AH, CompressorState *cs,
+   bool flush);
+ static void ReadDataFromArchiveZlib(ArchiveHandle *AH, ReadFunc readF);
+ static size_t WriteDataToArchiveZlib(ArchiveHandle *AH, CompressorState *cs,
+ 	 const char *data, size_t dLen);
+ static void EndCompressorZlib(ArchiveHandle *AH, CompressorState *cs);
+ 
+ #endif
+ 
+ /* Routines that support uncompressed data I/O */
+ static void ReadDataFromArchiveNone(ArchiveHandle *AH, ReadFunc readF);
+ static size_t WriteDataToArchiveNone(ArchiveHandle *AH, CompressorState *cs,
+ 	 const char *data, size_t dLen);
+ 
+ static void
+ ParseCompressionOption(int compression, CompressorAlgorithm *alg, int *level)
+ {
+ 	/*
+ 	 * The compression is set either on the commandline when creating
+ 	 * an archive or by ReadHead() when restoring an archive. It can also be
+ 	 * set on a per-data item basis in the directory archive format.
+ 	 */
+ 	if (compression == Z_DEFAULT_COMPRESSION ||
+ 		(compression > 0 && compression <= 9))
+ 		*alg = COMPR_ALG_LIBZ;
+ 	else if (compression == 0)
+ 		*alg = COMPR_ALG_NONE;
+ 	else
+ 		die_horribly(NULL, modulename, "Invalid compression code: %d\n",
+ 	 compression);
+ 
+ 	if (level)
+ 		*level = compression;
+ }
+ 
+ /* Public interface routines */
+ 
+ /* Allocate a new compressor */
+ CompressorState *
+ AllocateCompressor(int compression, WriteFunc writeF)
+ {
+ 	CompressorState *cs;
+ 	CompressorAlgorithm alg;
+ 	int level;
+ 
+ 	ParseCompressionOption(compression, &alg, &level);
+ 
+ 	cs = (CompressorState *) calloc(1, sizeof(CompressorState));
+ 	if (cs == NULL)
+ 		die_horribly(NULL, modulename, "out of memory\n");
+ 	cs->writeF = writeF;
+ 	cs->comprAlg = alg;
+ 
+ #ifndef HAVE_LIBZ
+ 	if (alg == COMPR_ALG_LIBZ)
+ 		die_horribly(NULL, modulename, "not built with zlib support\n");
+ #endif
+ 
+ 	/*
+ 	 * Perform compression algorithm specific 

Re: [HACKERS] directory archive format for pg_dump

2010-12-01 Thread Joachim Wieland
On Wed, Dec 1, 2010 at 9:05 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 Forgot attachment. This is also available in the above git repo.

I have quickly checked your modifications. On the one hand I like the
reduction of functions; I would have said that we have AH around all
the time and so we could just allocate once, stuff it all into
ctx->cs and reuse the buffers for every object, but re-allocating them
for every (dumpable) object should be fine as well.

Regarding the function pointers that you removed, you are now putting
back in what Dimitri wanted me to take out, namely switch/case
instructions for the algorithms and then #ifdefs for every algorithm.
It's not too many now since we have taken out LZF. Well, I can live
with both ways.

There is one thing however that I am not in favor of, which is the
removal of the sizeHint parameter for the read functions. The reason
for this parameter is not very clear now without LZF but I have tried
to put in a few comments to explain the situation (which you have
taken out as well :-) ).

The point is that zlib is a stream based compression algorithm, you
just stuff data in and from time to time you get data out and in the
end you explicitly flush the compressor. The read function can just
return as many bytes as it wants and we can just hand it all over to
zlib. Other compression algorithms however are block based and first
write a block header that contains the information on the next data
block, including uncompressed and compressed sizes. Now with the
sizeHint parameter I used, the compressor could tell the read function
that it just wants to read the fixed size header (6 bytes IIRC). In
the header it would look up the compressed size for the next block and
would then ask the read function to get exactly this amount of data,
decompress it and go on with the next block, and so forth...

Of course you can possibly do that memory management inside the
compressor with an extra buffer holding what you got in excess but
it's a pain. If you removed that part on purpose on the grounds that
there is no block based compression algorithm in core and probably
never will be, then that's okay :-)
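
For reference, this is the shape of the read loop a block-based
compressor needs, which is what sizeHint was meant to enable. It is
only a sketch: the callback signature, the 6-byte header and the way
the compressed length is encoded in it are all made up here, only the
structure matters (fixed-size header first, then exactly one block):

#include <stddef.h>

typedef size_t (*ReadChunkFunc) (void *ctx, char *buf, size_t len);

static void
readBlockStream(void *ctx, ReadChunkFunc readF)
{
	char		hdr[6];
	char		cbuf[65536];

	/* each iteration: read the fixed-size block header first */
	while (readF(ctx, hdr, sizeof(hdr)) == sizeof(hdr))
	{
		/* assumed layout: compressed length in the last two header bytes */
		size_t		clen = ((size_t) (unsigned char) hdr[4] << 8) |
							(unsigned char) hdr[5];

		/* then read exactly one compressed block, no more, no less */
		if (clen > sizeof(cbuf) || readF(ctx, cbuf, clen) != clen)
			break;				/* truncated archive */
		/* decompress cbuf[0..clen) and hand the result to ahwrite() here */
	}
}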


Joachim



Re: [HACKERS] directory archive format for pg_dump

2010-12-01 Thread Heikki Linnakangas

On 02.12.2010 04:35, Joachim Wieland wrote:

There is one thing however that I am not in favor of, which is the
removal of the sizeHint parameter for the read functions. The reason
for this parameter is not very clear now without LZF but I have tried
to put in a few comments to explain the situation (which you have
taken out as well :-) ).

The point is that zlib is a stream based compression algorithm, you
just stuff data in and from time to time you get data out and in the
end you explicitly flush the compressor. The read function can just
return as many bytes as it wants and we can just hand it all over to
zlib. Other compression algorithms however are block based and first
write a block header that contains the information on the next data
block, including uncompressed and compressed sizes. Now with the
sizeHint parameter I used, the compressor could tell the read function
that it just wants to read the fixed size header (6 bytes IIRC). In
the header it would look up the compressed size for the next block and
would then ask the read function to get exactly this amount of data,
decompress it and go on with the next block, and so forth...

Of course you can possibly do that memory management inside the
compressor with an extra buffer holding what you got in excess but
it's a pain. If you removed that part on purpose on the grounds that
there is no block based compression algorithm in core and probably
never will be, then that's okay :-)


Yeah, we're not going to have lzf built-in anytime soon. The external 
command approach seems like the best way to support additional 
compression algorithms, and I don't think it could do anything with 
sizeHint. And the custom format didn't obey sizeHint anyway, because it 
reads one custom-format block at a time.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] directory archive format for pg_dump

2010-11-29 Thread Heikki Linnakangas

On 29.11.2010 07:11, Joachim Wieland wrote:

On Mon, Nov 22, 2010 at 3:44 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com  wrote:

* wrap long lines
* use extern in function prototypes in header files
* inline some functions like _StartDataCompressor, _EndDataCompressor,
_DoInflate/_DoDeflate  that aren't doing anything but call some other
function.


So here is a new round of patches. It turned out that the feature to
allow to also restore files from a different dump and with a different
compression required some changes in the compressor API. And in the
end I didn't like all the #ifdefs either and made a less #ifdef-rich
version using function pointers. The downside now is that I have
created quite a few one-line functions that Heikki doesn't like all
that much, but I assume that they are okay in this case on the grounds
that the public compressor interface is calling the private
implementation of a certain compressor.


Thanks, I'll take a look.

BTW, I know you wanted to have support for other compression algorithms; 
I think the best way to achieve that is to make it possible to specify 
an external command to be used for compression. pg_dump would fork() and 
exec() that, and pipe the data to be compressed/decompressed to 
stdin/stdout of the external command. We're not going to add support for 
every new compression algorithm that's in vogue, but generic external 
command support should make happy those who want it. I'd be particularly 
excited about using something like pbzip2, to speed up the compression 
on multi-core systems.


That should be a separate patch, but it's something to keep in mind with 
these refactorings.
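
Just to sketch the idea (none of this exists anywhere; a real version
would fork()/exec() with both pipe ends under pg_dump's control and do
proper error handling, rather than leaning on popen() and a shell
redirect as this toy does):

#include <stdio.h>

/*
 * Pipe a buffer through an external compression command such as
 * "pbzip2", letting the command write the compressed result to a file.
 */
static int
compressViaCommand(const char *cmd, const char *outfile,
				   const char *data, size_t len)
{
	char		cmdline[1024];
	FILE	   *out;

	snprintf(cmdline, sizeof(cmdline), "%s > %s", cmd, outfile);
	out = popen(cmdline, "w");
	if (out == NULL)
		return -1;
	fwrite(data, 1, len, out);
	return pclose(out);
}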


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] directory archive format for pg_dump

2010-11-29 Thread Robert Haas
On Mon, Nov 29, 2010 at 10:49 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 On 29.11.2010 07:11, Joachim Wieland wrote:

 On Mon, Nov 22, 2010 at 3:44 PM, Heikki Linnakangas
 heikki.linnakan...@enterprisedb.com  wrote:

 * wrap long lines
 * use extern in function prototypes in header files
 * inline some functions like _StartDataCompressor, _EndDataCompressor,
 _DoInflate/_DoDeflate  that aren't doing anything but call some other
 function.

 So here is a new round of patches. It turned out that the feature to
 allow to also restore files from a different dump and with a different
 compression required some changes in the compressor API. And in the
 end I didn't like all the #ifdefs either and made a less #ifdef-rich
 version using function pointers. The downside now is that I have
 created quite a few one-line functions that Heikki doesn't like all
 that much, but I assume that they are okay in this case on the grounds
 that the public compressor interface is calling the private
 implementation of a certain compressor.

 Thanks, I'll take a look.

 BTW, I know you wanted to have support for other compression algorithms; I
 think the best way to achieve that is to make it possible to specify an
 external command to be used for compression. pg_dump would fork() and exec()
 that, and pipe the data to be compressed/decompressed to stdin/stdout of the
 external command. We're not going to add support for every new compression
 algorithm that's in vogue, but generic external command support should make
 happy those who want it. I'd be particularly excited about using something
 like pbzip2, to speed up the compression on multi-core systems.

 That should be a separate patch, but it's something to keep in mind with
 these refactorings.

That would also ease licensing concerns, since we wouldn't have to
redistribute or bundle anything.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] directory archive format for pg_dump

2010-11-29 Thread Heikki Linnakangas

On 29.11.2010 07:11, Joachim Wieland wrote:

On Mon, Nov 22, 2010 at 3:44 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com  wrote:

* wrap long lines
* use extern in function prototypes in header files
* inline some functions like _StartDataCompressor, _EndDataCompressor,
_DoInflate/_DoDeflate  that aren't doing anything but call some other
function.


So here is a new round of patches. It turned out that the feature to
allow to also restore files from a different dump and with a different
compression required some changes in the compressor API. And in the
end I didn't like all the #ifdefs either and made a less #ifdef-rich
version using function pointers.


Ok. The separate InitCompressorState() and AllocateCompressorState() 
functions seem unnecessary. As the code stands, there's little 
performance gain from re-using the same CompressorState, just 
re-initializing it, and I can't see any other justification for them either.


I combined those, and the Free/Flush steps, and did a bunch of other 
editorializations and cleanups. Here's an updated patch, also available 
in my git repository at 
git://git.postgresql.org/git/users/heikki/postgres.git, branch 
pg_dump-dir. I'm going to continue reviewing this later, tomorrow 
hopefully.



The downside now is that I have
created quite a few one-line functions that Heikki doesn't like all
that much, but I assume that they are okay in this case on the grounds
that the public compressor interface is calling the private
implementation of a certain compressor.


You could avoid the wrapper functions by calling the function pointers 
directly, but I agree it seems neater the way you did it.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
*** a/src/bin/pg_dump/Makefile
--- b/src/bin/pg_dump/Makefile
***
*** 20,26  override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
  
  OBJS=	pg_backup_archiver.o pg_backup_db.o pg_backup_custom.o \
  	pg_backup_files.o pg_backup_null.o pg_backup_tar.o \
! 	dumputils.o $(WIN32RES)
  
  KEYWRDOBJS = keywords.o kwlookup.o
  
--- 20,26 
  
  OBJS=	pg_backup_archiver.o pg_backup_db.o pg_backup_custom.o \
  	pg_backup_files.o pg_backup_null.o pg_backup_tar.o \
! 	dumputils.o compress_io.o $(WIN32RES)
  
  KEYWRDOBJS = keywords.o kwlookup.o
  
*** /dev/null
--- b/src/bin/pg_dump/compress_io.c
***
*** 0 
--- 1,415 
+ /*-
+  *
+  * compress_io.c
+  *   Routines for archivers to write an uncompressed or compressed data
+  *   stream.
+  *
+  * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
+  * Portions Copyright (c) 1994, Regents of the University of California
+  *
+  * IDENTIFICATION
+  * src/bin/pg_dump/compress_io.c
+  *
+  *-
+  */
+ 
+ #include "compress_io.h"
+ 
+ static const char *modulename = gettext_noop("compress_io");
+ 
+ /* Routines that are private to a specific compressor (static functions) */
+ #ifdef HAVE_LIBZ
+ /* Routines that support zlib compressed data I/O */
+ static void InitCompressorZlib(CompressorState *cs, int compression);
+ static void DeflateCompressorZlib(ArchiveHandle *AH, CompressorState *cs,
+   bool flush);
+ static void ReadDataFromArchiveZlib(ArchiveHandle *AH, CompressorState *cs);
+ static size_t WriteDataToArchiveZlib(ArchiveHandle *AH, CompressorState *cs,
+ 	 const void *data, size_t dLen);
+ static void EndCompressorZlib(ArchiveHandle *AH, CompressorState *cs);
+ static CompressorState *AllocateCompressorState(CompressorAction action,
+ int compression);
+ 
+ static CompressorFuncs cfs_zlib = {
+ 	InitCompressorZlib,
+ 	ReadDataFromArchiveZlib,
+ 	WriteDataToArchiveZlib,
+ 	EndCompressorZlib
+ };
+ #endif
+ 
+ /* Routines that support uncompressed data I/O */
+ static void InitCompressorNone(CompressorState *cs, int compression);
+ static void ReadDataFromArchiveNone(ArchiveHandle *AH, CompressorState *cs);
+ static size_t WriteDataToArchiveNone(ArchiveHandle *AH, CompressorState *cs,
+ 	 const void *data, size_t dLen);
+ static void EndCompressorNone(ArchiveHandle *AH, CompressorState *cs);
+ 
+ static CompressorFuncs cfs_none = {
+ 	InitCompressorNone,
+ 	ReadDataFromArchiveNone,
+ 	WriteDataToArchiveNone,
+ 	EndCompressorNone
+ };
+ 
+ /* Allocate a new decompressor */
+ CompressorState *
+ AllocateInflator(int compression, ReadFunc readF)
+ {
+ 	CompressorState *cs;
+ 
+ 	cs = AllocateCompressorState(COMPRESSOR_INFLATE, compression);
+ 	cs->readF = readF;
+ 
+ 	return cs;
+ }
+ 
+ /* Allocate a new compressor */
+ CompressorState *
+ AllocateDeflator(int compression, WriteFunc writeF)
+ {
+ 	CompressorState *cs;
+ 
+ 	cs = AllocateCompressorState(COMPRESSOR_DEFLATE, compression);
+ 	cs->writeF = writeF;
+ 
+ 	return cs;
+ }
+ 
+ static CompressorState *
+ AllocateCompressorState(CompressorAction action, int compression)
+ {

Re: [HACKERS] directory archive format for pg_dump

2010-11-22 Thread Heikki Linnakangas

On 20.11.2010 06:10, Joachim Wieland wrote:

2010/11/19 José Arthur Benetasso Villanova jose.art...@gmail.com:

The md5.c and kwlookup.c reuse using a link doesn't look nice either.
This way you need to compile twice, among other things, but I think
that it's temporary, right?


No, it isn't. md5.c is used in the same way by e.g. libpq and there
are other examples for links in core, check out src/bin/psql for
example.


It seems like overkill to include md5 just for hashing the random bytes 
that getRandomData() generates. And if random() doesn't produce unique 
values, it's not going to get better by hashing it. How about using a 
timestamp instead of the hash?


If you don't initialize random() with srandom(), BTW, it will always 
return the same value.
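
(Something like this, purely as an illustration of seeding, not code
from the patch:)

#include <stdlib.h>
#include <time.h>
#include <unistd.h>

/* seed the generator once at startup, e.g. from the time and the PID */
static void
seedRandom(void)
{
	srandom((unsigned int) time(NULL) ^ (unsigned int) getpid());
}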


But I'm not actually sure we should be preventing mix & match of files
from different dumps. It might be very useful to do just that sometimes, 
like restoring a recent backup, with the contents of one table replaced 
with older data. A warning would be ok, though.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] directory archive format for pg_dump

2010-11-22 Thread Tom Lane
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
 But I'm not actually sure we should be preventing mix & match of files
 from different dumps. It might be very useful to do just that sometimes, 
 like restoring a recent backup, with the contents of one table replaced 
 with older data. A warning would be ok, though.

+1.  This mechanism seems like a solution in search of a problem.
Just lose the whole thing, and instead fix pg_dump to complain if
the target directory isn't empty.  That should be sufficient to guard
against accidental mixing of different dumps, and as Heikki says
there's not a good reason to prevent intentional mixing.

regards, tom lane



Re: [HACKERS] directory archive format for pg_dump

2010-11-22 Thread Heikki Linnakangas

On 22.11.2010 19:07, Tom Lane wrote:

Heikki Linnakangasheikki.linnakan...@enterprisedb.com  writes:

But I'm not actually sure we should be preventing mix & match of files
from different dumps. It might be very useful to do just that sometimes,
like restoring a recent backup, with the contents of one table replaced
with older data. A warning would be ok, though.


+1.  This mechanism seems like a solution in search of a problem.
Just lose the whole thing, and instead fix pg_dump to complain if
the target directory isn't empty.  That should be sufficient to guard
against accidental mixing of different dumps, and as Heikki says
there's not a good reason to prevent intentional mixing.


Extending that thought a bit, it would be nice if the per-file header
carried the information on whether the file is compressed or not,
instead of just one such flag in the TOC. You could then also mix &
match files from compressed and non-compressed archives.
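
Something like this is what I have in mind, just a sketch; the field
layout, the magic string and the compression codes are all made up:

#include <stdint.h>

/* a small header at the start of each data file, so every file states
 * its own format version and compression instead of relying on the
 * single flag in the TOC */
typedef struct
{
	char		magic[4];		/* e.g. "PGDF" */
	uint8_t		version;		/* per-file format version */
	uint8_t		compression;	/* 0 = none, 1 = zlib, ... */
} DataFileHeader;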


Other than the md5 thing, the patch looks fine to me. There are quite
many levels of indirection; it took me a while to get my head around
call chains like DataDumper->_WriteData->WriteDataToArchive->_WriteBuf,
but I don't have any ideas on how to improve that.


However, docs are missing, so I'm marking this as Waiting on Author.

There are some cosmetic changes I'd like to have fixed or do myself before
committing:


* wrap long lines
* use extern in function prototypes in header files
* inline some functions like _StartDataCompressor, _EndDataCompressor, 
_DoInflate/_DoDeflate  that aren't doing anything but call some other 
function.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] directory archive format for pg_dump

2010-11-19 Thread Dimitri Fontaine
Hi,

Sharing some thoughts after a first round of reviewing, where I only had
time to read the patch itself.

Joachim Wieland j...@mcknight.de writes:
 Since the compression is currently all down in the custom format backup code,
 the first thing I've done was refactoring the compression functions into a
 separate file. While at it, I have added support for liblzf compression.

I think I'd like to see a separate patch for the new compression
support. Sorry about that, I realize that's extra work…

And it could be about personal preferences, but the way you added the
liblzf support strikes me as odd, with all those #ifdefs everywhere. Is
it possible to have a specific file for each supported compression
format, then some routing code in src/bin/pg_dump/compress_io.c?

The routing code already exists but then the file is full of #ifdef
sections to define the right supporting function, when I think having
compress_io_zlib and compress_io_lzf files would be better.


Then there's the bulk of the new dump format feature in the other part
of the patch, namely src/bin/pg_dump/pg_backup_directory.c. You have to
update the copyright in the file header there, at least :)

I've yet to devote more time to this part of the patch, but it seems
like it rewrites the full support without using the existing bits.
That's something I have to check; I didn't have time to read the
existing code for the other archive formats.

I'm hesitant as far as marking the patch Waiting on author to get it
split. Joachim, what do you think?

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] directory archive format for pg_dump

2010-11-19 Thread José Arthur Benetasso Villanova
Hi Dimitri and Joachim.

I've looked at the patch too, and I want to share some thoughts. I've
used http://wiki.postgresql.org/wiki/Reviewing_a_Patch to guide my
review.

Submission review:

I've applied and compiled the patch successfully using the current master.

Usability review:

The dir format generated 60 files of different sizes in my database,
and it looks very confusing. Is it possible to use the same
trick as pigz and pbzip2, creating a concatenated file of streams?

Feature test:

Just a partial review. I can dump / restore using lzf, but didn't
stress it hard to check robustness.

Performance review:

Didn't test it hard either, but it looks ok.


Coding review:

Just a shallow review here.

 I think I'd like to see a separate patch for the new compression
 support. Sorry about that, I realize that's extra work…

Same feeling here, this is the first thing that I noticed.

The md5.c and kwlookup.c reuse using a link doesn't look nice either.
This way you need to compile twice, among other things, but I think
that it's temporary, right?

-- 
José Arthur Benetasso Villanova



Re: [HACKERS] directory archive format for pg_dump

2010-11-19 Thread Alvaro Herrera
Excerpts from José Arthur Benetasso Villanova's message of vie nov 19 18:28:03 -0300 2010:

 The md5.c and kwlookup.c reuse using a link doesn't look nice either.
 This way you need to compile twice, among other things, but I think
 that it's temporary, right?

Not sure what you mean here, but kwlookup.c is a symlink without this
patch too.  It's just the way it works; the compilation environments
here and in the backend are different, so there is no other option but
to compile twice.  I guess md5.c is a new one (I didn't check), but I
would assume it's the same thing.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] directory archive format for pg_dump

2010-11-19 Thread Joachim Wieland
Hi Dimitri,

thanks for reviewing my patch!

On Fri, Nov 19, 2010 at 2:44 PM, Dimitri Fontaine
dimi...@2ndquadrant.fr wrote:
 I think I'd like to see a separate patch for the new compression
 support. Sorry about that, I realize that's extra work…

I guess it wouldn't be a very big deal but I also doubt that it makes
the review that much easier. Basically the compression refactor patch
would just touch pg_backup_custom.c (because this is the place where
the libz compression is currently buried) and the two new
compress_io.(c|h) files. Everything else is pretty much the directory
stuff and is on top of these changes.


 And it could be about personal preferences, but the way you added the
 liblzf support strikes me at odd, with all those #ifdefs everywhere. Is
 it possible to have a specific file for each supported compression
 format, then some routing code in src/bin/pg_dump/compress_io.c?

Sure we could. But I wanted to wait with any fancy function pointer
stuff until we have decided if we want to include the liblzf support
at all. The #ifdefs might be a bit ugly but in case we do not include
liblzf support, it's the easiest way to take it out again. As written
in my introduction, this patch is not really about liblzf, liblzf is
just a proof of concept for factoring out the compression part and I
have included it, so that people can use it and see how much speed
improvement they get.


 The routing code already exists but then the file is full of #ifdef
 sections to define the right supporting function when I think having a
 compress_io_zlib and a compress_io_lzf files would be better.

Sure! I completely agree...


 Then there's the bulk of the new dump format feature in the other part
 of the patch, namely src/bin/pg_dump/pg_backup_directory.c. You have to
 update the copyright in the file header there, at least :)

Well, not sure if we can just change the copyright notice, because in
the end the structure was copied from one of the other files which all
have the copyright notice in them, so my work is based on those other
files...


 I'm hesitant as far as marking the patch Waiting on author to get it
 split. Joachim, what do you think?

I will see if I can split it.


Joachim



Re: [HACKERS] directory archive format for pg_dump

2010-11-19 Thread Tom Lane
Dimitri Fontaine dimi...@2ndquadrant.fr writes:
 I think I'd like to see a separate patch for the new compression
 support. Sorry about that, I realize that's extra work…

That part of the patch is likely to get rejected outright anyway,
so I *strongly* recommend splitting it out.  We have generally resisted
adding random compression algorithms to pg_dump because of license and
patent considerations, and I see no reason to suppose this one is going
to pass muster.

regards, tom lane



Re: [HACKERS] directory archive format for pg_dump

2010-11-19 Thread Joachim Wieland
On Fri, Nov 19, 2010 at 11:53 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Dimitri Fontaine dimi...@2ndquadrant.fr writes:
  I think I'd like to see a separate patch for the new compression
  support. Sorry about that, I realize that's extra work…

 That part of the patch is likely to get rejected outright anyway,
 so I *strongly* recommend splitting it out.  We have generally resisted
 adding random compression algorithms to pg_dump because of license and
 patent considerations, and I see no reason to suppose this one is going
 to pass muster.


I was already anticipating that possibility and my initial patch description
is along these lines.

However, liblzf is BSD licensed so on the license side we should be fine.
Regarding patents, your last comment was that you'd like to see if it's
really worth it and so I have included support for lzf for anybody to go
ahead and find that out.

Will send an updated split up patch this weekend (which would actually be
four patches already...).


Joachim


Re: [HACKERS] directory archive format for pg_dump

2010-11-19 Thread Joachim Wieland
Hi Jose,

2010/11/19 José Arthur Benetasso Villanova jose.art...@gmail.com:
 The dir format generated in my database 60 files, with different
 sizes, and it looks very confusing. Is it possible to use the same
 trick as pigz and pbzip2, creating a concatenated file of streams?

What pigz is parallelizing is the actual computation of the compressed
data. The directory archive format however is a preparation for a
parallel pg_dump, dumping several tables (especially large tables of
course) in parallel via multiple database connections and multiple
pg_dump frontends. The idea of multiplexing their output into one file
has been rejected on the grounds that it would probably slow down the
whole process.

Nevertheless pigz could be implemented as an alternative compression
algorithm and that way the custom and the directory archive format
could use it, but here as well, license and patent questions might be
in the way, even though it is based on libz.


 The md5.c and kwlookup.c reuse using a link doesn't look nice either.
 This way you need to compile twice, among others things, but I think
 that its temporary, right?

No, it isn't. md5.c is used in the same way by e.g. libpq and there
are other examples for links in core, check out src/bin/psql for
example.

Joachim
