Re: [HACKERS] extensible external toast tuple support & snappy prototype
On 2013-06-27 09:17:10 -0700, Hitoshi Harada wrote:
> On Thu, Jun 27, 2013 at 6:08 AM, Robert Haas wrote:
> > On Wed, Jun 19, 2013 at 3:27 AM, Andres Freund wrote:
> > > There will be a newer version of the patch coming today or tomorrow, so
> > > there's probably no point in looking at the one linked above before
> > > that...
> >
> > This patch is marked as "Ready for Committer" in the CommitFest app.
> > But it is not clear to me where the patch is that is being proposed
> > for commit.
>
> No, this conversation is about patch #1153, pluggable toast compression,
> which is in Needs Review, and you may be confused with #1127, extensible
> external toast tuple support.

Well, actually this is the correct thread, and pluggable toast compression
developed out of it. You had marked #1127 as ready for committer although
you had noticed an omission in heap_tuple_fetch_attr...

Answered with the new patch version toplevel in the thread, to make this
hopefully less confusing.

Greetings,

Andres Freund

--
Andres Freund                     http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] extensible external toast tuple support & snappy prototype
On Thu, Jun 27, 2013 at 6:08 AM, Robert Haas wrote:
> On Wed, Jun 19, 2013 at 3:27 AM, Andres Freund wrote:
> > There will be a newer version of the patch coming today or tomorrow, so
> > there's probably no point in looking at the one linked above before
> > that...
>
> This patch is marked as "Ready for Committer" in the CommitFest app.
> But it is not clear to me where the patch is that is being proposed
> for commit.

No, this conversation is about patch #1153, pluggable toast compression,
which is in Needs Review, and you may be confused with #1127, extensible
external toast tuple support.

--
Hitoshi Harada
Re: [HACKERS] extensible external toast tuple support & snappy prototype
On Wed, Jun 19, 2013 at 3:27 AM, Andres Freund wrote:
> There will be a newer version of the patch coming today or tomorrow, so
> there's probably no point in looking at the one linked above before
> that...

This patch is marked as "Ready for Committer" in the CommitFest app.
But it is not clear to me where the patch is that is being proposed
for commit.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] extensible external toast tuple support & snappy prototype
On 2013-06-19 00:15:56 -0700, Hitoshi Harada wrote:
> On Wed, Jun 5, 2013 at 8:01 AM, Andres Freund wrote:
> > Two patches attached:
> > 1) add snappy to src/common. The integration needs some more work.
> > 2) Combined patch that adds indirect tuple and snappy compression. Those
> >    could be separated, but this is an experiment so far...
>
> I took a look at them a little. This proposal is a super set of patch
> #1127. https://commitfest.postgresql.org/action/patch_view?id=1127
>
> - is not found in my mac. Commented it out, it builds clean.
> - I don't see what the added is_inline flag means in
>   toast_compress_datum(). Obviously not used, but I wonder what was the
>   intention.

Hm. I don't think you've looked at the latest version of the patch, check
http://archives.postgresql.org/message-id/20130614230625.gd19...@awork2.anarazel.de
- that should be linked from the commitfest. The is_inline part should be
gone there.

> - By this,
>   * compression method. We could just use the two bytes to store 3 other
>   * compression methods but maybe we better don't paint ourselves in a
>   * corner again.
>   you mean two bits, not two bytes?

Yes, typo... The plan is to use those two bits in the following way:
- 00 pglz
- 01 snappy/lz4/whatever
- 10 another
- 11 one extra byte

> And the patch adds snappy-c in src/common. I definitely like the idea to
> have pluggability for different compression algorithms for datums, but I
> am not sure if this location is a good place to add it. Maybe we want a
> modern algorithm other than pglz for different components across the
> system in core, and it's better to let users choose which to add more.
> The mapping between the index number and compression algorithm should be
> consistent for the entire life of the database, so it should be defined
> at initdb time. From a core maintainability perspective and the binary
> size of postgres, I don't think we want to put dozens of different
> algorithms into core in the future. And maybe someone will want to try a
> BSD-incompatible algorithm privately.

We've argued about this in the linked thread and I still think we should
add one algorithm now, and when that one is outdated - which probably
will be some time - replace it. Building enough infrastructure to make
this really pluggable is not likely enough to be beneficial to many.

There will be a newer version of the patch coming today or tomorrow, so
there's probably no point in looking at the one linked above before
that...

Greetings,

Andres Freund

--
Andres Freund                     http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
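The pluggability being debated above boils down to a fixed, initdb-stable mapping from a small method id to a compressor. A minimal sketch of what such a dispatch table could look like (the names are hypothetical and the "compressors" are identity-copy stand-ins, not real pglz or snappy):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical dispatch table for pluggable datum compression: a 2-bit
 * method id indexes a fixed table whose order must stay stable for the
 * life of the cluster (hence "defined at initdb time" above). */
typedef size_t (*compress_fn)(const uint8_t *src, size_t len, uint8_t *dst);

/* Stand-in compressor: copies input verbatim and reports its size. */
static size_t
copy_compress(const uint8_t *src, size_t len, uint8_t *dst)
{
	memcpy(dst, src, len);
	return len;
}

static const compress_fn compressors[4] = {
	copy_compress,			/* 00: pglz */
	copy_compress,			/* 01: snappy/lz4/whatever */
	NULL,					/* 10: not yet assigned */
	NULL,					/* 11: extension byte follows */
};
```

A lookup like `compressors[method]` would then replace a hardwired pglz call, with slot 11 reserved for the extra-byte escape described in the email above.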
Re: [HACKERS] extensible external toast tuple support & snappy prototype
On Wed, Jun 5, 2013 at 8:01 AM, Andres Freund wrote:
> Two patches attached:
> 1) add snappy to src/common. The integration needs some more work.
> 2) Combined patch that adds indirect tuple and snappy compression. Those
>    could be separated, but this is an experiment so far...

I took a look at them a little. This proposal is a super set of patch
#1127. https://commitfest.postgresql.org/action/patch_view?id=1127

- is not found in my mac. Commented it out, it builds clean.
- I don't see what the added is_inline flag means in
  toast_compress_datum(). Obviously not used, but I wonder what was the
  intention.
- By this,
  * compression method. We could just use the two bytes to store 3 other
  * compression methods but maybe we better don't paint ourselves in a
  * corner again.
  you mean two bits, not two bytes?

And the patch adds snappy-c in src/common. I definitely like the idea to
have pluggability for different compression algorithms for datums, but I
am not sure if this location is a good place to add it. Maybe we want a
modern algorithm other than pglz for different components across the
system in core, and it's better to let users choose which to add more.
The mapping between the index number and compression algorithm should be
consistent for the entire life of the database, so it should be defined
at initdb time. From a core maintainability perspective and the binary
size of postgres, I don't think we want to put dozens of different
algorithms into core in the future. And maybe someone will want to try a
BSD-incompatible algorithm privately.

I guess it's ok to use one more byte to indicate which compression is
used for the value. It is a compressed datum and we don't expect
something short anyway.

I don't see big problems in this patch other than how to manage the
pluggability, but it is a WIP patch anyway, so I'm going to mark it as
Returned with Feedback.

Thanks,

--
Hitoshi Harada
Re: [HACKERS] extensible external toast tuple support & snappy prototype
On 2013-06-07 12:16:48 -0400, Stephen Frost wrote:
> * Andres Freund (and...@2ndquadrant.com) wrote:
> > Currently on a little endian system the pglz header contains the length
> > in the first four bytes as:
> > [][][][xxdd]
> > Where dd are valid length bits for pglz and xx are the two bits which
> > are always zero since we only ever store up to 1GB. We can redefine 'xx'
> > to mean whatever we want but we cannot change its place.
>
> I'm not thrilled with the idea of using those 2 bits from the length
> integer. I understand the point of it and that we'd be able to have
> binary compatibility from it but is it necessary to track at the
> per-tuple level..? What about possibly supporting >1GB objects at some
> point (yes, I know there's a lot of other issues there, but still).
> We've also got complexity around the size of the length integer already.

I am open for different suggestions, but I don't know of any realistic
ones.

Note that the 1GB limitation is already pretty heavily baked into
varlenas themselves (which is not the length we are talking about here!)
since we use the two remaining bits to discern between the 4 types of
varlenas we have:
* short (1b)
* short, pointing to an on-disk tuple (1b_e)
* long (4b)
* long compressed (4b_c)

Since long compressed ones always need to be convertible to long ones we
can't ever have a 'rawsize' (which is what's proposed to be used for
this) that's larger than 1GB. So, breaking the 1GB limit will not be
stopped by this, but much much earlier. And it will require a break in
on-disk compatibility.

Greetings,

Andres Freund

--
Andres Freund                     http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
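The four varlena flavours Andres enumerates can be modelled as a classification over the first header byte. The constants and bit assignments below are illustrative only - the real discrimination lives in the VARATT_IS_* macros in postgres.h and differs between endiannesses:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model of the four varlena kinds listed above. On a
 * little-endian build, a set low bit marks a 1-byte header; the exact
 * values here are a sketch, not the actual postgres.h encoding. */
typedef enum {
	VL_LONG,				/* 4b:   uncompressed, 4-byte header */
	VL_LONG_COMPRESSED,		/* 4b_c: compressed, 4-byte header */
	VL_SHORT,				/* 1b:   short inline, 1-byte header */
	VL_SHORT_EXTERNAL		/* 1b_e: external toast pointer */
} vl_kind;

static vl_kind
classify(uint8_t first_byte)
{
	if ((first_byte & 0x01) == 0x01)
		/* 1-byte header; the all-but-tag-zero value marks "external" */
		return first_byte == 0x01 ? VL_SHORT_EXTERNAL : VL_SHORT;
	/* 4-byte header; second bit distinguishes compressed from plain */
	return (first_byte & 0x02) ? VL_LONG_COMPRESSED : VL_LONG;
}
```

The point of the email follows directly from this model: all four kinds are already spoken for by the two low bits, so the 1GB ceiling is baked in well before the spare 'xx' bits in the compressed-length word come into play.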
Re: [HACKERS] extensible external toast tuple support & snappy prototype
* Andres Freund (and...@2ndquadrant.com) wrote:
> Currently on a little endian system the pglz header contains the length
> in the first four bytes as:
> [][][][xxdd]
> Where dd are valid length bits for pglz and xx are the two bits which
> are always zero since we only ever store up to 1GB. We can redefine 'xx'
> to mean whatever we want but we cannot change its place.

I'm not thrilled with the idea of using those 2 bits from the length
integer. I understand the point of it and that we'd be able to have
binary compatibility from it but is it necessary to track at the
per-tuple level..? What about possibly supporting >1GB objects at some
point (yes, I know there's a lot of other issues there, but still).
We've also got complexity around the size of the length integer already.

Anyway, just not 100% sure that we really want to use these bits for
this.

Thanks,

Stephen
Re: [HACKERS] extensible external toast tuple support & snappy prototype
On 06/07/2013 05:38 PM, Andres Freund wrote:
> On 2013-06-07 17:27:28 +0200, Hannu Krosing wrote:
> > On 06/07/2013 04:54 PM, Andres Freund wrote:
> > > I mean, we don't necessarily need to make it configurable if we just add
> > > one canonical new "better" compression format. I am not sure that's
> > > sufficient since I can see usecases for 'very fast but not too well
> > > compressed' and 'very well compressed but slow', but I am personally not
> > > really interested in the second case, so ...
> >
> > As decompression is often still fast for slow-but-good compression,
> > the obvious use case for the 2nd is read-mostly data
>
> Well. Those algorithms still are up to a magnitude or so slower
> decompressing than something like snappy, lz4 or even pglz while the
> compression ratio usually is only like 50-80% improved... So you really
> need a good bit of compressible data (so the amount of storage actually
> hurts) that you don't read all that often (since you then would
> bottleneck on compression too often).
> That's just not something I run across too regularly.

While compression speed differs between algorithms, the cost may be more
than offset in favour of better compression if there is real I/O
involved, as exemplified here:
http://www.citusdata.com/blog/64-zfs-compression

--
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ
Re: [HACKERS] extensible external toast tuple support & snappy prototype
Andres Freund writes:
> On 2013-06-07 11:46:45 -0400, Tom Lane wrote:
> > IME, once we've changed it once, the odds that we'll want to change it
> > again go up drastically. I think if we're going to make a change here
> > we should leave room for future revisions.
>
> The above bit was just about how much control we give the user over the
> compression algorithm used for compressing new data. If we just add one
> method atm which we think is just about always better than pglz there's
> not much need to provide the tunables already.

Ah, ok, I thought you were talking about storage-format decisions, not
about whether to expose a tunable setting.

regards, tom lane
Re: [HACKERS] extensible external toast tuple support & snappy prototype
Andres Freund escribió:
> 2) Combined patch that adds indirect tuple and snappy compression. Those
>    could be separated, but this is an experiment so far...

Can we have a separate header for toast definitions? (i.e. split them
out of postgres.h)

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] extensible external toast tuple support & snappy prototype
On 2013-06-07 11:46:45 -0400, Tom Lane wrote:
> Andres Freund writes:
> > I mean, we don't necessarily need to make it configurable if we just add
> > one canonical new "better" compression format. I am not sure that's
> > sufficient since I can see usecases for 'very fast but not too well
> > compressed' and 'very well compressed but slow', but I am personally not
> > really interested in the second case, so ...
>
> IME, once we've changed it once, the odds that we'll want to change it
> again go up drastically. I think if we're going to make a change here
> we should leave room for future revisions.

The above bit was just about how much control we give the user over the
compression algorithm used for compressing new data. If we just add one
method atm which we think is just about always better than pglz there's
not much need to provide the tunables already.

I don't think there's any question over the fact that we should leave
room on the storage level to reasonably easily add new compression
algorithms without requiring on-disk changes.

Greetings,

Andres Freund

--
Andres Freund                     http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] extensible external toast tuple support & snappy prototype
Andres Freund writes:
> I mean, we don't necessarily need to make it configurable if we just add
> one canonical new "better" compression format. I am not sure that's
> sufficient since I can see usecases for 'very fast but not too well
> compressed' and 'very well compressed but slow', but I am personally not
> really interested in the second case, so ...

IME, once we've changed it once, the odds that we'll want to change it
again go up drastically. I think if we're going to make a change here
we should leave room for future revisions.

regards, tom lane
Re: [HACKERS] extensible external toast tuple support & snappy prototype
On 2013-06-07 17:27:28 +0200, Hannu Krosing wrote:
> On 06/07/2013 04:54 PM, Andres Freund wrote:
> > I mean, we don't necessarily need to make it configurable if we just add
> > one canonical new "better" compression format. I am not sure that's
> > sufficient since I can see usecases for 'very fast but not too well
> > compressed' and 'very well compressed but slow', but I am personally not
> > really interested in the second case, so ...
>
> As decompression is often still fast for slow-but-good compression,
> the obvious use case for the 2nd is read-mostly data

Well. Those algorithms still are up to a magnitude or so slower
decompressing than something like snappy, lz4 or even pglz while the
compression ratio usually is only like 50-80% improved... So you really
need a good bit of compressible data (so the amount of storage actually
hurts) that you don't read all that often (since you then would
bottleneck on compression too often).

That's just not something I run across too regularly.

Greetings,

Andres Freund

--
Andres Freund                     http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] extensible external toast tuple support & snappy prototype
On 06/07/2013 04:54 PM, Andres Freund wrote:
> I mean, we don't necessarily need to make it configurable if we just add
> one canonical new "better" compression format. I am not sure that's
> sufficient since I can see usecases for 'very fast but not too well
> compressed' and 'very well compressed but slow', but I am personally not
> really interested in the second case, so ...

As decompression is often still fast for slow-but-good compression,
the obvious use case for the 2nd is read-mostly data

--
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ
Re: [HACKERS] extensible external toast tuple support & snappy prototype
On 2013-06-07 10:44:24 -0400, Robert Haas wrote:
> On Fri, Jun 7, 2013 at 10:30 AM, Andres Freund wrote:
> > Turns out the benefits are imo big enough to make it worth pursuing
> > further.
>
> Yeah, those were nifty numbers.
>
> > The problem is that to discern from pglz on little endian the byte with
> > the two high bits unset is actually the fourth byte in a toast datum. So
> > we would need to store it in the 5th byte or invent some more
> > complicated encoding scheme.
> >
> > So I think we should just define '00' as pglz, '01' as xxx, '10' as yyy
> > and '11' as storing the schema in the next byte.
>
> Not totally following, but I'm fine with that.

Currently on a little endian system the pglz header contains the length
in the first four bytes as:
[][][][xxdd]
Where dd are valid length bits for pglz and xx are the two bits which
are always zero since we only ever store up to 1GB. We can redefine 'xx'
to mean whatever we want but we cannot change its place.

> >> > 3) Surely choosing the compression algorithm via GUC ala SET
> >> > toast_compression_algo = ... isn't the way to go. I'd say a storage
> >> > attribute is more appropriate?
> >>
> >> The way we do caching right now supposes that attoptions will be
> >> needed only occasionally. It might need to be revised if we're going
> >> to need it all the time. Or else we might need to use a dedicated
> >> pg_class column.
> >
> > Good point. It probably belongs right beside attstorage, seems to be
> > the most consistent choice anyway.
>
> Possibly, we could even store it in attstorage. We're really only
> using two bits of that byte right now, so just invent some more
> letters.

Hm. Possible, but I don't think that's worth it. There's a padding byte
before attinhcount anyway. Storing the preferred location in attstorage
(plain, preferred-internal, external, preferred-external) separately
from the compression seems to make sense to me.

> > Alternatively, if we only add one form of compression, we can just
> > always store in snappy/lz4/
>
> Not following.

I mean, we don't necessarily need to make it configurable if we just add
one canonical new "better" compression format. I am not sure that's
sufficient since I can see usecases for 'very fast but not too well
compressed' and 'very well compressed but slow', but I am personally not
really interested in the second case, so ...

Greetings,

Andres Freund

--
Andres Freund                     http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
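The '[][][][xxdd]' layout described above - a length capped below 1GB, leaving two always-zero high bits - can be sketched as plain bit packing. Helper names are hypothetical, and the '11' value would signal that the method id continues in an extra byte:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the '00 pglz / 01 / 10 / 11 = extension byte' idea: stash a
 * 2-bit method id in the two spare high bits of a 30-bit rawsize word.
 * Illustrative only; the real header layout lives in the toast code. */
enum { METHOD_PGLZ = 0, METHOD_SNAPPY = 1, METHOD_OTHER = 2, METHOD_EXT = 3 };

#define RAWSIZE_MASK 0x3FFFFFFFu	/* low 30 bits: rawsize < 1GB */

static uint32_t
pack(uint32_t rawsize, unsigned method)
{
	assert(rawsize <= RAWSIZE_MASK && method <= 3);
	return rawsize | ((uint32_t) method << 30);
}

static unsigned
unpack_method(uint32_t word)
{
	/* METHOD_EXT means: the actual method id follows in an extra byte */
	return word >> 30;
}

static uint32_t
unpack_rawsize(uint32_t word)
{
	return word & RAWSIZE_MASK;
}
```

Existing pglz datums pack as method 00 with unchanged length bits, which is what makes the scheme binary-compatible with already-written data.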
Re: [HACKERS] extensible external toast tuple support & snappy prototype
On Fri, Jun 7, 2013 at 10:30 AM, Andres Freund wrote:
> Turns out the benefits are imo big enough to make it worth pursuing
> further.

Yeah, those were nifty numbers.

> The problem is that to discern from pglz on little endian the byte with
> the two high bits unset is actually the fourth byte in a toast datum. So
> we would need to store it in the 5th byte or invent some more
> complicated encoding scheme.
>
> So I think we should just define '00' as pglz, '01' as xxx, '10' as yyy
> and '11' as storing the schema in the next byte.

Not totally following, but I'm fine with that.

>> > 3) Surely choosing the compression algorithm via GUC ala SET
>> > toast_compression_algo = ... isn't the way to go. I'd say a storage
>> > attribute is more appropriate?
>>
>> The way we do caching right now supposes that attoptions will be
>> needed only occasionally. It might need to be revised if we're going
>> to need it all the time. Or else we might need to use a dedicated
>> pg_class column.
>
> Good point. It probably belongs right beside attstorage, seems to be
> the most consistent choice anyway.

Possibly, we could even store it in attstorage. We're really only
using two bits of that byte right now, so just invent some more
letters.

> Alternatively, if we only add one form of compression, we can just
> always store in snappy/lz4/

Not following.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] extensible external toast tuple support & snappy prototype
On 2013-06-07 10:04:15 -0400, Robert Haas wrote:
> On Wed, Jun 5, 2013 at 11:01 AM, Andres Freund wrote:
> > On 2013-05-31 23:42:51 -0400, Robert Haas wrote:
> >> > This should allow for fairly easy development of a new compression
> >> > scheme for out-of-line toast tuples. It will *not* work for compressed
> >> > inline tuples (i.e. VARATT_4B_C). I am not convinced that that is a
> >> > problem or that if it is, that it cannot be solved separately.
> >>
> >> Seems pretty sensible to me. The patch is obviously WIP but the
> >> direction seems fine to me.
> >
> > So, I played a bit more with this, with an eye towards getting this into
> > a non WIP state, but: While I still think the method for providing
> > indirect external Datum support is fine, I don't think my sketch for
> > providing extensible compression is.
>
> I didn't really care about doing (and don't really want to do) both
> things in the same patch. I just didn't want the patch to shut the
> door to extensible compression in the future.

Oh. I don't want to actually commit it in the same patch either. But to
keep the road for extensible compression open we kinda need to know what
the way to do that is. Turns out it's an independent thing that doesn't
reuse any of the respective infrastructures.

I only went so far as to actually implement the compression because a)
my previous thoughts about how it could work were bogus b) it was fun.
Turns out the benefits are imo big enough to make it worth pursuing
further.

> > 2) Do we want to build infrastructure for more than 3 compression
> > algorithms? We could delay that decision till we add the 3rd.
>
> I think we should leave the door open, but I don't have a compelling
> desire to get too baroque for v1. Still, maybe if the first byte has
> a 1 in the high-bit, the next 7 bits should be defined as specifying a
> compression algorithm. 3 compression algorithms would probably last
> us a while; but 127 should last us, in effect, forever.

The problem is that to discern from pglz on little endian the byte with
the two high bits unset is actually the fourth byte in a toast datum. So
we would need to store it in the 5th byte or invent some more
complicated encoding scheme.

So I think we should just define '00' as pglz, '01' as xxx, '10' as yyy
and '11' as storing the schema in the next byte.

> > 3) Surely choosing the compression algorithm via GUC ala SET
> > toast_compression_algo = ... isn't the way to go. I'd say a storage
> > attribute is more appropriate?
>
> The way we do caching right now supposes that attoptions will be
> needed only occasionally. It might need to be revised if we're going
> to need it all the time. Or else we might need to use a dedicated
> pg_class column.

Good point. It probably belongs right beside attstorage, seems to be
the most consistent choice anyway.

Alternatively, if we only add one form of compression, we can just
always store in snappy/lz4/

Greetings,

Andres Freund

--
Andres Freund                     http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] extensible external toast tuple support & snappy prototype
On Wed, Jun 5, 2013 at 11:01 AM, Andres Freund wrote:
> On 2013-05-31 23:42:51 -0400, Robert Haas wrote:
>> > This should allow for fairly easy development of a new compression
>> > scheme for out-of-line toast tuples. It will *not* work for compressed
>> > inline tuples (i.e. VARATT_4B_C). I am not convinced that that is a
>> > problem or that if it is, that it cannot be solved separately.
>>
>> Seems pretty sensible to me. The patch is obviously WIP but the
>> direction seems fine to me.
>
> So, I played a bit more with this, with an eye towards getting this into
> a non WIP state, but: While I still think the method for providing
> indirect external Datum support is fine, I don't think my sketch for
> providing extensible compression is.

I didn't really care about doing (and don't really want to do) both
things in the same patch. I just didn't want the patch to shut the
door to extensible compression in the future.

> Important questions are:
> 1) Which algorithms do we want? I think snappy is a good candidate but I
> mostly chose it because it's BSD licenced, widely used, and has a C
> implementation with a useable API. LZ4 might be another interesting
> choice. Another slower algorithm with higher compression ratio
> would also be a good idea for many applications.

I have no opinion on this.

> 2) Do we want to build infrastructure for more than 3 compression
> algorithms? We could delay that decision till we add the 3rd.

I think we should leave the door open, but I don't have a compelling
desire to get too baroque for v1. Still, maybe if the first byte has
a 1 in the high-bit, the next 7 bits should be defined as specifying a
compression algorithm. 3 compression algorithms would probably last
us a while; but 127 should last us, in effect, forever.

> 3) Surely choosing the compression algorithm via GUC ala SET
> toast_compression_algo = ... isn't the way to go. I'd say a storage
> attribute is more appropriate?

The way we do caching right now supposes that attoptions will be
needed only occasionally. It might need to be revised if we're going
to need it all the time. Or else we might need to use a dedicated
pg_class column.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company