@JonathanGregory @AndersMS @davidhassell @oceandatalab (Sylvain) FYI. In pursuing #327, the following text was raised by @JonathanGregory (https://github.com/cf-convention/cf-conventions/issues/327#issuecomment-859397744). We will not pursue it in the course of #327, but it should nonetheless be captured for referencing and addressing separately. I've quoted it below.
> In the first paragraph of Sect 8 we distinguish three methods of reduction of dataset size. I would suggest minor clarifications:
>
> > There are three methods for reducing dataset size: packing, lossless compression, and lossy compression. By packing we mean altering the data in a way that reduces its precision **(but has no other effect on accuracy)**. By lossless compression we mean techniques that store the data more efficiently and result in no **loss of precision or accuracy**. By lossy compression we mean techniques that store the data more efficiently **and retain its precision** but result in some loss in accuracy.
>
> Then I think we could start a new paragraph with "Lossless compression only works in certain circumstances ...".
>
> By the way, isn't it the case that HDF supports per-variable gzipping? That wasn't available in the old netCDF data format for which this section was first written, so it's not mentioned, but perhaps it should be now.
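To make the packing vs. lossless distinction concrete, here is a minimal sketch in Python. It uses the CF `scale_factor`/`add_offset` packing formula (unpacked = packed * scale_factor + add_offset) for the packing case, and stdlib `zlib` as a stand-in for the per-variable deflate (gzip) compression that HDF5/netCDF-4 provides. The sample values and 16-bit width are illustrative choices, not anything mandated by the conventions:

```python
import struct
import zlib

# --- Packing: reduces precision, no other effect on accuracy ---
# Map the data range onto signed 16-bit integers using the CF
# scale_factor / add_offset convention:
#   unpacked = packed * scale_factor + add_offset
data = [273.15, 280.5, 291.27, 305.0]     # illustrative sample values
vmin, vmax = min(data), max(data)
nbits = 16
scale_factor = (vmax - vmin) / (2**nbits - 1)
add_offset = vmin + 2**(nbits - 1) * scale_factor

packed = [round((x - add_offset) / scale_factor) for x in data]   # int16 range
unpacked = [p * scale_factor + add_offset for p in packed]

# Precision is reduced, but each value is within half a quantum of the original.
assert all(abs(u - x) <= scale_factor / 2 + 1e-9 for u, x in zip(unpacked, data))

# --- Lossless compression: no loss of precision or accuracy ---
# Deflate the raw bytes and inflate them again; the round trip is exact.
raw = struct.pack(f"{len(data)}d", *data)
restored = struct.unpack(f"{len(data)}d", zlib.decompress(zlib.compress(raw)))
assert list(restored) == data
```

Packing here loses information (values are quantized to the 16-bit grid), while the deflate round trip reproduces the bytes exactly; that is precisely the precision/accuracy distinction drawn in the suggested wording above.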
