Congratulations and thanks to all who contributed to this successful piece of
work.
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
@JonathanGregory @AndersMS et al., chanting is all done and the merge is
complete. Thanks all for your many varied contributions - this was a lot of
work on all sides and my hope is that it proves useful to both data producers
and consumers moving forward!
Closed #327 via #326.
Dear @AndersMS @erget et al.
I would be pleased to merge the pull request and close this issue, but I see
that the PR has conflicts which have to be resolved. I expect there is some
GitHub incantation which you can pronounce to resolve them.
Best wishes
Jonathan
Dear @AndersMS
Thanks for the clarification. That's fine. The proposal will be approved this
Friday 24th if no further concern is raised.
Best wishes
Jonathan
Dear @JonathanGregory ,
We have just discussed the matter of the cell bounds interpolation and the
question you [raised
Dear @JonathanGregory,
Once again, thank you very much for your thorough review and valuable comments,
which significantly improved the proposal.
Cheers
Anders
Dear @JonathanGregory ,
Regarding the interpolation of bounds, [you
asked](https://github.com/cf-convention/cf-conventions/issues/327#issuecomment-886687324):
> Are
Dear @AndersMS @davidhassell @erget @oceandatalab and collaborators
Thanks for the enormous amount of hard and thorough work you have put into
this, and for answering all my questions and comments. I have no more concerns.
Looking through the rendered PDF of App J, I see boxes, probably
Dear @JonathanGregory et al.,
Due to the heroic contributions primarily of @AndersMS and @davidhassell as
well as the expert review of @oceandatalab and friends we can present to you
the now-finalised version of the pull request associated with this issue.
To see all points listed and
Dear @JonathanGregory, @AndersMS, and all,
> Conformance
>
> For "Each tie_point_variable token specifies a tie point variable that must
> exist in the file, and each interpolation_variable token specifies an
> interpolation variable that must exist in the file," I think all you can say
> is
Dear @AndersMS
Thanks for your detailed replies. I think there are only two outstanding points
in those you have answered.
**18**: Now I understand what you mean, thanks. To make this clearer to myself,
I would say something like this: Bounds interpolation uses the same tie point
index
Dear @JonathanGregory,
Just to let you know that I just updated my reply to **Reply to
Comment/Proposed Change 23**
Dear All,
Here are the links to the easy-to-read versions including all the above changes:
- [Chapter
Dear @JonathanGregory ,
Thank you for your rich set of comments and suggestions. I have provided
replies below, in the same format we used for the first set of comments.
Several of the replies I have already implemented in the document and indicated
the corresponding commit. For others, the
Dear all
@AndersMS and colleagues have proposed a large addition to Chapter 8 and an
accompanying new appendix to the CF convention, defining methods for storing
subsampled coordinate variables and the descriptions of the interpolation
methods that should be used to reconstruct the entire
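[Editor's note: for readers new to the scheme under discussion, here is a minimal sketch of what reconstitution of a subsampled coordinate might look like for one dimension, assuming simple linear interpolation. The variable names (`tie_points`, `tie_point_indices`) are illustrative only and are not taken from the proposal text.]

```python
import numpy as np

def reconstitute_linear(tie_points, tie_point_indices, full_size):
    """Linearly interpolate tie points back onto the full dimension."""
    full_indices = np.arange(full_size)
    # np.interp performs piecewise-linear interpolation between tie points.
    return np.interp(full_indices, tie_point_indices, tie_points)

# Example: a coordinate of size 9 subsampled at indices 0, 4 and 8.
original = np.linspace(10.0, 50.0, 9)   # the full-resolution coordinate
indices = np.array([0, 4, 8])           # tie point indices (stored in the file)
tie_points = original[indices]          # the stored (compressed) values
reconstituted = reconstitute_linear(tie_points, indices, 9)
```

Because this example coordinate is exactly linear, the reconstituted values match the originals; for real curvilinear coordinates the proposal's richer interpolation methods apply.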
Dear @AndersMS and colleagues
Thanks again for the new version. I find it very clear and comprehensive. I
have a few comments.
## Chapter 8
"Tie point mapping attribute" mentions "target dimension", which is not a
phrase used elsewhere. Should this be "interpolated dimension"?
You say, "For
Great, thanks, @AndersMS. I am still learning about GitHub. I was using the
Diff, which doesn't show the diagrams, rather than Viewing the file, which
works fine. Jonathan
Dear @JonathanGregory ,
I am still a bit new to documents on GitHub, but these two links do the job
in my browser:
- [Chapter
Dear @AndersMS et al.
Thanks for the new version. Can you tell me where to find versions of Ch 8 and
App J with the figures in place? That would make it easier to follow.
I've just read the text of Ch 8, which I found much clearer than before. I
don't recall reading about bounds last time. Is
Dear @JonathanGregory,
Appendix J is now ready for your review.
The only remaining open issue is now that we will do one more iteration on the
section on Computational Precision for Chapter 8 - we will publish it here
within the next days.
Best regards,
Anders
Dear All,
Just to let you know that as agreed during the discussion of the new
"Interpolation of Cell Boundaries" section (f3de508) I have added the
following sentence in the "Interpolation Parameters"
section (2ce5d66):
> Interpolation parameters are not permitted to contain absolute
Dear @AndersMS
Thanks for the update and your hard work on this. I will read the section again
in conjunction with Appendix J, once you announce that the latter is ready.
Best wishes
Jonathan
Dear team,
Following our meeting this afternoon, I propose the following new paragraph at
the end of the section "Tie Points and Interpolation Subareas":
> Tie point coordinate variables for both coordinate and auxiliary coordinate
> variables must be defined as a numeric data type and are not
> Maybe we could append a new item at the end of "Coordinate Compression Steps"
> in Appendix J recommending that data producers check the positional error by
> comparing the reconstructed coordinates against the original data, and then
> provide as many details as possible regarding the
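[Editor's note: the producer-side check suggested above could be sketched as follows; this is an illustrative example, not text from the proposal, and the function name is hypothetical.]

```python
import numpy as np

def max_positional_error(original, reconstituted):
    """Worst-case absolute difference between original and reconstituted coordinates."""
    return float(np.max(np.abs(np.asarray(original) - np.asarray(reconstituted))))

# Example: a quadratic coordinate reconstituted by linear interpolation,
# where the subsampling introduces a small, quantifiable positional error.
x = np.arange(9, dtype=np.float64)
original = x ** 2
indices = np.array([0, 4, 8])                        # tie point indices
reconstituted = np.interp(x, indices, original[indices])
err = max_positional_error(original, reconstituted)  # worst-case error, here 4.0
```

A data producer could report such a figure alongside the compressed coordinates, as the comment above recommends.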
We may be solving a problem here before it arises. From this arises the danger
that we'll solve a problem that won't arise, or that we'll solve it in a way
that's not as useful as it could be!
It seems that computational precision is neither sufficient to describe the
actual target, which is
Dear All,
Regarding the wording of the section on computational precision attribute, I
have reservations with respect to the direction it has taken and I suggest we
discuss the matter during our meeting this afternoon.
It is essential to the value and usability of the _Lossy Compression
Hi @davidhassell and @oceandatalab,
I also support the `computational precision` paragraph by @davidhassell
presented here
Hi @davidhassell,
I am in favor of your version of the "computational precision" paragraph: it
conveys all the required information while remaining concise and yet clearly
warns users about the limited scope of the `computational_precision` attribute.
Dear All,
As proposed
[above](https://github.com/cf-convention/cf-conventions/issues/327#issuecomment-872151596)
I will go ahead and change all occurrences and forms
Hi all,
Sylvain's descriptions and rationale are very good, I think. I am wondering,
however, if we are making overly bold claims about accuracy when we have no
control over the interpolation method's implementation. A user's technique
may differ from the creator's (that's OK), but if one
Thank you for the comments @AndersMS and @erget.
I like the concise version too, I would just keep my version of the "As an
example ..." paragraph even if it is more verbose because it states exactly
what the attribute means, hopefully leaving no room for misinterpretation.
The "[...] using
@oceandatalab (Sylvain) & @AndersMS - I am in favour of the shorter text; in
fact, perhaps one could combine these 3 paragraphs into 1:
> The accuracy of the reconstituted coordinates will mainly depend on the
> degree of subsampling, the choice of interpolation method and the choice of
> the
Dear Sylvain (@oceandatalab)
Thank you very much for your proposed wording of the Computational Precision
text, which I think is a sound way to formulate the meaning and usage of the
`computational_precision` attribute.
I like the detailed rationale you have provided and support having the
@AndersMS: yes I think replacing "sample/sampled" with "subsample/subsampled"
would make the text more consistent.
Hi,
Here is a new take on the computational precision paragraph:
8.3.8 Computational Precision
The accuracy of the reconstituted coordinates will depend on the degree of
subsampling, the choice of interpolation method and the choice of the
floating-point arithmetic precision used in the
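[Editor's note: a small sketch of why the arithmetic precision matters, assuming linear interpolation between two tie points; the function and variable names are illustrative, not from the proposal.]

```python
import numpy as np

def interpolate(dtype, tie_a, tie_b, n):
    """Linear interpolation between two tie points, carried out in the given precision."""
    a = dtype(tie_a)
    b = dtype(tie_b)
    s = np.linspace(dtype(0), dtype(1), n, dtype=dtype)  # interpolation parameter
    return a + s * (b - a)

# The same reconstitution in 32-bit and 64-bit arithmetic:
lon32 = interpolate(np.float32, 100.123456, 100.654321, 1000)
lon64 = interpolate(np.float64, 100.123456, 100.654321, 1000)
# Nonzero but tiny, illustrating that the stated computational precision
# bounds reproducibility of the reconstitution, not the accuracy of the data.
max_diff = float(np.max(np.abs(lon64 - lon32.astype(np.float64))))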
Dear All,
Considering that we have now renamed the term _tie point interpolation
dimension_ to _subsampled dimension_, should we possibly change the title
**Lossy Compression by Coordinate Sampling**
to
**Lossy Compression by Coordinate Subsampling**
and replace the occurrences of
Dear @AndersMS, Daniel @erget et al.,
> Concerning terminology, following discussion in the group, these terms seem
> good candidates:
> At tie-point level: "subsampled dimension", "non-interpolated dimension"
> At reconstituted level: "interpolated dimension", "non-interpolated dimension"
# Terminology issues
Dear @JonathanGregory et al. (@AndersMS @davidhassell @oceandatalab @ajelenak)
Concerning terminology, following discussion in the group, these terms seem
good candidates:
- At tie-point level: "subsampled dimension", "non-interpolated dimension"
- At reconstituted level:
Dear @JonathanGregory
That's an interesting suggestion, thank you. We will discuss it in the group
tomorrow.
Best regards,
Anders
Dear @AndersMS
In your proposed change 10, you used the word "uncompressed", and "compression"
is in the title of this proposal. I think it would be clear to speak of a
"compressed dimension" of the tie point variable corresponding to an
"uncompressed dimension" of the data variable, or
Dear @JonathanGregory
Dear Jonathan,
Thank you for the feedback.
- yes, we had a sentence saying that the size of a tie point interpolation
dimension must be less than or equal to the size of the corresponding
interpolated dimension. I actually deleted it, since it is a consequence of
other
Dear @AndersMS and colleagues
Thanks very much for taking my comments so seriously and for the modifications
and explanations. I agree with all these improvements, with two reservations:
* Do you somewhere state that the size of a tie point interpolated dimension
must be less than or equal to
I have removed the paragraph "The same interpolation variable may be multiply
mapped" as proposed
Hi Anders,
> I believe the following paragraph from our chapter 8 is no longer relevant
I do agree.
David
Dear All,
I believe the following paragraph from our chapter 8 is no longer relevant,
after we have moved all the dimension related attributes from the data variable
to the interpolation variable.
The tie point variables `lat` and `lon` spanning dimension `tp_dimension1` and
tie point
Hi again @JonathanGregory
Just to add the figures have not yet been updated, I think we will do this when
all text changes have been agreed.
Anders
Dear @JonathanGregory
We have progressed with preparing the replies to your proposals. Although there
are still a couple of open points, we thought it would be useful to share what
we already have.
We have numbered your proposals as Proposed Changes 1-16 and treated each of
these separately
Hi @taylor13,
Your point is valid. I guess there would be two alternative solutions:
1. We remove _'or exceed'_ from the sentence _'For the coordinate
reconstitution process, the floating-point arithmetic precision should match or
exceed the precision specified by computational_precision
Editorial suggestion:
In the statement,
```
To ensure that the results of the coordinate reconstitution process are
reproducible and of
predictable accuracy, the creator of the compressed dataset may specify the
floating-point
arithmetic precision to be used in the interpolation method
```
I agree. This specification of precision is good.
That looks good to me, Anders. The word _computation_ is good.
Good idea David.
Should we perhaps use _computation_ instead of _calculation_ to match the
attribute name? Here I have updated the first two paragraphs and added an
example:
**8.3.8 Computational Precision**
"The accuracy of the reconstituted coordinates will depend on the degree of
Thank you, Anders. I am very happy with this.
A minor suggestion - perhaps change:
_"...may specify the floating-point arithmetic precision by setting ..."_
to
_... may specify the floating-point arithmetic precision to be used in the
interpolation calculations by setting ..._
just to be extra
Dear All,
Following a discussion yesterday in the team behind the proposal, we propose
that the `computational_precision` attribute be optional. Here is the proposed
text, which now has a reference to [IEEE Std 754]. Feel free to comment.
Anders
**8.3.8 Computational Precision**
The accuracy
Dear @JonathanGregory
Thank you very much for your rich and detailed comments and suggestions, much
appreciated.
The team behind the proposal met today and discussed all the points you raised.
We have prepared or are in the process of preparing replies to each of the
points. However, before
Dear all
I've studied the text of proposed changes to Sect 8, as someone not at all
involved in writing it or using these kinds of technique. (It's easier to read
the files in [Daniel's
I have a preference for "optional" because I suspect in most cases 32-bit will
be sufficient and this would relieve data writers from including this
attribute. There may be good reasons for making it mandatory; what are they?
Not sure about this, but I think "should" rather than "shall" is
Hi @taylor13 and @davidhassell,
Regarding the `computational_precision` attribute, it appears that we currently
have two proposals: Either an optional attribute with a default value or a
mandatory attribute.
I have written two versions of the new section 8.3.8, one for each of the two
Wouldn't the statement be correct as is (perhaps rewritten slightly; see
below), if we indicated that if the computational_precision attribute is *not*
specified, a default precision of "32" should be assumed? I would think
that almost always the default precision would suffice, so for most
Hi David,
Yes, I would be happy to update the PR. However, I still have one concern
regarding the `computational_precision` attribute.
In the introduction to _Lossy Compression by Coordinate Sampling_ in chapter 8,
I am planning to change the last sentence from
> The creator of the compressed
Hi Anders - thanks, it sounds like we're currently in agreement - do you want
to update the PR?
Hi David,
Fine, I take your advice regarding not having a default value. That is probably
also simpler - one rule less.
Anders
Hi Anders,
> "The floating-point arithmetic precision should match or exceed the precision
> specified by the computational_precision attribute. The allowed values of
> the computational_precision attribute are:
>
> (table)
"32": 32-bit floating-point arithmetic
"64": 64-bit floating-point arithmetic
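[Editor's note: the "match or exceed" wording quoted above could be implemented on the reader side roughly as follows; this helper is hypothetical and not part of the proposal.]

```python
import numpy as np

# The two values allowed by the quoted table, mapped to NumPy dtypes.
ALLOWED = {"32": np.float32, "64": np.float64}

def dtype_for(computational_precision, default="32"):
    """Return a dtype matching the attribute value; fall back to the default
    when the attribute is absent or unrecognised."""
    value = computational_precision if computational_precision in ALLOWED else default
    return ALLOWED[value]
```

For example, `dtype_for("64")` selects 64-bit arithmetic, while a missing attribute (`dtype_for(None)`) falls back to 32-bit under the default discussed later in the thread.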
@taylor13 by the way I'm still on the prowl for a moderator for this
discussion. As I see you've taken an interest, would you be willing to take on
that role? I'd be able to do it as well, but as I've been involved in this
proposal for quite some time it would be nice to have a fresh set of
Leaving out "base-2" is fine. Shortening the description further as you suggest
would also be fine with me.
I am wondering if we could change the wording to:
"The floating-point arithmetic precision should match or exceed the precision
specified by the `computational_precision` attribute. The
looks good to me. Can we omit "base-2" from the descriptions, or is that
essential? Might even reduce description to, for example:
```
"32": 32-bit floating-point arithmetic
```
Hi @taylor13 and @davidhassell,
I am not fully up to date on the data types, but following the links that David
sent, it appears that decimal64 is a base-10 floating-point number
representation that _is intended for applications where it is necessary to
emulate decimal rounding exactly, such
I don't understand the difference between decimal64 and binary64 or what they
precisely mean. If these terms specify things beyond precision, it's probably
not appropriate to use them here, so I would support defining our own
vocabulary, which would not confuse precision with anything else.
Thanks, @taylor13 and @AndersMS,
I, too, would favour A (_Using the scheme proposed above, requiring the data
creator to set the computational_precision accordingly._).
I'm starting to think that we need to be clear about `"decimal64"` (or 32,
128, etc.). I'm fairly sure that we only want
Thank you @taylor13 for the proposals and @davidhassell for the implementation
details.
I fully agree with your points 1, 2 and 3.
There is possibly one situation that might need attention. If the coordinates
subject to compression are stored in decimal64, typically we would require the
yes, ``calculational_precision`` was a mistake; I prefer
``computational_precision``. Also I'd be happy with not referring to an
external standard, and for now, just suggesting that two values, "decimal32"
and "decimal64", are supported, unless someone thinks others are needed at this
time.
Hi @taylor13,
1: I agree that higher precisions should be allowed. A modified description
(which could do with some rewording, but the intent is clear for now, I hope):
* By default, the user may use any precision they like for the interpolation
calculations. If the `computational_precision`
Thanks @AndersMS for the care taken to address my concern, and thanks
@davidhassell for the proposed revision. A few minor comments:
1. I wonder if users could be given the freedom to do their interpolation at a
_higher_ precision than specified by ``interpolation_precision``. I would hate
For convenience, here is the proposal for specifying the precision to be used
for the interpolation calculations (slightly robustified):
* By default, the user may use any precision they like for the interpolation
calculations, but if the `interpolation_precision` attribute has been set to a
Hi @taylor13
Thank you very much for your
[comments](https://github.com/cf-convention/discuss/issues/37#issuecomment-832780564).
We did have a flaw or a weakness in the
Hello @taylor13, @AndersMS,
It might be better to continue the conversation over at #37 on the precision of
interpolation calculations (the comment thread starting at
# Title
Lossy Compression by Coordinate Sampling
# Moderator
@user
# Moderator Status Review [last updated: YYYY-MM-DD]
Brief comment on current status, update periodically
# Requirement Summary
The spatiotemporal, spectral, and thematic resolutions of Earth science data are
increasing