As long as we are renaming, we might consider adding spaces, so I would suggest
`Github Usage` or `github usage`.
--
Reply to this email directly or view it on GitHub:
I think it's good when CF files can be interpreted, ideally both by humans and
computers, and ideally unambiguously. That means it should be easy, for
example, to color in, for the five-pointed star posted by @davidhassell above,
which region should be used for averaging over this geometry.
There certainly are ways to interpret self-intersecting polygons in a
consistent manner. But is there a use-case for this in the realm of CF? For
self-intersecting lines, this seems clear (any kind of route, for instance,
from ships, planes, etc.), but for polygons, I struggle. Hence my
Thanks for the report, @joshmoore. I think this is the same issue that has been
discussed in #343 together with the solution in #344. Is this correct? Do those
address the issue satisfactorily for you?
Thanks for the heads-up, @JonathanGregory. I have updated #344 accordingly.
I have also addressed @ethanrd's suggestion of making the attribute name more
explicit.
Given the general support and the fact that no more comments seem to be
outstanding, I think we can start the clock and merge the
@zklaus pushed 1 commit.
66193b2e9804a2094053691a7658dbbfb1cff6ad Adopt better attribute name as
suggested by @ethanrd
I think we should see `ncdump` for what it is, that is, essentially a
third-party debugging tool. As such, its specific choice of output format,
regrettable as it may be, has no bearing on the CF conventions and imho poses
no challenge to standards.
Dear @JonathanGregory,
such a clash is called a `conflict` in git lingo and indeed does occur. But
conflicts are also among the most common problems in versioning, and helping to
resolve them is one of git's core abilities and part of its raison d'être.
So don't worry, this will be easy to
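To make this concrete, here is a self-contained sketch that creates and resolves such a conflict in a throwaway repository (all file names, branch names, and commit messages are made up for illustration):

```shell
set -e
# Work in a throwaway repository; nothing here touches real data
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email you@example.com
git config user.name "You"

echo "version 1.9" > version.adoc
git add version.adoc && git commit -qm "initial"
base=$(git rev-parse --abbrev-ref HEAD)   # default branch name varies

git checkout -qb feature
echo "version 1.10-feature" > version.adoc
git commit -qam "feature: bump version"

git checkout -q "$base"
echo "version 1.10" > version.adoc
git commit -qam "mainline: bump version"

# Both branches changed the same line, so the merge reports a conflict
git merge feature || true

# Resolve: keep the feature side (or edit the file by hand), then commit
git checkout --theirs version.adoc
git add version.adoc
git commit -qm "merge feature, resolving the conflict"
```

The resolution step is the only manual part; everything else git tracks for you.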
I don't think that's necessary. Let's just check that everything works as
expected. If it doesn't, we'll update the artifacts post-hoc.
Thanks, @JonathanGregory, your points make sense to me. Let's continue that
discussion in the other issue and move forward here with the corrected
attributes in place.
This looks to be moving in the right direction, thanks @castelao.
I am still a bit unclear on which provider should be chosen and how that choice
should be made.
Zenodo is well-known and documented on
Thanks, @erget. Before we merge, we should address the point raised by @ethanrd
in #343, namely the name of the second attribute.
Also, there is one caveat: The automatic "final" tagging is hard to test
because the real conditions only show up at the release, so at least for the
first release
Re workflow, that was exactly my thinking. I have added this now to #344.
Re removing the attribute from the examples, I am not so sure. I think we
should probably consider categorizing examples in the conventions as either
"full examples" or "simple examples"/"excerpts" and then rather add the
@zklaus pushed 2 commits.
b5f746651571405b02008cbddc5ea04659fcd47b Correct typos and trailing whitespace
in workflow
ef00c0dc1484f1404cd65dc59c549a2ae72edc84 Add automatic final versioning to
workflow
Perhaps my previous comment was a bit obtuse. With regards to @erget's and
@davidhassell's comments
> [Daniel
>
I have added a fancified version of the version handling. Let me know what you
think or if you want me to explain a bit more.
@zklaus pushed 1 commit.
ff0a539a242555b27cd6aa5018e6e9a0c65b1530 Fancify single sourced version
@zklaus pushed 1 commit.
1bae8aead25c12a9b0f7cd0af7cfd636a8ed1ca3 Add -draft suffix to version
For reference, in ESMValTool we use just the process that @erget described to
generate the changelog using [this
@zklaus commented on this pull request.
> @@ -0,0 +1 @@
+:current-version: 1.10
I had this locally at some point, but I removed it again because I felt that it
looked odd in the `:Conventions` attribute both in the examples, which would
read
```
// global attributes:
:Conventions =
I added a draft PR in #344 that addresses (2). If we decide to tackle (1)
separately, the list of changes in that PR should give a good overview of the
places that need changing.
This single sources the version number as discussed in #343. There are a few
open questions.
I split the version itself out into the new file `version.adoc`, which allows
us to share the same version between `cf-conventions.adoc` and
`conformance.adoc`.
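As a sketch, the shared file defines a single document attribute (the attribute name `current-version` is the one appearing in this PR; the exact wiring shown here is illustrative):

```
// version.adoc
:current-version: 1.10
```

and both `cf-conventions.adoc` and `conformance.adoc` would then pull it in and reference it:

```
include::version.adoc[]

Version {current-version}
```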
There was one small hiccup with the
See issue #343 for discussion of these changes.
# Release checklist
- [ ] Authors updated in `cf-conventions.adoc`?
- [ ] Next version in `cf-conventions.adoc` up to date? Versioning inspired by
This should be possible using a "replacement" as described in [Section 26.9 of
the asciidoc
userguide](https://asciidoc-py.github.io/userguide.html#X7).
@DikraK you are correct that this information cannot currently be encoded in
the cell methods. Your request is rather timely, since we are discussing
similar issues in cf-convention/discuss#131, but no consensus on how to add
this to the conventions has been reached yet.
If you want to put
@DikraK, I think you are not calculating statistics over 25 axes, but really
over only one: the axis of the ensemble. The problem is that this information
is not apparent in your current encoding. IMHO, the best way to add this
information would be to combine your current 25 variables
Thanks, @JonathanGregory. I only looked in this repository for open issues, so
I missed it.
And one more thing came up: when publishing on Zenodo, one *must* provide a
license, and to my own surprise I could not figure out which license the CF
conventions are published under. If this is an oversight on my part, please
help me out. If not, this is probably something that deserves its own
To illustrate a possible way of setting this up, I created an upload on the
Zenodo sandbox. It can be found
[here](https://sandbox.zenodo.org/record/932985#.YV7MHiVS_mE).
PS: Zenodo has [a
Sandbox](https://sandbox.zenodo.org/) available for experimentation. If there
are specific (or unspecific) open questions around what Zenodo can do or
I really like the approach laid out by @castelao.
A few points are probably worth mentioning/stressing:
DOIs for Versions
This is baked into Zenodo. If you have a look at [the Zenodo FAQ, Section "DOI
@zklaus commented on this pull request.
> We recommend that the unit **`year`** be used with caution. The Udunits
> package defines a **`year`** to be exactly 365.242198781 days (the interval
> between 2 successive passages of the sun through vernal equinox). __It is not
> a calendar
Note that not having explicit connectivity creates a (potentially) rather large
computational overhead for its reconstruction. This will be exacerbated by more
complicated grids in the future, such as time-dependent unstructured grids,
perhaps with regionally varying timesteps.
I think it
I second what @erget said. In terms of implementing "bugfix" versions like
1.6.1, 1.7.1, etc., the GitHub way of doing this would be to create a branch,
usually something like `v1.6.x`, from the commit in the history that is the
1.6 release, make the changes in that branch, and tag the
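A runnable sketch of that branching scheme, in a throwaway repository (version numbers, file names, and messages are illustrative):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email you@example.com
git config user.name "You"

# The 1.6 release on the main line of development
echo "1.6" > version.adoc
git add version.adoc && git commit -qm "release 1.6"
git tag -a v1.6 -m "CF conventions 1.6"

# Mainline development moves on towards 1.7
echo "1.7-draft" > version.adoc
git commit -qam "start 1.7 development"

# Bugfix release: branch off the tagged 1.6 release commit, fix, tag
git branch v1.6.x v1.6
git checkout -q v1.6.x
echo "1.6.1" > version.adoc
git commit -qam "backport fix"
git tag -a v1.6.1 -m "CF conventions 1.6.1"
```

The maintenance branch `v1.6.x` can receive further fixes and tags (1.6.2, ...) without ever touching the 1.7 line.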
> It seems like there is support for deprecating `gregorian`. When the
> Gregorian calendar is intended for dates after 1582, we could introduce a new
> calendar (e.g. `strict_gregorian`), or else use the existing
> `proleptic_gregorian`, which is the same as Gregorian for dates after 1582.
To
> * We should make use of the existing list, namely the conformance
> document, for the purposes being discussed here - I don't think we need a new
> list.
I agree with this.
> * We don't have to distinguish positive and negative categories, because
> they are logically related:
I agree with what @ethanrd and @erget said, namely that we have errata and what
I would call deprecations.
I think it is quite important to actually remove deprecations at some point,
preferably under a predictable policy, e.g. two versions after the initial
deprecation. The reason is that
Another option would be to declare `nsigma` deprecated. This way it would be
around for some time to come, but new data would avoid using it and it would be
clear that that is correct. In a future release, such backward-incompatible
changes could be bundled. For reference, I link here to [the
Yes, that might be a good way to encode the information. What I wanted to say
is this: I find it very plausible that in a national weather service a group
sits together and decides to code their station data using variable names like
`tas_station-name`, with a number of non-ASCII letters in the
One potential use case that has always come to my mind, without an actual
example at hand, is the native names of weather stations, say a temperature
time series from the UmeĆ„ station, where the variable name contains the
station name.
What makes this particularly interesting is that it seems to be
For what it's worth (probably not much), I approve.
This is great!
Would it be possible to upload the pdf and html individually, not in an archive
so as to facilitate easy inspection?
Maybe it would also be a good idea to upload a log (perhaps just the console
output)?
That way, we might more easily catch warnings that don't prevent a
Ah yes, I see what you mean, you are right: always speaking about UTF-8,
multi-byte here isn't referring to the possibility of having several bytes
encode one code point, but to actual code points that take more than one byte,
thus excluding the one-byte code points, which are exactly the first 128 code
points.
I agree and would go one small step further: UTF-8 is only an encoding, so we
should just say "Unicode" for strings. If we need to restrict that, say to
disallow a leading underscore or to reserve a separator character like the
space in attributes right now, we should do so at the character level.
I think there is some confusion here.
First, this whole regex stuff is only about the physical byte layout of the
netCDF classic file format. I would in principle suggest focusing entirely on
netCDF-4 files instead.
Second, I think CF should not concern itself with encodings and byte order
Speaking as someone who has been trying to make sense of very diverse CF files
with nothing but the CF conventions in hand, I have to say that the fact that
dimension coordinates can be identified by their name being the same as their
dimension is a good thing.
It is very hard to correctly identify, for
That would be great. It looks like standard UML class diagrams. Maybe you even
still have the source figures?
Also, the png figures don't scale very well. Any chance of svg, pdf, or eps
versions?
This
There seem to be some technical problems at this point: `appi.adoc` is not
included in `cf-conventions.adoc`, probably something like
```
:numbered!:
include::appi.adoc[]
```
should be added to the end of the list of appendices.
When I do that locally, the document doesn't compile to pdf
How are they represented in CF right now? As far as I know, only by 2-d
coordinates (which don't codify the iso-coordinate lines).
Units have to come from udunits (see [CF-1.8,
3.1](http://cfconventions.org/Data/cf-conventions/cf-conventions-1.8/cf-conventions.html#units)),
which will also provide the conversion factors. In this sense, this proposal
seems unnecessary.
@mwengren a good way to deal with this is [rewriting
history](https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History). In fact,
it is very rare to find a good PR without rewritten history, imho.
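For instance, squashing the last two work-in-progress commits into one clean commit can be done non-interactively with a soft reset; `git rebase -i` offers the full interactive version. A minimal sketch in a throwaway repository (all names and messages are made up):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email you@example.com
git config user.name "You"

echo "draft" > doc.txt
git add doc.txt && git commit -qm "good base commit"
echo "wip 1" >> doc.txt && git commit -qam "wip"
echo "wip 2" >> doc.txt && git commit -qam "fixup typo"

# Rewrite: fold the last two commits into one with a clean message.
# --soft keeps the working tree and index, so no content is lost.
git reset --soft HEAD~2
git commit -qm "add section on versioning"
```

After this, `git log` shows two tidy commits instead of a trail of "wip" noise, which is exactly what reviewers of a PR want to see.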
I agree that it would be good to have use cases.
@ngalbraith is also right that not everyone is writing their CF code based on
naked netCDF access. Indeed, I consider such an approach foolish, since CF is
by now far too rich to stand a serious chance of getting it right.
However, while using
I have zero Unidata authority, but I'd like to state the obvious: Unicode is
complicated.
This may already account for the somewhat vague formulation in the NUG if one
takes a look at [the list of whitespace characters in