# Re: [CF-metadata] Feedback requested on proposed CF Simple Geometries

Dear Chris,

> > If the regions were irregular polygons in latitude and longitude, nv
> > would be the number of vertices and the lat and lon bounds would trace
> > the outline of the polygon e.g. nv=3, lat=0,90,0 and lon=0,0,90
> > describes the eighth of the sphere which is bounded by the meridians at
> > 0E and 90E and the Equator. I think, therefore, we do not need an
> > additional convention for points or polygonal regions.
>
> This seems fine for this simple example, but burying a bunch of coordinates
> of a complex polygon in a text string in an attribute is really not a good
> idea -- the coordinates of a polygon should be in the array data one way or
> another, rather than having to parse out attribute strings.
To avoid confusion:

I didn't suggest parsing attribute strings. The same numbers that Ben would put
in his x and y auxiliary coordinate variables for a single polygon can appear
in coordinate bounds variables according to the existing convention.
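
To make that concrete, the eighth-of-sphere example stores its vertices as ordinary bounds arrays; a minimal Python sketch of the data layout (variable names are illustrative, not mandated by CF):

```python
# One cell whose lat/lon bounds trace the polygon outline (nv = 3).
# The same numbers that would sit in x/y auxiliary coordinate variables
# for a single polygon appear here as coordinate bounds, shape (cell, nv).
lat_bnds = [[0.0, 90.0, 0.0]]
lon_bnds = [[0.0, 0.0, 90.0]]

# Vertices (0N,0E), (90N,0E), (0N,90E): the eighth of the sphere bounded
# by the meridians at 0E and 90E and the Equator.
for lat, lon in zip(lat_bnds[0], lon_bnds[0]):
    print(f"vertex at {lat}N, {lon}E")
```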

> > * I suspect that geometries of this kind can be described by the ugrid
> > convention http://ugrid-conventions.github.io/ugrid-conventions, which is
> > compliant with CF. Their purpose is to describe a set of connected points,
> > edges or faces at which values are given,
>
> I'm not so sure -- UGRID is about defining a bunch of polygons that all
> share vertices, and are all of the same order (usually all triangles, or
> quads, or maybe hexes). If they are a mixture, you still store the full set
> (say, six vertices), while marking some as unused. But it's not that well
> set up for a bunch of polygons of different order.
>
> Not too bad if there are only one or two complex polygons, but it would be
> a bit weird -- you'd have vertices and boundaries, but no faces. And you'd
> lose the order of the vertices (though that could probably be added to the
> UGRID standard)
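
To illustrate the padding Chris describes: the connectivity array is sized for the largest face, and a fill value marks the unused slots. A Python sketch of that layout (not UGRID file syntax):

```python
FILL = -1  # fill value marking unused connectivity slots

# Fixed-width face-node connectivity, sized for the largest face: a
# triangle stored next to a pentagon leaves two slots unused.
face_nodes = [
    [0, 1, 2, FILL, FILL],   # triangle: 3 of 5 slots used
    [1, 2, 3, 4, 5],         # pentagon: all 5 slots used
]

# Per-face vertex counts are recovered by skipping the fill value:
counts = [sum(1 for v in face if v != FILL) for face in face_nodes]
print(counts)  # [3, 5]
```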

OK. I didn't investigate this, but it would be good to know about it. If
ugrid can do something like this, but not all of it, maybe ugrid could be
extended. If ugrid seems too complicated for these cases, maybe a "light"
version of ugrid could be proposed for them. I think we should avoid having
two partially overlapping conventions.

> > * So far CF does not say anything about the use of netCDF-4 features
> > (i.e. not the classic model). We have often discussed allowing them, but
> > the general argument is also made that there has to be a compelling case
> > for providing a new way to do something which can already be done. (Steve
> > Hankin often made this argument, but since he's mostly retired I'll make
> > it now in his name :-)
> >
>
> Maybe it's time to embrace netCDF-4? It's been a while! Though maybe for CF
> 2.* -- any movement on that?

I think, as we generally do, that we should adopt netCDF-4 features if there
is a definite need to do so. I mean something you can't do with an existing
mechanism, or which is done so much more easily with a new mechanism that it
justifies the extra effort of requiring alternatives to be programmed in
software. I'm not arguing against it in general, but I think it has to be
argued for each specific need within the convention.

CF2 is not well-defined. I have to admit to being nervous about that. I am
very much opposed to an idea of "starting all over again" and maintaining
two conventions in parallel (since old data would continue to exist for a long
time and so the old CF would have to be supported), and I also think
backwards-incompatibility has to be strongly justified. So I favour
step-by-step evolution.
Another idea we've discussed, which I'm comfortable with, is of defining
"strict" compliance to the convention, which a data-writer could optionally
adhere to. This could exclude older features we wanted to deprecate. However,
this is really not the subject of the discussion - it's another thread.

> I think the ragged array option is fine -- though I haven't looked at vlen
> arrays enough to know if they offer a compelling alternative. One issue is
> that the programming environments that we use to work with the data may not
> have an equivalent of vlen arrays.
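
(For reference, the contiguous ragged layout needs only a per-geometry count plus flat coordinate arrays, so ordinary slicing suffices in any language; a Python sketch with illustrative names:)

```python
# Contiguous ragged representation: a per-geometry vertex count plus one
# flat coordinate array, instead of a netCDF-4 vlen type. Unpacking needs
# nothing beyond ordinary slicing, so any language binding can handle it.
node_count = [3, 4]                           # vertices per geometry
x = [0.0, 0.0, 90.0, 10.0, 20.0, 20.0, 10.0]  # flat x values (illustrative)

geoms = []
start = 0
for n in node_count:
    geoms.append(x[start:start + n])
    start += n

print(geoms)  # [[0.0, 0.0, 90.0], [10.0, 20.0, 20.0, 10.0]]
```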

That's a good point, and a reason why we have to be cautious in general about