Hi Jon,

Unfortunately I don't think there are easy answers to the issues you describe. My experience from the GIS world is that commercial software vendors advertise compliance with particular versions of standards, and that this compliance is validated against some kind of reference implementation (test software and/or data) maintained by the appropriate standards body. This certainly seems to be the way that the Open Geospatial Consortium approaches this conformance requirement. And this seems to be the approach suggested in Ethan's and John's recent responses, which I would happily echo.
However, the OGC and similar bodies have considerably more resources to bring to bear on the problem, resources the CF community doesn't appear to have at present. So do we adopt a similar methodology, but on a manageable scale? Or should we consider moving CF development and governance under the auspices of a body - like OGC (if they'd have us :-) - with deeper pockets?

It occurs to me, however, that part of the problem may be due to the fact that CF is primarily focussed on defining an information model, not a programming interface. And while software practitioners have plenty of experience of developing solutions that conform to well-defined interfaces (netCDF and OGC's WMS, WFS & WCS being fine examples), I'd contend that they have much less experience of developing solutions that test whether or not a particular dataset conforms to a particular information model. Or that a conforming dataset makes any sense in a given end-user context - which seems to be what we're asking of CF-compliant software. Personally, I don't believe that's a tractable problem.

Perhaps this interface-based approach is what CMOR and libCF are trying to do. Unfortunately I have little experience of either of these initiatives so I'll leave it to others to comment on them. But if we really want to get a handle on software compliance issues then, IMHO, we need to go down the defined-interface route. And the netCDF API itself should give us a good steer on how to go about this.

Cheers,
Phil

> > And I could read this file today using, say, ncdump and ncview. Which
> > clearly doesn't tell us much.
>
> This is a really important point. It would be very difficult, in the general case, to ascertain whether a certain piece of software actually interprets a certain CF attribute correctly. Conversely it is perhaps unreasonable to expect a piece of software to implement correctly every feature of a certain CF version.
> What a tool user really wants to know (I think) is, for a given NetCDF file, which attributes in the file are correctly interpreted by the tool. I can't think of a neat way to do this - perhaps tool developers could publish a list of attributes that they claim to be able to interpret for each version of the tool they produce? A given tool might then implement 100% of CF1.0 but 50% of CF1.2 for example. Then the CF community could maintain a list of tools that users could go to to find out which tools might be most suited to their purpose.
>
> An add-on to the CF compliance checker could be created that, having scanned a file for CF attributes, produces a list that says "Tool X understands all of the attributes in this file, but Tool Y only understands 7 out of 9".
>
> All this requires effort of course, but I think it's useful to consider what we really mean when we call for "CF compliance". How can we help users to judge which tools they should use and how can we help data providers to ensure that their data can be interpreted by a wide community?
>
> Jon
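For what it's worth, the matching step in Jon's suggested add-on seems straightforward once tools publish their attribute lists. Here is a minimal sketch in Python - all tool names, attribute sets, and the function name are purely illustrative, not real capability data, and the scanning of the file itself is assumed to have already happened:

```python
# Hypothetical sketch of the proposed compliance-checker add-on:
# compare the CF attributes found in a file against the attribute
# lists each tool claims to interpret, and report coverage.

# Attributes that the (assumed) file scanner found in a NetCDF file.
file_attributes = {
    "units", "standard_name", "long_name", "calendar", "bounds",
    "cell_methods", "grid_mapping", "missing_value", "coordinates",
}

# Attribute lists that tool developers might publish per release.
# These sets are made up for illustration only.
tool_capabilities = {
    "Tool X": {
        "units", "standard_name", "long_name", "calendar", "bounds",
        "cell_methods", "grid_mapping", "missing_value", "coordinates",
    },
    "Tool Y": {
        "units", "standard_name", "long_name", "calendar",
        "missing_value", "coordinates", "bounds",
    },
}

def coverage_report(file_attrs, capabilities):
    """For each tool, count how many of the file's attributes it claims
    to understand, and list the ones it does not."""
    report = {}
    for tool, understood in capabilities.items():
        matched = file_attrs & understood
        report[tool] = (len(matched), len(file_attrs),
                        sorted(file_attrs - understood))
    return report

for tool, (n, total, missing) in coverage_report(
        file_attributes, tool_capabilities).items():
    note = f" (does not understand: {', '.join(missing)})" if missing else ""
    print(f"{tool} understands {n} out of {total} attributes{note}")
```

With the made-up lists above this prints exactly the kind of summary Jon describes ("Tool Y understands 7 out of 9..."). The hard part, of course, is not this bookkeeping but getting developers to publish honest, versioned attribute lists in the first place.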
_______________________________________________ CF-metadata mailing list [email protected] http://mailman.cgd.ucar.edu/mailman/listinfo/cf-metadata
