The killer move would be to do something I advocated for years, unsuccessfully: *separate SNOMED technology from content*, and allow them to be independently licensed and used. Here, 'technology' means the representation (RF2, for example), open source programming libraries for working with ref-sets, and specifications and implementations of e.g. the constraint language, URIs and so on.
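
To make the separation concrete: an RF2 release file is just a tab-separated table with a fixed header, and a reader for it need know nothing about whose content it carries. A minimal sketch in Python (the file name at the end is illustrative only):

    import csv

    # RF2 concept files are tab-separated, with five fixed columns:
    # id, effectiveTime, active, moduleId, definitionStatusId
    def read_rf2_concepts(path):
        with open(path, newline='', encoding='utf-8') as f:
            for row in csv.DictReader(f, delimiter='\t'):
                if row['active'] == '1':          # '1' = active in this snapshot
                    yield row['id'], row['moduleId']

    # The same reader works whether the file carries the international core
    # or a national extension - the technology is content-neutral.
    # e.g. read_rf2_concepts('sct2_Concept_Snapshot_INT_20180131.txt')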

It should be possible for a country (the one I am most familiar with w.r.t. terminology today is Brazil) to create an empty 'SNOMED container' of its own, and put its existing terminologies in there - typically procedure lists, drug codes, lab codes, device & prosthesis codes, packages (chargeable coarse-grained packages like childbirth that you get on a health plan) and so on. There are usually < 20k or even < 10k such codes for most countries (the UK and US would be exceptions), not counting lab analyte codes (but even there, 2,000 or so codes would take care of most results). The common situation is that nearly every country has its own version of these things, and they are far smaller than SNOMED. Now, SNOMED's version is usually better for /some/ of that content, but in some cases /it is missing concepts/.

The ability to easily create an empty SNOMED repository, fill it with national vocabularies, have it automatically generate non-clashing concept codes (i.e. not clashing with other countries, or the core) and mappings, and then serve it from a standard terminology service such as CTS2 would, in my view, have revolutionised things. This pathway has never been obviously available, however, and that has been a real blockage. The error was not understanding that the starting point for most countries isn't the international core, it's their own vocabularies.
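
Non-clashing codes are in fact already catered for by the SCTID scheme itself: an extension identifier embeds a 7-digit namespace issued to a country or organisation, a 2-digit partition code ('10' for extension concepts) and a Verhoeff check digit. A sketch of generating such identifiers (the namespace used at the end is made up):

    # Verhoeff check digit, as required by the SCTID specification
    _d = [[0,1,2,3,4,5,6,7,8,9],[1,2,3,4,0,6,7,8,9,5],[2,3,4,0,1,7,8,9,5,6],
          [3,4,0,1,2,8,9,5,6,7],[4,0,1,2,3,9,5,6,7,8],[5,9,8,7,6,0,4,3,2,1],
          [6,5,9,8,7,1,0,4,3,2],[7,6,5,9,8,2,1,0,4,3],[8,7,6,5,9,3,2,1,0,4],
          [9,8,7,6,5,4,3,2,1,0]]
    _p = [[0,1,2,3,4,5,6,7,8,9],[1,5,7,6,2,8,3,0,9,4],[5,8,0,3,7,9,6,1,4,2],
          [8,9,1,6,0,4,3,5,2,7],[9,4,5,3,1,2,6,8,7,0],[4,2,8,6,5,7,3,9,0,1],
          [2,7,9,3,8,0,6,4,1,5],[7,0,4,6,9,1,3,5,8,2]]
    _inv = [0,4,3,2,1,5,6,7,8,9]

    def verhoeff_digit(num: str) -> str:
        c = 0
        for i, ch in enumerate(reversed(num)):
            c = _d[c][_p[(i + 1) % 8][int(ch)]]
        return str(_inv[c])

    def extension_concept_id(item: int, namespace: str) -> str:
        # extension SCTID = item id + 7-digit namespace + partition '10' + check digit
        body = f"{item}{namespace}10"
        return body + verhoeff_digit(body)

    print(extension_concept_id(1, "1000999"))  # '1000999' is a made-up namespace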

The second killer feature would have been to *make creating and managing ref-sets for data/form fields much easier*, based on a subsetting language that can be applied to the core, and tools that implement it. Ways are also needed to make imported local / legacy vocabularies look like regular ref-sets.
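
SNOMED already has the beginnings of such a subsetting language in the Expression Constraint Language; the point is to make evaluating a constraint against a repository trivial. A toy sketch of the ECL 'descendants-or-self' operator (the child concept ids below are invented for illustration):

    # An intensional refset defined by the ECL-style constraint:
    #   << 73211009 |Diabetes mellitus|
    def descendants_or_self(root, is_a):
        """is_a maps child conceptId -> set of parent conceptIds."""
        members, frontier = {root}, {root}
        while frontier:
            frontier = {c for c, parents in is_a.items()
                        if parents & frontier} - members
            members |= frontier
        return members

    is_a = {"11111111": {"73211009"}, "22222222": {"11111111"}}  # toy hierarchy
    refset = descendants_or_self("73211009", is_a)  # -> all three concepts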

The third killer feature would have been to *make translation tools work* on the basis of legacy vocabularies and new ref-sets, not on the basis of the huge (but mostly unused) international core.
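
In practice that means scoping the translation queue to refset members that still lack a term in the target language, rather than to the whole core. A rough sketch, assuming descriptions have already been read from an RF2 description file:

    def untranslated(descriptions, refset, target_lang="pt"):
        """descriptions: list of dicts with conceptId, languageCode, term
        (the relevant RF2 description columns). Returns refset members
        that have no term at all in the target language."""
        has_target = {d["conceptId"] for d in descriptions
                      if d["languageCode"] == target_lang}
        return {d["conceptId"]: d["term"] for d in descriptions
                if d["conceptId"] in refset and d["conceptId"] not in has_target}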

I think IHTSDO's / SNOMED International's emphasis has historically been on curating the core content, and on making/buying tools to do that (the IHTSDO Workbench, a tool that comes with its own PhD course), rather than on promulgating SNOMED technology and tooling that would enable the mess of real-world content in each country to be rehoused in a standard way, and incrementally joined up to the core by mapping or other means. I think the latter would have been more helpful.

There is, additionally, an elephant in the room: *IHTSDO (now SNOMED International) has been tied to a single terminology - SNOMED CT*. It would have been better to have a terminology standards organisation that was independent of any particular terminology, and that worked to create a truly terminology-independent technology ecosystem, along with technical means of connecting terminologies to each other, without particularly favouring any one of them. It's just a fact that the world has LOINC, ICDx, ICPC, ICF and hundreds of other terminologies that are not going anywhere. What would be useful would be to:

 * classify them according to meta-model type - e.g. multi-hierarchy (SNOMED CT); single hierarchy (ICDx, ICPC, ...); multi-axial (LOINC); units (UCUM, ...), etc.
 * build / integrate technology for each major category - I would guess < 10
 * help the owning organisations slowly migrate their terminologies to the appropriate representation and tools
 * embark on an exercise to graft in appropriate upper-level ontology/ies, i.e. BFO2, RO, and related ontologies (this is where the < 10 comes from, by the way)
 * specify standards for URIs, querying, and ref-sets that /work across all terminologies/, not just SNOMED CT (a sketch of this follows the list)
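
To show what I mean by the first and last bullets together, here is a minimal sketch of a meta-model registry keyed by the system URIs already used in FHIR; the registry structure and the '/id/' path convention (borrowed from snomed.info) are my own assumptions, not anyone's standard:

    from enum import Enum

    class MetaModel(Enum):
        MULTI_HIERARCHY = "multi-hierarchy"    # SNOMED CT
        SINGLE_HIERARCHY = "single-hierarchy"  # ICDx, ICPC
        MULTI_AXIAL = "multi-axial"            # LOINC
        UNITS = "units"                        # UCUM

    TERMINOLOGIES = {
        "http://snomed.info/sct": MetaModel.MULTI_HIERARCHY,
        "http://hl7.org/fhir/sid/icd-10": MetaModel.SINGLE_HIERARCHY,
        "http://loinc.org": MetaModel.MULTI_AXIAL,
        "http://unitsofmeasure.org": MetaModel.UNITS,
    }

    def code_uri(system: str, code: str) -> str:
        # one URI convention across all terminologies (an assumption,
        # modelled on http://snomed.info/id/<sctid>)
        return f"{system}/id/{code}"

    print(code_uri("http://loinc.org", "718-7"))  # haemoglobin, mass/volume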

A further programme would look at integrating units (but not by the current method of importing them into SNOMED, which is a complete error, given the different meta-models), drugs and substances (same story), lab result normal and other range data, and so on. None of this can be done without properly studying and developing the underlying ontologies, which are generally small, but subtle.
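
The units case illustrates the meta-model mismatch: a quantity is a value plus a UCUM expression, not a node in a subsumption hierarchy. A minimal sketch (the glucose range is illustrative only; real code would convert units via a UCUM library rather than require identical unit codes):

    from dataclasses import dataclass

    @dataclass
    class Quantity:
        value: float
        unit: str            # a UCUM code, e.g. 'mmol/L'

    @dataclass
    class ReferenceRange:
        low: Quantity
        high: Quantity

        def contains(self, q: Quantity) -> bool:
            # assumes all three quantities share a unit - the subtle
            # ontology work starts exactly where this assumption fails
            assert q.unit == self.low.unit == self.high.unit
            return self.low.value <= q.value <= self.high.value

    normal = ReferenceRange(Quantity(3.9, "mmol/L"), Quantity(5.5, "mmol/L"))
    print(normal.contains(Quantity(4.8, "mmol/L")))   # True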

I'll stop there for now. I suspect I have kicked the hornet's nest, but since Grahame kicked it first, and I can run faster than him, I feel oddly safe. Probably an illusion.

- thomas

On 13/03/2018 12:12, Grahame Grieve wrote:

    I get the impression that SNOMED CT is hard to implement, and
    therefore wondered if we are at some kind of tipping point, like
    where HL7v3 was a few years ago, when some bright spark came along,
    and now we have FHIR, which is gaining great traction in the health
    community due to the ease with which it can be implemented.


This is very true, and I wish that someone would stick their neck out and do this at scale, with a community behind them. Many of the parameters for how it could be done are obvious - free, crowd-supported, and so on. But the big problem is that there is no capacity for it to happen as a palace revolution; it must be a full civil war first.

Grahame




--
Thomas Beale
Principal, Ars Semantica <http://www.arssemantica.com>
Consultant, ABD Team, Intermountain Healthcare <https://intermountainhealthcare.org/>
Management Board, Specifications Program Lead, openEHR Foundation <http://www.openehr.org>
Chartered IT Professional Fellow, BCS, British Computer Society <http://www.bcs.org/category/6044>
Health IT blog <http://wolandscat.net/> | Culture blog <http://wolandsothercat.net/>