Hi All

 

I want to ensure that the technical possibilities do not overwhelm the
clinical reality in this space. It is a very new space and, for
interoperability and sharing of applications, we need to align ideas for
shared data specifications as part of the process. It has been and remains
attractive to consider code repository support for this activity, but I
think a much less sophisticated approach that engages leaders in an
alignment process is most important.

 

The key decision in these early days is what to put in the parent archetype
and what to keep for localisation through specialisation or templates.

 

Cheers, Sam

 

From: [email protected]
[mailto:openehr-technical-bounces at openehr.org] On Behalf Of pablo pazos
Sent: Friday, 3 February 2012 2:50 AM
To: openehr technical
Subject: RE: Python / Django experience??

 

Hi all,

 

What I've been thinking is that distributed CKMs could share the same
interface/protocol to do simple tasks like:

 

1.      Adding an artifact (archetype, template, termset, terminology
service, process definition, GUI template, etc.); let's think only about
archetypes for now.
2.      Updating an archetype (with version management)
3.      Listing all the archetypes
4.      Listing all the archetypes for one RM type (e.g. composition, entry,
action, etc.)
5.      Listing all the archetypes whose archetype_id matches a regexp
6.      Listing all the archetypes that match some text (free-text search)
7.      Retrieving an archetype in ADL/XML/JSON/YAML by archetype_id
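To make the idea concrete, here is a minimal in-memory sketch of those seven services. All names (InMemoryCkm, add_archetype, etc.) are illustrative assumptions, not an existing openEHR or CKM API; a real service would sit behind HTTP and a persistent store.

```python
import re

# Minimal in-memory sketch of the seven services listed above.
# Class and method names are illustrative, not an existing openEHR API.
class InMemoryCkm:
    def __init__(self):
        # archetype_id -> list of ADL source revisions (last = current)
        self._store = {}

    # 1. Adding an artifact
    def add_archetype(self, archetype_id, adl):
        if archetype_id in self._store:
            raise ValueError("already registered; use update_archetype")
        self._store[archetype_id] = [adl]

    # 2. Updating an archetype (with version management)
    def update_archetype(self, archetype_id, adl):
        self._store[archetype_id].append(adl)
        return len(self._store[archetype_id])  # new revision number

    # 3. Listing all the archetypes
    def list_all(self):
        return sorted(self._store)

    # 4. Listing archetypes for one RM type, relying on the openEHR
    #    naming convention openEHR-EHR-<RM_TYPE>.<name>.v<n>
    def list_by_rm_type(self, rm_type):
        prefix = "openEHR-EHR-%s." % rm_type.upper()
        return [a for a in sorted(self._store) if a.startswith(prefix)]

    # 5. Listing archetypes whose id matches a regexp
    def list_by_regexp(self, pattern):
        return [a for a in sorted(self._store) if re.search(pattern, a)]

    # 6. Free-text search over the current revision
    def search_text(self, text):
        return [a for a, revs in sorted(self._store.items())
                if text.lower() in revs[-1].lower()]

    # 7. Retrieve the current revision by archetype_id
    def get(self, archetype_id):
        return self._store[archetype_id][-1]
```

Serialising the result as ADL/XML/JSON/YAML (service 7's format parameter) is left out; that is a rendering concern on top of the same lookup.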

 

 

Ian, about the requirements you mention:


> Basic requirements
> 
> 1. Query across multiple repositories using multiple criteria, based
> on something similar to the CKM ontology.

 

For this, I think something like the DNS protocol would do the job
(http://en.wikipedia.org/wiki/Domain_Name_System). DNS performs a
distributed search by redirecting the query when a server doesn't have the
answer. So, if we have a service to register artifacts on CKM servers
(service 1 in the list above), together with an "artifact registered"
notification protocol and a "CKM server discovery" protocol, we could
implement the distributed search.
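The DNS-style redirection could be sketched like this: each server either answers from its own store or refers the client to its peers, and the client iterates, much like an iterative DNS lookup. All names here are assumptions for illustration.

```python
# Sketch of the DNS-style resolution idea: a CKM server either answers
# from its local store or refers the client to peer servers, and the
# client follows referrals iteratively. Names are illustrative.
class CkmServer:
    def __init__(self, name, artifacts, peers=()):
        self.name = name
        self.artifacts = dict(artifacts)   # archetype_id -> ADL source
        self.peers = list(peers)

    def query(self, archetype_id):
        """Return ('answer', adl) or ('referral', [peer servers])."""
        if archetype_id in self.artifacts:
            return ("answer", self.artifacts[archetype_id])
        return ("referral", self.peers)

def resolve(root, archetype_id, max_hops=10):
    """Iterative, DNS-like resolution starting at a root CKM server."""
    frontier, seen = [root], set()
    while frontier and max_hops > 0:
        server = frontier.pop(0)
        if server.name in seen:            # don't visit a server twice
            continue
        seen.add(server.name)
        kind, payload = server.query(archetype_id)
        if kind == "answer":
            return payload
        frontier.extend(payload)           # follow the referral
        max_hops -= 1
    return None                            # not found anywhere
```

The hop limit and the visited set keep the search bounded even if servers refer to each other in a cycle, which a real protocol would also need.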


> 2. Establish references to remote assets, almost certainly caching the
> referenced asset locally

 

This would be a mix of "adding an artifact" and the "artifact registered"
notification protocol.


> 3. Establish subscriptions to remote assets, to enable change notification
etc
> 

And this would be included in the "CKM server discovery" protocol, which
could be defined like some of the services provided by the NDP protocol,
using messages like RA, NS, NA, ... to discover CKM servers and create a CKM
network over the Internet:
http://fengnet.com/book/CCIE%20Professional%20Development%20Routing%20TCPIP%20Volume%20I/ch02lev1sec5.html
I think some of these services can also be found in the ICMP protocol:
http://www.networksorcery.com/enp/protocol/icmp.htm
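An RA-like advertisement for CKM discovery might carry just a server URL and a lifetime, loosely mirroring an NDP Router Advertisement. The field names and the JSON wire format below are assumptions, not any defined protocol.

```python
import json
from dataclasses import dataclass, asdict

# Illustrative sketch of an NDP/ICMP-style advertisement message for a
# "CKM server discovery" protocol. Field names and the JSON wire
# encoding are assumptions, loosely mirroring a Router Advertisement.
@dataclass
class ServerAdvertisement:
    msg_type: str        # "RA"-like: a server advertising itself
    server_url: str      # where the CKM server can be reached
    lifetime_secs: int   # how long the registry entry stays valid
    issued_at: float     # Unix timestamp when the message was sent

    def to_wire(self):
        # Serialise to a JSON string for transport
        return json.dumps(asdict(self))

    @classmethod
    def from_wire(cls, raw):
        # Reconstruct the message from its JSON form
        return cls(**json.loads(raw))

    def expired(self, now):
        # Entry is stale once its lifetime has elapsed
        return now > self.issued_at + self.lifetime_secs
```

A root server would renew the registry entry each time it sees a fresh advertisement and drop it once `expired()` is true, which is the lifetime behaviour described in point 3 below.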

 

 

Just to clarify my thoughts: I don't think we need a new network protocol! I
think we could create our own protocols to handle artifacts in a distributed
way, reusing ideas from those proven protocols that our machines run every
day to connect to the Internet and access distributed resources.

 

 

How could this work in reality?

 

1.      We need a set of "root CKM servers", always online, that could
answer our queries or redirect them to another server that can answer (like
DNS).
2.      On those servers (there could be only one, like the openEHR.org
CKM), other servers could advertise themselves to join the CKM network; this
could be done like an ICMP or NDP router advertisement. Those servers could
also download a list of the servers currently connected to the network and
update that list at any time.
3.      The child servers may not always be online, so each entry in the
root server registry should have a lifetime; when it is reached, the entry
is removed from the list (as in ICMP
http://www.networksorcery.com/enp/protocol/icmp/msg9.htm ). This could
trigger a notification to the other members of the network to update their
server lists.
4.      When an artifact is added to a server, it should notify the other
servers in the network, so they know which server holds the original copy of
the artifact, and they may cache a copy that should be read-only on the
caching servers. The cached archetype could have a lifetime; when it is
reached, a new copy of that archetype should be downloaded from the original
server if it is still online, or the lifetime renewed if the original server
is offline.
5.      Then a query received by any CKM could be answered or redirected to
another server, and all servers in the network could keep up with all the
archetypes created worldwide.
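The lifetime behaviour in step 4 can be sketched as a read-only cache that re-fetches from the origin server on expiry, or renews the stale copy when the origin is offline. The class name and the fetch callback are illustrative assumptions.

```python
# Sketch of step 4 above: a read-only cache of remote artifacts with
# per-entry lifetimes. On expiry the cache re-fetches from the origin
# server if it is reachable, otherwise it renews the existing copy.
class ArtifactCache:
    def __init__(self, lifetime_secs, fetch_from_origin):
        self.lifetime = lifetime_secs
        self.fetch = fetch_from_origin   # archetype_id -> adl, or None if offline
        self._entries = {}               # archetype_id -> (adl, expires_at)

    def put(self, archetype_id, adl, now):
        # Store (or renew) a cached copy with a fresh lifetime
        self._entries[archetype_id] = (adl, now + self.lifetime)

    def get(self, archetype_id, now):
        adl, expires_at = self._entries[archetype_id]
        if now <= expires_at:
            return adl                   # still within its lifetime
        fresh = self.fetch(archetype_id) # lifetime reached: try the origin
        if fresh is None:                # origin offline: renew the old copy
            fresh = adl
        self.put(archetype_id, fresh, now)
        return fresh
```

Making `now` an explicit parameter (rather than reading the clock inside) keeps the expiry logic easy to test; a real server would pass the current time.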

 

 

I don't know if this is crazy talk or if it seems reasonable to you.
Please let me know :D

 

 

Kind regards,

Pablo.

 

 
